minh5/cpsc | reports/.ipynb_checkpoints/api data-checkpoint.ipynb | mit

import pickle
import operator
import numpy as np
import pandas as pd
import gensim.models
data = pickle.load(open('/home/datauser/cpsc/data/processed/cleaned_api_data', 'rb'))
data.head()
"""
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Introduction" data-toc-modified-id="Introduction-1"><span class="toc-item-num">1 </span>Introduction</a></div><div class="lev1 toc-item"><a href="#Pulling-in-Data" data-toc-modified-id="Pulling-in-Data-2"><span class="toc-item-num">2 </span>Pulling in Data</a></div><div class="lev1 toc-item"><a href="#Questions" data-toc-modified-id="Questions-3"><span class="toc-item-num">3 </span>Questions</a></div><div class="lev2 toc-item"><a href="#Are-there-certain-populations-we're-not-getting-reports-from?" data-toc-modified-id="Are-there-certain-populations-we're-not-getting-reports-from?-3.1"><span class="toc-item-num">3.1 </span>Are there certain populations we're not getting reports from?</a></div><div class="lev3 toc-item"><a href="#Findings" data-toc-modified-id="Findings-3.1.1"><span class="toc-item-num">3.1.1 </span>Findings</a></div><div class="lev2 toc-item"><a href="#If-we-wanted-to-raise-awareness-about-a-certain-tool-or-item,-where-should-we-focus-our-efforts" data-toc-modified-id="If-we-wanted-to-raise-awareness-about-a-certain-tool-or-item,-where-should-we-focus-our-efforts-3.2"><span class="toc-item-num">3.2 </span>If we wanted to raise awareness about a certain tool or item, where should we focus our efforts</a></div><div class="lev3 toc-item"><a href="#Findings" data-toc-modified-id="Findings-3.2.1"><span class="toc-item-num">3.2.1 </span>Findings</a></div><div class="lev2 toc-item"><a href="#Are-there-certain-complaints-that-people-are-filing?-Quality-issues-vs-injuries?" data-toc-modified-id="Are-there-certain-complaints-that-people-are-filing?-Quality-issues-vs-injuries?-3.3"><span class="toc-item-num">3.3 </span>Are there certain complaints that people are filing? 
Quality issues vs injuries?</a></div><div class="lev3 toc-item"><a href="#Findings" data-toc-modified-id="Findings-3.3.1"><span class="toc-item-num">3.3.1 </span>Findings</a></div><div class="lev2 toc-item"><a href="#Who-are-the-people-who-are-actually-reporting-to-us?" data-toc-modified-id="Who-are-the-people-who-are-actually-reporting-to-us?-3.4"><span class="toc-item-num">3.4 </span>Who are the people who are actually reporting to us?</a></div><div class="lev1 toc-item"><a href="#Further-Suggestions" data-toc-modified-id="Further-Suggestions-4"><span class="toc-item-num">4 </span>Further Suggestions</a></div><div class="lev2 toc-item"><a href="#Next-Steps" data-toc-modified-id="Next-Steps-4.1"><span class="toc-item-num">4.1 </span>Next Steps</a></div><div class="lev1 toc-item"><a href="#References" data-toc-modified-id="References-5"><span class="toc-item-num">5 </span>References</a></div><div class="lev2 toc-item"><a href="#Question-3.1" data-toc-modified-id="Question-3.1-5.1"><span class="toc-item-num">5.1 </span>Question 3.1</a></div><div class="lev2 toc-item"><a href="#Question-3.2" data-toc-modified-id="Question-3.2-5.2"><span class="toc-item-num">5.2 </span>Question 3.2</a></div><div class="lev2 toc-item"><a href="#Question-3.3" data-toc-modified-id="Question-3.3-5.3"><span class="toc-item-num">5.3 </span>Question 3.3</a></div>
# Introduction
This notebook serves as a reporting tool for the CPSC. In this notebook, I lay out the questions the CPSC is interested in answering using their SaferProducts API. A few questions are presented; each question has a `findings` section with a quick summary, while Section 5 provides further detail on how the findings were produced.
# Pulling in Data
Given that the API was down at the time of this report, I obtained the data from Ana Carolina Areias via a Dropbox link. Here I cleaned up the raw JSON and converted it into a dataframe (the cleaning code can be found in `exploratory.ipynb` in the `/notebook` directory). After that I saved the data using pickle so that I can easily load it for analysis.
End of explanation
"""
pd.crosstab(data['GenderDescription'], data['age_range'])
"""
Explanation: Questions
Are there certain populations we're not getting reports from?
We can create a basic cross tab between age and gender to see if there are any patterns that emerge.
End of explanation
"""
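As a quick illustration of how `pd.crosstab` tallies two categorical columns, here is a sketch on toy data (hypothetical values, not the CPSC dataset):

```python
import pandas as pd

# Toy data standing in for the CPSC report fields (hypothetical values)
toy = pd.DataFrame({
    'GenderDescription': ['Male', 'Female', 'Female', 'Male', 'Female'],
    'age_range': ['20-30', '20-30', '30-40', '30-40', '30-40'],
})

# Rows are genders, columns are age bins, cells are report counts
tab = pd.crosstab(toy['GenderDescription'], toy['age_range'])
print(tab)
```

The same call on the real dataframe produces the gender-by-age table used in the findings below.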
#removing minor harm incidents
no_injuries = ['Incident, No Injury', 'Unspecified', 'Level of care not known',
'No Incident, No Injury', 'No First Aid or Medical Attention Received']
damage = data.loc[~data['SeverityTypePublicName'].isin(no_injuries), :]  # .ix is deprecated; use .loc
damage.ProductCategoryPublicName.value_counts()[0:9]
"""
Explanation: Findings
From the data, it seems that there is not much underrepresentation by gender. There are only around a thousand fewer males than females in a dataset of 28,000. Age seems to be a bigger issue: there appears to be a lack of representation of older people using the API. Older folks may be less likely to self-report, or, if they wanted to self-report, they may not be tech-savvy enough to use a web interface. My assumption is that people over 70 probably experience product harm at a higher rate and are not reporting it.
If we wanted to raise awareness about a certain tool or item, where should we focus our efforts
Findings
To construct this, I removed any incidents that did not cause bodily harm and took the top ten categories. There were several levels of severity, so we can remove complaints that do not involve any physical harm. After removing these complaints, it is really interesting to see that "Footwear" was the top product category for harm.
End of explanation
"""
data.SeverityTypeDescription.value_counts()
"""
Explanation: This is actually perplexing, so I decided to investigate further by analyzing the complaints filed for the "Footwear" category. To do this, I created a Word2Vec model, which uses a shallow neural network for text analysis. This process maps a word and the linguistic context it appears in so that similarity between words can be calculated. The purpose is to find words that are related to each other. Rather than doing a simple cross tab of product categories, I can ingest the complaints and map out their relationships. For instance, using the complaints that resulted in bodily harm, I found that footwear was associated with pain and walking. It seems that there are injuries related specifically to Skechers sneakers, since it was the only brand that showed up often enough to be included in the model's dictionary. In fact, there was a lawsuit regarding Skechers and their toning shoes.
Are there certain complaints that people are filing? Quality issues vs injuries?
Findings
Looking below, we see that the vast majority are incidents without any bodily harm. Over 60% of all complaints were categorized as Incident, No Injury.
End of explanation
"""
model.most_similar('was')
"""
Explanation: Although it is labeled as having no injury, that does not necessarily mean we can't take precautions. I took the same approach as with the previous model: I subsetted the data to only the complaints labeled "no injury" and ran a model to examine the words used. From the analysis, we see that the words "to," "was," and "it" were the top three words. At first glance these words may seem meaningless, but if we examine the words most similar to each of them, we can start seeing a connection.
For instance, the words most closely related to "to" were "unable" and "trying," which convey a sense of urgency in attempting to turn something on or off. Examining the word "unable," I was able to see it was related to words such as "attempted" and "disconnect." Further investigation led me to find it was dealing with a switch or a plug, possibly an electrical item.
A similar picture is painted when examining the word "was." The words that felt out of place were "emitting," "extinguish," and "smelled." It is no surprise that after a few more queries on these words, words like "sparks" and "smoke" started popping up. This leads me to believe that these complaints have something to do with encounters closely related to fire.
So while these complaints may only be near-encounters with danger, it may be worthwhile to review them further with an eye out for fire-related injuries or products that could cause fires.
End of explanation
"""
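The `most_similar` queries used in this notebook boil down to ranking words by cosine similarity between their learned vectors. A minimal numpy sketch with made-up 3-dimensional embeddings (illustrative vectors, not the trained model):

```python
import numpy as np

# Hypothetical tiny embeddings; a real Word2Vec model learns these from the complaints
vectors = {
    'to':     np.array([0.9, 0.1, 0.0]),
    'unable': np.array([0.8, 0.2, 0.1]),
    'smoke':  np.array([0.0, 0.9, 0.4]),
}

def cosine(u, v):
    # cosine similarity: dot product of the unit vectors
    return u.dot(v) / (np.linalg.norm(u) * np.linalg.norm(v))

def most_similar(word, k=1):
    sims = {w: cosine(vectors[word], v) for w, v in vectors.items() if w != word}
    return sorted(sims.items(), key=lambda kv: -kv[1])[:k]

print(most_similar('to'))  # 'unable' ranks above 'smoke' for these vectors
```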
data.GenderDescription.value_counts()
data['age'] = data['VictimAgeInMonths'] / 12  # vectorized; map() returns a lazy iterator in Python 3
labels = ['under 10', '10-20', '20-30', '30-40', '40-50', '50-60',
'60-70','70-80', '80-90', '90-100', 'over 100']
data['age_range'] = pd.cut(data['age'], bins=np.arange(0,120,10), labels=labels)
data.loc[data['age'] > 100, 'age_range'] = 'over 100'  # .loc avoids chained-assignment warnings
counts = data['age_range'].value_counts()
counts = counts.sort_values()  # Series.sort() was removed; sort_values() returns a sorted copy
counts
"""
Explanation: Who are the people who are actually reporting to us?
This question is difficult to answer because of a lack of data on the reporter. From the cross tabulation in Section 3.1, we see that the majority of the respondents are female and the largest age groups are 40-60. That is probably the best guess of who is using the API.
Further Suggestions
While text analysis is helpful, it is often not sufficient. What would really help the analysis process would be to include more information from the user. The following information would be helpful to collect in order to produce more actionable insights.
Ethnicity/Race
Self Reported Income
Geographic information
Region (Mid Atlantic, New England, etc)
Closest Metropolitan Area
State
City
Geolocation of IP address
coordinates can be "jittered" to conserve anonymity
Next Steps
A great next step would be a deeper text analysis on shoes. It may be possible to train a neural network that considers smaller batches of words so we can capture the context better. Another step I would take with more time would be to fix the unicode issues with some of the complaints (there were special characters that prevented some of the complaints from being converted into strings).
I would also look further into the category that had the most complaints: "Electric Ranges and Stoves" and see what the complaints were.
References
Question 3.1
The data that we worked with had limited information regarding the victims' demographics besides age and gender. However, that was enough to draw some basic inferences. Below we can grab counts of gender, of which a plurality is female.
Age is a bit trickier: we have the victim's age in months. I converted it into years and broke it down into 10-year age ranges so we can better examine the data.
End of explanation
"""
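The months-to-years conversion and binning can be sketched on toy values (hypothetical ages in months, same bin edges and labels as the cell above):

```python
import numpy as np
import pandas as pd

ages_in_months = pd.Series([6, 150, 420, 850])   # hypothetical victim ages in months
ages = ages_in_months / 12                        # convert months to years

labels = ['under 10', '10-20', '20-30', '30-40', '40-50',
          '50-60', '60-70', '70-80', '80-90', '90-100', 'over 100']
# np.arange(0, 120, 10) yields 12 edges -> 11 bins, matching the 11 labels
age_range = pd.cut(ages, bins=np.arange(0, 120, 10), labels=labels)
print(age_range.tolist())
```

Note that `pd.cut` uses right-closed intervals by default, so an age of exactly 10 falls in the "under 10" bin.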
#Top products affecting people with an age of 0
data.loc[data['age_range'].isnull(), 'ProductCategoryPublicName'].value_counts()[0:9]
#top products that affect people overall
data.ProductCategoryPublicName.value_counts()[0:9]
"""
Explanation: However, after doing this, we still have around 13,000 people with an age of zero. Whether they did not fill in the age or the incident involved an infant is still unknown, but comparing the distribution of products affecting people with an age of 0 against the overall dataset, it appears that null values in the age range represent people who did not fill out an age when reporting.
End of explanation
"""
#overall products listed
data.ProductCategoryPublicName.value_counts()[0:9]
#removing minor harm incidents
no_injuries = ['Incident, No Injury', 'Unspecified', 'Level of care not known',
'No Incident, No Injury', 'No First Aid or Medical Attention Received']
damage = data.loc[~data['SeverityTypePublicName'].isin(no_injuries), :]  # .ix is deprecated; use .loc
damage.ProductCategoryPublicName.value_counts()[0:9]
"""
Explanation: Question 3.2
At first glance, we can look at the products that were reported, like below, and see that Electric Ranges or Ovens is at the top in terms of harm. However, there are levels of severity within the API that need to be filtered before we can assess which products cause the most harm.
End of explanation
"""
model = gensim.models.Word2Vec.load('/home/datauser/cpsc/models/footwear')
model.most_similar('walking')
model.most_similar('injury')
model.most_similar('instability')
"""
Explanation: This shows that the incidents where there were actual injuries and medical attention was given were concentrated in footwear, which was weird. To explore this, I created a Word2Vec model that maps out how certain words relate to each other. To train the model, I used the comments that were submitted through the API. This trains a model that helps us identify words that are similar. For instance, if you type in foot, you will get left and right, as these words are most closely related to the word foot. After some digging around, I found that the word "walking" was associated with "painful." I have some reason to believe that there are orthopedic injuries associated with shoes: people have been experiencing pain while walking with Skechers that were supposed to tone up their bodies, along with instability or balance issues.
End of explanation
"""
model = gensim.models.Word2Vec.load('/home/datauser/cpsc/models/severity')
items_dict = {}
# note: model.vocab is the older gensim (<1.0) API; newer versions expose this via model.wv
for word, vocab_obj in model.vocab.items():
items_dict[word] = vocab_obj.count
sorted_dict = sorted(items_dict.items(), key=operator.itemgetter(1))
sorted_dict.reverse()
sorted_dict[0:5]
"""
Explanation: Question 3.3
End of explanation
"""

xR86/ml-stuff | kaggle/machine-learning-with-a-heart/Lab5.ipynb | mit

from datetime import datetime as dt
import numpy as np
import pandas as pd
# viz libs
import matplotlib.pyplot as plt
%matplotlib inline
import plotly.graph_objs as go
import plotly.figure_factory as ff
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected=True)
random_state=42
nb_start = dt.now()
"""
Explanation: Lab 5 - Unsupervised Learning <a class="tocSkip">
use elbow point for hierarchical and kmeans
kmeans:
+ intra-cluster variance (WSS, within sum of squares) vs. number of clusters => elbow
hierarchical:
+ use dendrogram height, last 2 clusters heights are relevant
need Silhouette Width (https://en.wikipedia.org/wiki/Silhouette_(clustering))
+ https://scikit-learn.org/stable/modules/generated/sklearn.metrics.silhouette_score.html
+ https://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html
+ https://plot.ly/scikit-learn/plot-kmeans-silhouette-analysis/
...
https://en.wikipedia.org/wiki/Rand_index#Adjusted_Rand_index
https://scikit-learn.org/stable/modules/generated/sklearn.metrics.adjusted_rand_score.html
https://www.scikit-yb.org/en/latest/api/cluster/elbow.html
...
https://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html
https://plot.ly/scikit-learn/plot-cluster-iris/
https://plot.ly/scikit-learn/plot-kmeans-digits/
https://plot.ly/python/3d-point-clustering/
https://community.plot.ly/t/what-colorscales-are-available-in-plotly-and-which-are-the-default/2079
https://plot.ly/python/cmocean-colorscales/
https://matplotlib.org/cmocean/
http://cs.joensuu.fi/sipu/datasets/
https://towardsdatascience.com/k-means-clustering-implementation-2018-ac5cd1e51d0a
https://github.com/deric/clustering-benchmark/blob/master/README.md
http://neupy.com/2017/12/09/sofm_applications.html
https://wonikjang.github.io/deeplearning_unsupervised_som/2017/06/30/som.html
https://www.kaggle.com/raghavrastogi75/fraud-detection-using-self-organising-maps
https://medium.com/@navdeepsingh_2336/self-organizing-maps-for-machine-learning-algorithms-ad256a395fc5
https://heartbeat.fritz.ai/introduction-to-self-organizing-maps-soms-98e88b568f5d
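The within-cluster sum of squares (WSS) behind the elbow heuristic noted above can be computed directly. A minimal numpy sketch on toy one-dimensional data (illustrative points, not the heart dataset):

```python
import numpy as np

points = np.array([1.0, 1.2, 0.8, 8.0, 8.3, 7.9])   # two obvious clusters
labels = np.array([0, 0, 0, 1, 1, 1])                # hypothetical cluster assignment

def wss(points, labels):
    # Sum of squared distances of each point to its own cluster centroid
    total = 0.0
    for k in np.unique(labels):
        members = points[labels == k]
        total += ((members - members.mean()) ** 2).sum()
    return total

print(wss(points, labels))  # small, since each point sits near its centroid
```

Sweeping the number of clusters and plotting this quantity is what produces the elbow curve; sklearn's `KMeans` exposes the same idea as `inertia_`, which the notebook uses below.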
Imports
Import dependencies
End of explanation
"""
features = pd.read_csv('train_values.csv')
labels = pd.read_csv('train_labels.csv')
xlab = 'serum_cholesterol_mg_per_dl'
ylab = 'resting_blood_pressure'
print(labels.head())
features.head()
cluster_arr = np.array(features[[xlab,ylab]]).reshape(-1,2)
cluster_arr[:5]
"""
Explanation: Import data
End of explanation
"""
x = features['serum_cholesterol_mg_per_dl']
y = features['resting_blood_pressure']
trace = [go.Scatter(
x = x,
y = y,
name = 'data',
mode = 'markers',
hoverinfo = 'text',
text = ['x: %s<br>y: %s' % (x_i, y_i) for x_i, y_i in zip(x, y)]
)]
layout = go.Layout(
xaxis = dict({'title': xlab}),
yaxis = dict({'title': ylab})
)
fig = go.Figure(data=trace, layout=layout)
iplot(fig, layout)
"""
Explanation: Cluster subsample visualization
End of explanation
"""
from scipy.cluster.hierarchy import dendrogram, linkage
"""
Explanation: Hierarchical Clustering
https://scikit-learn.org/stable/modules/clustering.html
https://scikit-learn.org/stable/modules/classes.html#module-sklearn.cluster
https://stackabuse.com/hierarchical-clustering-with-python-and-scikit-learn/
End of explanation
"""
plt.figure(figsize=(15, 7))
linked = linkage(cluster_arr, 'single')
# labelList = range(1, 11)
dendrogram(linked,
orientation='top',
# labels=labelList,
distance_sort='descending',
show_leaf_counts=True)
plt.show()
"""
Explanation: Single Link
End of explanation
"""
plt.figure(figsize=(15, 7))
linked = linkage(cluster_arr, 'complete')
# labelList = range(1, 11)
dendrogram(linked,
orientation='top',
# labels=labelList,
distance_sort='descending',
show_leaf_counts=True)
plt.show()
"""
Explanation: Complete Link
End of explanation
"""
plt.figure(figsize=(15, 7))
linked = linkage(cluster_arr, 'average')
# labelList = range(1, 11)
dendrogram(linked,
orientation='top',
# labels=labelList,
distance_sort='descending',
show_leaf_counts=True)
plt.show()
"""
Explanation: Average Link
End of explanation
"""
plt.figure(figsize=(15, 7))
linked = linkage(cluster_arr, 'ward')
# labelList = range(1, 11)
dendrogram(linked,
orientation='top',
# labels=labelList,
distance_sort='descending',
show_leaf_counts=True)
plt.show()
"""
Explanation: Ward Variance
End of explanation
"""
from sklearn.cluster import DBSCAN
clustering = DBSCAN(eps=3, min_samples=2).fit(cluster_arr)
clustering
y_pred = clustering.labels_
y_pred
x = cluster_arr[:, 0]
y = cluster_arr[:, 1]
# col = ['#F33' if i == 1 else '#33F' for i in y_pred]
trace = [go.Scatter(
x = x,
y = y,
marker = dict(
# color = col,
color = y_pred,
colorscale='MAGMA',
colorbar=dict(
title='Labels'
),
),
name = 'data',
mode = 'markers',
hoverinfo = 'text',
text = ['x: %s<br>y: %s' % (x_i, y_i) for x_i, y_i in zip(x, y)]
)]
layout = go.Layout(
xaxis = dict({'title': xlab}),
yaxis = dict({'title': ylab})
)
fig = go.Figure(data=trace, layout=layout)
iplot(fig, layout)
"""
Explanation: Density-based clustering
DBSCAN
End of explanation
"""
from sklearn.cluster import KMeans
y_pred = KMeans(n_clusters=2, random_state=random_state).fit_predict(cluster_arr)
y_pred
x = cluster_arr[:, 0]
y = cluster_arr[:, 1]
# col = ['#F33' if i == 1 else '#33F' for i in y_pred]
trace = [go.Scatter(
x = x,
y = y,
marker = dict(
# color = col,
color = y_pred,
colorscale='YlOrRd',
colorbar=dict(
title='Labels'
),
),
name = 'data',
mode = 'markers',
hoverinfo = 'text',
text = ['x: %s<br>y: %s' % (x_i, y_i) for x_i, y_i in zip(x, y)]
)]
layout = go.Layout(
xaxis = dict({'title': xlab}),
yaxis = dict({'title': ylab})
)
fig = go.Figure(data=trace, layout=layout)
iplot(fig, layout)
Ks = range(2, 11)
km = [KMeans(n_clusters=i) for i in Ks] # , verbose=True
# score = [km[i].fit(cluster_arr).score(cluster_arr) for i in range(len(km))]
fitted = [km[i].fit(cluster_arr) for i in range(len(km))]
score = [fitted[i].score(cluster_arr) for i in range(len(km))]
inertia = [fitted[i].inertia_ for i in range(len(km))]
relative_diff = [inertia[0]]
relative_diff.extend([inertia[i-1] - inertia[i] for i in range(1, len(inertia))])
print(fitted[:1])
print(score[:1])
print(inertia[:1])
print(relative_diff)
fitted[0]
dir(fitted[0])[:5]
data = [
# go.Bar(
# x = list(Ks),
# y = score
# ),
go.Bar(
x = list(Ks),
y = inertia,
text = ['Diff is: %s' % diff for diff in relative_diff]
),
go.Scatter(
x = list(Ks),
y = inertia
),
]
layout = go.Layout(
xaxis = dict(
title = 'No of Clusters [%s-%s]' % (min(Ks), max(Ks))
),
yaxis = dict(
title = 'Sklearn score / inertia'
),
# barmode='stack'
)
fig = go.Figure(data=data, layout=layout)
iplot(fig)
data = [
go.Bar(
x = list(Ks),
y = relative_diff
),
go.Scatter(
x = list(Ks),
y = relative_diff
),
]
layout = go.Layout(
xaxis = dict(
title = 'No of Clusters [%s-%s]' % (min(Ks), max(Ks))
),
yaxis = dict(
title = 'Pairwise difference'
),
# barmode='stack'
)
fig = go.Figure(data=data, layout=layout)
iplot(fig)
nb_end = dt.now()
'Time elapsed: %s' % (nb_end - nb_start)
"""
Explanation: Other based on DBSCAN
K-Means
End of explanation
"""

the-deep-learners/TensorFlow-LiveLessons | notebooks/intro_to_tensorflow_times_a_million.ipynb | mit

import numpy as np
np.random.seed(42)
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow as tf
tf.set_random_seed(42)
xs = np.linspace(0., 8., 8000000) # eight million points spaced evenly over the interval zero to eight
ys = 0.3*xs-0.8+np.random.normal(scale=0.25, size=len(xs)) # eight million labels given xs, m=0.3, b=-0.8, plus normally-distributed noise
fig, ax = plt.subplots()
data_subset = pd.DataFrame(list(zip(xs, ys)), columns=['x', 'y']).sample(n=1000)
_ = ax.scatter(data_subset.x, data_subset.y)
m = tf.Variable(-0.5)
b = tf.Variable(1.0)
batch_size = 8 # sample mini-batches of size eight for each step of gradient descent
"""
Explanation: (Introduction to Tensorflow) * 10^6
In this notebook, we modify the tensor-fied intro to TensorFlow notebook to use placeholder tensors and feed in data from a data set of millions of points. This is a derivation of Jared Ostmeyer's Naked Tensor code.
End of explanation
"""
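For comparison, the same mini-batch scheme can be sketched without TensorFlow. This is a plain-numpy version of stochastic gradient descent on the squared-error loss, with an assumed learning rate and far fewer points so it runs quickly:

```python
import numpy as np

rng = np.random.RandomState(42)
xs = np.linspace(0., 8., 1000)                       # fewer points than the notebook's 8M
ys = 0.3 * xs - 0.8 + rng.normal(scale=0.25, size=len(xs))

m, b = -0.5, 1.0                                     # same initial guesses as the TF variables
lr, batch_size = 0.01, 8                             # lr is an assumed value for this sketch

for _ in range(5000):
    idx = rng.randint(len(xs), size=batch_size)      # sample a mini-batch by random selection
    xb, yb = xs[idx], ys[idx]
    err = (m * xb + b) - yb                          # residuals on the mini-batch
    m -= lr * 2 * (err * xb).mean()                  # gradient of mean squared error w.r.t. m
    b -= lr * 2 * err.mean()                         # gradient of mean squared error w.r.t. b

print(m, b)  # should approach 0.3 and -0.8
```

The placeholder/feed mechanism below does the same thing, except TensorFlow computes the gradients for us inside the graph.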
xs_placeholder = tf.placeholder(tf.float32, [batch_size])
ys_placeholder = tf.placeholder(tf.float32, [batch_size])
"""
Explanation: Define placeholder tensors of length batch_size whose values will be filled in during graph execution
End of explanation
"""
ys_model = m*xs_placeholder+b
total_error = tf.reduce_sum((ys_placeholder-ys_model)**2)
optimizer_operation = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(total_error) # demo 0.01, 0.0001
initializer_operation = tf.global_variables_initializer()
"""
Explanation: Define graph that incorporates placeholders
End of explanation
"""
with tf.Session() as session:
session.run(initializer_operation)
n_batches = 1000 # 10, then 1000
for iteration in range(n_batches):
random_indices = np.random.randint(len(xs), size=batch_size) # sample the batch by random selection
feed = { # feeds are dictionaries
xs_placeholder: xs[random_indices],
ys_placeholder: ys[random_indices]
}
session.run(optimizer_operation, feed_dict=feed) # minimize cost with the mini-batch
slope, intercept = session.run([m, b])
slope
intercept
"""
Explanation: Sample from the full data set while running the session
End of explanation
"""

erikdrysdale/erikdrysdale.github.io | _rmd/extra_auroc/.ipynb_checkpoints/auc_max-checkpoint.ipynb | mit

# Import the necessary modules
import numpy as np
from scipy.optimize import minimize
def sigmoid(x):
return( 1 / (1 + np.exp(-x)) )
def idx_I0I1(y):
return( (np.where(y == 0)[0], np.where(y == 1)[0] ) )
def AUROC(eta,idx0,idx1):
den = len(idx0) * len(idx1)
num = 0
for i in idx1:
num += sum( eta[i] > eta[idx0] ) + 0.5*sum(eta[i] == eta[idx0])
return(num / den)
def cAUROC(w,X,idx0,idx1):
eta = X.dot(w)
den = len(idx0) * len(idx1)
num = 0
for i in idx1:
num += sum( np.log(sigmoid(eta[i] - eta[idx0])) )
return( - num / den)
def dcAUROC(w, X, idx0, idx1):
eta = X.dot(w)
n0, n1 = len(idx0), len(idx1)
den = n0 * n1
num = 0
for i in idx1:
num += ((1 - sigmoid(eta[i] - eta[idx0])).reshape([n0,1]) * (X[[i]] - X[idx0]) ).sum(axis=0) # *
return( - num / den)
"""
Explanation: Direct AUROC optimization with PyTorch
$$
\newcommand{\by}{\boldsymbol{y}}
\newcommand{\beta}{\boldsymbol{\eta}}
\newcommand{\bw}{\boldsymbol{w}}
\newcommand{\bx}{\boldsymbol{x}}
$$
In this post I'll discuss how to directly optimize the Area Under the Receiver Operating Characteristic (AUROC), which measures the discriminatory ability of a model across a range of sensitivity/specificity thresholds for binary classification. The AUROC is often used as a method to benchmark different models and has the added benefit that its properties are independent of the underlying class imbalance.
The AUROC is a specific instance of the more general learning-to-rank class of problems, as the AUROC is the proportion of scores from the positive class that exceed the scores from the negative class. More formally, if the outcome for the $i^{th}$ observation is $y \in \{0,1\}$, and has a corresponding risk score $\eta_i$, then the AUROC for $\by$ and $\beta$ will be:
$$
\begin{align}
\text{AUROC}(\by,\beta) &= \frac{1}{|I_1|\cdot|I_0|} \sum_{i \in I_1} \sum_{j \in I_0} \Big[ I[\eta_i > \eta_j] + 0.5 I[\eta_i = \eta_j] \Big] \\
I_k &= \{ i: y_i = k \}
\end{align}
$$
Most AUROC formulas grant a half-point for tied scores. As has been discussed before, optimizing indicator functions $I(\cdot)$ is NP-hard, so instead a convex relaxation of the AUROC can be calculated.
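A quick numeric check of the pairwise formula, including the half-point rule for ties, on a toy score vector:

```python
import numpy as np

y   = np.array([0, 0, 1, 1])
eta = np.array([0.1, 0.4, 0.4, 0.8])   # one tie across classes (0.4)

idx0, idx1 = np.where(y == 0)[0], np.where(y == 1)[0]
num = 0.0
for i in idx1:
    # full point for each negative score strictly beaten, half a point per tie
    num += (eta[i] > eta[idx0]).sum() + 0.5 * (eta[i] == eta[idx0]).sum()
auroc = num / (len(idx0) * len(idx1))
print(auroc)  # (1 + 0.5 + 2) / 4 = 0.875
```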
$$
\begin{align}
\text{cAUROC}(\by,\beta) &= \frac{1}{|I_1|\cdot|I_0|} \sum_{i \in I_1} \sum_{j \in I_0} \log \sigma [\eta_i - \eta_j] \\
\sigma(z) &= \frac{1}{1+\exp(-z)}
\end{align}
$$
The cAUROC formula encourages the log-odds of the positive class ($y=1$) to be as large as possible relative to the negative class ($y=0$).
(1) Optimization with linear methods
Before looking at a neural network method, this first section will show how to directly optimize the cAUROC with a linear combination of features. We'll compare this approach to the standard logistic regression method and see if there is a meaningful difference. If we encode $\eta_i = \bx_i^T\bw$ and apply the chain rule, we can see that the derivative of the cAUROC will be:
$$
\begin{align}
\frac{\partial \text{cAUROC}(\by,\beta)}{\partial \bw} &= \frac{1}{|I_1|\cdot|I_0|} \sum_{i \in I_1} \sum_{j \in I_0} (1 - \sigma [\eta_i - \eta_j] ) [\bx_i - \bx_j]
\end{align}
$$
End of explanation
"""
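A hedged way to verify the gradient expression above is a central finite-difference check on random data; the functions are restated here so the snippet is self-contained:

```python
import numpy as np

# Restated so the check runs on its own; same definitions as the cell above
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def cAUROC(w, X, idx0, idx1):
    eta = X.dot(w)
    num = sum(np.log(sigmoid(eta[i] - eta[idx0])).sum() for i in idx1)
    return -num / (len(idx0) * len(idx1))

def dcAUROC(w, X, idx0, idx1):
    eta = X.dot(w)
    n0 = len(idx0)
    grad = np.zeros_like(w)
    for i in idx1:
        grad += ((1 - sigmoid(eta[i] - eta[idx0])).reshape([n0, 1]) * (X[[i]] - X[idx0])).sum(axis=0)
    return -grad / (n0 * len(idx1))

rng = np.random.RandomState(0)
X = rng.randn(20, 3)
y = np.array([0] * 10 + [1] * 10)        # fixed labels so both classes are present
idx0, idx1 = np.where(y == 0)[0], np.where(y == 1)[0]
w = rng.randn(3)

eps = 1e-6
numeric = np.array([
    (cAUROC(w + eps * e, X, idx0, idx1) - cAUROC(w - eps * e, X, idx0, idx1)) / (2 * eps)
    for e in np.eye(3)
])
err = np.abs(numeric - dcAUROC(w, X, idx0, idx1)).max()
print(err)  # should be tiny (finite-difference error only)
```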
import pandas as pd
from sklearn.datasets import load_boston
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
X, y = load_boston(return_X_y=True)
# binarize
y = np.where(y > np.quantile(y,0.9), 1 , 0)
nsim = 100
holder_auc = []
holder_w = []
winit = np.repeat(0,X.shape[1])
for kk in range(nsim):
y_train, y_test, X_train, X_test = train_test_split(y, X, test_size=0.2, random_state=kk, stratify=y)
enc = StandardScaler().fit(X_train)
idx0_train, idx1_train = idx_I0I1(y_train)
idx0_test, idx1_test = idx_I0I1(y_test)
w_auc = minimize(fun=cAUROC,x0=winit,
args=(enc.transform(X_train), idx0_train, idx1_train),
method='L-BFGS-B',jac=dcAUROC).x
eta_auc = enc.transform(X_test).dot(w_auc)
mdl_logit = LogisticRegression(penalty='none')
    eta_logit = mdl_logit.fit(enc.transform(X_train),y_train).predict_proba(enc.transform(X_test))[:,1]  # scale the test set with the same encoder
auc1, auc2 = roc_auc_score(y_test,eta_auc), roc_auc_score(y_test,eta_logit)
holder_auc.append([auc1, auc2])
holder_w.append(pd.DataFrame({'cn':load_boston()['feature_names'],'auc':w_auc,'logit':mdl_logit.coef_.flatten()}))
auc_mu = np.vstack(holder_auc).mean(axis=0)
print('AUC from cAUROC: %0.2f%%\nAUC for LogisticRegression: %0.2f%%' %
(auc_mu[0], auc_mu[1]))
"""
Explanation: In the example simulations below the Boston dataset will be used where the binary outcome is whether a house price is in the 90th percentile or higher (i.e. the top 10% of prices in the distribution).
End of explanation
"""
import seaborn as sns
from matplotlib import pyplot as plt
df_w = pd.concat(holder_w) #.groupby('cn').mean().reset_index()
g = sns.FacetGrid(data=df_w,col='cn',col_wrap=5,hue='cn',sharex=False,sharey=False)
g.map(plt.scatter, 'logit','auc')
g.set_xlabels('Logistic coefficients')
g.set_ylabels('cAUROC coefficients')
plt.subplots_adjust(top=0.9)
g.fig.suptitle('Figure: Comparison of LR and cAUROC coefficients per simulation',fontsize=18)
"""
Explanation: We can see that the AUC minimizer finds a linear combination of features that has a significantly higher AUC. Because logistic regression uses a simple logistic loss function, the model has an incentive to prioritize predicting low probabilities because most of the labels are zero. In contrast, the AUC minimizer is independent of this class balance.
The figure below shows that while the coefficients of the two models are highly correlated, their slight differences account for the meaningful performance gain.
End of explanation
"""
from sklearn.datasets import fetch_california_housing
np.random.seed(1234)
data = fetch_california_housing(download_if_missing=True)
cn_cali = data.feature_names
X_cali = data.data
y_cali = data.target
y_cali += np.random.randn(y_cali.shape[0])*(y_cali.std())
y_cali = np.where(y_cali > np.quantile(y_cali,0.95),1,0)
y_cali_train, y_cali_test, X_cali_train, X_cali_test = \
train_test_split(y_cali, X_cali, test_size=0.2, random_state=1234, stratify=y_cali)
enc = StandardScaler().fit(X_cali_train)
"""
Explanation: (2) AUC minimization with PyTorch
To optimize a neural network in PyTorch with the goal of minimizing the cAUROC, we will draw a given $i,j$ pair where $i \in I_1$ and $j \in I_0$. While other mini-batch approaches are possible (including the full-batch approach used for the gradient functions above), this mini-batch-of-two approach has the smallest memory overhead. The stochastic gradient for our network $f_\theta$ will now be:
$$
\begin{align}
\Bigg[\frac{\partial f_\theta}{\partial \theta}\Bigg]_{i,j} &= \frac{\partial}{\partial \theta} \log \sigma [ f_\theta(\bx_i) - f_\theta(\bx_j) ]
\end{align}
$$
where $f_\theta(\cdot)$ is the network's one-dimensional output and $\theta$ are the network parameters. The gradient of this deep neural network will be calculated by PyTorch's automatic differentiation backend.
The example dataset will be the California housing price dataset. To make the prediction task challenging, house prices will first be partially scrambled with noise, and then the outcome will be binarized by labelling only the top 5% of housing prices as the positive class.
End of explanation
"""
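The $(i, j)$ pair sampling described above, with each positive index matched to a randomly drawn negative index, can be sketched in plain numpy (toy labels for illustration, not the housing data):

```python
import numpy as np

rng = np.random.RandomState(1234)
y = np.array([0, 0, 0, 1, 0, 1, 0, 1])          # toy labels
idx0, idx1 = np.where(y == 0)[0], np.where(y == 1)[0]

# One epoch: pair every positive index with a distinct randomly chosen negative index
idx0_sampled = rng.choice(idx0, size=len(idx1), replace=False)
pairs = list(zip(idx1, idx0_sampled))
print(pairs)  # three (positive, negative) index pairs
```

Each pair then yields one log-odds difference, and hence one stochastic gradient step, exactly as in the training loop below.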
import torch
import torch.nn as nn
import torch.nn.functional as F
class ffnet(nn.Module):
def __init__(self,num_features):
super(ffnet, self).__init__()
p = num_features
self.fc1 = nn.Linear(p, 36)
self.fc2 = nn.Linear(36, 12)
self.fc3 = nn.Linear(12, 6)
self.fc4 = nn.Linear(6,1)
def forward(self,x):
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = self.fc4(x)
return(x)
# Binary loss function
criterion = nn.BCEWithLogitsLoss()
# Seed the network
torch.manual_seed(1234)
nnet = ffnet(num_features=X_cali.shape[1])
optimizer = torch.optim.Adam(params=nnet.parameters(),lr=0.001)
"""
Explanation: In the next code block below, we will define the neural network class, the optimizer, and the loss function.
End of explanation
"""
np.random.seed(1234)
y_cali_R, y_cali_V, X_cali_R, X_cali_V = \
train_test_split(y_cali_train, X_cali_train, test_size=0.2, random_state=1234, stratify=y_cali_train)
enc = StandardScaler().fit(X_cali_R)
idx0_R, idx1_R = idx_I0I1(y_cali_R)
nepochs = 100
auc_holder = []
for kk in range(nepochs):
    print('Epoch %i of %i' % (kk+1, nepochs))
    # Sample class 0 pairs
    idx0_kk = np.random.choice(idx0_R, len(idx1_R), replace=False)
    for i, j in zip(idx1_R, idx0_kk):
        optimizer.zero_grad()  # clear gradient
        dlogit = nnet(torch.Tensor(enc.transform(X_cali_R[[i]]))) - \
                 nnet(torch.Tensor(enc.transform(X_cali_R[[j]])))  # calculate log-odds difference
        loss = criterion(dlogit.flatten(), torch.Tensor([1]))
        loss.backward()  # backprop
        optimizer.step()  # gradient step
    # Calculate AUC on held-out validation
    auc_k = roc_auc_score(y_cali_V,
                          nnet(torch.Tensor(enc.transform(X_cali_V))).detach().flatten().numpy())
    if auc_k > 0.9:
        print('AUC > 90% achieved')
        break
# Compare performance on final test set
auc_nnet_cali = roc_auc_score(y_cali_test,
nnet(torch.Tensor(enc.transform(X_cali_test))).detach().flatten().numpy())
# Fit a benchmark model
logit_cali = LogisticRegression(penalty='none',solver='lbfgs',max_iter=1000)
logit_cali.fit(enc.transform(X_cali_train), y_cali_train)
auc_logit_cali = roc_auc_score(y_cali_test,logit_cali.predict_proba(enc.transform(X_cali_test))[:,1])
print('nnet-AUC: %0.3f, logit: %0.3f' % (auc_nnet_cali, auc_logit_cali))
"""
Explanation: In the next code block, we'll set up the sampling strategy and train the network until the AUC on the validation set exceeds 90%.
End of explanation
"""
|
letsgoexploring/teaching | winter2017/econ129/python/Econ129_Class_14.ipynb | mit | # Define parameters
s = 0.1
delta = 0.025
alpha = 0.35
# Compute the steady state values of the endogenous variables
Kss = (s/delta)**(1/(1-alpha))
Yss = Kss**alpha
Css = (1-s)*Yss
Iss = Yss - Css
print('Steady states:\n')
print('capital: ',round(Kss,5))
print('output: ',round(Yss,5))
print('consumption:',round(Css,5))
print('investment: ',round(Iss,5))
"""
Explanation: Class 14: Introduction to Business Cycle Modeling (Continued)
A Baseline Real Business Cycle Model
Consider the following business cycle model:
\begin{align}
Y_t & = A_t K_t^{\alpha} \tag{1}\\
C_t & = (1-s)Y_t \tag{2}\\
I_t & = K_{t+1} - (1-\delta)K_t \tag{3}\\
Y_t & = C_t + I_t \tag{4}
\end{align}
where:
\begin{align}
\log A_{t+1} & = \rho \log A_t + \epsilon_t, \tag{5}
\end{align}
reflects exogenous fluctuation in TFP. The endogenous variables in the model are $K_t$, $Y_t$, $C_t$, $I_t$, and $A_t$ and $\epsilon_t$ is an exogenous white noise shock process with standard deviation $\sigma$. $K_t$ and $A_t$ are called state variables because their values in period $t$ affect the equilibrium of the model in period $t+1$.
The non-stochastic steady state
In the (non-stochastic) steady state:
\begin{align}
\epsilon_t & = 0
\end{align}
and
\begin{align}
\log A_t & = 0
\end{align}
for all $t$. So we drop the $t$ subscripts and write the steady state solution to the model as:
\begin{align}
A & = 1\\
K & = \left(\frac{sA}{\delta}\right)^{\frac{1}{1-\alpha}}\\
Y & = AK^{\alpha}\\
C & = (1-s)Y\\
I & = Y - C
\end{align}
End of explanation
"""
# Step 1: simulate eps
# Step 2: simulate and plot log(TFP) logA
# Step 3: compute and plot TFP A
# Step 4: Compute and plot capital K
# Step 5: Compute Y, C, and I
# Step 6: Create a 2x2 plot of y, c, i, and k
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(2,2,1)
ax.plot(Y,lw=3,alpha = 0.7)
ax.set_title('$Y_t$')
ax.grid()
plt.tight_layout()
# Step 7: Compute y_dev, c_dev, i_dev, and k_dev to be the log deviations from steady state of the
# respective variables
y_dev = np.log(Y) - np.log(Yss)
# Step 8: Create a 2x2 plot of y_dev, c_dev, i_dev, and k_dev
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(2,2,1)
ax.plot(100*y_dev,lw=3,alpha = 0.7)
ax.set_title('$\hat{y}_t$')
ax.grid()
ax.set_ylabel('% dev from steady state')
plt.tight_layout()
# Step 9: Save the simulated data in a DataFrame called data_simulated
"""
Explanation: Stochastic simulation
Now, you will simulate how the model behaves in the presence of a set of random TFP shocks. The simulation will run for $T+1$ periods from $t = 0,\ldots, T$. Suppose that $T = 100$.
Initialize an array for $\epsilon_t$ called eps that is equal to a $T\times 1$ array of normally distributed random variables with mean 0 and standard deviation $\sigma = 0.006$. Set the seed for the Numpy random number generator to 192. Plot $\epsilon_t$.
Initialize an array for $\log A_t$ called logA that is equal to a $(T+1)\times 1$ array of zeros. Set $\rho = 0.75$ and use the simulated values for $\epsilon_t$ to compute $\log A_1, \log A_2, \ldots, \log A_T$. Plot $\log A_t$.
Create a new variable called A that stores simulated values of $A_t$ (Note: $A_t = e^{\log A_t}$). Plot $A_t$.
Initialize an array for $K_t$ called K that is a $(T+1)\times 1$ array of zeros. Set the first value in the array equal to steady state capital. Then compute the subsequent values for $K_t$ using the computed values for $A_t$. Plot $K_t$.
Create variables called Y, C, and I that store simulated values for $Y_t$, $C_t$, and $I_t$.
Construct a $2\times2$ grid of subplots of the simulated paths of capital, output, consumption, and investment.
Compute the log deviation of each variable from its steady state ($\log(X_t/X_{ss}) \approx (X_t - X_{ss})/X_{ss}$) and store the results in variables called: k_dev, y_dev, c_dev, and i_dev.
Construct a $2\times2$ grid of subplots of the impulse responses of capital, output, consumption, and investment to the technology shock with each variable expressed as a deviation from steady state.
Save the simulated data in a DataFrame called data_simulated with columns output, consumption, investment, and tfp.
End of explanation
"""
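A compact sketch of steps 1 through 4 above (one possible solution, assuming the parameter values given in the instructions and the capital law of motion $K_{t+1} = sA_tK_t^{\alpha} + (1-\delta)K_t$ implied by equations (1)-(4)):

```python
import numpy as np

# Parameters from the exercise text; s, delta, alpha as defined earlier
s, delta, alpha = 0.1, 0.025, 0.35
rho, sigma, T = 0.75, 0.006, 100

np.random.seed(192)
eps = np.random.normal(0, sigma, T)   # Step 1: white-noise TFP shocks

logA = np.zeros(T + 1)                # Step 2: log TFP via the AR(1) process
for t in range(T):
    logA[t + 1] = rho * logA[t] + eps[t]

A = np.exp(logA)                      # Step 3: TFP in levels

K = np.zeros(T + 1)                   # Step 4: capital, starting at the steady state
K[0] = (s / delta) ** (1 / (1 - alpha))
for t in range(T):
    K[t + 1] = s * A[t] * K[t] ** alpha + (1 - delta) * K[t]
```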
# Create a DataFrame with actual cyclical components of output, consumption, investment, and TFP
data_actual = pd.read_csv('http://www.briancjenkins.com/teaching/winter2017/econ129/data/Econ129_Rbc_Data.csv',index_col=0)
data_actual = pd.DataFrame({'output':np.log(data_actual.gdp/data_actual.gdp_trend),
'consumption':np.log(data_actual.consumption/data_actual.consumption_trend),
'investment':np.log(data_actual.investment/data_actual.investment_trend),
'tfp':np.log(data_actual.tfp/data_actual.tfp_trend)})
data_actual.head()
"""
Explanation: Evaluation of the model
We've already examined business cycle data and computed the standard deviations and correlations of the cyclical components of output, consumption, investment, and TFP. Let's compute the same statistics for the simulated data a compare.
End of explanation
"""
# Compute the standard deviations of the actual business cycle data
print(data_actual.std())
# Compute the standard deviations of the simulated business cycle data
"""
Explanation: Volatility
End of explanation
"""
# Compute the coefficients of correlation for the actual business cycle data
print(data_actual.corr())
# Compute the coefficients of correlation for the simulated business cycle data
"""
Explanation: Correlations
End of explanation
"""
|
ajtrask/ManyWaysToPerishInStarTrek | StarTrek.ipynb | unlicense | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Star Trek Causes of Death
Data and inspiration from www.thestartrekproject.net
Required Libraries
End of explanation
"""
allDeaths = pd.read_excel("data/all-deaths.xls")
print(allDeaths.shape)
allDeaths.head()
"""
Explanation: Read in the Data
End of explanation
"""
allDeathsTOS = allDeaths[allDeaths['EpisodeID'].str.contains("tos")]
print(allDeathsTOS.size)
"""
Explanation: Filter to Just Explore "The Original Star Trek"
End of explanation
"""
totals = allDeathsTOS.groupby('DeathBy')['BodyCount'].sum()
# returned a Series, so build a DataFrame and then sort in ascending order for plotting later
totalDeaths = pd.DataFrame({'DeathBy': totals.index, 'TotalBodyCount': totals.values}).sort_values('TotalBodyCount')
totalDeaths.tail()
"""
Explanation: Group By Cause of Death and Sum the Body Count
End of explanation
"""
from bokeh.plotting import figure, output_notebook, show, ColumnDataSource
from bokeh.models import HoverTool
output_notebook()
# spiral parameters
a = 0.45
b = 0.15
# bubble size and spacing
spacing = 0.01
size=np.log10(1.0+totalDeaths['TotalBodyCount'])
# convert bubble size and spacing to arclengths
arclength = np.cumsum(2*size+spacing)
# solve for polar angle using spiral arclength equation
theta = np.log(b*arclength/(a*np.sqrt(1+np.power(b,2))))/b
# solve for polar radius using logrithmic spiral equation
r = a*np.exp(b*theta)
# cartesian
x=r * np.cos(theta)
y=r * np.sin(theta)
# build column data source for bokeh
source = ColumnDataSource(
    data=dict(
        x=x,
        y=y,
        bodyCount=totalDeaths['TotalBodyCount'],
        size=size,
        color=["#%02x%02x%02x" % (int(red), int(green), 150) for red, green in zip(np.floor(100+2*x), np.floor(30+2*y))],
        desc=totalDeaths['DeathBy'].tolist(),
    )
)

# setup hover tool for contextual labels
hover = HoverTool(
    tooltips=[
        ("Body Count", "@bodyCount"),
        ("Desc", "@desc"),
    ]
)

# create the figure
p = figure(plot_width=800, plot_height=800, tools=[hover],
           title="Death By")

# create the bubble scatter plot
p.scatter('x', 'y', radius='size', fill_color='color',
          source=source, fill_alpha=0.8, line_color=None)
# display the figure
show(p)
"""
Explanation: Build a Spiral Bubble Plot
The concept for this chart is borrowed from http://thestartrekproject.net/files/Star_Trek/ch4/miscellanea-chapter-mockup%2012.pdf
End of explanation
"""
|
calebmadrigal/radio-hacking-scripts | radio_signal_generation.ipynb | mit | # Imports and boilerplate to make graphs look better
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy
import wave
from IPython.display import Audio
def setup_graph(title='', x_label='', y_label='', fig_size=None):
    fig = plt.figure()
    if fig_size is not None:
        fig.set_size_inches(fig_size[0], fig_size[1])
    ax = fig.add_subplot(111)
    ax.set_title(title)
    ax.set_xlabel(x_label)
    ax.set_ylabel(y_label)
"""
Explanation: Generating radio signals
I want to be able to generate modulated digital signals such as this one...
This is a binary signal coded with On-off keying (also known as Amplitude-shift keying).
I'm taking the long pulses to mean "1" and the short pulses to mean "0", so this signal is transmitting the digital code, "0110100010000000".
End of explanation
"""
SAMPLE_BITSIZE = 16
MAX_AMP_16BIT = int(2**SAMPLE_BITSIZE/2 - 1)
def generate_wave(freq, len_in_sec=1, samp_rate=44100, amplitude=MAX_AMP_16BIT):
    t = np.linspace(0, len_in_sec, samp_rate * len_in_sec)
    sig = amplitude * np.sin(freq * 2 * np.pi * t)
    return sig

def write_wav_file(file_path, wav_data, sample_rate=44100, num_channels=1):
    f = wave.open(file_path, 'wb')
    f.setparams((num_channels, 2, sample_rate, len(wav_data), "NONE", "Uncompressed"))
    f.writeframes(np.array(wav_data, dtype=np.int16))
    f.close()

def write_pcm_file(signal_data, file_path, dtype='complex64'):
    np.array(signal_data).astype(dtype).tofile(file_path)
"""
Explanation: Basic wave functions
End of explanation
"""
samp_rate = 1000
len_in_sec = 1
carrier_freq = 20
low_amp = 0.1
high_amp = 1
t = np.linspace(0, 1, samp_rate * len_in_sec)
carrier = 1*np.sin(carrier_freq * 2 * np.pi * t)
# Modulate with the binary signal: ['0', '1']
amp_mult = np.array([low_amp]*500 + [high_amp]*500)
sig = amp_mult * carrier
setup_graph(title='sig', x_label='time', y_label='freq', fig_size=(12,6))
plt.plot(t, sig)
"""
Explanation: Amplitude modulation
First, we need to figure out how to modulate the amplitude of the carrier wave to "high amplitude" and "low amplitude" to correspond with the binary 1s and 0s.
End of explanation
"""
SAMPLE_BITSIZE = 16
MAX_AMP_16BIT = int(2**SAMPLE_BITSIZE/2 - 1)
DEFAULT_RATIOS = {
    '_': 1,
    '0': 1,
    '1': 3
}

DEFAULT_AMP_MAP = {
    '0': MAX_AMP_16BIT * .02,
    '1': MAX_AMP_16BIT
}

def get_modulation_array(binary_data, sample_rate, baud, sig_ratios, amp_map, dtype=np.int16):
    data_points_in_bit = int(sample_rate * 1/baud)
    modulation_array = np.array([], dtype=dtype)
    for bit in binary_data:
        bit_amplitude = amp_map[bit]
        modulated_bit = np.full(data_points_in_bit, bit_amplitude, dtype=np.int16)
        modulation_array = np.append(modulation_array, modulated_bit)
    return modulation_array

def generate_on_off_key_signal(binary_data, carrier_wave_freq, sample_rate,
                               baud, sig_ratios=DEFAULT_RATIOS, amp_map=DEFAULT_AMP_MAP, dtype=np.int16):
    signal_len_secs = len(binary_data) * (1/baud)
    t = np.linspace(0, signal_len_secs, sample_rate * signal_len_secs)
    carrier_wave = 1 * np.sin(carrier_wave_freq * 2 * np.pi * t)
    modulation_array = get_modulation_array(binary_data, sample_rate, baud, sig_ratios, amp_map, dtype)
    return t, carrier_wave * modulation_array
binary_data = '0110100010000000'
carrier_wave_freq = 20 #315e6
sample_rate = 100
baud = 1
t, sig = generate_on_off_key_signal(binary_data, carrier_wave_freq, sample_rate, baud)
setup_graph(title='sig', x_label='time', y_label='freq', fig_size=(12,6))
plt.plot(t, sig)
"""
Explanation: So looking at the above example, let's see how we calculate the size of the multiplier array...
The carrier frequency is 20Hz.
The baud, in this case, is 2 bits per second, which is why we need 1 second to contain the 2 bits.
The size of the multiplier wave to modulate the carrier wave is 1000 in size because it must match the number of samples there are (which in this case, 1 second x 1000 samples per second = 1000).
Note the width of each bit (in terms of samples) is samples_per_second / bits_per_second = 1000 / 2 = 500
Write function to modulate amplitude according to binary data
To modulate the carrier wave signal, we will simply:
* Generate the carrier wave
* Generate a "modulation array" - an array with the LOW or HIGH amplitudes for each data point in the carrier wave array
* Multiply the "carrier wave array" with the "modulation array", to result in the "modulated signal"
End of explanation
"""
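The sizing arithmetic above can be checked directly (a throwaway sketch using the example numbers from this section):

```python
sample_rate = 1000   # samples per second
baud = 2             # bits per second

# Width of each bit in samples: samples_per_second / bits_per_second
samples_per_bit = sample_rate // baud
print(samples_per_bit)  # 500

# Two bits at 2 baud span 1 second, i.e. 1000 modulation samples total
n_bits = 2
total_samples = n_bits * samples_per_bit
print(total_samples)  # 1000
```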
DEFAULT_AMP_MAP = {
    'LOW': MAX_AMP_16BIT * .02,
    'HIGH': MAX_AMP_16BIT
}

def get_modulation_array(binary_data, sample_rate, baud, sig_ratios, amp_map, dtype=np.int16):
    data_points_in_bit = int(sample_rate * 1/baud)
    modulation_array = np.array([], dtype=dtype)
    # To describe this general algorithm, I'll use the specific concrete pulse ratios:
    #   '_': 1,
    #   '0': 1,
    #   '1': 3
    # Meaning that a "1" should be 3x longer than a "0" or a "space" pulse. Now since we need a space
    # between "1"s (as well as "0"s), we can calculate that the pulse for a "1" should be 3/4 of the bit
    # and the pulse for a "0" should be 1/4 of the bit (since for the 1, it's 3 parts "1" and 1 part "space")
    one_pulse_len = int((sig_ratios['1'] / (sig_ratios['1'] + sig_ratios['_'])) * data_points_in_bit)
    one_space_len = data_points_in_bit - one_pulse_len
    zero_pulse_len = int((sig_ratios['0'] / (sig_ratios['1'] + sig_ratios['_'])) * data_points_in_bit)
    zero_space_len = data_points_in_bit - zero_pulse_len
    modulated_one_bit = np.append(np.full(one_pulse_len, amp_map['HIGH'], dtype=dtype),
                                  np.full(one_space_len, amp_map['LOW'], dtype=dtype))
    modulated_zero_bit = np.append(np.full(zero_pulse_len, amp_map['HIGH'], dtype=dtype),
                                   np.full(zero_space_len, amp_map['LOW'], dtype=dtype))
    for bit in binary_data:
        modulated_bit = modulated_one_bit if bit == '1' else modulated_zero_bit
        modulation_array = np.append(modulation_array, modulated_bit)
    return modulation_array

def generate_on_off_key_signal(binary_data, carrier_wave_freq, sample_rate,
                               baud, sig_ratios=DEFAULT_RATIOS, amp_map=DEFAULT_AMP_MAP, dtype=np.int16):
    signal_len_secs = len(binary_data) * (1/baud)
    t = np.linspace(0, signal_len_secs, sample_rate * signal_len_secs)
    carrier_wave = 1 * np.sin(carrier_wave_freq * 2 * np.pi * t)
    modulation_array = get_modulation_array(binary_data, sample_rate, baud, sig_ratios, amp_map, dtype)
    # Pad (or trim) the modulation array to match the length of the carrier wave
    if len(carrier_wave) > len(modulation_array):
        pad_len = len(carrier_wave) - len(modulation_array)
        modulation_array = np.append(modulation_array, np.full(pad_len, amp_map['LOW'], dtype=dtype))
    elif len(carrier_wave) < len(modulation_array):
        modulation_array = modulation_array[:len(carrier_wave)]
    return t, carrier_wave * modulation_array
binary_data = '0110100010000000'
carrier_wave_freq = 20 #315e6
sample_rate = 1000
baud = 1
t, sig = generate_on_off_key_signal(binary_data, carrier_wave_freq, sample_rate, baud)
setup_graph(title='sig', x_label='time', y_label='freq', fig_size=(14,7))
plt.plot(t, sig)
"""
Explanation: Sweet! So it looks like it roughly worked. But there is a problem: we have no breaks between bits (so a group of 1's looks like a single pulse). Let's fix that...
Add the zero pulse and bit spacing
There are 2 problems with the above approach:
* Zeros should actually have a short pulse rather than LOW amplitude
* Bits should be spaced apart from each other
To accomplish this, we'll just rewrite the get_modulation_array() function so that each "modulated bit" will contain both the "1" or "0" pulse AND a "space" (which will simply be the LOW value).
Also, for simplicity/efficiency, we can just calculate the "1" and "0" modulation arrays once and reuse them.
I'm also going to turn up the sample rate a bit to get a more accurate signal.
End of explanation
"""
# Estimating baud
bits_transmitted = 16
viewing_sample_rate = 100000 # samples per second
real_sample_rate = 2000000 # samples per second
viewing_transmission_time = 8.219-6.664 # seconds
real_transmission_time = viewing_transmission_time * (viewing_sample_rate / real_sample_rate)
baud = bits_transmitted / real_transmission_time # bits per second
print('Real transmission time: {}\nBaud: {}'.format(real_transmission_time, baud))
binary_data = '0110100010000000'
carrier_wave_freq = 315e6
sample_rate = 2e6
baud = 205
t, sig = generate_on_off_key_signal(binary_data, carrier_wave_freq, sample_rate, baud)
setup_graph(title='sig', x_label='time', y_label='freq', fig_size=(14,7))
plt.plot(t, sig)
write_pcm_file(sig, 'raw_data/generated_sig1.pcm', dtype='int16')
"""
Explanation: Looks pretty close to the original!
Generate real wave and write to file
End of explanation
"""
# Generate preamble and repeat pattern
binary_data = '0110100010000000'
carrier_wave_freq = 315e6
sample_rate = 2e6
baud = 205
def generate_pulse(bit_val, carrier_wave_freq, sample_rate, baud, multiple_of_bit_len,
                   amp_map=DEFAULT_AMP_MAP, dtype=np.int16):
    signal_len_secs = multiple_of_bit_len * (1/baud)
    t = np.linspace(0, signal_len_secs, sample_rate * signal_len_secs)
    high_or_low = 'HIGH' if bit_val == '1' else 'LOW'
    pulse = amp_map[high_or_low] * np.sin(carrier_wave_freq * 2 * np.pi * t)
    return t, pulse

t1, signal_header = generate_pulse('1', carrier_wave_freq, sample_rate, baud, 3.85)
t2, signal_spacer = generate_pulse('0', carrier_wave_freq, sample_rate, baud, 3.78)

setup_graph(title='sig', x_label='time', y_label='freq', fig_size=(14, 7))
plt.plot(np.append(signal_spacer, signal_header))

def join_all_arrays(array_list):
    joined = array_list[0]
    for a in array_list[1:]:
        joined = np.append(joined, a)
    return joined
full_signal = join_all_arrays([signal_header] + ([sig, signal_spacer] * 12))
setup_graph(title='sig', x_label='samples', y_label='freq', fig_size=(14, 7))
plt.plot(full_signal)
write_pcm_file(full_signal, 'raw_data/generated_sig2.pcm', dtype='int16')
"""
Explanation: Now let's try to replicate the full signal
First, we need a way to generate a HIGH or LOW pulse...
End of explanation
"""
t = np.linspace(0, 1, 1000)
amp = 3
freq = 10 # Hz
simple_sig = amp * np.cos(freq * 2 * np.pi * t)
plt.plot(t, simple_sig)
import cmath
complex_sig = 3 * np.e**(freq * 2 * np.pi * (0+1j) * t)
plt.plot(t, complex_sig)
simple_sig[5]
complex_sig[5]
abs(complex_sig[5]) # This is the amplitude of the wave
"""
Explanation: Comparison of recorded signal vs generated signal
Complex number waves
It turns out that the HackRF only can transmit complex signals (essentially, a 2-dimensional sine wave, where time is the x axis, and the signal spins around the x axis in a helical shape that would appear like a sine wave when viewed perpendicularly).
So let's generate a complex signal...
There may be a better way, but the best way I know to generate a complex sine wave is with Euler's formula:
e^(i*t) = cos(t) + i sin(t)
For more information on rotation with e, see https://github.com/calebmadrigal/FourierTalkOSCON/blob/master/05_RotationWithE.ipynb
End of explanation
"""
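Euler's formula is easy to verify numerically (a quick sketch with an arbitrary angle):

```python
import cmath
import math

t = 0.73  # arbitrary angle in radians
lhs = cmath.exp(1j * t)
rhs = complex(math.cos(t), math.sin(t))
print(abs(lhs - rhs))  # ~0

# The magnitude of e^(i*t) is always 1 -- the helix spins but never changes amplitude
print(abs(lhs))  # 1.0
```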
def generate_on_off_key_signal(binary_data, carrier_wave_freq, sample_rate,
                               baud, sig_ratios=DEFAULT_RATIOS, amp_map=DEFAULT_AMP_MAP, dtype=np.int16):
    signal_len_secs = len(binary_data) * (1/baud)
    t = np.linspace(0, signal_len_secs, sample_rate * signal_len_secs)
    # Using Euler's formula to generate a complex sinusoidal wave
    carrier_wave = 1 * np.e**(carrier_wave_freq * 2 * np.pi * (0+1j) * t)
    modulation_array = get_modulation_array(binary_data, sample_rate, baud, sig_ratios, amp_map, dtype)
    # Pad (or trim) the modulation array to match the length of the carrier wave
    if len(carrier_wave) > len(modulation_array):
        pad_len = len(carrier_wave) - len(modulation_array)
        modulation_array = np.append(modulation_array, np.full(pad_len, amp_map['LOW'], dtype=dtype))
    elif len(carrier_wave) < len(modulation_array):
        modulation_array = modulation_array[:len(carrier_wave)]
    return t, carrier_wave * modulation_array

def generate_pulse(bit_val, carrier_wave_freq, sample_rate, baud, multiple_of_bit_len,
                   amp_map=DEFAULT_AMP_MAP, dtype=np.int16):
    signal_len_secs = multiple_of_bit_len * (1/baud)
    t = np.linspace(0, signal_len_secs, sample_rate * signal_len_secs)
    high_or_low = 'HIGH' if bit_val == '1' else 'LOW'
    pulse = amp_map[high_or_low] * np.e**(carrier_wave_freq * 2 * np.pi * (0+1j) * t)
    return t, pulse
binary_data = '0110100010000000'
carrier_wave_freq = 315e6
sample_rate = 2e6
baud = 205
complex64_amp_map = {
    'LOW': 1.4 * .02,
    'HIGH': 1.4
}
t, complex_signal = generate_on_off_key_signal(binary_data, carrier_wave_freq, sample_rate, baud, amp_map=complex64_amp_map, dtype='complex64')
t2, signal_header = generate_pulse('1', carrier_wave_freq, sample_rate, baud, 3.85, amp_map=complex64_amp_map, dtype='complex64')
t3, signal_spacer = generate_pulse('0', carrier_wave_freq, sample_rate, baud, 3.78, amp_map=complex64_amp_map, dtype='complex64')
full_signal = join_all_arrays([signal_header] + ([complex_signal, signal_spacer] * 12))
write_pcm_file(full_signal, 'raw_data/generated_sig2.pcm', dtype='complex64')
"""
Explanation: Now, let's rewrite the generation functions to use complex math
End of explanation
"""
def generate_on_off_key_signal(binary_data, carrier_wave_freq, sample_rate,
                               baud, sig_ratios=DEFAULT_RATIOS, amp_map=DEFAULT_AMP_MAP, dtype=np.int16):
    signal_len_secs = len(binary_data) * (1/baud)
    t = np.linspace(0, signal_len_secs, sample_rate * signal_len_secs)
    # Using Euler's formula to generate a complex sinusoidal wave
    carrier_wave = 1 * np.e**(carrier_wave_freq * 2 * np.pi * (0+1j) * t)
    modulation_array = get_modulation_array(binary_data, sample_rate, baud, sig_ratios, amp_map, dtype)
    # Pad (or trim) the modulation array to match the length of the carrier wave
    if len(carrier_wave) > len(modulation_array):
        pad_len = len(carrier_wave) - len(modulation_array)
        modulation_array = np.append(modulation_array, np.full(pad_len, amp_map['LOW'], dtype=dtype))
    elif len(carrier_wave) < len(modulation_array):
        modulation_array = modulation_array[:len(carrier_wave)]
    # Modulate by superwave
    super_wave_freq = carrier_wave_freq / (160*2)
    super_wave = 1 * np.e**(super_wave_freq * 2 * np.pi * (0+1j) * t)
    return t, carrier_wave * modulation_array * super_wave

def generate_pulse(bit_val, carrier_wave_freq, sample_rate, baud, multiple_of_bit_len,
                   amp_map=DEFAULT_AMP_MAP, dtype=np.int16):
    signal_len_secs = multiple_of_bit_len * (1/baud)
    t = np.linspace(0, signal_len_secs, sample_rate * signal_len_secs)
    high_or_low = 'HIGH' if bit_val == '1' else 'LOW'
    pulse = amp_map[high_or_low] * np.e**(carrier_wave_freq * 2 * np.pi * (0+1j) * t)
    return t, pulse
binary_data = '0110100010000000'
carrier_wave_freq = 315e6
sample_rate = 2e6
baud = 205 * 2
complex64_amp_map = {
    'LOW': 1.4 * .02,
    'HIGH': 1.4
}
t, complex_signal = generate_on_off_key_signal(binary_data, carrier_wave_freq, sample_rate, baud, amp_map=complex64_amp_map, dtype='complex64')
t2, signal_header = generate_pulse('1', carrier_wave_freq, sample_rate, baud, 3.85, amp_map=complex64_amp_map, dtype='complex64')
t3, signal_spacer = generate_pulse('0', carrier_wave_freq, sample_rate, baud, 3.78, amp_map=complex64_amp_map, dtype='complex64')
full_signal = join_all_arrays([signal_header] + ([complex_signal, signal_spacer] * 12))
write_pcm_file(full_signal, 'raw_data/generated_sig3.pcm', dtype='complex64')
"""
Explanation: Sadly, this signal did not trigger the outlet
So what's different? Well, for one, when I play the 2 signals audibly in Audacity, the original signal is pretty loud, whereas, the one I generated sounds really quiet - almost as if there is the waves I'm generating is self-canceling.
Here's a close-up of the wave I generated:
It almost looks like ther eare equal and opposite waves that would be canceling each other out, but when you look closer, it doesn't seem so:
And here's a close-up of the original wave:
It looks like in the original, the carrier wave is modulated by multiplying by another wave, rather than by unchanging "HIGH" values.
So I'm going to try that now: to modulate the carrier wave by another wave, rather than by a static HIGH value...
End of explanation
"""
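To see why multiplying by a second, slower wave matters, here is a toy demo of the envelope it creates (the frequencies are made up for readability -- much lower than the ones used above):

```python
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)
carrier = np.sin(2 * np.pi * 50 * t)     # fast carrier
super_wave = np.sin(2 * np.pi * 5 * t)   # slow "super wave"
modulated = carrier * super_wave

# The product oscillates at the carrier rate, but its amplitude follows the slow envelope
print(np.max(np.abs(modulated)) <= 1.0)  # True
```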
|
kubeflow/kfp-tekton-backend | samples/tutorials/mnist/01_Lightweight_Python_Components.ipynb | apache-2.0 | import kfp
import kfp.gcp as gcp
import kfp.dsl as dsl
import kfp.compiler as compiler
import kfp.components as comp
import kubernetes as k8s
# Required Parameters
PROJECT_ID='<ADD GCP PROJECT HERE>'
GCS_BUCKET='gs://<ADD STORAGE LOCATION HERE>'
"""
Explanation: Lightweight Python Components
To build a component, define a standalone python function and then call kfp.components.func_to_container_op(func) to convert your function to a component that can be used in a pipeline.
There are several requirements for the function:
The function should be standalone. It should not use any code declared outside of the function definition. Any imports should be added inside the main function. Any helper functions must be defined inside the main function.
The function can only import packages that are available in the base image. If you need to import a package that's not available, you can try to find a container image that already includes the required packages.
If the function operates on numbers, the parameters need to have type hints. Supported types are [int, float, bool]. Everything else is passed as string.
To build a component with multiple output values, use the typing.NamedTuple type hint syntax: NamedTuple('MyFunctionOutputs', [('output_name_1', type), ('output_name_2', float)])
End of explanation
"""
# Optional Parameters, but required for running outside Kubeflow cluster
# The host for 'AI Platform Pipelines' ends with 'pipelines.googleusercontent.com'
# The host for pipeline endpoint of 'full Kubeflow deployment' ends with '/pipeline'
# Examples are:
# https://7c021d0340d296aa-dot-us-central2.pipelines.googleusercontent.com
# https://kubeflow.endpoints.kubeflow-pipeline.cloud.goog/pipeline
HOST = '<ADD HOST NAME TO TALK TO KUBEFLOW PIPELINE HERE>'
# For 'full Kubeflow deployment' on GCP, the endpoint is usually protected through IAP, therefore the following
# will be needed to access the endpoint.
CLIENT_ID = '<ADD OAuth CLIENT ID USED BY IAP HERE>'
OTHER_CLIENT_ID = '<ADD OAuth CLIENT ID USED TO OBTAIN AUTH CODES HERE>'
OTHER_CLIENT_SECRET = '<ADD OAuth CLIENT SECRET USED TO OBTAIN AUTH CODES HERE>'
# This is to ensure the proper access token is present to reach the end point for 'AI Platform Pipelines'
# If you are not working with 'AI Platform Pipelines', this step is not necessary
! gcloud auth print-access-token
# Create kfp client
in_cluster = True
try:
    k8s.config.load_incluster_config()
except:
    in_cluster = False

if in_cluster:
    client = kfp.Client()
else:
    if HOST.endswith('googleusercontent.com'):
        CLIENT_ID = None
        OTHER_CLIENT_ID = None
        OTHER_CLIENT_SECRET = None
    client = kfp.Client(host=HOST,
                        client_id=CLIENT_ID,
                        other_client_id=OTHER_CLIENT_ID,
                        other_client_secret=OTHER_CLIENT_SECRET)
"""
Explanation: Create client
If you run this notebook outside of a Kubeflow cluster, run the following command:
- host: The URL of your Kubeflow Pipelines instance, for example "https://<your-deployment>.endpoints.<your-project>.cloud.goog/pipeline"
- client_id: The client ID used by Identity-Aware Proxy
- other_client_id: The client ID used to obtain the auth codes and refresh tokens.
- other_client_secret: The client secret used to obtain the auth codes and refresh tokens.
python
client = kfp.Client(host, client_id, other_client_id, other_client_secret)
If you run this notebook within a Kubeflow cluster, run the following command:
python
client = kfp.Client()
You'll need to create OAuth client ID credentials of type Other to get other_client_id and other_client_secret. Learn more about creating OAuth credentials
End of explanation
"""
#Define a Python function
def add(a: float, b: float) -> float:
    '''Calculates sum of two arguments'''
    return a + b
"""
Explanation: Start with a simple function
End of explanation
"""
add_op = comp.func_to_container_op(add)
"""
Explanation: Convert the function to a pipeline operation
End of explanation
"""
# Advanced function
# Demonstrates imports, helper functions and multiple outputs
from typing import NamedTuple
def my_divmod(dividend: float,
              divisor: float,
              ) -> NamedTuple('MyDivmodOutput', [('quotient', float), ('remainder', float),
                                                 ('mlpipeline_ui_metadata', 'UI_metadata'),
                                                 ('mlpipeline_metrics', 'Metrics')]):
    '''Divides two numbers and calculates the quotient and remainder'''
    # Imports inside a component function:
    import numpy as np

    # This function demonstrates how to use nested functions inside a component function:
    def divmod_helper(dividend, divisor):
        return np.divmod(dividend, divisor)

    (quotient, remainder) = divmod_helper(dividend, divisor)

    import json

    # Exports a sample tensorboard:
    metadata = {
        'outputs': [{
            'type': 'tensorboard',
            'source': 'gs://ml-pipeline-dataset/tensorboard-train',
        }]
    }

    # Exports two sample metrics:
    metrics = {
        'metrics': [{
            'name': 'quotient',
            'numberValue': float(quotient),
        }, {
            'name': 'remainder',
            'numberValue': float(remainder),
        }]
    }

    from collections import namedtuple
    divmod_output = namedtuple('MyDivmodOutput',
                               ['quotient', 'remainder', 'mlpipeline_ui_metadata', 'mlpipeline_metrics'])
    return divmod_output(quotient, remainder, json.dumps(metadata), json.dumps(metrics))

my_divmod(100, 7)

divmod_op = comp.func_to_container_op(func=my_divmod,
                                      base_image="tensorflow/tensorflow:1.15.0-py3")
"""
Explanation: A more complex example, with multiple outputs
End of explanation
"""
import kfp.dsl as dsl
@dsl.pipeline(
    name='Calculation pipeline',
    description='A toy pipeline that performs arithmetic calculations.'
)
def calc_pipeline(
    a='a',
    b='7',
    c='17',
):
    # Passing pipeline parameter and a constant value as operation arguments
    add_task = add_op(a, 4)  # Returns a dsl.ContainerOp class instance.

    # Passing a task output reference as operation arguments
    # For an operation with a single return value, the output reference can be accessed
    # using `task.output` or `task.outputs['output_name']` syntax
    divmod_task = divmod_op(add_task.output, b)

    # For an operation with multiple return values, the output references can be accessed
    # using `task.outputs['output_name']` syntax
    result_task = add_op(divmod_task.outputs['quotient'], c)
"""
Explanation: Define the pipeline
End of explanation
"""
pipeline_func = calc_pipeline
experiment_name = 'python-functions'
#Specify pipeline argument values
arguments = {'a': '7', 'b': '8'}
run_name = pipeline_func.__name__ + ' run'
# Submit pipeline directly from pipeline function
run_result = client.create_run_from_pipeline_func(pipeline_func,
experiment_name=experiment_name,
run_name=run_name,
arguments=arguments)
"""
Explanation: Submit the pipeline
End of explanation
"""
def mnist_train(model_file: str, bucket: str) -> str:
    from datetime import datetime
    import tensorflow as tf

    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(512, activation=tf.nn.relu),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10, activation=tf.nn.softmax)
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    print(model.summary())

    mnist = tf.keras.datasets.mnist
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    callbacks = [
        tf.keras.callbacks.TensorBoard(log_dir=bucket + '/logs/' + datetime.now().date().__str__()),
        # Interrupt training if `val_loss` stops improving for over 2 epochs
        tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
    ]

    model.fit(x_train, y_train, batch_size=32, epochs=5, callbacks=callbacks,
              validation_data=(x_test, y_test))

    model.save(model_file)

    from tensorflow import gfile
    gcs_path = bucket + "/" + model_file
    if gfile.Exists(gcs_path):
        gfile.Remove(gcs_path)
    gfile.Copy(model_file, gcs_path)

    return gcs_path

mnist_train(model_file='mnist_model.h5',
            bucket=GCS_BUCKET)

model_train_op = comp.func_to_container_op(func=mnist_train,
                                           base_image="tensorflow/tensorflow:1.15.0-py3")
"""
Explanation: Train a keras model
The following steps train a neural network model to classify handwriting images using the MNIST dataset.
End of explanation
"""
@dsl.pipeline(
name='Mnist pipeline',
description='A toy pipeline that performs mnist model training.'
)
def mnist_pipeline(
model_file: str = 'mnist_model.h5',
bucket: str = GCS_BUCKET
):
model_train_op(model_file=model_file, bucket=bucket).apply(gcp.use_gcp_secret('user-gcp-sa'))
"""
Explanation: Define and submit the pipeline
End of explanation
"""
pipeline_func = mnist_pipeline
experiment_name = 'mnist_kubeflow'
arguments = {"model_file":"mnist_model.h5",
"bucket":GCS_BUCKET}
run_name = pipeline_func.__name__ + ' run'
# Submit pipeline directly from pipeline function
run_result = client.create_run_from_pipeline_func(pipeline_func,
experiment_name=experiment_name,
run_name=run_name,
arguments=arguments)
"""
Explanation: Submit a pipeline run
End of explanation
"""
|
SlipknotTN/udacity-deeplearning-nanodegree | tv-script-generation/dlnd_tv_script_generation_deep_orlando.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/orlando_furioso.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
#text = text[81:]
# Need to clean out all numbers and substitute Italian tokens not present in English
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
words_ordered = sorted(set(text))
# TODO: Implement Function
vocab_to_int = {word: index for index, word in enumerate(words_ordered)}
int_to_vocab = {index: word for index, word in enumerate(words_ordered)}
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
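As a quick sanity check, the two dictionaries built above should round-trip any word. This toy demo (made-up text, not the project data) shows the idea:

```python
# Build the two lookup tables for a tiny, made-up corpus.
toy_words = sorted(set("moe gives homer a beer".split()))
vocab_to_int = {word: i for i, word in enumerate(toy_words)}
int_to_vocab = {i: word for i, word in enumerate(toy_words)}

# Encoding a sentence and decoding it again should reproduce the original words.
encoded = [vocab_to_int[w] for w in "homer gives moe a beer".split()]
decoded = [int_to_vocab[i] for i in encoded]
```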
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_dict = dict()
token_dict['.'] = "||Period||"
token_dict[','] = "||Comma||"
token_dict['"'] = "||Quotation_Mark||"
token_dict[';'] = "||Semicolon||"
token_dict['!'] = "||Exclamation_Mark||"
token_dict['?'] = "||Question_Mark||"
token_dict['('] = "||Left_Parentheses||"
token_dict[')'] = "||Right_Parentheses||"
token_dict['--'] = "||Dash||"
token_dict['\n'] = "||Return||"
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
"""
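The helper module applies this dictionary for you, but the idea is simple enough to sketch: each symbol is replaced with its space-padded token before the text is split into words. This is a toy illustration, not the project's actual preprocessing code:

```python
# Replace each punctuation symbol with its space-padded token, then split.
token_dict = {'.': '||Period||', '!': '||Exclamation_Mark||'}

text = "bye. bye!"
for symbol, token in token_dict.items():
    text = text.replace(symbol, ' {} '.format(token))
words = text.split()
```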
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, shape=(None, None), name="input")
targets = tf.placeholder(tf.int32, shape=(None,None), name="targets")
learning_rate = tf.placeholder(tf.float32, name="learning_rate")
return inputs, targets, learning_rate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# TODO: Implement Function
lstm_layers = 2 #Need to pass test?! (otherwise final_state shape will be wrong)
# Build one cell object per layer: reusing the same BasicLSTMCell instance
# across layers shares its weights and breaks in newer TF 1.x releases.
cell = tf.contrib.rnn.MultiRNNCell(
[tf.contrib.rnn.BasicLSTMCell(num_units=rnn_size) for _ in range(lstm_layers)])
initial_state = cell.zero_state(batch_size, tf.float32)
# print(initial_state)
initial_state = tf.identity(initial_state, name="initial_state")
return cell, initial_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize the cell state using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Function
embeddings = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embeddings, ids=input_data)
return embed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
"""
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
"""
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# TODO: Implement Function
#print(cell)
#print(inputs)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name="final_state")
# Shape is lstm_layers x 2 (cell state and hidden state) x None (batch_size) x lstm_units
#print(final_state)
return outputs, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created an RNN Cell in the get_init_cell() function. Time to use the cell to create an RNN.
- Build the RNN using tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
# TODO: Implement Function
embed = get_embed(input_data, vocab_size, embed_dim=embed_dim)
# outputs shape is batch_size x seq_len x lstm_units
outputs, final_state = build_rnn(cell, inputs=embed)
#print(outputs.shape)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
# logits shape is batch_size x seq_len x vocab_size
#print(logits.shape)
return logits, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
#print("Batch_size: " + str(batch_size))
#print("Seq length: " + str(seq_length))
# Consider that targets is shifted by 1
num_batches = len(int_text)//(batch_size * seq_length + 1)
#print("Num batches: " + str(num_batches))
#print("Text length: " + str(len(int_text)))
batches = np.zeros(shape=(num_batches, 2, batch_size, seq_length), dtype=np.int32)
#print(batches.shape)
# TODO: Add a smarter check
for batch_index in range(0, num_batches):
for in_batch_index in range(0, batch_size):
start_x = (batch_index * seq_length) + (seq_length * num_batches * in_batch_index)
start_y = start_x + 1
x = int_text[start_x : start_x + seq_length]
y = int_text[start_y : start_y + seq_length]
#print("batch_index: " + str(batch_index))
#print("in_batch_index: " + str(in_batch_index))
#print("start_x: " + str(start_x))
#print(x)
batches[batch_index][0][in_batch_index] = np.asarray(x)
batches[batch_index][1][in_batch_index] = np.asarray(y)
#print(batches)
return batches
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
"""
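The nested-loop implementation above can also be written with NumPy reshapes, which makes the layout easier to verify against the example. This alternative sketch assumes the text contains at least one spare word beyond the full batches (needed for the shifted targets):

```python
import numpy as np

def get_batches_reshape(int_text, batch_size, seq_length):
    # Keep only full batches, plus one extra word for the shifted targets.
    num_batches = len(int_text) // (batch_size * seq_length)
    words = np.array(int_text[:num_batches * batch_size * seq_length + 1])
    x = words[:-1].reshape(batch_size, -1)  # each row is a run of consecutive text
    y = words[1:].reshape(batch_size, -1)   # same rows, shifted by one word
    # Split each row into num_batches sequences and pair inputs with targets.
    x_batches = np.split(x, num_batches, axis=1)
    y_batches = np.split(y, num_batches, axis=1)
    return np.array(list(zip(x_batches, y_batches)))
```

On the example above, get_batches_reshape(list(range(1, 16)), 2, 3) reproduces the documented output.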
# FINAL LOSS: - Seq length 20, LR 0.001, Epochs 200
# Number of Epochs
num_epochs = 200
# Batch Size
batch_size = 64
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 20
# Learning Rate
learning_rate = 0.001
# Show stats for every n number of batches
show_every_n_batches = 99
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
# probs shape is batch_size x seq_len x vocab_size
probs = tf.nn.softmax(logits, name='probs')
#print(probs.shape)
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
# x and y shapes are batch_size x seq_len
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
return loaded_graph.get_tensor_by_name("input:0"), loaded_graph.get_tensor_by_name("initial_state:0"), \
loaded_graph.get_tensor_by_name("final_state:0"), loaded_graph.get_tensor_by_name("probs:0")
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
return int_to_vocab[np.argmax(probabilities)]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
"""
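argmax is the simplest choice, but it is greedy: the generator tends to loop on the same high-probability words. A common alternative (not required by the project tests) is to sample from the predicted distribution instead:

```python
import numpy as np

def pick_word_sampled(probabilities, int_to_vocab):
    # Sample a word id in proportion to its probability rather than always
    # taking the argmax, which produces more varied (less repetitive) text.
    probabilities = np.asarray(probabilities, dtype=np.float64)
    probabilities = probabilities / probabilities.sum()  # guard against rounding drift
    word_id = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[word_id]
```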
#print(vocab_to_int)
gen_length = 200
prime_word = 'perché mi piace'
prime_word = str.lower(prime_word)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = prime_word.split()
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
"""
|
GoogleCloudPlatform/ai-notebooks-extended | dataproc-hub-example/build/infrastructure-builder/mig/files/gcs_working_folder/examples/Environment Checks/00 - Authentication.ipynb | apache-2.0 | !gcloud auth revoke --quiet
!gcloud auth application-default revoke --quiet
"""
Explanation: Run this to start from scratch
Both should return an error if no credentials were previously set and you are using the service account of the instance.
End of explanation
"""
# General
import google.auth
credentials, project_id = google.auth.default()
# Transparent for google.cloud libraries
from google.cloud import bigquery
client = bigquery.Client()
# If you try to run a query, this gets updated with values.
client.__dict__['_credentials'].__dict__
"""
Explanation: Authentication
As a developer, I want to interact with GCP via gcloud.
gcloud auth login (run from a Notebook terminal)
This obtains your credentials via a web flow and stores them in /root/.config/gcloud/credentials.db and for backward compatibility in /root/.config/gcloud/legacy_credentials/[YOUR_EMAIL]/adc.json
Now:
- gcloud commands run from the Notebook's cells find your credentials automatically.
- Other code and SDKs (Python, Java, ...) do not automatically pick up those credentials.
Reference: https://cloud.google.com/sdk/gcloud/reference/auth/login
As a developer, I want my code to interact with GCP via SDK.
gcloud auth application-default login (run using the GCP option in the navigation menu)
This obtains your credentials via a web flow and stores them in /root/.config/gcloud/application_default_credentials.json.
Now:
- Other code and SDKs (Python, Java, ...) find the credentials automatically.
- You can run code locally that would normally run on a server, without needing a credentials file.
Reference: https://cloud.google.com/sdk/gcloud/reference/auth/application-default/login
For more information, you can read the Google documentation or this excellent blog post.
Authenticate code running in the Notebook
End of explanation
"""
import base64
import os
CREDENTIALS_FILE = "/root/.config/gcloud/application_default_credentials.json"
def get_credentials_text():
if not os.path.isfile(CREDENTIALS_FILE):
print("\x1b[31m\nNo credentials defined. Run gcloud auth application-default login.\n\x1b[0m")
return
return open(CREDENTIALS_FILE, "r").read()
credentials_txt = get_credentials_text()
credentials_b64 = base64.b64encode(credentials_txt.encode('utf-8')).decode('utf-8')
# Can not have both credentialsFile and credentials set.
spark.conf.unset("credentialsFile")
spark.conf.set("credentials", credentials_b64)
print("\x1b[32m\nSpark is now authenticated on this Master node.\n\x1b[0m")
"""
Explanation: NOTE
For BigQuery, if you run a query using the following, your identity should have the following IAM roles or similar:
- roles/bigquery.jobUser (Lower resource is Project) that includes the bigquery.jobs.create permission.
- roles/bigquery.dataViewer (Lower resource is Dataset) that includes bigquery.tables.getData permission.
```py
query_job = client.query(QUERY)
rows = query_job.result()
```
Authenticate Spark
You can authenticate Spark using the credentials file or its content. Although you could use the file directly, the workers would not have it locally, because gcloud auth application-default login runs only on the Master. This means that the application_default_credentials.json file is only created on the Master node.
We have 3 options:
Option 1 [Recommended]: Read the file and pass the value as a string.
Option 2: Have the add-on to write the file to the master and workers. Requires proper permissions.
Option 3: Manually copy the file using a gcloud scp for example. Requires proper firewall access.
End of explanation
"""
|
authman/DAT210x | Module5/Module5 - Lab3.ipynb | mit | import pandas as pd
from datetime import timedelta
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('ggplot') # Look Pretty
"""
Explanation: DAT210x - Programming with Python for DS
Module5- Lab3
End of explanation
"""
def clusterInfo(model):
print("Cluster Analysis Inertia: ", model.inertia_)
print('------------------------------------------')
for i in range(len(model.cluster_centers_)):
print("\n Cluster ", i)
print(" Centroid ", model.cluster_centers_[i])
print(" #Samples ", (model.labels_==i).sum()) # NumPy Power
# Find the cluster with the least # attached nodes
def clusterWithFewestSamples(model):
# Ensure there's at least one cluster...
minSamples = len(model.labels_)
minCluster = 0
for i in range(len(model.cluster_centers_)):
if minSamples > (model.labels_==i).sum():
minCluster = i
minSamples = (model.labels_==i).sum()
print("\n Cluster With Fewest Samples: ", minCluster)
return (model.labels_==minCluster)
"""
Explanation: A convenience function for you to use:
End of explanation
"""
# .. your code here ..
"""
Explanation: CDRs
A call detail record (CDR) is a data record produced by a telephone exchange or other telecommunications equipment that documents the details of a telephone call or other telecommunications transaction (e.g., text message) that passes through that facility or device.
The record contains various attributes of the call, such as time, duration, completion status, source number, and destination number. It is the automated equivalent of the paper toll tickets that were written and timed by operators for long-distance calls in a manual telephone exchange.
The dataset we've curated for you contains call records for 10 people, tracked over the course of 3 years. Your job in this assignment is to find out where each of these people likely live and where they work at!
Start by loading up the dataset and taking a peek at its head and dtypes. You can convert date-strings to real date-time objects using pd.to_datetime, and the times using pd.to_timedelta:
End of explanation
"""
# .. your code here ..
"""
Explanation: Create a unique list of the phone number values (people) stored in the In column of the dataset, and save them in a regular python list called unique_numbers. Manually check through unique_numbers to ensure the order the numbers appear is the same order they (uniquely) appear in your dataset:
End of explanation
"""
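A sketch of one way to do this with pandas, run on stand-in data; the dataframe and column names are assumptions based on the lab text:

```python
import pandas as pd

# Stand-in data; in the lab, `df` is the loaded CDR dataset with an `In` column.
df = pd.DataFrame({'In': [111, 111, 222, 111, 222, 333]})

# .unique() preserves order of first appearance, which is what the lab asks for.
unique_numbers = df.In.unique().tolist()
```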
print("Examining person: ", 0)
"""
Explanation: Using some domain expertise, your intuition should direct you to know that people are likely to behave differently on weekends vs on weekdays:
On Weekends
People probably don't go into work
They probably sleep in late on Saturday
They probably run a bunch of random errands, since they couldn't during the week
They should be home, at least during the very late hours, e.g. 1-4 AM
On Weekdays
People probably are at work during normal working hours
They probably are at home in the early morning and during the late night
They probably spend time commuting between work and home everyday
End of explanation
"""
# .. your code here ..
"""
Explanation: Create a slice called user1 that filters to only include dataset records where the In feature (user phone number) is equal to the first number on your unique list above:
End of explanation
"""
# .. your code here ..
"""
Explanation: Alter your slice so that it includes only Weekday (Mon-Fri) values:
End of explanation
"""
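One way to express the weekday filter with pandas, assuming the course dataset's DOW (day-of-week) column; the column name and stand-in data below are assumptions:

```python
import pandas as pd

# Stand-in frame; in the lab, `user1` is the per-user slice of the CDR data.
user1 = pd.DataFrame({'DOW': ['Mon', 'Sat', 'Wed', 'Sun', 'Fri']})

weekdays = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri']
user1 = user1[user1.DOW.isin(weekdays)]
```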
# .. your code here ..
"""
Explanation: The idea is that the call was placed before 5pm. From Midnight-730a, the user is probably sleeping and won't call / wake up to take a call. There should be a brief time in the morning during their commute to work, then they'll spend the entire day at work. So the assumption is that most of the time is spent either at work, or in 2nd, at home:
End of explanation
"""
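Since CallTime was converted to timedeltas earlier, the before-5pm filter can be a straight comparison. Stand-in data below; the column name follows the lab's description:

```python
import pandas as pd

# Stand-in frame; in the lab, `user1` already holds the weekday slice.
user1 = pd.DataFrame({'CallTime': pd.to_timedelta(['08:30:00', '12:00:00', '18:45:00'])})

# Keep only calls placed before 5pm.
user1 = user1[user1.CallTime < pd.to_timedelta('17:00:00')]
```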
# .. your code here ..
def doKMeans(data, num_clusters=0):
# TODO: Be sure to only feed in Lat and Lon coordinates to the KMeans algo, since none of the other
# data is suitable for your purposes. Since both Lat and Lon are (approximately) on the same scale,
# no feature scaling is required. Print out the centroid locations and add them onto your scatter
# plot. Use a distinguishable marker and color.
#
# Hint: Make sure you fit ONLY the coordinates, and in the CORRECT order (lat first). This is part
# of your domain expertise. Also, *YOU* need to create, initialize (and return) the variable named
# `model` here, which will be a SKLearn K-Means model for this to work:
# .. your code here ..
return model
"""
Explanation: Plot the Cell Towers the user connected to
End of explanation
"""
model = doKMeans(user1, 3)
"""
Explanation: Let's tun K-Means with K=3 or K=4. There really should only be a two areas of concentration. If you notice multiple areas that are "hot" (multiple areas the user spends a lot of time at that are FAR apart from one another), then increase K=5, with the goal being that all centroids except two will sweep up the annoying outliers and not-home, not-work travel occasions. the other two will zero in on the user's approximate home location and work locations. Or rather the location of the cell tower closest to them.....
End of explanation
"""
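A hedged sketch of what the body of doKMeans can look like, run on synthetic coordinates (fit only the lat/lon columns, latitude first, as the lab's hint says; the toy points below are invented):

```python
import numpy as np
from sklearn.cluster import KMeans

def do_kmeans_sketch(coords, num_clusters):
    # coords: rows of [lat, lon]; no feature scaling needed since both share a scale.
    model = KMeans(n_clusters=num_clusters, n_init=10, random_state=0)
    model.fit(coords)
    return model

# Two obvious blobs of towers, far apart.
points = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
model = do_kmeans_sketch(points, 2)
```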
midWayClusterIndices = clusterWithFewestSamples(model)
midWaySamples = user1[midWayClusterIndices]
print(" Its Waypoint Time: ", midWaySamples.CallTime.mean())
"""
Explanation: Print out the mean CallTime value for the samples belonging to the cluster with the LEAST samples attached to it. If our logic is correct, the cluster with the MOST samples will be work. The cluster with the 2nd most samples will be home. And the K=3 cluster with the least samples should be somewhere in between the two. What time, on average, is the user in between home and work, between midnight and 5pm?
End of explanation
"""
ax.scatter(model.cluster_centers_[:,1], model.cluster_centers_[:,0], s=169, c='r', marker='x', alpha=0.8, linewidths=2)
ax.set_title('Weekday Calls Centroids')
plt.show()
"""
Explanation: Let's visualize the results! First draw the X's for the clusters:
End of explanation
"""
|
chunweixu/Deep-Learning | tv-script-generation/dlnd_tv_script_generation.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc.
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
vocab = set(text)
vocab_to_int = {word: i for i, word in enumerate(vocab)}
int_to_vocab = {i: word for i, word in enumerate(vocab)}
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
p_dict = {}
p_dict['.'] = '||Period||'
p_dict[','] = '||Comma||'
p_dict['"'] = '||Quotation_Mark||'
p_dict[';'] = '||Semicolon||'
p_dict['!'] = '||Exclamation_Mark||'
p_dict['?'] = '||Question_Mark||'
p_dict['('] = '||Left_Parentheses||'
p_dict[')'] = '||Right_Parentheses||'
p_dict['--'] = '||Dash||'
p_dict['\n'] = '||Return||'
return p_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add a delimiter (space) around each one. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
rate = tf.placeholder(tf.float32, name='rate')
return inputs, targets, rate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
# Add dropout to the cell
#drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([lstm])
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, name='initial_state')
return cell, initial_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
"""
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
"""
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
return outputs, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
embed = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(outputs, num_outputs=vocab_size, weights_initializer=tf.contrib.layers.xavier_initializer(seed=1),
biases_initializer=tf.zeros_initializer(), activation_fn=None)
return logits, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
n_batches = len(int_text)//(batch_size*seq_length)
inputs = np.array(int_text[:n_batches*batch_size*seq_length])
targets = np.array(int_text[1:n_batches*batch_size*seq_length+1])
targets[-1] = inputs[0]
input_batches = np.split(inputs.reshape(batch_size, -1), n_batches, 1)
target_batches = np.split(targets.reshape(batch_size, -1), n_batches, 1)
output = np.array(list(zip(input_batches, target_batches)))
return output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
"""
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 60
# Sequence Length
seq_length = 10
# Learning Rate
learning_rate = 0.005
# Show stats for every n number of batches
show_every_n_batches = 100
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
inputTensor = loaded_graph.get_tensor_by_name('input:0')
initialStateTensor = loaded_graph.get_tensor_by_name('initial_state:0')
finalStateTensor = loaded_graph.get_tensor_by_name('final_state:0')
probsTensor = loaded_graph.get_tensor_by_name('probs:0')
return inputTensor, initialStateTensor, finalStateTensor, probsTensor
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
#return np.random.choice(list(int_to_vocab.values()), p=probabilities)
return np.random.choice(list(int_to_vocab.values()), 1, p=np.squeeze(probabilities))[0]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
"""
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[:,dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
"""
|
jforbess/pvlib-python | docs/tutorials/pvsystem.ipynb | bsd-3-clause | # built-in python modules
import os
import inspect
# scientific python add-ons
import numpy as np
import pandas as pd
# plotting stuff
# first line makes the plots appear in the notebook
%matplotlib inline
import matplotlib.pyplot as plt
# seaborn makes your plots look better
try:
import seaborn as sns
sns.set(rc={"figure.figsize": (12, 6)})
except ImportError:
print('We suggest you install seaborn using conda or pip and rerun this cell')
# finally, we import the pvlib library
import pvlib
from pvlib import pvsystem
"""
Explanation: pvsystem tutorial
This tutorial explores the pvlib.pvsystem module. The module has functions for importing PV module and inverter data and functions for modeling module and inverter performance.
systemdef
Angle of Incidence Modifiers
Sandia Cell Temp correction
Sandia Inverter Model
Sandia Array Performance Model
SAPM IV curves
DeSoto Model
Single Diode Model
This tutorial has been tested against the following package versions:
* pvlib 0.2.0
* Python 2.7.10
* IPython 3.2
* Pandas 0.16.2
It should work with other Python and Pandas versions. It requires pvlib >= 0.2.0 and IPython >= 3.0.
Authors:
* Will Holmgren (@wholmgren), University of Arizona. 2015.
End of explanation
"""
pvlib_abspath = os.path.dirname(os.path.abspath(inspect.getfile(pvlib)))
tmy3_data, tmy3_metadata = pvlib.tmy.readtmy3(os.path.join(pvlib_abspath, 'data', '703165TY.csv'))
tmy2_data, tmy2_metadata = pvlib.tmy.readtmy2(os.path.join(pvlib_abspath, 'data', '12839.tm2'))
pvlib.pvsystem.systemdef(tmy3_metadata, 0, 0, .1, 5, 5)
pvlib.pvsystem.systemdef(tmy2_metadata, 0, 0, .1, 5, 5)
"""
Explanation: systemdef
pvlib can import TMY2 and TMY3 data. Here, we import the example files.
End of explanation
"""
angles = np.linspace(-180,180,3601)
ashraeiam = pd.Series(pvsystem.ashraeiam(.05, angles), index=angles)
ashraeiam.plot()
plt.ylabel('ASHRAE modifier')
plt.xlabel('input angle (deg)')
angles = np.linspace(-180,180,3601)
physicaliam = pd.Series(pvsystem.physicaliam(4, 0.002, 1.526, angles), index=angles)
physicaliam.plot()
plt.ylabel('physical modifier')
plt.xlabel('input index')
plt.figure()
ashraeiam.plot(label='ASHRAE')
physicaliam.plot(label='physical')
plt.ylabel('modifier')
plt.xlabel('input angle (deg)')
plt.legend()
"""
Explanation: Angle of Incidence Modifiers
End of explanation
"""
# scalar inputs
pvsystem.sapm_celltemp(900, 5, 20) # irrad, wind, temp
# vector inputs
times = pd.DatetimeIndex(start='2015-01-01', end='2015-01-02', freq='12H')
temps = pd.Series([0, 10, 5], index=times)
irrads = pd.Series([0, 500, 0], index=times)
winds = pd.Series([10, 5, 0], index=times)
pvtemps = pvsystem.sapm_celltemp(irrads, winds, temps)
pvtemps.plot()
"""
Explanation: Sandia Cell Temp correction
PV system efficiency can vary by up to 0.5% per degree C, so it's important to accurately model cell and module temperature. The sapm_celltemp function uses plane of array irradiance, ambient temperature, wind speed, and module and racking type to calculate cell and module temperatures. From King et. al. (2004):
$$T_m = E e^{a+b*WS} + T_a$$
$$T_c = T_m + \frac{E}{E_0} \Delta T$$
The $a$, $b$, and $\Delta T$ parameters depend on the module and racking type. The default parameter set is open_rack_cell_glassback.
sapm_celltemp works with either scalar or vector inputs, but always returns a pandas DataFrame.
End of explanation
"""
wind = np.linspace(0,20,21)
temps = pd.DataFrame(pvsystem.sapm_celltemp(900, wind, 20), index=wind)
temps.plot()
plt.legend()
plt.xlabel('wind speed (m/s)')
plt.ylabel('temperature (deg C)')
"""
Explanation: Cell and module temperature as a function of wind speed.
End of explanation
"""
atemp = np.linspace(-20,50,71)
temps = pvsystem.sapm_celltemp(900, 2, atemp).set_index(atemp)
temps.plot()
plt.legend()
plt.xlabel('ambient temperature (deg C)')
plt.ylabel('temperature (deg C)')
"""
Explanation: Cell and module temperature as a function of ambient temperature.
End of explanation
"""
irrad = np.linspace(0,1000,101)
temps = pvsystem.sapm_celltemp(irrad, 2, 20).set_index(irrad)
temps.plot()
plt.legend()
plt.xlabel('incident irradiance (W/m**2)')
plt.ylabel('temperature (deg C)')
"""
Explanation: Cell and module temperature as a function of incident irradiance.
End of explanation
"""
models = ['open_rack_cell_glassback',
'roof_mount_cell_glassback',
'open_rack_cell_polymerback',
'insulated_back_polymerback',
'open_rack_polymer_thinfilm_steel',
'22x_concentrator_tracker']
temps = pd.DataFrame(index=['temp_cell','temp_module'])
for model in models:
temps[model] = pd.Series(pvsystem.sapm_celltemp(1000, 5, 20, model=model).ix[0])
temps.T.plot(kind='bar') # try removing the transpose operation and replotting
plt.legend()
plt.ylabel('temperature (deg C)')
"""
Explanation: Cell and module temperature for different module and racking types.
End of explanation
"""
inverters = pvsystem.retrieve_sam('sandiainverter')
inverters
vdcs = pd.Series(np.linspace(0,50,51))
idcs = pd.Series(np.linspace(0,11,110))
pdcs = idcs * vdcs
pacs = pvsystem.snlinverter(inverters['ABB__MICRO_0_25_I_OUTD_US_208_208V__CEC_2014_'], vdcs, pdcs)
#pacs.plot()
plt.plot(pdcs, pacs)
plt.ylabel('ac power')
plt.xlabel('dc power')
"""
Explanation: snlinverter
End of explanation
"""
cec_modules = pvsystem.retrieve_sam('cecmod')
cec_modules
cecmodule = cec_modules.Example_Module
cecmodule
"""
Explanation: Need to put more effort into describing this function.
SAPM
The CEC module database.
End of explanation
"""
sandia_modules = pvsystem.retrieve_sam(name='SandiaMod')
sandia_modules
sandia_module = sandia_modules.Canadian_Solar_CS5P_220M___2009_
sandia_module
"""
Explanation: The Sandia module database.
End of explanation
"""
import datetime
from pvlib import clearsky
from pvlib import irradiance
from pvlib import atmosphere
from pvlib.location import Location
tus = Location(32.2, -111, 'US/Arizona', 700, 'Tucson')
times = pd.date_range(start=datetime.datetime(2014,4,1), end=datetime.datetime(2014,4,2), freq='30s')
ephem_data = pvlib.solarposition.get_solarposition(times, tus)
irrad_data = clearsky.ineichen(times, tus)
#irrad_data.plot()
aoi = irradiance.aoi(0, 0, ephem_data['apparent_zenith'], ephem_data['azimuth'])
#plt.figure()
#aoi.plot()
am = atmosphere.relativeairmass(ephem_data['apparent_zenith'])
# a hot, sunny spring day in the desert.
temps = pvsystem.sapm_celltemp(irrad_data['ghi'], 0, 30)
"""
Explanation: Generate some irradiance data for modeling.
End of explanation
"""
sapm_1 = pvsystem.sapm(sandia_module, irrad_data['dni']*np.cos(np.radians(aoi)),
irrad_data['ghi'], temps['temp_cell'], am, aoi)
sapm_1.head()
def plot_sapm(sapm_data):
"""
Makes a nice figure with the SAPM data.
Parameters
----------
sapm_data : DataFrame
The output of ``pvsystem.sapm``
"""
fig, axes = plt.subplots(2, 3, figsize=(16,10), sharex=False, sharey=False, squeeze=False)
plt.subplots_adjust(wspace=.2, hspace=.3)
ax = axes[0,0]
sapm_data.filter(like='i_').plot(ax=ax)
ax.set_ylabel('Current (A)')
ax = axes[0,1]
sapm_data.filter(like='v_').plot(ax=ax)
ax.set_ylabel('Voltage (V)')
ax = axes[0,2]
sapm_data.filter(like='p_').plot(ax=ax)
ax.set_ylabel('Power (W)')
ax = axes[1,0]
[ax.plot(sapm_data['effective_irradiance'], current, label=name) for name, current in
sapm_data.filter(like='i_').iteritems()]
ax.set_ylabel('Current (A)')
ax.set_xlabel('Effective Irradiance')
ax.legend(loc=2)
ax = axes[1,1]
[ax.plot(sapm_data['effective_irradiance'], voltage, label=name) for name, voltage in
sapm_data.filter(like='v_').iteritems()]
ax.set_ylabel('Voltage (V)')
ax.set_xlabel('Effective Irradiance')
ax.legend(loc=4)
ax = axes[1,2]
ax.plot(sapm_data['effective_irradiance'], sapm_data['p_mp'], label='p_mp')
ax.set_ylabel('Power (W)')
ax.set_xlabel('Effective Irradiance')
ax.legend(loc=2)
# needed to show the time ticks
for ax in axes.flatten():
for tk in ax.get_xticklabels():
tk.set_visible(True)
plot_sapm(sapm_1)
"""
Explanation: Now we can run the module parameters and the irradiance data through the SAPM function.
End of explanation
"""
temps = pvsystem.sapm_celltemp(irrad_data['ghi'], 10, 5)
sapm_2 = pvsystem.sapm(sandia_module, irrad_data['dni']*np.cos(np.radians(aoi)),
irrad_data['dhi'], temps['temp_cell'], am, aoi)
plot_sapm(sapm_2)
sapm_1['p_mp'].plot(label='30 C, 0 m/s')
sapm_2['p_mp'].plot(label=' 5 C, 10 m/s')
plt.legend()
plt.ylabel('Pmp')
plt.title('Comparison of a hot, calm day and a cold, windy day')
"""
Explanation: For comparison, here's the SAPM for a sunny, windy, cold version of the same day.
End of explanation
"""
import warnings
warnings.simplefilter('ignore', np.RankWarning)
def sapm_to_ivframe(sapm_row):
pnt = sapm_row.T.ix[:,0]
ivframe = {'Isc': (pnt['i_sc'], 0),
'Pmp': (pnt['i_mp'], pnt['v_mp']),
'Ix': (pnt['i_x'], 0.5*pnt['v_oc']),
'Ixx': (pnt['i_xx'], 0.5*(pnt['v_oc']+pnt['v_mp'])),
'Voc': (0, pnt['v_oc'])}
ivframe = pd.DataFrame(ivframe, index=['current', 'voltage']).T
ivframe = ivframe.sort('voltage')
return ivframe
def ivframe_to_ivcurve(ivframe, points=100):
ivfit_coefs = np.polyfit(ivframe['voltage'], ivframe['current'], 30)
fit_voltages = np.linspace(0, ivframe.ix['Voc', 'voltage'], points)
fit_currents = np.polyval(ivfit_coefs, fit_voltages)
return fit_voltages, fit_currents
sapm_to_ivframe(sapm_1['2014-04-01 10:00:00'])
times = ['2014-04-01 07:00:00', '2014-04-01 08:00:00', '2014-04-01 09:00:00',
'2014-04-01 10:00:00', '2014-04-01 11:00:00', '2014-04-01 12:00:00']
times.reverse()
fig, ax = plt.subplots(1, 1, figsize=(12,8))
for time in times:
ivframe = sapm_to_ivframe(sapm_1[time])
fit_voltages, fit_currents = ivframe_to_ivcurve(ivframe)
ax.plot(fit_voltages, fit_currents, label=time)
ax.plot(ivframe['voltage'], ivframe['current'], 'ko')
ax.set_xlabel('Voltage (V)')
ax.set_ylabel('Current (A)')
ax.set_ylim(0, None)
ax.set_title('IV curves at multiple times')
ax.legend()
"""
Explanation: SAPM IV curves
The IV curve function only calculates the 5 points of the SAPM. We will add arbitrary points in a future release, but for now we just interpolate between the 5 SAPM points.
End of explanation
"""
photocurrent, saturation_current, resistance_series, resistance_shunt, nNsVth = (
pvsystem.calcparams_desoto(irrad_data.ghi,
temp_cell=temps['temp_cell'],
alpha_isc=cecmodule['Alpha_sc'],
module_parameters=cecmodule,
EgRef=1.121,
dEgdT=-0.0002677) )
photocurrent.plot()
plt.ylabel('Light current I_L (A)')
saturation_current.plot()
plt.ylabel('Saturation current I_0 (A)')
resistance_series
resistance_shunt.plot()
plt.ylabel('Shunt resistance (ohms)')
plt.ylim(0,100)
nNsVth.plot()
plt.ylabel('nNsVth')
"""
Explanation: desoto
The same data run through the desoto model.
End of explanation
"""
single_diode_out = pvsystem.singlediode(cecmodule, photocurrent, saturation_current,
resistance_series, resistance_shunt, nNsVth)
single_diode_out
single_diode_out['i_sc'].plot()
single_diode_out['v_oc'].plot()
single_diode_out['p_mp'].plot()
"""
Explanation: Single diode model
End of explanation
"""
|
kingmolnar/DataScienceProgramming | 07-Data-Visualization/MoreAPD_orig.ipynb | cc0-1.0 | import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
ls -l /home/data/APD/COBRA-YTD*.csv.gz
df = pd.read_csv('/home/data/APD/COBRA-YTD-multiyear.csv.gz')
df.shape
df.dtypes
dataDict = pd.DataFrame({'DataType': df.dtypes.values, 'Description': '', }, index=df.columns.values)
"""
Explanation: Atlanta Police Department
The Atlanta Police Department provides Part 1 crime data at http://www.atlantapd.org/i-want-to/crime-data-downloads
A recent copy of the data file is stored in the cluster. <span style="color: red; font-weight: bold;">Please, do not copy this data file into your home directory!</span>
Introduction
This notebook leads into an exploration of public crime data provided by the Atlanta Police Department.
The original data set and supplemental information can be found at http://www.atlantapd.org/i-want-to/crime-data-downloads
The data set is available on ARC, please, don't download into your home directory on ARC!
End of explanation
"""
dataDict
with open("datadict2.py", "w") as io:
for i in dataDict.index:
io.write("dataDict.loc['%s'].Description = '' # type: %s\n" % (i, str(dataDict.loc[i].DataType)))
ls -l datadict2.py
# %load datadict2.py
dataDict.loc['MI_PRINX'].Description = '' # type: int64
dataDict.loc['offense_id'].Description = '' # type: int64
dataDict.loc['rpt_date'].Description = '' # type: object
dataDict.loc['occur_date'].Description = '' # type: object
dataDict.loc['occur_time'].Description = '' # type: object
dataDict.loc['poss_date'].Description = '' # type: object
dataDict.loc['poss_time'].Description = '' # type: object
dataDict.loc['beat'].Description = '' # type: int64
dataDict.loc['apt_office_prefix'].Description = '' # type: object
dataDict.loc['apt_office_num'].Description = '' # type: object
dataDict.loc['location'].Description = '' # type: object
dataDict.loc['MinOfucr'].Description = '' # type: int64
dataDict.loc['MinOfibr_code'].Description = '' # type: object
dataDict.loc['dispo_code'].Description = '' # type: object
dataDict.loc['MaxOfnum_victims'].Description = '' # type: float64
dataDict.loc['Shift'].Description = '' # type: object
dataDict.loc['Avg Day'].Description = '' # type: object
dataDict.loc['loc_type'].Description = '' # type: float64
dataDict.loc['UC2 Literal'].Description = '' # type: object
dataDict.loc['neighborhood'].Description = '' # type: object
dataDict.loc['npu'].Description = '' # type: object
dataDict.loc['x'].Description = '' # type: float64
dataDict.loc['y'].Description = '' # type: float64
# %load datadict.py
dataDict.loc['MI_PRINX'].Description = '' # type: int64
dataDict.loc['offense_id'].Description = 'Unique ID in the format YYDDDNNNN with the year YY, the day of the year DDD and a counter NNNN' # type: int64
dataDict.loc['rpt_date'].Description = 'Date the crime was reported' # type: object
dataDict.loc['occur_date'].Description = 'Estimated date when the crime occured' # type: object
dataDict.loc['occur_time'].Description = 'Estimated time when the crime occured' # type: object
dataDict.loc['poss_date'].Description = '' # type: object
dataDict.loc['poss_time'].Description = '' # type: object
dataDict.loc['beat'].Description = '' # type: int64
dataDict.loc['apt_office_prefix'].Description = '' # type: object
dataDict.loc['apt_office_num'].Description = '' # type: object
dataDict.loc['location'].Description = '' # type: object
dataDict.loc['MinOfucr'].Description = '' # type: int64
dataDict.loc['MinOfibr_code'].Description = '' # type: object
dataDict.loc['dispo_code'].Description = '' # type: object
dataDict.loc['MaxOfnum_victims'].Description = '' # type: float64
dataDict.loc['Shift'].Description = 'Zones have 8 or 10 hour shifts' # type: object
dataDict.loc['Avg Day'].Description = '' # type: object
dataDict.loc['loc_type'].Description = '' # type: float64
dataDict.loc['UC2 Literal'].Description = '' # type: object
dataDict.loc['neighborhood'].Description = '' # type: object
dataDict.loc['npu'].Description = '' # type: object
dataDict.loc['x'].Description = '' # type: float64
dataDict.loc['y'].Description = '' # type: float64
dataDict.to_csv("COBRA_Data_Dictionary.csv")
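Given the YYDDDNNNN layout described for `offense_id` in the dictionary above, the three components can be peeled apart with plain string slicing. The sample ID below is made up for illustration:

```python
offense_id = 170350001  # hypothetical ID: year 17, day-of-year 035, counter 0001
s = str(offense_id).zfill(9)
year, day_of_year, counter = s[:2], s[2:5], s[5:]
print(year, day_of_year, counter)  # → 17 035 0001
```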
sorted(df.npu.unique())
len(df.neighborhood.unique())
"""
Explanation: We need to enter the descriptions for each entry in our dictionary manually. However, why not just create the Python code automatically...
Run the code below only if you haven't edited the datadict.py file in a different way, since it will overwrite what you have so far. (That's why the code is commented-out.)
End of explanation
"""
print(df.groupby("Shift").count().index)
"""
Explanation: Fixing Data Types
End of explanation
"""
df[['occur_date', 'occur_time']]
# function currying
def fixdatetime(fld):
def _fix(s):
date_col = '%s_date' % fld
time_col = '%s_time' % fld
if time_col in s.index:
return str(s[date_col])+' '+str(s[time_col])
else:
return str(s[date_col])+' 00:00:00'
return _fix
df.apply(fixdatetime('occur'), axis=1)[:10]
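`fixdatetime` above is an example of function currying (strictly speaking, a closure): the outer call bakes in the field name and returns a specialized inner function. A minimal standalone version of the same pattern:

```python
def make_prefixer(prefix):
    def _prefix(text):
        # 'prefix' is captured from the enclosing call
        return prefix + text
    return _prefix

add_occur = make_prefixer("occur: ")
print(add_occur("01/31/2016"))  # → occur: 01/31/2016
```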
"""
Explanation: Date and Time
Working with dates can be tricky. Often dates and times are coded as strings and need to be converted to a date and time data format.
Python provides the datetime module for parsing and formatting dates and times. See https://docs.python.org/3/library/datetime.html
The pandas package provides functionality to convert text fields into date/time fields...given the values adhere to a given format. See http://pandas.pydata.org/pandas-docs/version/0.20/generated/pandas.to_datetime.html
Create a proper text field
In order to use the text to date/time converter our text columns need to have the appropriate format.
End of explanation
"""
for col in ['rpt', 'occur', 'poss']:
datser = df.apply(fixdatetime(col), axis=1)
df['%s_dt'%col] = pd.to_datetime(datser, format="%m/%d/%Y %H:%M:%S", errors='coerce')
df.head()
df.dtypes
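The `errors='coerce'` argument used above turns values that don't match the format into `NaT` instead of raising. A stdlib sketch of that per-value behavior (pandas does this vectorized internally):

```python
from datetime import datetime

def parse_or_none(s, fmt="%m/%d/%Y %H:%M:%S"):
    try:
        return datetime.strptime(s, fmt)
    except (ValueError, TypeError):
        return None  # pandas would emit NaT here

print(parse_or_none("01/31/2016 13:05:00"))  # 2016-01-31 13:05:00
print(parse_or_none("not a date"))           # None
```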
"""
Explanation: Convert Columns
End of explanation
"""
df.beat[:10]
df['Zone'] = df['beat']//100
df.Zone[:4]
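The beat numbers encode the zone in their hundreds digit, so the integer division above recovers it:

```python
beat = 213          # hypothetical beat in Zone 2
zone = beat // 100  # floor division drops the last two digits
print(zone)         # → 2
```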
df['UC2 Literal'].unique()
##df[df['UC2 Literal']=='LARCENY-FROM VEHICLE']
df.occur_date.min(), df.occur_date.max()
df['Year'] = df.rpt_dt.map(lambda d: d.year)
df.groupby('Year').offense_id.count()
brdf = df[df['UC2 Literal']=='BURGLARY-RESIDENCE'].copy()
brdf.shape, df.shape
def gethour(d):
return d.hour
brdf.occur_dt.map(gethour)
##brdf['occur_dt'].map(gethour)
##brdf.occur_dt.map(lambda d: d.hour)
print(type(brdf.occur_dt))
brdf['Hour'] = brdf.occur_dt.apply(gethour)
brdf.head()
br_hr = brdf.groupby(['Hour']).offense_id.count()
plt.step(br_hr.index, br_hr.values)
plt.figure(figsize=(20,8))
for z in range(1,7):
plt.subplot(3,2,z)
plt.title("Zone %d" % z)
#brdf[brdf.Zone==z].hist(column='Hour', bins=24)
plt.hist(brdf[brdf.Zone==z].Hour, bins=24)
plt.show()
plt.figure(figsize=(30,15))
for h in range(24):
plt.subplot(4,6,h+1)
plt.title("Hour %d" % h)
#brdf[brdf.Zone==z].hist(column='Hour', bins=24)
plt.hist(brdf[brdf.Hour==h].Zone, bins=6)
plt.ylim(0,40) ## sets limit on Y-axis
plt.show()
df['UC2 Literal'].unique()
df.groupby(['UC2 Literal', 'Zone']).offense_id.count()
df['dayofweek'] = df.occur_dt.map(lambda d: d.dayofweek)
df.groupby(['UC2 Literal','dayofweek']).offense_id.count()
brdf.apply(lambda r: str(r.location)+', '+str(r.npu), axis=1)
brdf.apply(np.min, axis=0)
df.occur_dt.map(lambda d: d.year).unique()
df['Year'] = df.occur_dt.map(lambda d: d.year)
df2 = df[(df.Year>=2010) & (df.Year<=2017)]
df2.shape, df.shape
"""
Explanation: Beats and Zones
The City of Atlanta is divided into 6 zones, each with 12 to 14 beats.
Let's create a separate column for the zones:
End of explanation
"""
df_LarcenyFromVehicle = df2[(df2['UC2 Literal']=='LARCENY-FROM VEHICLE')&(df2.Year==2017)].copy()
agr_LarcenyFromVehicle = df_LarcenyFromVehicle.set_index('occur_dt').resample('W').offense_id.count()
agr_LarcenyFromVehicle
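`resample('W')` with a count is essentially a group-by-calendar-week. Stripped of pandas, the same idea with the stdlib (ISO week numbers shown here, which is not exactly pandas' week-ending-Sunday bins):

```python
from datetime import date
from collections import Counter

dates = [date(2017, 1, d) for d in (2, 3, 9, 10, 16)]
weekly_counts = Counter(d.isocalendar()[1] for d in dates)
print(weekly_counts)  # → Counter({1: 2, 2: 2, 3: 1})
```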
df_LarcenyFromVehicle["Hour"] = df_LarcenyFromVehicle.occur_dt.map(lambda d: d.hour)
df_LarcenyFromVehicle.groupby("Hour").offense_id.count()
hourly = df_LarcenyFromVehicle.resample('H', on='occur_dt').offense_id.count()
hourly.reset_index().occur_dt.map(lambda d: d.week)
df3 = pd.DataFrame({"N": hourly})
##df3['Day'] = df3.reset_index().occur_dt ##.map(lambda d: d.day)
df3
ls
"""
Explanation: Descriptive Statistics
https://pandas.pydata.org/pandas-docs/stable/basics.html#descriptive-statistics
Time Series
https://pandas.pydata.org/pandas-docs/stable/timeseries.html
https://pandas.pydata.org/pandas-docs/stable/api.html#id10
End of explanation
"""
fig = plt.figure(figsize=(10,6)) # 10inx10in
#plt.plot(resdf['BURGLARY-RESIDENCE'].index, resdf['BURGLARY-RESIDENCE'])
plt.scatter(resdf['BURGLARY-RESIDENCE'].index, resdf['BURGLARY-RESIDENCE'], marker='x')
plt.scatter(resdf['BURGLARY-NONRES'].index, resdf['BURGLARY-NONRES'], marker='o')
plt.ylim(0, 500)
plt.title('BURGLARY-RESIDENCE')
plt.xticks(range(13), ['', 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
fig.savefig('BurglaryResidence_over_month.svg')
x = 1
def getTheMonth(x):
return x.month
df['occur_month'] = df['occur_dt'].map(getTheMonth)
resdf = df.groupby(['UC2 Literal', 'occur_month']).offense_id.count()
fig = plt.figure(figsize=(10,6))
plt.scatter(resdf['BURGLARY-RESIDENCE'].index, resdf['BURGLARY-RESIDENCE'], marker='x')
plt.ylim(0, 500)
plt.title('BURGLARY-RESIDENCE')
plt.xticks(range(13), ['', 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
plt.savefig('quiz3-burglary-residence.png')
"""
Explanation: Plotting
The Pandas package provides a number of plotting features. Let's try them out.
- https://pandas.pydata.org/pandas-docs/stable/api.html#api-dataframe-plotting
- https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.boxplot.html
End of explanation
"""
fig = plt.figure(figsize=(40,30))
crime_types = crime_year.index.levels[0]
years = crime_year.index.levels[1]
for c in range(len(crime_types)):
y_max = max(crime_year.loc[crime_types[c]])
plt.subplot(4,3,c+1)
plt.hlines(crime_year.loc[crime_types[c]].iloc[-1]*100/y_max, years[0], years[-1], linestyles="dashed", color="r")
plt.bar(crime_year.loc[crime_types[c]].index, crime_year.loc[crime_types[c]]*100/y_max, label=crime_types[c], alpha=0.5)
##plt.legend()
plt.ylim(0, 100)
plt.xticks(years+0.4, [str(int(y)) for y in years], rotation=0, fontsize=24)
plt.yticks([0,20,40,60,80,100], ['0%','20%','40%','60%','80%','100%'], fontsize=24)
plt.title(crime_types[c], fontsize=30)
None
c = 3 ## 'BURGLARY-RESIDENCE'
resburglaries = crime_year_month.loc[crime_types[c]]
fig = plt.figure(figsize=(20,10))
for y in years:
plt.plot(resburglaries.loc[y].index, resburglaries.loc[y], label=("%4.0f"%y))
plt.legend()
plt.title("Seasonal Trends - %s"%crime_types[c], fontsize=20)
plt.xticks(range(13), ['', 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
plt.xlim(0,13)
None
c = 3 ## 'BURGLARY-RESIDENCE'
fig = plt.figure(figsize=(20,10))
for y in years:
avg = resburglaries.loc[y].mean()
std = resburglaries.loc[y].std()
##plt.hlines(avg, 1, 13, linestyle='dashed')
plt.plot(resburglaries.loc[y].index, (resburglaries.loc[y]-avg)/std, label=("%4.0f"%y))
plt.legend()
plt.title("Seasonal Trends - %s (normalized)"%crime_types[c], fontsize=20)
plt.xticks(list(range(1,13)), ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
plt.xlim(0,13)
plt.ylabel(r"Standard deviations $\sigma_y$")
None
"""
Explanation: Seasonal Model
End of explanation
"""
msadegh97/machine-learning-course | homeworks/logistic-regression.ipynb | gpl-3.0
# import what we need
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
"""
Explanation: Homework #3.1: Logistic Regression
In this homework you will learn the concepts of logistic regression by implementing it.
Implement the body of each function and test whether you have done right for each of them or not by running the tests. Each function has a test code block just below its definition.
Remember: m = number of data items (size of the data set) and n = number of features
Good luck!
End of explanation
"""
def sigmoid(z):
"""
Sigmoid function
z: an arbitrary matrix (a x b)
Return: a matrix (a x b) containing sigmoid of each element of z (element wise sigmoid values)
"""
# YOUR CODE GOES HERE (~ 1 line of code)
# test the sigmoid function by visualization
x = np.arange(-10, 10, .1)
y = sigmoid(x)
plt.plot(x, y)
plt.axvline(0, c='gray', linewidth=1)
"""
Explanation: Sigmoid
Sigmoid is a non linear function defined as:
$$ g(z) = \frac{1}{1 + e^{-z}} $$
and its derivation is:
$$ g'(z) = g(z)(1 - g(z)) $$
End of explanation
"""
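One possible one-line body for the function, consistent with the formula above (a reference sketch, not the only acceptable answer):

```python
import numpy as np

def sigmoid(z):
    # element-wise g(z) = 1 / (1 + e^{-z})
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(np.array([0.0])))  # → [0.5]
```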
def h(X, w, b):
"""
The hypothesis function
X: the data set matrix (m x n)
w: the weights vector (n x 1)
b: the bias (1 x 1)
Return: a vector stacking predictions for all data items (m x 1)
"""
# YOUR CODE GOES HERE (~ 1 line of code)
# test the hypothesis
X, y, w, b = (np.array([[.1, .2, .3], [.4, .5, .6]]), np.array([[1], [0]]), np.array([[.3], [.4], [.5]]), .5)
hyp = h(X, w, b)
true = np.array([[0.68135373], [0.75398872]])
assert hyp.shape == (X.shape[0], 1), \
'The result should be in shape ({}, 1). Currently is {}'.format(X.shape[0], hyp.shape)
if np.allclose(hyp, true):
print('Hypothesis ok.')
else:
print('Hypothesis does not work properly.')
"""
Explanation: Hypothesis Function
For a single data item $ x_{1 \times n} $, the hypothesis is the sigmoid of the linear product of $ w_{n \times 1} $ and $ x_{1 \times n} $ in addition to bias $ b_{1 \times 1} $. The result is a number:
$$ h_{w,b}(x) = g(w^Tx + b) = \frac{1}{1 + e^{-(w^Tx + b)}} $$
For a data set $ X_{m \times n} $ which contains multiple data items stacking vertically, the result is a vector $ h_{m \times 1} $ which contains predections for all data items:
$$ h_{w,b}(X) = g(Xw + b) = \frac{1}{1 + e^{-(Xw + b)}} $$
End of explanation
"""
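A vectorized body matching the formula above; this sketch reproduces the expected values from the hypothesis test cell:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def h(X, w, b):
    # g(Xw + b): one prediction per row of X, shape (m, 1)
    return sigmoid(X @ w + b)

X = np.array([[.1, .2, .3], [.4, .5, .6]])
w = np.array([[.3], [.4], [.5]])
print(h(X, w, 0.5))  # ≈ [[0.68135373], [0.75398872]]
```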
def cost(y_true, y_pred):
"""
Cross Entropy function
y_true: the vector of true labels of data items (m x 1)
y_pred: the vector of predictions of data items (m x 1)
Return: a single number representing the cost
"""
# YOUR CODE GOES HERE (~ 2 lines of code)
# test cost function
from sklearn.metrics import log_loss
X, y, w, b = (np.array([[.1, .2, .3], [.4, .5, .6]]), np.array([[1], [0]]), np.array([[.3], [.4], [.5]]), .5)
cre = log_loss(y, h(X, w, b))
cst = cost(y, h(X, w, b))
if np.isclose(cre, cst):
print('Cost function ok.')
else:
print('Cost function does not work properly.')
print('Should\'ve returned:', cre)
print('Returned:', cst)
"""
Explanation: Cost Function
Cost function for a logistic regression model is Cross Entropy over data set:
$$ \begin{equation}
\begin{split}
J_{w,b}(X) &= -\frac{1}{m}\sum_{i=1}^m y^{(i)} \log(h_{w,b}(x^{(i)})) + (1 - y^{(i)}) \log(1 - h_{w,b}(x^{(i)})) \\
&= -\frac{1}{m}\sum_{i=1}^m y^{(i)} \log(\hat{y}^{(i)}) + (1 - y^{(i)}) \log(1 - \hat{y}^{(i)})
\end{split}
\end{equation} $$
The goal of logistic regression is to minimize this cost.
End of explanation
"""
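A direct transcription of the cross-entropy formula above (a sketch: numerical safeguards such as clipping predictions away from 0 and 1 are omitted):

```python
import numpy as np

def cost(y_true, y_pred):
    m = y_true.shape[0]
    return -np.sum(y_true * np.log(y_pred)
                   + (1 - y_true) * np.log(1 - y_pred)) / m

y_true = np.array([[1], [0]])
y_pred = np.array([[0.68135373], [0.75398872]])
print(cost(y_true, y_pred))  # ≈ 0.893
```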
def gradient(X, y_true, y_pred):
"""
The gradient of cost function
X: the data set matrix (m x n)
y_true: the vector of true labels of data items (m x 1)
y_pred: the vector of predictions of data items (m x 1)
Return: vector dJ/dw (n x 1) and number dJ/db (1 x 1)
"""
# YOUR CODE GOES HERE (~ 4 lines of code)
X, y, w, b = (np.array([[.1, .2, .3], [.4, .5, .6]]), np.array([[1], [0]]), np.array([[.3], [.4], [.5]]), .5)
true = (np.array([[0.13486543], [0.15663255], [0.17839968]]), 0.2176712251189869)
res = gradient(X, y, h(X, w, b))
if np.allclose(res[0], true[0]) and np.isclose(res[1], true[1]):
print('Gradient function ok.')
else:
print('Gradient function is not working properly.')
print('should output:', true)
print('Outputted:', res)
def update_parameters(X, y_true, y_pred, w, b, alpha):
"""
This function updates parameters w and b according to their derivations.
It should compute the cost function derivations with respect to w and b first,
then take a step for each parameters in w and b.
X: the data set matrix (m x n)
y_true: the vector of true labels of data items (m x 1)
y_pred: the vector of predictions of data items (m x 1)
w: the weights vector (n x 1)
b: the bias (1 x 1)
alpha: the learning rate
Returns: the updated parameters w and b
"""
# YOUR CODE GOES HERE (~ 4 lines of code)
# test update_parameters function
X, y, w, b = (np.array([[.1, .2, .3], [.4, .5, .6]]), np.array([[1], [0]]), np.array([[.3], [.4], [.5]]), .5)
res = update_parameters(X, y, h(X, w, b), w, b, 0.01)
true = (np.array([[0.29865135], [0.39843367], [0.498216]]), 0.49782328774881013)
if np.allclose(res[0], true[0]) and np.isclose(res[1], true[1]):
print('Update parameters function ok.')
else:
print('Update parameters function is not working properly.')
print('should output:', true)
print('Outputted:', res)
def gradient_descent(X, y, alpha, n_iterations):
"""
The gradient descent algorithm:
1. initialize parameters w and b to zeros (not random)
for i in n_iterations:
2. compute the hypothesis h(X, w, b)
3. update the parameters using function update_parameters(X, y_true, y_pred, w, b, alpha)
4. compute the cost and see the cost is decreasing in each step (optional)
X: the data set matrix (m x n)
y: the vector of true labels of data items (m x 1)
alpha: the learning rate
n_iterations: number of steps gradient descent should take to converge
Returns: the best parameters w and b gradient descent found at last
"""
# YOUR CODE GOES HERE (~ 7 lines of code)
# test gradient_descent function
true = (np.array([[-0.01488461], [-0.014848], [-0.0148114]]), 0.00036601406503539797)
res = gradient_descent(X, y, 0.01, 20)
if np.allclose(res[0], true[0]) and np.isclose(res[1], true[1]):
print('Gradient descent function ok.')
else:
print('Gradient descent function is not working properly.')
print('should output:', true)
print('Outputted:', res)
"""
Explanation: Gradient Descent
Gradient descent algorithm tries to find the minimum of a function by starting somewhere on the function and taking small steps through the gradient of the function.
In logistic regression, the function we are trying to minimize is the cost function $ J_{w,b}(X) $. The derivations are:
$$ \begin{equation}
\begin{split}
\frac{\partial J_{w,b}(X)}{\partial w_j}
&= \frac{1}{m}\sum_{i=1}^m (h_{w,b}(x^{(i)}) - y^{(i)})x_j^{(i)} \\
&= \frac{1}{m}\sum_{i=1}^m (\hat{y}^{(i)} - y^{(i)})x_j^{(i)}
\end{split}
\end{equation} $$
$$ \begin{equation}
\begin{split}
\frac{\partial J_{w,b}(X)}{\partial b}
&= \frac{1}{m}\sum_{i=1}^m (h_{w,b}(x^{(i)}) - y^{(i)}) \\
&= \frac{1}{m}\sum_{i=1}^m (\hat{y}^{(i)} - y^{(i)})
\end{split}
\end{equation} $$
Actually these two derivations are the same except that in the second one, $ x_{0}^{(i)} = 1 $.
These gradients have the same form as the ones for linear regression, except that here the hypothesis is passed through a sigmoid; i.e. $ h_{w,b}(x) = g(w^Tx + b) $.
End of explanation
"""
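The two derivations above collapse into a couple of matrix operations. This sketch reproduces the expected values from the gradient test cell:

```python
import numpy as np

def gradient(X, y_true, y_pred):
    m = X.shape[0]
    err = y_pred - y_true        # (m, 1) residuals
    dJ_dw = X.T @ err / m        # (n, 1)
    dJ_db = np.sum(err) / m      # scalar
    return dJ_dw, dJ_db

X = np.array([[.1, .2, .3], [.4, .5, .6]])
y_true = np.array([[1], [0]])
y_pred = np.array([[0.68135373], [0.75398872]])
dw, db = gradient(X, y_true, y_pred)
print(dw.ravel(), db)  # ≈ [0.13486543 0.15663255 0.17839968] 0.21767123
```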
# load the data
from sklearn.datasets import load_iris
from sklearn.preprocessing import scale
X, y = load_iris(return_X_y=True)
X = X[:, 0:2]
X = X[y != 2]
X = scale(X)
y = y[y != 2]
y = y.reshape((y.shape[0], 1))
# train a linear regression model from sklearn
from sklearn.linear_model import LogisticRegression
model = LogisticRegression().fit(X, y.ravel())
# train our linear regression model
w, b = gradient_descent(X, y, 0.1, 100)
# plot the result
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, .02), np.arange(y_min, y_max, .02))
Z1 = model.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
Z2 = h(np.c_[xx.ravel(), yy.ravel()], w, b)
f, axes = plt.subplots(1, 2)
axes[0].contourf(xx, yy, Z1.reshape(xx.shape), cmap=plt.cm.binary, alpha=.8)
axes[1].contourf(xx, yy, Z2.reshape(xx.shape), cmap=plt.cm.binary, alpha=.8)
axes[0].scatter(X[:, 0], X[:, 1], c=model.predict(X), s=10, cmap=plt.cm.winter)
axes[1].scatter(X[:, 0], X[:, 1], c=h(X, w, b) > .5, s=10, cmap=plt.cm.winter)
axes[0].set_title('Best');
axes[1].set_title('Ours');
"""
Explanation: Test on Real Data
End of explanation
"""
mne-tools/mne-tools.github.io | 0.24/_downloads/5b9edf9c05aec2b9bb1f128f174ca0f3/40_cluster_1samp_time_freq.ipynb | bsd-3-clause
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD-3-Clause
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import tfr_morlet
from mne.stats import permutation_cluster_1samp_test
from mne.datasets import sample
print(__doc__)
"""
Explanation: Non-parametric 1 sample cluster statistic on single trial power
This script shows how to estimate significant clusters
in time-frequency power estimates. It uses a non-parametric
statistical procedure based on permutations and cluster
level statistics.
The procedure consists of:
extracting epochs
computing single-trial power estimates
baseline-correcting the power estimates (power ratios)
computing statistics to see if the ratio deviates from 1.
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
tmin, tmax, event_id = -0.3, 0.6, 1
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.find_events(raw, stim_channel='STI 014')
include = []
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
stim=False, include=include, exclude='bads')
# Load condition 1
event_id = 1
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), preload=True,
reject=dict(grad=4000e-13, eog=150e-6))
# just use right temporal sensors for speed
epochs.pick_channels(mne.read_vectorview_selection('Right-temporal'))
evoked = epochs.average()
# Factor to down-sample the temporal dimension of the TFR computed by
# tfr_morlet. Decimation occurs after frequency decomposition and can
# be used to reduce memory usage (and possibly computational time of downstream
# operations such as nonparametric statistics) if you don't need high
# spectrotemporal resolution.
decim = 5
freqs = np.arange(8, 40, 2) # define frequencies of interest
sfreq = raw.info['sfreq'] # sampling in Hz
tfr_epochs = tfr_morlet(epochs, freqs, n_cycles=4., decim=decim,
average=False, return_itc=False, n_jobs=1)
# Baseline power
tfr_epochs.apply_baseline(mode='logratio', baseline=(-.100, 0))
# Crop in time to keep only what is between 0 and 400 ms
evoked.crop(-0.1, 0.4)
tfr_epochs.crop(-0.1, 0.4)
epochs_power = tfr_epochs.data
"""
Explanation: Set parameters
End of explanation
"""
sensor_adjacency, ch_names = mne.channels.find_ch_adjacency(
tfr_epochs.info, 'grad')
# Subselect the channels we are actually using
use_idx = [ch_names.index(ch_name.replace(' ', ''))
for ch_name in tfr_epochs.ch_names]
sensor_adjacency = sensor_adjacency[use_idx][:, use_idx]
assert sensor_adjacency.shape == \
(len(tfr_epochs.ch_names), len(tfr_epochs.ch_names))
assert epochs_power.data.shape == (
len(epochs), len(tfr_epochs.ch_names),
len(tfr_epochs.freqs), len(tfr_epochs.times))
adjacency = mne.stats.combine_adjacency(
sensor_adjacency, len(tfr_epochs.freqs), len(tfr_epochs.times))
# our adjacency is square with each dim matching the data size
assert adjacency.shape[0] == adjacency.shape[1] == \
len(tfr_epochs.ch_names) * len(tfr_epochs.freqs) * len(tfr_epochs.times)
"""
Explanation: Define adjacency for statistics
To compute a cluster-corrected value, we need a suitable definition
for the adjacency (neighborhood structure) of our values. So we first compute the
sensor adjacency, then combine that with a grid/lattice adjacency
assumption for the time-frequency plane:
End of explanation
"""
threshold = 3.
n_permutations = 50 # Warning: 50 is way too small for real-world analysis.
T_obs, clusters, cluster_p_values, H0 = \
permutation_cluster_1samp_test(epochs_power, n_permutations=n_permutations,
threshold=threshold, tail=0,
adjacency=adjacency,
out_type='mask', verbose=True)
"""
Explanation: Compute statistic
End of explanation
"""
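In the plotting cell below, only clusters whose p-value survives 0.05 are copied into the overlay image. Stripped of the MNE objects, that filtering step is just (toy numbers for illustration):

```python
import numpy as np

T_obs = np.array([2.0, -3.5, 4.1])               # toy cluster statistics
cluster_p_values = np.array([0.30, 0.02, 0.04])  # toy p-values
T_plot = np.full_like(T_obs, np.nan)
for i, p in enumerate(cluster_p_values):
    if p <= 0.05:
        T_plot[i] = T_obs[i]                     # keep significant clusters only
print(T_plot)  # → [ nan -3.5  4.1]
```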
evoked_data = evoked.data
times = 1e3 * evoked.times
plt.figure()
plt.subplots_adjust(0.12, 0.08, 0.96, 0.94, 0.2, 0.43)
# Create new stats image with only significant clusters
T_obs_plot = np.nan * np.ones_like(T_obs)
for c, p_val in zip(clusters, cluster_p_values):
if p_val <= 0.05:
T_obs_plot[c] = T_obs[c]
# Just plot one channel's data
ch_idx, f_idx, t_idx = np.unravel_index(
np.nanargmax(np.abs(T_obs_plot)), epochs_power.shape[1:])
# ch_idx = tfr_epochs.ch_names.index('MEG 1332') # to show a specific one
vmax = np.max(np.abs(T_obs))
vmin = -vmax
plt.subplot(2, 1, 1)
plt.imshow(T_obs[ch_idx], cmap=plt.cm.gray,
extent=[times[0], times[-1], freqs[0], freqs[-1]],
aspect='auto', origin='lower', vmin=vmin, vmax=vmax)
plt.imshow(T_obs_plot[ch_idx], cmap=plt.cm.RdBu_r,
extent=[times[0], times[-1], freqs[0], freqs[-1]],
aspect='auto', origin='lower', vmin=vmin, vmax=vmax)
plt.colorbar()
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title(f'Induced power ({tfr_epochs.ch_names[ch_idx]})')
ax2 = plt.subplot(2, 1, 2)
evoked.plot(axes=[ax2], time_unit='s')
plt.show()
"""
Explanation: View time-frequency plots
End of explanation
"""
mas-dse-greina/neon | Basic Regression with neon.ipynb | apache-2.0
import numpy as np
m = 123.45 # Slope of our line (weight)
b = -67.89 # Intercept of our line (bias)
numDataPoints = 100 # Let's just have 100 total data points
X = np.random.rand(numDataPoints, 1) # Let's generate a vector X with numDataPoints random numbers
noiseScale = 1.2 # The larger this value, the noisier the data.
trueLine = m*X + b # Let's generate a vector Y based on a linear model of X
y = trueLine + noiseScale * np.random.randn(numDataPoints, 1) # Let's add some noise so the line is more like real data.
from neon.data import ArrayIterator
from neon.backends import gen_backend
gen_backend(backend='cpu', batch_size=2) # Change to 'gpu' if you have gpu support
train = ArrayIterator(X=X, y=y, make_onehot=False)
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(10,7))
plt.scatter(X, y, alpha=0.7, color='g')
plt.plot(X, trueLine, alpha=0.5, color='r')
plt.title('Raw data is a line with slope (m) of {} and intercept (b) of {}'.format(m, b), fontsize=14);
plt.grid('on');
plt.legend(['True line', 'Raw data'], fontsize=18);
"""
Explanation: Basic Linear Regression with neon
Tony Reina<br>
4 JULY 2017
This is a very basic example of doing linear regression with Intel-Nervana's neon Deep Learning platform. It is based on this code.
This code shows that neon is not just for neural networks. It can handle all sorts of numerical computations and optimizations.
Linear regression is a common statistical method for fitting a line to data. It allows us to create a linear model so that we can predict outcomes based on new data.
We'll generate a simple line with some random noise and then use gradient descent to determine the parameters.
This also shows how to load custom data (e.g. user generated numpy arrays) into the neon DataIterator (ArrayIterator).
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Generate a backend for neon to use
This sets up either our GPU or CPU connection to neon. If we don't start with this, then ArrayIterator won't execute.
We're asking neon to use the cpu, but we can change that to a gpu if it is available. Batch size refers to how many data points are taken at a time. For example, here we compute the gradient over 2 data points at a time. Here's a primer on Gradient Descent.
Technical note: Your batch size must always be much less than the number of points in your data. So if you have 50 points, then set your batch size to something much less than 50.
Load custom data for neon to use
We can use this to generate datasets that neon understands for any custom data (e.g. python, numpy, etc.). The default behavior is to automatically turn the labels (y) into one-hot encoding for classification problems. We'll override that because we want to do regression (i.e. continuous values)
End of explanation
"""
from neon.initializers import Gaussian
from neon.optimizers import GradientDescentMomentum
from neon.layers import Linear, Bias
from neon.layers import GeneralizedCost
from neon.transforms import SumSquared
from neon.models import Model
from neon.callbacks.callbacks import Callbacks
"""
Explanation: Let's import the neon libraries we'll need for gradient descent
End of explanation
"""
init_norm = Gaussian(loc=0.0, scale=1)
"""
Explanation: Initialize the weights and bias variables
We'll use numbers from the Gaussian distribution ($\mu=0, \sigma=1$) to initialize the weights and bias terms for our regression model.
End of explanation
"""
layers = [Linear(1, init=init_norm), # Linear layer with 1 unit
Bias(init=init_norm)] # Bias layer
model = Model(layers=layers)
"""
Explanation: Create our single layer linear model
Neon is a pro at handling complicated graphs like deep neural networks. Nevertheless, it can also handle the simplest graph: a single layer linear model.
End of explanation
"""
# Loss function is the squared difference
cost = GeneralizedCost(costfunc=SumSquared())
"""
Explanation: Cost function
How "close" is the model's prediction is to the true value? For the case of regression we'll just define the sum of the squared error between the model's prediction and the true value. Other types of models may require different cost functions.
End of explanation
"""
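The `SumSquared` cost penalizes the squared residual between prediction and target; the quantity being minimized is simply:

```python
# Sum of squared errors between true values and predictions
y_true = [1.0, 2.0, 3.0]
y_pred = [1.5, 1.5, 3.0]
sse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
print(sse)  # → 0.5
```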
optimizer = GradientDescentMomentum(0.1, momentum_coef=0.9)
"""
Explanation: Gradient descent
All of our models will use gradient descent. We will iteratively update the model weights and biases in order to minimize the cost of the model.
End of explanation
"""
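A sketch of the classic momentum update rule with the hyperparameters used above (learning rate 0.1, momentum 0.9); neon's exact bookkeeping may differ slightly from this form:

```python
lr, momentum = 0.1, 0.9
w, v = 1.0, 0.0            # weight and its velocity
for grad in (2.0, 2.0):    # pretend two identical gradient evaluations
    v = momentum * v - lr * grad  # velocity accumulates past gradients
    w = w + v
print(round(w, 3))  # step 1: v=-0.2 → w=0.8; step 2: v=-0.38 → w=0.42
```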
# Execute the model
model.fit(train,
optimizer=optimizer,
num_epochs=11,
cost=cost,
callbacks=Callbacks(model))
"""
Explanation: Run the model
This starts gradient descent. The number of epochs is how many times we want to perform gradient descent on our entire training dataset. So 11 epochs means that we repeat gradient descent on our data 11 times in a row.
End of explanation
"""
# print weights
slope = model.get_description(True)['model']['config']['layers'][0]['params']['W'][0][0]
print ("calculated slope = {:.3f}, true slope = {:.3f}".format(slope, m))
bias_weight = model.get_description(True)['model']['config']['layers'][1]['params']['W'][0][0]
print ("calculated bias = {:.3f}, true bias = {:.3f}".format(bias_weight, b))
plt.figure(figsize=(10,7))
plt.plot(X, slope*X+bias_weight, alpha=0.5, color='b', marker='^')
plt.scatter(X, y, alpha=0.7, color='g')
plt.plot(X, trueLine, '--', alpha=0.5, color='r')
plt.title('How close is our predicted model?', fontsize=18);
plt.grid('on');
plt.legend(['Predicted Line', 'True line', 'Raw Data'], fontsize=18);
"""
Explanation: Print the results
How close are we to the true line?
Play around with the noiseScale, m, and b parameters to convince yourself that neon is properly fitting the model.
End of explanation
"""
steinam/teacher | jup_notebooks/data-science-ipython-notebooks-master/scikit-learn/scikit-learn-validation.ipynb | mit
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# Use seaborn for plotting defaults
import seaborn as sns; sns.set()
"""
Explanation: Validation and Model Selection
Credits: Forked from PyCon 2015 Scikit-learn Tutorial by Jake VanderPlas
In this section, we'll look at model evaluation and the tuning of hyperparameters, which are parameters that define the model.
End of explanation
"""
from sklearn.datasets import load_digits
digits = load_digits()
X = digits.data
y = digits.target
"""
Explanation: Validating Models
One of the most important pieces of machine learning is model validation: that is, checking how well your model fits a given dataset. But there are some pitfalls you need to watch out for.
Consider the digits example we've been looking at previously. How might we check how well our model fits the data?
End of explanation
"""
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X, y)
"""
Explanation: Let's fit a K-neighbors classifier
End of explanation
"""
y_pred = knn.predict(X)
"""
Explanation: Now we'll use this classifier to predict labels for the data
End of explanation
"""
print("{0} / {1} correct".format(np.sum(y == y_pred), len(y)))
"""
Explanation: Finally, we can check how well our prediction did:
End of explanation
"""
from sklearn.model_selection import train_test_split  # sklearn.cross_validation in versions before 0.18
X_train, X_test, y_train, y_test = train_test_split(X, y)
X_train.shape, X_test.shape
"""
Explanation: It seems we have a perfect classifier!
Question: what's wrong with this?
Validation Sets
Above we made the mistake of testing our data on the same set of data that was used for training. This is not generally a good idea. If we optimize our estimator this way, we will tend to over-fit the data: that is, we learn the noise.
A better way to test a model is to use a hold-out set which doesn't enter the training. We've seen this before using scikit-learn's train/test split utility:
End of explanation
"""
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print("{0} / {1} correct".format(np.sum(y_test == y_pred), len(y_test)))
"""
Explanation: Now we train on the training data, and validate on the test data:
End of explanation
"""
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_pred)
"""
Explanation: This gives us a more reliable estimate of how our model is doing.
The metric we're using here, comparing the number of matches to the total number of samples, is known as the accuracy score, and can be computed using the following routine:
End of explanation
"""
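The accuracy score is just the fraction of matching labels; with numpy the whole routine is one comparison:

```python
import numpy as np

y_true = np.array([0, 1, 1, 0])
y_pred = np.array([0, 1, 0, 0])
acc = np.mean(y_true == y_pred)  # fraction of correct predictions
print(acc)  # → 0.75
```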
knn.score(X_test, y_test)
"""
Explanation: This can also be computed directly from the model.score method:
End of explanation
"""
for n_neighbors in [1, 5, 10, 20, 30]:
knn = KNeighborsClassifier(n_neighbors)
knn.fit(X_train, y_train)
print(n_neighbors, knn.score(X_test, y_test))
"""
Explanation: Using this, we can ask how this changes as we change the model parameters, in this case the number of neighbors:
End of explanation
"""
X1, X2, y1, y2 = train_test_split(X, y, test_size=0.5, random_state=0)
X1.shape, X2.shape
print(KNeighborsClassifier(1).fit(X2, y2).score(X1, y1))
print(KNeighborsClassifier(1).fit(X1, y1).score(X2, y2))
"""
Explanation: We see that in this case, a small number of neighbors seems to be the best option.
Cross-Validation
One problem with validation sets is that you "lose" some of the data. Above, we've only used 3/4 of the data for the training, and used 1/4 for the validation. Another option is to use 2-fold cross-validation, where we split the sample in half and perform the validation twice:
End of explanation
"""
from sklearn.cross_validation import cross_val_score
cross_val_score(KNeighborsClassifier(1), X, y, cv=2)
"""
Explanation: Thus a two-fold cross-validation gives us two estimates of the score for that parameter.
Because this is a bit of a pain to do by hand, scikit-learn has a utility routine to help:
End of explanation
"""
cross_val_score(KNeighborsClassifier(1), X, y, cv=10)
"""
Explanation: K-fold Cross-Validation
Here we've used 2-fold cross-validation. This is just one specialization of $K$-fold cross-validation, where we split the data into $K$ chunks and perform $K$ fits, where each chunk gets a turn as the validation set.
We can do this by changing the cv parameter above. Let's do 10-fold cross-validation:
End of explanation
"""
def test_func(x, err=0.5):
y = 10 - 1. / (x + 0.1)
if err > 0:
y = np.random.normal(y, err)
return y
"""
Explanation: This gives us an even better idea of how well our model is doing.
Overfitting, Underfitting and Model Selection
Now that we've gone over the basics of validation, and cross-validation, it's time to go into even more depth regarding model selection.
The issues associated with validation and
cross-validation are some of the most important
aspects of the practice of machine learning. Selecting the optimal model
for your data is vital, and is a piece of the problem that is not often
appreciated by machine learning practitioners.
Of core importance is the following question:
If our estimator is underperforming, how should we move forward?
Use simpler or more complicated model?
Add more features to each observed data point?
Add more training samples?
The answer is often counter-intuitive. In particular, sometimes using a
more complicated model will give worse results. Also, sometimes adding
training data will not improve your results. The ability to determine
what steps will improve your model is what separates the successful machine
learning practitioners from the unsuccessful.
Illustration of the Bias-Variance Tradeoff
For this section, we'll work with a simple 1D regression problem. This will help us to
easily visualize the data and the model, and the results generalize easily to higher-dimensional
datasets. We'll explore a simple linear regression problem.
This can be accomplished within scikit-learn with the sklearn.linear_model module.
We'll create a simple nonlinear function that we'd like to fit
End of explanation
"""
def make_data(N=40, error=1.0, random_seed=1):
# randomly sample the data
np.random.seed(random_seed)
X = np.random.random(N)[:, np.newaxis]
y = test_func(X.ravel(), error)
return X, y
X, y = make_data(40, error=1)
plt.scatter(X.ravel(), y);
"""
Explanation: Now let's create a realization of this dataset:
End of explanation
"""
X_test = np.linspace(-0.1, 1.1, 500)[:, None]
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
model = LinearRegression()
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X), y)));
"""
Explanation: Now say we want to perform a regression on this data. Let's use the built-in linear regression function to compute a fit:
End of explanation
"""
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
def PolynomialRegression(degree=2, **kwargs):
return make_pipeline(PolynomialFeatures(degree),
LinearRegression(**kwargs))
"""
Explanation: We have fit a straight line to the data, but clearly this model is not a good choice. We say that this model is biased, or that it under-fits the data.
Let's try to improve this by creating a more complicated model. We can do this by adding degrees of freedom, and computing a polynomial regression over the inputs. Scikit-learn makes this easy with the PolynomialFeatures preprocessor, which can be pipelined with a linear regression.
Let's make a convenience routine to do this:
End of explanation
"""
model = PolynomialRegression(2)
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X), y)));
"""
Explanation: Now we'll use this to fit a quadratic curve to the data.
End of explanation
"""
model = PolynomialRegression(30)
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X), y)))
plt.ylim(-4, 14);
"""
Explanation: This reduces the mean squared error, and makes a much better fit. What happens if we use an even higher-degree polynomial?
End of explanation
"""
from IPython.html.widgets import interact
def plot_fit(degree=1, Npts=50):
X, y = make_data(Npts, error=1)
X_test = np.linspace(-0.1, 1.1, 500)[:, None]
model = PolynomialRegression(degree=degree)
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.ylim(-4, 14)
plt.title("mean squared error: {0:.2f}".format(mean_squared_error(model.predict(X), y)))
interact(plot_fit, degree=[1, 30], Npts=[2, 100]);
"""
Explanation: When we increase the degree to this extent, it's clear that the resulting fit is no longer reflecting the true underlying distribution, but is more sensitive to the noise in the training data. For this reason, we call it a high-variance model, and we say that it over-fits the data.
Just for fun, let's use IPython's interact capability (only in IPython 2.0+) to explore this interactively:
End of explanation
"""
X, y = make_data(120, error=1.0)
plt.scatter(X, y);
from sklearn.learning_curve import validation_curve
def rms_error(model, X, y):
y_pred = model.predict(X)
return np.sqrt(np.mean((y - y_pred) ** 2))
degree = np.arange(0, 18)
val_train, val_test = validation_curve(PolynomialRegression(), X, y,
'polynomialfeatures__degree', degree, cv=7,
scoring=rms_error)
"""
Explanation: Detecting Over-fitting with Validation Curves
Clearly, computing the error on the training data is not enough (we saw this previously). As above, we can use cross-validation to get a better handle on how the model fit is working.
Let's do this here, again using the validation_curve utility. To make things more clear, we'll use a slightly larger dataset:
End of explanation
"""
def plot_with_err(x, data, **kwargs):
mu, std = data.mean(1), data.std(1)
lines = plt.plot(x, mu, '-', **kwargs)
plt.fill_between(x, mu - std, mu + std, edgecolor='none',
facecolor=lines[0].get_color(), alpha=0.2)
plot_with_err(degree, val_train, label='training scores')
plot_with_err(degree, val_test, label='validation scores')
plt.xlabel('degree'); plt.ylabel('rms error')
plt.legend();
"""
Explanation: Now let's plot the validation curves:
End of explanation
"""
model = PolynomialRegression(4).fit(X, y)
plt.scatter(X, y)
plt.plot(X_test, model.predict(X_test));
"""
Explanation: Notice the trend here, which is common for this type of plot.
For a small model complexity, the training error and validation error are very similar. This indicates that the model is under-fitting the data: it doesn't have enough complexity to represent the data. Another way of putting it is that this is a high-bias model.
As the model complexity grows, the training and validation scores diverge. This indicates that the model is over-fitting the data: it has so much flexibility, that it fits the noise rather than the underlying trend. Another way of putting it is that this is a high-variance model.
Note that the training score (nearly) always improves with model complexity. This is because a more complicated model can fit the noise better, so the model improves. The validation data generally has a sweet spot, which here is around 5 terms.
Here's our best-fit model according to the cross-validation:
End of explanation
"""
from sklearn.learning_curve import learning_curve
def plot_learning_curve(degree=3):
train_sizes = np.linspace(0.05, 1, 20)
N_train, val_train, val_test = learning_curve(PolynomialRegression(degree),
X, y, train_sizes, cv=5,
scoring=rms_error)
plot_with_err(N_train, val_train, label='training scores')
plot_with_err(N_train, val_test, label='validation scores')
plt.xlabel('Training Set Size'); plt.ylabel('rms error')
plt.ylim(0, 3)
plt.xlim(5, 80)
plt.legend()
"""
Explanation: Detecting Data Sufficiency with Learning Curves
As you might guess, the exact turning-point of the tradeoff between bias and variance is highly dependent on the number of training points used. Here we'll illustrate the use of learning curves, which display this property.
The idea is to plot the mean-squared-error for the training and test set as a function of Number of Training Points
End of explanation
"""
plot_learning_curve(1)
"""
Explanation: Let's see what the learning curves look like for a linear model:
End of explanation
"""
plot_learning_curve(3)
"""
Explanation: This shows a typical learning curve: for very few training points, there is a large separation between the training and test error, which indicates over-fitting. Given the same model, for a large number of training points, the training and testing errors converge, which indicates potential under-fitting.
As you add more data points, the training error will never increase, and the testing error will never decrease (why do you think this is?)
It is easy to see that, in this plot, if you'd like to reduce the MSE down to the nominal value of 1.0 (which is the magnitude of the scatter we put in when constructing the data), then adding more samples will never get you there. For $d=1$, the two curves have converged and cannot move lower. What about for a larger value of $d$?
End of explanation
"""
plot_learning_curve(10)
"""
Explanation: Here we see that by adding more model complexity, we've managed to lower the level of convergence to an rms error of 1.0!
What if we get even more complex?
End of explanation
"""
|
chrisbarnettster/cfg-analysis-on-heroku-jupyter | notebooks/notebooks/zscore_highbinders_for_galectin.ipynb | mit | ## House keeping tasks
%reset -f
"""
Explanation: check z-score and features of galectin data
also see https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3097418/ for suggestions on analysis of glycan arrays
z-score as the statistical test for significance of a sample
In the paper by Cholleti and Cummings http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3459425/#SD2
"To avoid using an arbitrary threshold in determining binders and non-binders, we used the z-score as the statistical test for significance of a sample. The z-score transformation is calculated by comparing the value of a sample, relative to the sample mean and standard deviation, with the null hypothesis being that a random sample pulled from the population would be a non-binder. If the converted p value is less than 0.15, the null hypothesis is rejected and the sample is considered a binding glycan. We used a non-conservative p value to allow more glycans in the list of candidate binders as an input to GLYMMR. The z-score transformation is based on the sum of the RFU intensity values for the three different concentrations of the glycan. This statistical test allows the program to discard not only non-binding glycans, but glycans that exhibit non-specific binding, which could distort the motif discovery algorithm. "
End of explanation
"""
# standard imports
import urllib2
import os
import json
import StringIO
import pickle
# dataframe and numerical
import pandas as pd
import numpy as np
# plotting
import matplotlib.pyplot as plt
%matplotlib inline
#scipy
from scipy import stats
from scipy.special import erf
from scipy import sqrt
from IPython.display import HTML
def addToggle():
return '''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
The raw code for this IPython notebook is by default hidden for easier reading.
To toggle on/off the raw code, click <a href="javascript:code_toggle()">here</a>.'''
HTML(addToggle())
## variables for this project
samples_in="../data/galectin-3/galectin-3_5.0_human.json"
results_dir = "../results/galectin-3/"
dataframe_out=results_dir+"dataframes_galectin.pkl"
"""
Explanation: import all required dependencies
End of explanation
"""
# Check whether or not the dataframes exist
subdir="./"
dataframefile=dataframe_out
if not os.path.isfile(dataframefile):
print "calling the notebook that loads the data"
%run download_cfg_for_sna.ipynb
with open(os.path.join(subdir, dataframefile)) as f:
dataframes = pickle.load(f)
# peak at the data in frame 0
frame=dataframes[0]["dataframe"]
frame.head()
# recalculate CV
# rank the glycans by RFU
STDEV="StDev"
RFU="Average RFU"
frame["CV"]=100*frame[STDEV]/frame[RFU]
maxrfu=frame[RFU].max()
frame["rank"]=100*frame[RFU]/maxrfu
frame.head()
# choose data to work with
# ignore the 0.5 ug concentration; use 2, 5, and 10
frames=[dataframes[1]["dataframe"],dataframes[2]["dataframe"], dataframes[3]["dataframe"]]
sample_keys=[dataframes[1]["sample"].encode('utf-8'),dataframes[2]["sample"].encode('utf-8'), dataframes[3]["sample"].encode('utf-8')]
# recalculate CV and rank the glycans by RFU
for frame in frames:
frame["%CV"]=frame["% CV"]
frame["CV"]=100*frame[STDEV]/frame[RFU]
maxrfu=frame[RFU].max()
frame["rank"]=100*frame[RFU]/maxrfu
# peek at all frames
result = pd.concat(frames, keys=sample_keys)
result
"""
Explanation: Load data from pickle
End of explanation
"""
# calculate rank, %CV for all frames, z-score, p-value for all frames and sort by average rank
Structure="Structure on Masterlist"
for aframe in frames:
aframe["CV"]=100*aframe[STDEV]/aframe[RFU]
maxrfu=aframe[RFU].max()
aframe["rank"]=100*aframe[RFU]/maxrfu
aframe["z-score"]=stats.zscore(aframe[RFU])
aframe["p-value"]=1- 0.5 * (1 + erf(aframe["z-score"] / sqrt(2)))
#. merge_frames
df_final = reduce(lambda left,right: pd.merge(left,right,on=[Structure,'Chart Number']), frames)
df_final
frames[2]["CV"], sample_keys[2]
#. calculate the average rank
df_final["avg_rank"]=df_final.filter(regex=("rank.*")).sum(axis=1)/df_final.filter(regex=("rank.*")).shape[1] # http://stackoverflow.com/questions/30808430/how-to-select-columns-from-dataframe-by-regex
#. calculate the summed RFU
df_final["summed_RFU"]=df_final.filter(regex=("RFU.*")).sum(axis=1)
#. calculate the z-score and p-value for the summed RFU
df_final.head()
df_final["summed_RFU_z-score"]=stats.zscore(df_final["summed_RFU"])
df_final["summed_RFU_p-value"]=1- 0.5 * (1 + erf(df_final["summed_RFU_z-score"] / sqrt(2)))
df_final.sort_values("avg_rank",ascending=False)
#frames_RFU_sum["p-value_them"]=1- 0.5 * (1 + erf(frames_RFU_sum["stats_z-score"] / sqrt(2)))
"""
Explanation: RFU, z-score and p-value
We must convert from the z-score to a p-value.
In the paper, the RFU is summed across all datasets and this sum is used to calculate the p-value
End of explanation
"""
#. extract the high binders. p-value < 0.15
df_final_high_binders = df_final[df_final["summed_RFU_p-value"] <0.15]
df_final_high_binders.sort_values("avg_rank",ascending=False)
#print df_final_high_binders.shape
high_binders= set(df_final_high_binders["Chart Number"])
high_binders
df_final[df_final["Chart Number"]==582]
"""
Explanation: extract only the high binders
End of explanation
"""
#. lets pull out any %CV column of df_final and ensure CV <20
#.. remember there are negative CV in this sample, exclude these
df_cv_20=df_final.filter(regex=("%CV.*")) <=20
df_cv_0=df_final.filter(regex=("%CV.*")) >0
df_cv_0_20=(df_cv_0 & df_cv_20)
#print df_cv_0_20.head(340)
andmask=df_cv_0_20["%CV_x"]&df_cv_0_20["%CV_y"]&df_cv_0_20["%CV"]
ormask=df_cv_0_20["%CV_x"]|df_cv_0_20["%CV_y"]|df_cv_0_20["%CV"]
ormask1=df_cv_0_20["%CV_x"]
ormask2=df_cv_0_20["%CV_y"]
ormask3=df_cv_0_20["%CV"]
#mask
glycan_ids_cv_20=df_final["Chart Number"][andmask]
print len(glycan_ids_cv_20)
df_final_high_binders.sort_values("avg_rank",ascending=False)["Chart Number"]
sample_keys # but note the way I made the frame means that rank_x is 2mg, rank_y is 5mg and rank is 10mg
glycan_ids_cv_20_1= df_final[ormask1].sort_values("rank_x",ascending=False)
print len(glycan_ids_cv_20_1)
glycan_ids_cv_20_2= df_final[ormask2].sort_values("rank_y",ascending=False)
print len(glycan_ids_cv_20_2)
glycan_ids_cv_20_3= df_final[ormask3].sort_values("rank",ascending=False)
print len(glycan_ids_cv_20_3)
"""
Explanation: What about the % CV?
See https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3097418/
There are three statements made about %CV
- >50% the data should be disregarded
- >20-40% the CV is high and the results for binding may not be reliable
- >30% when binding is low with high imprecision the data are classified as inconclusive.
comparing % CV with the high binders from z-scores.
End of explanation
"""
# create dictionary to store results
results = {}
def highbinders(dataframe, pvalue="p-value", rank="rank",pvalue_cutoff=0.15, rank_cutoff=75):
"""
A function which filter the input dataframe by pvalue and rank
returns the filtered dataframe and a list of glycan chart number for the current array
"""
dataframe_p=dataframe[dataframe[pvalue]<pvalue_cutoff]
dataframe_p_r = dataframe_p[dataframe_p[rank]>rank_cutoff]
dataframe_p_r.sort_values(rank,ascending=False)
return dataframe_p_r,set(dataframe_p_r["Chart Number"])
#. extract the high binders. p-value < 0.15 rank> 75
average_high_df, average_highbind_list = highbinders(df_final,pvalue="summed_RFU_p-value", rank="avg_rank")
average_highbind_list, len(average_highbind_list)
results["average"]=average_highbind_list
# rank filter 75
rf=75
#. extract the high binders for 2mg . p-value < 0.15 rank> 75
twomggly, twomg_highbind_list = highbinders(glycan_ids_cv_20_1,pvalue="p-value_x",rank="rank_x")
results["twomg_filter"+str(rf)]=set(twomg_highbind_list)
#. extract the high binders for 5mg . p-value < 0.15 rank> 75
fivemggly, fivemg_highbind_list = highbinders(glycan_ids_cv_20_2,pvalue="p-value_y",rank="rank_y")
results["fivemg_filter"+str(rf)]=set(fivemg_highbind_list)
#. extract the high binders for 10mg . p-value < 0.15 rank> 75
tenmggly, tenmg_highbind_list = highbinders(glycan_ids_cv_20_3,pvalue="p-value",rank="rank")
results["tenmg_filter"+str(rf)]=set(tenmg_highbind_list)
# rank filter 50
rf=50
#. extract the high binders for 2mg . p-value < 0.15 rank> 50
twomggly, twomg_highbind_list = highbinders(glycan_ids_cv_20_1,pvalue="p-value_x",rank="rank_x",rank_cutoff=rf)
results["twomg_filter"+str(rf)]=set(twomg_highbind_list)
#. extract the high binders for 5mg . p-value < 0.15 rank> 50
fivemggly, fivemg_highbind_list = highbinders(glycan_ids_cv_20_2,pvalue="p-value_y",rank="rank_y",rank_cutoff=rf)
results["fivemg_filter"+str(rf)]=set(fivemg_highbind_list)
#. extract the high binders for 10mg . p-value < 0.15 rank> 50
tenmggly, tenmg_highbind_list = highbinders(glycan_ids_cv_20_3,pvalue="p-value",rank="rank",rank_cutoff=rf)
results["tenmg_filter"+str(rf)]=set(tenmg_highbind_list)
# top 10 without filtering
results["twomg_topten_nofilter"]=set(df_final.sort_values("rank_x",ascending=False)[0:10]["Chart Number"])
results["fivemg_topten_nofilter"]=set(df_final.sort_values("rank_y",ascending=False)[0:10]["Chart Number"])
results["tenmg_topten_nofilter"]=set(df_final.sort_values("rank",ascending=False)[0:10]["Chart Number"])
results
#a={"average":average_highbind_list, "twomg":set(twomg_highbind_list),"fivemg":set(fivemg_highbind_list),"tenmg":set(tenmg_highbind_list)}
#a
# see http://pandas.pydata.org/pandas-docs/stable/options.html for pandas options
pd.set_option('display.max_columns',1000)
pd.set_option('max_columns', 100)
df_final[df_final["Chart Number"]==340]
"""
Explanation: extract highbinders and other sets for MCAW analysis
these should be 2,5,10, but this is manually coded so watch out
End of explanation
"""
# make various views of frame 0 based on the %CV
df_cv_50 = frame[frame.CV <50]
df_cv_30 = frame[frame.CV <30]
df_cv_20 = frame[frame.CV <20]
df_cv_20_0 = df_cv_20[df_cv_20.CV>0]
# plot rank v %CV
# plot comparison of different %CV subsets
plt.figure()
df_cv_20["CV"].plot(legend=True, title='%CV<=20%')
df_cv_20[STDEV].plot(secondary_y=True, style='g', legend=True)
plt.figure()
df_cv_20_0["CV"].plot(legend=True, title='0<%CV<=20%')
df_cv_20_0[STDEV].plot(secondary_y=True, style='g', legend=True)
plt.figure()
df_cv_30["CV"].plot(legend=True, title='%CV<=30%')
df_cv_30[STDEV].plot(secondary_y=True, style='g', legend=True)
plt.figure()
df_cv_50["CV"].plot(legend=True, title='%CV<=50%')
df_cv_50[STDEV].plot(secondary_y=True, style='g', legend=True)
# use 0<cv<20 and order by rank
pd.set_option('max_rows', 300)
df_cv_20_0.sort_values("rank",ascending=False)
plt.figure()
df_cv_20[RFU].plot(legend=True, title='%CV<=20%')
df_cv_20[STDEV].plot(secondary_y=True, style='g', legend=True)
"""
Explanation: A consideration of the glycans by %CV for the first frame
End of explanation
"""
|
jupyter/nbgrader | nbgrader/tests/apps/files/test-no-metadata.ipynb | bsd-3-clause | def squares(n):
"""Compute the squares of numbers from 1 to n, such that the
ith element of the returned list equals i^2.
"""
### BEGIN SOLUTION
if n < 1:
raise ValueError("n must be greater than or equal to 1")
return [i ** 2 for i in range(1, n + 1)]
### END SOLUTION
"""
Explanation: For this problem set, we'll be using the Jupyter notebook:
Part A (2 points)
Write a function that returns a list of numbers, such that $x_i=i^2$, for $1\leq i \leq n$. Make sure it handles the case where $n<1$ by raising a ValueError.
End of explanation
"""
squares(10)
"""Check that squares returns the correct output for several inputs"""
assert squares(1) == [1]
assert squares(2) == [1, 4]
assert squares(10) == [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
assert squares(11) == [1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121]
"""Check that squares raises an error for invalid inputs"""
try:
squares(0)
except ValueError:
pass
else:
raise AssertionError("did not raise")
try:
squares(-4)
except ValueError:
pass
else:
raise AssertionError("did not raise")
"""
Explanation: Your function should print [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] for $n=10$. Check that it does:
End of explanation
"""
def sum_of_squares(n):
"""Compute the sum of the squares of numbers from 1 to n."""
### BEGIN SOLUTION
return sum(squares(n))
### END SOLUTION
"""
Explanation: Part B (1 point)
Using your squares function, write a function that computes the sum of the squares of the numbers from 1 to $n$. Your function should call the squares function -- it should NOT reimplement its functionality.
End of explanation
"""
sum_of_squares(10)
"""Check that sum_of_squares returns the correct answer for various inputs."""
assert sum_of_squares(1) == 1
assert sum_of_squares(2) == 5
assert sum_of_squares(10) == 385
assert sum_of_squares(11) == 506
"""Check that sum_of_squares relies on squares."""
orig_squares = squares
del squares
try:
sum_of_squares(1)
except NameError:
pass
else:
raise AssertionError("sum_of_squares does not use squares")
finally:
squares = orig_squares
"""
Explanation: The sum of squares from 1 to 10 should be 385. Verify that this is the answer you get:
End of explanation
"""
def pyramidal_number(n):
"""Returns the n^th pyramidal number"""
return sum_of_squares(n)
"""
Explanation: Part C (1 point)
Using LaTeX math notation, write out the equation that is implemented by your sum_of_squares function.
$\sum_{i=1}^n i^2$
Part D (2 points)
Find a usecase for your sum_of_squares function and implement that usecase in the cell below.
End of explanation
"""
|
gth158a/learning | Keras as simplified TensorFlow.ipynb | apache-2.0 | import tensorflow as tf
sess = tf.Session()
from keras import backend as K
K.set_session(sess)
"""
Explanation: Source: https://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html
Note the version of Python. I am currently using 3.6 but it seems the tutorial is using Python 2.
End of explanation
"""
# this placeholder will contain our input digits, as flat vectors
img = tf.placeholder(tf.float32, shape=(None, 784))
from keras.layers import Dense
# Keras layers can be called on TensorFlow tensors:
x = Dense(128, activation='relu')(img) # fully-connected layer with 128 units and ReLU activation
x = Dense(128, activation='relu')(x)
preds = Dense(10, activation='softmax')(x) # output layer with 10 units and a softmax activation
labels = tf.placeholder(tf.float32, shape=(None, 10))
from keras.objectives import categorical_crossentropy
loss = tf.reduce_mean(categorical_crossentropy(labels, preds))
"""
Explanation: Example with MNIST
End of explanation
"""
from tensorflow.examples.tutorials.mnist import input_data
mnist_data = input_data.read_data_sets('MNIST_data', one_hot=True)
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Initialize all variables
init_op = tf.global_variables_initializer()
sess.run(init_op)
# Run training loop
with sess.as_default():
for i in range(100):
batch = mnist_data.train.next_batch(50)
train_step.run(feed_dict={img: batch[0],
labels: batch[1]})
"""
Explanation: losses: the math and their implementation
training with tensorflow optimizer
End of explanation
"""
from keras.metrics import categorical_accuracy as accuracy
acc_value = accuracy(labels, preds)
with sess.as_default():
print(acc_value.eval(feed_dict={img: mnist_data.test.images, labels: mnist_data.test.labels}))
"""
Explanation: Evaluating the model
End of explanation
"""
from keras import backend as K
print (K.learning_phase())
# train mode
train_step.run(feed_dict={x: batch[0], labels: batch[1], K.learning_phase(): 1})
from keras.layers import Dropout
from keras import backend as K
img = tf.placeholder(tf.float32, shape=(None, 784))
labels = tf.placeholder(tf.float32, shape=(None, 10))
x = Dense(128, activation='relu')(img)
x = Dropout(0.5)(x)
x = Dense(128, activation='relu')(x)
x = Dropout(0.5)(x)
preds = Dense(10, activation='softmax')(x)
loss = tf.reduce_mean(categorical_crossentropy(labels, preds))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
with sess.as_default():
for i in range(100):
batch = mnist_data.train.next_batch(50)
train_step.run(feed_dict={img: batch[0],
labels: batch[1],
K.learning_phase(): 1})
acc_value = accuracy(labels, preds)
with sess.as_default():
print (acc_value.eval(feed_dict={img: mnist_data.test.images,
labels: mnist_data.test.labels,
K.learning_phase(): 0}))
x = tf.placeholder(tf.float32, shape=(None, 20, 64))
with tf.name_scope('block1'):
y = LSTM(32, name='mylstm')(x)
with tf.device('/gpu:0'):
x = tf.placeholder(tf.float32, shape=(None, 20, 64))
y = LSTM(32)(x) # all ops / variables in the LSTM layer will live on GPU:0
from keras.layers import LSTM
import tensorflow as tf
my_graph = tf.Graph()
with my_graph.as_default():
x = tf.placeholder(tf.float32, shape=(None, 20, 64))
y = LSTM(32)(x) # all ops / variables in the LSTM layer are created as part of our graph
"""
Explanation: odd?! I am getting the prediction not the accuracy percentage??
The optimization is done via a native TensorFlow optimizer rather than a Keras optimizer.
Keras is 5% faster but bo big noticiable difference
Different behaviors during training and testing
End of explanation
"""
|
jiumem/tuthpc | multiprocessing.ipynb | bsd-3-clause | %%file multihello.py
'''hello from another process
'''
from multiprocessing import Process
def f(name):
print 'hello', name
if __name__ == '__main__':
p = Process(target=f, args=('world',))
p.start()
p.join()
# EOF
!python2.7 multihello.py
"""
Explanation: Multiprocessing and multithreading
Parallelism in python
End of explanation
"""
if __name__ == '__main__':
from multiprocessing import freeze_support
freeze_support()
# Then, do multiprocessing stuff...
"""
Explanation: On Windows: multiprocessing spawns with subprocess.Popen
End of explanation
"""
%%file sharedobj.py
'''demonstrate shared objects in multiprocessing
'''
from multiprocessing import Process, Value, Array
def f(n, a):
n.value = 3.1415927
for i in range(len(a)):
a[i] = -a[i]
if __name__ == '__main__':
num = Value('d', 0.0)
arr = Array('i', range(10))
p = Process(target=f, args=(num, arr))
p.start()
p.join()
print num.value
print arr[:]
# EOF
!python2.7 sharedobj.py
"""
Explanation: Data parallelism versus task parallelism
Multithreading versus multiple threads
The global interpreter lock
Processes versus threads
Shared memory and shared objects
Shared objects: Value and Array
End of explanation
"""
%%file sharedproxy.py
'''demonstrate sharing objects by proxy through a manager
'''
from multiprocessing import Process, Manager
def f(d, l):
d[1] = '1'
d['2'] = 2
d[0.25] = None
l.reverse()
if __name__ == '__main__':
manager = Manager()
d = manager.dict()
l = manager.list(range(10))
p = Process(target=f, args=(d, l))
p.start()
p.join()
print d
print l
# EOF
!python2.7 sharedproxy.py
"""
Explanation: Manager and proxies
End of explanation
"""
%%file numpyshared.py
'''demonstrating shared objects using numpy and ctypes
'''
import multiprocessing as mp
from multiprocessing import sharedctypes
from numpy import ctypeslib
def fill_arr(arr_view, i):
arr_view.fill(i)
if __name__ == '__main__':
ra = sharedctypes.RawArray('i', 4)
arr = ctypeslib.as_array(ra)
arr.shape = (2, 2)
p1 = mp.Process(target=fill_arr, args=(arr[:1, :], 1))
p2 = mp.Process(target=fill_arr, args=(arr[1:, :], 2))
p1.start(); p2.start()
p1.join(); p2.join()
print arr
!python2.7 numpyshared.py
"""
Explanation: See: https://docs.python.org/2/library/multiprocessing.html
Working in C with ctypes and numpy
End of explanation
"""
%%file mprocess.py
'''demonstrate the Process class
'''
import multiprocessing as mp
from time import sleep
from random import random
def worker(num):
sleep(2.0 * random())
name = mp.current_process().name
print "worker {},name:{}".format(num, name)
if __name__ == '__main__':
master = mp.current_process().name
print "Master name: {}".format(master)
for i in range(2):
p = mp.Process(target=worker, args=(i,))
p.start()
# Close all spawned child processes
[p.join() for p in mp.active_children()]
!python2.7 mprocess.py
"""
Explanation: Issues: threading and locks
Low-level task parallelism: point to point communication
Process
End of explanation
"""
%%file queuepipe.py
'''demonstrate queues and pipes
'''
import multiprocessing as mp
import pickle
def qworker(q):
v = q.get() # blocking!
print "queue worker got '{}' from parent".format(v)
def pworker(p):
import pickle # needed for encapsulation
msg = 'hello hello hello'
print "pipe worker sending {!r} to parent".format(msg)
p.send(msg)
v = p.recv()
print "pipe worker got {!r} from parent".format(v)
print "unpickled to {}".format(pickle.loads(v))
if __name__ == '__main__':
q = mp.Queue()
p = mp.Process(target=qworker, args=(q,))
p.start() # blocks at q.get()
v = 'python rocks!'
print "putting '{}' on queue".format(v)
q.put(v)
p.join()
print ''
# The two ends of the pipe: the parent and the child connections
p_conn, c_conn = mp.Pipe()
p = mp.Process(target=pworker, args=(c_conn,))
p.start()
msg = pickle.dumps([1,2,3],-1)
print "got {!r} from child".format(p_conn.recv())
print "sending {!r} to child".format(msg)
p_conn.send(msg)
import datetime
print "\nfinished: {}".format(datetime.date.today())
p.join()
!python2.7 queuepipe.py
"""
Explanation: Queue and Pipe
End of explanation
"""
%%file multi_sync.py
'''demonstrating locks
'''
import multiprocessing as mp
def print_lock(lk, i):
name = mp.current_process().name
lk.acquire()
for j in range(5):
print i, "from process", name
lk.release()
if __name__ == '__main__':
lk = mp.Lock()
ps = [mp.Process(target=print_lock, args=(lk,i)) for i in range(5)]
[p.start() for p in ps]
[p.join() for p in ps]
!python2.7 multi_sync.py
'''events
'''
import multiprocessing as mp
def wait_on_event(e):
name = mp.current_process().name
e.wait()
print name, "finished waiting"
if __name__ == '__main__':
e = mp.Event()
ps = [mp.Process(target=wait_on_event, args=(e,)) for i in range(10)]
[p.start() for p in ps]
print "e.is_set()", e.is_set()
#raw_input("press any key to set event")
e.set()
[p.join() for p in ps]
"""
Explanation: Synchronization with Lock and Event
End of explanation
"""
import multiprocessing as mp
def random_mean(x):
import numpy as np
return round(np.mean(np.random.randint(-x,x+1,10000)), 3)
if __name__ == '__main__':
# create a pool with cpu_count() processes
p = mp.Pool()
results = p.map(random_mean, range(1,10))
print results
print p.apply(random_mean, [100])
p.close()
p.join()
"""
Explanation: High-level task parallelism: collective communication
The task Pool
pipes (apply) and map
End of explanation
"""
import multiprocessing as mp
def random_mean_count(x):
import numpy as np
return x + round(np.mean(np.random.randint(-x,x+1,10000)), 3)
if __name__ == '__main__':
# create a pool with cpu_count() processes
p = mp.Pool()
results = p.imap_unordered(random_mean_count, range(1,10))
print "[",
for i in results:
print i,
if abs(i) <= 1.0:
print "...] QUIT"
break
list(results)
p.close()
p.join()
import multiprocessing as mp
def random_mean_count(x):
import numpy as np
return x + round(np.mean(np.random.randint(-x,x+1,10000)), 3)
if __name__ == '__main__':
# create a pool with cpu_count() processes
p = mp.Pool()
results = p.map_async(random_mean_count, range(1,10))
print "Waiting .",
i = 0
while not results.ready():
if not i%4000:
print ".",
i += 1
print results.get()
print "\n", p.apply_async(random_mean_count, [100]).get()
p.close()
p.join()
"""
Explanation: Variants: blocking, iterative, unordered, and asynchronous
End of explanation
"""
import numpy as np
def walk(x, n=100, box=.5, delta=.2):
"perform a random walk"
w = np.cumsum(x + np.random.uniform(-delta,delta,n))
w = np.where(abs(w) > box)[0]
return w[0] if len(w) else n
N = 10
# run N trials, all starting from x=0
pwalk = np.vectorize(walk)
print pwalk(np.zeros(N))
# run again, using list comprehension instead of ufunc
print [walk(0) for i in range(N)]
# run again, using multiprocessing's map
import multiprocessing as mp
p = mp.Pool()
print p.map(walk, [0]*N)
%%file state.py
"""some good state utilities
"""
def check_pickle(x, dill=False):
"checks the pickle across a subprocess"
import pickle
import subprocess
if dill:
import dill as pickle
pik = "dill"
else:
pik = "pickle"
fail = True
try:
_x = pickle.dumps(x)
fail = False
finally:
if fail:
print "DUMP FAILED"
msg = "python -c import {0}; print {0}.loads({1})".format(pik,repr(_x))
print "SUCCESS" if not subprocess.call(msg.split(None,2)) else "LOAD FAILED"
def random_seed(s=None):
"sets the seed for calls to 'random()'"
import random
random.seed(s)
try:
from numpy import random
random.seed(s)
except:
pass
return
def random_state(module='random', new=False, seed='!'):
"""return a (optionally manually seeded) random generator
For a given module, return an object that has random number generation (RNG)
methods available. If new=False, use the global copy of the RNG object.
If seed='!', do not reseed the RNG (using seed=None 'removes' any seeding).
If seed='*', use a seed that depends on the process id (PID); this is useful
for building RNGs that are different across multiple threads or processes.
"""
import random
if module == 'random':
rng = random
elif not isinstance(module, type(random)):
# convenience for passing in 'numpy'
if module == 'numpy': module = 'numpy.random'
try:
import importlib
rng = importlib.import_module(module)
except ImportError:
rng = __import__(module, fromlist=module.split('.')[-1:])
elif module.__name__ == 'numpy': # convenience for passing in numpy
from numpy import random as rng
else: rng = module
_rng = getattr(rng, 'RandomState', None) or \
getattr(rng, 'Random') # throw error if no rng found
if new:
rng = _rng()
if seed == '!': # special case: don't reset the seed
return rng
if seed == '*': # special case: random seeding for multiprocessing
try:
try:
import multiprocessing as mp
except ImportError:
import processing as mp
try:
seed = mp.current_process().pid
except AttributeError:
seed = mp.currentProcess().getPid()
except:
seed = 0
import time
seed += int(time.time()*1e6)
# set the random seed (or 'reset' with None)
rng.seed(seed)
return rng
# EOF
"""
Explanation: Issues: random number generators
End of explanation
"""
import multiprocess
print multiprocess.Pool().map(lambda x:x**2, range(10))
"""
Explanation: Issues: serialization
Better serialization: multiprocess
End of explanation
"""
%%file runppft.py
'''demonstrate ppft
'''
import ppft
def squared(x):
return x*x
server = ppft.Server() # can take 'localhost:8000' or remote:port
result = server.submit(squared, (5,))
result.wait()
print result.finished
print result()
!python2.7 runppft.py
"""
Explanation: EXERCISE: << Either the mystic multi-solve or one of the pathos tests or with rng >>
Code-based versus object-based serialization: pp(ft)
End of explanation
"""
%%file allpool.py
'''demonstrate pool API
'''
import pathos
def sum_squared(x,y):
return (x+y)**2
x = range(5)
y = range(0,10,2)
if __name__ == '__main__':
sp = pathos.pools.SerialPool()
pp = pathos.pools.ParallelPool()
mp = pathos.pools.ProcessPool()
tp = pathos.pools.ThreadPool()
for pool in [sp,pp,mp,tp]:
print pool.map(sum_squared, x, y)
pool.close()
pool.join()
!python2.7 allpool.py
"""
Explanation: Programming efficiency: pathos
Multi-argument map functions
Unified API for threading, multiprocessing, and serial and parallel python (pp)
End of explanation
"""
from itertools import izip
PRIMES = [
112272535095293,
112582705942171,
112272535095293,
115280095190773,
115797848077099,
1099726899285419]
def is_prime(n):
if n % 2 == 0:
return False
import math
sqrt_n = int(math.floor(math.sqrt(n)))
for i in range(3, sqrt_n + 1, 2):
if n % i == 0:
return False
return True
def sleep_add1(x):
from time import sleep
if x < 4: sleep(x/10.0)
return x+1
def sleep_add2(x):
from time import sleep
if x < 4: sleep(x/10.0)
return x+2
def test_with_multipool(Pool):
inputs = range(10)
with Pool() as pool1:
res1 = pool1.amap(sleep_add1, inputs)
with Pool() as pool2:
res2 = pool2.amap(sleep_add2, inputs)
with Pool() as pool3:
for number, prime in izip(PRIMES, pool3.imap(is_prime, PRIMES)):
assert prime if number != PRIMES[-1] else not prime
assert res1.get() == [i+1 for i in inputs]
assert res2.get() == [i+2 for i in inputs]
print "OK"
if __name__ == '__main__':
from pathos.pools import ProcessPool
test_with_multipool(ProcessPool)
"""
Explanation: Strives for natural programming constructs in parallel code
End of explanation
"""
import pathos
from math import sin, cos
if __name__ == '__main__':
mp = pathos.pools.ProcessPool()
tp = pathos.pools.ThreadPool()
print mp.amap(tp.map, [sin, cos], [range(3),range(3)]).get()
mp.close(); tp.close()
mp.join(); tp.join()
"""
Explanation: Programming models and hierarchical computing
End of explanation
"""
import pathos
import sys
rhost = 'localhost'
rport = 23
if __name__ == '__main__':
tunnel = pathos.secure.Tunnel()
lport = tunnel.connect(rhost, rport)
print 'SSH Tunnel to:', rhost
print 'Remote port:', rport
print 'Local port:', lport
print 'Press <Enter> to disconnect'
sys.stdin.readline()
tunnel.disconnect()
"""
Explanation: Pool caching
Not covered: IPython.parallel and scoop
EXERCISE: Let's take another swing at Monte Carlo betting. You'll want to focus on roll.py, trials.py and optimize.py. Can you speed things up with careful placement of a Pool? Are there small modifications to the code that would allow hierarchical parallelism? Can we speed up the calculation, or does parallel computing lose to spin-up overhead? Where are we now hitting the wall?
See: 'solution'
Remote execution
Easy: the pp.Server
Even easier: Pool().server in pathos
Not covered: rpyc, pyro, and zmq
Related: secure authentication with ssh
pathos.secure: connection and tunnel
End of explanation
"""
|
LSSTC-DSFP/LSSTC-DSFP-Sessions | Sessions/Session14/Day1/SeparatingStarsAndGalaxies.ipynb | mit | import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: An Astronomical Application of Machine Learning:
Separating Stars and Galaxies from SDSS
Version 0.3
By AA Miller 2017 Jan 22
AA Miller 2022 Mar 06 (v0.03)
The problems in the following notebook develop an end-to-end machine learning model using actual astronomical data to separate stars and galaxies. There are 5 steps in this machine learning workflow:
Data Preparation
Model Building
Model Evaluation
Model Optimization
Model Predictions
The data come from the Sloan Digital Sky Survey (SDSS), an imaging survey that has several similarities to LSST (though the telescope was significantly smaller and the survey did not cover as large an area).
Science background: Many (nearly all?) of the science applications for LSST data will rely on the accurate separation of stars and galaxies in the LSST imaging data. As an example, imagine measuring the structure of the Milky Way without knowing which sources are galaxies and which are stars.
During this exercise, we will utilize supervised machine learning methods to separate extended sources (galaxies) and point sources (stars) in imaging data. These methods are highly flexible, and as a result can classify sources at higher fidelity than methods that simply make cuts in a low-dimensional space.
End of explanation
"""
sdss_df = pd.read_hdf("sdss_training_set.h5")
sns.pairplot(sdss_df, hue = 'class', diag_kind = 'hist')
"""
Explanation: Problem 1) Examine the Training Data
For this problem the training set, i.e. sources with known labels, includes stars and galaxies that have been confirmed with spectroscopic observations. The machine learning model is needed because there are $\gg 10^8$ sources with photometric observations in SDSS, and only $4 \times 10^6$ sources with spectroscopic observations. The model will allow us to translate our knowledge from the spectroscopic observations to the entire data set. The features include each $r$-band magnitude measurement made by SDSS (don't worry if you don't know what this means...). This yields 8 features to train the models (significantly fewer than the 454 properties measured for each source in SDSS).
If you are curious (and it is fine if you are not) this training set was constructed by running the following query on the SDSS database:
SELECT TOP 20000
p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r,
p.deVMag_r, p.expMag_r, p.modelMag_r, p.cModelMag_r,
s.class
FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid
WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND s.class != 'QSO'
ORDER BY p.objid ASC
First download the training set and the blind test set for this problem.
Problem 1a
Visualize the training set data. The data have 8 features ['psfMag_r', 'fiberMag_r', 'fiber2Mag_r', 'petroMag_r', 'deVMag_r', 'expMag_r', 'modelMag_r', 'cModelMag_r'], and a 9th column ['class'] corresponding to the labels ('STAR' or 'GALAXY' in this case).
Hint - just execute the cell below.
End of explanation
"""
from sklearn.model_selection import train_test_split
rs = 1851
# complete
X = # complete
y = # complete
train_X, test_X, train_y, test_y = # complete
"""
Explanation: Problem 1b
Based on your plots of the data, which feature do you think will be the most important for separating stars and galaxies? Why?
write your answer here - do not change it after later completing the problem
The final data preparation step is to create an independent test set to evaluate the generalization error of the final tuned model. Independent test sets are generated by withholding a fraction of the training set. No hard and fast rules apply for the fraction to be withheld, though typical choices vary between $\sim{0.2}-0.5$.
sklearn.model_selection has a useful helper function train_test_split.
Problem 1c Split the 20k spectroscopic sources 70-30 into training and test sets. Save the results in arrays called: train_X, train_y, test_X, test_y, respectively. Use rs for the random_state in train_test_split.
Hint - recall that sklearn utilizes X, a 2D np.array(), and y as the features and labels arrays, respectively.
End of explanation
"""
from sklearn.neighbors import KNeighborsClassifier
knn_clf = # complete
# complete
"""
Explanation: We will now ignore everything in the test set until we have fully optimized the machine learning model.
Problem 2) Model Building
After curating the data, you must select a specific machine learning algorithm. With experience, it is possible to develop intuition for the best ML algorithm given a specific problem.
Short of that? Try two (or three, or four, or five) different models and choose whichever works the best.
Problem 2a
Train a $k$-nearest neighbors model on the star-galaxy training set. Select $k$ = 25 for this model.
Hint - the KNeighborsClassifier object in the sklearn.neighbors module may be useful for this task.
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
rf_clf = # complete
# complete
"""
Explanation: Problem 2b
Train a Random Forest (RF) model (Breiman 2001) on the training set. Include 50 trees in the forest using the n_estimators parameter. Again, set random_state = rs.
Hint - use the RandomForestClassifier object from the sklearn.ensemble module. Also - be sure to set n_jobs = -1 in every call of RandomForestClassifier.
End of explanation
"""
feat_str = ',\n'.join(['{}'.format(feat) for feat in np.array(feats)[np.argsort(rf_clf.feature_importances_)[::-1]]])
print('From most to least important: \n{}'.format(feat_str))
"""
Explanation: A nice property of RF, relative to $k$NN, is that RF naturally provides an estimate of the most important features in a model.
RF feature importance is measured by randomly shuffling the values of a particular feature, and measuring the decrease in the model's overall accuracy. The relative feature importances can be accessed using the .feature_importances_ attribute associated with the RandomForestClassifer() object. The higher the value, the more important the feature.
Problem 2c
Calculate the relative importance of each feature.
Which feature is most important? Does this match your answer from 1c?
End of explanation
"""
from sklearn.metrics import accuracy_score
phot_y = # complete
# complete
# complete
# complete
print("The baseline FoM = {:.4f}".format( # complete
"""
Explanation: write your answer here
Problem 3) Model Evaluation
To evaluate the performance of the model we establish a baseline (or figure of merit) that we would like to exceed. For our current application we want to maximize the accuracy of the model.
If the model does not improve upon the baseline (or reach the desired figure of merit) then one must iterate on previous steps (feature engineering, algorithm selection, etc) to accomplish the desired goal.
The SDSS photometric pipeline uses a simple parametric model to classify sources as either stars or galaxies. If we are going to the trouble of building a complex ML model, then it stands to reason that its performance should exceed that of the simple model. Thus, we adopt the SDSS photometric classifier as our baseline.
The SDSS photometric classifier uses a single hard cut to separate stars and galaxies in imaging data:
$$\mathtt{psfMag_r} - \mathtt{cModelMag_r} > 0.145.$$
Sources that satisfy this criteria are considered galaxies.
Problem 3a
Determine the baseline figure of merit by measuring the accuracy of the SDSS photometric classifier on the training set.
Hint - the accuracy_score function in the sklearn.metrics module may be useful.
End of explanation
"""
from sklearn.model_selection import cross_val_score
knn_cv = cross_val_score( # complete
print('The kNN model FoM = {:.4f} +/- {:.4f}'.format( # complete
"""
Explanation: Problem 3b
Use 10-fold cross validation to estimate the FoM for the $k$NN model. Take the mean value across all folds as the FoM estimate.
Hint - the cross_val_score function from the sklearn.model_selection module performs the necessary calculations.
End of explanation
"""
rf_cv = cross_val_score( # complete
print('The RF model FoM = {:.4f} +/- {:.4f}'.format( # complete
"""
Explanation: Problem 3c
Use 10-fold cross validation to estimate the FoM for the random forest model.
End of explanation
"""
for k in [1,10,100]:
# complete
print('With k = {:d}, the kNN FoM = {:.4f} +/- {:.4f}'.format( # complete
"""
Explanation: Problem 3d
Do the machine-learning models outperform the SDSS photometric classifier?
write your answer here
Problem 4) Model Optimization
While the "off-the-shelf" model provides an improvement over the SDSS photometric classifier, we can further refine and improve the performance of the machine learning model by adjusting the model tuning parameters. A process known as model optimization.
All machine-learning models have tuning parameters. In brief, these parameters capture the smoothness of the model in the multidimensional feature space. Whether the model is smooth or coarse is application dependent -- be wary of over-fitting or under-fitting the data. Generally speaking, RF (and most tree-based methods) have 3 flavors of tuning parameter:
$N_\mathrm{tree}$ - the number of trees in the forest n_estimators (default: 10) in sklearn
$m_\mathrm{try}$ - the number of (random) features to explore as splitting criteria at each node max_features (default: sqrt(n_features)) in sklearn
Pruning criteria - defined stopping criteria for ending continued growth of the tree, there are many choices for this in sklearn (My preference is min_samples_leaf (default: 1) which sets the minimum number of sources allowed in a terminal node, or leaf, of the tree)
Just as we previously evaluated the model using CV, we must optimize the tuning parameters via CV. Until we "finalize" the model by fixing all the input parameters, we cannot evaluate the accuracy of the model with the test set as that would be "snooping."
Before globally optimizing the model, let's develop some intuition for how the tuning parameters affect the final model predictions.
Problem 4a
Determine the 10-fold cross validation accuracy for $k$NN models with $k$ = 1, 10, 100.
How do you expect changing the number of neighbors to affect the results?
End of explanation
"""
for ntree in [1,10,30,100,300]:
# complete
print('With {:d} trees the FoM = {:.4f} +/- {:.4f}'.format( # complete
"""
Explanation: write your answer here
Problem 4b
Determine the 10-fold cross validation accuracy for RF models with $N_\mathrm{tree}$ = 1, 10, 30, 100, and 300.
How do you expect changing the number of trees to affect the results?
End of explanation
"""
phot_y = # complete
# complete
# complete
# complete
print("The baseline FoM = {:.4f}".format( # complete
"""
Explanation: write your answer here
Now you are ready for the moment of truth!
Problem 5) Model Predictions
Problem 5a
Calculate the FoM for the SDSS photometric model on the test set.
End of explanation
"""
rf_clf = RandomForestClassifier( # complete
# complete
# complete
print("The RF model has FoM = {:.4f}".format( # complete
"""
Explanation: Problem 5b
Using the optimal number of trees from 4b calculate the FoM for the random forest model.
Hint - remember that the model should be trained on the training set, but the predictions are for the test set.
End of explanation
"""
from sklearn.metrics import confusion_matrix
print(confusion_matrix( # complete
"""
Explanation: Problem 5c
Calculate the confusion matrix for the test set. Is there symmetry to the misclassifications?
Hint - the confusion_matrix function in sklearn.metrics will help.
End of explanation
"""
from sklearn.metrics import roc_curve
test_y_int = # complete
# complete
test_preds_proba = rf_clf.predict_proba( # complete
fpr, tpr, thresh = roc_curve( # complete
fig, ax = plt.subplots()
ax.plot( # complete
ax.set_xlabel('FPR')
ax.set_ylabel('TPR')
"""
Explanation: write your answer here
Problem 5d
Calculate (and plot the region of interest) the ROC curve assuming that stars are the positive class.
Hint 1 - you will need to calculate probabilistic classifications for the test set using the predict_proba() method.
Hint 2 - the roc_curve function in the sklearn.metrics module will be useful.
End of explanation
"""
tpr_99_thresh = # complete
print('This model requires a classification threshold of {:.4f}'.format(tpr_99_thresh))
fpr_at_tpr_99 = # complete
print('This model misclassifies {:.2f}% of galaxies'.format(fpr_at_tpr_99*100))
"""
Explanation: Problem 5e
Suppose that (like me) you really care about supernovae. In this case you want a model that correctly classifies 99% of all stars, so that stellar flares do not fool you into thinking you have found a new supernova.
What classification threshold should be adopted for this model?
What fraction of galaxies does this model misclassify?
End of explanation
"""
new_data_df = pd.read_hdf("blind_test_set.h5")
"""
Explanation: Problem 6) Classify New Data
Run the cell below to load in some new data (which in this case happens to have known labels, but in practice this will almost never be the case...)
End of explanation
"""
new_X = # complete
new_y = # complete
"""
Explanation: Problem 6a
Create a feature and label array for the new data.
Hint - copy the code you developed above in Problem 2.
End of explanation
"""
new_preds = # complete
print("The model has an accuracy of {:.4f}".format( # complete
"""
Explanation: Problem 6b
Calculate the accuracy of the model predictions on the new data.
End of explanation
"""
from sklearn.model_selection import GridSearchCV
# complete
print('The best model has {}'.format( # complete
"""
Explanation: Problem 6c
Can you explain why the accuracy for the new data is significantly lower than what you calculated previously?
If you can build and train a better model (using the training data) for classifying the new data - I will be extremely impressed.
write your answer here
Challenge Problem) Full RF Optimization
Now we will optimize the model over all tuning parameters. How does one actually determine the optimal set of tuning parameters? Brute force.
We will optimize the model via a grid search that performs CV at each point in the 3D grid. The final model will adopt the point with the highest accuracy.
It is important to remember two general rules of thumb: (i) if the model is optimized at the edge of the grid, refit a new grid centered on that point, and (ii) the results should be stable in the vicinity of the grid maximum. If this is not the case the model is likely overfit.
Use GridSearchCV to perform a 3-fold CV grid search to optimize the RF star-galaxy model. Remember the rules of thumb.
What are the optimal tuning parameters for the model?
Hint 1 - think about the computational runtime based on the number of points in the grid. Do not start with a very dense or large grid.
Hint 2 - if the runtime is long, don't repeat the grid search even if the optimal model is on an edge of the grid
End of explanation
"""
|
gojomo/gensim | docs/notebooks/FastText_Tutorial.ipynb | lgpl-2.1 | from gensim.models.fasttext import FastText as FT_gensim
from gensim.test.utils import datapath
# Set file names for train and test data
corpus_file = datapath('lee_background.cor')
model_gensim = FT_gensim(size=100)
# build the vocabulary
model_gensim.build_vocab(corpus_file=corpus_file)
# train the model
model_gensim.train(
corpus_file=corpus_file, epochs=model_gensim.epochs,
total_examples=model_gensim.corpus_count, total_words=model_gensim.corpus_total_words
)
print(model_gensim)
"""
Explanation: Using FastText via Gensim
This tutorial is about using the fastText model in Gensim. There are two ways you can use fastText in Gensim - Gensim's native implementation of fastText and a Gensim wrapper for fastText's original C++ code. Here, we'll learn to work with the fastText library for training word-embedding models, saving & loading them and performing similarity operations & vector lookups analogous to Word2Vec.
When to use FastText?
The main principle behind fastText is that the morphological structure of a word carries important information about the meaning of the word, which is not taken into account by traditional word embeddings, which train a unique word embedding for every individual word. This is especially significant for morphologically rich languages (German, Turkish) in which a single word can have a large number of morphological forms, each of which might occur rarely, thus making it hard to train good word embeddings.
fastText attempts to solve this by treating each word as the aggregation of its subwords. For the sake of simplicity and language-independence, subwords are taken to be the character ngrams of the word. The vector for a word is simply taken to be the sum of all vectors of its component char-ngrams.
According to a detailed comparison of Word2Vec and FastText in this notebook, fastText does significantly better on syntactic tasks as compared to the original Word2Vec, especially when the size of the training corpus is small. Word2Vec slightly outperforms FastText on semantic tasks though. The differences grow smaller as the size of training corpus increases.
Training time for fastText is significantly higher than the Gensim version of Word2Vec (15min 42s vs 6min 42s on text8, 17 mil tokens, 5 epochs, and a vector size of 100).
fastText can be used to obtain vectors for out-of-vocabulary (OOV) words, by summing up vectors for its component char-ngrams, provided at least one of the char-ngrams was present in the training data.
Training models
For the following examples, we'll use the Lee Corpus (which you already have if you've installed gensim) for training our model.
For using the wrapper for fastText, you need to have fastText setup locally to be able to train models. See installation instructions for fastText if you don't have fastText installed already.
Using Gensim's implementation of fastText
End of explanation
"""
from gensim.models.wrappers.fasttext import FastText as FT_wrapper
# Set FastText home to the path to the FastText executable
ft_home = '/home/misha/src/fastText-0.1.0/fasttext'
# train the model
model_wrapper = FT_wrapper.train(ft_home, corpus_file)
print(model_wrapper)
"""
Explanation: Using wrapper for fastText's C++ code
End of explanation
"""
# saving a model trained via Gensim's fastText implementation
model_gensim.save('saved_model_gensim')
loaded_model = FT_gensim.load('saved_model_gensim')
print(loaded_model)
# saving a model trained via fastText wrapper
model_wrapper.save('saved_model_wrapper')
loaded_model = FT_wrapper.load('saved_model_wrapper')
print(loaded_model)
"""
Explanation: Training hyperparameters
Hyperparameters for training the model follow the same pattern as Word2Vec. FastText supports the following parameters from the original word2vec -
- model: Training architecture. Allowed values: cbow, skipgram (Default cbow)
- size: Size of embeddings to be learnt (Default 100)
- alpha: Initial learning rate (Default 0.025)
- window: Context window size (Default 5)
- min_count: Ignore words with number of occurrences below this (Default 5)
- loss: Training objective. Allowed values: ns, hs, softmax (Default ns)
- sample: Threshold for downsampling higher-frequency words (Default 0.001)
- negative: Number of negative words to sample, for ns (Default 5)
- iter: Number of epochs (Default 5)
- sorted_vocab: Sort vocab by descending frequency (Default 1)
- threads: Number of threads to use (Default 12)
In addition, FastText has three additional parameters -
- min_n: min length of char ngrams (Default 3)
- max_n: max length of char ngrams (Default 6)
- bucket: number of buckets used for hashing ngrams (Default 2000000)
Parameters min_n and max_n control the lengths of character ngrams that each word is broken down into while training and looking up embeddings. If max_n is set to 0, or to a value less than min_n, no character ngrams are used, and the model effectively reduces to Word2Vec.
To bound the memory requirements of the model being trained, a hashing function is used that maps ngrams to integers in 1 to K. For hashing these character sequences, the Fowler-Noll-Vo hashing function (FNV-1a variant) is employed.
Note: As in the case of Word2Vec, you can continue to train your model while using Gensim's native implementation of fastText.
Saving/loading models
Models can be saved and loaded via the load and save methods.
End of explanation
"""
print('night' in model_wrapper.wv.vocab)
print('nights' in model_wrapper.wv.vocab)
print(model_wrapper['night'])
print(model_wrapper['nights'])
"""
Explanation: The save_word2vec_format method causes the vectors for ngrams to be lost. As a result, a model loaded in this way will behave as a regular word2vec model.
Word vector lookup
Note: Operations like word vector lookups and similarity queries can be performed in exactly the same manner for both the implementations of fastText so they have been demonstrated using only the fastText wrapper here.
FastText models support vector lookups for out-of-vocabulary words by summing up character ngrams belonging to the word.
End of explanation
"""
# Raises a KeyError since none of the character ngrams of the word `axe` are present in the training data
try:
model_wrapper['axe']
except KeyError:
#
# trap the error here so it does not interfere
# with the execution of the cells below
#
pass
else:
assert False, 'the above code should have raised a KeyError'
"""
Explanation: The word vector lookup operation only works if at least one of the component character ngrams is present in the training corpus. For example -
End of explanation
"""
# Tests if word present in vocab
print("word" in model_wrapper.wv.vocab)
# Tests if vector present for word
print("word" in model_wrapper)
"""
Explanation: The in operation works slightly differently from the original word2vec. It tests whether a vector for the given word exists or not, not whether the word is present in the word vocabulary. To test whether a word is present in the training word vocabulary -
End of explanation
"""
print("nights" in model_wrapper.wv.vocab)
print("night" in model_wrapper.wv.vocab)
model_wrapper.similarity("night", "nights")
"""
Explanation: Similarity operations
Similarity operations work the same way as word2vec. Out-of-vocabulary words can also be used, provided they have at least one character ngram present in the training data.
End of explanation
"""
# The example training corpus is a toy corpus, results are not expected to be good, for proof-of-concept only
model_wrapper.most_similar("nights")
model_wrapper.n_similarity(['sushi', 'shop'], ['japanese', 'restaurant'])
model_wrapper.doesnt_match("breakfast cereal dinner lunch".split())
model_wrapper.most_similar(positive=['baghdad', 'england'], negative=['london'])
model_wrapper.accuracy(questions=datapath('questions-words.txt'))
# Word Movers distance
sentence_obama = 'Obama speaks to the media in Illinois'.lower().split()
sentence_president = 'The president greets the press in Chicago'.lower().split()
# Remove their stopwords.
from nltk.corpus import stopwords
stopwords = stopwords.words('english')
sentence_obama = [w for w in sentence_obama if w not in stopwords]
sentence_president = [w for w in sentence_president if w not in stopwords]
# Compute WMD.
distance = model_wrapper.wmdistance(sentence_obama, sentence_president)
distance
"""
Explanation: Syntactically similar words generally have high similarity in fastText models, since a large number of the component char-ngrams will be the same. As a result, fastText generally does better at syntactic tasks than Word2Vec. A detailed comparison is provided here.
Other similarity operations
End of explanation
"""
|
mkcor/csv-cleanup | load_and_cleanup.ipynb | cc0-1.0 | import pandas as pd
"""
Explanation: Reading the data
Whenever I explore/analyze data, the first thing I always do is:
End of explanation
"""
pd.__version__
"""
Explanation: For information / as a reminder,
End of explanation
"""
pd.read_csv('data/enfants.csv')
"""
Explanation: To read a CSV file, we use the aptly named function...
End of explanation
"""
pd.read_csv('data/enfants.csv', sep=';')
"""
Explanation: Ah, yes, I figured your Excel sheet would be exported with ';' as the separator character, since some fields are likely to contain commas...
End of explanation
"""
pd.read_csv('data/enfants.csv', sep=';', na_values=99)
"""
Explanation: Good. It looks like unknown/missing values are marked with '99'. So let's specify that in the options of the wonderful read_csv() function.
End of explanation
"""
enfants = pd.read_csv('data/enfants.csv', sep=';', na_values='99')
"""
Explanation: We will give this data frame (a very handy data structure) a name: enfants.
End of explanation
"""
enfants['garde']
"""
Explanation: We can access the 'garde' data with the following (intuitive) syntax:
End of explanation
"""
enfants.drop_duplicates(subset=['prénom', 'nom'])
"""
Explanation: or enfants.loc[:, 'garde'].
Removing duplicates
Note that we have two entries for Toto Le Magnifique. If we only want to keep one entry (row) per child, we can use the drop_duplicates() method.
End of explanation
"""
enfants.groupby(by=['prénom', 'nom'])['garde'].sum()
"""
Explanation: By default, the first entry is the one that is kept (see the documentation). We then lose the information contained in the other entries. We would rather group them.
End of explanation
"""
enfants.groupby(by=['prénom', 'nom'])['garde'].apply(lambda x: '%s' % ', '.join(x.astype(str)))
"""
Explanation: By default, sum() concatenates. For better readability, we may want to apply a homemade function.
End of explanation
"""
def groupe_garde(x):
return pd.Series(dict(age = x['âge'].mean(), garde_complete = '%s' % ', '.join(x['garde'].astype(str))))
enfants.groupby(by=['prénom', 'nom']).apply(groupe_garde)
"""
Explanation: Let's write a function (to apply to the data frame).
End of explanation
"""
enfants.groupby(by=['prénom', 'nom']).apply(groupe_garde).to_csv('results/enfants_cleanup.csv', sep=';', na_rep='nan')
"""
Explanation: Writing out the result
End of explanation
"""
pd.read_csv('results/enfants_cleanup.csv', sep=';', na_values='nan')
"""
Explanation: Other manipulations
What if we wanted to continue...
End of explanation
"""
|
cstrelioff/ARM-ipynb | Chapter3/chptr3.2.ipynb | mit | from __future__ import print_function, division
%matplotlib inline
import matplotlib
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# use matplotlib style sheet
plt.style.use('ggplot')
# import statsmodels for R-style regression
import statsmodels.formula.api as smf
"""
Explanation: 3.2: Multiple predictors
End of explanation
"""
kidiq = pd.read_stata("../../ARM_Data/child.iq/kidiq.dta")
kidiq.head()
"""
Explanation: Read the data
Data are in the child.iq directory of the ARM_Data download-- you might have
to change the path I use below to reflect the path on your computer.
End of explanation
"""
fit = smf.ols('kid_score ~ mom_hs + mom_iq', data=kidiq).fit()
print(fit.summary())
"""
Explanation: Regression -- multiple predictors, Pg 33
End of explanation
"""
fig, ax = plt.subplots(figsize=(8, 6))
iq_linspace = np.linspace(kidiq['mom_iq'].min(), kidiq['mom_iq'].max(), 50)
# default color cycle ('axes.color_cycle' was removed in newer matplotlib;
# 'axes.prop_cycle' is its replacement)
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
# mom_hs == 0
hs0 = (kidiq['mom_hs'] == 0)
plt.scatter(kidiq[hs0]['mom_iq'], kidiq[hs0]['kid_score'],
s=60, alpha=0.5, c=colors[0])
# mom_hs == 1
hs1 = (kidiq['mom_hs'] == 1)
plt.scatter(kidiq[hs1]['mom_iq'], kidiq[hs1]['kid_score'],
s=60, alpha=0.5, c=colors[1])
# add fits
# mom_hs == 0
plt.plot(iq_linspace, fit.params[0] + fit.params[1] * 0. + fit.params[2] * iq_linspace,
lw=3, c=colors[0])
# mom_hs == 1
plt.plot(iq_linspace, fit.params[0] + fit.params[1] * 1. + fit.params[2] * iq_linspace,
lw=3, c=colors[1])
plt.xlabel("Mother IQ score")
plt.ylabel("Child test score")
"""
Explanation: Figure 3.3, Pg 33
End of explanation
"""
|
susantabiswas/Natural-Language-Processing | Notebooks/Word_Prediction_using_Quadgrams_Memory_Efficient.ipynb | mit | #import the modules necessary
from nltk.util import ngrams
from collections import defaultdict
import nltk
import string
import time
start_time = time.time()
"""
Explanation: Word prediction based on Quadgram
This program reads the corpus line by line, so it is slower than the version that reads the
corpus in one go, but it only loads one line into memory at a time.
Import corpus
End of explanation
"""
#returns: string
#arg: string
#remove punctuations and make the string lowercase
def removePunctuations(sen):
#split the string into word tokens
temp_l = sen.split()
i = 0
#changes the word to lowercase and removes punctuations from it
for word in temp_l :
for l in word :
if l in string.punctuation:
word = word.replace(l," ")
temp_l[i] = word.lower()
i=i+1
#splitting is done here because a sentence like "here---so" should,
#after punctuation removal, become "here so"
content = " ".join(temp_l)
return content
"""
Explanation: Do preprocessing:
Remove the punctuations and lowercase the tokens
End of explanation
"""
#returns : void
#arg: string,dict,dict,dict
#loads the corpus for the dataset and makes the frequency count of quadgram and trigram strings
def loadCorpus(file_path,tri_dict,quad_dict,vocab_dict):
w1 = '' #for storing the 3rd last word to be used for next token set
w2 = '' #for storing the 2nd last word to be used for next token set
w3 = '' #for storing the last word to be used for next token set
token = []
#open the corpus file and read it line by line
with open(file_path,'r') as file:
for line in file:
#split the line into tokens
token = line.split()
i = 0
#for each word in the token list, remove punctuations and change to lowercase
for word in token :
for l in word :
if l in string.punctuation:
word = word.replace(l," ")
token[i] = word.lower()
i=i+1
#make the token list into a string
content = " ".join(token)
token = content.split()
#word_len = word_len + len(token)
if not token:
continue
#since we are reading line by line some combinations of word might get missed for pairing
#for trigram
#first add the previous words
if w2!= '':
token.insert(0,w2)
if w3!= '':
token.insert(1,w3)
#tokens for trigrams
temp1 = list(ngrams(token,3))
#insert the 3rd last word from previous line for quadgram pairing
if w1!= '':
token.insert(0,w1)
#add new unique words to the vocabulary set if available
for word in token:
if word not in vocab_dict:
vocab_dict[word] = 1
else:
vocab_dict[word]+= 1
#tokens for quadgrams
temp2 = list(ngrams(token,4))
#count the frequency of the trigram sentences
for t in temp1:
sen = ' '.join(t)
tri_dict[sen] += 1
#count the frequency of the quadgram sentences
for t in temp2:
sen = ' '.join(t)
quad_dict[sen] += 1
#then take out the last 3 words
n = len(token)
#store the last few words for the next sentence pairing
w1 = token[n -3]
w2 = token[n -2]
w3 = token[n -1]
"""
Explanation: Tokenize and load the corpus data
End of explanation
"""
#returns : float
#arg : string sentence,string word,dict,dict
def findprobability(s,w,tri_dict,quad_dict):
c1 = 0 # for count of sentence 's' with word 'w'
c2 = 0 # for count of sentence 's'
s1 = s + ' ' + w
if s1 in quad_dict:
c1 = quad_dict[s1]
if s in tri_dict:
c2 = tri_dict[s]
if c2 == 0:
return 0
return c1/c2
"""
Explanation: Find the probability
End of explanation
"""
def doPrediction(sen,tri_dict,quad_dict,vocab_dict):
sen = removePunctuations(sen)
max_prob = 0
#when there is no probable word available
#now for guessing the word which should exist we use quadgram
right_word = 'apple'
for word in vocab_dict:
prob = findprobability(sen,word,tri_dict,quad_dict)
if prob > max_prob:
max_prob = prob
right_word = word
print('Word Prediction is :',right_word)
"""
Explanation: Driver function for doing the prediction
End of explanation
"""
def main():
#variable declaration
tri_dict = defaultdict(int) #for keeping count of sentences of three words
quad_dict = defaultdict(int) #for keeping count of sentences of three words
vocab_dict = defaultdict(int) #for storing the different words with their frequencies
#load the corpus for the dataset
loadCorpus('corpusfile.txt',tri_dict,quad_dict,vocab_dict)
print("---Preprocessing Time: %s seconds ---" % (time.time() - start_time))
cond = False
#take input
while(cond == False):
sen = input('Enter the string\n')
sen = removePunctuations(sen)
temp = sen.split()
if len(temp) < 3:
print("Please enter atleast 3 words !")
else:
cond = True
temp = temp[-3:]
sen = " ".join(temp)
start_time1 = time.time()
doPrediction(sen,tri_dict,quad_dict,vocab_dict)
print("---Time for Prediction Operation: %s seconds ---" % (time.time() - start_time1))
if __name__ == '__main__':
main()
"""
Explanation: main function
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/text_classification/labs/rnn.ipynb | apache-2.0 | import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
"""
Explanation: Recurrent Neural Networks (RNN) with Keras
Learning Objectives
Add built-in RNN layers.
Build bidirectional RNNs.
Using CuDNN kernels when available.
Build a RNN model with nested input/output.
Introduction
Recurrent neural networks (RNN) are a class of neural networks that is powerful for
modeling sequence data such as time series or natural language.
Schematically, a RNN layer uses a for loop to iterate over the timesteps of a
sequence, while maintaining an internal state that encodes information about the
timesteps it has seen so far.
The Keras RNN API is designed with a focus on:
Ease of use: the built-in keras.layers.RNN, keras.layers.LSTM,
keras.layers.GRU layers enable you to quickly build recurrent models without
having to make difficult configuration choices.
Ease of customization: You can also define your own RNN cell layer (the inner
part of the for loop) with custom behavior, and use it with the generic
keras.layers.RNN layer (the for loop itself). This allows you to quickly
prototype different research ideas in a flexible way with minimal code.
Each learning objective will correspond to a #TODO in the notebook where you will complete the notebook cell's code before running. Refer to the solution for reference.
Setup
End of explanation
"""
model = keras.Sequential()
# Add an Embedding layer expecting input vocab of size 1000, and
# output embedding dimension of size 64.
model.add(layers.Embedding(input_dim=1000, output_dim=64))
# Add a LSTM layer with 128 internal units.
# TODO -- your code goes here
# Add a Dense layer with 10 units.
# TODO -- your code goes here
model.summary()
"""
Explanation: Built-in RNN layers: a simple example
There are three built-in RNN layers in Keras:
keras.layers.SimpleRNN, a fully-connected RNN where the output from previous
timestep is to be fed to next timestep.
keras.layers.GRU, first proposed in
Cho et al., 2014.
keras.layers.LSTM, first proposed in
Hochreiter & Schmidhuber, 1997.
In early 2015, Keras had the first reusable open-source Python implementations of LSTM
and GRU.
Here is a simple example of a Sequential model that processes sequences of integers,
embeds each integer into a 64-dimensional vector, then processes the sequence of
vectors using a LSTM layer.
End of explanation
"""
model = keras.Sequential()
model.add(layers.Embedding(input_dim=1000, output_dim=64))
# The output of GRU will be a 3D tensor of shape (batch_size, timesteps, 256)
model.add(layers.GRU(256, return_sequences=True))
# The output of SimpleRNN will be a 2D tensor of shape (batch_size, 128)
model.add(layers.SimpleRNN(128))
model.add(layers.Dense(10))
model.summary()
"""
Explanation: Built-in RNNs support a number of useful features:
Recurrent dropout, via the dropout and recurrent_dropout arguments
Ability to process an input sequence in reverse, via the go_backwards argument
Loop unrolling (which can lead to a large speedup when processing short sequences on
CPU), via the unroll argument
...and more.
For more information, see the
RNN API documentation.
Outputs and states
By default, the output of a RNN layer contains a single vector per sample. This vector
is the RNN cell output corresponding to the last timestep, containing information
about the entire input sequence. The shape of this output is (batch_size, units)
where units corresponds to the units argument passed to the layer's constructor.
A RNN layer can also return the entire sequence of outputs for each sample (one vector
per timestep per sample), if you set return_sequences=True. The shape of this output
is (batch_size, timesteps, units).
End of explanation
"""
encoder_vocab = 1000
decoder_vocab = 2000
encoder_input = layers.Input(shape=(None,))
encoder_embedded = layers.Embedding(input_dim=encoder_vocab, output_dim=64)(
encoder_input
)
# Return states in addition to output
output, state_h, state_c = layers.LSTM(64, return_state=True, name="encoder")(
encoder_embedded
)
encoder_state = [state_h, state_c]
decoder_input = layers.Input(shape=(None,))
decoder_embedded = layers.Embedding(input_dim=decoder_vocab, output_dim=64)(
decoder_input
)
# Pass the 2 states to a new LSTM layer, as initial state
decoder_output = layers.LSTM(64, name="decoder")(
decoder_embedded, initial_state=encoder_state
)
output = layers.Dense(10)(decoder_output)
model = keras.Model([encoder_input, decoder_input], output)
model.summary()
"""
Explanation: In addition, a RNN layer can return its final internal state(s). The returned states
can be used to resume the RNN execution later, or
to initialize another RNN.
This setting is commonly used in the
encoder-decoder sequence-to-sequence model, where the encoder final state is used as
the initial state of the decoder.
To configure a RNN layer to return its internal state, set the return_state parameter
to True when creating the layer. Note that LSTM has 2 state tensors, but GRU
only has one.
To configure the initial state of the layer, just call the layer with additional
keyword argument initial_state.
Note that the shape of the state needs to match the unit size of the layer, like in the
example below.
End of explanation
"""
paragraph1 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph2 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph3 = np.random.random((20, 10, 50)).astype(np.float32)
lstm_layer = layers.LSTM(64, stateful=True)
output = lstm_layer(paragraph1)
output = lstm_layer(paragraph2)
output = lstm_layer(paragraph3)
# reset_states() will reset the cached state to the original initial_state.
# If no initial_state was provided, zero-states will be used by default.
# TODO -- your code goes here
"""
Explanation: RNN layers and RNN cells
In addition to the built-in RNN layers, the RNN API also provides cell-level APIs.
Unlike RNN layers, which processes whole batches of input sequences, the RNN cell only
processes a single timestep.
The cell is the inside of the for loop of a RNN layer. Wrapping a cell inside a
keras.layers.RNN layer gives you a layer capable of processing batches of
sequences, e.g. RNN(LSTMCell(10)).
Mathematically, RNN(LSTMCell(10)) produces the same result as LSTM(10). In fact,
the implementation of this layer in TF v1.x was just creating the corresponding RNN
cell and wrapping it in a RNN layer. However using the built-in GRU and LSTM
layers enable the use of CuDNN and you may see better performance.
There are three built-in RNN cells, each of them corresponding to the matching RNN
layer.
keras.layers.SimpleRNNCell corresponds to the SimpleRNN layer.
keras.layers.GRUCell corresponds to the GRU layer.
keras.layers.LSTMCell corresponds to the LSTM layer.
The cell abstraction, together with the generic keras.layers.RNN class, make it
very easy to implement custom RNN architectures for your research.
Cross-batch statefulness
When processing very long sequences (possibly infinite), you may want to use the
pattern of cross-batch statefulness.
Normally, the internal state of a RNN layer is reset every time it sees a new batch
(i.e. every sample seen by the layer is assumed to be independent of the past). The
layer will only maintain a state while processing a given sample.
If you have very long sequences though, it is useful to break them into shorter
sequences, and to feed these shorter sequences sequentially into a RNN layer without
resetting the layer's state. That way, the layer can retain information about the
entirety of the sequence, even though it's only seeing one sub-sequence at a time.
You can do this by setting stateful=True in the constructor.
If you have a sequence s = [t0, t1, ... t1546, t1547], you would split it into e.g.
s1 = [t0, t1, ... t100]
s2 = [t101, ... t201]
...
s16 = [t1501, ... t1547]
Then you would process it via:
python
lstm_layer = layers.LSTM(64, stateful=True)
for s in sub_sequences:
output = lstm_layer(s)
When you want to clear the state, you can use layer.reset_states().
Note: In this setup, sample i in a given batch is assumed to be the continuation of
sample i in the previous batch. This means that all batches should contain the same
number of samples (batch size). E.g. if a batch contains [sequence_A_from_t0_to_t100,
sequence_B_from_t0_to_t100], the next batch should contain
[sequence_A_from_t101_to_t200, sequence_B_from_t101_to_t200].
Here is a complete example:
End of explanation
"""
paragraph1 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph2 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph3 = np.random.random((20, 10, 50)).astype(np.float32)
lstm_layer = layers.LSTM(64, stateful=True)
output = lstm_layer(paragraph1)
output = lstm_layer(paragraph2)
existing_state = lstm_layer.states
new_lstm_layer = layers.LSTM(64)
new_output = new_lstm_layer(paragraph3, initial_state=existing_state)
"""
Explanation: RNN State Reuse
<a id="rnn_state_reuse"></a>
The recorded states of the RNN layer are not included in the layer.weights(). If you
would like to reuse the state from a RNN layer, you can retrieve the states value by
layer.states and use it as the
initial state for a new layer via the Keras functional API like new_layer(inputs,
initial_state=layer.states), or model subclassing.
Please also note that sequential model might not be used in this case since it only
supports layers with single input and output, the extra input of initial state makes
it impossible to use here.
End of explanation
"""
model = keras.Sequential()
# Add Bidirectional layers
# TODO -- your code goes here
model.summary()
"""
Explanation: Bidirectional RNNs
For sequences other than time series (e.g. text), it is often the case that a RNN model
can perform better if it not only processes sequence from start to end, but also
backwards. For example, to predict the next word in a sentence, it is often useful to
have the context around the word, not only just the words that come before it.
Keras provides an easy API for you to build such bidirectional RNNs: the
keras.layers.Bidirectional wrapper.
End of explanation
"""
batch_size = 64
# Each MNIST image batch is a tensor of shape (batch_size, 28, 28).
# Each input sequence will be of size (28, 28) (height is treated like time).
input_dim = 28
units = 64
output_size = 10 # labels are from 0 to 9
# Build the RNN model
def build_model(allow_cudnn_kernel=True):
# CuDNN is only available at the layer level, and not at the cell level.
# This means `LSTM(units)` will use the CuDNN kernel,
# while RNN(LSTMCell(units)) will run on non-CuDNN kernel.
if allow_cudnn_kernel:
# The LSTM layer with default options uses CuDNN.
lstm_layer = keras.layers.LSTM(units, input_shape=(None, input_dim))
else:
# Wrapping a LSTMCell in a RNN layer will not use CuDNN.
lstm_layer = keras.layers.RNN(
keras.layers.LSTMCell(units), input_shape=(None, input_dim)
)
model = keras.models.Sequential(
[
lstm_layer,
keras.layers.BatchNormalization(),
keras.layers.Dense(output_size),
]
)
return model
"""
Explanation: Under the hood, Bidirectional will copy the RNN layer passed in, and flip the
go_backwards field of the newly copied layer, so that it will process the inputs in
reverse order.
The output of the Bidirectional RNN will be, by default, the concatenation of the forward layer
output and the backward layer output. If you need a different merging behavior, e.g.
summation, change the merge_mode parameter in the Bidirectional wrapper
constructor. For more details about Bidirectional, please check
the API docs.
Performance optimization and CuDNN kernels
In TensorFlow 2.0, the built-in LSTM and GRU layers have been updated to leverage CuDNN
kernels by default when a GPU is available. With this change, the prior
keras.layers.CuDNNLSTM/CuDNNGRU layers have been deprecated, and you can build your
model without worrying about the hardware it will run on.
Since the CuDNN kernel is built with certain assumptions, this means the layer will
not be able to use the CuDNN kernel if you change the defaults of the built-in LSTM or
GRU layers. E.g.:
Changing the activation function from tanh to something else.
Changing the recurrent_activation function from sigmoid to something else.
Using recurrent_dropout > 0.
Setting unroll to True, which forces LSTM/GRU to decompose the inner
tf.while_loop into an unrolled for loop.
Setting use_bias to False.
Using masking when the input data is not strictly right padded (if the mask
corresponds to strictly right padded data, CuDNN can still be used. This is the most
common case).
For the detailed list of constraints, please see the documentation for the
LSTM and
GRU layers.
Using CuDNN kernels when available
Let's build a simple LSTM model to demonstrate the performance difference.
We'll use as input sequences the sequence of rows of MNIST digits (treating each row of
pixels as a timestep), and we'll predict the digit's label.
End of explanation
"""
mnist = keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
sample, sample_label = x_train[0], y_train[0]
"""
Explanation: Let's load the MNIST dataset:
End of explanation
"""
model = build_model(allow_cudnn_kernel=True)
# Compile the model
# TODO -- your code goes here
model.fit(
x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1
)
"""
Explanation: Let's create a model instance and train it.
We choose sparse_categorical_crossentropy as the loss function for the model. The
output of the model has shape of [batch_size, 10]. The target for the model is an
integer vector, each of the integer is in the range of 0 to 9.
End of explanation
"""
noncudnn_model = build_model(allow_cudnn_kernel=False)
noncudnn_model.set_weights(model.get_weights())
noncudnn_model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer="sgd",
metrics=["accuracy"],
)
noncudnn_model.fit(
x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1
)
"""
Explanation: Now, let's compare to a model that does not use the CuDNN kernel:
End of explanation
"""
import matplotlib.pyplot as plt
with tf.device("CPU:0"):
cpu_model = build_model(allow_cudnn_kernel=True)
cpu_model.set_weights(model.get_weights())
result = tf.argmax(cpu_model.predict_on_batch(tf.expand_dims(sample, 0)), axis=1)
print(
"Predicted result is: %s, target result is: %s" % (result.numpy(), sample_label)
)
plt.imshow(sample, cmap=plt.get_cmap("gray"))
"""
Explanation: When running on a machine with a NVIDIA GPU and CuDNN installed,
the model built with CuDNN is much faster to train compared to the
model that uses the regular TensorFlow kernel.
The same CuDNN-enabled model can also be used to run inference in a CPU-only
environment. The tf.device annotation below is just forcing the device placement.
The model will run on CPU by default if no GPU is available.
You simply don't have to worry about the hardware you're running on anymore. Isn't that
pretty cool?
End of explanation
"""
class NestedCell(keras.layers.Layer):
def __init__(self, unit_1, unit_2, unit_3, **kwargs):
self.unit_1 = unit_1
self.unit_2 = unit_2
self.unit_3 = unit_3
self.state_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])]
self.output_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])]
super(NestedCell, self).__init__(**kwargs)
def build(self, input_shapes):
# expect input_shape to contain 2 items, [(batch, i1), (batch, i2, i3)]
i1 = input_shapes[0][1]
i2 = input_shapes[1][1]
i3 = input_shapes[1][2]
self.kernel_1 = self.add_weight(
shape=(i1, self.unit_1), initializer="uniform", name="kernel_1"
)
self.kernel_2_3 = self.add_weight(
shape=(i2, i3, self.unit_2, self.unit_3),
initializer="uniform",
name="kernel_2_3",
)
def call(self, inputs, states):
# inputs should be in [(batch, input_1), (batch, input_2, input_3)]
# state should be in shape [(batch, unit_1), (batch, unit_2, unit_3)]
input_1, input_2 = tf.nest.flatten(inputs)
s1, s2 = states
output_1 = tf.matmul(input_1, self.kernel_1)
output_2_3 = tf.einsum("bij,ijkl->bkl", input_2, self.kernel_2_3)
state_1 = s1 + output_1
state_2_3 = s2 + output_2_3
output = (output_1, output_2_3)
new_states = (state_1, state_2_3)
return output, new_states
def get_config(self):
return {"unit_1": self.unit_1, "unit_2": unit_2, "unit_3": self.unit_3}
"""
Explanation: RNNs with list/dict inputs, or nested inputs
Nested structures allow implementers to include more information within a single
timestep. For example, a video frame could have audio and video input at the same
time. The data shape in this case could be:
[batch, timestep, {"video": [height, width, channel], "audio": [frequency]}]
In another example, handwriting data could have both coordinates x and y for the
current position of the pen, as well as pressure information. So the data
representation could be:
[batch, timestep, {"location": [x, y], "pressure": [force]}]
The following code provides an example of how to build a custom RNN cell that accepts
such structured inputs.
Define a custom cell that supports nested input/output
See Making new Layers & Models via subclassing
for details on writing your own layers.
End of explanation
"""
unit_1 = 10
unit_2 = 20
unit_3 = 30
i1 = 32
i2 = 64
i3 = 32
batch_size = 64
num_batches = 10
timestep = 50
cell = NestedCell(unit_1, unit_2, unit_3)
rnn = keras.layers.RNN(cell)
input_1 = keras.Input((None, i1))
input_2 = keras.Input((None, i2, i3))
outputs = rnn((input_1, input_2))
model = keras.models.Model([input_1, input_2], outputs)
model.compile(optimizer="adam", loss="mse", metrics=["accuracy"])
"""
Explanation: Build a RNN model with nested input/output
Let's build a Keras model that uses a keras.layers.RNN layer and the custom cell
we just defined.
End of explanation
"""
input_1_data = np.random.random((batch_size * num_batches, timestep, i1))
input_2_data = np.random.random((batch_size * num_batches, timestep, i2, i3))
target_1_data = np.random.random((batch_size * num_batches, unit_1))
target_2_data = np.random.random((batch_size * num_batches, unit_2, unit_3))
input_data = [input_1_data, input_2_data]
target_data = [target_1_data, target_2_data]
model.fit(input_data, target_data, batch_size=batch_size)
"""
Explanation: Train the model with randomly generated data
Since there isn't a good candidate dataset for this model, we use random Numpy data for
demonstration.
End of explanation
"""
|
dostrebel/working_place_ds_17 | 06_Python_Rückblick/01+Rückblick+02+For-Loop-Übungen+.ipynb | mit | primzweibissieben = [2, 3, 5, 7]
for prime in primzweibissieben:
print(prime)
"""
Explanation: 10 For-Loop-Rückblick-Übungen
In den Teilen der folgenden Übungen habe ich den Code mit "XXX" ausgewechselt. Es gilt in allen Übungen, den korrekten Code auszuführen und die Zelle dann auszuführen.
1.Drucke alle diese Prim-Zahlen aus:
End of explanation
"""
for x in range(5):
print(x)
for x in range(3, 6):
print(x)
"""
Explanation: 2.Drucke alle die Zahlen von 0 bis 4 aus:
End of explanation
"""
numbers = [
951, 402, 984, 651, 360, 69, 408, 319, 601, 485, 980, 507, 725, 547, 544,
615, 83, 165, 141, 501, 263, 617, 865, 575, 219, 390, 984, 592, 236, 105, 942, 941,
386, 462, 47, 418, 907, 344, 236, 375, 823, 566, 597, 978, 328, 615, 953, 345,
399, 162, 758, 219, 918, 237, 412, 566, 826, 248, 866, 950, 626, 949, 687, 217,
815, 67, 104, 58, 512, 24, 892, 894, 767, 553, 81, 379, 843, 831, 445, 742, 717,
958, 609, 842, 451, 688, 753, 854, 685, 93, 857, 440, 380, 126, 721, 328, 753, 470,
743, 527
]
# Hier kommt Dein Code:
new_lst = [] # braucht es nicht
for elem in numbers:
if elem < 238 and elem % 2 == 0:
new_lst.append(elem)
else:#braucht es nicht
continue #braucht es nicht
print(new_lst) #aber dann muss print in der if-clause sein(eingezogen)
#Lösung:
"""
Explanation: 4.Baue einen For-Loop, indem Du alle geraden Zahlen ausdruckst, die tiefer sind als 237.
End of explanation
"""
sum(numbers)
count = 0
for x in numbers:
count = count + x
print(count)
#Lösung:
"""
Explanation: 5.Addiere alle Zahlen in der Liste
End of explanation
"""
evennumber = []
for elem in numbers:
if elem % 2 == 0:
evennumber.append(elem)
sum(evennumber)
"""
Explanation: 6.Addiere nur die Zahlen, die gerade sind
End of explanation
"""
Satz = ['Hello World', 'Hello World','Hello World','Hello World','Hello World']
for elem in Satz:
print(elem)
hello = 'Hello World'
for x in range(5):
print(hello)
#Lösung
"""
Explanation: 7.Drucke mit einem For Loop 5 Mal hintereinander Hello World aus
End of explanation
"""
l=[]
for i in range(2000, 3201):
if (i % 7==0) and (i % 5!=0):#!= entspricht nichtgleich
l.append(str(i))
print(','.join(l))#join verbindet alle elemente einer liste, hier setze ich noch ein komma dazwischen
"""
Explanation: 8.Entwickle ein Programm, das alle Nummern zwischen 2000 und 3200 findet, die durch 7, aber nicht durch 5 teilbar sind. Das Ergebnis sollte auf einer Zeile ausgedruckt werden. Tipp: Schaue Dir hier die Vergleichsoperanden von Python an.
End of explanation
"""
lst = range(45,99)
newlst = []
for i in lst:
i = str(i)
newlst.append(i)
print(newlst)#return kommt immer in Funktionen!
"""
Explanation: 9.Schreibe einen For Loop, der die Nummern in der folgenden Liste von int in str verwandelt.
End of explanation
"""
newnewlist = [] #Replace ist ein wichtiger Befehl, da man so grosse Datenmengen bereinigt. Man ersetzt dann einfach durch ''
for elem in newlst:
if '4' in elem:
elem = elem.replace('4', 'A')
if '5' in elem:
elem = elem.replace('5', 'B')
newnewlist.append(elem)
newnewlist
"""
Explanation: 10.Schreibe nun ein Programm, das alle Ziffern 4 mit dem Buchstaben A ersetzte, alle Ziffern 5 mit dem Buchtaben B.
End of explanation
"""
|
NORCatUofC/rain | n-year/notebooks/Frequency of N-Year Storms.ipynb | mit | from __future__ import absolute_import, division, print_function, unicode_literals
import pandas as pd
from datetime import datetime, timedelta
import operator
import matplotlib.pyplot as plt
import numpy as np
from collections import namedtuple
%matplotlib inline
n_year_storms = pd.read_csv('data/n_year_storms_ohare_noaa.csv')
n_year_storms['start_time'] = pd.to_datetime(n_year_storms['start_time'])
n_year_storms['end_time'] = pd.to_datetime(n_year_storms['end_time'])
n_year_storms = n_year_storms.set_index('start_time')
n_year_storms.head()
# Based on previous notebooks, we should have 83 n-year events in this timeframe.
len(n_year_storms)
ns_by_year = {year: {n: 0 for n in list(n_year_storms['n'].unique())} for year in range(1970, 2017)}
for index, event in n_year_storms.iterrows():
    ns_by_year[event['year']][int(event['n'])] += 1
ns_by_year = pd.DataFrame(ns_by_year).transpose()
ns_by_year.head()
# Double check that we still have 83 events
ns_by_year.sum().sum()
"""
Explanation: Frequency of N-Year Storms
This notebook investigates changes in the frequency of N-Year Storms
Please see previous notebook "N-Year Storms" to see how N-Year storms were calculated, and a little more information about how often these occur. Building off this, this notebook will break the time into buckets, and use that to see if these storms are happening more or less frequently
End of explanation
"""
all_years = [i for i in range(1970, 2016)]
small_events = ns_by_year[(ns_by_year[1] > 0) | (ns_by_year[2] > 0)][[1,2]]
small_events = small_events.reindex(all_years, fill_value=0)
small_events.columns = [str(n) + '-year' for n in small_events.columns]
small_events.head()
# Number of 1 and 2 year events per year
small_events.cumsum().plot(kind='line', stacked=False, title="1- and 2-year Storms by Year - Cumulative Total over Time")
"""
Explanation: Looking at the "N-Year Storms" notebook, it is pretty obvious when the big storms are happening -- for the most part more recently. However, there are so many 1 and 2 year events, that it is tough to tell when they are happening. Let's create a graph with only those events.
End of explanation
"""
# Divide into buckets using resampling
n_year_storms.resample('15A',how={'year':'count'})
# Using the resample method is not really giving me what I want. Do this brute force
# TODO: Play around with resample to do this more efficiently
# I'd like to try and be a little more explicit in how I'm breaking this up
def find_bucket(year):
    if year < 1986:
        return '1970-1985'
    elif year <= 2000:
        return '1986-2000'
    else:
        return '2001-2015'
ns_by_year['year'] = ns_by_year.index.values
ns_by_year['bucket3'] = ns_by_year['year'].apply(find_bucket)
ns_by_year = ns_by_year.drop('year', 1)
ns_by_year.head()
bucket3 = ns_by_year.groupby('bucket3').sum()
bucket3.head()
# Make sure there are 83 storms
bucket3.sum().sum().sum()
bucket3.plot(kind='bar', stacked=True, title="N-Year Storms across 3 time intervals")
"""
Explanation: From the graph above, it actually looks like the middle of the dataset has the most action.
Let's try something else. Dividing the timeframes into buckets, and seeing if that helps get a big picture
End of explanation
"""
ns_by_year.head()
def find_bucket(year):
    if year < 1976:
        return '1970-1975'
    elif year < 1981:
        return '1976-1980'
    elif year < 1986:
        return '1981-1985'
    elif year < 1991:
        return '1986-1990'
    elif year < 1996:
        return '1991-1995'
    elif year < 2001:
        return '1996-2000'
    elif year < 2006:
        return '2001-2005'
    elif year < 2011:
        return '2006-2010'
    else:
        return '2011-2015'
ns_by_year['year'] = ns_by_year.index.values
ns_by_year['bucket8'] = ns_by_year['year'].apply(find_bucket)
ns_by_year = ns_by_year.drop('year', 1)
ns_by_year.head()
bucket8 = ns_by_year.drop('bucket3',1).groupby('bucket8').sum()
bucket8.head()
bucket8.sum().sum().sum()
bucket8.plot(kind='bar', stacked=True, title="N-Year Storms across 8 Intervals")
"""
Explanation: A few thoughts.
The middle interval has the most events
66% of the 100-year events and 100% of the 50-year events happened in the most recent interval
The number of 2- and 5-year storms are going up
The number of 1- and 10-year storms are going down
Let's break this into smaller intervals
End of explanation
"""
|
liufuyang/deep_learning_tutorial | jizhi-pytorch-2/03_text_generation/Homework_3/Homeword_LSTM_Name_Generator.ipynb | mit | # Step one, of course: import PyTorch and related packages
import torch
import torch.nn as nn
import torch.optim
from torch.autograd import Variable
import numpy as np
"""
Explanation: Deep Learning with PyTorch (Part 2), Lesson 3: Neural-Network Mozart
Homework: use an LSTM to write an international surname generator
In the course we learned to generate MIDI music with an LSTM. Here we use a similar approach to build an LSTM "naming master": given a country name, the model generates a few surnames typical of that country.
The finished model can be used like this: specify a country name, and the model generates several surnames belonging to that country.
```
python generate.py Russian
Rovakov Uantov Shavakov
python generate.py German
Gerren Ereng Rosher
python generate.py Spanish
Salla Parer Allan
python generate.py Chinese
Chan Hang Iun
```
End of explanation
"""
import glob
import unicodedata
import string
# all_letters: all supported printable letters plus punctuation
all_letters = string.ascii_letters + " .,;'-"
# Plus EOS marker
n_letters = len(all_letters) + 1
EOS = n_letters - 1
def unicode_to_ascii(s):
    return ''.join(
        c for c in unicodedata.normalize('NFD', s)
        if unicodedata.category(c) != 'Mn'
        and c in all_letters
    )
print(unicode_to_ascii("O'Néàl"))
"""
Explanation: Preparing the data
The data again consists of 18 text files, each named after a country and containing a large number of surnames from that country.
Before reading the data, we convert names from every language into the 26 English letters, which keeps the network's input small. The function below performs the conversion.
End of explanation
"""
# all the visible characters that may appear in a surname
print('all_letters: ', all_letters)
# total number of characters, +1 for the EOS end marker
print('n_letters: ', n_letters)
# end-of-sequence marker; it carries no actual content
print('EOS: ', EOS)
"""
Explanation: As you can see, "O'Néàl" has been converted into O'Neal, expressed in plain ASCII characters.
Also note the following variables in the code above.
End of explanation
"""
# Read the names from a file line by line and return them as a list
def read_lines(filename):
    lines = open(filename).read().strip().split('\n')
    return [unicode_to_ascii(line) for line in lines]

# category_lines is a dictionary:
# its keys are country names, its values are the names read from that country's file
category_lines = {}
# all_categories is a list containing all the country names
all_categories = []

# loop over all the files
for filename in glob.glob('./names/*.txt'):
    # extract the country name from the file name
    category = filename.split('/')[-1].split('.')[0]
    # add the country name to the list
    all_categories.append(category)
    # read all the names in this country's file
    lines = read_lines(filename)
    # store the names in the dictionary under their country
    category_lines[category] = lines

# total number of countries
n_categories = len(all_categories)
print('# categories: ', n_categories, all_categories)
print()
print('# Russian names: ', category_lines['Russian'][:10])
# Count how many training examples we have in total
all_line_num = 0
for key in category_lines:
    all_line_num += len(category_lines[key])
print(all_line_num)
"""
Explanation: Here all_letters contains every character that can appear in the dataset — the "alphabet".
n_letters is the length of that alphabet, 59 in this example. EOS has index 58; it maps to no real character and merely marks the end of a name.
Reading the data
With the preprocessing helpers ready, we can now safely read the data.
We build a list all_categories that stores all the country names.
We also build a dictionary category_lines, indexed by country name, holding the names belonging to each country.
End of explanation
"""
import random

def random_training_pair():
    # pick a country name at random
    category = random.choice(all_categories)
    # pick a random name belonging to that country
    line = random.choice(category_lines[category])
    return category, line

print(random_training_pair())
"""
Explanation: Now our data is ready and we can build the neural network!
Preparing for training
First, build a helper that picks a random (category, line) pair, for convenient use during training.
End of explanation
"""
# Convert the country a name belongs to into an index
def make_category_input(category):
    li = all_categories.index(category)
    return li

print(make_category_input('Italian'))
"""
Explanation: First we handle the country, converting the country name into an index.
This index is fed into the network together with the surname. The LSTM we are writing is conditional: it generates "surnames matching a condition" from a "country condition". Merging the condition with the conditioned data into a single training input is very common in conditional models.
For instance, a conditional GAN concatenates the data label onto the image during training.
End of explanation
"""
def make_chars_input(nameStr):
    name_char_list = list(map(lambda x: all_letters.find(x), nameStr))
    return name_char_list

def make_target(nameStr):
    target_char_list = list(map(lambda x: all_letters.find(x), nameStr[1:]))
    target_char_list.append(n_letters - 1)  # EOS
    return target_char_list
"""
Explanation: At each training step — that is, for each character of each name — the network's input is (category, current letter, hidden state) and its output is (next letter, next hidden state).
As in the course, the network predicts the "next character" from the "current character". For the name "Kasparov", the (input, target) pairs are ("K", "a"), ("a", "s"), ("s", "p"), ("p", "a"), ("a", "r"), ("r", "o"), ("o", "v"), ("v", "EOS").
End of explanation
"""
def random_training_set():
    # randomly pick a (category, line) data pair
    category, line = random_training_pair()
    # convert to the corresponding index tensors
    category_input = make_category_input(category)
    line_input = make_chars_input(line)
    line_target = make_target(line)
    return category_input, line_input, line_target

print(random_training_set())
"""
Explanation: Again for convenience during training, we build a random_training_set function that randomly picks a (category, line) pair and converts it into the tensors the training loop needs: (category, input, target).
End of explanation
"""
# A hand-written LSTM model
class LSTMNetwork(nn.Module):
    def __init__(self, category_size, name_size, hidden_size, output_size, num_layers=1):
        super(LSTMNetwork, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        # embeddings for the category and for the name characters
        self.embedding1 = nn.Embedding(category_size, hidden_size)
        self.embedding2 = nn.Embedding(name_size, hidden_size)
        # the two embeddings are concatenated, hence input size hidden_size * 2
        self.lstm = nn.LSTM(hidden_size*2, hidden_size, num_layers, batch_first=True)
        # self.dropout = nn.Dropout(0.2)
        self.fc = nn.Linear(hidden_size, output_size)
        # output layer
        self.softmax = nn.LogSoftmax()

    def forward(self, category_variable, name_variable, hidden):
        # run the two embedding layers separately
        category = self.embedding1(category_variable)
        name = self.embedding2(name_variable)
        # concatenate the category and name embeddings
        input_variable = torch.cat([category, name]).view(HIDDEN_SIZE*2, -1)
        # from input to hidden layer
        output, hidden = self.lstm(input_variable, hidden)
        # output size: batch_size, len_seq, hidden_size
        output = output[:, -1, ...]
        # now output has size: batch_size, hidden_size
        # fully connected layer
        output = self.fc(output)
        # output size: batch_size, output_size
        # softmax
        output = self.softmax(output)
        return output, hidden

    def initHidden(self):
        # initialize the hidden units to all zeros
        # note the sizes: num_layers, batch_size, hidden_size
        hidden = Variable(torch.zeros(self.num_layers, 1, self.hidden_size))
        # initialize the internal cell state to all zeros
        cell = Variable(torch.zeros(self.num_layers, 1, self.hidden_size))
        return (hidden, cell)
"""
Explanation: Building the neural network
The LSTM used here is structurally very similar to the music-generation model from the course, with one point worth attention:
We feed the country and that country's surnames into the network together, so the LSTM can learn the characteristic style of each country's surnames and generate different names for different countries.
But how should the country data be joined with the surname data? Should they be concatenated before or after the embedding? And how does the embedded dimension relate to hidden_size?
You need to complete this model, using the course model as a reference.
End of explanation
"""
# Training function: pick one random training example and step through its characters
def train_LSTM(lstm):
    # initialize the hidden state, zero the gradients and the loss
    hidden = lstm.initHidden()
    optimizer.zero_grad()
    loss = 0
    # randomly pick one training example
    category_input, line_input, line_target = random_training_set()
    # wrap the country data
    category_variable = Variable(torch.LongTensor([category_input]))
    # loop over the characters
    for t in range(len(line_input)):
        # current character of the surname
        name_variable = Variable(torch.LongTensor([line_input[t]]))
        # target character
        name_target = Variable(torch.LongTensor([line_target[t]]))
        # run the model
        output, hidden = lstm(category_variable, name_variable, hidden)
        # accumulate the loss
        loss += criterion(output, name_target)
    # average the loss over the characters
    l = len(line_input)
    loss = 1.0 * loss / l
    # backpropagate and update the weights
    loss.backward()
    optimizer.step()
    return loss
"""
Explanation: Start training!
Unlike the earlier classification problem, where only the final output is used, in this name-generation task the network makes a prediction at every step, so we need to compute a loss at every step.
PyTorch makes this very easy: it lets us simply add up the per-step losses and backpropagate once after iterating through a whole name.
You need to complete the training function, or write your own.
End of explanation
"""
import time
import math
def time_since(t):
    now = time.time()
    s = now - t
    m = math.floor(s / 60)
    s -= m * 60
    return '%dm %ds' % (m, s)
"""
Explanation: We define a time_since function that prints how long training has been running.
End of explanation
"""
HIDDEN_SIZE = 64
num_epoch = 3
learning_rate = 0.002
num_layers = 2

# instantiate the model
lstm = LSTMNetwork(n_categories, n_letters-1, HIDDEN_SIZE, n_letters, num_layers=num_layers)

# define the optimizer and the loss function
optimizer = torch.optim.Adam(lstm.parameters(), lr=learning_rate)
criterion = torch.nn.NLLLoss()
"""
Explanation: Below, you need to define the loss function and the optimizer, and instantiate the model with its parameters.
End of explanation
"""
start = time.time()
records = []

# main training loop
for epoch in range(num_epoch):
    train_loss = 0
    # sample randomly, roughly once per row of data
    for i in range(all_line_num):
        loss = train_LSTM(lstm)
        # train_loss += loss
        # every 2000 steps, print the current loss and progress
        if i % 2000 == 0:
            training_process = (all_line_num * epoch + i) / (all_line_num * num_epoch) * 100
            training_process = '%.2f' % training_process
            print('Epoch {}, training loss: {:.2f}, progress: {:.2f}% ({})'\
                  .format(epoch, loss.data.numpy()[0], float(training_process), time_since(start)))
            records.append([loss.data.numpy()[0]])
"""
Explanation: The training procedure is the same routine we have used in the previous lessons.
End of explanation
"""
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
%matplotlib inline
a = [i[0] for i in records]
plt.plot(a[0::500], label = 'Train Loss')
plt.xlabel('Steps')
plt.ylabel('Loss')
plt.legend()
"""
Explanation: Plotting the loss curve
Let's plot the losses recorded during training as a curve and observe how well the network is learning.
End of explanation
"""
max_length = 20

# Generate one name, given a country name `category`,
# a starting character `start_char`,
# and a temperature controlling randomness
def generate_one(category, start_char='A', temperature=0.2):
    # initialize the inputs: the country and the first character
    # country
    category_idx = make_category_input(category)
    category_variable = Variable(torch.LongTensor([category_idx]))
    # first character
    name_idx = all_letters.index(start_char)
    name_variable = Variable(torch.LongTensor([name_idx]))
    # initialize the hidden state
    hidden = lstm.initHidden()
    output_str = start_char
    for i in range(max_length):
        # run the model
        output, hidden = lstm(category_variable, name_variable, hidden)
        # turn the output into a multinomial distribution,
        output_dist = output.data.view(-1).div(temperature).exp()
        # so the next character can be chosen according to the temperature:
        # a low temperature favours the character the network rates most probable,
        # a high temperature makes the choice closer to uniformly random
        top_i = torch.multinomial(output_dist, 1)[0]
        # if the sampled character is EOS, generation ends
        if top_i == EOS:
            break
        else:
            # otherwise append the character and continue
            char = all_letters[top_i]
            output_str += char
            chars_input = all_letters.index(char)
            name_variable = Variable(torch.LongTensor([chars_input]))
    return output_str

# A convenience wrapper to generate several names at once
def generate(category, start_chars='ABC'):
    for start_char in start_chars:
        print(generate_one(category, start_char))
generate('Russian', 'RUSKCJ')
generate('German', 'GERS')
generate('Spanish', 'SPAJFC')
generate('Chinese', 'CHIFYL')
generate('English', 'ABCKFJSIL')
"""
Explanation: Because my running-average loss computation hit a "division by zero" error, the loss curve has gaps; you can improve the computation to make the curve continuous.
Testing the network
Now that the network is trained, feeding it the first character makes it generate the second, feeding it the second generates the third, and so on until EOS is produced.
The generate_one function below makes this convenient. Inside it we:
Build the tensors for the input country, the starting character, and the initial hidden state
Create an output_str variable that initially contains only the "starting character"
Cap the generated name at max_length characters
Feed the current character into the network
Pick the most probable next character from the output, keeping the current hidden state
If the character is EOS, stop generating
If it is a regular character, append it to output_str and continue
Return the final generated name string
You need to write the model-validation code yourself.
End of explanation
"""
print(lstm)
"""
Explanation: You can see the LSTM's predictions, but they are clearly not ideal yet. You can probably make the model perform better by adjusting the network architecture or the hyperparameters.
End of explanation
"""
|
Z0m6ie/Zombie_Code | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb | mit | x = 1
y = 2
x + y
x
"""
Explanation: You are currently looking at version 1.1 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.
The Python Programming Language: Functions
End of explanation
"""
def add_numbers(x, y):
return x + y
add_numbers(x, y)
"""
Explanation: <br>
add_numbers is a function that takes two numbers and adds them together.
End of explanation
"""
def add_numbers(x, y, z=None):
    if (z==None):
        return x+y
    else:
        return x+y+z
print(add_numbers(1, 2))
print(add_numbers(1, 2, 3))
"""
Explanation: <br>
add_numbers updated to take an optional 3rd parameter. Using print allows printing of multiple expressions within a single cell.
End of explanation
"""
def add_numbers(x, y, z=None, flag=False):
    if (flag):
        print('Flag is true!')
    if (z==None):
        return x + y
    else:
        return x + y + z
print(add_numbers(1, 2, flag=True))
"""
Explanation: <br>
add_numbers updated to take an optional flag parameter.
End of explanation
"""
def add_numbers(x, y):
    return x + y
a = add_numbers
a(1,2)
"""
Explanation: <br>
Assign function add_numbers to variable a.
End of explanation
"""
type('This is a string')
type(None)
type(1)
type(1.0)
type(add_numbers)
"""
Explanation: <br>
The Python Programming Language: Types and Sequences
<br>
Use type to return the object's type.
End of explanation
"""
x = (1, 'a', 2, 'b')
type(x)
"""
Explanation: <br>
Tuples are an immutable data structure (cannot be altered).
End of explanation
"""
x = [1, 'a', 2, 'b']
type(x)
"""
Explanation: <br>
Lists are a mutable data structure.
End of explanation
"""
x.append(3.3)
print(x)
"""
Explanation: <br>
Use append to append an object to a list.
End of explanation
"""
for item in x:
    print(item)
"""
Explanation: <br>
This is an example of how to loop through each item in the list.
End of explanation
"""
i = 0
while( i != len(x) ):
    print(x[i])
    i = i + 1
"""
Explanation: <br>
Or using the indexing operator:
End of explanation
"""
[1,2] + [3,4]
"""
Explanation: <br>
Use + to concatenate lists.
End of explanation
"""
[1]*3
"""
Explanation: <br>
Use * to repeat lists.
End of explanation
"""
1 in [1, 2, 3]
"""
Explanation: <br>
Use the in operator to check if something is inside a list.
End of explanation
"""
x = 'This is a string'
print(x[0]) #first character
print(x[0:1]) #first character, but we have explicitly set the end character
print(x[0:2]) #first two characters
print(x[::-1])
"""
Explanation: <br>
Now let's look at strings. Use bracket notation to slice a string.
End of explanation
"""
x[-1]
"""
Explanation: <br>
This will return the last element of the string.
End of explanation
"""
x[-4:-2]
"""
Explanation: <br>
This will return the slice starting from the 4th element from the end and stopping before the 2nd element from the end.
End of explanation
"""
x[:3]
"""
Explanation: <br>
This is a slice from the beginning of the string and stopping before the 3rd element.
End of explanation
"""
x[3:]
firstname = 'Christopher'
lastname = 'Brooks'
print(firstname + ' ' + lastname)
print(firstname*3)
print('Chris' in firstname)
"""
Explanation: <br>
And this is a slice starting from the 3rd element of the string and going all the way to the end.
End of explanation
"""
firstname = 'Christopher Arthur Hansen Brooks'.split(' ')[0] # [0] selects the first element of the list
lastname = 'Christopher Arthur Hansen Brooks'.split(' ')[-1] # [-1] selects the last element of the list
print(firstname)
print(lastname)
"""
Explanation: <br>
split returns a list of all the words in a string, or a list split on a specific character.
End of explanation
"""
'Chris' + 2
'Chris' + str(2)
"""
Explanation: <br>
Make sure you convert objects to strings before concatenating.
End of explanation
"""
x = {'Christopher Brooks': 'brooksch@umich.edu', 'Bill Gates': 'billg@microsoft.com'}
x['Christopher Brooks'] # Retrieve a value by using the indexing operator
x['Kevyn Collins-Thompson'] = "Test Test"
x['Kevyn Collins-Thompson']
"""
Explanation: <br>
Dictionaries associate keys with values.
End of explanation
"""
for name in x:
    print(x[name])
"""
Explanation: <br>
Iterate over all of the keys:
End of explanation
"""
for email in x.values():
    print(email)
"""
Explanation: <br>
Iterate over all of the values:
End of explanation
"""
for name, email in x.items():
    print(name)
    print(email)
"""
Explanation: <br>
Iterate over all of the items in the list:
End of explanation
"""
x = ('Christopher', 'Brooks', 'brooksch@umich.edu')
fname, lname, email = x
fname
lname
"""
Explanation: <br>
You can unpack a sequence into different variables:
End of explanation
"""
x = ('Christopher', 'Brooks', 'brooksch@umich.edu', 'Ann Arbor')
fname, lname, email, location = x
"""
Explanation: <br>
Make sure the number of values you are unpacking matches the number of variables being assigned.
End of explanation
"""
print("Chris" + 2)
print('Chris' + str(2))
"""
Explanation: <br>
The Python Programming Language: More on Strings
End of explanation
"""
sales_record = {
'price': 3.24,
'num_items': 4,
'person': 'Chris'}
sales_statement = '{} bought {} item(s) at a price of {} each for a total of {}'
print(sales_statement.format(sales_record['person'],
sales_record['num_items'],
sales_record['price'],
sales_record['num_items']*sales_record['price']))
"""
Explanation: <br>
Python has a built in method for convenient string formatting.
End of explanation
"""
import csv
import pandas as pd
# Nice, sets the displayed decimal precision
%precision 2
with open('mpg.csv') as csvfile:
    mpg = list(csv.DictReader(csvfile))
df = pd.read_csv('mpg.csv')
mpg[:3] # The first three dictionaries in our list.
df
"""
Explanation: <br>
Reading and Writing CSV files
<br>
Let's import our datafile mpg.csv, which contains fuel economy data for 234 cars.
mpg : miles per gallon
class : car classification
cty : city mpg
cyl : # of cylinders
displ : engine displacement in liters
drv : f = front-wheel drive, r = rear wheel drive, 4 = 4wd
fl : fuel (e = ethanol E85, d = diesel, r = regular, p = premium, c = CNG)
hwy : highway mpg
manufacturer : automobile manufacturer
model : model of car
trans : type of transmission
year : model year
End of explanation
"""
len(mpg)
"""
Explanation: <br>
csv.Dictreader has read in each row of our csv file as a dictionary. len shows that our list is comprised of 234 dictionaries.
End of explanation
"""
mpg[0].keys()
"""
Explanation: <br>
keys gives us the column names of our csv.
End of explanation
"""
sum(float(d['cty']) for d in mpg) / len(mpg)
"""
Explanation: <br>
This is how to find the average cty fuel economy across all cars. All values in the dictionaries are strings, so we need to convert to float.
End of explanation
"""
sum(float(d['hwy']) for d in mpg) / len(mpg)
"""
Explanation: <br>
Similarly this is how to find the average hwy fuel economy across all cars.
End of explanation
"""
# set returns unique values
cylinders = set(d['cyl'] for d in mpg)
cylinders
"""
Explanation: <br>
Use set to return the unique values for the number of cylinders the cars in our dataset have.
End of explanation
"""
CtyMpgByCyl = []
for c in cylinders: # iterate over all the cylinder levels
    summpg = 0
    cyltypecount = 0
    for d in mpg: # iterate over all dictionaries
        if d['cyl'] == c: # if the cylinder level type matches,
            summpg += float(d['cty']) # add the cty mpg
            cyltypecount += 1 # increment the count
    CtyMpgByCyl.append((c, summpg / cyltypecount)) # append the tuple ('cylinder', 'avg mpg')
CtyMpgByCyl.sort(key=lambda x: x[0])
CtyMpgByCyl
"""
Explanation: <br>
Here's a more complex example where we are grouping the cars by number of cylinder, and finding the average cty mpg for each group.
End of explanation
"""
vehicleclass = set(d['class'] for d in mpg) # what are the class types
vehicleclass
"""
Explanation: <br>
Use set to return the unique values for the class types in our dataset.
End of explanation
"""
HwyMpgByClass = []
for t in vehicleclass: # iterate over all the vehicle classes
    summpg = 0
    vclasscount = 0
    for d in mpg: # iterate over all dictionaries
        if d['class'] == t: # if the vehicle class matches,
            summpg += float(d['hwy']) # add the hwy mpg
            vclasscount += 1 # increment the count
    HwyMpgByClass.append((t, summpg / vclasscount)) # append the tuple ('class', 'avg mpg')
HwyMpgByClass.sort(key=lambda x: x[1])
HwyMpgByClass
"""
Explanation: <br>
And here's an example of how to find the average hwy mpg for each class of vehicle in our dataset.
End of explanation
"""
import datetime as dt
import time as tm
"""
Explanation: <br>
The Python Programming Language: Dates and Times
End of explanation
"""
tm.time()
"""
Explanation: <br>
time returns the current time in seconds since the Epoch. (January 1st, 1970)
End of explanation
"""
dtnow = dt.datetime.fromtimestamp(tm.time())
dtnow
"""
Explanation: <br>
Convert the timestamp to datetime.
End of explanation
"""
dtnow.year, dtnow.month, dtnow.day, dtnow.hour, dtnow.minute, dtnow.second # get year, month, day, etc.from a datetime
"""
Explanation: <br>
Handy datetime attributes:
End of explanation
"""
delta = dt.timedelta(days = 100) # create a timedelta of 100 days
delta
dt.date.today()
"""
Explanation: <br>
timedelta is a duration expressing the difference between two dates.
End of explanation
"""
today = dt.date.today()
today - delta # the date 100 days ago
today > today-delta # compare dates
"""
Explanation: <br>
date.today returns the current local date.
End of explanation
"""
class Person:
    department = 'School of Information' #a class variable

    def set_name(self, new_name): #a method
        self.name = new_name
    def set_location(self, new_location):
        self.location = new_location
person = Person()
person.set_name('Christopher Brooks')
person.set_location('Ann Arbor, MI, USA')
print('{} lives in {} and works in the department {}'.format(person.name, person.location, person.department))
"""
Explanation: <br>
The Python Programming Language: Objects and map()
<br>
An example of a class in python:
End of explanation
"""
store1 = [10.00, 11.00, 12.34, 2.34]
store2 = [9.00, 11.10, 12.34, 2.01]
cheapest = map(min, store1, store2)
cheapest
"""
Explanation: <br>
Here's an example of mapping the min function between two lists.
End of explanation
"""
for item in cheapest:
    print(item)
people = ['Dr. Christopher Brooks', 'Dr. Kevyn Collins-Thompson', 'Dr. VG Vinod Vydiswaran', 'Dr. Daniel Romero']
def split_title_and_name(person):
    title = person.split(' ')[0]
    lname = person.split(' ')[-1]
    return title + " " + lname
list(map(split_title_and_name, people))
"""
Explanation: <br>
Now let's iterate through the map object to see the values.
End of explanation
"""
# Single function only
my_function = lambda a, b, c : a + b + c
my_function(1, 2, 3)
people = ['Dr. Christopher Brooks', 'Dr. Kevyn Collins-Thompson', 'Dr. VG Vinod Vydiswaran', 'Dr. Daniel Romero']
def split_title_and_name(person):
    return person.split()[0] + ' ' + person.split()[-1]

#option 1
for person in people:
    print(split_title_and_name(person) == (lambda x: x.split()[0] + ' ' + x.split()[-1])(person))
#option 2
list(map(split_title_and_name, people)) == list(map(lambda person: person.split()[0] + ' ' + person.split()[-1], people))
"""
Explanation: <br>
The Python Programming Language: Lambda and List Comprehensions
<br>
Here's an example of lambda that takes in three parameters and adds the first two.
End of explanation
"""
my_list = []
for number in range(0, 1000):
    if number % 2 == 0:
        my_list.append(number)
my_list
"""
Explanation: <br>
Let's iterate from 0 to 999 and return the even numbers.
End of explanation
"""
my_list = [number for number in range(0,1000) if number % 2 == 0]
my_list
def times_tables():
    lst = []
    for i in range(10):
        for j in range(10):
            lst.append(i*j)
    return lst
times_tables() == [j*i for i in range(10) for j in range(10)]
lowercase = 'abcdefghijklmnopqrstuvwxyz'
digits = '0123456789'
correct_answer = [a+b+c+d for a in lowercase for b in lowercase for c in digits for d in digits]
correct_answer[0:100]
"""
Explanation: <br>
Now the same thing but with list comprehension.
End of explanation
"""
import numpy as np
"""
Explanation: <br>
The Python Programming Language: Numerical Python (NumPy)
End of explanation
"""
mylist = [1, 2, 3]
x = np.array(mylist)
x
"""
Explanation: <br>
Creating Arrays
Create a list and convert it to a numpy array
End of explanation
"""
y = np.array([4, 5, 6])
y
"""
Explanation: <br>
Or just pass in a list directly
End of explanation
"""
m = np.array([[7, 8, 9], [10, 11, 12]])
m
"""
Explanation: <br>
Pass in a list of lists to create a multidimensional array.
End of explanation
"""
m.shape
"""
Explanation: <br>
Use the shape method to find the dimensions of the array. (rows, columns)
End of explanation
"""
n = np.arange(0, 30, 2) # start at 0 count up by 2, stop before 30
n
"""
Explanation: <br>
arange returns evenly spaced values within a given interval.
End of explanation
"""
n = n.reshape(3, 5) # reshape array to be 3x5
n
"""
Explanation: <br>
reshape returns an array with the same data with a new shape.
End of explanation
"""
o = np.linspace(0, 4, 9) # return 9 evenly spaced values from 0 to 4
o
"""
Explanation: <br>
linspace returns evenly spaced numbers over a specified interval.
End of explanation
"""
o.resize(3, 3)
o
"""
Explanation: <br>
resize changes the shape and size of array in-place.
End of explanation
"""
np.ones((3, 2))
"""
Explanation: <br>
ones returns a new array of given shape and type, filled with ones.
End of explanation
"""
np.zeros((2, 3))
"""
Explanation: <br>
zeros returns a new array of given shape and type, filled with zeros.
End of explanation
"""
np.eye(3)
"""
Explanation: <br>
eye returns a 2-D array with ones on the diagonal and zeros elsewhere.
End of explanation
"""
np.diag(y)
"""
Explanation: <br>
diag extracts a diagonal or constructs a diagonal array.
End of explanation
"""
np.array([1, 2, 3] * 3)
"""
Explanation: <br>
Create an array using repeating list (or see np.tile)
End of explanation
"""
np.repeat([1, 2, 3], 3)
"""
Explanation: <br>
Repeat elements of an array using repeat.
End of explanation
"""
p = np.ones([2, 3], int)
p
"""
Explanation: <br>
Combining Arrays
End of explanation
"""
np.vstack([p, 2*p])
"""
Explanation: <br>
Use vstack to stack arrays in sequence vertically (row wise).
End of explanation
"""
np.hstack([p, 2*p])
"""
Explanation: <br>
Use hstack to stack arrays in sequence horizontally (column wise).
End of explanation
"""
print(x + y) # elementwise addition [1 2 3] + [4 5 6] = [5 7 9]
print(x - y) # elementwise subtraction [1 2 3] - [4 5 6] = [-3 -3 -3]
print(x * y) # elementwise multiplication [1 2 3] * [4 5 6] = [4 10 18]
print(x / y) # elementwise divison [1 2 3] / [4 5 6] = [0.25 0.4 0.5]
print(x**2) # elementwise power [1 2 3] ^2 = [1 4 9]
"""
Explanation: <br>
Operations
Use +, -, *, / and ** to perform element wise addition, subtraction, multiplication, division and power.
End of explanation
"""
x.dot(y) # dot product 1*4 + 2*5 + 3*6
z = np.array([y, y**2])
print(len(z)) # number of rows of array
"""
Explanation: <br>
Dot Product:
$ \begin{bmatrix}x_1 \ x_2 \ x_3\end{bmatrix}
\cdot
\begin{bmatrix}y_1 \ y_2 \ y_3\end{bmatrix}
= x_1 y_1 + x_2 y_2 + x_3 y_3$
End of explanation
"""
z = np.array([y, y**2])
z
"""
Explanation: <br>
Let's look at transposing arrays. Transposing permutes the dimensions of the array.
End of explanation
"""
z.shape
"""
Explanation: <br>
The shape of array z is (2,3) before transposing.
End of explanation
"""
z.T
"""
Explanation: <br>
Use .T to get the transpose.
End of explanation
"""
z.T.shape
"""
Explanation: <br>
The number of rows has swapped with the number of columns.
End of explanation
"""
z.dtype
"""
Explanation: <br>
Use .dtype to see the data type of the elements in the array.
End of explanation
"""
z = z.astype('f')
z.dtype
"""
Explanation: <br>
Use .astype to cast to a specific type.
End of explanation
"""
a = np.array([-4, -2, 1, 3, 5])
a.sum()
a.max()
a.min()
a.mean()
a.std()
"""
Explanation: <br>
Math Functions
Numpy has many built in math functions that can be performed on arrays.
End of explanation
"""
a.argmax()
a.argmin()
"""
Explanation: <br>
argmax and argmin return the index of the maximum and minimum values in the array.
End of explanation
"""
s = np.arange(13)**2
s
"""
Explanation: <br>
Indexing / Slicing
End of explanation
"""
s[0], s[4], s[-1]
"""
Explanation: <br>
Use bracket notation to get the value at a specific index. Remember that indexing starts at 0.
End of explanation
"""
s[1:5]
"""
Explanation: <br>
Use : to indicate a range. array[start:stop]
Leaving start or stop empty will default to the beginning/end of the array.
End of explanation
"""
s[-4:]
"""
Explanation: <br>
Use negatives to count from the back.
End of explanation
"""
s[-5::-2]
"""
Explanation: <br>
A second : can be used to indicate step-size. array[start:stop:stepsize]
Here we are starting 5th element from the end, and counting backwards by 2 until the beginning of the array is reached.
End of explanation
"""
r = np.arange(36)
r.resize((6, 6))
r
"""
Explanation: <br>
Let's look at a multidimensional array.
End of explanation
"""
r[2, 2]
"""
Explanation: <br>
Use bracket notation to slice: array[row, column]
End of explanation
"""
r[3, 3:6]
"""
Explanation: <br>
And use : to select a range of rows or columns
End of explanation
"""
r[:2, :-1]
"""
Explanation: <br>
Here we are selecting all the rows up to (and not including) row 2, and all the columns up to (and not including) the last column.
End of explanation
"""
r[-1, ::2]
"""
Explanation: <br>
This is a slice of the last row, and only every other element.
End of explanation
"""
r[r > 30]
"""
Explanation: <br>
We can also perform conditional indexing. Here we are selecting values from the array that are greater than 30. (Also see np.where)
End of explanation
"""
r[r > 30] = 30
r
"""
Explanation: <br>
Here we are assigning all values in the array that are greater than 30 to the value of 30.
End of explanation
"""
r2 = r[:3,:3]
r2
"""
Explanation: <br>
Copying Data
Be careful with copying and modifying arrays in NumPy!
r2 is a slice of r
End of explanation
"""
r2[:] = 0
r2
"""
Explanation: <br>
Set this slice's values to zero ([:] selects the entire array)
End of explanation
"""
r
"""
Explanation: <br>
r has also been changed!
End of explanation
"""
r_copy = r.copy()
r_copy
"""
Explanation: <br>
To avoid this, use r.copy() to create a copy that will not affect the original array
End of explanation
"""
r_copy[:] = 10
print(r_copy, '\n')
print(r)
"""
Explanation: <br>
Now when r_copy is modified, r will not be changed.
End of explanation
"""
test = np.random.randint(0, 10, (4,3))
test
"""
Explanation: <br>
Iterating Over Arrays
Let's create a new 4 by 3 array of random numbers 0-9.
End of explanation
"""
for row in test:
print(row)
"""
Explanation: <br>
Iterate by row:
End of explanation
"""
for i in range(len(test)):
print(test[i])
"""
Explanation: <br>
Iterate by index:
End of explanation
"""
for i, row in enumerate(test):
print('row', i, 'is', row)
"""
Explanation: <br>
Iterate by row and index:
End of explanation
"""
test2 = test**2
test2
for i, j in zip(test, test2):
print(i,'+',j,'=',i+j)
"""
Explanation: <br>
Use zip to iterate over multiple iterables.
End of explanation
"""
|
paulovn/ml-vm-notebook | vmfiles/IPNB/Examples/a Basic/03 Matplotlib essentials.ipynb | bsd-3-clause | %matplotlib inline
"""
Explanation: Matplotlib
This notebook is (will be) a small crash course on the functionality of the Matplotlib Python module for creating graphs (and embedding them in notebooks). It is of course no substitute for the thorough official Matplotlib documentation.
Initialization
We need to add a bit of IPython magic to tell the notebook backend that we want to display all graphs within the notebook. Otherwise they would generate objects instead of displaying into the interface; objects that we later can output to file or display explicitly with plt.show().
This is done by the following declaration:
End of explanation
"""
import matplotlib.pyplot as plt
"""
Explanation: Now we need to import the library in our notebook. There are a number of different ways to do it, depending on what part of matplotlib we want to import and how it should be imported into the namespace. This is one of the most common ones; it means that we will use the plt. prefix to refer to the Matplotlib API
End of explanation
"""
from __future__ import print_function
print(plt.style.available)
# Let's choose one style. And while we are at it, define thicker lines and big graphic sizes
plt.style.use('bmh')
plt.rcParams['lines.linewidth'] = 1.5
plt.rcParams['figure.figsize'] = (15, 5)
"""
Explanation: Matplotlib allows extensive customization of the graph aspect. Some of these customizations come together in "styles". Let's see which styles are available:
End of explanation
"""
import numpy as np
x = np.arange( -10, 11 )
y = x*x
"""
Explanation: Simple plots
Without much more ado, let's display a simple graphic. For that we define a vector variable, and a function of that vector to be plotted
End of explanation
"""
plt.plot(x,y)
plt.xlabel('x');
plt.ylabel('x square');
"""
Explanation: And we plot it
End of explanation
"""
plt.plot(x,y,'ro-');
"""
Explanation: We can extensively alter the aspect of the plot. For instance, we can add markers and change color:
End of explanation
"""
# Create a figure object
fig = plt.figure()
# Add a graph to the figure. We get an axes object
ax = fig.add_subplot(1, 1, 1) # specify (nrows, ncols, axnum)
# Create two vectors: x, y
x = np.linspace(0, 10, 1000)
y = np.sin(x)
# Plot those vectors on the axes we have
ax.plot(x, y)
# Add another plot to the same axes
y2 = np.cos(x)
ax.plot(x, y2)
# Modify the axes
ax.set_ylim(-1.5, 1.5)
# Add labels
ax.set_xlabel("$x$")
ax.set_ylabel("$f(x)$")
ax.set_title("Sinusoids")
# Add a legend
ax.legend(['sine', 'cosine']);
"""
Explanation: Matplotlib syntax
Matplotlib commands have two variants:
* A declarative syntax, with direct plotting commands. It is inspired by Matlab graphics syntax, so if you know Matlab it will be easy. It is the one used above.
* An object-oriented syntax, more complicated but somehow more powerful
The next cell shows an example of the object-oriented syntax
End of explanation
"""
|
aoool/traffic-sign-classifier | Traffic_Sign_Classifier.ipynb | mit | # Load pickled data
import pickle
import pandas as pd
# Data's location
training_file = "traffic-sign-data/train.p"
validation_file = "traffic-sign-data/valid.p"
testing_file = "traffic-sign-data/test.p"
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
# features and labels
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
# Sign id<->name mapping
sign_names = pd.read_csv('signnames.csv').to_dict(orient='index')
sign_names = { key : val['SignName'] for key, val in sign_names.items() }
"""
Explanation: Self-Driving Car Engineer Nanodegree
Deep Learning
Project: Build a Traffic Sign Recognition Classifier
Author: Sergey Morozov
In this notebook, a traffic sign classifier is implemented. German Traffic Sign Dataset is used to train the model. There is a write-up where different stages of the implementation are described including analysis of the pros and cons of the chosen approaches and suggestions for further improvements.
Step 0: Load The Data
End of explanation
"""
import numpy as np
# Number of training examples
n_train = len(X_train)
# Number of testing examples.
n_test = len(X_test)
# Number of validation examples.
n_valid = len(X_valid)
# What's the shape of a traffic sign image?
image_shape = X_train.shape[1:]
# How many unique classes/labels there are in the dataset.
n_classes = len(np.unique(y_train))
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Number of validation examples =", n_valid)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
"""
Explanation: Step 1: Dataset Summary & Exploration
The pickled data is a dictionary with 4 key/value pairs:
'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
'sizes' is a list containing tuples, (width, height) representing the original width and height the image.
'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES.
A Basic Summary of the Dataset
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.font_manager as fm
plt.rcdefaults()
fig, ax = plt.subplots()
samples_per_category = [len(np.where(y_train==cat_id)[0]) for cat_id in sign_names.keys()]
category_names = tuple([val + " [ id:{id} ]".format(id=key) for key,val in sign_names.items()])
min_cnt = min(samples_per_category)
max_cnt = max(samples_per_category)
y_pos = np.arange(len(category_names))
rects = ax.barh(y_pos,
samples_per_category,
align='center',
color=['green' if val != min_cnt and val != max_cnt \
else 'yellow' if val == min_cnt \
else 'red' for val in samples_per_category])
# setting labels for each bar
for i in range(0,len(rects)):
ax.text(int(rects[i].get_width()),
int(rects[i].get_y()+rects[i].get_height()/2.0),
samples_per_category[i],
fontproperties=fm.FontProperties(size=5))
ax.set_yticks(y_pos)
ax.set_yticklabels(category_names,fontproperties=fm.FontProperties(size=5))
ax.invert_yaxis()
ax.set_title('Samples per Category')
plt.show()
"""
Explanation: An Exploratory Visualization of the Dataset
Number of Samples in Each Category
The categories with the minimum/maximum number of samples are marked with yellow/red, respectively.
End of explanation
"""
import random
import numpy as np
import matplotlib.pyplot as plt
import math
# Visualizations will be shown in the notebook.
%matplotlib inline
h_or_w = image_shape[0]
fig = plt.figure(figsize=(h_or_w,h_or_w))
for i in range(0, n_classes):
samples = np.where(y_train==i)[0]
index = random.randint(0, len(samples) - 1)
image = X_train[samples[index]]
ax = fig.add_subplot(math.ceil(n_classes/5), 5, i+1)
ax.set_title(sign_names[i])
ax.set_ylabel("id: {id}".format(id=i))
plt.imshow(image)
plt.show()
"""
Explanation: Random Image from Each Category
Output a sample image from each category. Note, that images will be transformed before they are passed to neural network.
End of explanation
"""
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
"""
Explanation: Step 2: Design and Test a Model Architecture
Design and implement a deep learning model that learns to recognize traffic signs. The LeNet-5 CNN architecture is used here with a minor modification: a dropout layer added before the first fully connected layer.
Pre-process the Data Set (normalization, grayscale, etc.)
Shuffle Data
End of explanation
"""
import cv2
def prepare_image(image_set):
"""Transform initial set of images so that they are ready to be fed to neural network.
(1) normalize image
(2) convert RGB image to gray scale
"""
# initialize empty image set for prepared images
new_shape = image_shape[0:2] + (1,)
prep_image_set = np.empty(shape=(len(image_set),) + new_shape, dtype=int)
for ind in range(0, len(image_set)):
# normalize
norm_img = cv2.normalize(image_set[ind], np.zeros(image_shape[0:2]), 0, 255, cv2.NORM_MINMAX)
# grayscale
gray_img = cv2.cvtColor(norm_img, cv2.COLOR_RGB2GRAY)
# set new image to the corresponding position
prep_image_set[ind] = np.reshape(gray_img, new_shape)
return prep_image_set
def equalize_number_of_samples(image_set, image_labels):
"""Make number of samples in each category equal.
The data set has different number of samples for each category.
This function will transform the data set in a way that each category
will contain the number of samples equal to maximum samples per category
from the initial set. This gives an equal probability of encountering a
traffic sign of each category during the training process.
"""
num = max([len(np.where(image_labels==cat_id)[0]) for cat_id in sign_names.keys()])
equalized_image_set = np.empty(shape=(num * n_classes,) + image_set.shape[1:], dtype=int)
equalized_image_labels = np.empty(shape=(num * n_classes,), dtype=int)
j = 0
for cat_id in sign_names.keys():
cat_inds = np.where(image_labels==cat_id)[0]  # use the labels passed in, not the global y_train
cat_inds_len = len(cat_inds)
for i in range(0, num):
equalized_image_set[j] = image_set[cat_inds[i % cat_inds_len]]
equalized_image_labels[j] = image_labels[cat_inds[i % cat_inds_len]]
j += 1
# at this stage data is definitely not randomly shuffled, so shuffle it
return shuffle(equalized_image_set, equalized_image_labels)
X_train_prep = prepare_image(X_train)
X_test_prep = prepare_image(X_test)
X_valid_prep = prepare_image(X_valid)
X_train_prep, y_train_prep = equalize_number_of_samples(X_train_prep, y_train)
# we do not need to transform labes for validation and test sets
y_test_prep = y_test
y_valid_prep = y_valid
image_shape_prep = X_train_prep[0].shape
"""
Explanation: Prepare Input Images
End of explanation
"""
# LeNet-5 architecture is used.
import tensorflow as tf
from tensorflow.contrib.layers import flatten
def LeNet(x, channels, classes, keep_prob, mu=0, sigma=0.01):
# Arguments used for tf.truncated_normal, randomly defines variables
# for the weights and biases for each layer
# Layer 1: Convolutional. Input = 32x32xchannels. Output = 28x28x6.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, channels, 6), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(6))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# Layer 1: Activation.
conv1 = tf.nn.relu(conv1)
# Layer 1: Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# Layer 2: Activation.
conv2 = tf.nn.relu(conv2)
# Layer 2: Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# Layer 2: Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
fc0 = tf.nn.dropout(fc0, keep_prob=keep_prob)
# Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# Layer 3: Activation.
fc1 = tf.nn.relu(fc1)
# Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# Layer 4: Activation.
fc2 = tf.nn.relu(fc2)
# Layer 5: Fully Connected. Input = 84. Output = classes.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, classes), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(classes))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
"""
Explanation: Model Architecture
End of explanation
"""
# x is a placeholder for a batch of input images
x = tf.placeholder(tf.float32, (None,) + image_shape_prep)
# y is a placeholder for a batch of output labels
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, n_classes)
"""
Explanation: Train, Validate and Test the Model
A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation
sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
Features and Labels
End of explanation
"""
# hyperparameters of the training process
RATE = 0.0008
EPOCHS = 30
BATCH_SIZE = 128
KEEP_PROB = 0.7
STDDEV = 0.01
keep_prob = tf.placeholder(tf.float32)
logits = LeNet(x, image_shape_prep[-1], n_classes, keep_prob, sigma=STDDEV)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = RATE)
training_operation = optimizer.minimize(loss_operation)
"""
Explanation: Training Pipeline
End of explanation
"""
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 1})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
"""
Explanation: Model Evaluation
End of explanation
"""
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train_prep)
print("Training...")
print()
for i in range(EPOCHS):
X_train_prep, y_train_prep = shuffle(X_train_prep, y_train_prep)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train_prep[offset:end], y_train_prep[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: KEEP_PROB})
train_accuracy = evaluate(X_train_prep, y_train_prep)
validation_accuracy = evaluate(X_valid_prep, y_valid_prep)
print("EPOCH {} ...".format(i+1))
print("Train Accuracy = {:.3f}".format(train_accuracy))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './model.ckpt')
print("Model saved")
"""
Explanation: Train the Model
End of explanation
"""
with tf.Session() as sess:
saver.restore(sess, './model.ckpt')
test_accuracy = evaluate(X_test_prep, y_test_prep)
print("Test Accuracy = {:.3f}".format(test_accuracy))
"""
Explanation: Evaluate Trained Model Using Test Samples
End of explanation
"""
import os
import cv2
import matplotlib.image as mpimg
img_paths = os.listdir("traffic-sign-images")
images = list()
labels = list()
# read images and resize
for img_path in img_paths:
# read image from file
img = mpimg.imread(os.path.join("traffic-sign-images", img_path))
img = cv2.resize(img, image_shape[0:2], interpolation=cv2.INTER_CUBIC)
images.append(img)
# prefix of each image name is a number of its category
labels.append(int(img_path[0:img_path.find('-')]))
images = np.array(images)
labels = np.array(labels)
# output the resized images
h_or_w = image_shape[0]
fig = plt.figure(figsize=(h_or_w,h_or_w))
for i in range(0, len(images)):
ax = fig.add_subplot(1, len(images), i+1)
ax.set_title(sign_names[labels[i]])
ax.set_ylabel("id: {id}".format(id=labels[i]))
plt.imshow(images[i])
plt.show()
"""
Explanation: Step 3: Test a Model on New Images
It is time to apply the trained model to the German trafic sign images that were obtained from the Internet.
Load and Output the Images
End of explanation
"""
# preprocess images first
images_prep = prepare_image(images)
labels_prep = labels
# then make a prediction
with tf.Session() as sess:
saver.restore(sess, './model.ckpt')
sign_ids = sess.run(tf.argmax(logits, 1), feed_dict={x: images_prep, y: labels_prep, keep_prob: 1})
# output the results in the table
print('-' * 93)
print("| {p:^43} | {a:^43} |".format(p='PREDICTED', a='ACTUAL'))
print('-' * 93)
for i in range(len(sign_ids)):
print('| {p:^2} {strp:^40} | {a:^2} {stra:^40} |'.format(
p=sign_ids[i], strp=sign_names[sign_ids[i]], a=labels[i], stra=sign_names[labels[i]]))
print('-' * 93)
"""
Explanation: Predict the Sign Type for Each Image
End of explanation
"""
# run evaluation on the new images
with tf.Session() as sess:
saver.restore(sess, './model.ckpt')
test_accuracy = evaluate(images_prep, labels_prep)
print("Accuracy = {:.3f}".format(test_accuracy))
"""
Explanation: Analyze Performance
End of explanation
"""
# Print out the top five softmax probabilities for the predictions on
# the German traffic sign images found on the web.
with tf.Session() as sess:
saver.restore(sess, './model.ckpt')
top_k = sess.run(tf.nn.top_k(tf.nn.softmax(logits), k=5),
feed_dict={x: images_prep, y: labels_prep, keep_prob: 1})
print(top_k)
plt.rcdefaults()
# show histogram of top 5 softmax probabilities for each image
h_or_w = image_shape[0]
fig = plt.figure()
for i in range(0, len(images)):
ax = fig.add_subplot(len(images), 1, i+1)
probabilities = top_k.values[i]
y_pos = np.arange(len(probabilities))
ax.set_ylabel("actual id: {id}".format(id=labels[i]), fontproperties=fm.FontProperties(size=5))
rects = ax.barh(y_pos,
probabilities,
align='center',
color='blue')
# setting labels for each bar
for j in range(0,len(rects)):
ax.text(int(rects[j].get_width()),
int(rects[j].get_y()+rects[j].get_height()/2.0),
probabilities[j],
fontproperties=fm.FontProperties(size=5), color='red')
ax.set_yticks(y_pos)
ax.set_yticklabels(top_k.indices[i], fontproperties=fm.FontProperties(size=5))
xticks = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
ax.set_xticks(xticks)
ax.set_xticklabels(xticks, fontproperties=fm.FontProperties(size=5))
ax.invert_yaxis()
plt.tight_layout()
plt.show()
"""
Explanation: Top 5 Softmax Probabilities For Each Image Found on the Web
End of explanation
"""
|
UDST/activitysim | activitysim/examples/example_estimation/notebooks/11_joint_tour_composition.ipynb | bsd-3-clause | import os
import larch # !conda install larch -c conda-forge # for estimation
import pandas as pd
"""
Explanation: Estimating Joint Tour Composition
This notebook illustrates how to re-estimate a single model component for ActivitySim. This process
includes running ActivitySim in estimation mode to read household travel survey files and write out
the estimation data bundles used in this notebook. To review how to do so, please visit the other
notebooks in this directory.
Load libraries
End of explanation
"""
os.chdir('test')
"""
Explanation: We'll work in our test directory, where ActivitySim has saved the estimation data bundles.
End of explanation
"""
modelname = "joint_tour_composition"
from activitysim.estimation.larch import component_model
model, data = component_model(modelname, return_data=True)
"""
Explanation: Load data and prep model for estimation
End of explanation
"""
data.coefficients
"""
Explanation: Review data loaded from the EDB
The next step is to read the EDB, including the coefficients, model settings, utilities specification, and chooser and alternative data.
Coefficients
End of explanation
"""
data.spec
"""
Explanation: Utility specification
End of explanation
"""
data.chooser_data
"""
Explanation: Chooser data
End of explanation
"""
model.estimate(method='SLSQP')
"""
Explanation: Estimate
With the model setup for estimation, the next step is to estimate the model coefficients. Make sure to use a sufficiently large household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood-maximizing solution. Larch has built-in estimation methods including BHHH, and also offers access to more advanced general-purpose non-linear optimizers in the scipy package, including SLSQP, which allows for bounds and constraints on parameters. BHHH is the default and typically runs faster, but does not follow constraints on parameters.
End of explanation
"""
model.parameter_summary()
"""
Explanation: Note that in the example data for this model, there are only 91 joint tours, which is an insufficient
number of observations to successfully estimate all 31 parameters in this model.
Estimated coefficients
End of explanation
"""
from activitysim.estimation.larch import update_coefficients
result_dir = data.edb_directory/"estimated"
update_coefficients(
model, data, result_dir,
output_file=f"{modelname}_coefficients_revised.csv",
);
"""
Explanation: Output Estimation Results
End of explanation
"""
model.to_xlsx(
result_dir/f"{modelname}_model_estimation.xlsx",
data_statistics=False,
)
"""
Explanation: Write the model estimation report, including coefficient t-statistic and log likelihood
End of explanation
"""
pd.read_csv(result_dir/f"{modelname}_coefficients_revised.csv")
"""
Explanation: Next Steps
The final step is to either manually or automatically copy the *_coefficients_revised.csv file to the configs folder, rename it to *_coefficients.csv, and run ActivitySim in simulation mode.
End of explanation
"""
|
CSC-IT-Center-for-Science/kajaani-science-days-workshop | data-analytics.ipynb | mit | # Read in the spells that set up the environment
from pandas import DataFrame, Series, read_csv
from numpy import vstack, round, random
from bokeh.plotting import figure, show, output_notebook, hplot
from bokeh.charts import Bar, Scatter
from bokeh._legacy_charts import HeatMap
from bokeh.palettes import YlOrRd9
output_notebook()
import warnings
warnings.filterwarnings("ignore")
# Load the data file
data = read_csv('https://raw.githubusercontent.com/CSC-IT-Center-for-Science/kajaani-science-days-workshop/master/data.csv', sep=';', decimal=',')
# See what the data looks like
data
"""
Explanation: Data Analysis Workshop at the Kajaani Science Days
What is data analysis? Data analysis means inferring something new from data. For example, based on measurement data one can conclude that a new drug appears to lower blood pressure.
So what is data, then? These days data can be anything that is available in digital form. Traditionally, data has consisted of scientific observations diligently written down, for example as some kind of table. That is the case in the blood-pressure example above. Today, however, a lot of analysis is already done on, say, real-time video. A good example is a drone that flies along power lines and uses its video feed to analyze when the snow load becomes dangerously heavy.
What is data analysis needed for? If the visionaries are to be believed, soon for just about everything. In science, analyzing data has been central since at least the early 20th century. This traditional analytics of science and expert work is now being joined by a new user base, as everyday data-analysis needs have, frankly, exploded. Internet-era companies such as Facebook and Google are driving the rapid development of this new data analytics. In the business world, so-called Big Data is a very hot topic right now.
In any case, it is clear that in the future data analysis will be done much more and much more widely: not only in research institutes, but also in ordinary companies, government agencies and associations. Learning at least the basics gives you quite an advantage for the future.
Let's get started
To analyze data, you first have to load some data. The code snippet below does exactly that.
The code is run by clicking the gray box, which selects it. Choose Cell -> Run from the top menu and the code starts; an asterisk appears on the In line while it runs. When it finishes, the results appear below. In this case the loaded data should appear as a table.
From here on you can also run code more conveniently by pressing Ctrl and Enter.
End of explanation
"""
show(Bar(data, label='Kuukausi', values='Jaatelomyynti'))
"""
Explanation: What does the data look like?
Data analysis usually begins by drawing the data as a chart, i.e. visualizing it. Or, to be honest, it usually begins with a fight to get the data cleaned up, into the right shape and loaded onto the machine. But after that, we visualize.
Our data table contains a bit of everything from the past ten years. Its basic idea is that every piece of information is tied to a month. So for each month there are various measurements and other data, such as the month's mean temperature from the Finnish Meteorological Institute, sales statistics, and word frequencies in discussions on the Suomi24 forum.
Let's plot the monthly ice-cream sales found in the table. The data covers a long period, so the visualization is hard to read as such, but you can pan it and in that way make it easier to understand.
End of explanation
"""
data2005 = data[0:12]
show(Bar(data2005, label='Kuukausi', values='Jaatelomyynti'))
"""
Explanation: Explore the visualization above for a while. What is the basic pattern of ice-cream sales?
Randomness
Let's look at the first year more closely. What else do you notice besides the effect of summer?
End of explanation
"""
talvikuukaudet = [i % 12 in (11, 0, 1) for i in range(120)]
datajoulu = data.copy()
datajoulu['Joulu'] = Series(i[-2:] == '12' for i in data['Kuukausi'])  # True for Decembers
show(Bar(datajoulu[talvikuukaudet], label='Kuukausi', values='Jaatelomyynti', title='Winter sales', group='Joulu'))
"""
Explanation: December sales are also elevated. Is a lot of ice cream eaten at Christmas? Perhaps.
However, we only looked at a single year, and a single observation should not be trusted. Unfortunately, it is impossible to see from the big chart what December sales are on average compared to the other winter months. We can, however, conveniently pick the desired rows from the data and draw new visualizations.
End of explanation
"""
show(Bar(data, label='Kuukausi', values='Jaatelomyynti'))
show(Bar(data, label='Kuukausi', values='Allergialaakemyynti'))
"""
Explanation: Now we see that sales in the winter months vary quite a lot, and the Decembers drawn in red hardly differ from the other months. In proper data analysis, statistical tests are used to deal with random variation. A simple visualization like this, however, already helps you gauge by eye how much random variation there is in the data and form some kind of judgement about whether an observed value really is exceptional.
Combining data
A data analyst is often interested in what kinds of connections exist between two different things. In our table, that in practice means asking whether interesting relationships can be found between the values of different columns.
Let's plot two different quantities from the data: ice-cream sales and allergy medicine sales. Can you see anything from these charts?
End of explanation
"""
show(Scatter(data, x='Jaatelomyynti', y='Allergialaakemyynti'))
"""
Explanation: Plotted directly, you can see that both have a clear pattern that repeats yearly. But do they line up, and if they do, how strongly?
For that we switch to a different kind of chart, namely a scatter plot. It puts these two quantities on the x and y axes. Each dot corresponds to one month, and its x and y coordinates are taken from the columns for ice-cream and allergy-medicine sales.
End of explanation
"""
datakesa = data.copy()
datakesa['Kesa'] = Series(i[-2:] in ('06', '07', '08') for i in data['Kuukausi'])
show(Scatter(datakesa, x='Jaatelomyynti', y='Allergialaakemyynti', color='Kesa'))
"""
Explanation: We see that these two things go hand in hand, in a way. When ice-cream sales are high, allergy medicine sales are high as well. So the scatter plot looks like a swarm of dots flying from the bottom-left corner towards the top-right corner.
Can we then conclude that one causes the other? That eating lots of ice cream causes hay fever? Or that people with hay fever treat it by slurping ice cream? Because that is what the data would seem to be saying?
The relationship between these two variables is not that simple, however; there is a third wheel involved. Let's draw the same scatter plot again so that the summer months get a green color.
End of explanation
"""
data.columns.values.tolist()[1:10]
"""
Explanation: Mystery solved! It seems that summer weather causes both a rise in ice-cream sales and hay fever.
Finding new connections
So far we have not touched the code. But next you get to explore the connections between different columns yourself. Let's first see what the columns in the data are called.
End of explanation
"""
# Change the values between the quotes below
# Do not remove the quotes or add spaces inside them
sarake1 = 'Jaatelomyynti'
sarake2 = 'Lampotila'
# And plot
show(Scatter(data, x=sarake1, y=sarake2))
"""
Explanation: Below, for example, the connection between ice-cream sales and the monthly mean temperature is plotted. You can change the column names and run the code again to see the connection between the columns you chose.
End of explanation
"""
show(HeatMap(data.corr(), title="Column relationships (correlation)", palette=YlOrRd9[::-1]))
"""
Explanation: You can, for example, try the frequencies of Suomi24 forum words and compare them with each other, or with the other columns.
A bit more automation
The search for connections between columns can also be automated. For example, we can measure the connection between columns with the so-called correlation and in this way compare all the columns with each other. The result is drawn as a heat map, where a dark color corresponds to a strong correlation.
End of explanation
"""
|
astroumd/GradMap | notebooks/Haiti2016/Math.ipynb | gpl-3.0 | # setting a variable
a = 1.23
# just writing the variable will show its value, but this is not the recommended
# way, because per cell only the last one will be printed and stored in the out[]
# list that the notebook maintains
a
a+1
# the right way to print is using the official **print** function in python
# and this way you can also print out multiple lines in the out[]
print(a)
print(type(a),str(a))
b=2
print(b)
# overwriting the same variable , now as a string
a="1.23"
a,type(a)
# checking the value of the variable
a
"""
Explanation: Some very basic python
Showing some very basic python, variables, arrays, math and plotting:
End of explanation
"""
from __future__ import print_function
"""
Explanation: Python versions
Python2 and Python3 are still being used today. So safeguard printing between python2 and python3 you will need a special import for old python2:
End of explanation
"""
pi = 3.1415
print("pi=",pi)
print("pi=%15.10f" % pi)
# for reference, here is the old style of printing in python2
# print "pi=",pi
"""
Explanation: Now we can print(pi) in python2. The old style would be print pi
End of explanation
"""
n = 1
if n > 0:
print("yes, n>0")
else:
print("not")
for i in [2,4,n,6]:
print("i=",i)
print("oulala, i=",i)
n = 10
while n>0:
# n = n - 2
print("whiling",n)
n = n - 2
print("last n",n)
"""
Explanation: Control structures
Most programming languages have a way to control the flow of the program. The common ones are
if/then/else
for-loop
while-loop
End of explanation
"""
a1 = [1,2,3,4]
a2 = list(range(1,5))  # wrap range() in list() so its values print under Python 3
print(a1)
print(a2)
a2 = ['a',1,'cccc']
print(a1)
print(a2)
a3 = list(range(12,20,2))
print(a3)
a1=list(range(3))
a2=list(range(1,4))
print(a1,a2)
a1+a2  # list concatenation; bare range objects do not support + in Python 3
"""
Explanation: Python Data Structures
A list is one of four major data structures (lists, dictionaries, sets, tuples) that python uses. It is the most simple one, and has direct parallels to those in other languages such as Fortran, C/C++, Java etc.
Python Lists
Python uses special symbols to make up these collection, briefly they are:
* list: [1,2,3]
* dictionary: { "a":1 , "b":2 , "c": 3}
* set: {1,2,3,"abc"}
* tuple: (1,2,3)
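As a quick hypothetical tour (these example values are not from the notebook), here is each of the four collection types in action:

```python
# list: ordered and mutable
nums = [1, 2, 3]
# dictionary: maps keys to values
counts = {"a": 1, "b": 2, "c": 3}
# set: unordered, no duplicate members
uniques = {1, 2, 3, "abc"}
# tuple: ordered but immutable
triple = (1, 2, 3)

nums.append(4)      # lists can grow
counts["d"] = 4     # dictionaries can gain new keys
print(nums, counts["d"], 2 in uniques, triple[0])
```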
End of explanation
"""
import math
import numpy as np
math.pi
np.pi
# %matplotlib inline
import matplotlib.pyplot as plt
a=np.arange(0,1,0.01)
b = a*a
c = np.sqrt(a)
plt.plot(a,b,'-bo',label='b')
plt.plot(a,c,'-ro',label='c')
plt.legend()
plt.plot(a,a+1)
plt.show()
"""
Explanation: Math and Numeric Arrays
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.20/_downloads/142c866d928b3d3a3a76c80e0ef4ea81/plot_rereference_eeg.ipynb | bsd-3-clause | # Authors: Marijn van Vliet <w.m.vanvliet@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
from matplotlib import pyplot as plt
print(__doc__)
# Setup for reading the raw data
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id, tmin, tmax = 1, -0.2, 0.5
# Read the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True)
events = mne.read_events(event_fname)
# The EEG channels will be plotted to visualize the difference in referencing
# schemes.
picks = mne.pick_types(raw.info, meg=False, eeg=True, eog=True, exclude='bads')
"""
Explanation: Re-referencing the EEG signal
This example shows how to load raw data and apply some EEG referencing schemes.
End of explanation
"""
reject = dict(eog=150e-6)
epochs_params = dict(events=events, event_id=event_id, tmin=tmin, tmax=tmax,
picks=picks, reject=reject, proj=True)
fig, (ax1, ax2, ax3) = plt.subplots(nrows=3, ncols=1, sharex=True)
# We first want to plot the data without any added reference (i.e., using only
# the reference that was applied during recording of the data).
# However, this particular data already has an average reference projection
# applied that we now need to remove again using :func:`mne.set_eeg_reference`
raw, _ = mne.set_eeg_reference(raw, []) # use [] to remove average projection
evoked_no_ref = mne.Epochs(raw, **epochs_params).average()
evoked_no_ref.plot(axes=ax1, titles=dict(eeg='Original reference'), show=False,
time_unit='s')
# Now we want to plot the data with an average reference, so let's add the
# projection we removed earlier back to the data. Note that we can use
# "set_eeg_reference" as a method on the ``raw`` object as well.
raw.set_eeg_reference('average', projection=True)
evoked_car = mne.Epochs(raw, **epochs_params).average()
evoked_car.plot(axes=ax2, titles=dict(eeg='Average reference'), show=False,
time_unit='s')
# Re-reference from an average reference to the mean of channels EEG 001 and
# EEG 002.
raw.set_eeg_reference(['EEG 001', 'EEG 002'])
evoked_custom = mne.Epochs(raw, **epochs_params).average()
evoked_custom.plot(axes=ax3, titles=dict(eeg='Custom reference'),
time_unit='s')
"""
Explanation: We will now apply different EEG referencing schemes and plot the resulting
evoked potentials. Note that when we construct epochs with mne.Epochs, we
supply the proj=True argument. This means that any available projectors
are applied automatically. Specifically, if there is an average reference
projector set by raw.set_eeg_reference('average', projection=True), MNE
applies this projector when creating epochs.
End of explanation
"""
|
diging/methods | 1.2 Change and difference/1.2.4 Comparing word use between corpora.ipynb | gpl-3.0 | from tethne.readers import wos
pj_corpus = wos.read('../data/Baldwin/PlantJournal/')
pp_corpus = wos.read('../data/Baldwin/PlantPhysiology/')
"""
Explanation: 1.2.4. Comparing word use between corpora
In previous notebooks we examined changes in word use over time using several different statistical approaches. In this notebook, we will examine differences in word use between two different corpora.
Web of Science dataset
In this notebook we will use data retrieved from the ISI Web of Science database. One corpus is from the journal Plant Journal over the period 1991-2013. The other corpus is from the journal Plant Physiology, 1991-2013. Each corpus is comprised of several WoS field-tagged metadata files contained in a folder.
Tethne's WoS parser can load all of the data files in a single directory all at once. This may take a few minutes, since Tethne goes to a lot of trouble in indexing all of the records for easy access later on.
End of explanation
"""
word_counts = nltk.ConditionalFreqDist([
(paper.journal, normalize_token(token))
for paper in chain(pj_corpus, pp_corpus) # chain() strings the two corpora together.
for token in nltk.word_tokenize(getattr(paper, 'abstract', ''))
if filter_token(token)
])
"""
Explanation: Conditional frequency distribution
This next step should look familiar. We will create a conditional frequency distribution for words in these two corpora. We have two conditions: the journal is Plant Physiology and the journal is Plant Journal.
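As a minimal sketch of how a conditional frequency distribution works (toy pairs, not the journal data): each observation is a (condition, token) pair, and the distribution counts tokens separately per condition.

```python
import nltk

# Two conditions, 'A' and 'B'; 'apple' occurs twice under 'A' and once under 'B'.
pairs = [('A', 'apple'), ('A', 'pear'), ('A', 'apple'), ('B', 'apple')]
cfd = nltk.ConditionalFreqDist(pairs)
print(cfd['A']['apple'], cfd['B']['apple'])
```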
End of explanation
"""
# Don't run this without setting ``samples``!
word_counts.tabulate(samples=['photosynthesis', 'growth', 'stomatal'])
"""
Explanation: Now we can use tabulate to generate a contingency table showing the number of times each word is used within each journal.
End of explanation
"""
plant_jour_photosynthesis = word_counts['PLANT JOURNAL']['photosynthesis']
plant_jour_notphotosynthesis = word_counts['PLANT JOURNAL'].N() - plant_jour_photosynthesis
plant_phys_photosynthesis = word_counts['PLANT PHYSIOLOGY']['photosynthesis']
plant_phys_notphotosynthesis = word_counts['PLANT PHYSIOLOGY'].N() - plant_phys_photosynthesis
# Create a 2x2 array.
contingency_table = np.array([[plant_jour_photosynthesis, plant_jour_notphotosynthesis],
[plant_phys_photosynthesis, plant_phys_notphotosynthesis]],
dtype=int)
contingency_table
"""
Explanation: Is there a difference?
As a first step, we may wish to establish whether or not there is a difference between the two corpora. In this simplistic example, we will compare the rate at which a specific word is used in the two journals. In practice, your comparisons will probably be more sophisticated -- but this is a starting point.
So: Is the term photosynthesis used disproportionately in Plant Physiology compared to Plant Journal?
$H_0: P("photosynthesis" \Bigm|J = "Plant Journal") = P("photosynthesis" \Bigm| J="Plant Physiology")$
To test this hypothesis, we will use Dunning's log-likelihood ratio, which is a popular metric in text analysis. In a nutshell, we want to assess whether or not the relative use of the term "photosynthesis" is sufficiently skewed to reject the null hypothesis.
The log likelihood ratio is calculated from a contingency table, similar to the one above. For a single word, our table will show the number of tokens that are the word "photosynthesis", and the number of tokens that are not, for each journal.
| journal          | "photosynthesis" | all other tokens |
|------------------|------------------|------------------|
| Plant Journal    | $O_{11}$         | $O_{12}$         |
| Plant Physiology | $O_{21}$         | $O_{22}$         |
$$
\sum_i O_i \ln \frac{O_i}{E_i}
$$
where $O_i$ is the observed value in cell $i$, and $E_i$ is the expected value in cell $i$.
First we will calculate the observed contingency table.
End of explanation
"""
# We multiply the values in the contingency table by 1. to coerce the
# integers to floating-point numbers, so that we can divide without
# losing precision.
expected_probabilities = 1.*contingency_table.sum(axis=0)/contingency_table.sum()
expected_probabilities
"""
Explanation: To calculate the expected values, we first calculate the expected probabilities of each word under the null hypothesis. The probability of "photosynthesis" occurring is the total number of occurrences of "photosynthesis" (sum of the first column) divided by the total number of tokens (sum of the whole table). The probability of "photosynthesis" not occuring is calculated similarly, using the second column.
End of explanation
"""
# We multiply each 2-element array by a square matrix containing ones, and then
# transpose one of the resulting matrices so that the product gives the expected
# counts.
expected_counts = np.floor((np.ones((2, 2))*expected_probabilities)*\
(np.ones((2, 2))*contingency_table.sum(axis=1)).T).astype(int)
expected_counts
"""
Explanation: Now we calculate the expected counts from those probabilities. The expected counts can be found by multiplying the probabilities of the word occuring and not occuring by the total number of tokens in each corpus.
End of explanation
"""
loglikelihood = np.sum(1.*contingency_table*np.log(1.*contingency_table/expected_counts))
loglikelihood
"""
Explanation: Now we obtain the log likelihood using the equation above:
End of explanation
"""
distribution = stats.chi2(df=1) # df: degrees of freedom.
"""
Explanation: So, do the two corpora differ in terms of their use of the word "photosynthesis"? In other words, can we reject the null hypothesis (that they do not)? Per Dunning (1993), under the null hypothesis the distribution of the test statistic (log likelihood) should follow a $\chi^2$ distribution. So we can obtain the probability of the calculated log-likelihood under the null hypothesis using the PDF of $\chi^2$ with one degree of freedom.
The Scientific Python (SciPy) package has a whole bunch of useful distributions, including $\chi^2$.
End of explanation
"""
X = np.arange(1, 100, 0.1)
plt.plot(X, distribution.pdf(X), lw=2)
plt.ylabel('Probability')
plt.xlabel('Value of $\chi^2$')
plt.show()
"""
Explanation: Here's the PDF of $\chi^2$ with one degree of freedom.
End of explanation
"""
distribution.pdf(loglikelihood), distribution.pdf(loglikelihood) < 0.05
"""
Explanation: We can calculate the probability of our observed log-likelihood from the PDF. If it is less than 0.05, then we can reject the null hypothesis.
End of explanation
"""
count_data = pd.DataFrame(columns=['Journal', 'Year', 'Count'])
chunk_size = 400 # This shouldn't be too large.
i = 0
# The slice() function automagically divides each corpus up into
# sequential years. We can use chain() to combine the two iterators
# so that we only have to write this code once.
for year, papers in chain(pj_corpus.slice(), pp_corpus.slice()):
tokens = [normalize_token(token)
for paper in papers # getattr() lets us set a default.
for token in nltk.word_tokenize(getattr(paper, 'abstract', ''))
if filter_token(token)]
N = len(tokens) # Number of tokens in this year.
for x in xrange(0, N, chunk_size):
current_chunk = tokens[x:x+chunk_size]
count = nltk.FreqDist(current_chunk)['photosynthesis']
# Store the count for this chunk as an observation.
count_data.loc[i] = [paper.journal, year, count]
i += 1 # Increment the index variable.
PJ_mean = pymc.Gamma('PJ_mean', beta=1.)
PP_mean = pymc.Gamma('PP_mean', beta=1.)
PJ_counts = pymc.Poisson('PJ_counts',
mu=PJ_mean,
value=count_data[count_data.Journal == 'PLANT JOURNAL'].Count,
observed=True)
PP_counts = pymc.Poisson('PP_counts',
mu=PP_mean,
value=count_data[count_data.Journal == 'PLANT PHYSIOLOGY'].Count,
observed=True)
model = pymc.Model({
'PJ_mean': PJ_mean,
'PP_mean': PP_mean,
'PJ_counts': PJ_counts,
'PP_counts': PP_counts
})
M1 = pymc.MCMC(model)
M2 = pymc.MCMC(model)
M3 = pymc.MCMC(model)
M1.sample(iter=20000, burn=2000, thin=20)
M2.sample(iter=20000, burn=2000, thin=20)
M3.sample(iter=20000, burn=2000, thin=20)
pymc.Matplot.plot(M1)
PJ_mean_samples = M1.PJ_mean.trace()[:]
PJ_mean_samples = np.append(PJ_mean_samples, M2.PJ_mean.trace()[:])
PJ_mean_samples = np.append(PJ_mean_samples, M3.PJ_mean.trace()[:])
PP_mean_samples = M1.PP_mean.trace()[:]
PP_mean_samples = np.append(PP_mean_samples, M2.PP_mean.trace()[:])
PP_mean_samples = np.append(PP_mean_samples, M3.PP_mean.trace()[:])
# Plot the 95% credible interval as box/whiskers.
plt.boxplot([PJ_mean_samples, PP_mean_samples],
whis=[2.5, 97.5],
labels=['Plant Journal', 'Plant Physiology'],
showfliers=False)
plt.ylim(0, 0.3)
plt.ylabel('Rate for term "photosynthesis"')
plt.show()
"""
Explanation: Money.
A Bayesian approach
We have shown that these two corpora differ significantly in their usage of the term "photosynthesis". In many cases, we may want to go one step further, and actually quantify that difference. We can use a similar approach to the one that we used when comparing word use between years: use an MCMC simulation to infer mean rates of use (and credibility intervals) for each corpus.
Rather than starting with a null hypothesis that there is no difference between corpora, we will begin with the belief that there is an independent rate of use for each corpus. We will then infer those rates, and sample from their posterior distributions to generate credible intervals.
Once again, we will model the rate of use with the Poisson distribution. So we must generate count data for evenly-sized chunks of each corpus. We'll put all of our count observations into a single dataframe.
End of explanation
"""
|
whitead/numerical_stats | unit_8/hw_2018/Homework_8_Key.ipynb | gpl-3.0 | from scipy import stats as ss
import numpy as np
data1 = np.array([0.41,2.69,3.82,0.42,1.20])
CI = 0.80
sample_mean = np.mean(data1)
sample_var = np.var(data1, ddof=1)
T = ss.t.ppf((1 - CI) / 2, df=len(data1)-1)
y = -T * np.sqrt(sample_var / len(data1))
print('{} +/- {}'.format(sample_mean, y))
"""
Explanation: Homework 8 Key
CHE 116: Numerical Methods and Statistics
3/22/2018
1.CLT Concepts (8 Points)
If you sum together 20 numbers sampled from a binomial distribution and 10 from a Poisson distribution, how is your sum distributed?
If you sample 25 numbers from different beta distributions, how will each of the numbers be distributed?
Assume a HW grade is determined as the average of 3 HW assignments. How is the HW grade distributed?
You measure the height of 3 people. What distribution will the uncertainty of the mean of the heights follow?
Answers:
1.1 The sum should follow a normal distribution. Since the total sample size is 30, it is large enough for the CLT to apply. Hence the differences in distributions that we are sampling from does not apply.
1.2 Since we are just sampling 25 numbers without taking either their sum or mean, each of the numbers will reflect the beta distribution that it is sampled from.
1.3 Since we are taking the mean of 3 HW assignments as the HW grade, it will follow a normal distribution according to CLT. Here we are assuming that we know the individual standard deviations of each of the HW assignments separately and that they are normally distributed. However, if the standard deviation of each HW assignment is not known, the HW grade will follow a t-distribution.
1.4 The uncertainty of the mean of the heights will follow a t-distribution. This is because 3 is a small sample size and we do not know the value of the true height standard deviation.
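As a quick illustration of answer 1.1 (an aside with made-up parameters, not part of the assignment), a simulation shows the CLT at work:

```python
import numpy as np

# Each trial sums 20 binomial draws and 10 Poisson draws; across many
# trials the sums cluster around the theoretical mean and look
# approximately normal, as the CLT predicts.
rng = np.random.default_rng(0)
trials = 5000
sums = (rng.binomial(n=10, p=0.3, size=(trials, 20)).sum(axis=1)
        + rng.poisson(lam=4.0, size=(trials, 10)).sum(axis=1))
print(sums.mean())  # theoretical mean: 20*10*0.3 + 10*4.0 = 100
```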
2.Confidence Interval (16 Points)
Report the given confidence interval for error in the mean using the data in the next cell and describe in words what the confidence interval is for each example. 4 points each
80% Double.
99% Upper ( a value such that the mean lies above that value 99% of the time)
95% Double
Redo part 3 with a known standard deviation of 2
data_1 = [0.41,2.69,3.82,0.42,1.20]
data_2 = [5.07,2.79,1.24,6.50,3.17,3.59,5.42,4.10,1.26,0.54,1.22,4.43,3.83,0.93,3.45,5.24,3.51,4.64,0.65,3.27,2.41,4.31,4.15,2.24,2.30,3.3]
data_3 = [5.62,2.34,2.76,2.80,1.15,5.19,-0.91]
2.1 Answer
Since $N=5$ and the true standard deviation is not known, we use the t-distribution.
We can say with 80% confidence that the true mean lies within the interval 1.7 $\pm$ 1.0.
End of explanation
"""
data2 = np.array([5.07,2.79,1.24,6.50,3.17,3.59,5.42,4.10,1.26,0.54,1.22,4.43,3.83,0.93,3.45,5.24,3.51,4.64,0.65,3.27,2.41,4.31,4.15,2.24,2.30,3.3])
CI = 0.99
sample_mean = np.mean(data2)
sample_var = np.var(data2, ddof=1)
Z = ss.norm.ppf(1 - CI)  # left-tail quantile (negative), so -Z is the positive z-score
y = -Z * np.sqrt(sample_var / len(data2))
print('{} - {}'.format(sample_mean, y))
"""
Explanation: 2.2 Answer
Since $N=26$, we use the normal distribution.
We can say with 99% confidence that the true mean lies above 2.735 (3.214 - 0.479): for a one-sided lower bound, the margin is subtracted from the sample mean.
End of explanation
"""
data3 = np.array([5.62,2.34,2.76,2.80,1.15,5.19,-0.91])
CI = 0.95
sample_mean = np.mean(data3)
sample_var = np.var(data3,ddof=1)
T = ss.t.ppf((1 - CI)/2,df=len(data3)-1)
y = -T * np.sqrt(sample_var / len(data3))
print('{} +/- {}'.format(sample_mean, y))
"""
Explanation: 2.3 Answer
Since $N=7$ and the true standard deviation is not known, we use the t-distribution.
We can say with 95% confidence that the true mean lies within the interval 2.71 $\pm$ 2.08.
End of explanation
"""
data3 = np.array([5.62,2.34,2.76,2.80,1.15,5.19,-0.91])
CI = 0.95
sample_mean = np.mean(data3)
true_var = 2**2
T = ss.norm.ppf((1 - CI)/2)
y = -T * np.sqrt(true_var / len(data3))
print('{} +/- {}'.format(sample_mean, y))
"""
Explanation: 2.4 Answer
Even though we have small sample size, $N=7$, we use the normal distribution since we know the true standard deviation.
We can say with 95% confidence that the true mean lies within the interval 2.7 $\pm$ 1.5.
End of explanation
"""
|
rmanak/nlp_tutorials | popcorn.ipynb | mit | import pandas as pd
import numpy as np
import re
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score  # sklearn.cross_validation in older scikit-learn
from os.path import join
from bs4 import BeautifulSoup
"""
Explanation: Bag of words meets bag of popcorn
A tutorial in text mining and NLP
Please first download the data from here:
https://www.kaggle.com/c/word2vec-nlp-tutorial/data
Let's first import all the libraries we will need
End of explanation
"""
root_dir = '/Users/arman/kaggledata/popcorn'
dfTrain = pd.read_csv(join(root_dir,'labeledTrainData.tsv'),header=0,\
delimiter="\t",quoting=3)
dfTest = pd.read_csv(join(root_dir,'testData.tsv'), header=0,\
delimiter="\t", quoting=3 )
"""
Explanation: If you are missing bs4 or nltk you can install them via:
pip install bs4
pip install nltk
python -m nltk.downloader all
Setup an I/O directory and put your downloaded data there, we will call this root_dir in the following.
Let's now load the data:
(make sure you change the root_dir to your own path)
End of explanation
"""
dfTrain.head(5)
dfTest.head(5)
"""
Explanation: Let's take a quick look at the data:
End of explanation
"""
dfTrain['review'][11]
"""
Explanation: In particular note that the review column has some html tags:
End of explanation
"""
target = dfTrain['sentiment']
"""
Explanation: Our target is to use sentiment column to predict the same for the test set:
End of explanation
"""
def review_to_wordlist(review, remove_stopwords=False, split=False):
"""
Simple text cleaning function,
uses BeautifulSoup to extract text content from html
removes all non-alphabet
converts to lower case
can remove stopwords
can perform simple tokenization using split by whitespace
"""
review_text = BeautifulSoup(review, 'lxml').get_text()
review_text = re.sub("[^a-zA-Z]"," ", review_text)
words = review_text.lower().split()
if remove_stopwords:
stops = set(stopwords.words("english"))
words = [w for w in words if not w in stops]
if split:
return(words)
else:
return(' '.join(words))
"""
Explanation: Now we need some sort of "cleaning" processes, we simply eliminate all the non-alphabet characters and use BeautifulSoup library to extract the text content, Let's put everything together in a function:
End of explanation
"""
review_to_wordlist(dfTrain['review'][11])
"""
Explanation: Before proceeding, let's test what our function does on the review example above:
End of explanation
"""
review_to_wordlist(dfTrain['review'][11],remove_stopwords=True)
"""
Explanation: and with the remove_stopwords flag on, it will give us:
End of explanation
"""
token = review_to_wordlist(dfTrain['review'][11],remove_stopwords=True, split=True)
print(token)
"""
Explanation: and with split flag on, it can actually perform a simple tokenization:
End of explanation
"""
dfTrain['review'] = dfTrain['review'].map(review_to_wordlist)
dfTest['review'] = dfTest['review'].map(review_to_wordlist)
train_len = len(dfTrain)
"""
Explanation: Notice the words reading, purely, written, raised, films, clearly, which all need stemming, but for now let's continue with what we have.
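As a hypothetical aside (this tutorial does not actually stem), NLTK's Porter stemmer would reduce such inflected forms to common stems:

```python
from nltk.stem import PorterStemmer

# Stem the inflected words noted above; e.g. 'films' becomes 'film'.
stemmer = PorterStemmer()
words = ['reading', 'purely', 'written', 'raised', 'films', 'clearly']
print([stemmer.stem(w) for w in words])
```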
Let's now apply our cleaning process to the review columns:
End of explanation
"""
corpus = list(dfTrain['review']) + list(dfTest['review'])
"""
Explanation: Our corpus is all of the reviews:
End of explanation
"""
tfv = TfidfVectorizer(min_df=3, max_features=None, ngram_range=(1, 2),\
use_idf=True,smooth_idf=True,sublinear_tf=True,\
stop_words = 'english')
tfv.fit(corpus)
"""
Explanation: Not let's use sklearn's tf-idf vectorizer with unigram and bigrams, and a log TF function (sublinear_tf=True)
Note that we can remove the stop_words here
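To see what sublinear_tf does, here is a toy sketch (made-up documents, with idf and normalization disabled so only the tf term differs): a raw count tf is replaced by 1 + log(tf), damping very frequent words.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# 'apple' appears 3 times in the first toy document.
docs = ["apple apple apple banana", "banana"]
raw = TfidfVectorizer(sublinear_tf=False, norm=None, use_idf=False).fit_transform(docs)
sub = TfidfVectorizer(sublinear_tf=True, norm=None, use_idf=False).fit_transform(docs)
print(raw.toarray()[0])  # raw count for 'apple' is 3.0
print(sub.toarray()[0])  # becomes 1 + ln(3)
```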
End of explanation
"""
X_all = tfv.transform(corpus)
"""
Explanation: We can now use the object tfv to build the tf-idf vector-space representation of the reviews, the transformation returns a sparse scipy matrix
Note: Following can take upto 1 min
End of explanation
"""
print(X_all.shape)
"""
Explanation: Notice the shape of the X_all matrix:
End of explanation
"""
train = X_all[:train_len]
test = X_all[train_len:]
"""
Explanation: So it created about 300K numerical features! (the total count of words in the corpus + number of unique bigrams)
It is highly sparse though (which allows python to use scipy's sparse matrix representation and keep everything on the RAM!)
Now let's split the X_all matrix back to our train and test set:
End of explanation
"""
Cs = [1,3,10,30,100,300]
for c in Cs:
clf = LogisticRegression(penalty='l2', dual=True, tol=0.0001,\
C=c, fit_intercept=True, intercept_scaling=1.0,\
class_weight=None, random_state=None)
print("c:",c," score:", np.mean(cross_val_score(clf, train, target,\
cv=5, scoring='roc_auc')))
"""
Explanation: We now use a Logistic Regression model to fit to the numerical features, (LR is quite safe here to use for such a high number of features, to use tree based models we definitely need feature selection)
Let's perform a simple 5-fold cross-validation using AUC score and also fine tune one of the parameters of the LR model, the penalty constant c
End of explanation
"""
clf = LogisticRegression(penalty='l2', dual=True, tol=0.0001,\
C=30, fit_intercept=True, intercept_scaling=1.0,\
class_weight=None, random_state=None)
clf.fit(train,target)
"""
Explanation: Our CV experiment suggests that c = 30 is the best choice, so we use our best model to fit to the entire train set now:
End of explanation
"""
preds = clf.predict_proba(test)[:,1]
dfOut = pd.DataFrame( data={"id":dfTest["id"], "sentiment":preds} )
dfOut.to_csv(join(root_dir,'submission.csv'), index=False, quoting=3)
"""
Explanation: and finally, predicting for the test set and storing the results
End of explanation
"""
|
poldrack/fmri-analysis-vm | analysis/connectivity/GrangerCausality.ipynb | mit | import os,sys
import numpy
%matplotlib inline
import matplotlib.pyplot as plt
import statsmodels.tsa.stattools
from dcm_sim import sim_dcm_dataset
sys.path.insert(0,'../')
from utils.graph_utils import show_graph_from_adjmtx,show_graph_from_pattern
# first we simulate some data using our DCM model, with the same HRF across all regions
_,data_conv,params=sim_dcm_dataset(verbose=True)
A=params['A']
B=params['B']
C=params['C']
data=data_conv[range(0,data_conv.shape[0],int(1./params['stepsize']))]
"""
Explanation: In this notebook we will show how to perform a Granger Causality analysis, and demonstrate how poorly it performs on our simulated data.
End of explanation
"""
gc=numpy.zeros(A.shape)
for i in range(A.shape[0]):
for j in range(A.shape[0]):
if i==j: # don't compute self-connectivity
continue
result=statsmodels.tsa.stattools.grangercausalitytests(data[:,[i,j]],1)
if result[1][0]['params_ftest'][1]<0.05:
gc[i,j]=1
show_graph_from_adjmtx(gc,numpy.zeros(B.shape),numpy.zeros(C.shape),title='Granger')
show_graph_from_adjmtx(A,B,C,title='True model')
"""
Explanation: Now compute Granger causality across all pairs of timeseries
End of explanation
"""
|
swirlingsand/deep-learning-foundations | transfer-learning/Transfer_Learning.ipynb | mit | from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
"""
Explanation: Transfer Learning
Most of the time you won't want to train a whole convolutional network yourself. Training modern ConvNets on huge datasets like ImageNet takes weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.
<img src="assets/cnnarchitecture.jpg" width=700px>
VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.
You can read more about transfer learning from the CS231n course notes.
Pretrained VGGNet
We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg.
End of explanation
"""
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
"""
Explanation: Flower power
Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.
End of explanation
"""
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
"""
Explanation: ConvNet Codes
Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.
Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):
```
self.conv1_1 = self.conv_layer(bgr, "conv1_1")
self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")
self.pool1 = self.max_pool(self.conv1_2, 'pool1')
self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")
self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")
self.pool2 = self.max_pool(self.conv2_2, 'pool2')
self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")
self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")
self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")
self.pool3 = self.max_pool(self.conv3_3, 'pool3')
self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")
self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")
self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")
self.pool4 = self.max_pool(self.conv4_3, 'pool4')
self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")
self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")
self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")
self.pool5 = self.max_pool(self.conv5_3, 'pool5')
self.fc6 = self.fc_layer(self.pool5, "fc6")
self.relu6 = tf.nn.relu(self.fc6)
```
So what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
This creates the vgg object, then builds the graph with vgg.build(input_). Then to get the values from the layer,
feed_dict = {input_: images}
codes = sess.run(vgg.relu6, feed_dict=feed_dict)
End of explanation
"""
# Set the batch size higher if you can fit in in your GPU memory
batch_size = 32
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
    my_vgg = vgg16.Vgg16()
    input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
    with tf.name_scope("content_vgg"):
        my_vgg.build(input_)

    for each in classes:
        print("Starting {} images".format(each))
        class_path = data_dir + each
        files = os.listdir(class_path)
        for ii, file in enumerate(files, 1):
            # Add images to the current batch
            # utils.load_image crops the input images for us, from the center
            img = utils.load_image(os.path.join(class_path, file))
            batch.append(img.reshape((1, 224, 224, 3)))
            labels.append(each)

            # Running the batch through the network to get the codes
            if ii % batch_size == 0 or ii == len(files):
                # Image batch to pass to VGG network
                images = np.concatenate(batch)

                # Get the values from the relu6 layer of the VGG network
                feed_dict = {input_: images}
                # KEY!!!!
                codes_batch = sess.run(my_vgg.relu6, feed_dict=feed_dict)

                # Here I'm building an array of the codes
                if codes is None:
                    codes = codes_batch
                else:
                    codes = np.concatenate((codes, codes_batch))

                # Reset to start building the next batch
                batch = []
                print('{} images processed'.format(ii))
# write codes to file; ndarray.tofile needs a binary-mode file
with open('codes', 'wb') as f:
    codes.tofile(f)

# write labels to file
import csv
with open('labels', 'w') as f:
    writer = csv.writer(f, delimiter='\n')
    writer.writerow(labels)
"""
Explanation: Below I'm running images through the VGG network in batches.
Exercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).
End of explanation
"""
print(labels)
# read codes and labels from file
import csv

with open('labels') as f:
    reader = csv.reader(f, delimiter='\n')
    labels = np.array([each for each in reader if len(each) > 0]).squeeze()

# np.fromfile expects a binary-mode file
with open('codes', 'rb') as f:
    codes = np.fromfile(f, dtype=np.float32)

print(codes.size)
codes = codes.reshape((len(labels), -1))
print(codes.shape, labels.shape)
"""
Explanation: Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
End of explanation
"""
from sklearn.preprocessing import LabelBinarizer
labelBinarizer = LabelBinarizer()
labelBinarizer.fit(labels)
labels_vecs = labelBinarizer.transform(labels)
print(labels_vecs.shape)
"""
Explanation: Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.
End of explanation
"""
from sklearn.model_selection import StratifiedShuffleSplit
sss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
train_idx, rest_idx = next(sss.split(codes, labels))
half = len(rest_idx) // 2
# split the held-out indices 50/50 into validation and test
val_idx, test_idx = rest_idx[:half], rest_idx[half:]

train_x, train_y = codes[train_idx], labels_vecs[train_idx]
val_x, val_y = codes[val_idx], labels_vecs[val_idx]
test_x, test_y = codes[test_idx], labels_vecs[test_idx]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
"""
Explanation: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
You can create the splitter like so:
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
Then split the data with
splitter = ss.split(x, y)
ss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide.
Exercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.
End of explanation
"""
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
l_1 = tf.contrib.layers.fully_connected(inputs_, 256)
logits = tf.contrib.layers.fully_connected(l_1, labels_vecs.shape[1], activation_fn=None)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits)
cost = tf.reduce_mean( cross_entropy )
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
"""
Explanation: If you did it right, you should see these sizes for the training sets:
Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)
Classifier layers
End of explanation
"""
def get_batches(x, y, n_batches=10):
    """ Return a generator that yields batches from arrays x and y. """
    batch_size = len(x) // n_batches
    for ii in range(0, n_batches * batch_size, batch_size):
        # If we're not on the last batch, grab data with size batch_size
        if ii != (n_batches - 1) * batch_size:
            X, Y = x[ii: ii + batch_size], y[ii: ii + batch_size]
        # On the last batch, grab the rest of the data
        else:
            X, Y = x[ii:], y[ii:]
        # I love generators
        yield X, Y
"""
Explanation: Batches!
Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.
End of explanation
"""
saver = tf.train.Saver()
e = 20
iteration = 0

with tf.Session() as sess:
    # 1. Start session
    sess.run(tf.global_variables_initializer())
    # 2. Do each epoch
    for i in range(e):
        # 3. Do each batch
        for x, y in get_batches(train_x, train_y):
            # 4. Input data
            feed = {inputs_: x, labels_: y}
            # 5. Do loss
            loss, _ = sess.run([cost, optimizer], feed_dict=feed)
            # 6. Increment counter
            iteration += 1
            # 7. Print results
            print("Epoch: {} / {}".format(i, e),
                  "Iteration: {}".format(iteration),
                  "Train loss: {:.5f}".format(loss))
            # 8. Do validation
            if iteration % 5 == 0:
                feed = {inputs_: val_x, labels_: val_y}
                val_acc = sess.run(accuracy, feed_dict=feed)
                print("Epoch: {} / {}".format(i, e),
                      "Iteration: {}".format(iteration),
                      "Validation Acc: {:.4f}".format(val_acc))
    saver.save(sess, "checkpoints/flowers.ckpt")
"""
Explanation: Training
End of explanation
"""
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))

    feed = {inputs_: test_x,
            labels_: test_y}
    test_acc = sess.run(accuracy, feed_dict=feed)
    print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread
"""
Explanation: Testing
Below you see the test accuracy. You can also see the predictions returned for images.
End of explanation
"""
# Run this cell if you don't have a vgg graph built
if 'my_vgg' in globals():
    print('"my_vgg" object already exists.  Will not create again.')
else:
    # create the vgg network
    with tf.Session() as sess:
        input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
        my_vgg = vgg16.Vgg16()
        my_vgg.build(input_)
test_img_path = 'flower_photos/daisy/5547758_eea9edfd54_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
with tf.Session() as sess:
    img = utils.load_image(test_img_path)
    img = img.reshape((1, 224, 224, 3))

    feed_dict = {input_: img}
    ## KEY
    code = sess.run(my_vgg.relu6, feed_dict=feed_dict)

saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))

    feed = {inputs_: code}
    prediction = sess.run(predicted, feed_dict=feed).squeeze()
print(max(prediction))

plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), labelBinarizer.classes_)
"""
Explanation: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
End of explanation
"""
|
karlstroetmann/Artificial-Intelligence | Python/1 Search/Breadth-First-Search.ipynb | gpl-2.0 | def search(start, goal, next_states):
    Frontier = { start }
    Visited  = set()
    Parent   = { start: start }
    while Frontier:
        NewFrontier = set()
        for s in Frontier:
            for ns in next_states(s):
                if ns not in Visited and ns not in Frontier:
                    NewFrontier.add(ns)
                    Parent[ns] = s
                    if ns == goal:
                        print(len(Visited) + len(Frontier) + len(NewFrontier))
                        return path_to(goal, Parent)
        Visited |= Frontier
        Frontier = NewFrontier
"""
Explanation: Breadth First Search
The function search takes three arguments to solve a search problem:
- start is the start state of the search problem,
- goal is the goal state, and
- next_states is a function with signature $\texttt{next_states}:Q \rightarrow 2^Q$, where $Q$ is the set of states.
For every state $s \in Q$, $\texttt{next_states}(s)$ is the set of states that can be reached from $s$ in one step.
If successful, search returns a path from start to goal that is a solution of the search problem
$$ \langle Q, \texttt{next_states}, \texttt{start}, \texttt{goal} \rangle. $$
The implementation of search uses the algorithm breadth first search to find a path from start to goal.
At the start of the $n^\textrm{th}$ iteration of the while loop, the following invariants are satisfied:
* Frontier contains exactly those states that have distance of $n-1$ from start.
* Visited contains those states whose distance from start is less than n-1.
* Parent is a dictionary. The keys of this dictionary are all states from the sets Visited, Frontier, and NewFrontier.
If $x = \texttt{Parent}[y]$, then $y \in \texttt{next_states}(x)$.
End of explanation
"""
def path_to(state, Parent):
    p = Parent.get(state)
    if p == state:
        return [state]
    return path_to(p, Parent) + [state]
"""
Explanation: Given a state and a parent dictionary Parent, the function path_to returns a path leading to the given state.
End of explanation
"""
import graphviz as gv
"""
Explanation: Display Code
End of explanation
"""
def toDot(source, goal, Edges, Frontier, Visited, Parent=None):
    V = set()
    for x, L in Edges.items():
        V.add(x)
        for y in L:
            V.add(y)
    dot = gv.Digraph(node_attr={'shape': 'record', 'style': 'rounded'})
    dot.attr(rankdir='LR')
    for x in V:
        if x == source:
            dot.node(str(x), color='blue', shape='doublecircle')
        elif x in Frontier and x == goal:
            dot.node(str(x), label=str(x), color='magenta')
        elif x in Frontier:
            dot.node(str(x), label=str(x), color='red')
        elif x in Visited:
            dot.node(str(x), label=str(x), color='blue')
        else:
            dot.node(str(x), label=str(x))
    if Parent:
        Path = path_to(goal, Parent)
    for u in V:
        if Edges.get(u):
            for v in Edges[u]:
                if Parent and v in Path and Parent[v] == u:
                    dot.edge(str(u), str(v), color='brown', style='bold')
                else:
                    dot.edge(str(u), str(v))
    return dot
"""
Explanation: The function $\texttt{toDot}(\texttt{source}, \texttt{goal}, \texttt{Edges}, \texttt{Frontier}, \texttt{Visited})$ takes a graph that is represented by
its Edges, the set Frontier of nodes currently on the frontier, and the set Visited of nodes that have already been visited.
End of explanation
"""
def next_states_test(node):
    x, y = node
    return { (x+1, y), (x, y+1) }

def create_edges(n):
    Edges = {}
    for row in range(n):
        for col in range(n):
            if (row, col) != (n-1, n-1):
                Edges[(row, col)] = list(next_states_test((row, col)))
    for k in range(n-1):
        Edges[(k, n-1)] = [(k+1, n-1)]
        Edges[(n-1, k)] = [(n-1, k+1)]
    return Edges

def search_show(start, goal, next_states, Edges):
    Visited  = set()
    Frontier = { start }
    Parent   = { start: start }
    while len(Frontier) > 0:
        display(toDot(start, goal, Edges, Frontier, Visited))
        NewFrontier = set()
        Visited |= Frontier
        for s in Frontier:
            for ns in next_states(s):
                if not (ns in Visited):
                    NewFrontier.add(ns)
                    Parent[ns] = s
                    if ns == goal:
                        display(toDot(start, goal, Edges, NewFrontier, Visited, Parent))
                        return
        Frontier = NewFrontier

def main(n):
    Edges = create_edges(n)
    search_show((0, 0), (n-1, n-1), next_states_test, Edges)

main(6)
"""
Explanation: Testing
End of explanation
"""
%run Missionaries.ipynb
dot_graph(createRelation(start))
%%time
Path = search(start, goal, next_states)
printPath(Path)
"""
Explanation: Saving the Infidels
End of explanation
"""
%run Sliding-Puzzle.ipynb
"""
Explanation: Solving the Sliding Puzzle
End of explanation
"""
%load_ext memory_profiler
%%time
%memit Path = search(start, goal, next_states)
animation(Path)
"""
Explanation: The next line is needed to enable the %memit magic command.
End of explanation
"""
|
tequa/ammisoft | ammimain/WinPython-64bit-2.7.13.1Zero/notebooks/docs/WinpythonSlim_checker.ipynb | bsd-3-clause | %matplotlib inline
"""
Explanation: WinpythonSlim Default checker
WinPythonSlim is a subset of WinPython, aiming for quick installation on a classrooms.
Command Line installation:
WinPython-32bit-3.4.3.7Slim.exe /S /DIR=you_target_directory
End of explanation
"""
# Matplotlib
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
from matplotlib import cm
fig = plt.figure()
ax = fig.gca(projection='3d')
X, Y, Z = axes3d.get_test_data(0.05)
ax.plot_surface(X, Y, Z, rstride=8, cstride=8, alpha=0.3)
cset = ax.contourf(X, Y, Z, zdir='z', offset=-100, cmap=cm.coolwarm)
cset = ax.contourf(X, Y, Z, zdir='x', offset=-40, cmap=cm.coolwarm)
cset = ax.contourf(X, Y, Z, zdir='y', offset=40, cmap=cm.coolwarm)
ax.set_xlabel('X')
ax.set_xlim(-40, 40)
ax.set_ylabel('Y')
ax.set_ylim(-40, 40)
ax.set_zlabel('Z')
ax.set_zlim(-100, 100)
plt.show()
# Seaborn
import seaborn as sns
sns.set()
df = sns.load_dataset("iris")
sns.pairplot(df, hue="species", size=2.5)
"""
Explanation: Graphics: Matplotlib, Seaborn
End of explanation
"""
# Guidata (Python library generating graphical user interfaces for easy dataset editing and display)
from guidata import tests; tests.run()
# Guiqwt (Efficient 2D plotting Python library based on PythonQwt)
from guiqwt import tests; tests.run()
"""
Explanation: Qt4 & Qt5 Graphic libraries: PythonQwt, guidata, guiqwt
End of explanation
"""
import IPython;IPython.__version__
# Audio Example : https://github.com/ipython/ipywidgets/blob/master/examples/Beat%20Frequencies.ipynb
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from ipywidgets import interactive
from IPython.display import Audio, display
def beat_freq(f1=220.0, f2=224.0):
    max_time = 3
    rate = 8000
    times = np.linspace(0, max_time, rate*max_time)
    signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times)
    print(f1, f2, abs(f1-f2))
    display(Audio(data=signal, rate=rate))
    return signal
v = interactive(beat_freq, f1=(200.0,300.0), f2=(200.0,300.0))
display(v)
"""
Explanation: Ipython Notebook: Interactivity & other
End of explanation
"""
# checking statsmodels
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import statsmodels.api as sm
data = sm.datasets.anes96.load_pandas()
party_ID = np.arange(7)
labels = ["Strong Democrat", "Weak Democrat", "Independent-Democrat",
"Independent-Independent", "Independent-Republican",
"Weak Republican", "Strong Republican"]
plt.rcParams['figure.subplot.bottom'] = 0.23 # keep labels visible
plt.rcParams['figure.figsize'] = (6.0, 4.0) # make plot larger in notebook
age = [data.exog['age'][data.endog == id] for id in party_ID]
fig = plt.figure()
ax = fig.add_subplot(111)
plot_opts={'cutoff_val':5, 'cutoff_type':'abs',
'label_fontsize':'small',
'label_rotation':30}
sm.graphics.beanplot(age, ax=ax, labels=labels,
plot_opts=plot_opts)
ax.set_xlabel("Party identification of respondent")
ax.set_ylabel("Age")
"""
Explanation: Mathematical: statsmodels
End of explanation
"""
# checking Ipython-sql, sqlparse, SQLalchemy
%load_ext sql
%%sql sqlite:///.baresql.db
DROP TABLE IF EXISTS writer;
CREATE TABLE writer (first_name, last_name, year_of_death);
INSERT INTO writer VALUES ('William', 'Shakespeare', 1616);
INSERT INTO writer VALUES ('Bertold', 'Brecht', 1956);
SELECT * , sqlite_version() as sqlite_version from Writer order by Year_of_death
# checking sqlite_bro: this should launch a separate non-browser window with sqlite_bro's welcome
!cmd start cmd /C sqlite_bro
# checking baresql
from __future__ import print_function, unicode_literals, division # line needed only if Python2.7
from baresql import baresql
bsql = baresql.baresql(connection="sqlite:///.baresql.db")
bsqldf = lambda q: bsql.df(q, dict(globals(),**locals()))
users = ['Alexander', 'Billy', 'Charles', 'Danielle', 'Esmeralda', 'Franz', 'Greg']
# We use the python 'users' list like a SQL table
sql = "select 'Welcome ' || c0 || ' !' as say_hello, length(c0) as name_length from users$$ where c0 like '%a%' "
bsqldf(sql)
# checking db.py
from db import DB
db=DB(dbtype="sqlite", filename=".baresql.db")
db.query("select sqlite_version() as sqlite_version ;")
db.tables
"""
Explanation: SQL tools: sqlite, Ipython-sql, sqlite_bro, baresql, db.py
End of explanation
"""
#Pandas and the pipe operator (similar to (%>%) pipe operator for R.)
import pandas as pd
import numpy as np
idx = pd.date_range('2000', '2005', freq='d', closed='left')
datas = pd.DataFrame({'A': np.random.randn(len(idx)),
'B': np.random.randn(len(idx)), 'C': idx.year},
index=idx)
datas.head()
datas.query('B > 0').groupby('C').size()
"""
Explanation: DataFrames (Split, Apply, Combine): Pandas, Dask
End of explanation
"""
# checking Web Scraping: beautifulsoup and requests
import requests
from bs4 import BeautifulSoup
URL = 'http://en.wikipedia.org/wiki/Franklin,_Tennessee'
req = requests.get(URL, headers={'User-Agent' : "Mining the Social Web"})
soup = BeautifulSoup(req.text, "html5lib")
geoTag = soup.find(True, 'geo')
if geoTag and len(geoTag) > 1:
    lat = geoTag.find(True, 'latitude').string
    lon = geoTag.find(True, 'longitude').string
    print('Location is at', lat, lon)
elif geoTag and len(geoTag) == 1:
    (lat, lon) = geoTag.string.split(';')
    (lat, lon) = (lat.strip(), lon.strip())
    print('Location is at', lat, lon)
else:
    print('No location found')
"""
Explanation: Web Scraping: Beautifulsoup
End of explanation
"""
# optional scipy full test (takes up to 10 minutes)
#!cmd /C start cmd /k python.exe -c "import scipy;scipy.test()"
"""
Explanation: Wrap-up
End of explanation
"""
|
ml4a/ml4a-guides | examples/models/BASNet.ipynb | gpl-2.0 | %tensorflow_version 1.x
!pip3 install --quiet ml4a
"""
Explanation: BASNet: Salient Object Detection
Outputs a mask of an image's salient objects (foreground). See the original code and paper.
Set up ml4a and enable GPU
If you don't already have ml4a installed, or you are opening this in Colab, first enable GPU (Runtime > Change runtime type), then run the following cell to install ml4a and its dependencies.
End of explanation
"""
from ml4a.models import basnet
from ml4a import image
img = image.load_image('https://upload.wikimedia.org/wikipedia/commons/6/6a/Mona_Lisa.jpg', (220, 350))
foreground_mask = basnet.get_foreground(img)
image.display([img, foreground_mask])
"""
Explanation: Run BASNet
basnet.get_foreground takes an image and outputs a greyscale image of the input image's saliency map (foreground_mask below).
End of explanation
"""
from ml4a import mask
img_masked = mask.mask_image(foreground_mask, img)
image.display(img_masked)
"""
Explanation: You can use the outputted saliency map to mask out the background of the original image using mask.mask_image.
End of explanation
"""
imgbg = image.load_image(image.starrynight(), image.get_size(img))
imgbg_masked = mask.mask_image(255-foreground_mask, imgbg)
img_combined = img_masked + imgbg_masked
image.display(img_combined)
"""
Explanation: We can use the mask to composite the foreground with another background. We'll load a separate background image, apply the opposite mask to it, and then add the foreground to the background.
End of explanation
"""
|
marcinofulus/ProgramowanieRownolegle | MPI/PR_MPI_Diffusion2d.ipynb | gpl-3.0 | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os
print os.getenv("HOME")
wd = os.path.join( os.getenv("HOME"),"mpi_tmpdir")
if not os.path.isdir(wd):
    os.mkdir(wd)
os.chdir(wd)
print "WD is now:",os.getcwd()
%%writefile mpi002.py
from mpi4py import MPI
import numpy as np
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
def numpy_diff2d(u, dx2, dy2, c):
    A = 1.0 - 2.0*(c/dx2 + c/dy2)
    u[1:-1,1:-1] = A*u[1:-1,1:-1] + c/dy2*(u[2:,1:-1] + u[:-2,1:-1]) + \
                   c/dx2*(u[1:-1,2:] + u[1:-1,:-2])

N = 52
Niter = 211
dx = 0.1
dy = 0.1
dx2 = dx*dx
dy2 = dy*dy
dt = 0.01
D = 0.1
c = D*dt
u = np.zeros([N, N])
if rank == 0:
    u[-2, u.shape[1]/2] = 1.0/np.sqrt(dx2*dy2)
    print "CFL = ", c/dx2, c/dy2

for i in range(Niter):
    if rank == 0:
        comm.Send([u[-2,:], MPI.FLOAT], dest=1)
        comm.Recv([u[-1,:], MPI.FLOAT], source=1)
    elif rank == 1:
        comm.Recv([u[0,:], MPI.FLOAT], source=0)
        comm.Send([u[1,:], MPI.FLOAT], dest=0)
    numpy_diff2d(u, dx2, dy2, c)

#np.savez("udata%04d" % rank, u=u)
U = comm.gather(u[1:-1,1:-1])
if rank == 0:
    np.savez("Udata", U=U)
!mpirun -n 2 python mpi002.py
data = np.load("Udata.npz")
plt.imshow(np.vstack(data['U']))
print data['U'].shape
!pwd
"""
Explanation: The diffusion equation: grid decomposition with MPI
Problem:
We want to solve the diffusion equation using many processors.
We divide the grid into subdomains and solve the equation independently in each subdomain. After every time step we use MPI communication to exchange information about the adjacent boundaries of the respective subdomains.
End of explanation
"""
%%writefile mpi003.py
from mpi4py import MPI
import numpy as np
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
def numpy_diff2d(u, dx2, dy2, c):
    A = 1.0 - 2.0*(c/dx2 + c/dy2)
    u[1:-1,1:-1] = A*u[1:-1,1:-1] + c/dy2*(u[2:,1:-1] + u[:-2,1:-1]) + \
                   c/dx2*(u[1:-1,2:] + u[1:-1,:-2])

N = 52
Niter = 211
dx = 0.1
dy = 0.1
dx2 = dx*dx
dy2 = dy*dy
dt = 0.01
D = 0.1
c = D*dt
u = np.zeros([N, N])
if rank == 0:
    u[u.shape[1]/2, -2] = 1.0/np.sqrt(dx2*dy2)
    print "CFL = ", c/dx2, c/dy2

for i in range(Niter):
    if rank == 0:
        OUT = u[:,-2].copy()
        IN = np.empty_like(OUT)
        comm.Send([OUT, MPI.FLOAT], dest=1)
        comm.Recv([IN, MPI.FLOAT], source=1)
        u[:,-1] = IN
    elif rank == 1:
        OUT = u[:,1].copy()
        IN = np.empty_like(OUT)
        comm.Recv([IN, MPI.FLOAT], source=0)
        comm.Send([OUT, MPI.FLOAT], dest=0)
        u[:,0] = IN
    numpy_diff2d(u, dx2, dy2, c)

np.savez("udata%04d" % rank, u=u)
!mpirun -n 2 python mpi003.py
u1 = np.load('udata0000.npz')['u']
u2 = np.load('udata0001.npz')['u']
plt.imshow(np.hstack([u1[:,:-1],u2[:,1:]]))
"""
Explanation: non-contiguous slice
End of explanation
"""
%%writefile mpi004.py
from mpi4py import MPI
import numpy as np
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
Nproc = comm.size
def numpy_diff2d(u, dx2, dy2, c):
    A = 1.0 - 2.0*(c/dx2 + c/dy2)
    u[1:-1,1:-1] = A*u[1:-1,1:-1] + c/dy2*(u[2:,1:-1] + u[:-2,1:-1]) + \
                   c/dx2*(u[1:-1,2:] + u[1:-1,:-2])

N = 16*128
Nx = N
Ny = N/Nproc
Niter = 200
dx = 0.1
dy = 0.1
dx2 = dx*dx
dy2 = dy*dy
dt = 0.01
D = 0.2
c = D*dt
u = np.zeros([Ny, Nx])
if rank == 0:
    u[-2, u.shape[1]/2] = 1.0/np.sqrt(dx2*dy2)
    print "CFL = ", c/dx2, c/dy2

t0 = MPI.Wtime()
for i in range(Niter):
    if Nproc > 1:
        if rank == 0:
            comm.Send([u[-2,:], MPI.FLOAT], dest=1)
        if rank > 0 and rank < Nproc-1:
            comm.Recv([u[0,:], MPI.FLOAT], source=rank-1)
            comm.Send([u[-2,:], MPI.FLOAT], dest=rank+1)
        if rank == Nproc - 1:
            comm.Recv([u[0,:], MPI.FLOAT], source=Nproc-2)
            comm.Send([u[1,:], MPI.FLOAT], dest=Nproc-2)
        if rank > 0 and rank < Nproc-1:
            comm.Recv([u[-1,:], MPI.FLOAT], source=rank+1)
            comm.Send([u[1,:], MPI.FLOAT], dest=rank-1)
        if rank == 0:
            comm.Recv([u[-1,:], MPI.FLOAT], source=1)
        #print rank
        comm.Barrier()
    numpy_diff2d(u, dx2, dy2, c)
t1 = MPI.Wtime()
print rank, t1-t0
#np.savez("udata%04d" % rank, u=u)
if Nproc > 1:
    U = comm.gather(u[1:-1,1:-1])
    if rank == 0:
        np.savez("Udata", U=U)
!mpirun -H gpu2,gpu3 python mpi004.py
!mpirun -n 4 python mpi004.py
data = np.load("Udata.npz")
plt.imshow(np.vstack(data['U']))
print data['U'].shape
a = np.arange(0,16).reshape(4,4)
b = a[:,2]
c = a[2,:]
np.may_share_memory(a,b),np.may_share_memory(a,c)
a.flags
b.flags
c.flags
a=np.array(range(6))
b = a[2:4]
b=666
print a
np.may_share_memory?
"""
Explanation: N - slices
End of explanation
"""
|
TrinVeerasiri/presta_to_woo_migration | add_user_nicename.ipynb | gpl-3.0 | import pandas as pd
"""
Explanation: Add users nicename
After the site goes live, the admin tells users that they have to reset their passwords, because we don't migrate passwords from Prestashop. The problem is that the old users don't have a user nicename, so they can't change their passwords. We solve this problem by copying the information from the "user_login" column into the missing (NaN) values of the "user_nicename" column of wp_users.
Import library
End of explanation
"""
users = users.sort_values('ID')
"""
Explanation: Load the data
users = pd.read_csv('sql_prestashop/wp_users_2018.csv', index_col=False)
Sort the dataframe by ID for easier inspection.
End of explanation
"""
# take a copy so the assignments below don't trigger chained-assignment issues
users_no_nice = users[users['user_nicename'].isnull()].copy()
"""
Explanation: Select only the rows with NaN values.
End of explanation
"""
users_no_nice['user_nicename'] = users_no_nice['user_login']
"""
Explanation: Set the user_nicename column equal to the user_login column.
End of explanation
"""
users_no_nice['user_nicename'] = users_no_nice['user_nicename'].str.lower()
"""
Explanation: Some entries contain uppercase letters. Change them to lowercase.
End of explanation
"""
users_no_nice['user_nicename'] = users_no_nice['user_nicename'].str.replace('@', '-')
# '.' is a regex wildcard in str.replace, so it must be escaped to replace only literal dots
users_no_nice['user_nicename'] = users_no_nice['user_nicename'].str.replace(r'\.', '-')
"""
Explanation: Some entries are in e-mail form, so we must change the "@" and "." characters to "-".
End of explanation
"""
users['user_nicename'].fillna(users_no_nice['user_nicename'], inplace=True)
"""
Explanation: Fill the NaN values from the "users_no_nice" dataframe.
End of explanation
"""
users.to_csv('customer_import_to_woo/wp_users_with_nicename.csv', encoding='utf-8', index=False)
"""
Explanation: Export to .csv
End of explanation
"""
|
billzhao1990/CS231n-Spring-2017 | assignment2/.ipynb_checkpoints/BatchNormalization-checkpoint.ipynb | mit | # As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
    """ returns relative error """
    return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
    print('%s: ' % k, v.shape)
"""
Explanation: Batch Normalization
One way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3].
The idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.
The authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.
It is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.
[3] Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing
Internal Covariate Shift", ICML 2015.
End of explanation
"""
# Check the training-time forward pass by checking means and variances
# of features both before and after batch normalization
# Simulate the forward pass for a two-layer network
np.random.seed(231)
N, D1, D2, D3 = 200, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(W1)).dot(W2)
print('Before batch normalization:')
print(' means: ', a.mean(axis=0))
print(' stds: ', a.std(axis=0))
# Means should be close to zero and stds close to one
print('After batch normalization (gamma=1, beta=0)')
a_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})
print(' mean: ', a_norm.mean(axis=0))
print(' std: ', a_norm.std(axis=0))
# Now means should be close to beta and stds close to gamma
gamma = np.asarray([1.0, 2.0, 3.0])
beta = np.asarray([11.0, 12.0, 13.0])
a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})
print('After batch normalization (nontrivial gamma, beta)')
print(' means: ', a_norm.mean(axis=0))
print(' stds: ', a_norm.std(axis=0))
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
np.random.seed(231)
N, D1, D2, D3 = 200, 50, 60, 3
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
bn_param = {'mode': 'train'}
gamma = np.ones(D3)
beta = np.zeros(D3)
for t in range(50):
    X = np.random.randn(N, D1)
    a = np.maximum(0, X.dot(W1)).dot(W2)
    batchnorm_forward(a, gamma, beta, bn_param)
bn_param['mode'] = 'test'
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print('After batch normalization (test-time):')
print(' means: ', a_norm.mean(axis=0))
print(' stds: ', a_norm.std(axis=0))
"""
Explanation: Batch normalization: Forward
In the file cs231n/layers.py, implement the batch normalization forward pass in the function batchnorm_forward. Once you have done so, run the following to test your implementation.
End of explanation
"""
# Gradient check batchnorm backward pass
np.random.seed(231)
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]
fb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)
db_num = eval_numerical_gradient_array(fb, beta.copy(), dout)
_, cache = batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = batchnorm_backward(dout, cache)
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
"""
Explanation: Batch Normalization: backward
Now implement the backward pass for batch normalization in the function batchnorm_backward.
To derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.
Once you have finished, run the following to numerically check your backward pass.
End of explanation
"""
np.random.seed(231)
N, D = 100, 500
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
out, cache = batchnorm_forward(x, gamma, beta, bn_param)
t1 = time.time()
dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)
t2 = time.time()
dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)
t3 = time.time()
print('dx difference: ', rel_error(dx1, dx2))
print('dgamma difference: ', rel_error(dgamma1, dgamma2))
print('dbeta difference: ', rel_error(dbeta1, dbeta2))
print('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))
"""
Explanation: Batch Normalization: alternative backward (OPTIONAL, +3 points extra credit)
In class we talked about two different implementations of the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying the gradients on paper.
Surprisingly, you can also derive a simple expression for the batch normalization backward pass if you work out the derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function batchnorm_backward_alt and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.
NOTE: This part of the assignment is entirely optional, but we will reward 3 points of extra credit if you can complete it.
"""
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
    print('Running check with reg = ', reg)
    model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
                              reg=reg, weight_scale=5e-2, dtype=np.float64,
                              use_batchnorm=True)
    loss, grads = model.loss(X, y)
    print('Initial loss: ', loss)
    for name in sorted(grads):
        f = lambda _: model.loss(X, y)[0]
        grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
        print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
    if reg == 0: print()
"""
Explanation: Fully Connected Nets with Batch Normalization
Now that you have a working implementation for batch normalization, go back to your FullyConnectedNet in the file cs231n/classifiers/fc_net.py. Modify your implementation to add batch normalization.
Concretely, when the flag use_batchnorm is True in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.
HINT: You might find it useful to define an additional helper layer similar to those in the file cs231n/layer_utils.py. If you decide to do so, do it in the file cs231n/classifiers/fc_net.py.
End of explanation
"""
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 2e-2
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
                   num_epochs=10, batch_size=50,
                   update_rule='adam',
                   optim_config={
                       'learning_rate': 1e-3,
                   },
                   verbose=True, print_every=200)
bn_solver.train()
solver = Solver(model, small_data,
                num_epochs=10, batch_size=50,
                update_rule='adam',
                optim_config={
                    'learning_rate': 1e-3,
                },
                verbose=True, print_every=200)
solver.train()
"""
Explanation: Batchnorm for deep networks
Run the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.
End of explanation
"""
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label='baseline')
plt.plot(bn_solver.loss_history, 'o', label='batchnorm')
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label='baseline')
plt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label='baseline')
plt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')
for i in [1, 2, 3]:
    plt.subplot(3, 1, i)
    plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
"""
Explanation: Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.
End of explanation
"""
np.random.seed(231)
# Try training a very deep net with batchnorm
hidden_dims = [50, 50, 50, 50, 50, 50, 50]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
bn_solvers = {}
solvers = {}
weight_scales = np.logspace(-4, 0, num=20)
for i, weight_scale in enumerate(weight_scales):
    print('Running weight scale %d / %d' % (i + 1, len(weight_scales)))
    bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
    model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
    bn_solver = Solver(bn_model, small_data,
                       num_epochs=10, batch_size=50,
                       update_rule='adam',
                       optim_config={
                           'learning_rate': 1e-3,
                       },
                       verbose=False, print_every=200)
    bn_solver.train()
    bn_solvers[weight_scale] = bn_solver
    solver = Solver(model, small_data,
                    num_epochs=10, batch_size=50,
                    update_rule='adam',
                    optim_config={
                        'learning_rate': 1e-3,
                    },
                    verbose=False, print_every=200)
    solver.train()
    solvers[weight_scale] = solver
# Plot results of weight scale experiment
best_train_accs, bn_best_train_accs = [], []
best_val_accs, bn_best_val_accs = [], []
final_train_loss, bn_final_train_loss = [], []
for ws in weight_scales:
    best_train_accs.append(max(solvers[ws].train_acc_history))
    bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))
    best_val_accs.append(max(solvers[ws].val_acc_history))
    bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))
    final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))
    bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))
plt.subplot(3, 1, 1)
plt.title('Best val accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best val accuracy')
plt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
plt.title('Best train accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best training accuracy')
plt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')
plt.legend()
plt.subplot(3, 1, 3)
plt.title('Final training loss vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Final training loss')
plt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')
plt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')
plt.legend()
plt.gca().set_ylim(1.0, 3.5)
plt.gcf().set_size_inches(10, 15)
plt.show()
"""
Explanation: Batch normalization and initialization
We will now run a small experiment to study the interaction of batch normalization and weight initialization.
The first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.
End of explanation
"""
|
jeffzhengye/pylearn | tensorflow_learning/tf2/notebooks/.ipynb_checkpoints/tf_keras_介绍_工程师版-checkpoint.ipynb | unlicense | import numpy as np
import tensorflow as tf
from tensorflow import keras
"""
Explanation: Introduction to TensorFlow Keras for Engineers
Author: fchollet<br>
Date created: 2020/04/01<br>
Last modified: 2020/04/28<br>
Description: Everything you need to know to use the high-level TensorFlow Keras API to build real-world machine learning solutions.<br>
Translated by: Zheng Ye
Setup
End of explanation
"""
x = tf.constant([[5, 2], [1, 3]])
print(x)
"""
Explanation: Introduction
Are you a machine learning engineer looking for deep learning solutions powered by TensorFlow Keras for real products?
This guide introduces the core API concepts of tf.keras and how to use them.
In this guide, you will learn about:
TensorFlow tensors and gradient tapes
Preparing your data before training a model (by turning it into either NumPy ndarrays or tf.data.Dataset objects)
Data preprocessing, e.g. feature normalization or vocabulary indexing
Building a model that turns your data into predictions, using the Keras Functional API
Training your model with the built-in Keras fit() method, while saving checkpoints, monitoring metrics, and staying fault-tolerant
Evaluating your model and running inference on test data
Customizing what fit() does, for instance to train a GAN
Speeding up training with multiple GPUs
Refining your model through hyperparameter tuning
Deploying machine learning models on mobile and IoT devices
At the end of this guide, you can continue with the following materials to strengthen your understanding of these concepts:
Image classification
Text classification
Credit card fraud detection
Tensors
TensorFlow is an infrastructure layer for differentiable programming. At its core, like NumPy, it is a framework for manipulating N-dimensional arrays (tensors).
However, there are three key differences between NumPy and TensorFlow:
1. TensorFlow can leverage hardware accelerators such as GPUs and TPUs.
2. TensorFlow can automatically compute the gradient of arbitrary differentiable tensor expressions.
3. TensorFlow computation can be distributed to a large number of devices on a single machine, or to many machines.
Let's start with the core object of TensorFlow: the Tensor.
Constant tensors
End of explanation
"""
x.numpy()
"""
Explanation: You can get its value as a NumPy array by calling .numpy():
End of explanation
"""
print("dtype:", x.dtype)
print("shape:", x.shape)
"""
Explanation: Much like a NumPy array, a tensor has a dtype and a shape:
End of explanation
"""
print(tf.ones(shape=(2, 1)))
print(tf.zeros(shape=(2, 1)))
"""
Explanation: Constant tensors are commonly created with tf.ones and tf.zeros (just like np.ones and np.zeros):
End of explanation
"""
x = tf.random.normal(shape=(2, 2), mean=0.0, stddev=1.0)
#x = tf.random.uniform(shape=(2, 2), minval=0, maxval=10, dtype="int32")
print(x)
"""
Explanation: Creating random constant tensors:
End of explanation
"""
initial_value = tf.random.normal(shape=(2, 2))
a = tf.Variable(initial_value)
print(a)
"""
Explanation: Variables
Variables are special tensors used to store mutable state, such as the weights of a model.
You create a Variable from an initial value:
End of explanation
"""
new_value = tf.random.normal(shape=(2, 2))
a.assign(new_value)
print(a)
for i in range(2):
    for j in range(2):
        assert a[i, j] == new_value[i, j]
added_value = tf.random.normal(shape=(2, 2))
a.assign_add(added_value)
for i in range(2):
    for j in range(2):
        assert a[i, j] == new_value[i, j] + added_value[i, j]
"""
Explanation: You update the value of a Variable with .assign(value), .assign_add(increment), or .assign_sub(decrement):
End of explanation
"""
a = tf.random.normal(shape=(2, 2))
b = tf.random.normal(shape=(2, 2))
with tf.GradientTape() as tape:
    tape.watch(a)  # Start recording the history of operations applied to `a`
    c = tf.sqrt(tf.square(a) + tf.square(b))  # Do some math using `a`
    # What's the gradient of `c` with respect to `a`?
    dc_da = tape.gradient(c, a)
print(dc_da)
"""
Explanation: Gradients
Here's another big difference from NumPy: you can automatically retrieve the gradient of any differentiable expression. Just open a GradientTape, start watching a tensor via tape.watch(), and compose a differentiable expression using this tensor as input:
End of explanation
"""
a = tf.random.normal(shape=(2, 2))
b = tf.random.normal(shape=(2, 2))
with tf.GradientTape() as outer_tape:
    outer_tape.watch(a)  # the outer tape must also watch the constant `a`
    with tf.GradientTape() as tape:
        tape.watch(a)
        c = tf.sqrt(tf.square(a) + tf.square(b))
        dc_da = tape.gradient(c, a)
        print(dc_da, type(dc_da))
    d2c_da2 = outer_tape.gradient(dc_da, a)
print(d2c_da2)
"""
Explanation: By nesting tapes, you can compute higher-order derivatives:
End of explanation
"""
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
# Example training data, of dtype `string`.
training_data = np.array([["This is the 1st sample."], ["And here's the 2nd sample."]])
# Create a TextVectorization layer instance. It can be configured to either
# return integer token indices, or a dense token representation (e.g. multi-hot
# or TF-IDF). The text standardization and text splitting algorithms are fully
# configurable.
vectorizer = TextVectorization(output_mode="int")
# Calling `adapt` on an array or dataset makes the layer generate a vocabulary
# index for the data, which can then be reused when seeing new data.
vectorizer.adapt(training_data)
# After calling adapt, the layer is able to encode any n-gram it has seen before
# in the `adapt()` data. Unknown n-grams are encoded via an "out-of-vocabulary"
# token.
integer_data = vectorizer(training_data)
print(integer_data)
"""
Explanation: Data loading & preprocessing
Neural networks don't process raw data like text files, encoded JPEG image files, or CSV files.
They process vectorized and standardized representations.
Text files need to be read into string tensors, then split into words (tokens). Finally, the tokens need to be indexed and turned into integer tensors.
Images need to be read and decoded into integer tensors, then converted to floating point and normalized to small values (usually between 0 and 1).
CSV data needs to be parsed, with numerical features converted to floating-point tensors and categorical features indexed and converted to integer tensors.
Then each feature typically needs to be normalized to zero mean and unit variance.
Let's get started!
Data loading
tf.keras models accept three types of inputs:
NumPy arrays, just like Scikit-Learn and many other Python-based libraries. This is a good option if your data fits in memory.
TensorFlow Dataset objects. This is a high-performance option that is better suited for datasets that do not fit in memory and that are streamed from disk or from a distributed filesystem.
Python generators that yield batches of data (such as custom subclasses of keras.utils.Sequence).
Before you start training a model, you will need to make your data available as one of these formats. If you have a large dataset and you are training on GPU(s), consider using Dataset objects, since they take care of performance-critical details such as:
- Asynchronously preprocessing your data on CPU while your GPU is busy, and buffering it into a queue.
- Prefetching data onto GPU memory so it's immediately available when the GPU has finished processing the previous batch, so that you can reach full GPU utilization.
Keras features a range of utilities to help you turn raw data on disk into a Dataset:
- tf.keras.preprocessing.image_dataset_from_directory turns image files sorted into class-specific folders into a labeled dataset of image tensors.
- tf.keras.preprocessing.text_dataset_from_directory does the same for text files.
In addition, the TensorFlow tf.data module includes other similar utilities, such as tf.data.experimental.make_csv_dataset, to load structured data from CSV files.
Example: obtaining a labeled dataset from image files on disk
Suppose you have image files sorted by class in different folders, like this:
main_directory/
...class_a/
......a_image_1.jpg
......a_image_2.jpg
...class_b/
......b_image_1.jpg
......b_image_2.jpg
Then you can do:
```python
# Create a dataset.
dataset = keras.preprocessing.image_dataset_from_directory(
    'path/to/main_directory', batch_size=64, image_size=(200, 200))
# Iterate over the batches yielded by the dataset.
for data, labels in dataset:
    print(data.shape)  # (64, 200, 200, 3)
    print(data.dtype)  # float32
    print(labels.shape)  # (64,)
    print(labels.dtype)  # int32
```
The label of a sample is the rank of its folder in alphanumeric order. Naturally, this can also be configured explicitly: by passing, e.g., class_names=['class_a', 'class_b'], label 0 will be assigned to class_a and label 1 to class_b.
Example: obtaining a labeled dataset from text files on disk
Likewise, if you have .txt documents sorted by class in different folders, you can do:
```python
dataset = keras.preprocessing.text_dataset_from_directory(
    'path/to/main_directory', batch_size=64)
# Example of iterating over the batches yielded by the dataset.
for data, labels in dataset:
    print(data.shape)  # (64,)
    print(data.dtype)  # string
    print(labels.shape)  # (64,)
    print(labels.dtype)  # int32
```
Data preprocessing with Keras
Once your data is in the form of string/int/float NumPy arrays, or a Dataset object (or Python generator) that yields batches of string/int/float tensors, it is time to preprocess the data. This can mean:
- Tokenization of string data, followed by token indexing.
- Feature normalization.
- Rescaling the data to small values (in general, input values to a neural network should be close to zero; typically we expect either data with zero mean and unit variance, or data in the [0, 1] range).
The ideal machine learning model is end-to-end
In general, you should seek to do data preprocessing as part of your model as much as possible, not via an external preprocessing pipeline, because an external pipeline makes your models less portable when it's time to reuse them. Consider a model that processes text: it uses a specific tokenization algorithm and a specific vocabulary index. When you want to ship your model to a mobile app or a JavaScript app, you will need to recreate the exact same preprocessing setup in the target language. This can get very tricky: any small discrepancy between the original pipeline and the one you recreate has the potential to completely invalidate your model, or at least severely degrade its performance.
It would be much easier to simply export an end-to-end model that already includes preprocessing. The ideal model should expect as input something as close as possible to raw data: an image model should expect RGB pixel values in the [0, 255] range, and a text model should accept strings of utf-8 characters. That way, the consumer of the exported model doesn't have to know about the preprocessing pipeline.
Using Keras preprocessing layers
In Keras, you do in-model data preprocessing via preprocessing layers. This includes:
- Vectorizing raw strings of text via the TextVectorization layer
- Feature normalization via the Normalization layer
- Image rescaling, cropping, or image data augmentation
The key advantage of using Keras preprocessing layers is that they can be included directly into your model, either during training or after training, which makes your models portable.
Some preprocessing layers have a state:
- TextVectorization holds an index mapping words or tokens to integer indices
- Normalization holds the mean and variance of your features
The state of a preprocessing layer is obtained by calling layer.adapt(data) on a sample of the training data (or on all of it).
Example: turning strings into sequences of integer word indices
End of explanation
"""
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
# Example training data, of dtype `string`.
training_data = np.array([["This is the 1st sample."], ["And here's the 2nd sample."]])
# Create a TextVectorization layer instance. It can be configured to either
# return integer token indices, or a dense token representation (e.g. multi-hot
# or TF-IDF). The text standardization and text splitting algorithms are fully
# configurable.
vectorizer = TextVectorization(output_mode="binary", ngrams=2)
# Calling `adapt` on an array or dataset makes the layer generate a vocabulary
# index for the data, which can then be reused when seeing new data.
vectorizer.adapt(training_data)
# After calling adapt, the layer is able to encode any n-gram it has seen before
# in the `adapt()` data. Unknown n-grams are encoded via an "out-of-vocabulary"
# token.
integer_data = vectorizer(training_data)
print(integer_data)
"""
Explanation: Example: turning strings into sequences of one-hot encoded bigrams
End of explanation
"""
from tensorflow.keras.layers.experimental.preprocessing import Normalization
# Example image data, with values in the [0, 255] range
training_data = np.random.randint(0, 256, size=(64, 200, 200, 3)).astype("float32")
normalizer = Normalization(axis=-1)
normalizer.adapt(training_data)
normalized_data = normalizer(training_data)
print("var: %.4f" % np.var(normalized_data))
print("mean: %.4f" % np.mean(normalized_data))
"""
Explanation: Example: normalizing features
End of explanation
"""
from tensorflow.keras.layers.experimental.preprocessing import CenterCrop
from tensorflow.keras.layers.experimental.preprocessing import Rescaling
# Example image data, with values in the [0, 255] range
training_data = np.random.randint(0, 256, size=(64, 200, 200, 3)).astype("float32")
cropper = CenterCrop(height=150, width=150)
scaler = Rescaling(scale=1.0 / 255)
output_data = scaler(cropper(training_data))
print("shape:", output_data.shape)
print("min:", np.min(output_data))
print("max:", np.max(output_data))
"""
Explanation: Example: rescaling & center-cropping images
Both the Rescaling layer and the CenterCrop layer are stateless, so it isn't necessary to call adapt() in this case.
End of explanation
"""
# Let's say we expect our inputs to be RGB images of arbitrary size
inputs = keras.Input(shape=(None, None, 3))
"""
Explanation: Building models with the Keras Functional API
A "layer" is a simple input-output transformation. For instance, the following is a linear projection layer that maps its inputs to a 16-dimensional feature space:
python
dense = keras.layers.Dense(units=16)
A "model" is a directed acyclic graph of layers. You can think of a model as a "bigger layer" that encompasses multiple sublayers and that can be trained by exposing it to data.
The most common and most powerful way to build Keras models is the Functional API. To build a model with it, you start by specifying the shape (and optionally the dtype) of your inputs. If any dimension of your input can vary, you can specify it as None. For instance, an input for 200x200 RGB images would have shape (200, 200, 3), but an input for RGB images of any size would have shape (None, None, 3).
End of explanation
"""
from tensorflow.keras import layers
# Center-crop images to 150x150
x = CenterCrop(height=150, width=150)(inputs)
# Rescale images to [0, 1]
x = Rescaling(scale=1.0 / 255)(x)
# Apply some convolution and pooling layers
x = layers.Conv2D(filters=32, kernel_size=(3, 3), activation="relu")(x)
x = layers.MaxPooling2D(pool_size=(3, 3))(x)
x = layers.Conv2D(filters=32, kernel_size=(3, 3), activation="relu")(x)
x = layers.MaxPooling2D(pool_size=(3, 3))(x)
x = layers.Conv2D(filters=32, kernel_size=(3, 3), activation="relu")(x)
# Apply global average pooling to get flat feature vectors
x = layers.GlobalAveragePooling2D()(x)
# Add a dense classifier on top
num_classes = 10
outputs = layers.Dense(num_classes, activation="softmax")(x)
"""
Explanation: After defining your input(s), you can chain layer transformations on top of them, until you reach the final output:
End of explanation
"""
model = keras.Model(inputs=inputs, outputs=outputs)
"""
Explanation: Once you have defined the directed acyclic graph of layers that turns your input(s) into your outputs, instantiate a Model object:
End of explanation
"""
data = np.random.randint(0, 256, size=(64, 200, 200, 3)).astype("float32")
processed_data = model(data)
print(processed_data.shape)
"""
Explanation: This model behaves basically like a bigger layer. You can call it on batches of data, like this:
End of explanation
"""
model.summary()
"""
Explanation: You can print a summary of how your data gets transformed at each stage of the model. This is very useful for debugging.
Note that the output shape displayed for each layer includes the batch size. Here the batch size is None, which indicates that the model can process batches of any size.
End of explanation
"""
# Get the data as Numpy arrays
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Build a simple model
inputs = keras.Input(shape=(28, 28))
x = layers.experimental.preprocessing.Rescaling(1.0 / 255)(inputs)
x = layers.Flatten()(x)
x = layers.Dense(128, activation="relu")(x)
x = layers.Dense(128, activation="relu")(x)
outputs = layers.Dense(10, activation="softmax")(x)
model = keras.Model(inputs, outputs)
model.summary()
# Compile the model
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# Train the model for 1 epoch from Numpy data
batch_size = 64
print("Fit on NumPy data")
history = model.fit(x_train, y_train, batch_size=batch_size, epochs=1)
# Train the model for 1 epoch using a dataset
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(batch_size)
print("Fit on Dataset")
history = model.fit(dataset, epochs=1)
"""
Explanation: The Functional API also makes it easy to build models that have multiple inputs or outputs.
For an in-depth look at this topic, see the guide to the Functional API.
Training models with the Keras fit() method
At this point, you know:
- How to prepare your data
- How to build a model that processes your data
The next step is to train your model on your data. The Model class features a built-in training loop, the fit() method. It accepts Dataset objects, Python generators that yield batches of data, or NumPy arrays.
Before you can call fit(), you need to specify an optimizer and a loss function. This is the compile() step:
python
model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
              loss=keras.losses.CategoricalCrossentropy())
Loss and optimizer can also be specified via their string identifiers:
python
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
Once your model is compiled, you can start "fitting" it to data. Besides the data, you need to specify two key parameters: the batch_size and the number of epochs (iterations over the data). Here's an example that trains on batches of 32 samples and iterates over the data 10 times:
python
model.fit(numpy_array_of_samples, numpy_array_of_labels,
          batch_size=32, epochs=10)
Here's what fitting a model looks like with a Dataset:
python
model.fit(dataset_of_samples_and_labels, epochs=10)
Since the data yielded by a Dataset is already batched, you usually don't need to specify the batch size here.
Here's a toy example: MNIST digit classification.
End of explanation
"""
print(history.history)
"""
Explanation: The call to fit() returns a "history" object that records what happened over the course of training. The history.history dict contains the metric values for each epoch (in this example there is only one metric, the loss).
End of explanation
"""
model.compile(
optimizer="adam",
loss="sparse_categorical_crossentropy",
metrics=[keras.metrics.SparseCategoricalAccuracy(name="acc")],
)
history = model.fit(dataset, epochs=1)
"""
Explanation: For a detailed overview of fit(), see the
guide to training & evaluation with the built-in Keras methods.
Keeping track of performance metrics
As you train a model, you want to keep track of metrics such as classification accuracy, precision, recall, AUC, etc. Besides, you want to monitor these metrics not only on the training data, but also on a validation set.
Monitoring metrics
You can pass a list of metric objects to compile(), like this:
End of explanation
"""
val_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(batch_size)
history = model.fit(dataset, epochs=1, validation_data=val_dataset)
"""
Explanation: Passing validation data to fit()
You can pass validation data to fit() to monitor your validation loss and validation metrics. Validation metrics get reported at the end of each epoch.
End of explanation
"""
loss, acc = model.evaluate(val_dataset) # returns loss and metrics
print("loss: %.2f" % loss)
print("acc: %.2f" % acc)
"""
Explanation: Using callbacks for checkpointing (and more)
If training goes on for a long time, it's important to save your model at regular intervals during training. You can then use the saved model to restart training in case your training process crashes.
An important feature of Keras is callbacks, configured in fit(). Callbacks are objects that get called by the model at different points during training, in particular:
- At the beginning and end of each batch
- At the beginning and end of each epoch
Callbacks are a way to make model training fully scriptable.
You can use callbacks to periodically save your model.
Example: using a ModelCheckpoint callback to save the model at the end of every epoch.
python
callbacks = [
    keras.callbacks.ModelCheckpoint(
        filepath='path/to/my/model_{epoch}',
        save_freq='epoch')
]
model.fit(dataset, epochs=2, callbacks=callbacks)
You can also use callbacks to do things like periodically change the learning rate of your optimizer, stream the metrics you monitor to a Slack bot, send yourself an email notification, and so on.
For details, see the callbacks API documentation and the
guide to writing custom callbacks.
Monitoring training progress with TensorBoard
The Keras progress bar in the command line isn't the most ergonomic way to monitor your model's loss and metrics. A better option is
TensorBoard, a web application that can display your loss, metrics, and more in real time.
To use it with fit(), simply pass a keras.callbacks.TensorBoard callback:
python
callbacks = [
    keras.callbacks.TensorBoard(log_dir='./logs')
]
model.fit(dataset, epochs=2, callbacks=callbacks)
You can then launch TensorBoard with:
tensorboard --logdir=./logs
Here's more information.
After fit(): evaluating test performance & generating predictions on new data
Once you have a trained model, you can evaluate its loss and metrics on new data via evaluate():
End of explanation
"""
predictions = model.predict(val_dataset)
print(predictions.shape)
"""
Explanation: You can also generate predictions with predict(). Since predict() is meant for new data that has no labels, it does not return a loss or other metrics:
End of explanation
"""
# Example training data, of dtype `string`.
samples = np.array([["This is the 1st sample."], ["And here's the 2nd sample."]])
labels = [[0], [1]]
# Prepare a TextVectorization layer.
vectorizer = TextVectorization(output_mode="int")
vectorizer.adapt(samples)
# Asynchronous preprocessing: the text vectorization is part of the tf.data pipeline.
# First, create a dataset
dataset = tf.data.Dataset.from_tensor_slices((samples, labels)).batch(2)
# Apply text vectorization to the samples
dataset = dataset.map(lambda x, y: (vectorizer(x), y))
# Prefetch with a buffer size of 2 batches
dataset = dataset.prefetch(2)
# Our model should expect sequences of integers as inputs
inputs = keras.Input(shape=(None,), dtype="int64")
x = layers.Embedding(input_dim=10, output_dim=32)(inputs)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse", run_eagerly=True)
model.fit(dataset)
"""
Explanation: Using fit() with a custom training step
By default, fit() is configured for supervised learning. If you need a different kind of training loop (for instance, a GAN training loop), you can provide your own implementation of the Model.train_step() method. This is the method that fit() calls repeatedly under the hood.
Metrics and callbacks will keep working as usual.
Here's a simple example that reimplements what fit() normally does:
```python
class CustomModel(keras.Model):
    def train_step(self, data):
        # Unpack the data. Its structure depends on your model and
        # on what you pass to `fit()`.
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            # Compute the loss value
            # (the loss function is configured in `compile()`)
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        # Update metrics (includes the metric that tracks the loss)
        self.compiled_metrics.update_state(y, y_pred)
        # Return a dict mapping metric names to current value
        return {m.name: m.result() for m in self.metrics}

# Construct and compile an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
model.compile(optimizer='adam', loss='mse', metrics=[...])

# Just use `fit` as usual
model.fit(dataset, epochs=3, callbacks=...)
```
For details, see:
"Customizing what happens in fit()".
Debugging your model with eager execution
If you write custom training steps or custom layers, you will need to debug them.
The debugging experience is an integral part of a framework: with Keras, the debugging workflow is designed with the user in mind.
By default, Keras models are compiled into highly-optimized computation graphs that deliver fast execution. That means the Python code you write (e.g. in a custom train_step) is not the code that is actually executed, which makes debugging harder.
Debugging is best done step by step: you want to sprinkle your code with print() statements to see what happens after each operation, and you may also want to use pdb. To do so, run the model eagerly, so that the Python code you write is the code that gets executed.
Simply pass run_eagerly=True to compile():
python
model.compile(optimizer='adam', loss='mse', run_eagerly=True)
Of course, the downside is that it makes your model significantly slower. Once you are done debugging, switch back to the (default) compiled graph mode for actual training.
Speeding up training with multiple GPUs
tf.keras has built-in industry-strength support for multi-GPU training and distributed multi-worker training, via the tf.distribute API.
If you have multiple GPUs on your machine, you can train your model on all of them by:
- Creating a tf.distribute.MirroredStrategy object
- Building & compiling your model inside the strategy's scope
- Calling fit() and evaluate() as usual
```python
# Create a MirroredStrategy.
strategy = tf.distribute.MirroredStrategy()

# Open a strategy scope.
with strategy.scope():
    # Everything that creates variables should be under the strategy scope.
    # In general this is only model construction & `compile()`.
    model = Model(...)
    model.compile(...)

# Train the model on all available devices.
train_dataset, val_dataset, test_dataset = get_dataset()
model.fit(train_dataset, epochs=2, validation_data=val_dataset)

# Test the model on all available devices.
model.evaluate(test_dataset)
```
For a detailed introduction to multi-GPU & distributed training, see
this guide.
Doing preprocessing synchronously on-device vs. asynchronously on the host CPU
You've learned about preprocessing above, where we put preprocessing layers (CenterCrop and Rescaling) directly inside the model.
Having preprocessing happen as part of the model is great if you want on-device preprocessing, for instance GPU-accelerated feature normalization or image augmentation.
But some kinds of preprocessing are not a good fit for this setup: in particular, text preprocessing with the TextVectorization layer. Due to its sequential nature, and the fact that it can only run on CPU, it's often a better idea to do asynchronous preprocessing on the host CPU.
With asynchronous preprocessing, the preprocessing operations run on CPU, and the preprocessed samples are buffered into a queue while the GPU is busy with the previous batch of data. The next batch of preprocessed samples is then prefetched from the queue into GPU memory right before the GPU becomes available again. This ensures that preprocessing never blocks the GPU and that the GPU can run at full utilization.
To do asynchronous preprocessing, simply use dataset.map to inject a preprocessing operation into your data pipeline:
End of explanation
"""
# Our dataset will yield samples that are strings
dataset = tf.data.Dataset.from_tensor_slices((samples, labels)).batch(2)
# Our model should expect strings as inputs
inputs = keras.Input(shape=(1,), dtype="string")
x = vectorizer(inputs)
x = layers.Embedding(input_dim=10, output_dim=32)(x)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse", run_eagerly=True)
model.fit(dataset)
"""
Explanation: Compare this to doing the text vectorization as part of the model:
End of explanation
"""
|
dusenberrymw/systemml | samples/jupyter-notebooks/Image_Classify_Using_VGG_19.ipynb | apache-2.0 | !pip show systemml
"""
Explanation: Image Classification using the Caffe VGG-19 model
This notebook demonstrates importing the VGG-19 model from Caffe into SystemML and using that model for image classification. The VGG-19 model was trained on the ImageNet dataset (1000 classes, ~14M images). If the image to be predicted belongs to one of the classes VGG-19 was trained on, accuracy will be higher.
We expect the prediction of any image through SystemML using the VGG-19 model to be similar to the prediction of the same image through Caffe using the VGG-19 model directly.
Prerequisites:
SystemML Python Package
To run this notebook you need to install the SystemML 1.0 Python package (master-branch code as of 07/26/2017 or later).
Caffe
If you want to verify the results through Caffe, you need the Caffe Python package or a Caffe installation.
For this verification, Caffe was installed on the local system instead of using the Caffe Python package.
SystemML Python package information
End of explanation
"""
from systemml import MLContext
ml = MLContext(sc)
print ("SystemML Built-Time:"+ ml.buildTime())
print(ml.info())
# Workaround for Python 2.7.13 to avoid certificate validation issue while downloading any file.
import ssl
try:
    _create_unverified_https_context = ssl._create_unverified_context
except AttributeError:
    # Legacy Python that doesn't verify HTTPS certificates by default
    pass
else:
    # Handle target environment that doesn't support HTTPS verification
    ssl._create_default_https_context = _create_unverified_https_context
"""
Explanation: SystemML Build information
The following code shows information about the SystemML build installed in this environment.
End of explanation
"""
# Download caffemodel and proto files
def downloadAndConvertModel(downloadDir='.', trained_vgg_weights='trained_vgg_weights'):
    # Step 1: Download the VGG-19 model and other files.
    import errno
    import os
    import urllib
    # Create the directory; if it already exists, don't error out
    try:
        os.makedirs(os.path.join(downloadDir, trained_vgg_weights))
    except OSError as exc:  # Python >2.5
        if exc.errno == errno.EEXIST and os.path.isdir(trained_vgg_weights):
            pass
        else:
            raise
    # Download the deploy, network, and solver proto files, plus the label file.
    urllib.urlretrieve('https://raw.githubusercontent.com/apache/systemml/master/scripts/nn/examples/caffe2dml/models/imagenet/vgg19/VGG_ILSVRC_19_layers_deploy.proto', os.path.join(downloadDir, 'VGG_ILSVRC_19_layers_deploy.proto'))
    urllib.urlretrieve('https://raw.githubusercontent.com/apache/systemml/master/scripts/nn/examples/caffe2dml/models/imagenet/vgg19/VGG_ILSVRC_19_layers_network.proto', os.path.join(downloadDir, 'VGG_ILSVRC_19_layers_network.proto'))
    urllib.urlretrieve('https://raw.githubusercontent.com/apache/systemml/master/scripts/nn/examples/caffe2dml/models/imagenet/vgg19/VGG_ILSVRC_19_layers_solver.proto', os.path.join(downloadDir, 'VGG_ILSVRC_19_layers_solver.proto'))
    # Get the labels for the data
    urllib.urlretrieve('https://raw.githubusercontent.com/apache/systemml/master/scripts/nn/examples/caffe2dml/models/imagenet/labels.txt', os.path.join(downloadDir, trained_vgg_weights, 'labels.txt'))
    # The following call downloads a ~500 MB model file, so depending on your network it may take a while.
    urllib.urlretrieve('http://www.robots.ox.ac.uk/~vgg/software/very_deep/caffe/VGG_ILSVRC_19_layers.caffemodel', os.path.join(downloadDir, 'VGG_ILSVRC_19_layers.caffemodel'))
    # Step 2: Convert the caffemodel into the trained_vgg_weights directory
    import systemml as sml
    sml.convert_caffemodel(sc, os.path.join(downloadDir, 'VGG_ILSVRC_19_layers_deploy.proto'), os.path.join(downloadDir, 'VGG_ILSVRC_19_layers.caffemodel'), os.path.join(downloadDir, trained_vgg_weights))
    return
"""
Explanation: Download the model and proto files, and convert them to SystemML format
Download the Caffe model (VGG-19), the proto files (deploy, network, and solver), and the label file.
Convert the Caffe model into SystemML input format.
End of explanation
"""
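On Python 3, the try/except-on-EEXIST dance above collapses into a single call. A minimal, standalone sketch (the directory name here is invented for illustration):

```python
import os
import tempfile

download_dir = os.path.join(tempfile.mkdtemp(), 'trained_vgg_weights')
os.makedirs(download_dir, exist_ok=True)  # creates the directory
os.makedirs(download_dir, exist_ok=True)  # second call is a no-op instead of raising OSError
print(os.path.isdir(download_dir))        # → True
```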
# Print top K indices and probability
def printTopK(prob, label, k):
    print(label, 'Top ', k, ' Index : ', np.argsort(-prob)[0, :k])
    print(label, 'Top ', k, ' Probability : ', prob[0, np.argsort(-prob)[0, :k]])
"""
Explanation: PrintTopK
This function will print top K probabilities and indices from the result.
End of explanation
"""
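The top-K selection inside printTopK relies on sorting the negated probabilities; a small standalone example of that trick:

```python
import numpy as np

prob = np.array([[0.10, 0.50, 0.05, 0.30, 0.05]])  # one row of class probabilities
k = 3
top_idx = np.argsort(-prob)[0, :k]   # indices of the k largest probabilities
top_prob = prob[0, top_idx]          # the probabilities themselves, in descending order
print(top_idx, top_prob)             # → [1 3 0] [0.5 0.3 0.1]
```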
import os

def getCaffeLabel(url, printTopKData, topK, size=(224, 224), modelDir='trained_vgg_weights'):
    import caffe
    urllib.urlretrieve(url, 'test.jpg')
    image = caffe.io.resize_image(caffe.io.load_image('test.jpg'), size)
    image = [(image * 255).astype(np.float)]
    deploy_file = 'VGG_ILSVRC_19_layers_deploy.proto'
    caffemodel_file = 'VGG_ILSVRC_19_layers.caffemodel'
    net = caffe.Classifier(deploy_file, caffemodel_file)
    caffe_prob = net.predict(image)
    caffe_prediction = caffe_prob.argmax(axis=1)
    if printTopKData:
        printTopK(caffe_prob, 'Caffe', topK)
    import pandas as pd
    labels = pd.read_csv(os.path.join(modelDir, 'labels.txt'), names=['index', 'label'])
    caffe_prediction_labels = [labels[labels.index == x][['label']].values[0][0] for x in caffe_prediction]
    return net, caffe_prediction_labels
"""
Explanation: Classify image using Caffe
Prerequisite: you need Caffe installed on the system (or at least the Caffe Python package) to run this code.
This classifies the image using Caffe directly.
It can be used to verify that the classification obtained through SystemML matches the one obtained directly through Caffe.
End of explanation
"""
import numpy as np
import urllib
from systemml.mllearn import Caffe2DML
import systemml as sml

# Setting anything other than the current directory causes a "network file not found" issue, as the
# network file location is defined in the solver file without a path, so it is searched for in the current dir.
downloadDir = '.'  # e.g. '/home/asurve/caffe_models'
trained_vgg_weights = 'trained_vgg_weights'
img_shape = (3, 224, 224)
size = (img_shape[1], img_shape[2])

def classifyImages(urls, printTopKData=False, topK=5, caffeInstalled=False):
    downloadAndConvertModel(downloadDir, trained_vgg_weights)
    vgg = Caffe2DML(sqlCtx, solver=os.path.join(downloadDir, 'VGG_ILSVRC_19_layers_solver.proto'), input_shape=img_shape)
    vgg.load(trained_vgg_weights)
    for url in urls:
        outFile = 'inputTest.jpg'
        urllib.urlretrieve(url, outFile)
        from IPython.display import Image, display
        display(Image(filename=outFile))
        print("Prediction of above image to ImageNet Class using")
        # Do image classification through SystemML processing
        from PIL import Image
        input_image = sml.convertImageToNumPyArr(Image.open(outFile), img_shape=img_shape,
                                                 color_mode='BGR', mean=sml.getDatasetMean('VGG_ILSVRC_19_2014'))
        print("Image preprocessed through SystemML :: ", vgg.predict(input_image)[0])
        if printTopKData:
            sysml_proba = vgg.predict_proba(input_image)
            printTopK(sysml_proba, 'SystemML BGR', topK)
        if caffeInstalled:
            net, caffeLabel = getCaffeLabel(url, printTopKData, topK, size, os.path.join(downloadDir, trained_vgg_weights))
            print("Image classification through Caffe :: ", caffeLabel[0])
            print("Caffe input data through SystemML :: ", vgg.predict(np.matrix(net.blobs['data'].data.flatten()))[0])
            if printTopKData:
                sysml_proba = vgg.predict_proba(np.matrix(net.blobs['data'].data.flatten()))
                printTopK(sysml_proba, 'With Caffe input data', topK)
"""
Explanation: Classify images
This function classifies the images specified through urls.
Input Parameters:
urls: list of URLs
printTopKData (default False): whether to print the top K indices and probabilities
topK: how many top elements (K) to display.
caffeInstalled (default False): whether Caffe is installed. If it is, the image is also classified through Caffe (with top K probabilities and indices, depending on printTopKData).
End of explanation
"""
printTopKData=False
topK=5
caffeInstalled=False
urls = ['https://upload.wikimedia.org/wikipedia/commons/thumb/5/58/MountainLion.jpg/312px-MountainLion.jpg', 'https://s-media-cache-ak0.pinimg.com/originals/f2/56/59/f2565989f455984f206411089d6b1b82.jpg', 'http://i2.cdn.cnn.com/cnnnext/dam/assets/161207140243-vanishing-elephant-closeup-exlarge-169.jpg', 'http://wallpaper-gallery.net/images/pictures-of-lilies/pictures-of-lilies-7.jpg', 'https://cdn.pixabay.com/photo/2012/01/07/21/56/sunflower-11574_960_720.jpg', 'https://image.shutterstock.com/z/stock-photo-bird-nest-on-tree-branch-with-five-blue-eggs-inside-108094613.jpg', 'https://i.ytimg.com/vi/6jQDbIv0tDI/maxresdefault.jpg','https://cdn.pixabay.com/photo/2016/11/01/23/53/cat-1790093_1280.jpg']
classifyImages(urls,printTopKData, topK, caffeInstalled)
"""
Explanation: Sample API call to classify image
There are a couple of parameters to set based on what you are looking for.
1. printTopKData (default False): if this parameter is set to True, the top K results (probabilities and indices) will be displayed.
2. topK (default 5): how many entities (K) to display.
3. caffeInstalled (default False): whether Caffe is installed. If not, verification through Caffe won't be done.
End of explanation
"""
|
oscar6echo/ezhc | demo_ezhc.ipynb | mit | df = hc.sample.df_timeseries(N=2, Nb_bd=15+0*3700) #<=473
df.info()
display(df.head())
display(df.tail())
g = hc.Highstock()
g.chart.width = 650
g.chart.height = 550
g.legend.enabled = True
g.legend.layout = 'horizontal'
g.legend.align = 'center'
g.legend.maxHeight = 100
g.tooltip.enabled = True
g.tooltip.valueDecimals = 2
g.exporting.enabled = True
g.chart.zoomType = 'xy'
g.title.text = 'Time series plotted with HighStock'
g.subtitle.text = 'Transparent access to the underlying js lib'
g.plotOptions.series.compare = 'percent'
g.yAxis.labels.formatter = hc.scripts.FORMATTER_PERCENT
g.tooltip.pointFormat = hc.scripts.TOOLTIP_POINT_FORMAT_PERCENT
g.tooltip.positioner = hc.scripts.TOOLTIP_POSITIONER_CENTER_TOP
g.xAxis.gridLineWidth = 1.0
g.xAxis.gridLineDashStyle = 'Dot'
g.yAxis.gridLineWidth = 1.0
g.yAxis.gridLineDashStyle = 'Dot'
g.credits.enabled = True
g.credits.text = 'Source: XXX Flow Strategy & Solutions.'
g.credits.href = 'http://www.example.com'
g.series = hc.build.series(df)
g.plot(save=False, version='6.1.2', center=True)
## IF BEHIND A CORPORATE PROXY
## IF NO PROXY IS PASSED TO .plot() THEN NO HIGHCHARTS VERSION UPDATE IS PERFORMED
## HARDCODED VERSIONS ARE USED INSTEAD
# p = hc.Proxy('mylogin', 'mypwd', 'myproxyhost', 'myproxyport')
# g.plot(save=False, version='latest', proxy=p)
options_as_dict = g.options_as_dict()
options_as_dict
options_as_json = g.options_as_json()
options_as_json
"""
Explanation: Examples
reproduced from http://www.highcharts.com/demo/ and http://www.highcharts.com/stock/demo
plot() has the following arguments:
save=True and optionally save_name and optionally save_path (default='saved') will save the graph as a stand alone HTML doc under save_path after creating it if necessary
notebook (default=True) will not inject require and jquery libs as they are already available in the classical notebook. Set to False to inject them.
version (default='latest') will specify the highcharts version to use. It is recommended to leave the default value (6.1.2 as of 4sep18).
proxy (default=None') is necessary if you want to check from highcharts release page what the latest version is, and update the list of all past versions. If no proxy is provided, the versions are hardcoded in the source code.
options_as_dict() will return highchart/highstocks options as a Python dictionary
args: chart_id to specify which div for rendering
options_as_json() will return highchart/highstocks options as json
args: Same save options as plot()
Times series
Example 1
End of explanation
"""
df = hc.sample.df_timeseries(N=3, Nb_bd=2000)
df['Cash'] = 1.0+0.02/260
df['Cash'] = df['Cash'].cumprod()
display(df.head())
display(df.tail())
g = hc.Highstock()
g.chart.height = 550
g.legend.enabled = True
g.legend.layout = 'horizontal'
g.legend.align = 'center'
g.legend.maxHeight = 100
g.tooltip.enabled = True
g.tooltip.valueDecimals = 2
g.exporting.enabled = True
g.chart.zoomType = 'xy'
g.title.text = 'Time series plotted with HighStock'
g.subtitle.text = 'Transparent access to the underlying js lib'
g.plotOptions.series.compare = 'percent'
g.yAxis.labels.formatter = hc.scripts.FORMATTER_PERCENT
g.tooltip.pointFormat = hc.scripts.TOOLTIP_POINT_FORMAT_PERCENT
g.tooltip.positioner = hc.scripts.TOOLTIP_POSITIONER_CENTER_TOP
g.xAxis.gridLineWidth = 1.0
g.xAxis.gridLineDashStyle = 'Dot'
g.yAxis.gridLineWidth = 1.0
g.yAxis.gridLineDashStyle = 'Dot'
g.credits.enabled = True
g.credits.text = 'Source: XXX Flow Strategy & Solutions.'
g.credits.href = 'http://www.example.com'
g.series = hc.build.series(df, visible={'Track3': False})
g.plot(save=True, version='6.1.2', save_name='NoTable')
"""
Explanation: Example 2
End of explanation
"""
# g.plot_with_table_1(dated=False, version='6.1.2', save=True, save_name='Table1')
"""
Explanation: Example 3
Exception
The functions plot_with_table_1() and plot_with_table_2() are exceptions to the idea of this module: they are NOT just transparent access to Highcharts/Highstock. I added a table (based on datatables.net) to display more data about the period selected. These measurements cannot be calculated beforehand, so they have to be computed as post-processing.
If save=True, plot_with_table_1/2() will create a standalone HTML file containing the output in subdirectory 'saved'. Optionally save_name can be set - an automatic time tag is added to keep things orderly, unless dated=False.
NOTE: Because of a CSS collision between the notebook and datatables, the table in the saved file looks better than in the notebook output area.
End of explanation
"""
g.plotOptions.series.compare = 'value'
g.yAxis.labels.formatter = hc.scripts.FORMATTER_BASIC
g.tooltip.pointFormat = hc.scripts.TOOLTIP_POINT_FORMAT_BASIC
g.tooltip.formatter = hc.scripts.FORMATTER_QUANTILE
disclaimer = """
THE VALUE OF YOUR INVESTMENT MAY FLUCTUATE.
THE FIGURES RELATING TO SIMULATED PAST PERFORMANCES REFER TO PAST
PERIODS AND ARE NOT A RELIABLE INDICATOR OF FUTURE RESULTS.
THIS ALSO APPLIES TO HISTORICAL MARKET DATA.
"""
template_footer = hc.scripts.TEMPLATE_DISCLAIMER
create_footer = hc.scripts.from_template
logo_path = hc.scripts.PATH_TO_LOGO_SG
# logo_path = 'http://img.talkandroid.com/uploads/2015/11/Chrome-Logo.png'
# logo_path = hc.scripts.image_src('http://img.talkandroid.com/uploads/2015/11/Chrome-Logo.png')
footer = create_footer(template_footer, comment=disclaimer, img_logo=logo_path)
g.plot_with_table_2(dated=False, version='6.1.2', save=True, save_name='Table2', footer=footer)
"""
Explanation: Example 4
Footer
A footer can be added to the plot. This is interesting if the plot is saved as a stand alone file.
The footer is HTML you can write from scratch but a helper function and a jinja template make it easy.
Images are embedded upon save so the saved file is standalone. Only an internet connection is required to download the js libraries.
End of explanation
"""
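The ezhc helper renders a jinja template with keyword arguments such as comment and img_logo. The general pattern can be sketched with the stdlib alone — the template markup below is invented for illustration, not ezhc's actual TEMPLATE_DISCLAIMER:

```python
from string import Template

# Hypothetical stand-in for the jinja-based footer template
template_footer = Template(
    '<div class="footer"><img src="$img_logo"/><p>$comment</p></div>'
)
footer = template_footer.substitute(
    comment='THE VALUE OF YOUR INVESTMENT MAY FLUCTUATE.',
    img_logo='logo.png',
)
print(footer)
```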
df = hc.sample.df_one_idx_several_col()
df
g = hc.Highcharts()
g.chart.type = 'column'
g.chart.width = 500
g.chart.height = 300
# g.plotOptions.column.animation = False
g.title.text = 'Basic Bar Chart'
g.yAxis.title.text = 'Fruit Consumption'
g.xAxis.categories = list(df.index)
g.series = hc.build.series(df)
g.plot(center=True, save=True, version='6.1.2', save_name='test', dated=False)
g.plotOptions.column.stacking = 'normal'
g.title.text = 'Stack Bar Chart'
g.yAxis.title.text = 'Total Fruit Consumption'
g.plot(version='6.1.2')
g.plotOptions.column.stacking = 'percent'
g.yAxis.title.text = 'Fruit Consumption Distribution'
g.plot(version='6.1.2')
g = hc.Highcharts()
g.chart.type = 'bar'
g.chart.width = 500
g.chart.height = 400
g.title.text = 'Basic Bar Chart'
g.xAxis.title.text = 'Fruit Consumption'
g.xAxis.categories = list(df.index)
g.series = hc.build.series(df)
g.plot()
g.plotOptions.bar.stacking = 'normal'
g.title.text = 'Stacked Bar Chart'
g.xAxis.title.text = 'Total Fruit Consumption'
g.plot(version='6.1.2')
g.plotOptions.bar.stacking = 'percent'
g.title.text = 'Stacked Bar Chart'
g.xAxis.title.text = 'Fruit Consumption Distribution'
g.plot(version='6.1.2')
"""
Explanation: Column, Bar
End of explanation
"""
df = hc.sample.df_one_idx_one_col()
df
g = hc.Highcharts()
g.chart.type = 'pie'
g.chart.width = 400
g.chart.height = 400
gpo = g.plotOptions.pie
gpo.showInLegend = True
gpo.dataLabels.enabled = False
g.title.text = 'Browser Market Share'
g.series = hc.build.series(df)
g.plot(version='6.1.2')
g.chart.width = 400
g.chart.height = 300
gpo.showInLegend = False
gpo.dataLabels.enabled = True
gpo.startAngle = -90
gpo.endAngle = 90
gpo.innerSize = '40%'
gpo.center = ['50%', '95%']
g.plot(version='6.1.2')
"""
Explanation: Pie
End of explanation
"""
df = hc.sample.df_two_idx_one_col()
df.head()
g = hc.Highcharts()
g.chart.type = 'pie'
g.chart.width = 500
g.chart.height = 500
g.exporting = False
gpo = g.plotOptions.pie
gpo.showInLegend = False
gpo.dataLabels.enabled = True
gpo.center = ['50%', '50%']
gpo.size = '65%'
g.drilldown.drillUpButton.position = {'x': 0, 'y': 0}
g.title.text = 'Browser Market Share'
g.series, g.drilldown.series = hc.build.series_drilldown(df)
g.plot(version='6.1.2')
g = hc.Highcharts()
g.chart.type = 'bar'
g.chart.width = 500
g.chart.height = 500
g.exporting = False
gpo = g.plotOptions.pie
gpo.showInLegend = False
gpo.dataLabels.enabled = True
gpo.center = ['50%', '50%']
gpo.size = '65%'
g.drilldown.drillUpButton.position = {'x': 0, 'y': 0}
g.title.text = 'Browser Market Share'
g.series, g.drilldown.series = hc.build.series_drilldown(df)
g.plot()
"""
Explanation: Pie, Column Drilldown
End of explanation
"""
df = hc.sample.df_several_idx_one_col_2()
df.head()
df
# g = hc.Highcharts()
# g.chart.type = 'pie'
# g.chart.width = 500
# g.chart.height = 500
# g.exporting = False
# gpo = g.plotOptions.pie
# gpo.showInLegend = False
# gpo.dataLabels.enabled = True
# gpo.center = ['50%', '50%']
# gpo.size = '65%'
# g.drilldown.drillUpButton.position = {'x': 0, 'y': 0}
# g.title.text = 'World Population'
# g.series, g.drilldown.series = hc.build.series_drilldown(df, top_name='World')
# # g.plot(version='6.1.2')
"""
Explanation: Pie Drilldown - 3 levels
Any number of levels works
End of explanation
"""
df = hc.sample.df_one_idx_two_col()
df.head()
g = hc.Highcharts()
g.chart.type = 'columnrange'
g.chart.inverted = True
g.chart.width = 700
g.chart.height = 400
gpo = g.plotOptions.columnrange
gpo.dataLabels.enabled = True
gpo.dataLabels.formatter = 'function() { return this.y + "°C"; }'
g.tooltip.valueSuffix = '°C'
g.xAxis.categories, g.series = hc.build.series_range(df)
g.series[0]['name'] = 'Temperature'
g.yAxis.title.text = 'Temperature (°C)'
g.xAxis.title.text = 'Month'
g.title.text = 'Temperature Variations by Month'
g.subtitle.text = 'Vik, Norway'
g.legend.enabled = False
g.plot(save=True, save_name='index', version='6.1.2', dated=False, notebook=False)
"""
Explanation: Column Range
End of explanation
"""
df = hc.sample.df_scatter()
df.head()
g = hc.Highcharts()
g.chart.type = 'scatter'
g.chart.width = 700
g.chart.height = 500
g.chart.zoomType = 'xy'
g.exporting = False
g.plotOptions.scatter.marker.radius = 5
g.tooltip.headerFormat = '<b>Sex: {series.name}</b><br>'
g.tooltip.pointFormat = '{point.x} cm, {point.y} kg'
g.legend.layout = 'vertical'
g.legend.align = 'left'
g.legend.verticalAlign = 'top'
g.legend.x = 100
g.legend.y = 70
g.legend.floating = True
g.legend.borderWidth = 1
g.xAxis.title.text = 'Height (cm)'
g.yAxis.title.text = 'Weight (kg)'
g.title.text = 'Height Versus Weight of 507 Individuals by Gender'
g.subtitle.text = 'Source: Heinz 2003'
g.series = hc.build.series_scatter(df, color_column='Sex',
color={'Female': 'rgba(223, 83, 83, .5)',
'Male': 'rgba(119, 152, 191, .5)'})
g.plot(version='6.1.2')
"""
Explanation: Scatter - 1
End of explanation
"""
df = hc.sample.df_scatter()
df['Tag'] = np.random.choice(range(int(1e5)), size=len(df), replace=False)
df.head()
g = hc.Highcharts()
g.chart.type = 'scatter'
g.chart.width = 700
g.chart.height = 500
g.chart.zoomType = 'xy'
g.exporting = False
g.plotOptions.scatter.marker.radius = 5
g.tooltip.headerFormat = '<b>Sex: {series.name}</b><br><b>Tag: {point.key}</b><br>'
g.tooltip.pointFormat = '{point.x} cm, {point.y} kg'
g.legend.layout = 'vertical'
g.legend.align = 'left'
g.legend.verticalAlign = 'top'
g.legend.x = 100
g.legend.y = 70
g.legend.floating = True
g.legend.borderWidth = 1
g.xAxis.title.text = 'Height (cm)'
g.yAxis.title.text = 'Weight (kg)'
g.title.text = 'Height Versus Weight of 507 Individuals by Gender'
g.subtitle.text = 'Source: Heinz 2003'
g.series = hc.build.series_scatter(df, color_column='Sex', title_column='Tag',
color={'Female': 'rgba(223, 83, 83, .5)',
'Male': 'rgba(119, 152, 191, .5)'})
g.plot(version='6.1.2')
"""
Explanation: Scatter - 2
End of explanation
"""
df = hc.sample.df_bubble()
df.head()
g = hc.Highcharts()
g.chart.type = 'bubble'
g.chart.width = 700
g.chart.height = 500
g.chart.zoomType = 'xy'
g.plotOptions.bubble.minSize = 20
g.plotOptions.bubble.maxSize = 60
g.legend.enabled = True
g.title.text = 'Bubbles'
g.series = hc.build.series_bubble(df, color={'A': 'rgba(223, 83, 83, .5)', 'B': 'rgba(119, 152, 191, .5)'})
g.plot(version='6.1.2')
"""
Explanation: Bubble
End of explanation
"""
df = hc.sample.df_several_idx_one_col()
df.head()
colors = ['#7cb5ec', '#434348', '#90ed7d', '#f7a35c', '#8085e9',
'#f15c80', '#e4d354', '#2b908f', '#f45b5b', '#91e8e1']
points = hc.build.series_tree(df, set_color=True, colors=colors, set_value=True, precision=2)
points[:5]
g = hc.Highcharts()
g.chart.type = 'treemap'
g.chart.width = 900
g.chart.height = 600
g.title.text = 'Global Mortality Rate 2012, per 100 000 population'
g.subtitle.text = 'Click points to drill down.\nSource: \
<a href="http://apps.who.int/gho/data/node.main.12?lang=en">WHO</a>.'
g.exporting = False
g.series = [{
'type': "treemap",
'layoutAlgorithm': 'squarified',
'allowDrillToNode': True,
'dataLabels': {
'enabled': False
},
'levelIsConstant': False,
'levels': [{
'level': 1,
'dataLabels': {
'enabled': True
},
'borderWidth': 3
}],
'data': points,
}]
g.plot(version='6.1.2')
"""
Explanation: Treemap
Building the points datastructure cannot be wrapped without losing flexibility
Example (data and points datastructure taken from http://jsfiddle.net/gh/get/jquery/1.9.1/highslide-software/highcharts.com/tree/master/samples/highcharts/demo/treemap-large-dataset/)
End of explanation
"""
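As a rough illustration of the shape involved (data invented), the points list Highcharts expects is flat, with child points linked to their parents via id/parent keys:

```python
# Hypothetical two-level data: top-level group -> {leaf name: value}
data = {'Browsers': {'Chrome': 60.0, 'Firefox': 20.0},
        'Operating Systems': {'Windows': 75.0}}

points = []
for i, (top, children) in enumerate(sorted(data.items())):
    top_id = 'id_{}'.format(i)
    points.append({'id': top_id, 'name': top})          # parent point
    for j, (name, value) in enumerate(sorted(children.items())):
        points.append({'id': '{}_{}'.format(top_id, j),  # leaf point
                       'name': name, 'parent': top_id, 'value': value})

print(len(points))  # → 5: two parents plus three leaves
```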
df = hc.sample.df_two_idx_one_col()
df.head()
points = hc.build.series_tree(df, set_total=True, name_total='Total',
set_color=False,
set_value=False, precision=2)
points[:5]
g = hc.Highcharts()
g.chart.type = 'sunburst'
g.title.text = 'Browser Market Share'
g.plotOptions.series.animation = True
g.chart.height = '80%'
g.chart.animation = True
g.exporting = False
g.tooltip = {
'headerFormat': "",
'pointFormat': '<b>{point.name}</b> Market Share is <b>{point.value:,.3f}</b>'
}
g.series = [{
'type': 'sunburst',
'data': points,
'allowDrillToNode': True,
'cursor': 'pointer',
'dataLabels': {
'format': '{point.name}',
'filter': {
'property': 'innerArcLength',
'operator': '>',
'value': 16
}
},
'levels': [{
'level': 2,
'colorByPoint': True,
'dataLabels': {
'rotationMode': 'parallel'
}
},
{
'level': 3,
'colorVariation': {
'key': 'brightness',
'to': -0.5
}
}, {
'level': 4,
'colorVariation': {
'key': 'brightness',
'to': 0.5
}
}]
}]
g.plot(version='6.1.2')
"""
Explanation: Sunburst - 2 levels
End of explanation
"""
df = hc.sample.df_several_idx_one_col_2()
df.head()
points = hc.build.series_tree(df, set_total=True, name_total='World',
set_value=False, set_color=False, precision=0)
points[:5]
g = hc.Highcharts()
g.chart.type = 'sunburst'
g.chart.height = '90%'
g.chart.animation = True
g.title.text = 'World population 2017'
g.subtitle.text = 'Source: <a href="https://en.wikipedia.org/wiki/List_of_countries_by_population_(United_Nations)">Wikipedia</a>'
g.exporting = False
g.series = [{
'type': "sunburst",
'data': points,
'allowDrillToNode': True,
'cursor': 'pointer',
'dataLabels': {
'format': '{point.name}',
'filter': {
'property': 'innerArcLength',
'operator': '>',
'value': 16
}
},
'levels': [{
'level': 2,
'colorByPoint': True,
'dataLabels': {
'rotationMode': 'parallel'
}
},
{
'level': 3,
'colorVariation': {
'key': 'brightness',
'to': -0.5
}
}, {
'level': 4,
'colorVariation': {
'key': 'brightness',
'to': 0.5
}
}]
}]
g.plot(version='6.1.2')
"""
Explanation: Sunburst - 3 levels
Any number of levels works
End of explanation
"""
df = pd.DataFrame(data=np.array([[8, 7, 6, 5, 4, 3, 2, 1],
[1, 2, 3, 4, 5, 6, 7, 8],
[1, 8, 2, 7, 3, 6, 4, 5]]).T,
columns=['column', 'line', 'area'])
df
g = hc.Highcharts()
g.chart.polar = True
g.chart.width = 500
g.chart.height = 500
g.title.text = 'Polar Chart'
g.pane.startAngle = 0
g.pane.endAngle = 360
g.pane.background = [{'backgroundColor': '#FFF',
'borderWidth': 0
}]
g.xAxis.tickInterval = 45
g.xAxis.min = 0
g.xAxis.max = 360
g.xAxis.labels.formatter = 'function() { return this.value + "°"; }'
g.yAxis.min = 0
g.plotOptions.series.pointStart = 0
g.plotOptions.series.pointInterval = 45
g.plotOptions.column.pointPadding = 0
g.plotOptions.column.groupPadding = 0
g.series = [{
'type': 'column',
'name': 'Column',
'data': list(df['column']),
'pointPlacement': 'between',
}, {
'type': 'line',
'name': 'Line',
'data': list(df['line']),
}, {
'type': 'area',
'name': 'Area',
'data': list(df['area']),
}
]
g.plot(version='6.1.2')
"""
Explanation: Polar Chart
End of explanation
"""
df = pd.DataFrame(data=np.array([[43000, 19000, 60000, 35000, 17000, 10000],
[50000, 39000, 42000, 31000, 26000, 14000]]).T,
columns=['Allocated Budget', 'Actual Spending'],
index = ['Sales', 'Marketing', 'Development', 'Customer Support',
'Information Technology', 'Administration'])
df
g = hc.Highcharts()
g.chart.polar = True
g.chart.width = 650
g.chart.height = 500
g.title.text = 'Budget vs. Spending'
g.title.x = -80
g.pane.size = '80%'
g.pane.background = [{'backgroundColor': '#FFF',
'borderWidth': 0
}]
g.xAxis.tickmarkPlacement = 'on'
g.xAxis.lineWidth = 0
g.xAxis.categories = list(df.index)
g.yAxis.min = 0
g.yAxis.lineWidth = 0
g.yAxis.gridLineInterpolation = 'polygon'
g.tooltip.pointFormat = '<span style="color:{series.color}">{series.name}: <b>${point.y:,.0f}</b><br/>'
g.tooltip.shared = True
g.legend.align = 'right'
g.legend.verticalAlign = 'top'
g.legend.y = 70
g.legend.layout = 'vertical'
g.series = [{
'name': 'Allocated Budget',
'data': list(df['Allocated Budget']),
'pointPlacement': 'on'
}, {
'name': 'Actual Spending',
'data': list(df['Actual Spending']),
'pointPlacement': 'on'
},
]
g.plot(version='6.1.2')
"""
Explanation: Spider Web
End of explanation
"""
df = hc.sample.df_two_idx_several_col()
df.info()
display(df.head(10))
display(df.tail(10))
g = hc.Highcharts()
# g.chart.type = 'column'
g.chart.polar = True
g.plotOptions.series.animation = True
g.chart.width = 950
g.chart.height = 700
g.pane.size = '90%'
g.title.text = 'Perf (%) Contrib by Strategy & Period'
g.xAxis.type = 'category'
g.xAxis.tickmarkPlacement = 'on'
g.xAxis.lineWidth = 0
g.yAxis.gridLineInterpolation = 'polygon'
g.yAxis.lineWidth = 0
g.yAxis.plotLines = [{'color': 'gray', 'value': 0, 'width': 1.5}]
g.tooltip.pointFormat = '<span style="color:{series.color}">{series.name}: <b>{point.y:,.3f}%</b><br/>'
g.tooltip.shared = False
g.legend.enabled = True
g.legend.align = 'right'
g.legend.verticalAlign = 'top'
g.legend.y = 70
g.legend.layout = 'vertical'
# color names from http://www.w3schools.com/colors/colors_names.asp
# color rgba() codes from http://www.hexcolortool.com/
g.series, g.drilldown.series = hc.build.series_drilldown(df, colorByPoint=False,
color={'5Y': 'indigo'},
# color={'5Y': 'rgba(136, 110, 166, 1)'}
)
g.plot(save=True, save_name='ContribTable', version='6.1.2')
"""
Explanation: Spider Web DrillDown
End of explanation
"""
df_obs = pd.DataFrame(data=np.array([[760, 801, 848, 895, 965],
[733, 853, 939, 980, 1080],
[714, 762, 817, 870, 918],
[724, 802, 806, 871, 950],
[834, 836, 864, 882, 910]]),
index=list('ABCDE'))
display(df_obs)
# x, y positions where 0 is the first category
df_outlier = pd.DataFrame(data=np.array([[0, 644],
[4, 718],
[4, 951],
[4, 969]]))
display(df_outlier)
colors = ['#7cb5ec', '#434348', '#90ed7d', '#f7a35c', '#8085e9',
'#f15c80', '#e4d354', '#2b908f', '#f45b5b', '#91e8e1']
g = hc.Highcharts()
g.chart.type = 'boxplot'
g.chart.width = 850
g.chart.height = 500
g.title.text = 'Box Plot Example'
g.legend.enabled = False
g.xAxis.categories = list(df_obs.index)
g.xAxis.title.text = 'Experiment'
g.yAxis.title.text = 'Observations'
g.yAxis.plotLines= [{
'value': 932,
'color': 'red',
'width': 1,
'label': {
'text': 'Theoretical mean: 932',
'align': 'center',
'style': { 'color': 'gray' }
}
}]
g.series = []
g.series.append({
'name': 'Observations',
'data': list(df_obs.values),
'tooltip': { 'headerFormat': '<em>Experiment No {point.key}</em><br/>' },
})
g.series.append({
'name': 'Outlier',
'color': colors[0],
'type': 'scatter',
'data': list(df_outlier.values),
'marker': {
'fillColor': 'white',
'lineWidth': 1,
'lineColor': colors[0],
},
'tooltip': { 'pointFormat': 'Observation: {point.y}' }
})
g.plot(version='6.1.2')
"""
Explanation: Box Plot
End of explanation
"""
df = hc.sample.df_one_idx_several_col_2()
df
colors = ['#7cb5ec', '#434348', '#90ed7d', '#f7a35c', '#8085e9',
'#f15c80', '#e4d354', '#2b908f', '#f45b5b', '#91e8e1']
idx, col, data = hc.build.series_heatmap(df)
g = hc.Highcharts()
g.chart.type = 'heatmap'
g.chart.width = 650
g.chart.height = 450
g.title.text = 'Sales per employee per weekday'
g.xAxis.categories = idx
g.yAxis.categories = col
g.yAxis.title = ''
g.colorAxis = {
'min': 0,
'minColor': '#FFFFFF',
'maxColor': colors[0],
}
g.legend = {
'align': 'right',
'layout': 'vertical',
'margin': 0,
'verticalAlign': 'top',
'y': 25,
'symbolHeight': 280
}
g.tooltip = {
'formatter': """function () {
return '<b>' + this.series.xAxis.categories[this.point.x] + '</b> sold <br><b>' +
this.point.value + '</b> items on <br><b>' + this.series.yAxis.categories[this.point.y] + '</b>';
}"""
}
g.series = []
g.series.append({
'name': 'Sales per Employee',
'borderWidth': 1,
'data': data,
'dataLabels': {
'enabled': True,
'color': '#000000',
}
})
g.plot(version='6.1.2')
"""
Explanation: Heatmap
End of explanation
"""
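series_heatmap ultimately boils down to flattening a 2-D table into [x, y, value] triples, one per cell. A hand-rolled sketch of that flattening (numbers invented):

```python
# rows = employees (x axis), columns = weekdays (y axis)
table = [[10, 19, 8],
         [92, 58, 78]]

data = [[x, y, table[x][y]]
        for x in range(len(table))
        for y in range(len(table[0]))]
print(data)  # → [[0, 0, 10], [0, 1, 19], [0, 2, 8], [1, 0, 92], [1, 1, 58], [1, 2, 78]]
```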
g = hc.Highcharts()
g.yAxis.info()
g.yAxis.labels.format.info()
g = hc.Highstock()
g.plotOptions.info()
g = hc.Highcharts()
g.legend.align.info()
"""
Explanation: Direct access to Highcharts/Highstock documentation
Navigate the object property tree
An info() method gives the official help
WARNING: Once a property is set, the info method is not accessible any more
End of explanation
"""
|
tbarrongh/cosc-learning-labs | src/notebook/03_interface_shutdown.ipynb | apache-2.0 | help('learning_lab.03_interface_shutdown')
"""
Explanation: COSC Learning Lab
03_interface_shutdown.py
Related Scripts:
* 03_interface_startup.py
* 03_interface_configuration.py
Table of Contents
Table of Contents
Documentation
Implementation
Execution
HTTP
Documentation
End of explanation
"""
from importlib import import_module
script = import_module('learning_lab.03_interface_shutdown')
from inspect import getsource
print(getsource(script.main))
print(getsource(script.demonstrate))
"""
Explanation: Implementation
End of explanation
"""
run ../learning_lab/03_interface_shutdown.py
"""
Explanation: Execution
End of explanation
"""
from basics.odl_http import http_history
from basics.http import http_history_to_html
from IPython.core.display import HTML
HTML(http_history_to_html(http_history()))
"""
Explanation: HTTP
End of explanation
"""
|
MinnowBoard/fishbowl-notebooks | TFT_LCD.ipynb | mit | # Get the Python Imaging Libraries for drawing shapes and working with images
import Image
import ImageDraw
import ImageFont
# Get our driver and GPIO libraries
import pyDrivers.ILI9341 as TFT
import Adafruit_GPIO.GPIO as GPIO
import Adafruit_GPIO.SPI as SPI
# Minnowboard MAX configuration.
DC = 25
RST = 26
SPI_PORT = 0
SPI_DEVICE = 0
"""
Explanation: Thin-Film-Transistor LCDs
Over the SPI interface, you can communicate with various devices using the Minnowboard. In this notebook, you can communicate with a wide variety of TFT LCD screens. You can find some examples such as these two displays:
2.8'' ILI9341
3.5'' HXD8357D
Review the wiki page at http://wiki.minnowboard.org/Projects/Maker_TFTLCD for hardware requirements and setup.
End of explanation
"""
# Create TFT LCD display class.
disp = TFT.ILI9341(DC, rst=RST, spi=SPI.SpiDev(SPI_PORT, SPI_DEVICE, max_speed_hz=1000000))
# Initialize display.
disp.begin()
# Clear the display to a red background.
# Can pass any tuple of red, green, blue values (from 0 to 255 each).
disp.clear((255, 0, 0))
# Get a PIL Draw object to start drawing on the display buffer.
draw = disp.draw()
"""
Explanation: Using the TFT Class
The TFT LCD userspace library can run without requiring a kernel module. Using it in this mode will be somewhat slower, suitable for drawing images one-at-a-time.
End of explanation
"""
draw.rectangle((10, 90, 110, 160), outline=(255,255,0), fill=(0,0,255))
# Write buffer to the screen
disp.display()
"""
Explanation: Using the PIL Drawing class
The Python Imaging library makes it easy to draw simple shapes.
End of explanation
"""
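The same PIL drawing calls work on an in-memory image with no display attached, which is handy for testing layouts before pushing them to the LCD. A minimal sketch mirroring the rectangle below, using the Pillow namespace:

```python
from PIL import Image, ImageDraw

# Draw on an in-memory 240x320 RGB image instead of the LCD framebuffer
img = Image.new('RGB', (240, 320), (255, 0, 0))           # red background
draw = ImageDraw.Draw(img)
draw.rectangle((10, 90, 110, 160), outline=(255, 255, 0), fill=(0, 0, 255))
print(img.getpixel((50, 100)))  # → (0, 0, 255): inside the blue rectangle
```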
image = Image.open('logo.png')
image = image.rotate(90).resize((240,320))
disp.display(image)
"""
Explanation: Using PIL for image files
Additionally, you can display image files on your TFT LCD, supporting different kinds of filetypes such as PNG and JPEG.
End of explanation
"""
|
mikelseverson/Udacity-Deep_Learning-Nanodegree | tv-script-generation/dlnd_tv_script_generation.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
vocab_set = set(text)
#enumerate the set and put in dictionary
vocab_to_int = {word: ii for ii, word in enumerate(vocab_set, 1)}
#flip the dictionary
int_to_vocab = {ii: word for word, ii in vocab_to_int.items()}
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_dict = {
'.' : "||Period||",
',' : "||Comma||",
'"' : "||Quotation_Mark||",
';' : "||Semicolon||",
'!' : "||Exclamation_Mark||",
'?' : "||Question_Mark||",
'(' : "||Left_Parentheses||",
')' : "||Right_Parentheses||",
'--' : "||Dash||",
'\n' : "||Return||"
}
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation points make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused with a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
input = tf.placeholder(tf.int32, shape=(None, None), name='input')
targets = tf.placeholder(tf.int32, shape=(None, None), name='targets')
learningRate = tf.placeholder(tf.float32, shape=None, name='learning_rate')
# TODO: Implement Function
return input, targets, learningRate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# TODO: Implement Function
lstm_cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
rnn_cell = tf.contrib.rnn.MultiRNNCell([lstm_cell])
initialized = rnn_cell.zero_state(batch_size, tf.float32)
initialized = tf.identity(initialized, name="initial_state")
return rnn_cell, initialized
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize the cell state using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
"""
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
"""
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# TODO: Implement Function
output, finalState = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
finalState = tf.identity(finalState, "final_state")
return output, finalState
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created an RNN cell in the get_init_cell() function. Time to use the cell to create an RNN.
- Build the RNN using tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
# TODO: Implement Function
embedded = get_embed(input_data, vocab_size, embed_dim or rnn_size)  # the spec says embed_dim; fall back to rnn_size if unset
rnn, state = build_rnn(cell, embedded)
logits = tf.contrib.layers.fully_connected(rnn, vocab_size)
return logits, state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
# TODO: Implement Function
n_elements = len(int_text)
n_batches = (n_elements - 1)//(batch_size*seq_length)
all_batches = np.zeros(shape=(n_batches, 2, batch_size, seq_length), dtype=np.int32)
# fill Numpy array
for i in range(n_batches):
for j in range(batch_size):
input_start = i * seq_length + j * batch_size * seq_length
target_start = input_start + 1
target_stop = target_start + seq_length
if target_stop < len(int_text):
for k in range(seq_length):
all_batches[i][0][j][k] = int_text[input_start + k]
all_batches[i][1][j][k] = int_text[target_start + k]
return all_batches
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
"""
# Number of Epochs
num_epochs = 40
# Batch Size
batch_size = 200
# RNN Size
rnn_size = 128
# Embedding Dimension Size
embed_dim = 256
# Sequence Length
seq_length = 56
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 100
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
inputTensor = loaded_graph.get_tensor_by_name("input:0")
InitialStateTensor = loaded_graph.get_tensor_by_name("initial_state:0")
FinalStateTensor = loaded_graph.get_tensor_by_name("final_state:0")
ProbsTensor = loaded_graph.get_tensor_by_name("probs:0")
return inputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilities of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
return int_to_vocab[np.random.choice(len(int_to_vocab), p=probabilities)]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
"""
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
"""
|
slundberg/shap | notebooks/tabular_examples/tree_based_models/Understanding Tree SHAP for Simple Models.ipynb | mit | import sklearn
import sklearn.tree  # import the submodule explicitly; `import sklearn` alone does not expose sklearn.tree
import shap
import numpy as np
import graphviz
"""
Explanation: Understanding Tree SHAP for Simple Models
The SHAP value for a feature is the average change in model output by conditioning on that feature when introducing features one at a time over all feature orderings. While this is easy to state, it is challenging to compute. So this notebook is meant to give a few simple examples where we can see how this plays out for very small trees. For arbitrary large trees it is very hard to intuitively guess these values by looking at the tree.
End of explanation
"""
# build data
N = 100
M = 4
X = np.zeros((N,M))
X.shape
y = np.zeros(N)
X[:N//2, 0] = 1
y[:N//2] = 1
# fit model
single_split_model = sklearn.tree.DecisionTreeRegressor(max_depth=1)
single_split_model.fit(X, y)
# draw model
dot_data = sklearn.tree.export_graphviz(single_split_model, out_file=None, filled=True, rounded=True, special_characters=True)
graph = graphviz.Source(dot_data)
graph
"""
Explanation: Single split example
End of explanation
"""
xs = [np.ones(M), np.zeros(M)]
for x in xs:
print()
print(" x =", x)
print("shap_values =", shap.TreeExplainer(single_split_model).shap_values(x))
"""
Explanation: Explain the model
Note that the bias term is the expected output of the model over the training dataset (0.5). The SHAP value for features not used in the model is always 0, while for $x_0$ it is just the difference between the expected value and the output of the model.
End of explanation
"""
# build data
N = 100
M = 4
X = np.zeros((N,M))
X.shape
y = np.zeros(N)
X[:1 * N//4, 1] = 1
X[:N//2, 0] = 1
X[N//2:3 * N//4, 1] = 1
y[:1 * N//4] = 1
# fit model
and_model = sklearn.tree.DecisionTreeRegressor(max_depth=2)
and_model.fit(X, y)
# draw model
dot_data = sklearn.tree.export_graphviz(and_model, out_file=None, filled=True, rounded=True, special_characters=True)
graph = graphviz.Source(dot_data)
graph
"""
Explanation: Two feature AND example
End of explanation
"""
xs = [np.ones(M), np.zeros(M)]
for x in xs:
print()
print(" x =", x)
print("shap_values =", shap.TreeExplainer(and_model).shap_values(x))
"""
Explanation: Explain the model
Note that the bias term is the expected output of the model over the training dataset (0.25). The SHAP value for features not used in the model is always 0, while for $x_0$ and $x_1$ it is just the difference between the expected value and the output of the model split equally between them (since they equally contribute to the AND function).
End of explanation
"""
# build data
N = 100
M = 4
X = np.zeros((N,M))
X.shape
y = np.zeros(N)
X[:N//2, 0] = 1
X[:1 * N//4, 1] = 1
X[N//2:3 * N//4, 1] = 1
y[:N//2] = 1
y[N//2:3 * N//4] = 1
# fit model
or_model = sklearn.tree.DecisionTreeRegressor(max_depth=2)
or_model.fit(X, y)
# draw model
dot_data = sklearn.tree.export_graphviz(or_model, out_file=None, filled=True, rounded=True, special_characters=True)
graph = graphviz.Source(dot_data)
graph
"""
Explanation: Two feature OR example
End of explanation
"""
xs = [np.ones(M), np.zeros(M)]
for x in xs:
print()
print(" x =", x)
print("shap_values =", shap.TreeExplainer(or_model).shap_values(x))
"""
Explanation: Explain the model
Note that the bias term is the expected output of the model over the training dataset (0.75). The SHAP value for features not used in the model is always 0, while for $x_0$ and $x_1$ it is just the difference between the expected value and the output of the model split equally between them (since they equally contribute to the OR function).
End of explanation
"""
# build data
N = 100
M = 4
X = np.zeros((N,M))
X.shape
y = np.zeros(N)
X[:N//2, 0] = 1
X[:1 * N//4, 1] = 1
X[N//2:3 * N//4, 1] = 1
y[1 * N//4:N//2] = 1
y[N//2:3 * N//4] = 1
# fit model
xor_model = sklearn.tree.DecisionTreeRegressor(max_depth=2)
xor_model.fit(X, y)
# draw model
dot_data = sklearn.tree.export_graphviz(xor_model, out_file=None, filled=True, rounded=True, special_characters=True)
graph = graphviz.Source(dot_data)
graph
"""
Explanation: Two feature XOR example
End of explanation
"""
xs = [np.ones(M), np.zeros(M)]
for x in xs:
print()
print(" x =", x)
print("shap_values =", shap.TreeExplainer(xor_model).shap_values(x))
"""
Explanation: Explain the model
Note that the bias term is the expected output of the model over the training dataset (0.5). The SHAP value for features not used in the model is always 0, while for $x_0$ and $x_1$ it is just the difference between the expected value and the output of the model split equally between them (since they equally contribute to the XOR function).
End of explanation
"""
# build data
N = 100
M = 4
X = np.zeros((N,M))
X.shape
y = np.zeros(N)
X[:N//2, 0] = 1
X[:1 * N//4, 1] = 1
X[N//2:3 * N//4, 1] = 1
y[:1 * N//4] = 1
y[:N//2] += 1
# fit model
and_fb_model = sklearn.tree.DecisionTreeRegressor(max_depth=2)
and_fb_model.fit(X, y)
# draw model
dot_data = sklearn.tree.export_graphviz(and_fb_model, out_file=None, filled=True, rounded=True, special_characters=True)
graph = graphviz.Source(dot_data)
graph
"""
Explanation: Two feature AND + feature boost example
End of explanation
"""
xs = [np.ones(M), np.zeros(M)]
for x in xs:
print()
print(" x =", x)
print("shap_values =", shap.TreeExplainer(and_fb_model).shap_values(x))
"""
Explanation: Explain the model
Note that the bias term is the expected output of the model over the training dataset (0.75). The SHAP value for features not used in the model is always 0, while for $x_0$ and $x_1$ it is just the difference between the expected value and the output of the model split equally between them (since they equally contribute to the AND function), plus an extra 0.5 impact for $x_0$ since it has an effect of $1.0$ all by itself (+0.5 if it is on and -0.5 if it is off).
End of explanation
"""
|
cshankm/rebound | ipython_examples/Horizons.ipynb | gpl-3.0 | import rebound
sim = rebound.Simulation()
sim.add("Sun")
## Other examples:
# sim.add("Venus")
# sim.add("399")
# sim.add("Europa")
# sim.add("NAME=Ida")
sim.status()
"""
Explanation: Horizons
REBOUND can add particles to simulations by obtaining ephemerides from NASA's powerful HORIZONS database. HORIZONS supports many different options, and we will certainly not try to cover everything here. This is meant to serve as an introduction to the basics, beyond what's in Churyumov-Gerasimenko.ipynb. If you catch any errors, or would like to either expand on this documentation or improve REBOUND's HORIZONS interface (rebound/horizons.py), please do fork the repository and send us a pull request.
Adding particles
When we add particles by passing a string, REBOUND queries the HORIZONS database and takes the first dataset HORIZONS offers. For the Sun, moons, and small bodies, this will typically return the body itself. For planets, it will return the barycenter of the system (for moonless planets like Venus it will say barycenter but there is no distinction). In all cases, REBOUND will print out the name of the HORIZONS entry it's using.
You can also add bodies using their integer NAIF IDs: NAIF IDs. Note that because of the number of small bodies (asteroids etc.) we have discovered, this convention only works for large objects. For small bodies, instead use "NAME=name" (see the SMALL BODIES section in the HORIZONS Documentation).
End of explanation
"""
sim.add("NAME=Ida")
print(sim.particles[-1]) # Ida before setting the mass
sim.particles[-1].m = 2.1e-14 # Setting mass of Ida in Solar masses
print(sim.particles[-1]) # Ida after setting the mass
"""
Explanation: Currently, HORIZONS does not have any mass information for solar system bodies. rebound/horizons.py has a hard-coded list provided by Jon Giorgini (10 May 2015) that includes the planets, their barycenters (total mass of planet plus moons), and the largest moons. If REBOUND doesn't find the corresponding mass for an object from this list (like for the asteroid Ida above), it will print a warning message. If you need the body's mass for your simulation, you can set it manually, e.g. (see Units.ipynb for an overview of using different units):
End of explanation
"""
sim = rebound.Simulation()
date = "2005-06-30 15:24"
sim.add("Venus")
sim.add("Venus", date=date)
sim.status()
"""
Explanation: Time
By default, REBOUND queries HORIZONS for objects' current positions. Specifically, it caches the current time the first time you call rebound.add, and gets the corresponding ephemeris. All subsequent calls to rebound.add will then use that initial cached time to make sure you get a synchronized set of ephemerides.
You can also explicitly pass REBOUND the time at which you would like the particles ephemerides:
End of explanation
"""
sim = rebound.Simulation()
date = "2005-06-30 15:24"
sim.add("Venus", date=date)
sim.add("Earth")
sim.status()
"""
Explanation: We see that the two Venus positions are different. The first call cached the current time, but since the second call specified a date, it overrode the default. Any time you pass a date, it will overwrite the default cached time, so:
End of explanation
"""
sim = rebound.Simulation()
sim.add("Sun")
sim.status()
"""
Explanation: would set up a simulation with Venus and Earth, all synchronized to 2005-06-30 15:24. All dates should be passed in the format Year-Month-Day Hour:Minute.
REBOUND takes these absolute times to the nearest minute, since at the level of seconds you have to worry about exactly what time system you're using, and small additional perturbations probably start to matter. For reference HORIZONS interprets all times for ephemerides as Coordinate (or Barycentric Dynamical) Time.
Reference Frame
REBOUND queries for particles' positions and velocities relative to the Sun:
End of explanation
"""
|
computational-class/cjc | code/03.python_intro.ipynb | mit | %matplotlib inline
import random, datetime
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import statsmodels.api as sm
from scipy.stats import norm
from scipy.stats.stats import pearsonr
"""
Explanation: Programming Tools for Data Science
A Brief Introduction to Python
Wang Chengjun
wangchengjun@nju.edu.cn
Computational Communication http://computational-communication.com
Life is short; I use Python.
Python (/ˈpaɪθən/) is an object-oriented, interpreted programming language
- invented by Guido van Rossum at the end of 1989
- first publicly released in 1991
- with concise and clear syntax
- with a powerful standard library and a wealth of third-party modules
- often nicknamed a "glue language"
- named TIOBE "Programming Language of the Year" for 2010
Features
Free, powerful, and widely used
Compared with R and MATLAB, Python is an easier-to-learn, more rigorous programming language. Scripts written in Python are easier to understand and maintain.
As with other programming languages, the basics of Python include: types, lists and tuples, dictionaries, conditionals, loops, exception handling, and so on.
For these topics, beginning readers can consult the book Beginning Python (Hetland, 2005).
Python ships with a rich collection of libraries.
Many open-source scientific computing packages provide Python bindings, for example the well-known computer vision library OpenCV.
Python's own scientific computing libraries are also well developed, for example NumPy, SciPy, and matplotlib.
For social network analysis, libraries such as igraph, networkx, graph-tool, and Snap.py provide rich network analysis tools.
Python Software and IDEs
The newest Python version is 3.0; 2.7 is the more stable release.
An editor is an important tool for writing programs.
Free Python editors include Spyder, PyCharm (free community edition), IPython, Vim, Emacs, and Eclipse (with the PyDev plugin).
Installing Anaconda Python
Use the Anaconda Python
http://continuum.io/downloads.html
Third-party packages can be installed with pip install.
Click Tools > Open command prompt
and then type in the command window that opens:
<del>pip install beautifulsoup4
pip install beautifulsoup4
NumPy /SciPy for scientific computing
pandas to make Python usable for data analysis
matplotlib to make graphics
scikit-learn for machine learning
End of explanation
"""
# str, int, float
str(3)
"chengjun wang"
# int
int('5')
# float
float('7.1')
range(10)
# for i in range(1, 10):
# print(i)
range(1,10)
"""
Explanation: Variable Type
End of explanation
"""
dir
dir(str)[-10:]
help(str)
x = ' Hello WorlD '
dir(x)[-10:]
# lower
x.lower()
# upper
x.upper()
# rstrip
x.rstrip()
# strip
x.strip()
# replace
x.replace('lo', '')
# split
x.split('lo')
# join
','.join(['a', 'b'])
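Beyond the methods above, string formatting is worth knowing. A short added sketch:

```python
# String formatting: .format() and f-strings (f-strings require Python 3.6+)
name, score = 'chengjun', 95
print('{} scored {}'.format(name, score))  # chengjun scored 95
print(f'{name} scored {score}')            # same result with an f-string
```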
"""
Explanation: dir & help
Use these when you want detailed information about an object.
End of explanation
"""
x = 'hello world'
type(x)
"""
Explanation: type
Use type when you want to know a variable's type.
End of explanation
"""
l = [1,2,3,3] # list
t = (1, 2, 3, 3) # tuple
s = {1, 2, 3, 3} # set([1,2,3,3]) # set
d = {'a':1,'b':2,'c':3} # dict
a = np.array(l) # array
print(l, t, s, d, a)
l = [1,2,3,3] # list
l.append(4)
l
d = {'a':1,'b':2,'c':3} # dict
d.keys()
d = {'a':1,'b':2,'c':3} # dict
d.values()
d = {'a':1,'b':2,'c':3} # dict
d['b']
d = {'a':1,'b':2,'c':3} # dict
d.items()
"""
Explanation: Data Structure
list, tuple, set, dictionary, array
End of explanation
"""
def devidePlus(m, n): # the definition line ends with a colon
y = m/n + 1 # note the indentation
return y # note the return statement
"""
Explanation: Defining a Function
End of explanation
"""
range(10)
range(1, 10)
for i in range(10):
print(i, i*10, i**2)
for i in range(10):
print(i*10)
for i in range(10):
print(devidePlus(i, 2))
# a for loop inside a list: a list comprehension
r = [devidePlus(i, 2) for i in range(10)]
r
"""
Explanation: For Loops
End of explanation
"""
m1 = map(devidePlus, [4,3,2], [2, 1, 5])
print(*m1)
#print(*map(devidePlus, [4,3,2], [2, 1, 5]))
# Note: (4, 2) is computed as one pair, and (3, 1) as another
m2 = map(lambda x, y: x + y, [1, 3, 5, 7, 9], [2, 4, 6, 8, 10])
print(*m2)
m3 = map(lambda x, y, z: x + y - z, [1, 3, 5, 7, 9], [2, 4, 6, 8, 10], [3, 3, 2, 2, 5])
print(*m3)
"""
Explanation: map
End of explanation
"""
j = 5
if j%2 == 1:
print(r'余数是1')
elif j%2 ==0:
print(r'余数是0')
else:
print(r'余数既不是1也不是0')
x = 5
if x < 5:
y = -1
z = 5
elif x > 5:
y = 1
z = 11
else:
y = 0
z = 10
print(x, y, z)
"""
Explanation: if elif else
End of explanation
"""
j = 0
while j <10:
print(j)
j+=1 # avoid dead loop
j = 0
while j <10:
if j%2 != 0:
print(j**2)
j+=1 # avoid dead loop
j = 0
while j <50:
if j == 30:
break
if j%2 != 0:
print(j**2)
j+=1 # avoid dead loop
a = 4
while a: # 0, None, False
print(a)
a -= 1
if a < 0:
a = None # []
"""
Explanation: While Loops
End of explanation
"""
def devidePlus(m, n): # the definition line ends with a colon
return m/n + 1 # note the indentation
for i in [2, 0, 5]:
try:
print(devidePlus(4, i))
except Exception as e:
print(e)
pass
alist = [[1,1], [0, 0, 1]]
for aa in alist:
try:
for a in aa:
print(10 / a)
except Exception as e:
print(e)
pass
alist = [[1,1], [0, 0, 1]]
for aa in alist:
for a in aa:
try:
print(10 / a)
except Exception as e:
print(e)
pass
"""
Explanation: try except
End of explanation
"""
data =[[i, i**2, i**3] for i in range(10)]
data
for i in data:
print('\t'.join(map(str, i)))
type(data)
len(data)
data[0]
help(f.write)
# save the data
data =[[i, i**2, i**3] for i in range(10000)]
f = open("../data/data_write_to_file1.txt", "w")
for i in data:
f.write('\t'.join(map(str,i)) + '\n')
f.close()
with open('../data/data_write_to_file.txt','r') as f:
data = f.readlines()
data[:5]
with open('../data/data_write_to_file.txt','r') as f:
data = f.readlines(1000) #bytes
len(data)
with open('../data/data_write_to_file.txt','r') as f:
print(f.readline())
f = [1, 2, 3, 4, 5]
for k, i in enumerate(f):
print(k, i)
with open('../data/data_write_to_file.txt','r') as f:
for i in f:
print(i)
with open('../data/data_write_to_file.txt','r') as f:
for k, i in enumerate(f):
if k%2000 == 0:
print(i)
data = []
line = '0\t0\t0\n'
line = line.replace('\n', '')
line = line.split('\t')
line = [int(i) for i in line] # convert str to int
data.append(line)
data
# read the data back
data = []
with open('../data/data_write_to_file1.txt','r') as f:
for line in f:
line = line.replace('\n', '').split('\t')
line = [int(i) for i in line]
data.append(line)
data
# read the data back
data = []
with open('../data/data_write_to_file.txt','r') as f:
for line in f:
line = line.replace('\n', '').split('\t')
line = [int(i) for i in line]
data.append(line)
data
import pandas as pd
help(pd.read_csv)
df = pd.read_csv('../data/data_write_to_file.txt',
sep = '\t', names = ['a', 'b', 'c'])
df[-5:]
"""
Explanation: Write and Read data
End of explanation
"""
import json
data_dict = {'a':1, 'b':2, 'c':3}
with open('../data/save_dict.json', 'w') as f:
json.dump(data_dict, f)
dd = json.load(open("../data/save_dict.json"))
dd
"""
Explanation: Saving a dictionary from an intermediate step as JSON
End of explanation
"""
data_list = list(range(10))
with open('../data/save_list.json', 'w') as f:
json.dump(data_list, f)
dl = json.load(open("../data/save_list.json"))
dl
"""
Explanation: Reading the JSON back in
Saving a list from an intermediate step as JSON
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
x = range(1, 100)
y = [i**-3 for i in x]
plt.plot(x, y, 'b-s')
plt.ylabel('$p(k)$', fontsize = 20)
plt.xlabel('$k$', fontsize = 20)
plt.xscale('log')
plt.yscale('log')
plt.title('Degree Distribution')
plt.show()
import numpy as np
# red dashes, blue squares and green triangles
t = np.arange(0., 5., 0.2)
plt.plot(t, t, 'r--')
plt.plot(t, t**2, 'bs')
plt.plot(t, t**3, 'g^')
plt.show()
# red dashes, blue squares and green triangles
t = np.arange(0., 5., 0.2)
plt.plot(t, t**2, 'b-s', label = '1')
plt.plot(t, t**2.5, 'r-o', label = '2')
plt.plot(t, t**3, 'g-^', label = '3')
plt.annotate(r'$\alpha = 3$', xy=(3.5, 40), xytext=(2, 80),
arrowprops=dict(facecolor='black', shrink=0.05),
fontsize = 20)
plt.ylabel('$f(t)$', fontsize = 20)
plt.xlabel('$t$', fontsize = 20)
plt.legend(loc=2,numpoints=1,fontsize=10)
plt.show()
# plt.savefig('/Users/chengjun/GitHub/cjc/figure/save_figure.png',
# dpi = 300, bbox_inches="tight",transparent = True)
plt.figure(1)
plt.subplot(221)
plt.plot(t, t, 'r--')
plt.text(2, 0.8*np.max(t), r'$\alpha = 1$', fontsize = 20)
plt.subplot(222)
plt.plot(t, t**2, 'bs')
plt.text(2, 0.8*np.max(t**2), r'$\alpha = 2$', fontsize = 20)
plt.subplot(223)
plt.plot(t, t**3, 'g^')
plt.text(2, 0.8*np.max(t**3), r'$\alpha = 3$', fontsize = 20)
plt.subplot(224)
plt.plot(t, t**4, 'r-o')
plt.text(2, 0.8*np.max(t**4), r'$\alpha = 4$', fontsize = 20)
plt.show()
def f(t):
return np.exp(-t) * np.cos(2*np.pi*t)
t1 = np.arange(0.0, 5.0, 0.1)
t2 = np.arange(0.0, 5.0, 0.02)
plt.figure(1)
plt.subplot(211)
plt.plot(t1, f(t1), 'bo')
plt.plot(t2, f(t2), 'k')
plt.subplot(212)
plt.plot(t2, np.cos(2*np.pi*t2), 'r--')
plt.show()
import matplotlib.gridspec as gridspec
import numpy as np
t = np.arange(0., 5., 0.2)
gs = gridspec.GridSpec(3, 3)
ax1 = plt.subplot(gs[0, :])
plt.plot(t, t**2, 'b-s')
ax2 = plt.subplot(gs[1,:-1])
plt.plot(t, t**2, 'g-s')
ax3 = plt.subplot(gs[1:, -1])
plt.plot(t, t**2, 'r-o')
ax4 = plt.subplot(gs[-1,0])
plt.plot(t, t**2, 'g-^')
ax5 = plt.subplot(gs[-1,1])
plt.plot(t, t**2, 'b-<')
plt.tight_layout()
import statsmodels.api as sm
from scipy.stats import pearsonr, norm

def OLSRegressPlot(x,y,col,xlab,ylab):
xx = sm.add_constant(x, prepend=True)
res = sm.OLS(y,xx).fit()
constant, beta = res.params
r2 = res.rsquared
lab = r'$\beta = %.2f, \,R^2 = %.2f$' %(beta,r2)
plt.scatter(x,y,s=60,facecolors='none', edgecolors=col)
plt.plot(x,constant + x*beta,"red",label=lab)
plt.legend(loc = 'upper left',fontsize=16)
plt.xlabel(xlab,fontsize=26)
plt.ylabel(ylab,fontsize=26)
x = np.random.randn(50)
y = np.random.randn(50) + 3*x
pearsonr(x, y)
fig = plt.figure(figsize=(10, 4),facecolor='white')
OLSRegressPlot(x,y,'RoyalBlue',r'$x$',r'$y$')
plt.show()
fig = plt.figure(figsize=(7, 4),facecolor='white')
data = norm.rvs(10.0, 2.5, size=5000)
mu, std = norm.fit(data)
plt.hist(data, bins=25, density=True, alpha=0.6, color='g')
xmin, xmax = plt.xlim()
x = np.linspace(xmin, xmax, 100)
p = norm.pdf(x, mu, std)
plt.plot(x, p, 'r', linewidth=2)
title = r"$\mu = %.2f, \, \sigma = %.2f$" % (mu, std)
plt.title(title,size=16)
plt.show()
import pandas as pd
df = pd.read_csv('../data/data_write_to_file.txt', sep = '\t', names = ['a', 'b', 'c'])
df[:5]
df.plot.line()
plt.yscale('log')
plt.ylabel('$values$', fontsize = 20)
plt.xlabel('$index$', fontsize = 20)
plt.show()
df.plot.scatter(x='a', y='b')
plt.show()
df.plot.hexbin(x='a', y='b', gridsize=25)
plt.show()
df['a'].plot.kde()
plt.show()
bp = df.boxplot()
plt.yscale('log')
plt.show()
df['c'].diff().hist()
plt.show()
df.plot.hist(stacked=True, bins=20)
# plt.yscale('log')
plt.show()
"""
Explanation: Plotting with matplotlib
End of explanation
"""
|
brian-rose/ClimateModeling_courseware | Lectures/Lecture12 -- CESM climate sensitivity.ipynb | mit | startingamount = 1.
amount = startingamount
for n in range(70):
amount *= 1.01
amount
"""
Explanation: ATM 623: Climate Modeling
Brian E. J. Rose, University at Albany
Lecture 12: Examing the transient and equilibrium CO$_2$ response in the CESM
Warning: content out of date and not maintained
You really should be looking at The Climate Laboratory book by Brian Rose, where all the same content (and more!) is kept up to date.
Here you are likely to find broken links and broken code.
About these notes:
This document uses the interactive Jupyter notebook format. The notes can be accessed in several different ways:
The interactive notebooks are hosted on github at https://github.com/brian-rose/ClimateModeling_courseware
The latest versions can be viewed as static web pages rendered on nbviewer
A complete snapshot of the notes as of May 2017 (end of spring semester) are available on Brian's website.
Also here is a legacy version from 2015.
Many of these notes make use of the climlab package, available at https://github.com/brian-rose/climlab
Contents
...
I have run two sets of experiments with the CESM model:
The fully coupled model:
pre-industrial control
1%/year CO2 ramp scenario for 80 years
The slab ocean model:
pre-industrial control with prescribed q-flux
2xCO2 scenario run out to equilibrium
Our main first task is to compute the two canonical measures of climate sensitivity for this model:
Equilibrium Climate Sensitivity (ECS)
Transient Climate Response (TCR)
From the IPCC AR5 WG1 report, Chapter 9, page 817:
Equilibrium climate sensitivity (ECS) is the equilibrium change in global and annual mean surface air temperature after doubling the atmospheric concentration of CO2 relative to pre-industrial levels.
The transient climate response (TCR) is the change in global and annual mean surface temperature from an experiment in which the CO2 concentration is increased by 1% yr⁻¹, and calculated using the difference between the start of the experiment and a 20-year period centred on the time of CO2 doubling.
First, a quick demonstration that 1%/year compounded increase reaches doubling after 70 years
End of explanation
"""
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
"""
Explanation: TCR is always smaller than ECS due to the transient effects of ocean heat uptake.
We are going to estimate the ECS of the fully coupled model by using the equilibrium response of the Slab Ocean Model.
End of explanation
"""
casenames = {'cpl_control': 'cpl_1850_f19',
'cpl_CO2ramp': 'cpl_CO2ramp_f19',
'som_control': 'som_1850_f19',
'som_2xCO2': 'som_1850_2xCO2',
}
# The path to the THREDDS server, should work from anywhere
basepath = 'http://thredds.atmos.albany.edu:8080/thredds/dodsC/CESMA/'
# For better performance if you can access the roselab_rit filesystem (e.g. from JupyterHub)
#basepath = '/roselab_rit/cesm_archive/'
casepaths = {}
for name in casenames:
casepaths[name] = basepath + casenames[name] + '/concatenated/'
# make a dictionary of all the CAM atmosphere output
atm = {}
for name in casenames:
path = casepaths[name] + casenames[name] + '.cam.h0.nc'
print('Attempting to open the dataset ', path)
atm[name] = xr.open_dataset(path, decode_times=False)
"""
Explanation: Load the concatenated output from the CAM atmosphere model
End of explanation
"""
days_per_year = 365
fig, ax = plt.subplots()
for name in ['cpl_control', 'cpl_CO2ramp']:
ax.plot(atm[name].time/days_per_year, atm[name].co2vmr*1E6, label=name)
ax.set_title('CO2 volume mixing ratio (CESM coupled simulations)')
ax.set_xlabel('Years')
ax.set_ylabel('pCO2 (ppm)')
ax.grid()
ax.legend();
"""
Explanation: A plot of the prescribed CO2 concentrations in the coupled simulations
End of explanation
"""
# The surface air temperature, which we will use for our sensitivity metrics
atm['cpl_control'].TREFHT
# The area weighting needed for global averaging
gw = atm['som_control'].gw
print(gw)
def global_mean(field, weight=gw):
'''Return the area-weighted global average of the input field'''
return (field*weight).mean(dim=('lat','lon'))/weight.mean(dim='lat')
# Loop through the four simulations and produce the global mean timeseries
TREFHT_global = {}
for name in casenames:
TREFHT_global[name] = global_mean(atm[name].TREFHT)
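As a sanity check on the area weighting, here is an analogous pure-NumPy cos-latitude weighted mean applied to a synthetic uniform field (the array names are illustrative, not part of the CESM output):

```python
import numpy as np

lat = np.linspace(-89, 89, 90)                  # latitude in degrees
lon = np.linspace(0, 358, 180)
weights = np.cos(np.deg2rad(lat))               # proportional to grid-cell area
field = np.ones((lat.size, lon.size)) * 288.0   # uniform 288 K field

# weighted average over latitude (axis 0), then plain mean over longitude
weighted = (field * weights[:, None]).sum(axis=0) / weights.sum()
global_mean_T = weighted.mean()
print(global_mean_T)  # 288.0 for a uniform field, as it must be
```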
"""
Explanation: Issues to think about:
Why do we talk about fractional changes in CO2, such as "doubling atmospheric CO2" and "1%/year compounded CO2 increase"?
Why not instead talk about changes in absolute amounts of CO2?
The answer is closely related to the fact that the radiative forcing associated with CO2 increase is approximately logarithmic in CO2 amount. So a doubling of CO2 represents roughly the same radiative forcing regardless of the initial CO2 concentration.
Compute and plot time series of global, annual mean near-surface air temperature in all four simulations
End of explanation
"""
fig, axes = plt.subplots(2,1,figsize=(10,8))
for name in casenames:
if 'cpl' in name:
ax = axes[0]
ax.set_title('Fully coupled ocean')
else:
ax = axes[1]
ax.set_title('Slab ocean')
field = TREFHT_global[name]
field_running = field.rolling(time=12, center=True).mean()
line = ax.plot(field.time / days_per_year,
field,
label=name,
linewidth=0.75,
)
ax.plot(field_running.time / days_per_year,
field_running,
color=line[0].get_color(),
linewidth=2,
)
for ax in axes:
ax.legend();
ax.set_xlabel('Years')
ax.set_ylabel('Temperature (K)')
ax.grid();
ax.set_xlim(0,100)
fig.suptitle('Global mean surface air temperature in CESM simulations', fontsize=16);
"""
Explanation: Make some pretty timeseries plots, including an approximate running annual average
End of explanation
"""
# extract the last 10 years from the slab ocean control simulation
# and the last 20 years from the coupled control
nyears_slab = 10
nyears_cpl = 20
clim_slice_slab = slice(-(nyears_slab*12),None)
clim_slice_cpl = slice(-(nyears_cpl*12),None)
# extract the last 10 years from the slab ocean control simulation
T0_slab = TREFHT_global['som_control'].isel(time=clim_slice_slab).mean(dim='time')
T0_slab
# and the last 20 years from the coupled control
T0_cpl = TREFHT_global['cpl_control'].isel(time=clim_slice_cpl).mean(dim='time')
T0_cpl
# extract the last 10 years from the slab 2xCO2 simulation
T2x_slab = TREFHT_global['som_2xCO2'].isel(time=clim_slice_slab).mean(dim='time')
T2x_slab
# extract the last 20 years from the coupled CO2 ramp simulation
T2x_cpl = TREFHT_global['cpl_CO2ramp'].isel(time=clim_slice_cpl).mean(dim='time')
T2x_cpl
ECS = T2x_slab - T0_slab
TCR = T2x_cpl - T0_cpl
print('The Equilibrium Climate Sensitivity is {:.3} K.'.format(float(ECS)))
print('The Transient Climate Response is {:.3} K.'.format(float(TCR)))
"""
Explanation: Issues to think about here include:
Why is the annual average here only approximate? (think about the calendar)
Why is there an annual cycle in the global average temperature? (planet is coldest during NH winter)
Different character of the temperature variability in the coupled vs. slab model
Much more rapid warming in the Slab Ocean Model
Now we can work on computing ECS and TCR
End of explanation
"""
# The map projection capabilities come from the cartopy package. There are many possible projections
import cartopy.crs as ccrs
def make_map(field):
'''input field should be a 2D xarray.DataArray on a lat/lon grid.
Make a filled contour plot of the field, and a line plot of the zonal mean
'''
fig = plt.figure(figsize=(14,6))
nrows = 10; ncols = 3
mapax = plt.subplot2grid((nrows,ncols), (0,0), colspan=ncols-1, rowspan=nrows-1, projection=ccrs.Robinson())
barax = plt.subplot2grid((nrows,ncols), (nrows-1,0), colspan=ncols-1)
plotax = plt.subplot2grid((nrows,ncols), (0,ncols-1), rowspan=nrows-1)
cx = mapax.contourf(field.lon, field.lat, field, transform=ccrs.PlateCarree())
mapax.set_global(); mapax.coastlines();
plt.colorbar(cx, cax=barax, orientation='horizontal')
plotax.plot(field.mean(dim='lon'), field.lat)
plotax.set_ylabel('Latitude')
plotax.grid()
return fig, (mapax, plotax, barax), cx
# Plot a single time slice of surface air temperature just as example
fig, axes, cx = make_map(atm['cpl_control'].TREFHT.isel(time=0))
"""
Explanation: Some CMIP climate sensitivity results to compare against
<img src='http://www.climatechange2013.org/images/figures/WGI_AR5_Fig9-43.jpg' width=800>
<img src='../images/AR5_Table9.5.png'>
Comparing against the multi-model mean of the ECS and TCR, our model is apparently slightly less sensitive than the CMIP5 mean.
Let's make some maps to compare spatial patterns of transient vs. equilibrium warming
Here is a helper function that takes a 2D lat/lon field and renders it as a nice contour map with accompanying zonal average line plot.
End of explanation
"""
Tmap_cpl_2x = atm['cpl_CO2ramp'].TREFHT.isel(time=clim_slice_cpl).mean(dim='time')
Tmap_cpl_control = atm['cpl_control'].TREFHT.isel(time=clim_slice_cpl).mean(dim='time')
DeltaT_cpl = Tmap_cpl_2x - Tmap_cpl_control
Tmap_som_2x = atm['som_2xCO2'].TREFHT.isel(time=clim_slice_slab).mean(dim='time')
Tmap_som_control = atm['som_control'].TREFHT.isel(time=clim_slice_slab).mean(dim='time')
DeltaT_som = Tmap_som_2x - Tmap_som_control
fig, axes, cx = make_map(DeltaT_cpl)
fig.suptitle('Surface air temperature anomaly (coupled transient)', fontsize=16);
axes[1].set_xlim(0,7) # ensure the line plots have same axes
cx.set_clim([0, 8]) # ensure the contour maps have the same color intervals
fig, axes,cx = make_map(DeltaT_som)
fig.suptitle('Surface air temperature anomaly (equilibrium SOM)', fontsize=16);
axes[1].set_xlim(0,7)
cx.set_clim([0, 8])
"""
Explanation: Make maps of the surface air temperature anomaly due to CO2 doubling in both the slab and coupled models
End of explanation
"""
%load_ext version_information
%version_information numpy, matplotlib, xarray, cartopy
"""
Explanation: Lots of intersting phenomena to think about here, including:
Polar amplification of surface warming
Reduction in equator-to-pole temperature gradients
Much larger polar amplification in SOM than in transient -- especially over the Southern Ocean (the delayed warming of the Southern Ocean)
North Atlantic warming hole present in transient but not in equilibrium SOM.
Land-ocean warming contrast: larger in transient, but still present in equilibrium
Homework assignment
Continue to compare the transient and equilibrium responses in these CESM simulations.
Specifically, I would like you to examine the following:
Top-of-atmosphere energy budget:
Calculate the global mean net TOA energy flux for all four simulations.
Which ones are close to zero in the global average, and which are not?
Make spatial maps of the change in ASR and the change in OLR after doubling CO2 (both transient and equilibrium).
Repeat for the clear-sky component of those changes.
Comment on what you found in your maps:
Are there any discernible spatial patterns in ASR and OLR changes?
What about the clear sky components?
Can you relate any of these results to the surface warming maps we created above?
The hydrological cycle:
precipitation
evaporation
P-E
For each of these quantities, plot the anomalies two different ways:
Absolute changes
Normalized changes in %/K (normalized by the global mean warming)
Comment on which of these two perspectives seems more useful, and why.
Land - ocean warming contrast:
Make line plots of the zonal average surface air temperature change over land only and over ocean only.
For all your results, please make an effort to point out any interesting or surprising results.
Version information
End of explanation
"""
# # make a dictionary of all the CLM land model output
# land = {}
# for name in casenames:
# path = casepaths[name] + casenames[name] + '.clm2.h0.nc'
# print('Attempting to open the dataset ', path)
# land[name] = xr.open_dataset(path)
# # make a dictionary of all the sea ice model output
# ice = {}
# for name in casenames:
# path = casepaths[name] + casenames[name] + '.cice.h.nc'
# print('Attempting to open the dataset ', path)
# ice[name] = xr.open_dataset(path)
# # make a dictionary of all the river transport output
# rtm = {}
# for name in casenames:
# path = casepaths[name] + casenames[name] + '.rtm.h0.nc'
# print('Attempting to open the dataset ', path)
# rtm[name] = xr.open_dataset(path)
# ocn = {}
# for name in casenames:
# if 'cpl' in name:
# path = casepaths[name] + casenames[name] + '.pop.h.nc'
# print('Attempting to open the dataset ', path)
# ocn[name] = xr.open_dataset(path)
"""
Explanation: Credits
The author of this notebook is Brian E. J. Rose, University at Albany.
It was developed in support of ATM 623: Climate Modeling, a graduate-level course in the Department of Atmospheric and Environmental Sciences
Development of these notes and the climlab software is partially supported by the National Science Foundation under award AGS-1455071 to Brian Rose. Any opinions, findings, conclusions or recommendations expressed here are mine and do not necessarily reflect the views of the National Science Foundation.
Appendix: for later reference, here is how you can open the other output types
The following will open the rest of the CESM output (land, sea ice, river routing, ocean).
These are not needed for the above homework assignment, but may be useful later on.
End of explanation
"""
|
mastertrojan/Udacity | image-classification/dlnd_image_classification.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
    with tarfile.open(tar_gz_path) as tar:
        tar.extractall()
tests.test_folder_path(cifar10_dataset_folder_path)
"""
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
"""
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
"""
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data. The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
    a = 0
    b = 1
    x_min = np.amin(x)
    x_max = np.amax(x)
    return np.array(a + ((x - x_min) * (b - a) / (x_max - x_min)))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
"""
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
"""
from sklearn.preprocessing import LabelBinarizer
encoder = LabelBinarizer()
encoder.fit(range(0,10))
def one_hot_encode(x):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
return np.array(encoder.transform(x))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
"""
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
import tensorflow as tf
def neural_net_image_input(image_shape):
"""
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
# TODO: Implement Function
x = tf.placeholder(tf.float32, [None, image_shape[0], image_shape[1], image_shape[2]], name='x')
#tf.placeholder(image_shape)
return x
def neural_net_label_input(n_classes):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
# TODO: Implement Function
#print(n_classes)
y = tf.placeholder(tf.float32, [None, n_classes], name='y')
return y
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# TODO: Implement Function
return keep_prob
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
"""
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
"""
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
#conv_ksize, conv_num_outputs, shape x_tensor
# print('tensor: ', x_tensor.get_shape()[3])
# print('conv_num_outputs', conv_num_outputs)
# print('conv filter:', conv_ksize)
# print('strides: ', conv_strides)
# print('pool filter:', pool_ksize)
# print('pool strides:', pool_strides)
layers = x_tensor.get_shape().as_list()[3]
weight = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], layers, conv_num_outputs], stddev=0.1, mean=0))
    bias = tf.Variable(tf.zeros(conv_num_outputs))
conv_layer = tf.nn.conv2d(x_tensor, weight, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME')
conv_layer = tf.nn.bias_add(conv_layer, bias)
conv_layer = tf.nn.relu(conv_layer)
# Apply Max Pooling
conv_layer = tf.nn.max_pool(
conv_layer,
ksize=[1, pool_ksize[0], pool_ksize[1], 1],
strides=[1, pool_strides[0], pool_strides[1], 1],
padding='SAME')
# kernels are filters
#tf.nn.conv2d(input, F_W, strides, padding) + F_b
return conv_layer
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
"""
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
"""
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
#print(x_tensor)
# TODO: Implement Function
#x_tensor = tf.Variable()
return tf.contrib.layers.flatten(x_tensor)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
"""
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
    # fully_connected applies a ReLU activation by default, so no extra tf.nn.relu is needed
    fully_conn = tf.contrib.layers.fully_connected(
        x_tensor, num_outputs,
        weights_initializer=tf.contrib.layers.xavier_initializer(dtype=tf.float32),
        biases_initializer=tf.zeros_initializer)
    return fully_conn
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
"""
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def output(x_tensor, num_outputs):
"""
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
#tensor = tf.contrib.layers.fully_connected(x_tensor, num_outputs)
#print(tensor)
x_shape = x_tensor.get_shape().as_list()
weight_shape = [int(x_shape[1]), num_outputs]
weights = tf.Variable(tf.truncated_normal(weight_shape, stddev=0.1, mean=0))
bias = tf.Variable(tf.fill([num_outputs], 0.1))
x_tensor = tf.add(tf.matmul(x_tensor, weights), bias)
return x_tensor
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
"""
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
"""
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that holds the dropout keep probability.
: return: Tensor that represents logits
"""
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv1 = conv2d_maxpool(x, 32, [2,2], [2,2], [2,2], [2,2])
conv2 = conv2d_maxpool(conv1, 256, [2,2], [2,2], [2,2], [2,2])
#conv3 = conv2d_maxpool(conv2, 512, [2,2], [2,2], [2,2], [2,2])
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
flat = flatten(conv2)
dropout = tf.nn.dropout(flat, keep_prob, noise_shape=None, seed=None, name=None)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
full_1 = fully_conn(dropout, 256)
#full_2 = fully_conn(full_1, 512)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
out = output(full_1, 10)
# TODO: return output
return out
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
"""
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
"""
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
# features = tf.placeholder(tf.float32, [None, feature_batch.shape[1], feature_batch.shape[2], feature_batch.shape[3]])
# labels = tf.placeholder(tf.float32, [None, label_batch.shape[1]])
# train_feed_dict = {features: feature_batch, labels: label_batch, keep_prob: keep_probability}
session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})
# TODO: Implement Function
pass
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
"""
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
"""
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
#print(valid_features, valid_labes)
# TODO: Implement Function
loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
acc = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
print("Accuracy: ", acc, " Loss: ", loss)
pass
"""
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
"""
# TODO: Tune Parameters
epochs = 50
batch_size = 4096
keep_probability = 1  # a keep probability of 1 disables dropout
"""
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the largest value that your machine has memory for. Common choices are powers of two:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
"""
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""
Test the saved model against the test dataset
"""
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
"""
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation
"""
import pymrio
wiod_storage = '/tmp/mrios/WIOD2013'
wiod_meta = pymrio.download_wiod2013(storage_folder=wiod_storage)
"""
Explanation: Handling the WIOD EE MRIO database
Getting the database
The WIOD database is available at http://www.wiod.org. You can download these files with the pymrio automatic downloader as described at WIOD download.
In the most simple case you get the full WIOD database with:
End of explanation
"""
wiod2007 = pymrio.parse_wiod(year=2007, path=wiod_storage)
"""
Explanation: This downloads the whole 2013 release of WIOD, including all extensions.
The extensions (satellite accounts) are provided as zip files. You can use them directly in pymrio (without extracting them). If you want to have them extracted, create a folder with the name of each extension (without the ending ".zip") and extract the zip file there.
Parsing
Parsing a single year
A single year of the WIOD database can be parsed by:
End of explanation
"""
wiod2007.Z.head()
wiod2007.AIR.F
"""
Explanation: This loads the specified year and the extension data:
End of explanation
"""
wiod2007.SEA.F
"""
Explanation: If a WIOD SEA file is present (at the root of path or in a folder named
'SEA' - only one file!), the labor data of this file gets included in the
factor_input extension (calculated for the three skill levels
available). The monetary data in this file is not added because it is only
given in national currency:
End of explanation
"""
print(wiod2007.meta)
"""
Explanation: Provenance tracking and additional metadata are available in the field meta:
End of explanation
"""
wiod2007_full = pymrio.parse_wiod(year=2007, path=wiod_storage, names=('full', 'full'))
wiod2007_full.Y.head()
"""
Explanation: WIOD provides three different sector/final demand categories naming
schemes. The one to use for pymrio can specified by passing a tuple
names= with:
1) 'isic': ISIC rev 3 Codes - available for interindustry flows and final demand rows.
2) 'full': Full names - available for final demand rows and final demand columns (categories) and interindustry flows.
3) 'c_codes' : WIOD specific sector numbers, available for final demand rows and columns (categories) and interindustry flows.
Internally, the parser relies on 1) for the interindustry flows and 3) for the final demand categories. This is the default and will also be used if just 'isic' gets passed ('c_codes' also replaces 'isic' if it was passed for final demand categories). To specify different final consumption category names, pass a tuple with (sectors/interindustry classification, fd categories), e.g. ('isic', 'full'). Names are case insensitive and passing the first character is sufficient.
For example, for loading wiod with full sector names:
End of explanation
"""
%matplotlib inline
from statsmodels.compat import lzip
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.formula.api import ols
plt.rc("figure", figsize=(16, 8))
plt.rc("font", size=14)
"""
Explanation: Regression Plots
End of explanation
"""
prestige = sm.datasets.get_rdataset("Duncan", "carData", cache=True).data
prestige.head()
prestige_model = ols("prestige ~ income + education", data=prestige).fit()
print(prestige_model.summary())
"""
Explanation: Duncan's Prestige Dataset
Load the Data
We can use a utility function to load any R dataset available from the great <a href="https://vincentarelbundock.github.io/Rdatasets/">Rdatasets package</a>.
End of explanation
"""
fig = sm.graphics.influence_plot(prestige_model, criterion="cooks")
fig.tight_layout(pad=1.0)
"""
Explanation: Influence plots
Influence plots show the (externally) studentized residuals vs. the leverage of each observation as measured by the hat matrix.
Externally studentized residuals are residuals that are scaled by their standard deviation where
$$var(\hat{\epsilon}_i)=\hat{\sigma}^2_i(1-h_{ii})$$
with
$$\hat{\sigma}^2_i=\frac{1}{n - p - 1}\sum_{j \neq i}^{n}\hat{\epsilon}_j^{\,2}$$
$n$ is the number of observations and $p$ is the number of regressors. $h_{ii}$ is the $i$-th diagonal element of the hat matrix
$$H=X(X^{\;\prime}X)^{-1}X^{\;\prime}$$
The influence of each point can be visualized by the criterion keyword argument. Options are Cook's distance and DFFITS, two measures of influence.
End of explanation
"""
fig = sm.graphics.plot_partregress(
"prestige", "income", ["income", "education"], data=prestige
)
fig.tight_layout(pad=1.0)
fig = sm.graphics.plot_partregress("prestige", "income", ["education"], data=prestige)
fig.tight_layout(pad=1.0)
"""
Explanation: As you can see there are a few worrisome observations. Both contractor and reporter have low leverage but a large residual. <br />
RR.engineer has small residual and large leverage. Conductor and minister have both high leverage and large residuals, and, <br />
therefore, large influence.
Partial Regression Plots (Duncan)
Since we are doing multivariate regressions, we cannot just look at individual bivariate plots to discern relationships. <br />
Instead, we want to look at the relationship of the dependent variable and independent variables conditional on the other <br />
independent variables. We can do this through using partial regression plots, otherwise known as added variable plots. <br />
In a partial regression plot, to discern the relationship between the response variable and the $k$-th variable, we compute <br />
the residuals by regressing the response variable versus the independent variables excluding $X_k$. We can denote this by <br />
$X_{\sim k}$. We then compute the residuals by regressing $X_k$ on $X_{\sim k}$. The partial regression plot is the plot <br />
of the former versus the latter residuals. <br />
The notable points of this plot are that the fitted line has slope $\beta_k$ and intercept zero. The residuals of this plot <br />
are the same as those of the least squares fit of the original model with full $X$. You can discern the effects of the <br />
individual data values on the estimation of a coefficient easily. If obs_labels is True, then these points are annotated <br />
with their observation label. You can also see the violation of underlying assumptions such as homoskedasticity and <br />
linearity.
End of explanation
"""
subset = ~prestige.index.isin(["conductor", "RR.engineer", "minister"])
prestige_model2 = ols(
"prestige ~ income + education", data=prestige, subset=subset
).fit()
print(prestige_model2.summary())
"""
Explanation: As you can see the partial regression plot confirms the influence of conductor, minister, and RR.engineer on the partial relationship between income and prestige. The cases greatly decrease the effect of income on prestige. Dropping these cases confirms this.
End of explanation
"""
fig = sm.graphics.plot_partregress_grid(prestige_model)
fig.tight_layout(pad=1.0)
"""
Explanation: For a quick check of all the regressors, you can use plot_partregress_grid. These plots will not label the <br />
points, but you can use them to identify problems and then use plot_partregress to get more information.
End of explanation
"""
fig = sm.graphics.plot_ccpr(prestige_model, "education")
fig.tight_layout(pad=1.0)
"""
Explanation: Component-Component plus Residual (CCPR) Plots
The CCPR plot provides a way to judge the effect of one regressor on the <br />
response variable by taking into account the effects of the other <br />
independent variables. The partial residuals plot is defined as <br />
$\text{Residuals} + B_iX_i \text{ }\text{ }$ versus $X_i$. The component adds $B_iX_i$ versus <br />
$X_i$ to show where the fitted line would lie. Care should be taken if $X_i$ <br />
is highly correlated with any of the other independent variables. If this <br />
is the case, the variance evident in the plot will be an underestimate of <br />
the true variance.
End of explanation
"""
fig = sm.graphics.plot_ccpr_grid(prestige_model)
fig.tight_layout(pad=1.0)
"""
Explanation: As you can see the relationship between the variation in prestige explained by education conditional on income seems to be linear, though you can see there are some observations that are exerting considerable influence on the relationship. We can quickly look at more than one variable by using plot_ccpr_grid.
End of explanation
"""
fig = sm.graphics.plot_regress_exog(prestige_model, "education")
fig.tight_layout(pad=1.0)
"""
Explanation: Single Variable Regression Diagnostics
The plot_regress_exog function is a convenience function that gives a 2x2 plot containing the dependent variable and fitted values with confidence intervals vs. the independent variable chosen, the residuals of the model vs. the chosen independent variable, a partial regression plot, and a CCPR plot. This function can be used for quickly checking modeling assumptions with respect to a single regressor.
End of explanation
"""
fig = sm.graphics.plot_fit(prestige_model, "education")
fig.tight_layout(pad=1.0)
"""
Explanation: Fit Plot
The plot_fit function plots the fitted values versus a chosen independent variable. It includes prediction confidence intervals and optionally plots the true dependent variable.
End of explanation
"""
# dta = pd.read_csv("http://www.stat.ufl.edu/~aa/social/csv_files/statewide-crime-2.csv")
# dta = dta.set_index("State", inplace=True).dropna()
# dta.rename(columns={"VR" : "crime",
# "MR" : "murder",
# "M" : "pctmetro",
# "W" : "pctwhite",
# "H" : "pcths",
# "P" : "poverty",
# "S" : "single"
# }, inplace=True)
#
# crime_model = ols("murder ~ pctmetro + poverty + pcths + single", data=dta).fit()
dta = sm.datasets.statecrime.load_pandas().data
crime_model = ols("murder ~ urban + poverty + hs_grad + single", data=dta).fit()
print(crime_model.summary())
"""
Explanation: Statewide Crime 2009 Dataset
Compare the following to http://www.ats.ucla.edu/stat/stata/webbooks/reg/chapter4/statareg_self_assessment_answers4.htm
Though the data here is not the same as in that example. You could run that example by uncommenting the necessary cells below.
End of explanation
"""
fig = sm.graphics.plot_partregress_grid(crime_model)
fig.tight_layout(pad=1.0)
fig = sm.graphics.plot_partregress(
"murder", "hs_grad", ["urban", "poverty", "single"], data=dta
)
fig.tight_layout(pad=1.0)
"""
Explanation: Partial Regression Plots (Crime Data)
End of explanation
"""
fig = sm.graphics.plot_leverage_resid2(crime_model)
fig.tight_layout(pad=1.0)
"""
Explanation: Leverage-Resid<sup>2</sup> Plot
Closely related to the influence_plot is the leverage-resid<sup>2</sup> plot.
End of explanation
"""
fig = sm.graphics.influence_plot(crime_model)
fig.tight_layout(pad=1.0)
"""
Explanation: Influence Plot
End of explanation
"""
from statsmodels.formula.api import rlm
rob_crime_model = rlm(
"murder ~ urban + poverty + hs_grad + single",
data=dta,
M=sm.robust.norms.TukeyBiweight(3),
).fit(conv="weights")
print(rob_crime_model.summary())
# rob_crime_model = rlm("murder ~ pctmetro + poverty + pcths + single", data=dta, M=sm.robust.norms.TukeyBiweight()).fit(conv="weights")
# print(rob_crime_model.summary())
"""
Explanation: Using robust regression to correct for outliers.
Part of the problem here in recreating the Stata results is that M-estimators are not robust to leverage points. MM-estimators should do better with this example.
End of explanation
"""
weights = rob_crime_model.weights
idx = weights > 0
X = rob_crime_model.model.exog[idx.values]
ww = weights[idx] / weights[idx].mean()
hat_matrix_diag = ww * (X * np.linalg.pinv(X).T).sum(1)
resid = rob_crime_model.resid
resid2 = resid ** 2
resid2 /= resid2.sum()
nobs = int(idx.sum())
hm = hat_matrix_diag.mean()
rm = resid2.mean()
from statsmodels.graphics import utils
fig, ax = plt.subplots(figsize=(16, 8))
ax.plot(resid2[idx], hat_matrix_diag, "o")
ax = utils.annotate_axes(
range(nobs),
labels=rob_crime_model.model.data.row_labels[idx],
points=lzip(resid2[idx], hat_matrix_diag),
offset_points=[(-5, 5)] * nobs,
size="large",
ax=ax,
)
ax.set_xlabel("resid2")
ax.set_ylabel("leverage")
ylim = ax.get_ylim()
ax.vlines(rm, *ylim)
xlim = ax.get_xlim()
ax.hlines(hm, *xlim)
ax.margins(0, 0)
"""
Explanation: There is not yet an influence diagnostics method as part of RLM, but we can recreate the plots. (This depends on the status of issue #888)
End of explanation
"""
# import useful modules
import pandas as pd
from pandas import DataFrame
import re
import numpy as np
import matplotlib.pyplot as plt
try:
import seaborn as sns
except:
!pip install seaborn
%matplotlib inline
sns.set_style('whitegrid')
"""
Explanation: Logistic Regression - Titanic Example
In this notebook we explore prediction tasks where the response variable is categorical instead of numeric, and look at a common classification technique known as logistic regression. We apply this technique to a dataset containing survival data for the passengers of the Titanic.
As part of the analysis, we will be doing the following:
Data extraction : we'll load the dataset and have a look at it.
Cleaning : we'll fill in some of the missing values.
Plotting : we'll create several charts that will (hopefully) help identify correlations and other insights
Two datasets are available: a training set and a test set. We'll be using the training set to build our predictive model and the testing set to evaluate it.
End of explanation
"""
training_data = pd.read_csv("Titanic_train.csv")
test_data = pd.read_csv("Titanic_test.csv")
training_data.head()
test_data.head()
print(training_data.info())
print("\n=======================================\n")
print(test_data.info())
"""
Explanation: Let us start by loading the training set and having a first look at our data:
End of explanation
"""
# check the missing data for the Embarked field
training_data[training_data.Embarked.isnull()]
"""
Explanation: The <b>Survived</b> column is our target/dependent/reponse variable, 1 means the passenger survived, 0 means that the passenger died.
Several other variables describe the passengers:
- PassengerId: and id given to each traveler on the boat.
- Pclass: the passenger class. It has three possible values: 1,2,3.
- Name
- Sex
- Age
- SibSp: number of siblings and spouses traveling with the passenger
- Parch: number of parents and children traveling with the passenger
- The ticket number
- The ticket fare
- The cabin number
- The port of embarkation. It has three possible values S,C,Q. (C = Cherbourg; Q = Queenstown; S = Southampton)
Let us check which records in the training data are missing information for the Embarked field.
End of explanation
"""
# plot
#sns.factorplot('Embarked','Survived', data=training_data,size=4,aspect=3)
fig, (axis1,axis2,axis3) = plt.subplots(1,3,figsize=(15,5))
sns.countplot(x='Embarked', data=training_data, ax=axis1)
sns.countplot(x='Survived', hue="Embarked", data=training_data, order=[1,0], ax=axis2)
# group by embarked, and get the mean for survived passengers for each value in Embarked
embark_perc = training_data[["Embarked", "Survived"]].groupby(['Embarked'],as_index=False).mean()
sns.barplot(x='Embarked', y='Survived', data=embark_perc,order=['S','C','Q'],ax=axis3)
"""
Explanation: Let's look at the survival chances depending on the port of embarkation
End of explanation
"""
training_data.loc[training_data.Ticket == '113572']
fare_mask = (training_data.Pclass == 1) & (training_data.Fare > 75) & (training_data.Fare < 85)
print('C == ' + str(len(training_data[fare_mask & (training_data.Embarked == 'C')])))
print('S == ' + str(len(training_data[fare_mask & (training_data.Embarked == 'S')])))
training_data.loc[training_data.Embarked.isnull(), 'Embarked'] = 'C'
training_data.loc[training_data.Embarked.isnull()]
"""
Explanation: Let's look at other variables that may indicate where these passengers boarded the ship.
End of explanation
"""
test_data[test_data.Fare.isnull()]
"""
Explanation: Let us check which records are missing information for the Fare and Cabin fields
End of explanation
"""
fig = plt.figure(figsize=(8, 5))
ax = fig.add_subplot(111)
test_data[(test_data.Pclass==3)&(test_data.Embarked=='S')].Fare.hist(bins=100, ax=ax)
plt.xlabel('Fare')
plt.ylabel('Frequency')
plt.title('Histogram of Fare, Plcass 3 and Embarked S')
print ("The top 5 most common fares")
test_data[(test_data.Pclass==3)&(test_data.Embarked=='S')].Fare.value_counts().head()
"""
Explanation: Let's visualize a histogram of the fares paid by the 3rd class passengers who embarked in Southampton.
End of explanation
"""
test_data.loc[test_data.Fare.isnull(), 'Fare'] = 8.05
test_data.loc[test_data.Fare.isnull()]
"""
Explanation: Let us fill in the missing values with the most common fare, $8.05
End of explanation
"""
test_data.loc[test_data.Age.isnull()].head()
fig, (axis1,axis2) = plt.subplots(1,2,figsize=(15,4))
axis1.set_title('Original Age values')
axis2.set_title('New Age values')
average_age_training = training_data["Age"].mean()
std_age_training = training_data["Age"].std()
count_nan_age_training = training_data["Age"].isnull().sum()
average_age_test = test_data["Age"].mean()
std_age_test = test_data["Age"].std()
count_nan_age_test = test_data["Age"].isnull().sum()
rand_1 = np.random.randint(average_age_training - std_age_training,\
average_age_training + std_age_training,\
size = count_nan_age_training)
rand_2 = np.random.randint(average_age_test - std_age_test,\
average_age_test + std_age_test,\
size = count_nan_age_test)
training_data['Age'].dropna().astype(int).hist(bins=70, ax=axis1)
test_data['Age'].dropna().astype(int).hist(bins=70, ax=axis1)
training_data.loc[np.isnan(training_data["Age"]), "Age"] = rand_1
test_data.loc[np.isnan(test_data["Age"]), "Age"] = rand_2
training_data['Age'] = training_data['Age'].astype(int)
test_data['Age'] = test_data['Age'].astype(int)
training_data['Age'].hist(bins=70, ax=axis2)
test_data['Age'].hist(bins=70, ax=axis2)
facet = sns.FacetGrid(training_data, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'Age',shade= True)
facet.set(xlim=(0, training_data['Age'].max()))
facet.add_legend()
fig, axis1 = plt.subplots(1,1,figsize=(18,4))
average_age = training_data[["Age", "Survived"]].groupby(['Age'],as_index=False).mean()
sns.barplot(x='Age', y='Survived', data=average_age)
print(training_data.info())
print("\n=====================================\n")
print(test_data.info())
"""
Explanation: Let's look at the field of <b>Age</b> in the training dataset, and see how it correlates with survival.
End of explanation
"""
Title_Dictionary = {
"Capt": "Officer",
"Col": "Officer",
"Major": "Officer",
"Jonkheer": "Nobel",
"Don": "Nobel",
"Sir" : "Nobel",
"Dr": "Officer",
"Rev": "Officer",
"the Countess":"Nobel",
"Dona": "Nobel",
"Mme": "Mrs",
"Mlle": "Miss",
"Ms": "Mrs",
"Mr" : "Mr",
"Mrs" : "Mrs",
"Miss" : "Miss",
"Master" : "Master",
"Lady" : "Nobel"
}
training_data['Title'] = training_data['Name'].apply(lambda x: Title_Dictionary[x.split(',')[1].split('.')[0].strip()])
test_data['Title'] = test_data['Name'].apply(lambda x: Title_Dictionary[x.split(',')[1].split('.')[0].strip()])
training_data.head(10)
"""
Explanation: The names have a prefix that, in some cases, is indicative of social status, which may have been an important factor in surviving the accident.
Braund, Mr. Owen Harris
Heikkinen, Miss. Laina
Oliva y Ocana, Dona. Fermina
Peter, Master. Michael J
Extracting the passenger titles and storing them in an additional column called <b>Title</b>.
End of explanation
"""
training_data['FamilySize'] = training_data['SibSp'] + training_data['Parch']
test_data['FamilySize'] = test_data['SibSp'] + test_data['Parch']
training_data.head()
"""
Explanation: Add a field FamilySize that aggregates the counts of parents/children (Parch) and siblings/spouses (SibSp) traveling with each passenger.
End of explanation
"""
def get_person(passenger):
age,sex = passenger
return 'child' if age < 16 else sex
training_data['Person'] = training_data[['Age','Sex']].apply(get_person,axis=1)
test_data['Person'] = test_data[['Age','Sex']].apply(get_person,axis=1)
training_data.head()
training_data.info()
print("\n------------------------------------\n")
test_data.info()
"""
Explanation: A passenger's gender is an important factor in surviving the accident. So is a passenger's age. Let us introduce a new feature that takes into account the gender and age of passengers.
End of explanation
"""
training_data.drop(labels=['PassengerId', 'Name', 'Cabin', 'Ticket', 'SibSp', 'Parch', 'Sex'], axis=1, inplace=True)
test_data.drop(labels=['Name', 'Cabin', 'Ticket', 'SibSp', 'Parch', 'Sex'], axis=1, inplace=True)
training_data.head()
"""
Explanation: Let us select just the features of interest. We are dropping features like Name, SibSp and Sex, whose information is either no longer needed or is accounted for in the columns that we have added.
End of explanation
"""
dummies_person_train = pd.get_dummies(training_data['Person'],prefix='Person')
dummies_embarked_train = pd.get_dummies(training_data['Embarked'], prefix= 'Embarked')
dummies_title_train = pd.get_dummies(training_data['Title'], prefix= 'Title')
dummies_pclass_train = pd.get_dummies(training_data['Pclass'], prefix= 'Pclass')
training_data = pd.concat([training_data, dummies_person_train, dummies_embarked_train, dummies_title_train, dummies_pclass_train], axis=1)
training_data = training_data.drop(['Person','Embarked','Title', 'Pclass'], axis=1)
training_data.head()
dummies_person_test = pd.get_dummies(test_data['Person'],prefix='Person')
dummies_embarked_test = pd.get_dummies(test_data['Embarked'], prefix= 'Embarked')
dummies_title_test = pd.get_dummies(test_data['Title'], prefix= 'Title')
dummies_pclass_test = pd.get_dummies(test_data['Pclass'], prefix= 'Pclass')
test_data = pd.concat([test_data, dummies_person_test, dummies_embarked_test, dummies_title_test, dummies_pclass_test], axis=1)
test_data = test_data.drop(['Person','Embarked','Title', 'Pclass'], axis=1)
test_data.head()
"""
Explanation: We use the information available on passengers to build a statistical model for survivorship which, given a "new" passenger, will predict whether or not they survived. There is a wide variety of models to use, from logistic regression to decision trees and more sophisticated ones such as random forests.
First, let us use Pandas' get_dummies function to encode some of the features with discrete values, i.e., Person, Embarked, Title and Pclass and add those dummy variables as columns to the DataFrame object that stores the training data.
End of explanation
"""
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None, n_jobs=1,\
train_sizes=np.linspace(.1, 1.0, 5), scoring='accuracy'):
plt.figure(figsize=(10,6))
plt.title(title)
if ylim is not None:
plt.ylim(*ylim)
plt.xlabel("Training examples")
plt.ylabel(scoring)
train_sizes, train_scores, test_scores = learning_curve(estimator, X, y, cv=cv, scoring=scoring, n_jobs=n_jobs, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,\
train_scores_mean + train_scores_std, alpha=0.1, \
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,\
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g", label="Cross-validation score")
plt.legend(loc="best")
return plt
# import machine learning modules
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import make_scorer, accuracy_score
from sklearn.ensemble import RandomForestClassifier
try:
from sklearn.model_selection import train_test_split
except:
from sklearn.cross_validation import train_test_split
try:
from sklearn.model_selection import GridSearchCV
except:
from sklearn.grid_search import GridSearchCV
try:
from sklearn.model_selection import learning_curve
except:
from sklearn.learning_curve import learning_curve
"""
Explanation: Let us create a function that visualizes the accuracy of the models we are building. It plots as a continuous line the mean values of the scores of the chosen estimator for two data sets, and a coloured band around the mean line, i.e., the interval (mean - standard deviation, mean + standard deviation).
plot_learning_curve() uses in turn the function sklearn.learning_curve.learning_curve(), which determines cross-validated training and test scores for different training set sizes. An (optional) cross-validation generator splits the given dataset k times into training and test data. (The default is 3-fold cross validation.) Subsets of the training set with varying sizes will be used to train the estimator and a score for each training subset size and the test set will be computed. The scores are averaged over all k runs for each training subset size.
End of explanation
"""
X = training_data.drop(['Survived'], axis=1)
y = training_data.Survived
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, test_size = 0.3)
"""
Explanation: Let us build a model for the Titanic data. First, let us split the training data set into training and validation datasets, with the validation dataset being 30% of the data. We are using sklearn.model_selection.train_test_split() which splits arrays or matrices into random train and validation ("test") subsets.
End of explanation
"""
# Choose the type of classifier.
clf = RandomForestClassifier()
# Choose some parameter combinations to try
parameters = {'n_estimators': [4, 6, 9],
'max_features': ['log2', 'sqrt','auto'],
'criterion': ['entropy', 'gini'],
'max_depth': [2, 3, 5, 10],
'min_samples_split': [2, 3, 5],
'min_samples_leaf': [1,5,8]
}
# make_scorer returns a callable object that scores an estimator’s output.
#We are using accuracy_score for comparing different parameter combinations.
acc_scorer = make_scorer(accuracy_score)
# Run the grid search for the Random Forest classifier
grid_obj = GridSearchCV(clf, parameters, scoring=acc_scorer)
grid_obj = grid_obj.fit(X_train, y_train)
# Set our classifier, clf, to the have the best combination of parameters
clf = grid_obj.best_estimator_
# Fit the selected classifier to the training data
clf.fit(X_train, y_train)
predictions = clf.predict(X_test)
print(accuracy_score(y_test, predictions))
plot_learning_curve(clf, 'Random Forest', X, y, cv=4);
"""
Explanation: We will use GridSearchCV, which exhaustively considers all parameter combinations, to find the best model for the data. A search consists of:
- an estimator (regressor or classifier such as RandomForestClassifier(), or LogisticRegression());
- a parameter space;
- a method for searching or sampling candidates;
- a cross-validation scheme;
- a score function, such as accuracy_score()
End of explanation
"""
from sklearn.cross_validation import KFold
def run_kfold(clf):
#run KFold with 10 folds instead of the default 3
#on the 891 records in the training_data
kf = KFold(891, n_folds=10)
outcomes = []
fold = 0
for train_index, test_index in kf:
fold += 1
X_train, X_test = X.values[train_index], X.values[test_index]
y_train, y_test = y.values[train_index], y.values[test_index]
clf.fit(X_train, y_train)
predictions = clf.predict(X_test)
accuracy = accuracy_score(y_test, predictions)
outcomes.append(accuracy)
print("Fold {0} accuracy: {1}".format(fold, accuracy))
mean_outcome = np.mean(outcomes)
print("Mean Accuracy: {0}".format(mean_outcome))
run_kfold(clf)
"""
Explanation: Let us turn our attention to cross-validation. We are using the sklearn.model_selection.KFold(), a K-folds cross-validator which provides training and validation/testing indices to split data into training and validation (or testing) sets. KFold() splits a dataset into k consecutive folds with each fold used once as a validation while the k - 1 remaining folds form the training set.
End of explanation
"""
lg = LogisticRegression(random_state=42, penalty='l1')
parameters = {'C':[0.5]}
# Use classification accuracy to compare parameter combinations
acc_scorer_lg = make_scorer(accuracy_score)
# Run a grid search for the Logistic Regression classifier and all the selected parameters
grid_obj_lg = GridSearchCV(lg, parameters, scoring=acc_scorer_lg)
grid_obj_lg = grid_obj_lg.fit(X_train, y_train)
# Set our classifier, lg, to have the best combination of parameters
lg = grid_obj_lg.best_estimator_
# Fit the selected classifier to the training data.
lg.fit(X_train, y_train)
"""
Explanation: Let's repeat the above procedure for the logistic regression. Find the "best" Logistic Regression classifier:
End of explanation
"""
predictions_lg = lg.predict(X_test)
print(accuracy_score(y_test, predictions_lg))
plot_learning_curve(lg, 'Logistic Regression', X, y, cv=4);
"""
Explanation: Plot the mean accuracy, the "learning curve", of the classifier on both the training and validation datasets.
End of explanation
"""
ids = test_data['PassengerId']
predictions = clf.predict(test_data.drop('PassengerId', axis=1))
output = pd.DataFrame({ 'PassengerId' : ids, 'Survived': predictions })
output.to_csv('titanic-predictions.csv', index = False)
output.head()
"""
Explanation: Finally, perform predictions on the reserved test dataset using the selected Random Forest classifier and store them in a file, titanic-predictions.csv.
End of explanation
"""
|
jhjungCode/pytorch-tutorial | 09_Flowers_tranfer_learning.ipynb | mit | !if [ ! -d "/tmp/flower_photos" ]; then curl http://download.tensorflow.org/example_images/flower_photos.tgz | tar xz -C /tmp ;rm /tmp/flower_photos/LICENSE.txt; fi
%matplotlib inline
"""
Explanation: Flowers transfer learning example
In the retraining performed in the previous chapter, a batch size of 8 or more may run out of memory depending on your machine's specs, or this may already have happened and you adjusted the size. In other words, when the network and the input dataset are large, training consumes substantial compute resources.
Since almost the whole pretrained network is used as-is and the only part that needs training is the last fully connected layer,
we can instead take the values just before the last layer as the input and train only that final stage.
In this chapter we implement exactly that.
First, run the command below to unpack the flower image archive under the working directory.
End of explanation
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
from torchvision import datasets, transforms
from torch.autograd import Variable
import matplotlib.pyplot as plt
import numpy as np
is_cuda = torch.cuda.is_available() # True if CUDA is available
traindir = '/tmp/flower_photos'
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
batch_size = 1
train_loader = torch.utils.data.DataLoader(
datasets.ImageFolder(traindir,
transforms.Compose([
transforms.RandomSizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normalize,])),
batch_size=batch_size,
shuffle=True)
cls_num = len(datasets.folder.find_classes(traindir)[0])
model = torchvision.models.resnet152(pretrained = True)
# remove last fully-connected layer
model = nn.Sequential(*list(model.children())[:-1])
model.eval()
features = []
targets = []
for image, target in train_loader:
image, target = Variable(image, volatile=True), Variable(target, volatile=True)
if is_cuda : image, target = image.cuda(), target.cuda()
feature = model(image).data
features.append(feature)
targets.append(target.data.squeeze())
features = torch.cat(features, 0).squeeze()
targets = torch.cat(targets, 0)
torch.save(features, 'flower_feature.pth')
torch.save(targets, 'flower_label.pth')
"""
Explanation: 1. Extract features & save feature data
We build the model and run prediction to generate transfer values from the images.
End of explanation
"""
# load feature datasets
features = torch.load('flower_feature.pth')
targets = torch.load('flower_label.pth')
batch_size = 500
features_datasest = torch.utils.data.TensorDataset(features, targets)
feature_loader = torch.utils.data.DataLoader(
features_datasest,
batch_size=batch_size,
shuffle=True)
"""
Explanation: Retraining the fully connected layer
1. Building the feature dataset
End of explanation
"""
# remove last fully-connected layer
fcmodel = nn.Linear(2048, cls_num)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(fcmodel.parameters(), lr=1e-4, weight_decay=1e-4)
if is_cuda : model.cuda(), loss_fn.cuda()
"""
Explanation: 2. Setup
model
Build a fully connected layer that takes the 2048 features and maps them to the 5 flower classes.
loss
optimizer
End of explanation
"""
# training
fcmodel.train()
train_loss = []
train_accu = []
i = 0
for epoch in range(1000):
for feature, target in feature_loader:
feature, target = Variable(feature), Variable(target.squeeze()) # set up input and target
if is_cuda : feature, target = feature.cuda(), target.cuda()
output = fcmodel(feature) # forward pass
loss = loss_fn(output, target) # compute loss
optimizer.zero_grad() # zero_grad
loss.backward() # calc backward grad
optimizer.step() # update parameter
pred = output.data.max(1)[1]
accuracy = pred.eq(target.data).sum()/batch_size
train_loss.append(loss.data[0])
train_accu.append(accuracy)
if i % 300 == 0:
print(i, loss.data[0])
i += 1
plt.plot(train_accu)
plt.plot(train_loss)
checkpoint_filename = 'flowerfeature_resnet152.ckpt'
## save a parameter
torch.save(fcmodel.state_dict(), checkpoint_filename)
"""
Explanation: 3. Training loop
* (build the inputs)
* forward pass through the model
* compute the loss
* zero the gradients
* backpropagation
* optimizer step (update model parameters)
End of explanation
"""
import torch
import torchvision
import torch.nn as nn
from torchvision import datasets, transforms
from torch.autograd import Variable
traindir = '/tmp/flower_photos'
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
cls_num = len(datasets.folder.find_classes(traindir)[0])
test_loader = torch.utils.data.DataLoader(
datasets.ImageFolder(traindir,
transforms.Compose([
transforms.RandomSizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normalize,])),
shuffle=True)
model = torchvision.models.resnet152(pretrained = True)
# make new fully connected layer
fcmodel = nn.Linear(2048, cls_num)
# load saved parameter into fc layer
checkpoint_filename = 'flowerfeature_resnet152.ckpt'
checkpoint = torch.load(checkpoint_filename)
fcmodel.load_state_dict(checkpoint)
#connect fc layer to resnet152
model.fc = fcmodel
model.eval()
correct = 0
from itertools import islice
for image, target in islice(test_loader, 100):
if is_cuda : image, target = image.cuda(), target.cuda()
image, target = Variable(image, volatile=True), Variable(target)
output = model(image)
prediction = output.data.max(1)[1]
correct += prediction.eq(target.data).sum()
#print('\nTest set: Accuracy: {:.2f}%'.format(100. * correct / len(test_loader.dataset)))
print('\nTest set: Accuracy: {:.2f}%'.format(100. * correct / 100))
"""
Explanation: 4. Predict & Evaluate
Now we attach the trained final fully connected layer to the original model and run predictions. Because a larger batch size can be used here, this reaches roughly 93% accuracy, somewhat better than the example in Chapter 8.
End of explanation
"""
|
molgor/spystats | notebooks/Sandboxes/TensorFlow/.ipynb_checkpoints/BiospytialGaussianModels-checkpoint.ipynb | bsd-2-clause | run ../../../../traversals/tests.py
"""
Explanation: In this notebook I'll create functions to ease the development of geostatistical models using GPflow (James H. et al.), the library for modelling Gaussian processes in TensorFlow (Google). (Great library, btw.)
Requirements
Inputs
Design Matrix X composed of coovariates and spatio-temporal coordinates.
A desired hypespace $A \subseteq \mathbb{R}^{n}$ (e.g. Borelian, Closed, Discrete,Partition)
An aditional set of hyperparameters and initializations.
Processing
A wrapper with GPflow regressor (This will be experimental)
Outputs
The fitted GPR model.
A tensor composed of the coordinates of two dimensions and the predicted field given a initial condition (tensor of rank two.
Get some sample data
End of explanation
"""
import tensorflow as tf
import GPflow as gf
import pandas as pd
#k = gf.kernels.Matern12(2, lengthscales=1, active_dims = [3,4])
X = pd.concat((rd[['MeanTemperature_mean','Precipitation_mean','WindSpeed_mean']],s[['Longitude','Latitude']]),axis=1)
k = gf.kernels.Matern12(2, lengthscales=1, active_dims = [0,1])
X = s[['Longitude','Latitude']]
Y = rd['Elevation_mean']
mx = X.as_matrix()
my = Y.as_matrix().reshape(16,1)
mx.shape
meanf = gf.mean_functions.Linear(np.ones((2,1)), np.ones(1))
m = gf.gpr.GPR(mx,my,k,mean_function=meanf)
m.likelihood.variance = 10
m.optimize()
print(m)
"""
Explanation: GPFlow first approximation
End of explanation
"""
plt.style.use('ggplot')
X.plot.scatter('Longitude','Latitude')
"""
Explanation: Building a grid for the interpolation (prediction)
The first step is to inspect the range of the geographical space.
End of explanation
"""
Nn = 300
predicted_x = np.linspace(min(X.Longitude),max(X.Longitude),Nn)
predicted_y = np.linspace(min(X.Latitude),max(X.Latitude),Nn)
Xx, Yy = np.meshgrid(predicted_x,predicted_y)
predicted_coordinates = np.vstack([Xx.ravel(), Yy.ravel()]).transpose()
predicted_coordinates.shape
means,variances = m.predict_y(predicted_coordinates)
upperl = np.square(variances)*2
lowerl = -1 * upperl
### Let´s plot
#X.plot.scatter('Longitude','Latitude')
plt.pcolor(Xx,Yy,means.reshape(Nn,Nn))
plt.colorbar()
plt.scatter(X.Longitude,X.Latitude,s=Y*0.05,c=Y,cmap=plt.cm.binary)
##
## Upper limit
plt.pcolor(Xx,Yy,variances.reshape(Nn,Nn))
plt.colorbar()
plt.scatter(X.Longitude,X.Latitude,s=Y*0.05,c=Y,cmap=plt.cm.binary)
## Upper limit
plt.pcolor(Xx,Yy,upperl.reshape(Nn,Nn))
plt.colorbar()
plt.scatter(X.Longitude,X.Latitude,s=Y*0.05,c=Y,cmap=plt.cm.binary)
## Lower limit
plt.pcolor(Xx,Yy,lowerl.reshape(Nn,Nn))
plt.colorbar()
plt.scatter(X.Longitude,X.Latitude,s=Y*0.05,c=Y,cmap=plt.cm.binary)
"""
Explanation: Let's build a meshgrid and then a pcolor plot using it.
End of explanation
"""
elev = big_t.associatedData.getAssociatedRasterAreaData('Elevation')
elev.display_field()
print(elev.rasterdata.bands[0].data().shape)
## But we can extract directly the info from this raster.
from django.contrib.gis.geos import Point
true_elevs = map(lambda p : elev.getValue(Point(*p)),predicted_coordinates)
# so the errors are:
errors= means - true_elevs
plt.hist(errors,bins=50)
plt.scatter(range(len(errors)),errors)
"""
Explanation: We can get the direct Elevation data with:
End of explanation
"""
k = gf.kernels.Matern12(2, lengthscales=1, active_dims = [6,7])
X = pd.concat((rd[['MaxTemperature_mean', u'MeanTemperature_mean',
u'MinTemperature_mean', u'Precipitation_mean', u'SolarRadiation_mean',
u'Vapor_mean']],s[['Longitude','Latitude']]),axis=1)
mx = X.as_matrix()
#Y is still elevation (4,4) matrix
my = Y.as_matrix().reshape(16,1)
meanf = gf.mean_functions.Linear(np.ones((8,1)), np.ones(1))
m = gf.gpr.GPR(mx,my,k,mean_function=meanf)
m.likelihood.variance = 10
m.optimize()
print(m)
X.columns
mx = X.as_matrix()
my = Y.as_matrix().reshape(16,1)
mx.shape
meanf = gf.mean_functions.Linear(np.ones((8,1)), np.ones(1))
m = gf.gpr.GPR(mx,my,k,mean_function=meanf)
m.likelihood.variance = 10
m.optimize()
print(m)
# Now Let´s do a Logistic Regression
s
k = gf.kernels.Matern12(2, lengthscales=1, active_dims = [0,1])
X = s[['Longitude','Latitude']]
Y = s[['Falconidae']]
mx = X.as_matrix()
my = Y.as_matrix().reshape(16,1)
meanf = gf.mean_functions.Linear(np.ones((2,1)), np.ones(1))
## I need a likelihood function !
m = gf.gpmc.GPMC(mx,my,k,mean_function=meanf)
#m.likelihood.variance = 10
m.optimize()
#print(m)
"""
Explanation: Using all* covariates for predicting elevation
End of explanation
"""
|
nreimers/deeplearning4nlp-tutorial | 2015-10_Lecture/Lecture4/code/MNIST/Autoencoder.ipynb | apache-2.0 | import gzip
import cPickle
import numpy as np
import theano
import theano.tensor as T
import random
examples_per_labels = 10
# Load the pickle file for the MNIST dataset.
dataset = 'mnist.pkl.gz'
f = gzip.open(dataset, 'rb')
train_set, dev_set, test_set = cPickle.load(f)
f.close()
#train_set contains 2 entries, first the X values, second the Y values
train_x, train_y = train_set
dev_x, dev_y = dev_set
test_x, test_y = test_set
print 'Train: ', train_x.shape
print 'Dev: ', dev_x.shape
print 'Test: ', test_x.shape
examples = []
examples_labels = []
examples_count = {}
for idx in xrange(train_x.shape[0]):
label = train_y[idx]
if label not in examples_count:
examples_count[label] = 0
if examples_count[label] < examples_per_labels:
arr = train_x[idx]
examples.append(arr)
examples_labels.append(label)
examples_count[label]+=1
train_subset_x = np.asarray(examples)
train_subset_y = np.asarray(examples_labels)
print "Train Subset: ",train_subset_x.shape
"""
Explanation: Autoencoder for MNIST Dataset
This script trains an autoencoder on the MNIST dataset and plots some representations. It also tries to estimate how good the representation is by running a k-means clustering and then computing the accuracy of the clusters.
Reading the dataset
This reads the MNIST hand written digit dataset and creates a subset of the training data with only 10 training examples per class.
End of explanation
"""
from keras.layers import containers
import keras
from keras.models import Sequential
from keras.layers.core import Dense, Flatten, AutoEncoder, Dropout
from keras.optimizers import SGD
from keras.utils import np_utils
from keras.callbacks import EarlyStopping
random.seed(1)
np.random.seed(1)
nb_epoch = 50
batch_size = 100
nb_labels = 10
train_subset_y_cat = np_utils.to_categorical(train_subset_y, nb_labels)
dev_y_cat = np_utils.to_categorical(dev_y, nb_labels)
test_y_cat = np_utils.to_categorical(test_y, nb_labels)
model = Sequential()
model.add(Dense(1000, input_dim=train_x.shape[1], activation='tanh'))
model.add(Dropout(0.5))
model.add(Dense(nb_labels, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='Adam')
earlyStopping = keras.callbacks.EarlyStopping(monitor='val_loss', patience=1, verbose=0)
print('Start training')
model.fit(train_subset_x, train_subset_y_cat, batch_size=batch_size, nb_epoch=nb_epoch,
show_accuracy=True, verbose=True, validation_data=(dev_x, dev_y_cat), callbacks=[earlyStopping])
score = model.evaluate(test_x, test_y_cat, show_accuracy=True, verbose=False)
print('Test accuracy:', score[1])
"""
Explanation: Baseline
We use a feed-forward network trained on the subset to derive a baseline accuracy.
End of explanation
"""
# Train the autoencoder
# Source: https://github.com/fchollet/keras/issues/358
from keras.layers import containers
import keras
from keras.models import Sequential
from keras.layers.core import Dense, Flatten, AutoEncoder, Dropout
from keras.optimizers import SGD
from keras.utils import np_utils
random.seed(3)
np.random.seed(3)
nb_epoch_pretraining = 10
batch_size_pretraining = 500
# Layer-wise pretraining
encoders = []
decoders = []
nb_hidden_layers = [train_x.shape[1], 500, 2]
X_train_tmp = np.copy(train_x)
dense_layers = []
for i, (n_in, n_out) in enumerate(zip(nb_hidden_layers[:-1], nb_hidden_layers[1:]), start=1):
print('Training the layer {}: Input {} -> Output {}'.format(i, n_in, n_out))
# Create AE and training
ae = Sequential()
if n_out >= 100:
encoder = containers.Sequential([Dense(output_dim=n_out, input_dim=n_in, activation='tanh'), Dropout(0.5)])
else:
encoder = containers.Sequential([Dense(output_dim=n_out, input_dim=n_in, activation='tanh')])
decoder = containers.Sequential([Dense(output_dim=n_in, input_dim=n_out, activation='tanh')])
ae.add(AutoEncoder(encoder=encoder, decoder=decoder, output_reconstruction=False))
sgd = SGD(lr=2, decay=1e-6, momentum=0.0, nesterov=True)
ae.compile(loss='mse', optimizer='adam')
ae.fit(X_train_tmp, X_train_tmp, batch_size=batch_size_pretraining, nb_epoch=nb_epoch_pretraining, verbose = True, shuffle=True)
# Store trained weights and update training data
encoders.append(ae.layers[0].encoder)
decoders.append(ae.layers[0].decoder)
X_train_tmp = ae.predict(X_train_tmp)
##############
#End to End Autoencoder training
if len(nb_hidden_layers) > 2:
full_encoder = containers.Sequential()
for encoder in encoders:
full_encoder.add(encoder)
full_decoder = containers.Sequential()
for decoder in reversed(decoders):
full_decoder.add(decoder)
full_ae = Sequential()
full_ae.add(AutoEncoder(encoder=full_encoder, decoder=full_decoder, output_reconstruction=False))
full_ae.compile(loss='mse', optimizer='adam')
print "Pretraining of full AE"
full_ae.fit(train_x, train_x, batch_size=batch_size_pretraining, nb_epoch=nb_epoch_pretraining, verbose = True, shuffle=True)
"""
Explanation: Autoencoder - Pretraining
This is the code for how the autoencoder should work in principle. However, the pretraining does not work too well, as it has no real impact when the network is then trained on the labelled data. But it does give some useful representations of the data.
End of explanation
"""
############
# Plot it
############
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
model = Sequential()
for encoder in encoders:
model.add(encoder)
model.compile(loss='categorical_crossentropy', optimizer='Adam')
ae_test = model.predict(test_x)
colors = {0: 'b', 1: 'g', 2: 'r', 3:'c', 4:'m',
5:'y', 6:'k', 7:'orange', 8:'darkgreen', 9:'maroon'}
markers = {0: 'o', 1: '+', 2: 'v', 3:'<', 4:'>',
5:'^', 6:'s', 7:'p', 8:'*', 9:'x'}
plt.figure(figsize=(10, 10))
patches = []
for idx in xrange(0,300):
point = ae_test[idx]
label = test_y[idx]
if label in [2,5,8,9]: #We skip these labels to make the plot clearer
continue
color = colors[label]
marker = markers[label]
line = plt.plot(point[0], point[1], color=color, marker=marker, markersize=8)
#plt.axis([-1.1, 1.1, -1.1, +1.1])
"""
Explanation: Plot Autoencoder
Here we are going to plot the output of the autoencoder (dimension of the last hidden layer should be 2).
End of explanation
"""
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(train_x)
pca_test = pca.transform(test_x)
colors = {0: 'b', 1: 'g', 2: 'r', 3:'c', 4:'m',
5:'y', 6:'k', 7:'orange', 8:'darkgreen', 9:'maroon'}
markers = {0: 'o', 1: '+', 2: 'v', 3:'<', 4:'>',
5:'^', 6:'s', 7:'p', 8:'*', 9:'x'}
plt.figure(figsize=(10, 10))
patches = []
for idx in xrange(0,300):
point = pca_test[idx]
label = test_y[idx]
if label in [2,5,8,9]:
continue
color = colors[label]
marker = markers[label]
line = plt.plot(point[0], point[1], color=color, marker=marker, markersize=8)
#plt.axis([-1.1, 1.1, -1.1, +1.1])
plt.show()
"""
Explanation: PCA
For comparison we also plot a PCA projection.
End of explanation
"""
from sklearn.cluster import KMeans
import operator
def clusterAccurarcy(predictions, n_clusters=10):
km = KMeans(n_clusters=n_clusters)
clusters = km.fit_predict(predictions)
#Count labels per cluster
labelCount = {}
for idx in xrange(len(test_y)):
cluster = clusters[idx]
label = test_y[idx]
if cluster not in labelCount:
labelCount[cluster] = {}
if label not in labelCount[cluster]:
labelCount[cluster][label] = 0
labelCount[cluster][label] += 1
#Majority Voting
clusterLabels = {}
for num in xrange(n_clusters):
maxLabel = max(labelCount[num].iteritems(), key=operator.itemgetter(1))[0]
clusterLabels[num] = maxLabel
#print clusterLabels
#Number of errors
errCount = 0
for idx in xrange(len(test_y)):
cluster = clusters[idx]
clusterLabel = clusterLabels[cluster]
label = test_y[idx]
if label != clusterLabel:
errCount += 1
return errCount/float(len(test_y))
print "PCA Accurarcy: %f%%" % (clusterAccurarcy(pca_test)*100)
print "AE Accurarcy: %f%%" % (clusterAccurarcy(ae_test)*100)
"""
Explanation: k-Means clustering
We run a k-means clustering on the autoencoder representations and the PCA representations and then do a majority vote to get the label per cluster. We then compute the accuracy of the clustering. This gives us some impression of how good the 2-dim representations are. This is not perfect, as the autoencoder and PCA might create non-linear cluster boundaries.
End of explanation
"""
nb_epoch = 50
batch_size = 100
model = Sequential()
for encoder in encoders:
model.add(encoder)
model.add(Dense(output_dim=nb_labels, activation='softmax'))
train_subset_y_cat = np_utils.to_categorical(train_subset_y, nb_labels)
test_y_cat = np_utils.to_categorical(test_y, nb_labels)
model.compile(loss='categorical_crossentropy', optimizer='Adam')
score = model.evaluate(test_x, test_y_cat, show_accuracy=True, verbose=0)
print('Test score before fine tuning:', score[0])
print('Test accuracy before fine tuning:', score[1])
model.fit(train_subset_x, train_subset_y_cat, batch_size=batch_size, nb_epoch=nb_epoch,
show_accuracy=True, validation_data=(dev_x, dev_y_cat), shuffle=True)
score = model.evaluate(test_x, test_y_cat, show_accuracy=True, verbose=0)
print('Test score after fine tuning:', score[0])
print('Test accuracy after fine tuning:', score[1])
"""
Explanation: Using pretrained AutoEncoder for Classification
In principle the pretrained autoencoder can be used for classification as in the following code. But it does not yet yield better results than the neural network without pretraining.
End of explanation
"""
|
charmasaur/digbeta | tour/traj_simulation.ipynb | gpl-3.0 | %matplotlib inline
import os
import math
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime
random.seed(123456789)
data_dir = 'data/data-ijcai15'
#fvisit = os.path.join(data_dir, 'userVisits-Osak.csv')
#fcoord = os.path.join(data_dir, 'photoCoords-Osak.csv')
#fvisit = os.path.join(data_dir, 'userVisits-Glas.csv')
#fcoord = os.path.join(data_dir, 'photoCoords-Glas.csv')
#fvisit = os.path.join(data_dir, 'userVisits-Edin.csv')
#fcoord = os.path.join(data_dir, 'photoCoords-Edin.csv')
fvisit = os.path.join(data_dir, 'userVisits-Toro.csv')
fcoord = os.path.join(data_dir, 'photoCoords-Toro.csv')
suffix = fvisit.split('-')[-1].split('.')[0]
visits = pd.read_csv(fvisit, sep=';')
visits.head()
coords = pd.read_csv(fcoord, sep=';')
coords.head()
# merge data frames according to column 'photoID'
assert(visits.shape[0] == coords.shape[0])
traj = pd.merge(visits, coords, on='photoID')
traj.head()
num_photo = traj['photoID'].unique().shape[0]
num_user = traj['userID'].unique().shape[0]
num_seq = traj['seqID'].unique().shape[0]
num_poi = traj['poiID'].unique().shape[0]
pd.DataFrame([num_photo, num_user, num_seq, num_poi, num_photo/num_user, num_seq/num_user], \
index = ['#photo', '#user', '#seq', '#poi', '#photo/user', '#seq/user'], columns=[str(suffix)])
"""
Explanation: Trajectory Simulation
NOTE: Before running this notebook, please run script src/ijcai15_setup.py to setup data properly.
Experimental Setup
Definitions
Load Trajectory Data
Compute POI Info
Construct Travelling Sequences
POI Category Transition Matrix
POI Transition Rules
Simulation
<a id='sec1'></a>
1. Experimental Setup
The states of the Markov chain (MC) correspond to the categories of POIs; there is a special state "REST" which represents that people are resting after some travelling.
Simulate trajectories using the transition matrix of the MC; when choosing a specific POI within a certain category, use one of the following rules:
The nearest neighbour of the current POI
The most popular POI
A random POI chosen with probability proportional to the reciprocal of its distance to the current POI
A random POI chosen with probability proportional to its popularity
<a id='sec1.1'></a>
1.1 Definitions
For user $u$ and POI $p$, define
Travel History:
\begin{equation}
S_u = {(p_1, t_{p_1}^a, t_{p_1}^d), \dots, (p_n, t_{p_n}^a, t_{p_n}^d)}
\end{equation}
where $t_{p_i}^a$ is the arrival time and $t_{p_i}^d$ the departure time of user $u$ at POI $p_i$
Travel Sequences: split $S_u$ if
\begin{equation}
|t_{p_i}^d - t_{p_{i+1}}^a| > \tau ~(\text{e.g.}~ \tau = 8 ~\text{hours})
\end{equation}
POI Popularity:
\begin{equation}
Pop(p) = \sum_{u \in U} \sum_{p_i \in S_u} \delta(p_i == p)
\end{equation}
<a id='sec1.2'></a>
1.2 Load Trajectory Data
End of explanation
"""
poi_coords = traj[['poiID', 'photoLon', 'photoLat']].groupby('poiID').agg(np.mean)
poi_coords.reset_index(inplace=True)
poi_coords.rename(columns={'photoLon':'poiLon', 'photoLat':'poiLat'}, inplace=True)
poi_coords.head()
"""
Explanation: <a id='sec1.3'></a>
1.3 Compute POI Info
Compute POI (Longitude, Latitude) as the average coordinates of the assigned photos.
End of explanation
"""
poi_catfreq = traj[['poiID', 'poiTheme', 'poiFreq']].groupby('poiID').first()
poi_catfreq.reset_index(inplace=True)
poi_catfreq.head()
poi_all = pd.merge(poi_catfreq, poi_coords, on='poiID')
poi_all.set_index('poiID', inplace=True)
poi_all.head()
"""
Explanation: Extract POI category and visiting frequency.
End of explanation
"""
seq_all = traj[['userID', 'seqID', 'poiID', 'dateTaken']].copy()\
.groupby(['userID', 'seqID', 'poiID']).agg([np.min, np.max])
seq_all.columns = seq_all.columns.droplevel()
seq_all.reset_index(inplace=True)
seq_all.rename(columns={'amin':'arrivalTime', 'amax':'departureTime'}, inplace=True)
seq_all['poiDuration(sec)'] = seq_all['departureTime'] - seq_all['arrivalTime']
seq_all.head()
seq_start = seq_all[['userID', 'seqID', 'arrivalTime']].copy().groupby(['userID', 'seqID']).agg(np.min)
seq_start.rename(columns={'arrivalTime':'startTime'}, inplace=True)
seq_start.reset_index(inplace=True)
seq_start.head()
seq_end = seq_all[['userID', 'seqID', 'departureTime']].copy().groupby(['userID', 'seqID']).agg(np.max)
seq_end.rename(columns={'departureTime':'endTime'}, inplace=True)
seq_end.reset_index(inplace=True)
seq_end.head()
assert(seq_start.shape[0] == seq_end.shape[0])
user_seqs = pd.merge(seq_start, seq_end, on=['userID', 'seqID'])
user_seqs.head()
#user_seqs.loc[0, 'seqID']
#user_seqs['userID'].iloc[-1]
"""
Explanation: <a id='sec1.4'></a>
1.4 Construct Travelling Sequences
End of explanation
"""
def generate_ext_transmat(poi_all, seq_all, user_seqs, timeGap):
"""Calculate the extended transition matrix of POI category for actual trajectories with a special category REST.
For a specific user, if the time gap between the earlier sequence and the latter sequence is less than 'timeGap',
then add a REST state between the two sequences, otherwise,
add a REST to REST transition after the earlier sequence.
"""
assert(timeGap > 0)
states = poi_all['poiTheme'].unique().tolist()
states.sort()
states.append('REST')
ext_transmat = pd.DataFrame(data=np.zeros((len(states), len(states)), dtype=np.float64), \
index=states, columns=states)
for user in user_seqs['userID'].unique():
sequ = user_seqs[user_seqs['userID'] == user].copy()
sequ.sort_values(by=['startTime'], ascending=True, inplace=True)
prev_seqEndTime = None
prev_endPOICat = None
# sequence with length 1 should be considered
for i in range(len(sequ.index)):
idx = sequ.index[i]
seqid = sequ.loc[idx, 'seqID']
seq = seq_all[seq_all['seqID'] == seqid].copy()
seq.sort_values(by=['arrivalTime'], ascending=True, inplace=True)
for j in range(len(seq.index)-1):
poi1 = seq.loc[seq.index[j], 'poiID']
poi2 = seq.loc[seq.index[j+1], 'poiID']
cat1 = poi_all.loc[poi1, 'poiTheme']
cat2 = poi_all.loc[poi2, 'poiTheme']
ext_transmat.loc[cat1, cat2] += 1
# REST state
if i > 0:
startTime = sequ.loc[idx, 'startTime']
assert(prev_seqEndTime is not None)
assert(startTime >= prev_seqEndTime)
ext_transmat.loc[prev_endPOICat, 'REST'] += 1 # POI-->REST
if startTime - prev_seqEndTime < timeGap: # REST-->POI
poi0 = seq.loc[seq.index[0], 'poiID']
startPOICat = poi_all.loc[poi0, 'poiTheme']
ext_transmat.loc['REST', startPOICat] += 1
else: # REST-->REST
ext_transmat.loc['REST', 'REST'] += 1
# memorise info of previous sequence
prev_seqEndTime = sequ.loc[idx, 'endTime']
poiN = seq.loc[seq.index[-1], 'poiID']
prev_endPOICat = poi_all.loc[poiN, 'poiTheme']
# normalize each row to get the transition probability from cati to catj
for r in ext_transmat.index:
rowsum = ext_transmat.loc[r].sum()
if rowsum == 0: continue # deal with lack of data
ext_transmat.loc[r] /= rowsum
return ext_transmat
timeGap = 24 * 60 * 60 # 24 hours
trans_mat = generate_ext_transmat(poi_all, seq_all, user_seqs, timeGap)
trans_mat
#trans_mat.columns[-1]
#trans_mat.loc['Sport']
#np.array(trans_mat.loc['Sport'])
#np.array(trans_mat.loc['Sport']).sum()
"""
Explanation: <a id='sec1.5'></a>
1.5 POI Category Transition Matrix
Generate the extended transition matrix of POI category for actual trajectories with a special category REST.
For a specific user, if the time gap between the earlier sequence and the latter sequence is less than 'timeGap' (e.g. 24 hours), then add a REST state between the two sequences; otherwise, add a REST-to-REST transition after the earlier sequence.
End of explanation
"""
def calc_dist(longitude1, latitude1, longitude2, latitude2):
"""Calculate the distance (unit: km) between two places on earth"""
# convert degrees to radians
lon1 = math.radians(longitude1)
lat1 = math.radians(latitude1)
lon2 = math.radians(longitude2)
lat2 = math.radians(latitude2)
radius = 6371.009 # mean earth radius is 6371.009km, en.wikipedia.org/wiki/Earth_radius#Mean_radius
# The haversine formula, en.wikipedia.org/wiki/Great-circle_distance
dlon = math.fabs(lon1 - lon2)
dlat = math.fabs(lat1 - lat2)
return 2 * radius * math.asin( math.sqrt( \
(math.sin(0.5*dlat))**2 + math.cos(lat1) * math.cos(lat2) * (math.sin(0.5*dlon))**2 ))
"""
Explanation: <a id='sec1.6'></a>
1.6 POI Transition Rules
When choosing a specific POI within a certain POI category, consider two types of rules:
1. Rules based on the distance between the candidate POI and the current POI
1. Rules based on the popularity of the candidate POI
End of explanation
"""
def rule_NN(current_poi, next_poi_cat, poi_all, randomized):
"""
Choose a specific POI within a category.
If randomized == True,
return a random POI chosen with probability proportional to the reciprocal of its distance to the current POI;
otherwise, return the Nearest Neighbor of the current POI.
"""
assert(current_poi in poi_all.index)
assert(next_poi_cat in poi_all['poiTheme'].unique())
poi_index = None
if poi_all.loc[current_poi, 'poiTheme'] == next_poi_cat:
poi_index = [x for x in poi_all[poi_all['poiTheme'] == next_poi_cat].index if x != current_poi]
else:
poi_index = poi_all[poi_all['poiTheme'] == next_poi_cat].index
probs = np.zeros(len(poi_index), dtype=np.float64)
for i in range(len(poi_index)):
dist = calc_dist(poi_all.loc[current_poi, 'poiLon'], poi_all.loc[current_poi, 'poiLat'], \
poi_all.loc[poi_index[i],'poiLon'], poi_all.loc[poi_index[i],'poiLat'])
assert(dist > 0.)
probs[i] = 1. / dist
idx = None
if randomized == True:
probs /= np.sum(probs) # normalise
sample = np.random.multinomial(1, probs) # categorical/multinoulli distribution, i.e. multinomial distribution with n=1
for j in range(len(sample)):
if sample[j] == 1:
idx = j
break
else:
idx = probs.argmax()
assert(idx is not None)
return poi_index[idx]
"""
Explanation: Distance based rules
1. The Nearest Neighbor of the current POI
1. A random POI chosen with probability proportional to the reciprocal of its distance to the current POI
End of explanation
"""
def rule_Pop(current_poi, next_poi_cat, poi_all, randomized):
"""
Choose a specific POI within a category.
If randomized == True,
return a random POI chosen with probability proportional to its popularity;
otherwise, return the most popular POI.
"""
assert(current_poi in poi_all.index)
assert(next_poi_cat in poi_all['poiTheme'].unique())
poi_index = None
if poi_all.loc[current_poi, 'poiTheme'] == next_poi_cat:
poi_index = [x for x in poi_all[poi_all['poiTheme'] == next_poi_cat].index if x != current_poi]
else:
poi_index = poi_all[poi_all['poiTheme'] == next_poi_cat].index
probs = np.zeros(len(poi_index), dtype=np.float64)
for i in range(len(poi_index)):
probs[i] = poi_all.loc[poi_index[i],'poiFreq']
idx = None
if randomized == True:
probs /= np.sum(probs) # normalise
sample = np.random.multinomial(1, probs) # categorical/multinoulli distribution, i.e. multinomial distribution with n=1
for j in range(len(sample)):
if sample[j] == 1:
idx = j
break
else:
idx = probs.argmax()
assert(idx is not None)
return poi_index[idx]
"""
Explanation: POI Popularity based rules
1. The most popular POI
1. A random POI chosen with probability proportional to its popularity
End of explanation
"""
def extract_seq(seqid_set, seq_all):
"""Extract the actual sequences (i.e. a list of POI) from a set of sequence ID"""
seq_dict = dict()
for seqid in seqid_set:
seqi = seq_all[seq_all['seqID'] == seqid].copy()
seqi.sort_values(by=['arrivalTime'], ascending=True, inplace=True)
seq_dict[seqid] = seqi['poiID'].tolist()
return seq_dict
all_seqid = seq_all['seqID'].unique()
all_seq_dict = extract_seq(all_seqid, seq_all)
def choose_start_poi(all_seq_dict, seqLen):
"""choose the first POI in a random actual sequence"""
assert(seqLen > 0)
while True:
seqid = random.choice(sorted(all_seq_dict.keys()))
if len(all_seq_dict[seqid]) > seqLen:
return all_seq_dict[seqid][0]
obs_mat = trans_mat.copy() * 0
obs_mat
prefer_NN_over_Pop = True
randomized = True
N = 1000 # number of observations
prevpoi = choose_start_poi(all_seq_dict, 1)
prevcat = poi_all.loc[prevpoi, 'poiTheme']
nextpoi = None
nextcat = None
print('(%s, POI %d)->' % (prevcat, prevpoi))
n = 0
while n < N:
# choose the next POI category
# categorical/multinoulli distribution, special case of the multinomial distribution (n=1)
sample = np.random.multinomial(1, np.array(trans_mat.loc[prevcat]))
nextcat = None
for j in range(len(sample)):
if sample[j] == 1: nextcat = trans_mat.columns[j]
assert(nextcat is not None)
obs_mat.loc[prevcat, nextcat] += 1
# choose the next POI
if nextcat == 'REST':
nextpoi = choose_start_poi(all_seq_dict, 1) # restart
print('(REST)->')
else:
if prefer_NN_over_Pop == True:
nextpoi = rule_NN(prevpoi, nextcat, poi_all, randomized)
else:
nextpoi = rule_Pop(prevpoi, nextcat, poi_all, randomized)
print('(%s, POI %d)->' % (nextcat, nextpoi))
prevcat = nextcat
prevpoi = nextpoi
n += 1
obs_mat
# MLE (maximum likelihood) estimation
est_mat = obs_mat.copy()
for r in est_mat.index:
rowsum = est_mat.loc[r].sum()
if rowsum == 0: continue # deal with lack of data
est_mat.loc[r] /= rowsum
est_mat
trans_mat
"""
Explanation: <a id='sec1.7'></a>
1.7 Simulation
End of explanation
"""
|
malogrisard/NTDScourse | toolkit/02_sol_exploitation.ipynb | mit | import pandas as pd
import numpy as np
from IPython.display import display
import os.path
folder = os.path.join('..', 'data', 'social_media')
fb = pd.read_sql('facebook', 'sqlite:///' + os.path.join(folder, 'facebook.sqlite'), index_col='index')
tw = pd.read_sql('twitter', 'sqlite:///' + os.path.join(folder, 'twitter.sqlite'), index_col='index')
display(fb[:5])
display(tw[:5])
"""
Explanation: A Python Tour of Data Science: Data Exploitation
Michaël Defferrard, PhD student, EPFL LTS2
Exercise: problem definition
Theme of the exercise: understand the impact of your communication on social networks. A real life situation: the marketing team needs help in identifying which were the most engaging posts they made on social platforms to prepare their next AdWords campaign.
This notebook is the second part of the exercise. Given the data we collected from Facebook and Twitter in the last exercise, we will construct an ML model and evaluate how good it is at predicting the number of likes of a post / tweet given the content.
1 Data importation
Use pandas to import the facebook.sqlite and twitter.sqlite databases.
Print the 5 first rows of both tables.
The facebook.sqlite and twitter.sqlite SQLite databases can be created by running the data acquisition and exploration exercise.
End of explanation
"""
from sklearn.feature_extraction.text import CountVectorizer
nwords = 200 # 100
def compute_bag_of_words(text, nwords):
vectorizer = CountVectorizer(max_features=nwords)
vectors = vectorizer.fit_transform(text)
vocabulary = vectorizer.get_feature_names()
return vectors, vocabulary
fb_bow, fb_vocab = compute_bag_of_words(fb.text, nwords)
#fb_p = pd.Panel({'orig': fb, 'bow': fb_bow})
display(fb_bow)
display(fb_vocab[100:110])
tw_bow, tw_vocab = compute_bag_of_words(tw.text, nwords)
display(tw_bow)
"""
Explanation: 2 Vectorization
First step: transform the data into a format understandable by the machine. What to do with text? A common choice is the so-called bag-of-words model, where we represent each word as an integer and simply count the number of appearances of each word in a document.
Example
Let's say we have a vocabulary represented by the following correspondence table.
| Integer | Word |
|:-------:|---------|
| 0 | unknown |
| 1 | dog |
| 2 | school |
| 3 | cat |
| 4 | house |
| 5 | work |
| 6 | animal |
Then we can represent the following document
I have a cat. Cats are my preferred animals.
by the vector $x = [6, 0, 0, 2, 0, 0, 1]^T$.
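The toy vector above can be reproduced with a few lines of plain Python; the vocabulary dict and the naive plural handling below are illustrative only, not part of the exercise:

```python
# Toy bag-of-words: map each token to its vocabulary id (0 = unknown),
# then count occurrences per id. Plurals are reduced naively ('cats' -> 'cat').
vocab = {'dog': 1, 'school': 2, 'cat': 3, 'house': 4, 'work': 5, 'animal': 6}

def bag_of_words(sentence, vocab, size=7):
    x = [0] * size
    for token in sentence.lower().replace('.', '').split():
        word = token[:-1] if token.endswith('s') and token[:-1] in vocab else token
        x[vocab.get(word, 0)] += 1
    return x

print(bag_of_words('I have a cat. Cats are my preferred animals.', vocab))
# [6, 0, 0, 2, 0, 0, 1]
```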
Tasks
Construct a vocabulary of the 100 most frequently occurring words in your dataset.
Build a vector $x \in \mathbb{R}^{100}$ for each document (post or tweet).
Tip: the natural language modeling libraries nltk and gensim are useful for advanced operations. You don't need them here.
This raises a first data-cleaning question: some text may be in French and some in English. What do we do?
End of explanation
"""
def print_most_frequent(bow, vocab, n=10):
idx = np.argsort(bow.sum(axis=0))
for i in range(n):
j = idx[0, -1-i]
print(vocab[j])
print_most_frequent(tw_bow, tw_vocab)
print('---')
print_most_frequent(fb_bow, fb_vocab)
"""
Explanation: Exploration question: what are the 5 most used words? Exploring your data while playing with it is a useful sanity check.
End of explanation
"""
X = tw_bow
y = tw['likes'].values
n, d = X.shape
assert n == y.size
print(X.shape)
print(y.shape)
# Training and testing sets.
test_size = n // 2
print('Split: {} testing and {} training samples'.format(test_size, y.size - test_size))
perm = np.random.permutation(y.size)
X_test = X[perm[:test_size]]
X_train = X[perm[test_size:]]
y_test = y[perm[:test_size]]
y_train = y[perm[test_size:]]
"""
Explanation: 3 Pre-processing
The independant variables $X$ are the bags of words.
The target $y$ is the number of likes.
Split in half for training and testing sets.
End of explanation
"""
import scipy.sparse
class LinearRegression(object):
def predict(self, X):
"""Return the predicted class given the features."""
return X.dot(self.w) + self.b
def fit(self, X, y):
"""Learn the model's parameters given the training data, the closed-form way."""
n, d = X.shape
self.b = y.mean()
A = X.T.dot(X)
b = X.T.dot(y - self.b)
#self.w = np.linalg.solve(A, b)
self.w = scipy.sparse.linalg.spsolve(A, b)
def evaluate(y_pred, y_true):
return np.linalg.norm(y_pred - y_true, ord=2)**2 / y_true.size
model = LinearRegression()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
mse = evaluate(y_pred, y_test)
print('mse: {:.4f}'.format(mse))
"""
Explanation: 4 Linear regression
Using numpy, fit and evaluate the linear model $$\hat{w}, \hat{b} = \operatorname*{arg min}_{w,b} \| Xw + b - y \|_2^2.$$
Please define a class LinearRegression with two methods:
1. fit learn the parameters $w$ and $b$ of the model given the training examples.
2. predict gives the estimated number of likes of a post / tweet. That will be used to evaluate the model on the testing set.
To evaluate the classifier, create an accuracy(y_pred, y_true) function which computes the mean squared error $\frac1n \| \hat{y} - y \|_2^2$.
Hint: you may want to use the function scipy.sparse.linalg.spsolve().
If solve and spsolve tells you that your matrix is singular, please read this good comment. Potential solutions:
1. Is there any post / tweet without any word from the vocabulary? I.e. a row of $X$ made only of zeroes. If yes, remove this row or enlarge the vocabulary.
2. Identify and remove redundant features, i.e. words that are linear combinations of others.
3. What else could we do?
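One possible answer to point 3: add a small ℓ2 (ridge) penalty, which makes the normal-equation matrix $X^T X + \lambda I$ invertible for any $\lambda > 0$. A minimal sketch (the value of `lam` is an arbitrary illustrative choice):

```python
import numpy as np

def ridge_fit(X, y, lam=1e-2):
    """Solve min_w ||Xw - y||^2 + lam * ||w||^2 in closed form."""
    n, d = X.shape
    A = X.T.dot(X) + lam * np.eye(d)  # always invertible for lam > 0
    return np.linalg.solve(A, X.T.dot(y))

# Even a rank-deficient X (duplicated column) is handled gracefully.
X = np.array([[1., 1.], [2., 2.], [3., 3.]])
y = np.array([1., 2., 3.])
w = ridge_fit(X, y)
print(w)  # both weights share the signal, roughly [0.5, 0.5]
```

The same trick drops into the `LinearRegression.fit` above by adding `lam` times the identity to `A` before solving.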
End of explanation
"""
idx = np.argsort(abs(model.w))
for i in range(20):
j = idx[-1-i]
print('weight: {:5.2f}, word: {}'.format(model.w[j], tw_vocab[j]))
"""
Explanation: Interpretation: what are the most important words a post / tweet should include?
End of explanation
"""
import ipywidgets
from IPython.display import clear_output
slider = ipywidgets.widgets.IntSlider(
value=1,
min=1,
max=nwords,
step=1,
description='nwords',
)
def handle(change):
"""Handler for value change: fit model and print performance."""
nwords = change['new']
clear_output()
print('nwords = {}'.format(nwords))
model = LinearRegression()
model.fit(X_train[:, :nwords], y_train)
y_pred = model.predict(X_test[:, :nwords])
mse = evaluate(y_pred, y_test)
print('mse: {:.4f}'.format(mse))
slider.observe(handle, names='value')
display(slider)
slider.value = nwords # As if someone moved the slider.
"""
Explanation: 5 Interactivity
Create a slider for the number of words, i.e. the dimensionality of the samples $x$.
Print the accuracy for each change on the slider.
End of explanation
"""
from sklearn import linear_model, metrics
model = linear_model.LinearRegression()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
mse = metrics.mean_squared_error(y_test, y_pred)
assert np.allclose(evaluate(y_pred, y_test), mse)
print('mse: {:.4f}'.format(mse))
"""
Explanation: 6 Scikit learn
Fit and evaluate the linear regression model using sklearn.
Evaluate the model with the mean squared error metric provided by sklearn.
Compare with your implementation.
End of explanation
"""
import os
os.environ['KERAS_BACKEND'] = 'theano' # tensorflow
import keras
model = keras.models.Sequential()
model.add(keras.layers.Dense(units=50, input_dim=nwords, activation='relu'))
model.add(keras.layers.Dense(units=20, activation='relu'))
model.add(keras.layers.Dense(units=1, activation='relu'))
model.compile(loss='mse', optimizer='sgd')
model.fit(X_train.toarray(), y_train, epochs=20, batch_size=100)
y_pred = model.predict(X_test.toarray(), batch_size=32)
mse = evaluate(y_test, y_pred.squeeze())
print('mse: {:.4f}'.format(mse))
"""
Explanation: 7 Deep Learning
Try a simple deep learning model!
Another modeling choice would be to use a Recurrent Neural Network (RNN) and feed it the sentence word after word.
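The heart of such an RNN is a single recurrence applied word after word; here is a minimal numpy sketch of that step (the weight shapes and tanh nonlinearity are standard choices, but this toy is an illustration, not a drop-in model):

```python
import numpy as np

rng = np.random.RandomState(0)
d_in, d_h = 100, 16            # word-vector size, hidden-state size
Wx = rng.randn(d_h, d_in) * 0.01
Wh = rng.randn(d_h, d_h) * 0.01
b = np.zeros(d_h)

def rnn_step(x_t, h_prev):
    """One recurrence: consume the word vector x_t, update the hidden state."""
    return np.tanh(Wx.dot(x_t) + Wh.dot(h_prev) + b)

h = np.zeros(d_h)              # initial hidden state
for x_t in rng.randn(5, d_in): # a 5-word "sentence" of random word vectors
    h = rnn_step(x_t, h)
print(h.shape)                 # (16,), the final state summarizes the sentence
```

The final hidden state would then feed a small dense head predicting the number of likes.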
End of explanation
"""
from matplotlib import pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
n = 100
plt.figure(figsize=(15, 5))
plt.plot(y_test[:n], '.', alpha=.7, markersize=10, label='ground truth')
plt.plot(y_pred[:n], '.', alpha=.7, markersize=10, label='prediction')
plt.legend()
plt.show()
"""
Explanation: 8 Evaluation
Use matplotlib to plot a performance visualization. E.g. the true number of likes and the real number of likes for all posts / tweets.
What do you observe? What are your suggestions to improve the performance?
End of explanation
"""
|
babraham123/script-runner | notebooks/bubble_sort.ipynb | mit | import time
print('Last updated: %s' %time.strftime('%d/%m/%Y'))
"""
Explanation: Sebastian Raschka
End of explanation
"""
import platform
import multiprocessing
def print_sysinfo():
print('\nPython version :', platform.python_version())
print('compiler :', platform.python_compiler())
print('\nsystem :', platform.system())
print('release :', platform.release())
print('machine :', platform.machine())
print('processor :', platform.processor())
print('CPU count :', multiprocessing.cpu_count())
print('interpreter :', platform.architecture()[0])
print('\n\n')
"""
Explanation: Sorting Algorithms
Overview
End of explanation
"""
print_sysinfo()
"""
Explanation: Bubble sort
[back to top]
Quick note about Bubble sort
I don't want to get into the details about sorting algorithms here, but there is a great report
"Sorting in the Presence of Branch Prediction and Caches - Fast Sorting on Modern Computers" written by Paul Biggar and David Gregg, where they describe and analyze elementary sorting algorithms in very nice detail (see chapter 4).
And for a quick reference, this website has a nice animation of this algorithm.
A long story short: The "worst-case" complexity of the Bubble sort algorithm (i.e., "Big-O")
$\Rightarrow \pmb O(n^2)$
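A quick empirical check of the quadratic growth, counting comparisons rather than wall-clock time: the work roughly quadruples when the input doubles (toy instrumentation, not part of the original benchmark below):

```python
def count_comparisons(n):
    """Comparisons made by the plain bubble sort on a worst-case (reversed) input."""
    a, comps = list(range(n, 0, -1)), 0
    for i in range(len(a)):
        for j in range(1, len(a)):
            comps += 1
            if a[j] < a[j - 1]:
                a[j - 1], a[j] = a[j], a[j - 1]
    return comps

print(count_comparisons(100), count_comparisons(200))  # 9900 39800, ratio ~4
```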
End of explanation
"""
def python_bubblesort(a_list):
""" Bubblesort in Python for list objects (sorts in place)."""
length = len(a_list)
for i in range(length):
for j in range(1, length):
if a_list[j] < a_list[j-1]:
a_list[j-1], a_list[j] = a_list[j], a_list[j-1]
return a_list
"""
Explanation: Bubble sort implemented in (C)Python
End of explanation
"""
def python_bubblesort_improved(a_list):
""" Bubblesort in Python for list objects (sorts in place)."""
length = len(a_list)
swapped = True
for i in range(length):
if swapped:
swapped = False
for ele in range(length-i-1):
if a_list[ele] > a_list[ele + 1]:
a_list[ele], a_list[ele + 1] = a_list[ele + 1], a_list[ele]
swapped = True
return a_list
"""
Explanation: <br>
Below is a improved version that quits early if no further swap is needed.
End of explanation
"""
import random
import copy
random.seed(4354353)
l = [random.randint(1,1000) for num in range(1, 1000)]
l_sorted = sorted(l)
for f in [python_bubblesort, python_bubblesort_improved]:
assert(l_sorted == f(copy.copy(l)))
print('Bubblesort works correctly')
"""
Explanation: Verifying that all implementations work correctly
End of explanation
"""
# small list
l_small = [random.randint(1,100) for num in range(1, 100)]
l_small_cp = copy.copy(l_small)
%timeit python_bubblesort(l_small)
%timeit python_bubblesort_improved(l_small_cp)
# larger list
l_small = [random.randint(1,10000) for num in range(1, 10000)]
l_small_cp = copy.copy(l_small)
%timeit python_bubblesort(l_small)
%timeit python_bubblesort_improved(l_small_cp)
"""
Explanation: Performance comparison
End of explanation
"""
|
ThyrixYang/LearningNotes | MOOC/stanford_cnn_cs231n/assignment1/two_layer_net.ipynb | gpl-3.0 | # A bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.neural_net import TwoLayerNet
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
"""
Explanation: Implementing a Neural Network
In this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset.
End of explanation
"""
# Create a small net and some toy data to check your implementations.
# Note that we set the random seed for repeatable experiments.
input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5
def init_toy_model():
np.random.seed(0)
return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)
def init_toy_data():
np.random.seed(1)
X = 10 * np.random.randn(num_inputs, input_size)
y = np.array([0, 1, 2, 2, 1])
return X, y
net = init_toy_model()
X, y = init_toy_data()
"""
Explanation: We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.
End of explanation
"""
scores = net.loss(X)
print('Your scores:')
print(scores)
print()
print('correct scores:')
correct_scores = np.asarray([
[-0.81233741, -1.27654624, -0.70335995],
[-0.17129677, -1.18803311, -0.47310444],
[-0.51590475, -1.01354314, -0.8504215 ],
[-0.15419291, -0.48629638, -0.52901952],
[-0.00618733, -0.12435261, -0.15226949]])
print(correct_scores)
print()
# The difference should be very small. We get < 1e-7
print('Difference between your scores and correct scores:')
print(np.sum(np.abs(scores - correct_scores)))
"""
Explanation: Forward pass: compute scores
Open the file cs231n/classifiers/neural_net.py and look at the method TwoLayerNet.loss. This function is very similar to the loss functions you have written for the SVM and Softmax exercises: It takes the data and weights and computes the class scores, the loss, and the gradients on the parameters.
Implement the first part of the forward pass which uses the weights and biases to compute the scores for all inputs.
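A possible shape of that computation: a hedged sketch of the affine -> ReLU -> affine pipeline the method implements (the graded code belongs in cs231n/classifiers/neural_net.py, not here):

```python
import numpy as np

def forward_scores(X, W1, b1, W2, b2):
    """Two-layer net: affine -> ReLU -> affine, returning class scores."""
    h = np.maximum(0, X.dot(W1) + b1)  # hidden activations, shape (N, H)
    return h.dot(W2) + b2              # class scores, shape (N, C)

# Tiny shape check with random parameters.
rng = np.random.RandomState(0)
X = rng.randn(5, 4)
W1, b1 = rng.randn(4, 10) * 0.1, np.zeros(10)
W2, b2 = rng.randn(10, 3) * 0.1, np.zeros(3)
print(forward_scores(X, W1, b1, W2, b2).shape)  # (5, 3)
```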
End of explanation
"""
loss, _ = net.loss(X, y, reg=0.05)
correct_loss = 1.30378789133
# should be very small, we get < 1e-12
print('Difference between your loss and correct loss:')
print(np.sum(np.abs(loss - correct_loss)))
"""
Explanation: Forward pass: compute loss
In the same function, implement the second part that computes the data and regularization loss.
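A common way to turn scores into the required loss is a numerically stable softmax cross-entropy plus an L2 weight penalty; the exact regularization convention (e.g. an extra 0.5 factor) may differ from the assignment's, so treat this as a sketch:

```python
import numpy as np

def softmax_loss(scores, y, W1, W2, reg):
    """Mean cross-entropy over N samples plus L2 regularization on the weights."""
    shifted = scores - scores.max(axis=1, keepdims=True)  # stability shift
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    data_loss = -log_probs[np.arange(len(y)), y].mean()
    return data_loss + reg * (np.sum(W1 * W1) + np.sum(W2 * W2))

# Uniform scores over 3 classes give a loss of ln(3) when reg == 0.
scores = np.zeros((4, 3))
y = np.array([0, 1, 2, 0])
print(softmax_loss(scores, y, np.zeros(1), np.zeros(1), 0.0))  # ~1.0986
```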
End of explanation
"""
from cs231n.gradient_check import eval_numerical_gradient
# Use numeric gradient checking to check your implementation of the backward pass.
# If your implementation is correct, the difference between the numeric and
# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.
loss, grads = net.loss(X, y, reg=0.05)
# these should all be less than 1e-8 or so
for param_name in grads:
f = lambda W: net.loss(X, y, reg=0.05)[0]
param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)
print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))
"""
Explanation: Backward pass
Implement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check:
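The numeric check itself is just a centered finite difference; a standalone minimal version of the idea (the course's eval_numerical_gradient is more general):

```python
import numpy as np

def numeric_grad(f, x, h=1e-5):
    """Centered finite-difference gradient of a scalar function f at point x."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        old = x.flat[i]
        x.flat[i] = old + h; fp = f(x)
        x.flat[i] = old - h; fm = f(x)
        x.flat[i] = old                     # restore the original value
        grad.flat[i] = (fp - fm) / (2 * h)
    return grad

x = np.array([1.0, 2.0, 3.0])
g = numeric_grad(lambda v: (v ** 2).sum(), x)
print(g)  # ~[2. 4. 6.], matching the analytic gradient 2x
```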
End of explanation
"""
net = init_toy_model()
stats = net.train(X, y, X, y,
learning_rate=1e-1, reg=5e-6,
num_iters=100, verbose=False)
print('Final training loss: ', stats['loss_history'][-1])
# plot the loss history
plt.plot(stats['loss_history'])
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.title('Training Loss history')
plt.show()
"""
Explanation: Train the network
To train the network we will use stochastic gradient descent (SGD), similar to the SVM and Softmax classifiers. Look at the function TwoLayerNet.train and fill in the missing sections to implement the training procedure. This should be very similar to the training procedure you used for the SVM and Softmax classifiers. You will also have to implement TwoLayerNet.predict, as the training process periodically performs prediction to keep track of accuracy over time while the network trains.
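The skeleton of such a training loop is small; a hedged sketch of plain minibatch SGD (the assignment's train additionally tracks loss/accuracy histories and decays the learning rate):

```python
import numpy as np

def sgd_train(params, loss_and_grads, X, y, lr=1e-1, iters=100, batch=32):
    """Minimal minibatch SGD: sample, evaluate, step. Mutates params in place."""
    rng = np.random.RandomState(0)
    for _ in range(iters):
        idx = rng.choice(len(y), batch)   # sample a minibatch (with replacement)
        loss, grads = loss_and_grads(params, X[idx], y[idx])
        for name in params:               # vanilla gradient step
            params[name] -= lr * grads[name]
    return params

# Toy quadratic "model": loss = (w - mean(x))^2 per batch; w converges to E[x].
def toy(params, Xb, yb):
    diff = params['w'] - Xb.mean()
    return diff ** 2, {'w': 2 * diff}

X = np.full((100, 1), 3.0)
p = sgd_train({'w': np.array(0.0)}, toy, X, np.zeros(100))
print(p['w'])  # close to 3.0
```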
Once you have implemented the method, run the code below to train a two-layer network on toy data. You should achieve a training loss less than 0.2.
End of explanation
"""
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
"""
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# Reshape data to rows
X_train = X_train.reshape(num_training, -1)
X_val = X_val.reshape(num_validation, -1)
X_test = X_test.reshape(num_test, -1)
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
"""
Explanation: Load the data
Now that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset.
End of explanation
"""
input_size = 32 * 32 * 3
hidden_size = 50
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=2000, batch_size=200,
learning_rate=1e-4, learning_rate_decay=0.95,
reg=0.25, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print('Validation accuracy: ', val_acc)
"""
Explanation: Train a network
To train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.
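The decay schedule itself is one line per epoch; a toy illustration of how the step size shrinks (the decay value here is the notebook's default, 0.95):

```python
lr, decay = 1e-4, 0.95
schedule = []
for epoch in range(5):
    schedule.append(lr)
    lr *= decay  # exponential decay applied once per epoch
print(schedule[0], schedule[-1])  # 0.0001, then 0.0001 * 0.95**4 ~ 8.145e-05
```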
End of explanation
"""
# Plot the loss function and train / validation accuracies
def fig1():
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.show()
fig1()
from cs231n.vis_utils import visualize_grid
# Visualize the weights of the network
def show_net_weights(net):
W1 = net.params['W1']
W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)
plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))
plt.gca().axis('off')
plt.show()
show_net_weights(net)
"""
Explanation: Debug the training
With the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good.
One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.
Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.
End of explanation
"""
best_net = None # store the best model into this
#################################################################################
# TODO: Tune hyperparameters using the validation set. Store your best trained #
# model in best_net. #
# #
# To help debug your network, it may help to use visualizations similar to the #
# ones we used above; these visualizations will have significant qualitative #
# differences from the ones we saw above for the poorly tuned network. #
# #
# Tweaking hyperparameters by hand can be fun, but you might find it useful to #
# write code to sweep through possible combinations of hyperparameters #
# automatically like we did on the previous exercises. #
#################################################################################
input_size = 32 * 32 * 3
hidden_size = 70
num_classes = 10
best_net = net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=4000, batch_size=200,
learning_rate=9e-4, learning_rate_decay=0.95,
reg=0.4, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print('Validation accuracy: ', val_acc)
fig1()
show_net_weights(net)
#################################################################################
# END OF YOUR CODE #
#################################################################################
# visualize the weights of the best network
show_net_weights(best_net)
"""
Explanation: Tune your hyperparameters
What's wrong? Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity, and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy.
Tuning. Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, number of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value.
Approximate results. You should aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set.
Experiment: Your goal in this exercise is to get as good a result on CIFAR-10 as you can with a fully-connected Neural Network. For every 1% above 52% on the test set we will award you one extra bonus point. Feel free to implement your own techniques (e.g. PCA to reduce dimensionality, adding dropout, or adding features to the solver, etc.).
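One way to automate that sweep is a simple random search over log-spaced hyperparameters; the ranges below are illustrative guesses, and the actual training of a TwoLayerNet per configuration is left out of this sketch:

```python
import numpy as np

def sample_configs(n, rng=None):
    """Randomly sample hyperparameter settings (ranges are illustrative)."""
    rng = rng or np.random.RandomState(0)
    configs = []
    for _ in range(n):
        configs.append({
            'hidden_size': int(rng.choice([50, 70, 100, 150])),
            'learning_rate': 10 ** rng.uniform(-4, -2.5),  # log-uniform
            'reg': 10 ** rng.uniform(-2, 0),
        })
    return configs

# In the notebook one would train a TwoLayerNet per config and keep the
# model with the best validation accuracy; here we just show the samples.
for cfg in sample_configs(3):
    print(cfg)
```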
End of explanation
"""
test_acc = (best_net.predict(X_test) == y_test).mean()
print('Test accuracy: ', test_acc)
"""
Explanation: Run on the test set
When you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%.
We will give you extra bonus point for every 1% of accuracy above 52%.
End of explanation
"""
|
esa-as/2016-ml-contest | ar4/ar4_submission2.ipynb | apache-2.0 | # Import
from __future__ import division
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['figure.figsize'] = (20.0, 10.0)
inline_rc = dict(mpl.rcParams)
from classification_utilities import make_facies_log_plot
import pandas as pd
import numpy as np
import seaborn as sns
from sklearn import preprocessing
from sklearn.model_selection import LeavePGroupsOut
from sklearn.metrics import f1_score
from sklearn.multiclass import OneVsOneClassifier
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from scipy.signal import medfilt
import sys, scipy, sklearn
print('Python: ' + sys.version.split('\n')[0])
print(' ' + sys.version.split('\n')[1])
print('Pandas: ' + pd.__version__)
print('Numpy: ' + np.__version__)
print('Scipy: ' + scipy.__version__)
print('Sklearn: ' + sklearn.__version__)
# Parameters
feature_names = ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS']
facies_names = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D', 'PS', 'BS']
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
"""
Explanation: Facies classification using machine learning techniques
Copy of <a href="https://home.deib.polimi.it/bestagini/">Paolo Bestagini's</a> "Try 2", augmented by Alan Richardson (Ausar Geophysical) with an ML estimator for PE in the wells where it is missing (rather than just using the mean).
In the following, we provide a possible solution to the facies classification problem described at https://github.com/seg/2016-ml-contest.
The proposed algorithm is based on the use of random forests combined in one-vs-one multiclass strategy. In particular, we would like to study the effect of:
- Robust feature normalization.
- Feature imputation for missing feature values.
- Well-based cross-validation routines.
- Feature augmentation strategies.
Script initialization
Let us import the used packages and define some parameters (e.g., colors, labels, etc.).
End of explanation
"""
# Load data from file
data = pd.read_csv('../facies_vectors.csv')
# Store features and labels
X = data[feature_names].values # features
y = data['Facies'].values # labels
# Store well labels and depths
well = data['Well Name'].values
depth = data['Depth'].values
"""
Explanation: Load data
Let us load training data and store features, labels and other data into numpy arrays.
End of explanation
"""
# Define function for plotting feature statistics
def plot_feature_stats(X, y, feature_names, facies_colors, facies_names):
# Remove NaN
nan_idx = np.any(np.isnan(X), axis=1)
X = X[np.logical_not(nan_idx), :]
y = y[np.logical_not(nan_idx)]
# Merge features and labels into a single DataFrame
features = pd.DataFrame(X, columns=feature_names)
labels = pd.DataFrame(y, columns=['Facies'])
for f_idx, facies in enumerate(facies_names):
labels[labels[:] == f_idx] = facies
data = pd.concat((labels, features), axis=1)
# Plot features statistics
facies_color_map = {}
for ind, label in enumerate(facies_names):
facies_color_map[label] = facies_colors[ind]
sns.pairplot(data, hue='Facies', palette=facies_color_map, hue_order=list(reversed(facies_names)))
# Feature distribution
plot_feature_stats(X, y, feature_names, facies_colors, facies_names)
mpl.rcParams.update(inline_rc)
# Facies per well
for w_idx, w in enumerate(np.unique(well)):
ax = plt.subplot(3, 4, w_idx+1)
hist = np.histogram(y[well == w], bins=np.arange(len(facies_names)+1)+.5)
plt.bar(np.arange(len(hist[0])), hist[0], color=facies_colors, align='center')
ax.set_xticks(np.arange(len(hist[0])))
ax.set_xticklabels(facies_names)
ax.set_title(w)
# Features per well
for w_idx, w in enumerate(np.unique(well)):
ax = plt.subplot(3, 4, w_idx+1)
hist = np.logical_not(np.any(np.isnan(X[well == w, :]), axis=0))
plt.bar(np.arange(len(hist)), hist, color=facies_colors, align='center')
ax.set_xticks(np.arange(len(hist)))
ax.set_xticklabels(feature_names)
ax.set_yticks([0, 1])
ax.set_yticklabels(['miss', 'hit'])
ax.set_title(w)
"""
Explanation: Data inspection
Let us inspect the features we are working with. This step is useful to understand how to normalize them and how to devise a correct cross-validation strategy. Specifically, it is possible to observe that:
- Some features seem to be affected by a few outlier measurements.
- Only a few wells contain samples from all classes.
- PE measurements are available only for some wells.
End of explanation
"""
reg = RandomForestRegressor(max_features='sqrt', n_estimators=50)
DataImpAll = data[feature_names].copy()
DataImp = DataImpAll.dropna(axis = 0, inplace=False)
Ximp=DataImp.loc[:, DataImp.columns != 'PE']
Yimp=DataImp.loc[:, 'PE']
reg.fit(Ximp, Yimp)
X[np.array(DataImpAll.PE.isnull()),4] = reg.predict(DataImpAll.loc[DataImpAll.PE.isnull(),:].drop('PE',axis=1,inplace=False))
"""
Explanation: Feature imputation
Let us fill missing PE values. This is the only cell that differs from the approach of Paolo Bestagini. Currently no feature engineering is used, but this should be explored in the future.
End of explanation
"""
# Feature windows concatenation function
def augment_features_window(X, N_neig):
# Parameters
N_row = X.shape[0]
N_feat = X.shape[1]
# Zero padding
X = np.vstack((np.zeros((N_neig, N_feat)), X, (np.zeros((N_neig, N_feat)))))
# Loop over windows
X_aug = np.zeros((N_row, N_feat*(2*N_neig+1)))
for r in np.arange(N_row)+N_neig:
this_row = []
for c in np.arange(-N_neig,N_neig+1):
this_row = np.hstack((this_row, X[r+c]))
X_aug[r-N_neig] = this_row
return X_aug
# Feature gradient computation function
def augment_features_gradient(X, depth):
# Compute features gradient
d_diff = np.diff(depth).reshape((-1, 1))
d_diff[d_diff==0] = 0.001
X_diff = np.diff(X, axis=0)
X_grad = X_diff / d_diff
# Compensate for last missing value
X_grad = np.concatenate((X_grad, np.zeros((1, X_grad.shape[1]))))
return X_grad
# Feature augmentation function
def augment_features(X, well, depth, N_neig=1):
# Augment features
X_aug = np.zeros((X.shape[0], X.shape[1]*(N_neig*2+2)))
for w in np.unique(well):
w_idx = np.where(well == w)[0]
X_aug_win = augment_features_window(X[w_idx, :], N_neig)
X_aug_grad = augment_features_gradient(X[w_idx, :], depth[w_idx])
X_aug[w_idx, :] = np.concatenate((X_aug_win, X_aug_grad), axis=1)
# Find padded rows
padded_rows = np.unique(np.where(X_aug[:, 0:7] == np.zeros((1, 7)))[0])
return X_aug, padded_rows
# Augment features
X_aug, padded_rows = augment_features(X, well, depth)
"""
Explanation: Feature augmentation
Our guess is that facies do not abruptly change from a given depth layer to the next one. Therefore, we consider features at neighboring layers to be somewhat correlated. To possibly exploit this fact, let us perform feature augmentation by:
- Aggregating features at neighboring depths.
- Computing feature spatial gradient.
End of explanation
"""
# Initialize model selection methods
lpgo = LeavePGroupsOut(2)
# Generate splits
split_list = []
for train, val in lpgo.split(X, y, groups=data['Well Name']):
hist_tr = np.histogram(y[train], bins=np.arange(len(facies_names)+1)+.5)
hist_val = np.histogram(y[val], bins=np.arange(len(facies_names)+1)+.5)
if np.all(hist_tr[0] != 0) & np.all(hist_val[0] != 0):
split_list.append({'train':train, 'val':val})
# Print splits
for s, split in enumerate(split_list):
print('Split %d' % s)
print(' training: %s' % (data['Well Name'][split['train']].unique()))
print(' validation: %s' % (data['Well Name'][split['val']].unique()))
"""
Explanation: Generate training, validation and test data splits
The choice of training and validation data is paramount in order to avoid overfitting and find a solution that generalizes well on new data. For this reason, we generate a set of training-validation splits so that:
- Features from each well belongs to training or validation set.
- Training and validation sets contain at least one sample for each class.
End of explanation
"""
# Parameters search grid (uncomment parameters for full grid search... may take a lot of time)
N_grid = [100] # [50, 100, 150]
M_grid = [10] # [5, 10, 15]
S_grid = [25] # [10, 25, 50, 75]
L_grid = [5] # [2, 3, 4, 5, 10, 25]
param_grid = []
for N in N_grid:
for M in M_grid:
for S in S_grid:
for L in L_grid:
param_grid.append({'N':N, 'M':M, 'S':S, 'L':L})
# Train and test a classifier
def train_and_test(X_tr, y_tr, X_v, well_v, param):
# Feature normalization
scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(X_tr)
X_tr = scaler.transform(X_tr)
X_v = scaler.transform(X_v)
# Train classifier
clf = OneVsOneClassifier(RandomForestClassifier(n_estimators=param['N'], criterion='entropy',
max_features=param['M'], min_samples_split=param['S'], min_samples_leaf=param['L'],
class_weight='balanced', random_state=0), n_jobs=-1)
clf.fit(X_tr, y_tr)
# Test classifier
y_v_hat = clf.predict(X_v)
# Clean isolated facies for each well
for w in np.unique(well_v):
y_v_hat[well_v==w] = medfilt(y_v_hat[well_v==w], kernel_size=5)
return y_v_hat
# For each set of parameters
score_param = []
for param in param_grid:
# For each data split
score_split = []
for split in split_list:
# Remove padded rows
split_train_no_pad = np.setdiff1d(split['train'], padded_rows)
# Select training and validation data from current split
X_tr = X_aug[split_train_no_pad, :]
X_v = X_aug[split['val'], :]
y_tr = y[split_train_no_pad]
y_v = y[split['val']]
# Select well labels for validation data
well_v = well[split['val']]
# Train and test
y_v_hat = train_and_test(X_tr, y_tr, X_v, well_v, param)
# Score
score = f1_score(y_v, y_v_hat, average='micro')
score_split.append(score)
# Average score for this param
score_param.append(np.mean(score_split))
print('F1 score = %.3f %s' % (score_param[-1], param))
# Best set of parameters
best_idx = np.argmax(score_param)
param_best = param_grid[best_idx]
score_best = score_param[best_idx]
print('\nBest F1 score = %.3f %s' % (score_best, param_best))
"""
Explanation: Classification parameters optimization
Let us perform the following steps for each set of parameters:
- Select a data split.
- Normalize features using a robust scaler.
- Train the classifier on training data.
- Test the trained classifier on validation data.
- Repeat for all splits and average the F1 scores.
At the end of the loop, we select the classifier that maximizes the average F1 score on the validation set. Hopefully, this classifier should be able to generalize well on new data.
End of explanation
"""
# Load data from file
test_data = pd.read_csv('../validation_data_nofacies.csv')
# Prepare training data
X_tr = X
y_tr = y
# Augment features
X_tr, padded_rows = augment_features(X_tr, well, depth)
# Removed padded rows
X_tr = np.delete(X_tr, padded_rows, axis=0)
y_tr = np.delete(y_tr, padded_rows, axis=0)
# Prepare test data
well_ts = test_data['Well Name'].values
depth_ts = test_data['Depth'].values
X_ts = test_data[feature_names].values
# Augment features
X_ts, padded_rows = augment_features(X_ts, well_ts, depth_ts)
# Predict test labels
y_ts_hat = train_and_test(X_tr, y_tr, X_ts, well_ts, param_best)
# Save predicted labels
test_data['Facies'] = y_ts_hat
test_data.to_csv('ar4_predicted_facies_submission002.csv')
# Plot predicted labels
make_facies_log_plot(
test_data[test_data['Well Name'] == 'STUART'],
facies_colors=facies_colors)
make_facies_log_plot(
test_data[test_data['Well Name'] == 'CRAWFORD'],
facies_colors=facies_colors)
mpl.rcParams.update(inline_rc)
"""
Explanation: Predict labels on test data
Let us now apply the selected classification technique to test data.
End of explanation
"""
|
nbortolotti/tensorflow-code-experiences | translations/es-ES/jupyter/constant_types.ipynb | apache-2.0 | shape_tensor = tf.zeros([2,3],tf.int32)
with tf.Session() as ses:
print(ses.run(shape_tensor))
"""
Explanation: official documentation link
create tensors whose elements are of a specific value
End of explanation
"""
input_tensor_model = [[1,2],[3,4],[5,6]]
zeroslike_tensor = tf.zeros_like(input_tensor_model)
with tf.Session() as ses:
print(ses.run(zeroslike_tensor))
"""
Explanation: a tensor of the same shape and type (unless a type is specified) as the input_tensor, but with all elements set to zero
End of explanation
"""
shape_one_tensor = tf.ones([3,3],tf.int32)
with tf.Session() as ses:
print(ses.run(shape_one_tensor))
"""
Explanation: a tensor of the given shape with all elements set to one
End of explanation
"""
input_tensor_model_ones = [[1,2],[3,4],[5,6]]
onelikes = tf.ones_like(input_tensor_model_ones)
with tf.Session() as ses:
print(ses.run(onelikes))
"""
Explanation: a tensor of the same shape and type (unless a type is specified) as the input_tensor, but with all elements set to one.
End of explanation
"""
tensor_scalar = tf.fill([3, 3], 8)
with tf.Session() as ses:
print(ses.run(tensor_scalar))
"""
Explanation: create a tensor filled with a scalar value
End of explanation
"""
tensor_lin = tf.linspace(50.0 ,55.0, 5 , name="linspace")
with tf.Session() as ses:
print(ses.run(tensor_lin))
"""
Explanation: creating constants that are sequences
End of explanation
"""
tensor_range = tf.range(3 ,15 , 3)
with tf.Session() as ses:
print(ses.run(tensor_range))
"""
Explanation: create a sequence of numbers that begins at start and extends by increments of delta up to but not including limit
End of explanation
"""
for a in range(4):
print(a)
for b in tf.range(4):
print(b)
"""
Explanation: TensorFlow sequences are not iterable
End of explanation
"""
|
matousc89/python-web-tutorials | HTML_and_JSON_processing.ipynb | mit | sample_html = """
<html>
<head>
<title>Test</title>
</head>
<body>
<h1>Heading!</h1>
<p class="major_content">Some content.</p>
<p class="minor_content">Some other content.</p>
</body>
</html>
"""
"""
Explanation: Processing of HTTP response - JSON and HTML
This tutorial covers basic operations with HTML and JSON.
For more information about related topics, see:
* <a href="https://en.wikipedia.org/wiki/JSON">JavaScript Object Notation</a>
* <a href="https://en.wikipedia.org/wiki/HTML">HyperText Markup Language (HTML)</a>
HTML (XML) parsing
There are two main approaches to parsing the data:
SAX (Simple API for XML) - it scans elements on the fly. This approach does not store anything in memory.
DOM (Document Object Model) - it creates a model of all elements in memory. This allows higher-level operations.
HTML parsing with Python HTMLParser class
This section introduces the <a href="https://docs.python.org/2/library/htmlparser.html">HTMLParser</a> class. It is a SAX parser. The following sample HTML content is used in the next examples:
End of explanation
"""
# from HTMLParser import HTMLParser # Python 2.7
from html.parser import HTMLParser
class TestHTMLParser(HTMLParser):
def handle_starttag(self, tag, attrs):
print("Tag start:", tag)
def handle_endtag(self, tag):
print("Tag end:", tag)
def handle_data(self, data):
print("Tag data:", data)
# instantiate the parser and feed in some HTML
parser = TestHTMLParser(convert_charrefs=True)
parser.feed(sample_html)
"""
Explanation: A simple example of usage follows. The following parser prints out encountered tags and data.
End of explanation
"""
class Test2HTMLParser(HTMLParser):
def __init__(self):
HTMLParser.__init__(self, convert_charrefs=True)
self.recording = False
def handle_starttag(self, tag, attrs):
if tag == "p" and "major_content" in dict(attrs).values():
self.recording = True
def handle_endtag(self, tag):
self.recording = False
def handle_data(self, data):
if self.recording:
print(data)
# instantiate the parser and feed in some HTML
parser2 = Test2HTMLParser()
parser2.feed(sample_html)
"""
Explanation: The goal of this second parser is to get the content from the paragraph with the class major_content.
End of explanation
"""
import xml.etree.ElementTree as ET
tree = ET.fromstring(sample_html)
for child1 in tree:
print(child1.tag)
for child2 in child1:
print("\t", child2.tag, "-", child2.text)
"""
Explanation: Examples with the ElementTree XML API
See <a href="https://docs.python.org/2/library/xml.etree.elementtree.html">ElementTree XML API</a> for more information. This library is designed for XML parsing, but it is possible to use it for HTML parsing as well, with varying levels of success. This is a DOM parser. A simple example that iterates over the HTML tree (first and second levels only) follows:
End of explanation
"""
import xml.etree.ElementTree as ET
tree = ET.fromstring(sample_html)
tree.findall("./body/p[@class='major_content']")[0].text
"""
Explanation: The second example prints just the content of the paragraph with the major_content class:
End of explanation
"""
from bs4 import BeautifulSoup
# path to data
path = "data/example1.html"
# template for printing the output
sentence = "{} {} is {} years old."
# load data
with open(path, 'r') as datafile:
sample_html = datafile.read()
# create tree
soup = BeautifulSoup(sample_html, "html.parser")
# get title and print it
title = soup.find("title")
print(title.text, "\n")
# select all rows in table
table = soup.find("table", {"id": "main_table"})
table_rows = table.findAll("tr")
# iterate over table and print results
for row in table_rows:
first_name = row.find("td", {"class": "first_name"})
last_name = row.find("td", {"class": "last_name"})
age = row.find("td", {"class": "age"})
if first_name and last_name and age:
print(sentence.format(first_name.text, last_name.text, age.text))
"""
Explanation: Examples with BeautifulSoup library
The <a href="https://www.crummy.com/software/BeautifulSoup/">BeautifulSoup</a> is library dedicated to simplify scraping information from HTML pages. It is an DOM parser. The <a href="data/exaple1.html">sample data</a> named Example1 from this tutorial repo are used next example:
End of explanation
"""
print(table.attrs)
"""
Explanation: Attributes of the elements are accessible as simply as follows:
End of explanation
"""
sample_html = """
<html>
<head>
<title>Test</title>
</head>
<body>
<h1>Heading!</h1>
<p class="major_content">Some content. And even more content.</p>
<p class="minor_content">
Some other content.
Numbers related content.
The important information is, that the key number is 23.
</p>
</body>
</html>
"""
"""
Explanation: Getting a specific string from HTML (or other text)
In some cases it can be beneficial to get particular information from the source without parsing. The following source is used in the next examples.
End of explanation
"""
# unclean way
target_start = sample_html.find("the key number is ") + len("the key number is ")
target_end = sample_html[target_start:].find(".") + target_start
print(sample_html[target_start:target_end])
"""
Explanation: If you need just the <i>key number</i> value from the text, and it is certain that:
the information appears only once in the text
the information will not change its form (words, word order, ...)
then you can use the following approach.
End of explanation
"""
# much beter way (with regex)
import re
print(re.search(r'the key number is (.*)\.', sample_html).group(1))
"""
Explanation: Or you can do the same thing more robustly with <a href="https://docs.python.org/2/library/re.html">Regex</a>.
End of explanation
"""
import json
# sample data
message = [
{"time": 123, "value": 5},
{"time": 124, "value": 6},
{"status": "ok", "finish": [True, False, False]},
]
# pack message as json
js_message = json.dumps(message)
# show result
print(type(js_message))
print(js_message)
"""
Explanation: Work with JSON
The next piece of code shows how to create a JSON-encoded message in Python with the <a href="https://docs.python.org/2/library/json.html">JSON library</a>.
A simple example
End of explanation
"""
# unpack message
message = json.loads(js_message)
# show result
print(type(message))
print(message)
"""
Explanation: Note that the output is a string. In a similar way, you can unpack the message back to a standard Python list/dictionary. An example follows.
End of explanation
"""
import requests
r = requests.get("http://api.open-notify.org/iss-now.json")
obj = r.json()
print(obj)
"""
Explanation: JSON support in Requests library
The Requests library can convert the HTTP JSON response directly to a standard Python format (dictionary/list). See the following example.
End of explanation
"""
import datetime
# raw data
print("Raw data:")
print(obj)
# important part
print("\nSelected items from data:")
print(obj["timestamp"])
print(obj['iss_position']['latitude'], obj['iss_position']['longitude'])
# unix timestamp to human format
timestamp = datetime.datetime.fromtimestamp(obj["timestamp"]).strftime('%Y-%m-%d %H:%M:%S')
# print of cleaned data
print("\nCleaned data:")
print("Time and date: {}".format(timestamp))
print("Latitude: {}, longitude: {}".format(obj['iss_position']['latitude'], obj['iss_position']['longitude']))
"""
Explanation: The Requests function json() converts the JSON response to a Python dictionary. The next code block demonstrates how to get data from the obtained response.
End of explanation
"""
|
SBRG/ssbio | docs/notebooks/GEM-PRO - Genes & Sequences.ipynb | mit | import sys
import logging
# Import the GEM-PRO class
from ssbio.pipeline.gempro import GEMPRO
# Printing multiple outputs per cell
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
"""
Explanation: GEM-PRO - Genes & Sequences
This notebook gives an example of how to run the GEM-PRO pipeline with a dictionary of gene IDs and their protein sequences.
<div class="alert alert-info">
**Input:**
Dictionary of gene IDs and protein sequences
</div>
<div class="alert alert-info">
**Output:**
GEM-PRO model
</div>
Imports
End of explanation
"""
# Create logger
logger = logging.getLogger()
logger.setLevel(logging.INFO) # SET YOUR LOGGING LEVEL HERE #
# Other logger stuff for Jupyter notebooks
handler = logging.StreamHandler(sys.stderr)
formatter = logging.Formatter('[%(asctime)s] [%(name)s] %(levelname)s: %(message)s', datefmt="%Y-%m-%d %H:%M")
handler.setFormatter(formatter)
logger.handlers = [handler]
"""
Explanation: Logging
Set the logging level in logger.setLevel(logging.<LEVEL_HERE>) to specify how verbose you want the pipeline to be. Debug is most verbose.
CRITICAL
Only really important messages shown
ERROR
Major errors
WARNING
Warnings that don't affect running of the pipeline
INFO (default)
Info such as the number of structures mapped per gene
DEBUG
Really detailed information that will print out a lot of stuff
<div class="alert alert-warning">
**Warning:**
`DEBUG` mode prints out a large amount of information, especially if you have a lot of genes. This may stall your notebook!
</div>
End of explanation
"""
# SET FOLDERS AND DATA HERE
import tempfile
ROOT_DIR = tempfile.gettempdir()
PROJECT = 'genes_and_sequences_GP'
GENES_AND_SEQUENCES = {'b0870': 'MIDLRSDTVTRPSRAMLEAMMAAPVGDDVYGDDPTVNALQDYAAELSGKEAAIFLPTGTQANLVALLSHCERGEEYIVGQAAHNYLFEAGGAAVLGSIQPQPIDAAADGTLPLDKVAMKIKPDDIHFARTKLLSLENTHNGKVLPREYLKEAWEFTRERNLALHVDGARIFNAVVAYGCELKEITQYCDSFTICLSKGLGTPVGSLLVGNRDYIKRAIRWRKMTGGGMRQSGILAAAGIYALKNNVARLQEDHDNAAWMAEQLREAGADVMRQDTNMLFVRVGEENAAALGEYMKARNVLINASPIVRLVTHLDVSREQLAEVAAHWRAFLAR',
'b3041': 'MNQTLLSSFGTPFERVENALAALREGRGVMVLDDEDRENEGDMIFPAETMTVEQMALTIRHGSGIVCLCITEDRRKQLDLPMMVENNTSAYGTGFTVTIEAAEGVTTGVSAADRITTVRAAIADGAKPSDLNRPGHVFPLRAQAGGVLTRGGHTEATIDLMTLAGFKPAGVLCELTNDDGTMARAPECIEFANKHNMALVTIEDLVAYRQAHERKAS'}
PDB_FILE_TYPE = 'mmtf'
# Create the GEM-PRO project
my_gempro = GEMPRO(gem_name=PROJECT, root_dir=ROOT_DIR, genes_and_sequences=GENES_AND_SEQUENCES, pdb_file_type=PDB_FILE_TYPE)
"""
Explanation: Initialization of the project
Set these three things:
ROOT_DIR
The directory where a folder named after your PROJECT will be created
PROJECT
Your project name
LIST_OF_GENES
Your list of gene IDs
A directory will be created in ROOT_DIR with your PROJECT name. The folders are organized like so:
```
ROOT_DIR
└── PROJECT
├── data # General storage for pipeline outputs
├── model # SBML and GEM-PRO models are stored here
├── genes # Per gene information
│ ├── <gene_id1> # Specific gene directory
│ │ └── protein
│ │ ├── sequences # Protein sequence files, alignments, etc.
│ │ └── structures # Protein structure files, calculations, etc.
│ └── <gene_id2>
│ └── protein
│ ├── sequences
│ └── structures
├── reactions # Per reaction information
│ └── <reaction_id1> # Specific reaction directory
│ └── complex
│ └── structures # Protein complex files
└── metabolites # Per metabolite information
└── <metabolite_id1> # Specific metabolite directory
└── chemical
└── structures # Metabolite 2D and 3D structure files
```
<div class="alert alert-info">**Note:** Methods for protein complexes and metabolites are still in development.</div>
End of explanation
"""
# Mapping using BLAST
my_gempro.blast_seqs_to_pdb(all_genes=True, seq_ident_cutoff=.9, evalue=0.00001)
my_gempro.df_pdb_blast.head(2)
"""
Explanation: Mapping sequence --> structure
Since the sequences have been provided, we just need to BLAST them to the PDB.
<p><div class="alert alert-info">**Note:** These methods do not download any 3D structure files.</div></p>
Methods
End of explanation
"""
# Download all mapped PDBs and gather the metadata
my_gempro.download_all_pdbs()
my_gempro.df_pdb_metadata.head(2)
# Set representative structures
my_gempro.set_representative_structure()
my_gempro.df_representative_structures.head()
# Looking at the information saved within a gene
my_gempro.genes.get_by_id('b0870').protein.representative_structure
my_gempro.genes.get_by_id('b0870').protein.representative_structure.get_dict()
"""
Explanation: Downloading and ranking structures
Methods
<div class="alert alert-warning">
**Warning:**
Downloading all PDBs takes a while, since they are also parsed for metadata. You can skip this step and just set representative structures below if you want to minimize the number of PDBs downloaded.
</div>
End of explanation
"""
# Prep I-TASSER model folders
my_gempro.prep_itasser_modeling('~/software/I-TASSER4.4', '~/software/ITLIB/', runtype='local', all_genes=False)
"""
Explanation: Creating homology models
For those proteins with no representative structure, we can create homology models for them. ssbio contains some built in functions for easily running I-TASSER locally or on machines with SLURM (ie. on NERSC) or Torque job scheduling.
You can load in I-TASSER models once they complete using the get_itasser_models later.
<p><div class="alert alert-info">**Info:** Homology modeling can take a long time - about 24-72 hours per protein (highly dependent on the sequence length, as well as if there are available templates).</div></p>
Methods
End of explanation
"""
import os.path as op
my_gempro.save_json(op.join(my_gempro.model_dir, '{}.json'.format(my_gempro.id)), compression=False)
"""
Explanation: Saving your GEM-PRO
<p><div class="alert alert-warning">**Warning:** Saving is still experimental. For a full GEM-PRO with sequences & structures, depending on the number of genes, saving can take >5 minutes.</div></p>
End of explanation
"""
|
google/iree | samples/colab/mnist_training.ipynb | apache-2.0 | %%capture
!python -m pip install iree-compiler iree-runtime iree-tools-tf -f https://github.com/google/iree/releases
# Import IREE's TensorFlow Compiler and Runtime.
import iree.compiler.tf
import iree.runtime
"""
Explanation: ```
Copyright 2020 The IREE Authors
Licensed under the Apache License v2.0 with LLVM Exceptions.
See https://llvm.org/LICENSE.txt for license information.
SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
```
Training and Executing an MNIST Model with IREE
Overview
This notebook covers installing IREE and using it to train a simple neural network on the MNIST dataset.
1. Install and Import IREE
End of explanation
"""
from matplotlib import pyplot as plt
import numpy as np
import tensorflow as tf
tf.random.set_seed(91)
np.random.seed(91)
plt.style.use("seaborn-whitegrid")
plt.rcParams["font.family"] = "monospace"
plt.rcParams["figure.figsize"] = [8, 4.5]
plt.rcParams["figure.dpi"] = 150
# Print version information for future notebook users to reference.
print("TensorFlow version: ", tf.__version__)
print("Numpy version: ", np.__version__)
"""
Explanation: 2. Import TensorFlow and Other Dependencies
End of explanation
"""
# Keras datasets don't provide metadata.
NUM_CLASSES = 10
NUM_ROWS, NUM_COLS = 28, 28
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# Reshape into grayscale images:
x_train = np.reshape(x_train, (-1, NUM_ROWS, NUM_COLS, 1))
x_test = np.reshape(x_test, (-1, NUM_ROWS, NUM_COLS, 1))
# Rescale uint8 pixel values into float32 values between 0 and 1:
x_train = x_train.astype(np.float32) / 255
x_test = x_test.astype(np.float32) / 255
# IREE doesn't currently support int8 tensors, so we cast them to int32:
y_train = y_train.astype(np.int32)
y_test = y_test.astype(np.int32)
print("Sample image from the dataset:")
sample_index = np.random.randint(x_train.shape[0])
plt.figure(figsize=(5, 5))
plt.imshow(x_train[sample_index].reshape(NUM_ROWS, NUM_COLS), cmap="gray")
plt.title(f"Sample #{sample_index}, label: {y_train[sample_index]}")
plt.axis("off")
plt.tight_layout()
"""
Explanation: 3. Load the MNIST Dataset
End of explanation
"""
BATCH_SIZE = 32
class TrainableDNN(tf.Module):
def __init__(self):
super().__init__()
# Create a Keras model to train.
inputs = tf.keras.layers.Input((NUM_COLS, NUM_ROWS, 1))
x = tf.keras.layers.Flatten()(inputs)
x = tf.keras.layers.Dense(128)(x)
x = tf.keras.layers.Activation("relu")(x)
x = tf.keras.layers.Dense(10)(x)
outputs = tf.keras.layers.Softmax()(x)
self.model = tf.keras.Model(inputs, outputs)
# Create a loss function and optimizer to use during training.
self.loss = tf.keras.losses.SparseCategoricalCrossentropy()
self.optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2)
@tf.function(input_signature=[
tf.TensorSpec([BATCH_SIZE, NUM_ROWS, NUM_COLS, 1]) # inputs
])
def predict(self, inputs):
return self.model(inputs, training=False)
# We compile the entire training step by making it a method on the model.
@tf.function(input_signature=[
tf.TensorSpec([BATCH_SIZE, NUM_ROWS, NUM_COLS, 1]), # inputs
tf.TensorSpec([BATCH_SIZE], tf.int32) # labels
])
def learn(self, inputs, labels):
# Capture the gradients from forward prop...
with tf.GradientTape() as tape:
probs = self.model(inputs, training=True)
loss = self.loss(labels, probs)
# ...and use them to update the model's weights.
variables = self.model.trainable_variables
gradients = tape.gradient(loss, variables)
self.optimizer.apply_gradients(zip(gradients, variables))
return loss
"""
Explanation: 4. Create a Simple DNN
MLIR-HLO (the MLIR dialect we use to convert TensorFlow models into assembly that IREE can compile) does not currently support training with a dynamic number of examples, so we compile the model with a fixed batch size (by specifying the batch size in the tf.TensorSpecs).
End of explanation
"""
exported_names = ["predict", "learn"]
"""
Explanation: 5. Compile the Model with IREE
tf.keras adds a large number of methods to TrainableDNN, and most of them
cannot be compiled with IREE. To get around this we tell IREE exactly which
methods we would like it to compile.
End of explanation
"""
backend_choice = "dylib-llvm-aot (CPU)" #@param [ "vmvx (CPU)", "dylib-llvm-aot (CPU)", "vulkan-spirv (GPU/SwiftShader – requires additional drivers) " ]
backend_choice = backend_choice.split(' ')[0]
# Compile the TrainableDNN module
# Note: extra flags are needed to i64 demotion, see https://github.com/google/iree/issues/8644
vm_flatbuffer = iree.compiler.tf.compile_module(
TrainableDNN(),
target_backends=[backend_choice],
exported_names=exported_names,
extra_args=["--iree-mhlo-demote-i64-to-i32=false",
"--iree-flow-demote-i64-to-i32"])
compiled_model = iree.runtime.load_vm_flatbuffer(
vm_flatbuffer,
backend=backend_choice)
"""
Explanation: Choose one of IREE's three backends to compile to. (Note: Using Vulkan requires installing additional drivers.)
End of explanation
"""
#@title Benchmark inference and training
print("Inference latency:\n ", end="")
%timeit -n 100 compiled_model.predict(x_train[:BATCH_SIZE])
print("Training latancy:\n ", end="")
%timeit -n 100 compiled_model.learn(x_train[:BATCH_SIZE], y_train[:BATCH_SIZE])
# Run the core training loop.
losses = []
step = 0
max_steps = x_train.shape[0] // BATCH_SIZE
for batch_start in range(0, x_train.shape[0], BATCH_SIZE):
if batch_start + BATCH_SIZE > x_train.shape[0]:
continue
inputs = x_train[batch_start:batch_start + BATCH_SIZE]
labels = y_train[batch_start:batch_start + BATCH_SIZE]
loss = compiled_model.learn(inputs, labels).to_host()
losses.append(loss)
step += 1
print(f"\rStep {step:4d}/{max_steps}: loss = {loss:.4f}", end="")
#@title Plot the training results
import bottleneck as bn
smoothed_losses = bn.move_mean(losses, 32)
x = np.arange(len(losses))
plt.plot(x, smoothed_losses, linewidth=2, label='loss (moving average)')
plt.scatter(x, losses, s=16, alpha=0.2, label='loss (per training step)')
plt.ylim(0)
plt.legend(frameon=True)
plt.xlabel("training step")
plt.ylabel("cross-entropy")
plt.title("training loss");
"""
Explanation: 6. Train the Compiled Model on MNIST
This compiled model is portable, demonstrating that IREE can be used for training on a mobile device. On mobile, IREE has a ~1000 fold binary size advantage over the current TensorFlow solution (which is to use the now-deprecated TF Mobile, as TFLite does not support training at this time).
End of explanation
"""
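As an aside, the `bn.move_mean` call in the plotting cell above is the only use of `bottleneck`; if that package is unavailable, an equivalent moving mean can be computed with NumPy alone. A hedged sketch (the `move_mean` name here is ours, and it mimics bottleneck's default of NaN-padding the first `window - 1` positions):

```python
import numpy as np

def move_mean(a, window):
    # NumPy-only stand-in for bottleneck's bn.move_mean: positions with
    # fewer than `window` samples so far are NaN, matching bn's default.
    a = np.asarray(a, dtype=float)
    out = np.full(a.shape, np.nan)
    if a.size >= window:
        c = np.cumsum(np.insert(a, 0, 0.0))
        out[window - 1:] = (c[window:] - c[:-window]) / window
    return out

vals = move_mean([1, 2, 3, 4], 2)  # [nan, 1.5, 2.5, 3.5]
```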
#@title Evaluate the network on the test data.
accuracies = []
step = 0
max_steps = x_test.shape[0] // BATCH_SIZE
for batch_start in range(0, x_test.shape[0], BATCH_SIZE):
if batch_start + BATCH_SIZE > x_test.shape[0]:
continue
inputs = x_test[batch_start:batch_start + BATCH_SIZE]
labels = y_test[batch_start:batch_start + BATCH_SIZE]
prediction = compiled_model.predict(inputs).to_host()
prediction = np.argmax(prediction, -1)
accuracies.append(np.sum(prediction == labels) / BATCH_SIZE)
step += 1
print(f"\rStep {step:4d}/{max_steps}", end="")
print()
accuracy = np.mean(accuracies)
print(f"Test accuracy: {accuracy:.3f}")
#@title Display inference predictions on a random selection of heldout data
rows = 4
columns = 4
images_to_display = rows * columns
assert BATCH_SIZE >= images_to_display
random_index = np.arange(x_test.shape[0])
np.random.shuffle(random_index)
x_test = x_test[random_index]
y_test = y_test[random_index]
predictions = compiled_model.predict(x_test[:BATCH_SIZE]).to_host()
predictions = np.argmax(predictions, -1)
fig, axs = plt.subplots(rows, columns)
for i, ax in enumerate(np.ndarray.flatten(axs)):
ax.imshow(x_test[i, :, :, 0])
color = "#000000" if predictions[i] == y_test[i] else "#ff7f0e"
ax.set_xlabel(f"prediction={predictions[i]}", color=color)
ax.grid(False)
ax.set_yticks([])
ax.set_xticks([])
fig.tight_layout()
"""
Explanation: 7. Evaluate on Heldout Test Examples
End of explanation
"""
|
tleonhardt/LearningCython | Learning_Cython_video/Chapter05/memview/loops.ipynb | mit | def p(n, m):
output = 0
for i in range(n):
output += i % m
return output
%timeit p(1000000, 42)
"""
Explanation: Plain Python: modulo (%) in a loop
End of explanation
"""
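Before timing, note that this loop's result has a simple closed form, which makes a handy correctness cross-check for the Cython variants below. A hedged sketch (the `mod_sum` helper is ours, not part of the original lesson):

```python
def mod_sum(n, m):
    # Closed form for sum(i % m for i in range(n)):
    # q full cycles each summing 0 + 1 + ... + (m - 1), plus a
    # partial cycle of r terms summing 0 + 1 + ... + (r - 1).
    q, r = divmod(n, m)
    return q * (m * (m - 1) // 2) + r * (r - 1) // 2

expected = mod_sum(1000000, 42)  # the value the loops here should produce
```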
%%cython
def f(n, m):
output = 0
for i in range(n):
output += i % m
return output
%timeit f(1000000, 42)
"""
Explanation: Still Python, but inside a Cython cell
There are still no types declared, but you get a surprising speedup.
End of explanation
"""
%%cython
def c(int n):
cdef int i, output = 0  # initialize the accumulator; a bare cdef int is uninitialized
for i in range(n):
output += i % 42
return output
%timeit c(1000000)
"""
Explanation: Cython: give types to all variables
End of explanation
"""
%%cython
def w(int n):
cdef int i = 0, output = 0  # initialize the accumulator explicitly
while i < n:
output += i % 42
i += 1
return output
%timeit w(1000000)
"""
Explanation: Example of a while loop
End of explanation
"""
%%cython
cimport cython
# This code was translated to Cython from a
# wikipedia article at:
#
# https://en.wikipedia.org/wiki/Xorshift
#
# "Xorshift random number generators are a
# class of pseudorandom number generators that
# was discovered by George Marsaglia.[1] They
# generate the next number in their sequence
# by repeatedly taking the exclusive or of a
# number with a bit shifted version of itself.
# This makes them extremely fast on modern
# computer architectures"
#
# "A naive C implementation of a xorshift+ generator
# that passes all tests from the BigCrush suite
# (with an order of magnitude fewer failures than
# Mersenne Twister or WELL) typically takes fewer
# than 10 clock cycles on x86 to generate a random
# number, thanks to instruction pipelining."
cdef unsigned long long s[2] # Seed: initialize to nonzero
cdef inline unsigned long long xorshiftplus():
""" Direct translation from Wikipedia page """
cdef unsigned long long x = s[0]
cdef unsigned long long y = s[1]
s[0] = y
x ^= x << 23 # a
x ^= x >> 17 # b
x ^= y^(y>>26) # c
s[1] = x
return x + y
@cython.boundscheck(False)
def random_array(unsigned long long[:] output):
""" Array must be already be sized """
s[0] = 1 # Set the seed
s[1] = 2
cdef int i, n = output.shape[0]
for i in range(n):
output[i] = xorshiftplus()
"""
Explanation: Application: custom random number generator
Numpy only generates 32-bit random integers. Let's make a 64-bit random integer generator!
End of explanation
"""
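To sanity-check the generator, the same xorshift+ step can be emulated in pure Python, masking to 64 bits to mimic `unsigned long long` wrap-around. A hedged sketch (this helper is ours, for cross-checking only):

```python
MASK64 = (1 << 64) - 1

def xorshiftplus_step(s0, s1):
    # One xorshift+ step; returns (sample, new_s0, new_s1).
    x, y = s0, s1
    x = (x ^ (x << 23)) & MASK64  # Python ints are unbounded, so mask the left shift
    x ^= x >> 17
    x ^= y ^ (y >> 26)
    return (x + y) & MASK64, y, x

s0, s1 = 1, 2  # same seed as the Cython version
out = []
for _ in range(5):
    r, s0, s1 = xorshiftplus_step(s0, s1)
    out.append(r)
```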
# Create storage for our 8-byte random numbers
import numpy
output = numpy.zeros(10, dtype=numpy.uint64)
random_array(output)
print(output)
"""
Explanation: Quick demo
End of explanation
"""
n = int(1e8) # 100 million random numbers
output = numpy.zeros(n, dtype=numpy.uint64)
%timeit random_array(output)
%timeit y = numpy.random.randint(low=0, high=2**31 - 1, size=n)
"""
Explanation: Speed test - compare Cython and Numpy
Note: Numpy generates only 32-bit integers, but uses the Mersenne Twister algorithm. Our Cython function generates 64-bit integers, but uses the Xorshift+ algorithm.
End of explanation
"""
|
tbrx/compiled-inference | notebooks/Multilevel-Poisson.ipynb | gpl-3.0 | theta_est, params_est = multilevel_poisson.get_estimators()
theta_est.load_state_dict(torch.load('../saved/trained_poisson_theta.rar'))
params_est.load_state_dict(torch.load('../saved/trained_poisson_params.rar'))
true_t = np.array([94.3, 15.7, 62.9, 126, 5.24, 31.4, 1.05, 1.05, 2.1, 10.5])
true_x = np.array([5, 1, 5, 14, 3, 19, 1, 1, 4, 22])
"""
Explanation: Learn a multilevel model
We're going to learn the model for the PUMPS data:
http://www.openbugs.net/Examples/Pumps.html
This model has local parameters $\theta_i$ and global parameters $\alpha, \beta$.
End of explanation
"""
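For reference, the generative process can be sketched directly with NumPy, using the same priors this notebook assumes elsewhere (Exponential(1) on alpha, Gamma(0.1, 1) on beta, Gamma(alpha, rate beta) on each theta_i, Poisson(theta_i * t_i) on the counts). This is an illustrative sketch, not the pymc model itself; the floor on beta is ours, added only for numerical safety:

```python
import numpy as np

def sample_pumps(t, rng):
    alpha = rng.exponential(scale=1.0)                  # alpha ~ Exponential(1)
    beta = max(rng.gamma(shape=0.1, scale=1.0), 1e-3)   # beta ~ Gamma(0.1, 1), floored
    # local failure rates theta_i ~ Gamma(alpha, rate=beta)
    theta = rng.gamma(shape=alpha, scale=1.0 / beta, size=len(t))
    # observed failure counts x_i ~ Poisson(theta_i * t_i)
    x = rng.poisson(theta * t)
    return alpha, beta, theta, x

t = np.array([94.3, 15.7, 62.9, 126, 5.24, 31.4, 1.05, 1.05, 2.1, 10.5])
alpha, beta, theta, x = sample_pumps(t, np.random.RandomState(0))
```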
real_data = Variable(torch.FloatTensor(np.vstack((true_x, true_t)).T))
M_train = pymc.Model(multilevel_poisson.gamma_poisson(None,None))
M_test = pymc.Model(multilevel_poisson.gamma_poisson(true_x, true_t))
def estimate_MCMC(data_x, data_t, ns, iters=10000, burn=0.5):
""" MCMC estimate of weight distribution """
mcmc_est = pymc.MCMC(multilevel_poisson.gamma_poisson(data_x, data_t))
mcmc_est.sample(iters, burn=burn*iters, thin=np.ceil(burn*iters/ns))
trace_theta = mcmc_est.trace('theta').gettrace()[:ns]
trace_alpha = mcmc_est.trace('alpha').gettrace()[:ns]
trace_beta = mcmc_est.trace('beta').gettrace()[:ns]
return trace_theta, trace_alpha, trace_beta
mcmc_theta, mcmc_alpha, mcmc_beta = estimate_MCMC(true_x, true_t, ns=1000, iters=500000)
# print "MCMC MSE", np.mean((mcmc_theta.mean(0) - true_theta)**2)
true_alpha = mcmc_alpha.mean()
true_beta = mcmc_beta.mean()
true_theta = mcmc_theta.mean(0)
print "\n\nMCMC Estimated theta:", true_theta.round(3)
print "\nMCMC Estimated (alpha, beta):", true_alpha.round(3), true_beta.round(3)
"""
Explanation: Training; synthetic data
We can use our model to define synthetic data, which we will use to train the inference network.
Each "minibatch" will be an unconditioned sample from the graphical model.
Values on true data, as reported in George et al. (1993), are:
theta = [0.060 0.102 0.089 0.116 0.602 0.609 0.891 0.894 1.588 1.994 ]
alpha = 0.7
beta = 0.9
We'll run MCMC here to get benchmark estimates of the posterior distributions.
End of explanation
"""
def draw_inverse(ns=100):
tmp = theta_est.sample(real_data, ns=ns).squeeze(2)
samples = params_est.sample(tmp, ns=1)
return tmp.cpu().data.numpy(), samples.cpu().data.numpy()
nn_raw_theta, nn_raw_params = draw_inverse(1000)
plt.figure(figsize=(20,6))
for i in xrange(10):
plt.subplot(2,5,i+1)
plt.hist(mcmc_theta[:,i], normed=True);
plt.hist(nn_raw_theta[:,i], alpha=0.7, normed=True, bins=20);
plt.title("theta_"+str(i))
plt.legend(['MCMC', 'NN'])
plt.tight_layout()
bins = 20
a = .7
plt.figure(figsize=(9,3.75))
plt.subplot(121)
plt.hist(mcmc_alpha, normed=True, bins=bins, color=sns.color_palette()[0], histtype='stepfilled', linewidth=2.0, alpha=a);
plt.hist(nn_raw_params[:,0], color=sns.color_palette()[2], normed=True, bins=bins, histtype='stepfilled', linewidth=2.0, alpha=a);
# plt.hist(nn_raw_samples[:,0], normed=True, bins=bins, histtype='step', linewidth=2.0);
plt.hist(stats.expon(scale=1.0).rvs(10000), color=sns.color_palette()[5], normed=True, bins=2*bins, histtype='stepfilled', linewidth=2.0, alpha=a);
plt.xlim((0,5))
# plt.xlim((0,plt.xlim()[1]))
plt.xlabel("$\\alpha$")
plt.subplot(122)
plt.hist(mcmc_beta, normed=True, bins=bins, color=sns.color_palette()[0], histtype='stepfilled', linewidth=2.0, alpha=a);
# plt.hist(nn_raw_samples[:,1], normed=True, bins=bins, histtype='step', linewidth=2.0);
plt.hist(nn_raw_params[:,1], normed=True, color=sns.color_palette()[2], bins=bins, histtype='stepfilled', linewidth=2.0, alpha=a);
plt.hist(stats.gamma(a=0.1, scale=1.0).rvs(10000), color=sns.color_palette()[5], normed=True, bins=2*bins, histtype='stepfilled', linewidth=2.0, alpha=a);
plt.xlim((0,3.5))
# plt.xlim((0,plt.xlim()[1]))
plt.ylim(0,4)
plt.legend(['MCMC', 'NN', 'Prior'])
plt.xlabel("$\\beta$")
plt.tight_layout()
# plt.savefig("poisson-params-alt.pdf")
"""
Explanation: Comparison: samples from the proposal, and the MCMC posterior
End of explanation
"""
def bootstrap_propose(model):
success = False
while success == False:
try:
model.alpha.rand()
model.beta.rand()
model.theta.rand()
success = True
except:
pass
try:
weight = model.x.logp
except:
weight = -np.Inf
return weight
def bootstrap_IS(model, ns=100):
theta = np.empty((ns,10))
params = np.empty((ns,2))
log_w = np.empty((ns,))
for n in xrange(ns):
log_w[n] = bootstrap_propose(model)
theta[n] = model.theta.value
params[n] = [model.alpha.value, model.beta.value]
return theta, params, log_w
def normalize_log_weights(log_w):
safe_log_w = np.array(log_w)
safe_log_w[np.isnan(log_w)] = np.NINF
A = safe_log_w.max()
log_denom = np.log(np.sum(np.exp(safe_log_w - A))) + A
return np.exp(safe_log_w - log_denom)
def systematic_resample(weights):
ns = len(weights)
cdf = np.cumsum(weights)
cutoff = (np.random.rand() + np.arange(ns))/ns
return np.digitize(cutoff, cdf)
def nn_IS(model, ns=100):
# theta
theta, log_q = theta_est.propose(real_data, ns=ns)
theta = theta.data.cpu().numpy().squeeze(2)
log_w = -log_q.data.cpu().numpy().sum(1)
# alpha, beta
params, log_q = params_est.propose(Variable(torch.Tensor(theta)))
alpha = params.data.cpu().numpy()[:,0] #np.empty((ns,))
beta = params.data.cpu().numpy()[:,1] # np.empty((ns,))
log_w -= log_q.data.cpu().numpy().squeeze(1)
# likelihood
log_w += stats.poisson(model.t.value * theta).logpmf(model.x.value).sum(1)
log_w += stats.gamma(a=alpha[:,None], scale=1.0/beta[:,None]).logpdf(theta).sum(1)
log_w += stats.expon(scale=1.0).logpdf(alpha)
log_w += stats.gamma(a=0.1, scale=1.0).logpdf(beta)
log_w[np.isnan(log_w)] = np.NINF
return theta, params.data.cpu().numpy(), log_w
nn_theta,nn_params,nn_log_w = nn_IS(M_test, ns=10000)
bootstrap_theta,bootstrap_params,bootstrap_log_w = bootstrap_IS(M_test, ns=10000)
print "Effective sample size (benchmark):", 1.0/np.sum(normalize_log_weights(bootstrap_log_w)**2)
print "Effective sample size (NN):", 1.0/np.sum(normalize_log_weights(nn_log_w)**2)
"""
Explanation: Simple benchmark: likelihood weighting
Using the prior as a proposal, relative to using the learned network, what do we gain in terms of effective sample size?
End of explanation
"""
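The effective-sample-size computation printed above can be isolated into a small helper, combining the max-subtraction trick from `normalize_log_weights` with the Kish ESS formula $1/\sum_i w_i^2$. A hedged sketch (the function name is ours):

```python
import numpy as np

def effective_sample_size(log_w):
    # Kish ESS from unnormalized log-weights, with max-subtraction
    # for numerical stability (same trick as normalize_log_weights).
    log_w = np.asarray(log_w, dtype=float)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

ess_uniform = effective_sample_size(np.zeros(100))                     # equal weights: ESS = N
ess_degenerate = effective_sample_size(np.array([0.0, -50.0, -50.0]))  # one dominant weight: ESS ~ 1
```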
def run_DCSMC(model, ns=100, return_extras=False):
# stage one
num_points = 10
theta, log_q = theta_est.propose(real_data, ns=ns)
theta = theta.data.cpu().numpy().squeeze(2)
log_w = -log_q.data.cpu().numpy()
log_w += stats.poisson(model.t.value * theta).logpmf(model.x.value)
log_w[np.isnan(log_w)] = np.NINF
Z_est = np.log(np.exp(log_w).mean(0))
# print Z_est
for i in xrange(num_points):
indices = systematic_resample(normalize_log_weights(log_w[:,i]))
np.random.shuffle(indices)
theta[:,i] = theta[indices,i]
# Now: we have unweighted samples for each theta[:,i]
# ... and we have Z estimates for each i. What is next?
# Sample values of alpha, beta
params, log_q = params_est.propose(Variable(torch.Tensor(theta)))
alpha = params.data.cpu().numpy()[:,0] #np.empty((ns,))
beta = params.data.cpu().numpy()[:,1] # np.empty((ns,))
log_w = -log_q.data.cpu().numpy().squeeze(1)
# Merge
tmp = stats.gamma(a=alpha[:,None], scale=1.0/beta[:,None])
log_w += tmp.logpdf(theta).sum(1)
log_w += stats.expon(scale=1.0).logpdf(alpha)
log_w += stats.gamma(a=0.1, scale=1.0).logpdf(beta)
assert(log_w.shape == (ns,))
indices = systematic_resample(normalize_log_weights(log_w))
log_w[np.isnan(log_w)] = np.NINF
Z_est = Z_est.sum() + np.log(np.exp(log_w).mean())
if return_extras:
alpha_orig = np.array(alpha)
beta_orig = np.array(beta)
alpha = alpha[indices]
beta = beta[indices]
theta = theta[indices]
if return_extras:
return theta, alpha, beta, Z_est, alpha_orig, beta_orig
else:
return theta, alpha, beta, Z_est
"""
Explanation: Define a method for running divide-and-conquer SMC
This algorithm first proposes values across all thetas, then resamples prior to proposing alpha and beta.
End of explanation
"""
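The resampling step above relies on `systematic_resample`, whose behaviour is easy to check in isolation: a single shared uniform draw is stratified across the weight CDF, giving lower variance than multinomial resampling. A hedged, seeded demo (the seeding is ours):

```python
import numpy as np

rng = np.random.RandomState(1)
weights = np.array([0.5, 0.3, 0.2])
ns = len(weights)
# One uniform draw shared across all strata.
cutoff = (rng.rand() + np.arange(ns)) / ns
idx = np.digitize(cutoff, np.cumsum(weights))
# idx picks each ancestor roughly in proportion to its weight, and comes out sorted.
```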
sizes = [5, 10, 50, 100, 500, 1000, 5000, 10000]
replications = 10
dcsmc_results_Z = np.empty((replications, len(sizes)))
dcsmc_results_L2 = np.empty((replications, len(sizes)))
for c, size in enumerate(sizes):
for rep in xrange(replications):
tmp_theta, tmp_alpha, tmp_beta, Z_est = run_DCSMC(M_test, size)
dcsmc_results_L2[rep,c] = np.sqrt(np.mean((tmp_theta.mean(0) - true_theta)**2))
dcsmc_results_Z[rep,c] = Z_est
lwis_results_Z = np.empty((replications, len(sizes)))
lwis_results_L2 = np.empty((replications, len(sizes)))
for c, size in enumerate(sizes):
for rep in xrange(replications):
bootstrap_theta, _, bootstrap_log_w = bootstrap_IS(M_test, ns=size)
Z_est = np.log(np.exp(bootstrap_log_w).mean())
mean_est = np.dot(bootstrap_theta.T, normalize_log_weights(bootstrap_log_w))
lwis_results_L2[rep,c] = np.sqrt(np.mean((mean_est - true_theta)**2))
lwis_results_Z[rep,c] = Z_est
nnis_results_Z = np.empty((replications, len(sizes)))
nnis_results_L2 = np.empty((replications, len(sizes)))
for c, size in enumerate(sizes):
for rep in xrange(replications):
nnis_theta, _, nnis_log_w = nn_IS(M_test, ns=size)
Z_est = np.log(np.exp(nnis_log_w).mean())
mean_est = np.dot(nnis_theta.T, normalize_log_weights(nnis_log_w))
nnis_results_L2[rep,c] = np.sqrt(np.mean((mean_est - true_theta)**2))
nnis_results_Z[rep,c] = Z_est
plt.figure(figsize=(8,3.6))
def get_res(res):
filtered = np.array(res)
return np.nanmean(filtered, 0), np.nanstd(filtered, 0)
tmp_m, tmp_s = get_res(dcsmc_results_Z)
plt.errorbar(sizes, tmp_m, 2*tmp_s,marker='.', capsize=4, markeredgewidth=2, color=sns.color_palette()[1])
tmp_m, tmp_s = get_res(nnis_results_Z)
plt.errorbar(sizes, tmp_m, 2*tmp_s,marker='.', capsize=4, markeredgewidth=2, color=sns.color_palette()[2])
tmp_m, tmp_s = get_res(lwis_results_Z)
plt.errorbar(sizes, tmp_m, 2*tmp_s,marker='.', capsize=4, markeredgewidth=2, zorder=0,color=sns.color_palette()[5])
plt.legend(['NN-SMC', 'NN-IS', 'IS (Prior)'], loc='lower right')
plt.semilogx();
plt.xlim(sizes[0]-1, sizes[-1])
plt.ylim([-250, 0])
plt.ylabel("$\log \hat Z$")
plt.xlabel("Number of samples")
plt.tight_layout()
plt.figure(figsize=(8,3.5))
tmp_m, tmp_s = dcsmc_results_L2.mean(0), dcsmc_results_L2.std(0)
plt.errorbar(sizes, tmp_m, 2*tmp_s,marker='.', capsize=4, markeredgewidth=2, color=sns.color_palette()[1])
tmp_m, tmp_s = lwis_results_L2.mean(0), lwis_results_L2.std(0)
plt.errorbar(sizes, tmp_m, 2*tmp_s,marker='.', capsize=4, markeredgewidth=2,color=sns.color_palette()[5])
tmp_m, tmp_s = nnis_results_L2.mean(0), nnis_results_L2.std(0)
plt.errorbar(sizes, tmp_m, 2*tmp_s,marker='.', capsize=4, markeredgewidth=2, color=sns.color_palette()[2])
plt.legend(['D&C SMC', 'IS (Prior)', 'IS (NN)'], loc='upper right')
plt.loglog();
plt.xlim(sizes[0]-1, sizes[-1])
plt.ylabel("L2 error in theta")
plt.xlabel("Number of samples")
plt.tight_layout();
"""
Explanation: Compare importance sampling with NN proposals, proposals from prior, and divide-and-conquer SMC with NN proposals
End of explanation
"""
|
phoebe-project/phoebe2-docs | 2.1/examples/binary_misaligned_spots.ipynb | gpl-3.0 | !pip install -I "phoebe>=2.1,<2.2"
"""
Explanation: Binary with Spots
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.set_value(qualifier='pitch', component='primary', value=30)
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
"""
b.add_dataset('lc', times=phoebe.linspace(0,1,101))
b.run_compute(irrad_method='none', model='no_spot')
"""
Explanation: Model without Spots
End of explanation
"""
b.add_feature('spot', component='primary', feature='spot01', relteff=0.9, radius=15, colat=180, long=0)
"""
Explanation: Adding Spots
Let's add a spot to the primary component in our binary, which we have already misaligned by 30 degrees in pitch.
The 'colat' parameter defines the colatitude on the star measured from its North (spin) Pole. The 'long' parameter measures the longitude of the spot - with longitude = 0 being defined as pointing towards the other star at t0. See the spots tutorial for more details.
We'll place this spot at the South Pole, which should be pointing towards the observer because we pitched the north pole away from the observer.
End of explanation
"""
b.add_dataset('mesh', times=[0.75], columns=['teffs'])
b.run_compute(irrad_method='none', model='with_spot')
"""
Explanation: We'll also add a mesh dataset so that we can see the positioning of the spot with respect to the misaligned component.
End of explanation
"""
afig, mplfig = b.plot(kind='mesh', fc='teffs', fcmap='plasma', ec='none', show=True)
"""
Explanation: Location of Spot
End of explanation
"""
afig, mplfig = b.plot(kind='lc', show=True, legend=True)
"""
Explanation: Comparing Light Curves
Note that because of the pitch, the polar spot always faces slightly towards the observer, and so is always visible (unless eclipsed).
End of explanation
"""
|
GSimas/EEL7045 | Adicionais/derivada.ipynb | mit | from __future__ import division
from sympy import *
init_printing()
x, y = symbols('x y') # define x and y as symbolic variables.
"""
Explanation: This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
Author/Institution: Pedro H A Konzen / UFRGS
pedro.konzen@ufrgs.br
Date: Oct/2015
Calculus with Python
In this third part of the course we will study how to use Python to compute derivatives of functions of one variable.
3 - Derivatives
Here you will learn to use SymPy, the Python symbolic mathematics library, to compute:
derivatives
tangent lines
higher-order derivatives
maxima and minima
We begin by loading the SymPy library:
End of explanation
"""
def f(x): return (x**3 - 3*x + 2)*exp(-x/4) - 1
f(x)
"""
Explanation: To fix ideas, let us keep working with the function:
$f(x) = (x^3 - 3x + 2)e^{-x/4} - 1$.
End of explanation
"""
diff(f(x),x)
"""
Explanation: Derivatives
Let us see how to use SymPy to compute the derivative of the function $f(x)$, i.e. $f'(x)$. For this, we use the function $\verb+diff+$:
End of explanation
"""
diff(f(x),x).subs(x,1)
"""
Explanation: To evaluate the derivative at a point, for example to compute $f'(1)$, we type:
End of explanation
"""
# type your solution here!
"""
Explanation: Exercise:
Given $g(x) = x^2 + \frac{1}{2}$, compute $g'(1)$.
End of explanation
"""
x0 = -1/2
fl = diff(f(x),x).subs(x,-0.5) #f'(x_0)
print("slope =")
fl
"""
Explanation: Tangent Lines
Here we will see how to compute the line tangent to the graph of $f(x)$ at the point $x_0 = -\frac{1}{2}$. Recall that this line has the equation:
$y = f'(x_0)(x - x_0) + f(x_0)$.
So, let us first compute $f'(x_0)$:
End of explanation
"""
def r(x) : return fl*(x-x0) + f(x0)
print("tangent line equation y =")
r(x)
"""
Explanation: Now we can define the tangent line:
End of explanation
"""
%matplotlib inline
p1 = plot(f(x),(x,-2,2),show=False,line_color='b')
p2 = plot(r(x),(x,-1.5,1),show=False,line_color='r')
p1.extend(p2)
p1.show()
"""
Explanation: Let us look at the graphs of $f(x)$ and the computed tangent line.
End of explanation
"""
# type your solution here.
"""
Explanation: $\blacktriangleleft$
Exercise:
Find the line tangent to the graph of $y = \frac{1}{x}$ at $x=1$. Sketch the graphs of the function and of the tangent line in a single plot.
End of explanation
"""
diff(f(x),x,2)
"""
Explanation: Higher-Order Derivatives
As with first-order derivatives, higher-order derivatives can be obtained using the function $\verb+diff+$. For example, to compute:
$\frac{d^2}{d x^2}f(x)$
we can type:
End of explanation
"""
# type your answer here!
"""
Explanation: Exercise
Compute:
$\frac{d^3}{d x^3}\left(\frac{x^2 - 1 + \text{sen}\,x}{x^3 - 3x + 1}\right)$.
End of explanation
"""
plot(f(x),(x,-2,2))
plot(f(x),(x,2,40))
plot(f(x),(x,40,60))
"""
Explanation: Local Maxima and Minima
Now let us see how we can use what we have learned so far to solve maximum and minimum problems. Our goal is to find and classify the local maximum and minimum points of the function $f(x)$. To do so, we will use the second derivative test.
Let us start by recalling the graph of $f(x)$:
End of explanation
"""
fl = diff(f(x),x) # compute f'(x)
x1 = nsolve(fl,x,-1) # find the critical point near x=-1
x2 = nsolve(fl,x,1) # find the critical point near x=1
x3 = nsolve(fl,x,10) # find the critical point near x=10
print("x1=",x1)
print("x2=",x2)
print("x3=",x3)
"""
Explanation: Looking at the graphs, we see that $f(x)$ has local maxima near the points $x=-1$ and $x=10$ and a local minimum near the point $x=1$. Hence, we can use these values as initial guesses for the function $\verb+nsolve+$:
End of explanation
"""
diff(f(x),x,2).subs(x,x1) # compute f''(x1)
diff(f(x),x,2).subs(x,x2) # compute f''(x2)
diff(f(x),x,2).subs(x,x3) # compute f''(x3)
"""
Explanation: OK. We have just found the critical points of $f(x)$ (could there be other critical points?). By graphical inspection, we see that $x_1 \approx -1.15$ and $x_3 \approx 12.15$ are local maximum points of $f(x)$ and $x_2 = 1.0$ is the minimum point (global?).
Let us confirm this with the second derivative test. Note:
End of explanation
"""
# type your solution here!
"""
Explanation: Exercise
Regarding the optimization problem we have just discussed, answer:
(a) Are the points $x_1$, $x_2$ and $x_3$ computed above the only critical points of $f(x)$? Justify your answer.
(b) What are the global minimum and maximum points of $f(x)$?
(c) What is the largest value that $f(x)$ attains? And the smallest?
End of explanation
"""
|
macks22/gensim | docs/notebooks/Tensorboard_visualizations.ipynb | lgpl-2.1 | import gensim
import pandas as pd
import smart_open
import random
# read data
dataframe = pd.read_csv('movie_plots.csv')
dataframe
"""
Explanation: TensorBoard Visualizations
In this tutorial, we will learn how to visualize different types of NLP-based embeddings via TensorBoard. TensorBoard is a data visualization framework for visualizing and inspecting TensorFlow runs and graphs. We will use a built-in TensorBoard visualizer called Embedding Projector in this tutorial. It lets you interactively visualize and analyze high-dimensional data like embeddings.
Read Data
For this tutorial, a transformed MovieLens dataset<sup>[1]</sup> is used. You can download the final prepared csv from here.
End of explanation
"""
def read_corpus(documents):
for i, plot in enumerate(documents):
yield gensim.models.doc2vec.TaggedDocument(gensim.utils.simple_preprocess(plot, max_len=30), [i])
train_corpus = list(read_corpus(dataframe.Plots))
"""
Explanation: 1. Visualizing Doc2Vec
In this part, we will learn about visualizing Doc2Vec Embeddings aka Paragraph Vectors via TensorBoard. The input documents for training will be the synopsis of movies, on which Doc2Vec model is trained.
<img src="Tensorboard.png">
The visualizations will be a scatterplot as seen in the above image, where each datapoint is labelled by the movie title and colored by its corresponding genre. You can also visit this Projector link which is configured with my embeddings for the above mentioned dataset.
Preprocess Text
Below, we define a function to read the training documents, pre-process each document using a simple gensim pre-processing tool (i.e., tokenize text into individual words, remove punctuation, set to lowercase, etc), and return a list of words. Also, to train the model, we'll need to associate a tag/number with each document of the training corpus. In our case, the tag is simply the zero-based line number.
End of explanation
"""
train_corpus[:2]
"""
Explanation: Let's take a look at the training corpus.
End of explanation
"""
model = gensim.models.doc2vec.Doc2Vec(size=50, min_count=2, iter=55)
model.build_vocab(train_corpus)
model.train(train_corpus, total_examples=model.corpus_count, epochs=model.iter)
"""
Explanation: Training the Doc2Vec Model
We'll instantiate a Doc2Vec model with a vector size of 50 and iterate over the training corpus 55 times. We set the minimum word count to 2 in order to give higher-frequency words more weight. Model accuracy can be improved by increasing the number of iterations, but this generally increases the training time. Small datasets with short documents, like this one, can benefit from more training passes.
End of explanation
"""
model.save_word2vec_format('doc_tensor.w2v', doctag_vec=True, word_vec=False)
"""
Explanation: Now, we'll save the document embedding vectors per doctag.
End of explanation
"""
%run ../../gensim/scripts/word2vec2tensor.py -i doc_tensor.w2v -o movie_plot
"""
Explanation: Prepare the Input files for Tensorboard
Tensorboard takes two input files: one containing the embedding vectors and the other containing relevant metadata. We'll use a gensim script to directly convert the embedding file saved in word2vec format above to the tsv format required by Tensorboard.
End of explanation
"""
with open('movie_plot_metadata.tsv','w') as w:
w.write('Titles\tGenres\n')
for i,j in zip(dataframe.Titles, dataframe.Genres):
w.write("%s\t%s\n" % (i,j))
"""
Explanation: The script above generates two files: movie_plot_tensor.tsv, which contains the embedding vectors, and movie_plot_metadata.tsv, containing the doctags. But these doctags are simply unique index values, and hence are not very useful for interpreting which document is which while visualizing. So we will overwrite movie_plot_metadata.tsv with a custom metadata file with two columns: the first for the movie titles and the second for their corresponding genres.
End of explanation
"""
import pandas as pd
import re
from gensim.parsing.preprocessing import remove_stopwords, strip_punctuation
from gensim.models import ldamodel
from gensim.corpora.dictionary import Dictionary
# read data
dataframe = pd.read_csv('movie_plots.csv')
# remove stopwords and punctuations
def preprocess(row):
return strip_punctuation(remove_stopwords(row.lower()))
dataframe['Plots'] = dataframe['Plots'].apply(preprocess)
# Convert data to required input format by LDA
texts = []
for line in dataframe.Plots:
lowered = line.lower()
words = re.findall(r'\w+', lowered, flags = re.UNICODE | re.LOCALE)
texts.append(words)
# Create a dictionary representation of the documents.
dictionary = Dictionary(texts)
# Filter out words that occur in fewer than 2 documents, or in more than 30% of the documents.
dictionary.filter_extremes(no_below=2, no_above=0.3)
# Bag-of-words representation of the documents.
corpus = [dictionary.doc2bow(text) for text in texts]
"""
Explanation: Now you can go to http://projector.tensorflow.org/ and upload the two files by clicking on Load data in the left panel.
For demo purposes I have uploaded the Doc2Vec embeddings generated from the model trained above here. You can access the Embedding projector configured with these uploaded embeddings at this link.
Using Tensorboard
For visualization purposes, the multi-dimensional embeddings that we get from the Doc2Vec model above need to be reduced to 2 or 3 dimensions, so we end up with a new 2-D or 3-D embedding that tries to preserve information from the original multi-dimensional embedding. Because the vectors are reduced to a much smaller dimension, the exact cosine/euclidean distances between them are not preserved, only relative ones, and hence, as you'll see below, the nearest-similarity results may change.
TensorBoard has two popular dimensionality reduction methods for visualizing the embeddings and also provides a custom method based on text searches:
Principal Component Analysis: PCA aims at exploring the global structure in data, and could end up losing the local similarities between neighbours. It maximizes the total variance in the lower dimensional subspace and hence, often preserves the larger pairwise distances better than the smaller ones. See an intuition behind it in this nicely explained answer on stackexchange.
T-SNE: The idea of T-SNE is to place the local neighbours close to each other, and almost completely ignoring the global structure. It is useful for exploring local neighborhoods and finding local clusters. But the global trends are not represented accurately and the separation between different groups is often not preserved (see the t-sne plots of our data below which testify the same).
Custom Projections: This is a custom method based on the text searches you define for different directions. It could be useful for finding meaningful directions in the vector space, for example, female to male, currency to country etc.
You can refer to this doc for instructions on how to use and navigate through different panels available in TensorBoard.
Visualize using PCA
The Embedding Projector computes the top 10 principal components. The menu at the left panel lets you project those components onto any combination of two or three.
<img src="pca.png">
The above plot was made using the first two principal components with total variance covered being 36.5%.
Visualize using T-SNE
Data is visualized by animating through every iteration of the t-sne algorithm. The t-sne menu at the left lets you adjust the value of its two hyperparameters. The first one is perplexity, which is basically a measure of information; it may be viewed as a knob that sets the number of effective nearest neighbors<sup>[2]</sup>. The second one is the learning rate, which defines how quickly an algorithm learns on encountering new examples/data points.
<img src="tsne.png">
The above plot was generated with perplexity 8, learning rate 10 and 500 iterations. The results can vary on successive runs, so you may not get exactly the plot above with the same hyperparameter settings, but some small clusters will start forming as above, possibly with different orientations.
2. Visualizing LDA
In this part, we will see how to visualize LDA in Tensorboard. We will be using the Document-topic distribution as the embedding vector of a document. Basically, we treat topics as the dimensions and the value in each dimension represents the topic proportion of that topic in the document.
Preprocess Text
We use the movie Plots as our documents in the corpus and remove rare and common words based on their document frequency. Below we remove words that appear in fewer than 2 documents or in more than 30% of the documents.
End of explanation
"""
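Incidentally, the PCA projection described in the previous section can be reproduced offline with plain NumPy via an SVD of the centered embedding matrix. A hedged sketch, using random data as a stand-in for the exported `movie_plot_tensor.tsv` vectors:

```python
import numpy as np

def pca_2d(vectors):
    # Center the data and take the SVD; the top-2 right singular
    # vectors span the first two principal components.
    centered = vectors - vectors.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    coords = centered.dot(vt[:2].T)                  # 2-D coordinates to plot
    explained = (s[:2] ** 2).sum() / (s ** 2).sum()  # variance covered, as in the Projector
    return coords, explained

vectors = np.random.RandomState(0).randn(100, 50)  # stand-in embedding matrix
coords, explained = pca_2d(vectors)
```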
# Set training parameters.
num_topics = 10
chunksize = 2000
passes = 50
iterations = 200
eval_every = None
# Train model
model = ldamodel.LdaModel(corpus=corpus, id2word=dictionary, chunksize=chunksize, alpha='auto', eta='auto', iterations=iterations, num_topics=num_topics, passes=passes, eval_every=eval_every)
"""
Explanation: Train LDA Model
End of explanation
"""
# Get document topics
all_topics = model.get_document_topics(corpus, minimum_probability=0)
all_topics[0]
"""
Explanation: You can also refer to this notebook before training the LDA model. It contains tips and suggestions for pre-processing the text data and for training the LDA model to get good results.
Doc-Topic distribution
Now we will use get_document_topics, which infers the topic distribution of a document. It returns a list of (topic_id, topic_probability) pairs for each document in the input corpus.
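With minimum_probability=0 every topic is returned, but if a threshold filters some out, the sparse (topic_id, topic_probability) list can be densified into a fixed-length vector before writing it to the tensor file. A small sketch with a made-up document:

```python
def to_dense(doc_topics, num_topics):
    """Turn a sparse [(topic_id, prob), ...] list into a dense vector."""
    vec = [0.0] * num_topics
    for topic_id, prob in doc_topics:
        vec[topic_id] = prob
    return vec

# Hypothetical get_document_topics output for one document, num_topics = 5.
sparse = [(0, 0.7), (3, 0.25), (4, 0.05)]
dense = to_dense(sparse, 5)
```

Each dense vector is then one tab-separated row of the tensor file.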
End of explanation
"""
# create file for tensors
with open('doc_lda_tensor.tsv','w') as w:
for doc_topics in all_topics:
for topics in doc_topics:
w.write(str(topics[1])+ "\t")
w.write("\n")
# create file for metadata
with open('doc_lda_metadata.tsv','w') as w:
w.write('Titles\tGenres\n')
for j, k in zip(dataframe.Titles, dataframe.Genres):
w.write("%s\t%s\n" % (j, k))
"""
Explanation: The above output shows the topic distribution of first document in the corpus as a list of (topic_id, topic_probability).
Now, using the topic distribution of a document as its vector embedding, we will plot all the documents in our corpus using Tensorboard.
Prepare the Input files for Tensorboard
Tensorboard takes two input files: one containing the embedding vectors, the other containing relevant metadata. As described above, we will use the topic distribution of documents as their embedding vectors. The metadata file will consist of movie titles with their genres.
End of explanation
"""
tensors = []
for doc_topics in all_topics:
doc_tensor = []
for topic in doc_topics:
if round(topic[1], 3) > 0:
doc_tensor.append((topic[0], float(round(topic[1], 3))))
# sort topics according to highest probabilities
doc_tensor = sorted(doc_tensor, key=lambda x: x[1], reverse=True)
# store vectors to add in metadata file
tensors.append(doc_tensor[:5])
# overwrite metadata file
i=0
with open('doc_lda_metadata.tsv','w') as w:
w.write('Titles\tGenres\n')
for j,k in zip(dataframe.Titles, dataframe.Genres):
w.write("%s\t%s\n" % (''.join((str(j), str(tensors[i]))),k))
i+=1
"""
Explanation: Now you can go to http://projector.tensorflow.org/ and upload these two files by clicking on Load data in the left panel.
For demo purposes I have uploaded the LDA doc-topic embeddings generated from the model trained above here. You can also access the Embedding projector configured with these uploaded embeddings at this link.
Visualize using PCA
The Embedding Projector computes the top 10 principal components. The menu at the left panel lets you project those components onto any combination of two or three.
<img src="doc_lda_pca.png">
From PCA, we get a simplex (a tetrahedron in this case) where each data point represents a document. These data points are colored according to their genres, which were given in the movie dataset.
As we can see, a lot of points cluster at the corners of the simplex. This is primarily due to the sparsity of the vectors we are using. The documents at the corners belong predominantly to a single topic (hence a large weight in a single dimension, while the other dimensions have approximately zero weight). You can modify the metadata file as explained below to see the dimension weights along with the movie title.
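One way to quantify this corner effect is to count documents whose largest topic weight is close to 1; a small sketch with made-up topic distributions:

```python
def dominant_weight(doc_topics):
    """Largest topic probability in a [(topic_id, prob), ...] list."""
    return max(prob for _, prob in doc_topics)

# Made-up distributions for three documents.
docs_topics = [
    [(0, 0.97), (1, 0.03)],          # near a corner: one topic dominates
    [(0, 0.5), (1, 0.5)],            # on an edge between two topics
    [(0, 0.3), (1, 0.3), (2, 0.4)],  # interior point of the simplex
]
corner_docs = [i for i, dt in enumerate(docs_topics) if dominant_weight(dt) > 0.9]
```

The fraction of such near-one-hot documents is exactly what piles up at the simplex corners in the PCA view.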
Now, we will append the topics with highest probability (topic_id, topic_probability) to the document's title, in order to explore which topics the cluster corners and edges dominantly belong to. For this, we just need to overwrite the metadata file as below:
End of explanation
"""
model.show_topic(topicid=0, topn=15)
"""
Explanation: Next, we upload the previous tensor file "doc_lda_tensor.tsv" and this new metadata file to http://projector.tensorflow.org/ .
<img src="topic_with_coordinate.png">
Voila! Now we can click on any point to see its top topics with their probability in that document, along with the title. As we can see in the above example, "Beverly hill cops" primarily belongs to the 0th and 1st topics, as they have the highest probabilities.
Visualize using T-SNE
In t-SNE, the data is visualized by animating through every iteration of the t-SNE algorithm. The t-SNE menu at the left lets you adjust the value of its two hyperparameters. The first is Perplexity, which is essentially a measure of information; it may be viewed as a knob that sets the number of effective nearest neighbors[2]. The second is the learning rate, which defines how quickly the algorithm adapts on encountering new examples/data points.
Now, as the topic distribution of a document is used as its embedding vector, t-SNE ends up forming clusters of documents belonging to the same topics. To understand and interpret the theme of those topics, we can use show_topic() to explore the terms each topic consists of.
<img src="doc_lda_tsne.png">
The above plot was generated with perplexity 11, learning rate 10 and 1100 iterations. The results can vary on successive runs, so you may not get exactly this plot even with the same hyperparameter settings, but small clusters should start forming as above, possibly with different orientations.
I named some of the clusters above based on the genre of their movies, using show_topic() to see relevant terms of the topic most prevalent in each cluster. Most of the clusters had documents belonging dominantly to a single topic. For example, the cluster with movies belonging primarily to topic 0 could be named Fantasy/Romance based on the terms displayed below for topic 0. You can play with the visualization yourself at this link and try to assign a label to each cluster based on the movies it contains and their dominant topic. You can see the top 5 topics of every point by hovering over it.
Notice that there are more than 10 clusters in the above image, even though we trained our model with num_topics=10. This is because a few clusters contain documents belonging to more than one topic, with approximately equal topic probability values.
End of explanation
"""
import pyLDAvis.gensim
viz = pyLDAvis.gensim.prepare(model, corpus, dictionary)
pyLDAvis.display(viz)
"""
Explanation: You can also use pyLDAvis to interpret topics more efficiently. It provides a deeper inspection of the terms highly associated with each individual topic. For this, it uses a measure called the relevance of a term to a topic, which lets users flexibly rank the terms best suited for a meaningful topic interpretation. Its weight parameter λ can be adjusted to display terms that help differentiate topics efficiently.
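For reference, the relevance of term w to topic t is defined as relevance(w, t; λ) = λ·log p(w|t) + (1 − λ)·log[p(w|t)/p(w)]: λ = 1 ranks terms by raw within-topic probability, λ = 0 by lift. A small NumPy sketch with made-up probabilities:

```python
import numpy as np

def relevance(p_w_given_t, p_w, lam):
    """lam * log p(w|t) + (1 - lam) * log(p(w|t) / p(w))."""
    p_w_given_t = np.asarray(p_w_given_t, dtype=float)
    p_w = np.asarray(p_w, dtype=float)
    return lam * np.log(p_w_given_t) + (1 - lam) * np.log(p_w_given_t / p_w)

# Made-up numbers: "the" is frequent everywhere, "wizard" is topic-specific.
p_w_given_t = np.array([0.05, 0.04])  # p(word | topic) for ["the", "wizard"]
p_w = np.array([0.05, 0.001])         # overall corpus frequency p(word)
rank_by_prob = relevance(p_w_given_t, p_w, lam=1.0)
rank_by_lift = relevance(p_w_given_t, p_w, lam=0.0)
```

At λ = 1 the common word ranks first; at λ = 0 the topic-specific word does, which is why sliding λ in pyLDAvis surfaces more discriminative terms.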
End of explanation
"""
lindsayad/jupyter_notebooks | moose-notes.ipynb | mit | import sympy as sp
sxx, sxy, syx, syy, nx, ny = sp.var('sxx sxy syx syy nx ny')
s = sp.Matrix([[sxx, sxy],[syx, syy]])
n = sp.Matrix([nx, ny])
s*n
prod = n.transpose()*s*n
prod2 = n.transpose()*(s*n)
print(prod)
print(prod2)
print(prod==prod2)
prod.shape
sp.expand(prod) == sp.expand(prod2)
lhs = n.transpose()*s
print(lhs.shape)
rhs = (n.transpose() * s * n) * n.transpose()
print(rhs.shape)
rhs2 = (n.transpose()*s) * (n*n.transpose())
print(rhs2)
rhs3 = n.transpose() * (s*n*n.transpose())
print(sp.expand(rhs) == sp.expand(rhs2) == sp.expand(rhs3))
print(n*n.transpose())
print(n.transpose()*n)
print(sp.simplify(lhs))
print(sp.simplify(rhs))
elml = lhs[0,0]
elmr = rhs[0,0]
print(elml.expand())
print(elmr.expand())
elmr.expand()
elmr.expand().subs(nx, sp.sqrt(1 - ny**2))
elmr.expand().subs(nx, sp.sqrt(1 - ny**2)).simplify()
help(sp.Expr.replace)
t = lhs - rhs
print(t)
t1 = t[0,0]
t2 = t[0,1]
print(t1)
print(t2)
t1.simplify()
ddx, ddy, ux, uy = sp.var('ddx ddy ux uy')
grad = sp.Matrix([ddx,ddy])
u = sp.Matrix([ux,uy])
print(grad.shape)
phij,mu = sp.var('phij mu')
uDuxj = sp.Matrix([phij,0])
uDuyj = sp.Matrix([0,phij])
grad*u.transpose()
jacx = n.transpose() * (mu * (grad*uDuxj.transpose() + (grad*uDuxj.transpose()).transpose())) * n
print(jacx)
sp.expand(jacx[0,0])*nx
jacy = n.transpose() * (mu * (grad*uDuyj.transpose() + (grad*uDuyj.transpose()).transpose())) * n
print(jacy)
sp.expand(jacy[0,0])*ny
sp.factor(jacy[0,0])
print(sp.factor((jacx[0,0]*n.transpose())[0,0]))
print(sp.factor((jacy[0,0]*n.transpose())[0,1]))
sJacX = mu * (grad*uDuxj.transpose() + (grad*uDuxj.transpose()).transpose())
sJacY = mu * (grad*uDuyj.transpose() + (grad*uDuyj.transpose()).transpose())
print(sJacX)
print(sJacY)
print(sp.factor((n.transpose()*sJacX)[0,0]))
print(sp.factor((n.transpose()*sJacY)[0,1]))
jacx.shape
"""
Explanation: 2/1/17
FEProblemBase::reinitMaterials only calls property computation for Material objects currently active on the given subdomain so that's good. However, it's possible that material objects "active" on the subdomain aren't actually being used in any computing objects like kernels, etc. So we would like to do some additional checking.
Alright, let's say we're computing the residual thread. Then assuming we cannot compute properties in a material in isolation, we would like to do the next best thing: only call computeQpProperties for materials that have actually been asked to supply properties to kernels, dg_kernels, boundary_conditions, and interface_kernels.
So what am I doing as of commit d7dbfe5? I am determining the needed_mat_props through ComputeResidualThread::subdomainChanged() -> FEProblemBase::prepareMaterials. In the latter method, we first ask all materials--if any materials are active on the block--to update their material property dependencies, and then we ask materials on the boundaries of the subdomain to also update their dependencies. Note that this could lead to a boundary material object being asked to update its material property dependencies twice, because we first pass to MaterialWarehouse::updateMatPropDependencyHelper all material objects as long as any one material object is active in a block sense on the subdomain, and then we pass active material boundary objects. But this overlap doesn't matter so much because our needed_mat_props is a set, so if we try to insert the same material properties multiple times, it will silently and correctly fail. Note, however, that this could also pass needed_mat_props from material objects not on the current block, so that needs to be changed.
So what happens in MaterialWarehouse::updateMatPropDependencyHelper? We add mp_deps from MaterialPropertyInterface::getMatPropDependencies. However, it should be noted that this is only done for <i>material objects</i>. Is this fine? Well, let's figure it out. It returns _material_property_dependencies, which is a set added to by calling addMatPropDependency. Now this gets called when the object that inherits from MaterialPropertyInterface calls its own getMaterialProperty method. So I hypothesize that in my simple test, if I ask for a material property in my kernel object with getMaterialProperty, that will not register in any material object's list of _material_property_dependencies, and consequently computeQpProperties will never get called. I will test that the next time I sit down at my comp.
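The registration mechanism described above can be sketched abstractly (a Python stand-in for the C++ interfaces; the names mirror but greatly simplify the MOOSE ones):

```python
class MaterialPropertyInterface:
    """Minimal sketch: consumers record the properties they request."""
    def __init__(self):
        self._material_property_dependencies = set()

    def get_material_property(self, name):
        # Requesting a property registers it as a dependency.
        self._material_property_dependencies.add(name)
        return name

    @property
    def mat_prop_dependencies(self):
        return self._material_property_dependencies

kernel = MaterialPropertyInterface()
aux = MaterialPropertyInterface()
kernel.get_material_property("diffusivity")

# Union of dependencies over all consumers, as a set (duplicate
# insertions silently and correctly fail, as noted above).
needed_mat_props = set()
for consumer in (kernel, aux):
    needed_mat_props |= consumer.mat_prop_dependencies

# A material supplying only unrequested properties would be skipped.
material_supplies = {"viscosity"}
should_compute = bool(material_supplies & needed_mat_props)
```

In this toy version the "viscosity"-only material never gets computed because no consumer registered it, which is the desired behavior described above.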
2/2/17
Three tests:
Run with one material that doesn't supply any properties. Desired behavior: computeQpProperties does not get called. Expected to Pass. With devel MOOSE: expected to Fail (expected change)
Run two materials, one that supplies properties, another that does not. Desired behavior: computeQpProperties does not get called for the material not supplying properties while the other one does. Expected behavior: both materials' compute methods get called. Fail. With devel MOOSE: expected to Fail (expected to not change)
Run with a kernel that uses a material property and an elemental aux variable that does not. Desired behavior: computeQpProperties should get called through the residual and jacobian threads but not through the aux kernel thread. Expected to Pass. With devel MOOSE: expected to Fail (expected change)
Calls to computeProperties:
ComputeResidualThread
ComputeResidualThread
0th nonlinear residual printed
ComputeJacobianThread
ComputeResidualThread
0th linear residual printed
ComputeResidualThread
1st linear residual printed
ComputeResidualThread
ComputeResidualThread
1st nonlinear residual printed
ComputeElemAuxVarsThread -> Actually this is fine because this is the Aux Kernel that is created for outputting the material property
Number of calls: 8
1. 1-4
2. 5-8
...
7. 25-28
8. 29-32
Failed tests:
random.material_serial
controls*
Failed but now passing:
element_aux_boundary
bnd_material_test
elem_aux_bc_on_bound
output.boundary
multiplicity
material_point_source_test
line_material_sampler
2/6/17
Ok my new test is failing with threads and I don't really know why. It seems like the number of calls to computing threads should be the same...
Calls to computeProperties:
ComputeResidualThread
ComputeResidualThread
0th nonlinear residual printed
ComputeJacobianThread
ComputeResidualThread
0th linear residual printed
ComputeResidualThread
1st linear residual printed
ComputeResidualThread
ComputeResidualThread
1st nonlinear residual printed
ComputeElemAuxVarsThread
Yep so thread computing pattern is the exact same. How about whether the material is the same location in memory every time?
0x7fed90 (1, 2, 8)
Increments:
1-4, 5-8, 9-12 -> average of 10.5 which is what is observed in the output
0x810b10 (3, 4, 5, 6, 7)
4/28/17
Navier Stokes module development
End of explanation
"""
import sympy as sp
sxx, sxy, syx, syy, nx, ny, mu = sp.var('sxx sxy syx syy nx ny mu')
ddx, ddy, ux, uy = sp.var('ddx ddy ux uy')
grad = sp.Matrix([ddx,ddy])
u = sp.Matrix([ux,uy])
phij,mu = sp.var('phij mu')
uDuxj = sp.Matrix([phij,0])
uDuyj = sp.Matrix([0,phij])
rateOfStrain = (grad*u.transpose() + (grad*u.transpose()).transpose()) * 1 / 2
d_rateOfStrain_d_uxj = (grad*uDuxj.transpose() + (grad*uDuxj.transpose()).transpose()) * 1 / 2
d_rateOfStrain_d_uyj = (grad*uDuyj.transpose() + (grad*uDuyj.transpose()).transpose()) * 1 / 2
print(rateOfStrain)
print(d_rateOfStrain_d_uxj)
print(d_rateOfStrain_d_uyj)
tau = rateOfStrain * 2 * mu
d_tau_d_uxj = d_rateOfStrain_d_uxj * 2 * mu
d_tau_d_uyj = d_rateOfStrain_d_uyj * 2 * mu
print(tau)
print(d_tau_d_uxj)
print(d_tau_d_uyj)
normals = sp.Matrix([nx,ny])
y_component_normal = sp.Matrix([0,ny])
x_component_normal = sp.Matrix([nx,0])
test = sp.var('test')
test_x = sp.Matrix([test,0])
test_y = sp.Matrix([0,test])
"""
Explanation: Jacobian calculations related to deviatoric stress tensor ($\hat{\tau}$) and rate of strain tensor ($\hat{\epsilon}$)
Note that the total stress tensor ($\hat{\sigma}$) is equal to the sum of the deviatoric stress tensor ($\hat{\tau}$) and the stress induced by pressure ($-p\hat{I}$), e.g.
\begin{equation}
\hat{\sigma} = \hat{\tau} - p\hat{I}
\end{equation}
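This decomposition can be sanity-checked symbolically; the sketch below rebuilds $\hat{\tau} = 2\mu\hat{\epsilon}$ for a generic 2D velocity field and confirms that the pressure term only shifts the diagonal:

```python
import sympy as sp

mu, p = sp.var('mu p')
ux, uy = sp.symbols('u_x u_y', cls=sp.Function)
x, y = sp.var('x y')

# Rate-of-strain tensor for a 2D velocity field (u_x, u_y).
grad_u = sp.Matrix([[sp.diff(ux(x, y), x), sp.diff(uy(x, y), x)],
                    [sp.diff(ux(x, y), y), sp.diff(uy(x, y), y)]])
eps_hat = (grad_u + grad_u.T) / 2   # \hat{epsilon}
tau = 2 * mu * eps_hat              # deviatoric stress \hat{tau}
sigma = tau - p * sp.eye(2)         # total stress \hat{sigma}

# The pressure part leaves the off-diagonal entries untouched ...
off_diag_equal = sp.simplify(sigma[0, 1] - tau[0, 1]) == 0
# ... and shifts each diagonal entry by exactly -p.
diag_shift = sp.simplify(sigma[0, 0] - tau[0, 0])
```

This mirrors the notation used in the cells below, where the normal tractions built from $\hat{\tau}$ pick up the $-p$ contribution only through $\vec{n}^T \hat{I} \vec{n} = 1$.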
End of explanation
"""
normals.transpose() * d_tau_d_uxj * test_y
"""
Explanation: This is an example of an off-diagonal jacobian computation: derivative with respect to $x$ while test function corresponds to $y$
Specifically this corresponds to an off-diagonal contribution corresponding to the residual term:
\begin{equation}
\vec{n}^T \cdot \hat{\tau} \cdot \vec{v}_y
\end{equation}
End of explanation
"""
sp.factor(normals.transpose() * d_tau_d_uxj * normals * normals.transpose() * test_y)
"""
Explanation: Now let's look at an off diagonal-term for:
\begin{equation}
\left(\vec{n}^T \cdot \hat{\tau} \cdot \vec{n} \right) \vec{n}^T \cdot \vec{v}_y
\end{equation}
End of explanation
"""
import sympy as sp
nx, ny, nz, mu, phij, ddx, ddy, ddz, ux, uy, uz = sp.var('nx ny nz mu phij ddx ddy ddz ux uy uz')
grad = sp.Matrix([ddx,ddy,ddz])
u = sp.Matrix([ux, uy, uz])
uDuxj = sp.Matrix([phij,0,0])
uDuyj = sp.Matrix([0,phij,0])
uDuzj = sp.Matrix([0,0,phij])
rateOfStrain = (grad*u.transpose() + (grad*u.transpose()).transpose()) * 1 / 2
d_rateOfStrain_d_uxj = (grad*uDuxj.transpose() + (grad*uDuxj.transpose()).transpose()) * 1 / 2
d_rateOfStrain_d_uyj = (grad*uDuyj.transpose() + (grad*uDuyj.transpose()).transpose()) * 1 / 2
d_rateOfStrain_d_uzj = (grad*uDuzj.transpose() + (grad*uDuzj.transpose()).transpose()) * 1 / 2
print(rateOfStrain)
print(d_rateOfStrain_d_uxj)
print(d_rateOfStrain_d_uyj)
print(d_rateOfStrain_d_uzj)
tau = rateOfStrain * 2 * mu
d_tau_d_uxj = d_rateOfStrain_d_uxj * 2 * mu
d_tau_d_uyj = d_rateOfStrain_d_uyj * 2 * mu
d_tau_d_uzj = d_rateOfStrain_d_uzj * 2 * mu
print(tau)
print(d_tau_d_uxj)
print(d_tau_d_uyj)
print(d_tau_d_uzj)
normals = sp.Matrix([nx,ny,nz])
test = sp.var('test')
test_x = sp.Matrix([test,0,0])
test_y = sp.Matrix([0,test,0])
test_z = sp.Matrix([0,0,test])
sp.factor(normals.transpose() * d_tau_d_uxj * normals * normals.transpose() * test_y)
sp.factor(normals.transpose() * d_tau_d_uxj * normals * normals.transpose() * test_z)
sp.factor(normals.transpose() * d_tau_d_uyj * normals * normals.transpose() * test_x)
"""
Explanation: Hmm...that's not very revealing...this result is completely symmetric...it doesn't tell me what the code implementation should be. Let's try 3D in order to elucidate
End of explanation
"""
(normals.transpose() * tau)[0]
sp.factor(_)
"""
Explanation: Alright, it looks like we get the normal components corresponding to residual $i$ and derivative variable $j$!!! Boom!
End of explanation
"""
from scipy.special import erf
from numpy import exp, sqrt, pi
import numpy as np
def u(x, y, u1, u2, sigma):
return (u1 + u2) / 2. - (u1 - u2) / 2. * erf(sigma * y / x)
def v(x, y, u1, u2, sigma):
return (u1 - u2) / (2. * sigma * sqrt(pi)) * exp(-(sigma * y / x)**2)
def p():
return 0
def k(x, y, k0, sigma):
return k0 * exp(-(sigma * y / x)**2)
def epsilon(x, y, epsilon0, sigma):
return epsilon0 / x * exp(-(sigma * y / x)**2)
def muT(x, y, muT0, sigma):
return muT0 * x * exp(-(sigma * y / x)**2)
def k0(u1, u2, sigma):
return 343. / 75000. * u1 * (u1 - u2) * sigma / sqrt(pi)
def epsilon0(u1, u2, sigma, Cmu):
return 343. / 22500. * Cmu * u1 * (u1 - u2)**2 * sigma**2 / pi
def muT0(u1, rho):
return 343. / 250000. * rho * u1
def Re(rho, u1, L, mu):
return rho * u1 * L / mu
u1 = 1
u2 = 0
sigma = 13.5
Cmu = 0.9
x = np.arange(10, 100.5, .5)
y = np.arange(-30, 30.5, .5)
x,y = np.meshgrid(x, y)
uplot = u(x, y, u1, u2, sigma)
vplot = v(x, y, u1, u2, sigma)
kplot = k(x, y, k0(u1, u2, sigma), sigma)
epsPlot = epsilon(x, y, epsilon0(u1, u2, sigma, Cmu), sigma)
muTplot = muT(x, y, muT0(u1, 1), sigma)
import matplotlib.pyplot as plt
plt.pcolor(x, y, uplot)
plt.colorbar()
plt.show()
plt.pcolor(x, y, vplot)
plt.colorbar()
plt.show()
plt.pcolor(x, y, kplot)
plt.colorbar()
plt.show()
plt.pcolor(x, y, epsPlot)
plt.colorbar()
plt.show()
plt.pcolor(x, y, muTplot)
plt.colorbar()
plt.show()
import sympy as sp
from sympy import diff
x, y, sigma, Cmu, rho, mu, k0, eps0 = sp.var('x y sigma Cmu rho mu k0 eps0')
def gradVec2(u_vec, x, y):
return sp.Matrix([[diff(u_vec[0], x), diff(u_vec[1],x)], [diff(u_vec[0], y), diff(u_vec[1], y)]])
def divTen2(tensor, x, y):
return sp.Matrix([diff(tensor[0,0], x) + diff(tensor[1,0], y), diff(tensor[0, 1], x) + diff(tensor[1,1], y)])
def divVec2(u_vec, x, y):
return diff(u_vec[0], x) + diff(u_vec[1], y)
u = (1 - sp.erf(sigma * y / x)) / 2
v = sp.exp(-(sigma * y / x)**2) / 2 / sigma / sp.sqrt(sp.pi)
k = k0 * sp.exp(-(sigma * y / x)**2)
eps = eps0 / x * sp.exp(-(sigma * y / x)**2)
muT = rho * Cmu * k**2 / eps
u_vec = sp.Matrix([u, v])
grad_u_vec = gradVec2(u_vec, x, y)
visc_term = divTen2((mu + muT) * (grad_u_vec + grad_u_vec.transpose()), x, y)
print(sp.simplify(divVec2(u_vec, x, y)))
visc_term
visc_term.shape
momentum_equations = rho * u_vec.transpose() * grad_u_vec - visc_term.transpose()
u_eq = momentum_equations[0]
v_eq = momentum_equations[1]
sp.simplify(v_eq)
sp.simplify(u_eq)
sp.collect(u_eq, x)
u_eq = u_eq.subs(k0, sigma / sp.sqrt(sp.pi) * 343 / 75000)
print(u_eq)
u_eq = u_eq.subs(eps0, Cmu * sigma**2 / sp.pi * 343 / 22500)
print(u_eq)
sp.simplify(u_eq)
grad_u_vec = sp.Matrix([[diff(u, x), diff(v, x)], [diff(u, y), diff(v, y)]])
grad_u_vec
pi = sp.pi  # switch from numpy's numeric pi to sympy's symbolic pi
from sympy.physics.vector import ReferenceFrame
R = ReferenceFrame('R')
u = (1 - sp.erf(sigma * R[1] / R[0])) / 2
v = sp.exp(-(sigma * R[1] / R[0])**2) / 2 / sigma / sp.sqrt(pi)
k = k0 * sp.exp(-(sigma * R[1] / R[0])**2)
eps = eps0 / R[0] * sp.exp(-(sigma * R[1] / R[0])**2)
muT = rho * Cmu * k**2 / eps
u_vec[0]
grad_u_vec = gradVec2(u_vec, x, y)
from scipy.special import erf
erf(2)
erf(-1)
erf(.99)
from numpy import pi, sqrt, exp
def d_erf(x):
return 2. / sqrt(pi) * exp(-x**2)
def d_half_erf(x):
return 2. / sqrt(pi) * exp(-(0.5*x)**2) * 0.5
d_half_erf(-2)
d_erf(-1)
print(pi)
d_erf(0)
d_erf(1)
import numpy as np
libmesh = np.loadtxt("/home/lindsayad/projects/moose/libmesh/contrib/fparser/examples/first_orig.dat")
libmesh.shape
xl = libmesh[:,0]
ypl = libmesh[:,1]
xt = np.arange(-1,1,.01)
yt = d_erf(xt)
import matplotlib.pyplot as plt
plt.close()
plt.plot(xl, ypl, label="libmesh")
plt.plot(xt, yt, label='true')
plt.legend()
plt.show()
plt.close()
plt.plot(xl, yt / ypl)
plt.show()
"""
Explanation: 5/3/17
End of explanation
"""
from sympy import *
x, y, L = var('x y L')
from random import randint, random, uniform
for i in range(30):
print('%.2f' % uniform(.1, .99))
from random import randint, random, uniform
def sym_func(x, y, L):
return round(uniform(.1, .99),1) + round(uniform(.1, .99),1) * sin(round(uniform(.1, .99),1) * pi * x / L) \
+ round(uniform(.1, .99),1) * sin(round(uniform(.1, .99),1) * pi * y / L) \
+ round(uniform(.1, .99),1) * sin(round(uniform(.1, .99),1) * pi * x * y / L)
u = sym_func(x, y, 1)
v = sym_func(x, y, 1)
p = sym_func(x, y, 1)
k = sym_func(x, y, 1)
eps = sym_func(x, y, 1)
print(u, v, p, k, eps, sep="\n")
import sympy as sp
def gradVec2(u_vec, x, y):
return sp.Matrix([[diff(u_vec[0], x), diff(u_vec[1],x)], [diff(u_vec[0], y), diff(u_vec[1], y)]])
def divTen2(tensor, x, y):
return sp.Matrix([diff(tensor[0,0], x) + diff(tensor[1,0], y), diff(tensor[0, 1], x) + diff(tensor[1,1], y)])
def divVec2(u_vec, x, y):
return diff(u_vec[0], x) + diff(u_vec[1], y)
def gradScalar2(u, x, y):
return sp.Matrix([diff(u, x), diff(u,y)])
def strain_rate(u_vec, x, y):
return gradVec2(u_vec, x, y) + gradVec2(u_vec, x, y).transpose()
def strain_rate_squared_2(u_vec, x, y):
tensor = gradVec2(u_vec, x, y) + gradVec2(u_vec, x, y).transpose()
rv = 0
for i in range(2):
for j in range(2):
rv += tensor[i, j] * tensor[i, j]
return rv
def laplace2(u, x, y):
return diff(diff(u, x), x) + diff(diff(u, y), y)
"""
Explanation: 5/10/17
Ok, the analytic_turbulence problem sucks. Even if I start with Dirichlet boundary conditions on all boundaries and initial conditions representing the supposed analytic solution, and then run a transient simulation, the solution evolves away from the supposed analytic solution. backwards_step_adaptive.i runs to completion, but that's for a relatively low inlet velocity.
Getting some pretty good results now also with backwards_step_adaptive_inlet_v_100.i which I wasn't a few days before. This could perhaps be due to the introduction of the SUPG terms. Convergence becomes a little slow at longer time steps, perhaps because of incomplete Jacobian implementation? Or poor relative scaling of the variables? Results for kin actually don't look too far off from the results in the Kuzmin paper. This simulation uses a Reynolds number of 100, which is still pretty small! Next effort will be with the Reynolds number in the Kuzmin paper of 47,625.
It's something I've observed over the years that decreasing element size can lead to decreasing solver convergence. Note that I'm not talking about convergence to the true solution. I wish I could find a good piece of literature discussing this phenomenon. There are just so many things to consider about a finite element solve; it can be fun at times and frustrating at others.
5/11/17
Ok, going to do some methods of manufactured solutions!
End of explanation
"""
pnew = Integer(0)
type(pnew)
"""
Explanation: Momentum equations
End of explanation
"""
cmu = 0.09
uvec = sp.Matrix([u, v])
mu, rho = var('mu rho')
visc_term = (-mu * divTen2(gradVec2(uvec, x, y) + gradVec2(uvec, x, y).transpose(), x, y)).transpose()
conv_term = rho * uvec.transpose() * gradVec2(uvec, x, y)
pressure_term = gradScalar2(p, x, y).transpose()
turbulent_visc_term = -(divTen2(rho * cmu * k**2 / eps * (gradVec2(uvec, x, y) + gradVec2(uvec, x, y).transpose()), x, y)).transpose()
# print(visc_term.shape, conv_term.shape, pressure_term.shape, sep="\n")
source = conv_term + visc_term + pressure_term + turbulent_visc_term
print(source[0])
print(source[1])
"""
Explanation: Traction Form
End of explanation
"""
cmu = 0.09
uvec = sp.Matrix([u, v])
mu, rho = var('mu rho')
visc_term = (-mu * divTen2(gradVec2(uvec, x, y), x, y)).transpose()
conv_term = rho * uvec.transpose() * gradVec2(uvec, x, y)
pressure_term = gradScalar2(p, x, y).transpose()
turbulent_visc_term = -(divTen2(rho * cmu * k**2 / eps * (gradVec2(uvec, x, y)), x, y)).transpose()
# print(visc_term.shape, conv_term.shape, pressure_term.shape, sep="\n")
source = conv_term + visc_term + pressure_term + turbulent_visc_term
print(source[0])
print(source[1])
"""
Explanation: Laplace Form
End of explanation
"""
-divVec2(uvec, x, y)
"""
Explanation: Pressure equation
End of explanation
"""
diff_term = -laplace2(p, x, y)
print(diff_term)
"""
Explanation: Or testing with a simple diffusion term
End of explanation
"""
cmu = 0.09
sigk = 1.
sigeps = 1.3
c1eps = 1.44
c2eps = 1.92
conv_term = rho * uvec.transpose() * gradScalar2(k, x, y)
diff_term = - divVec2((mu + rho * cmu * k**2 / eps / sigk) * gradScalar2(k, x, y), x, y)
creation_term = - rho * cmu * k**2 / eps / 2 * strain_rate_squared_2(uvec, x, y)
destruction_term = rho * eps
terms = [conv_term[0,0], diff_term, creation_term, destruction_term]
L = 0
for term in terms:
L += term
print(L)
"""
Explanation: Turbulent kinetic energy equation
End of explanation
"""
cmu = 0.09
sigk = 1.
sigeps = 1.3
c1eps = 1.44
c2eps = 1.92
conv_term = rho * uvec.transpose() * gradScalar2(eps, x, y)
diff_term = - divVec2((mu + rho * cmu * k**2 / eps / sigeps) * gradScalar2(eps, x, y), x, y)
creation_term = - rho * c1eps * cmu * k / 2 * strain_rate_squared_2(uvec, x, y)
destruction_term = rho * c2eps * eps**2 / k
terms = [conv_term[0,0], diff_term, creation_term, destruction_term]
L = 0
for term in terms:
L += term
print(L)
"""
Explanation: Turbulent dissipation
End of explanation
"""
diff_term = -laplace2(u, x, y)
print(diff_term)
def z(func, xh, yh):
u = np.zeros(xh.shape)
for i in range(0,xh.shape[0]):
for j in range(0,xh.shape[1]):
u[i][j] = func.subs({x:xh[i][j], y:yh[i][j]}).evalf()
# print(func.subs({x:xh[i][j], y:yh[i][j]}).evalf())
return u
xnum = np.arange(0, 1.01, .05)
ynum = np.arange(0, 1.01, .05)
xgrid, ygrid = np.meshgrid(xnum, ynum)
uh = z(u, xgrid, ygrid)
vh = z(v, xgrid, ygrid)
ph = z(p, xgrid, ygrid)
kh = z(k, xgrid, ygrid)
epsh = z(eps, xgrid, ygrid)
import matplotlib.pyplot as plt
plot_funcs = [uh, vh, ph, kh, epsh]
for func in plot_funcs:
plt.pcolor(xgrid, ygrid, func, cmap='coolwarm')
cbar = plt.colorbar()
plt.show()
f, g = symbols('f g', cls=Function)
f(x,y).diff(x)
vx, vy = symbols('v_x v_y', cls=Function)
mu, x, y = var('mu x y')
nx, ny = var('n_x n_y')
n = sp.Matrix([nx, ny])
v_vec = sp.Matrix([vx(x, y), vy(x, y)])
sigma = strain_rate(v_vec, x, y)
tw = n.transpose() * sigma - n.transpose() * sigma * n * n.transpose()
tw[0]
tw[1]
vx, vy = symbols('v_x v_y', cls=Function)
mu, x, y = var('mu x y')
nx, ny = var('n_x n_y')
n = sp.Matrix([Integer(0), Integer(1)])
v_vec = sp.Matrix([vx(x, y), 0])
sigma = strain_rate(v_vec, x, y)
tw = n.transpose() * sigma - n.transpose() * sigma * n * n.transpose()
tw[0]
"""
Explanation: Simple diffusion
End of explanation
"""
u
import sympy as sp
def gradVec2(u_vec, x, y):
return sp.Matrix([[diff(u_vec[0], x), diff(u_vec[1],x)], [diff(u_vec[0], y), diff(u_vec[1], y)]])
def divTen2(tensor, x, y):
return sp.Matrix([diff(tensor[0,0], x) + diff(tensor[1,0], y), diff(tensor[0, 1], x) + diff(tensor[1,1], y)])
def divVec2(u_vec, x, y):
return diff(u_vec[0], x) + diff(u_vec[1], y)
def gradScalar2(u, x, y):
return sp.Matrix([diff(u, x), diff(u,y)])
def strain_rate(u_vec, x, y):
return gradVec2(u_vec, x, y) + gradVec2(u_vec, x, y).transpose()
def strain_rate_squared_2(u_vec, x, y):
tensor = gradVec2(u_vec, x, y) + gradVec2(u_vec, x, y).transpose()
rv = 0
for i in range(2):
for j in range(2):
rv += tensor[i, j] * tensor[i, j]
return rv
def laplace2(u, x, y):
return diff(diff(u, x), x) + diff(diff(u, y), y)
def L_momentum_traction(uvec, k, eps, x, y):
cmu = 0.09
mu, rho = sp.var('mu rho')
visc_term = (-mu * divTen2(gradVec2(uvec, x, y) + gradVec2(uvec, x, y).transpose(), x, y)).transpose()
conv_term = rho * uvec.transpose() * gradVec2(uvec, x, y)
pressure_term = gradScalar2(p, x, y).transpose()
turbulent_visc_term = -(divTen2(rho * cmu * k**2 / eps * (gradVec2(uvec, x, y) + gradVec2(uvec, x, y).transpose()), x, y)).transpose()
# print(visc_term.shape, conv_term.shape, pressure_term.shape, sep="\n")
source = conv_term + visc_term + pressure_term + turbulent_visc_term
return source
def bc_terms_momentum_traction(uvec, nvec, k, eps, x, y):
cmu = 0.09
mu, rho = sp.var('mu rho')
    visc_term = (-mu * nvec.transpose() * (gradVec2(uvec, x, y) + gradVec2(uvec, x, y).transpose())).transpose()
    turbulent_visc_term = -(nvec.transpose() * (rho * cmu * k**2 / eps * (gradVec2(uvec, x, y) + gradVec2(uvec, x, y).transpose()))).transpose()
return visc_term + turbulent_visc_term
def L_momentum_laplace(uvec, k, eps, x, y):
cmu = 0.09
mu, rho = var('mu rho')
visc_term = (-mu * divTen2(gradVec2(uvec, x, y), x, y)).transpose()
conv_term = rho * uvec.transpose() * gradVec2(uvec, x, y)
pressure_term = gradScalar2(p, x, y).transpose()
turbulent_visc_term = -(divTen2(rho * cmu * k**2 / eps * (gradVec2(uvec, x, y)), x, y)).transpose()
# print(visc_term.shape, conv_term.shape, pressure_term.shape, sep="\n")
source = conv_term + visc_term + pressure_term + turbulent_visc_term
return source
def L_pressure(uvec, x, y):
return -divVec2(uvec, x, y)
def L_kin(uvec, k, eps, x, y):
cmu = 0.09
sigk = 1.
sigeps = 1.3
c1eps = 1.44
c2eps = 1.92
conv_term = rho * uvec.transpose() * gradScalar2(k, x, y)
diff_term = - divVec2((mu + rho * cmu * k**2 / eps / sigk) * gradScalar2(k, x, y), x, y)
creation_term = - rho * cmu * k**2 / eps / 2 * strain_rate_squared_2(uvec, x, y)
destruction_term = rho * eps
terms = [conv_term[0,0], diff_term, creation_term, destruction_term]
L = 0
for term in terms:
L += term
return L
def L_eps(uvec, k, eps, x, y):
cmu = 0.09
sigk = 1.
sigeps = 1.3
c1eps = 1.44
c2eps = 1.92
conv_term = rho * uvec.transpose() * gradScalar2(eps, x, y)
diff_term = - divVec2((mu + rho * cmu * k**2 / eps / sigeps) * gradScalar2(eps, x, y), x, y)
creation_term = - rho * c1eps * cmu * k / 2 * strain_rate_squared_2(uvec, x, y)
destruction_term = rho * c2eps * eps**2 / k
terms = [conv_term[0,0], diff_term, creation_term, destruction_term]
L = 0
for term in terms:
L += term
return L
import re

def prep_moose_input(sym_expr):
    rep1 = re.sub(r'\*\*', r'^', str(sym_expr))
    rep2 = re.sub(r'mu', r'${mu}', rep1)
    rep3 = re.sub(r'rho', r'${rho}', rep2)
    return rep3
def write_all_functions():
target = open('/home/lindsayad/python/mms_input.txt','w')
target.write("[Functions]" + "\n")
target.write(" [./u_source_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + prep_moose_input(L_momentum_traction(uVecNew, kinNew, epsilonNew, x, y)[0]) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./v_source_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + prep_moose_input(L_momentum_traction(uVecNew, kinNew, epsilonNew, x, y)[1]) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./p_source_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + prep_moose_input(L_pressure(uVecNew, x, y)) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./kin_source_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + prep_moose_input(L_kin(uVecNew, kinNew, epsilonNew, x, y)) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./epsilon_source_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + prep_moose_input(L_eps(uVecNew, kinNew, epsilonNew, x, y)) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./u_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + str(uNew) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./v_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + str(vNew) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./p_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + str(pNew) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./kin_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + str(kinNew) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./epsilon_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + str(epsilonNew) + "'" + "\n")
target.write(" [../]" + "\n")
target.write("[]" + "\n")
target.close()
def write_reduced_functions():
target = open('/home/lindsayad/python/mms_input.txt','w')
target.write("[Functions]" + "\n")
target.write(" [./u_source_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + prep_moose_input(L_momentum_traction(uVecNew, kinNew, epsilonNew, x, y)[0]) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./kin_source_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + prep_moose_input(L_kin(uVecNew, kinNew, epsilonNew, x, y)) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./epsilon_source_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + prep_moose_input(L_eps(uVecNew, kinNew, epsilonNew, x, y)) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./u_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + str(uNew) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./kin_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + str(kinNew) + "'" + "\n")
target.write(" [../]" + "\n")
target.write(" [./epsilon_func]" + "\n")
target.write(" type = ParsedFunction" + "\n")
target.write(" value = '" + str(epsilonNew) + "'" + "\n")
target.write(" [../]" + "\n")
target.write("[]" + "\n")
target.close()
yStarPlus = 11.06
# uNew = yStarPlus**2 / y + u * (y - 1.) * 200
# uNew = u * (y - 1.) * 200
# vNew = Integer(0)
# vNew = v * (y - 1.5) * 200
# pNew = Integer(0)
# # Converges
# uNew = u
# vNew = v
# pNew = p
# kinNew = k
# epsilonNew = eps
# # Does not converge
# uNew = u * 200
# vNew = v * 200
# pNew = p * 200
# kinNew = k * 200
# epsilonNew = eps * 200
# # Converges
# uNew = u * (y - 1.)
# vNew = v
# pNew = p
# kinNew = k
# epsilonNew = eps
# # Converges
# uNew = u * (y - 1.) + 1. / y
# vNew = v
# pNew = p
# kinNew = k
# epsilonNew = eps
# # Does not converge
# uNew = u * (y - 1.) + yStarPlus**2 / y
# vNew = v
# pNew = p
# kinNew = k
# epsilonNew = eps
# # Converges
# uNew = u * (y - 1.) + 1.1 / y
# vNew = 0
# pNew = 0
# kinNew = k
# epsilonNew = eps
# Want to test natural boundary condition
uNew = 0.5 + sin(pi * x / 2) + sin(pi * y / 2)
vNew = 0
pNew = 0
kinNew = k
epsilonNew = eps
uVecNew = sp.Matrix([uNew, vNew])
write_reduced_functions()
print(u)
print(v)
print(p)
print(k)
print(eps)
"""
Explanation: Verified RANS kernels
INSK
INSEpsilon
INSMomentumTurbulentViscosityTractionForm
INSMomentumTurbulentViscosityLaplaceForm
INSMomentumShearStressWallFunction with |u|/yStarPlus branch of uTau
with exp_form = false.
End of explanation
"""
vx, vy = symbols('v_x v_y', cls=Function)
mu, x, y = var('mu x y')
nx, ny = var('n_x n_y')
n = sp.Matrix([Integer(0), Integer(1)])
v_vec = sp.Matrix([vx(x, y), 0])
kinFunc, epsFunc = symbols('kinFunc epsFunc', cls=Function)
blah = bc_terms_momentum_traction(uVecNew, n, kinNew, epsilonNew, x, y)
type(blah)
blah[0]
sigma = mu * strain_rate(v_vec, x, y)
tw = n.transpose() * sigma - n.transpose() * sigma * n * n.transpose()
tw[0]
(n.transpose() * sigma)[0]
cmu = 0.09
mu, rho = sp.var('mu rho')
visc_term = (-mu * n.transpose() * (gradVec2(v_vec, x, y) + gradVec2(v_vec, x, y).transpose())).transpose()
turbulent_visc_term = -(n.transpose() * (rho * cmu * k**2 / eps * (gradVec2(v_vec, x, y) + gradVec2(v_vec, x, y).transpose()))).transpose()
visc_term
print(visc_term)
yStarPlus = 11.06
# uNew = yStarPlus**2 / y + u * (y - 1.) * 200
# uNew = u * (y - 1.) * 200
# vNew = Integer(0)
# vNew = v * (y - 1.5) * 200
# pNew = Integer(0)
# # Converges
# uNew = u
# vNew = v
# pNew = p
# kinNew = k
# epsilonNew = eps
# # Does not converge
# uNew = u * 200
# vNew = v * 200
# pNew = p * 200
# kinNew = k * 200
# epsilonNew = eps * 200
# # Converges
# uNew = u * (y - 1.)
# vNew = v
# pNew = p
# kinNew = k
# epsilonNew = eps
# # Converges
# uNew = u * (y - 1.) + 1. / y
# vNew = v
# pNew = p
# kinNew = k
# epsilonNew = eps
# # Does not converge
# uNew = u * (y - 1.) + yStarPlus**2 / y
# vNew = v
# pNew = p
# kinNew = k
# epsilonNew = eps
# # Converges
# uNew = u * (y - 1.) + 1.1 / y
# vNew = 0
# pNew = 0
# kinNew = k
# epsilonNew = eps
from moose_calc_routines import *
from sympy import *
init_printing()
yStarPlus = 1.1
vx, vy = symbols('v_x v_y', cls=Function, positive=True, real=True)
mu, x, y = var('mu x y', real=True, positive=True)
nx, ny = var('n_x n_y')
n = sp.Matrix([Integer(0), Integer(1)])
v_vec = sp.Matrix([vx(x, y), 0])
kinFunc, epsFunc = symbols('k_f \epsilon_f', cls=Function)
u = sym_func(x, y, 1)
v = sym_func(x, y, 1)
p = sym_func(x, y, 1)
k = sym_func(x, y, 1)
eps = sym_func(x, y, 1)
# Want to test wall function bc
uNew = mu * yStarPlus**2 / y
# uNew = 0.5 + sin(pi * x / 2) + sin(pi * y / 2) + mu * yStarPlus**2 / y
vNew = 0
pNew = 0
kinNew = k
epsilonNew = eps
# # Want to test natural boundary condition
# uNew = 0.5 + sin(pi * x / 2) + sin(pi * y / 2)
# vNew = 0
# pNew = 0
# kinNew = k
# epsilonNew = eps
uVecNew = sp.Matrix([uNew, vNew])
numeric = bc_terms_momentum_traction(uVecNew, n, kinNew, epsilonNew, x, y, symbolic=False)
numeric_wall_function = wall_function_momentum_traction(uVecNew, n, kinNew, epsilonNew, x, y, "kin", symbolic=False)
symbolic = bc_terms_momentum_traction(v_vec, n, kinFunc(x, y), epsFunc(x, y), x, y, symbolic=True)
wall_function = wall_function_momentum_traction(v_vec, n, kinFunc(x, y), epsFunc(x, y), x, y, "kin", symbolic=True)
write_reduced_functions(uVecNew, kinNew, epsilonNew, x, y)
expr = numeric[0] - numeric_wall_function[0]
expr.subs(y, 1).collect('mu')
expr = symbolic[0] - wall_function[0]
# print(expr)
# expr
newexp = expr.subs(vx(x, y), vx(y)).subs(Abs(vx(y)), vx(y))
# print(newexp)
newexp
dsolve(newexp, vx(y))
symbolic[0]
wall_function[0]
from moose_calc_routines import *
from sympy import *
import sympy as sp
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
init_printing()
x, y = var('x y')
# # INS turbulence
# u = 0.4*sin(0.5*pi*x) + 0.4*sin(pi*y) + 0.7*sin(0.2*pi*x*y) + 0.5
# v = 0.6*sin(0.8*pi*x) + 0.3*sin(0.3*pi*y) + 0.2*sin(0.3*pi*x*y) + 0.3
# p = 0.5*sin(0.5*pi*x) + 1.0*sin(0.3*pi*y) + 0.5*sin(0.2*pi*x*y) + 0.5
# k = 0.4*sin(0.7*pi*x) + 0.9*sin(0.7*pi*y) + 0.7*sin(0.4*pi*x*y) + 0.4
# eps = 0.6*sin(0.3*pi*x) + 0.9*sin(0.9*pi*y) + 0.8*sin(0.6*pi*x*y) + 0.5
# uvec = sp.Matrix([u, v])
# n = sp.Matrix([Integer(0), Integer(1)])
# INS only
u = 0.4*sin(0.5*pi*x) + 0.4*sin(pi*y) + 0.7*sin(0.2*pi*x*y) + 0.5
v = 0.6*sin(0.8*pi*x) + 0.3*sin(0.3*pi*y) + 0.2*sin(0.3*pi*x*y) + 0.3
p = 0.5*sin(0.5*pi*x) + 1.0*sin(0.3*pi*y) + 0.5*sin(0.2*pi*x*y) + 0.5
uvec = sp.Matrix([u, v])
nvec = sp.Matrix([Integer(0), Integer(1)])
nvecs = {'left' : sp.Matrix([-1, 0]), 'top' : sp.Matrix([0, 1]), \
'right' : sp.Matrix([1, 0]), 'bottom' : sp.Matrix([0, -1])}
source = {bnd_name :
prep_moose_input(-bc_terms_momentum_traction_no_turbulence(uvec, nvec, p, x, y, parts=True)[0])
for bnd_name, nvec in nvecs.items()}
source
surface_terms = bc_terms_momentum_traction_no_turbulence(uvec, nvec, p, x, y, parts=True)
tested_bc = no_bc_bc(uvec, nvec, p, x, y, parts=True)
needed_func = tested_bc - surface_terms
print(prep_moose_input(needed_func[0]))
print(prep_moose_input(needed_func[1]))
surface_terms = bc_terms_momentum_traction(uvec, n, p, k, eps, x, y, symbolic=False, parts=True)
tested_bc = wall_function_momentum_traction(uvec, n, p, k, eps, x, y, "kin", symbolic=False, parts=True)
needed_func = tested_bc - surface_terms
needed_func
print(prep_moose_input(needed_func[0]))
print(prep_moose_input(-surface_terms[0]))
wall_function_momentum_traction(uvec, n, p, k, eps, x, y, "kin", symbolic=True, parts=True)
surface_terms = bc_terms_diffusion(u, n, x, y)
tested_bc = vacuum(u, n)
surface_terms
tested_bc
needed_func = tested_bc - surface_terms
needed_func
print(needed_func)
"""
Explanation: Ok, so apparently just scaling every variable's manufactured solution by 200 causes MOOSE convergence issues. Sigh
End of explanation
"""
from moose_calc_routines import *
from sympy import *
import sympy as sp
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
init_printing()
x, y = var('x y')
# INS turbulence
u = 0.4*sin(0.5*pi*x) + 0.4*sin(pi*y) + 0.7*sin(0.2*pi*x*y) + 0.5
v = 0.6*sin(0.8*pi*x) + 0.3*sin(0.3*pi*y) + 0.2*sin(0.3*pi*x*y) + 0.3
p = 0.5*sin(0.5*pi*x) + 1.0*sin(0.3*pi*y) + 0.5*sin(0.2*pi*x*y) + 0.5
k = 0.4*sin(0.7*pi*x) + 0.9*sin(0.7*pi*y) + 0.7*sin(0.4*pi*x*y) + 0.4
eps = 0.6*sin(0.3*pi*x) + 0.9*sin(0.9*pi*y) + 0.8*sin(0.6*pi*x*y) + 0.5
# # INS only
# u = 0.4*sin(0.5*pi*x) + 0.4*sin(pi*y) + 0.7*sin(0.2*pi*x*y) + 0.5
# v = 0.6*sin(0.8*pi*x) + 0.3*sin(0.3*pi*y) + 0.2*sin(0.3*pi*x*y) + 0.3
# p = 0.5*sin(0.5*pi*x) + 1.0*sin(0.3*pi*y) + 0.5*sin(0.2*pi*x*y) + 0.5
uvec = sp.Matrix([u, v])
nvecs = {'left' : sp.Matrix([-1, 0]), 'top' : sp.Matrix([0, 1]), \
'right' : sp.Matrix([1, 0]), 'bottom' : sp.Matrix([0, -1])}
source = {bnd_name :
prep_moose_input(#ins_epsilon_wall_function_bc(nvec, k, eps, x, y)
-bc_terms_eps(nvec, k, eps, x, y)[0,0])
for bnd_name, nvec in nvecs.items()}
# anti_bounds = {'left' : 'top right bottom', 'top' : 'right bottom left',
# 'right' : 'bottom left top', 'bottom' : 'left top right'}
anti_bounds = {'left' : 'top right bottom left', 'top' : 'right bottom left top',
'right' : 'bottom left top right', 'bottom' : 'left top right bottom'}
h_list = ['5', '10']
base = "k_epsilon_general_bc"
h_array = np.array([.2, .1])
volume_source = {'u' : prep_moose_input(L_momentum_traction(uvec, p, k, eps, x, y)[0]),
'v' : prep_moose_input(L_momentum_traction(uvec, p, k, eps, x, y)[1]),
'p' : prep_moose_input(L_pressure(uvec, x, y)),
'k' : prep_moose_input(L_kin(uvec, k, eps, x, y)),
'eps' : prep_moose_input(L_eps(uvec, k , eps, x, y))}
diri_func = {'u' : u, 'v' : v, 'p' : p, 'k' : k, 'eps' : eps}
a_string = "b"
a_string += "c"
a_string
"a" + None
optional_save_string="epsilon_wall_func_natural"
plot_order_accuracy('left', h_array, base, optional_save_string=optional_save_string)
plot_order_accuracy('right', h_array, base, optional_save_string=optional_save_string)
plot_order_accuracy('top', h_array, base, optional_save_string=optional_save_string)
plot_order_accuracy('bottom', h_array, base, optional_save_string=optional_save_string)
"""
Explanation: 5/16/17
End of explanation
"""
string = "Functions" + str('u')
string
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(0, 2.1, .1)
u = 2*x**2 + 4*x
v = 3*x**2 + 2*x + 1
plt.plot(x, u)
plt.plot(x, v)
plt.show()
x = np.arange(0, 2.1, .1)
u = 1 - 2*x + 2*x**2
v = x**2
plt.plot(x, u)
plt.plot(x, v)
plt.show()
"""
Explanation: Tasks accomplished today:
Showed formal order accuracy of natural boundary condition using MMS with pure navier stokes
Showed grid convergence with "kinetic" branch of INSMomentumShearStressWallFunctionBC but with accuracy order between formal order and formal order - 1 for u, v, and p for top and bottom boundaries. Unable to solve for left and right boundaries. Formal order accuracy for $\epsilon$ and k for solved cases.
Showed grid convergence with "velocity" branch of INSMomentumShearStressWallFunctionBC but with accuracy order between formal order and formal order - 1 for top and bottom boundaries; formal order accuracy for $\epsilon$ and k for top and bottom boundaries. Between formal order - 1 and formal order - 2 for p, u, and v for left boundary; between formal order and formal order - 1 for $\epsilon$ and k for left boundary. Unable to solve for right boundary.
Demonstrated that just by introducing a small error in the MOOSE code (multiplying a term by 1.1), we can destroy grid convergence by two orders. This makes me feel better about the fact that we're not achieving the exact formal order of accuracy with the INSMomentumShearStressWallFunctionBC but we're still within an order of the formal order.
The natural-boundary-condition results suggest the MOOSE Python calculation routine for the integrated-by-parts terms is wrong!
End of explanation
"""
|
chris1610/pbpython | notebooks/pandas-styling.ipynb | bsd-3-clause | import numpy as np
import pandas as pd
from sparklines import sparklines
df = pd.read_excel('https://github.com/chris1610/pbpython/blob/master/data/2018_Sales_Total.xlsx?raw=true')
df.head()
"""
Explanation: Introduction to Pandas Style API
Content to accompany blog post on Practical Business Python
End of explanation
"""
df.groupby('name')['ext price'].agg(['mean', 'sum'])
"""
Explanation: Do a simple groupby to look at the performance by customer
End of explanation
"""
(df.groupby('name')['ext price']
.agg(['mean', 'sum'])
.style.format('${0:,.2f}'))
"""
Explanation: Style the currency using Python's string formatting
End of explanation
"""
(df.groupby('name')['ext price']
.agg(['mean', 'sum'])
.style.format('${0:,.0f}'))
"""
Explanation: Round the results to 0 decimals
End of explanation
"""
monthly_sales = df.groupby([pd.Grouper(key='date', freq='M')])['ext price'].agg(['sum']).reset_index()
monthly_sales['pct_of_total'] = monthly_sales['sum'] / df['ext price'].sum()
monthly_sales
"""
Explanation: More complex analysis of performance by month
End of explanation
"""
format_dict = {'sum':'${0:,.0f}', 'date': '{:%m-%Y}', 'pct_of_total': '{:.2%}'}
monthly_sales.style.format(format_dict).hide_index()
"""
Explanation: Use a format dictionary to control formatting per column
End of explanation
"""
(monthly_sales
.style
.format(format_dict)
.hide_index()
.highlight_max(color='lightgreen')
.highlight_min(color='#cd4f39'))
"""
Explanation: Introduce the highlight functions
End of explanation
"""
(monthly_sales
.style
.format(format_dict)
.hide_index()
.bar(color='#FFA07A', vmin=100_000, subset=['sum'], align='zero')
.bar(color='lightgreen', vmin=0, subset=['pct_of_total'], align='zero')
.set_caption('2018 Sales Performance'))
(monthly_sales.style
.format(format_dict)
.background_gradient(subset=['sum'],cmap='BuGn'))
"""
Explanation: Introduce bar formatting for table cells
End of explanation
"""
def sparkline_str(x):
bins=np.histogram(x)[0]
sl = ''.join(sparklines(bins))
return sl
sparkline_str.__name__ = "sparkline"
df.groupby('name')[['quantity', 'ext price']].agg(['mean', sparkline_str])
"""
Explanation: Cool example of using sparklines from Peter Baumgartner
https://twitter.com/pmbaumgartner/status/1084645440224559104
End of explanation
"""
|
chloeyangu/BigDataAnalytics | Terrorisks/Code/.ipynb_checkpoints/BT4221- Code 1-checkpoint.ipynb | mit | import pandas as pd
import numpy as np
terror = pd.read_csv('file.csv', encoding='ISO-8859-1')
cleanedforuse = terror.filter(['imonth', 'iday', 'region','property','propextent','attacktype1','weaptype1','nperps','success','multiple','specificity'])
final = cleanedforuse[~np.isnan(cleanedforuse).any(axis=1)]
final.head()
import sqlite3
conn = sqlite3.connect('Terrorisks.db')
final.to_sql('final', con=conn, if_exists='replace')
df = pd.read_sql_query('SELECT * FROM final', conn)
df.head(10)
"""
Explanation: Connecting to Database
End of explanation
"""
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
from patsy import dmatrices
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_curve, auc
y, X = dmatrices('success ~ C(imonth) + C(iday) + region + C(property) + C(propextent) + C(attacktype1) + C(weaptype1)+ C(nperps) + specificity', df, return_type="dataframe")
print(X)
y = np.ravel(y)
# instantiate a logistic regression model, and fit with X and y
model = LogisticRegression()
model = model.fit(X, y)
# what percentage were successful?
print("Benchmark:")
b = y.mean()
print(b)
# check the accuracy on the training set
a = model.score(X, y)
print("Score:")
print(a)
model.coef_
# evaluate the model by splitting into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model2 = LogisticRegression()
model2.fit(X_train, y_train)
# predict class labels for the test set
predicted = model2.predict(X_test)
print (predicted)
# generate class probabilities
probs = model2.predict_proba(X_test)
print (probs)
# generate evaluation metrics
print (metrics.accuracy_score(y_test, predicted))
print (metrics.roc_auc_score(y_test, probs[:, 1]))
print (metrics.confusion_matrix(y_test, predicted))
print (metrics.classification_report(y_test, predicted))
scores = cross_val_score(LogisticRegression(), X, y, scoring='accuracy', cv=10)
print (scores)
print (scores.mean())
false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, predicted)
roc_auc = auc(false_positive_rate, true_positive_rate)
print('AUC = %0.4f'% roc_auc)
plt.title('Receiver Operating Characteristic')
plt.plot(false_positive_rate, true_positive_rate, 'b',label='AUC = %0.2f'% roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.2])
plt.ylim([-0.1,1.2])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
"""
Explanation: LOGISTIC REGRESSION
End of explanation
"""
y, X = dmatrices('multiple ~ C(imonth) + C(iday) + region + C(property) + C(propextent) + C(attacktype1) + C(weaptype1)+ C(nperps) + specificity', df, return_type="dataframe")
y = np.ravel(y)
# instantiate a logistic regression model, and fit with X and y
model = LogisticRegression()
model = model.fit(X, y)
# what percentage had multiple?
print("Benchmark:")
b = y.mean()
print(b)
# check the accuracy on the training set
a = model.score(X, y)
print("Score:")
print(a)
model.coef_
# evaluate the model by splitting into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model2 = LogisticRegression()
model2.fit(X_train, y_train)
# predict class labels for the test set
predicted = model2.predict(X_test)
print (predicted)
# generate class probabilities
probs = model2.predict_proba(X_test)
print (probs)
# generate evaluation metrics
print (metrics.accuracy_score(y_test, predicted))
print (metrics.roc_auc_score(y_test, probs[:, 1]))
print (metrics.confusion_matrix(y_test, predicted))
print (metrics.classification_report(y_test, predicted))
scores = cross_val_score(LogisticRegression(), X, y, scoring='accuracy', cv=10)
print (scores)
print (scores.mean())
false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, predicted)
roc_auc = auc(false_positive_rate, true_positive_rate)
print('AUC = %0.4f'% roc_auc)
plt.title('Receiver Operating Characteristic')
plt.plot(false_positive_rate, true_positive_rate, 'b',label='AUC = %0.2f'% roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.2])
plt.ylim([-0.1,1.2])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
"""
Explanation: Logistic Regression - Success
Logistic Regression - MULTIPLE
End of explanation
"""
import numpy as np
from sklearn import preprocessing
from sklearn.metrics import roc_curve, auc
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import matplotlib.pyplot as plt
import pandas as pd
y = df['multiple']
X = df.filter(['imonth', 'iday', 'region','property',
'propextent','attacktype1','weaptype1','nperps','specificity'])
Xone= pd.get_dummies(X, prefix='month', columns=['imonth'])
Xtwo= pd.get_dummies(Xone, prefix='day', columns=['iday'])
Xthree= pd.get_dummies(Xtwo, prefix='region', columns=['region'])
Xfour= pd.get_dummies(Xthree, prefix='attacktype', columns=['attacktype1'])
Xfive= pd.get_dummies(Xfour, prefix='weapontype', columns=['weaptype1'])
Xsix= pd.get_dummies(Xfive, prefix='specificity', columns=['specificity'])
features_train, features_test,target_train, target_test = train_test_split(Xsix,y, test_size = 0.2,random_state=0)
print("Benchmark: " )
print(1-(y.mean()))
#Random Forest
forest=RandomForestClassifier(n_estimators=10)
forest = forest.fit( features_train, target_train)
output = forest.predict(features_test).astype(int)
forest.score(features_train, target_train )
false_positive_rate, true_positive_rate, thresholds = roc_curve(target_test, output)
roc_auc = auc(false_positive_rate, true_positive_rate)
print('AUC = %0.4f'% roc_auc)
plt.title('Receiver Operating Characteristic')
plt.plot(false_positive_rate, true_positive_rate, 'b',label='AUC = %0.2f'% roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.2])
plt.ylim([-0.1,1.2])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
scores = cross_val_score(forest, Xsix, y, scoring='accuracy', cv=10)
print (scores)
print (scores.mean())
"""
Explanation: RANDOM FOREST
Random Forest- MULTIPLE
End of explanation
"""
y = df['success']
X = df.filter(['imonth', 'iday', 'region','property',
'propextent','attacktype1','weaptype1','nperps','specificity'])
features_train, features_test,target_train, target_test = train_test_split(X,y, test_size = 0.2,random_state=0)
#Random Forest
forest=RandomForestClassifier(n_estimators=10)
forest = forest.fit( features_train, target_train)
output = forest.predict(features_test).astype(int)
score = forest.score(features_train, target_train)
print("Benchmark: " )
print((y.mean()))
print('Our Accuracy:')
print(score)
false_positive_rate, true_positive_rate, thresholds = roc_curve(target_test, output)
roc_auc = auc(false_positive_rate, true_positive_rate)
print('AUC = %0.4f'% roc_auc)
plt.title('Receiver Operating Characteristic')
plt.plot(false_positive_rate, true_positive_rate, 'b',label='AUC = %0.2f'% roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.2])
plt.ylim([-0.1,1.2])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
"""
Explanation: Random Forest- SUCCESS
End of explanation
"""
from sklearn.tree import _tree
def leaf_depths(tree, node_id = 0):
'''
tree.children_left and tree.children_right store ids
of left and right chidren of a given node
'''
left_child = tree.children_left[node_id]
right_child = tree.children_right[node_id]
'''
If a given node is terminal,
both left and right children are set to _tree.TREE_LEAF
'''
if left_child == _tree.TREE_LEAF:
'''
Set depth of terminal nodes to 0
'''
depths = np.array([0])
else:
'''
Get depths of left and right children and
increment them by 1
'''
left_depths = leaf_depths(tree, left_child) + 1
right_depths = leaf_depths(tree, right_child) + 1
depths = np.append(left_depths, right_depths)
return depths
def leaf_samples(tree, node_id = 0):
left_child = tree.children_left[node_id]
right_child = tree.children_right[node_id]
if left_child == _tree.TREE_LEAF:
samples = np.array([tree.n_node_samples[node_id]])
else:
left_samples = leaf_samples(tree, left_child)
right_samples = leaf_samples(tree, right_child)
samples = np.append(left_samples, right_samples)
return samples
def draw_tree(ensemble, tree_id=0):
plt.figure(figsize=(8,8))
plt.subplot(211)
tree = ensemble.estimators_[tree_id].tree_
depths = leaf_depths(tree)
plt.hist(depths, histtype='step', color='#9933ff',
bins=range(min(depths), max(depths)+1))
plt.xlabel("Depth of leaf nodes (tree %s)" % tree_id)
plt.subplot(212)
samples = leaf_samples(tree)
plt.hist(samples, histtype='step', color='#3399ff',
bins=range(min(samples), max(samples)+1))
plt.xlabel("Number of samples in leaf nodes (tree %s)" % tree_id)
plt.show()
def draw_ensemble(ensemble):
plt.figure(figsize=(8,8))
plt.subplot(211)
depths_all = np.array([], dtype=int)
for x in ensemble.estimators_:
tree = x.tree_
depths = leaf_depths(tree)
depths_all = np.append(depths_all, depths)
plt.hist(depths, histtype='step', color='#ddaaff',
bins=range(min(depths), max(depths)+1))
plt.hist(depths_all, histtype='step', color='#9933ff',
bins=range(min(depths_all), max(depths_all)+1),
weights=np.ones(len(depths_all))/len(ensemble.estimators_),
linewidth=2)
plt.xlabel("Depth of leaf nodes")
samples_all = np.array([], dtype=int)
plt.subplot(212)
for x in ensemble.estimators_:
tree = x.tree_
samples = leaf_samples(tree)
samples_all = np.append(samples_all, samples)
plt.hist(samples, histtype='step', color='#aaddff',
bins=range(min(samples), max(samples)+1))
plt.hist(samples_all, histtype='step', color='#3399ff',
bins=range(min(samples_all), max(samples_all)+1),
weights=np.ones(len(samples_all))/len(ensemble.estimators_),
linewidth=2)
plt.xlabel("Number of samples in leaf nodes")
plt.show()
draw_tree(forest)
draw_ensemble(forest)
y = df['multiple']
X = df.filter(['imonth', 'iday', 'region','property',
'propextent','attacktype1','weaptype1','nperps','specificity'])
features_train, features_test,target_train, target_test = train_test_split(X,y, test_size = 0.2,random_state=0)
#Random Forest
forest=RandomForestClassifier(n_estimators=10, max_depth = 16)
forest = forest.fit( features_train, target_train)
output = forest.predict(features_test).astype(int)
score = forest.score(features_train, target_train)
print("Benchmark: " )
print(1-(y.mean()))
print('Our Accuracy:')
print(score)
false_positive_rate, true_positive_rate, thresholds = roc_curve(target_test, output)
roc_auc = auc(false_positive_rate, true_positive_rate)
print('AUC = %0.4f'% roc_auc)
plt.title('Receiver Operating Characteristic')
plt.plot(false_positive_rate, true_positive_rate, 'b',label='AUC = %0.2f'% roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.2])
plt.ylim([-0.1,1.2])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
"""
Explanation: Preventing overfitting of the tree for the 'multiple' model
The results here differ from those shown during the presentation because a different sample was used; as such, results may vary slightly.
End of explanation
"""
import pandas as pd
df = pd.read_csv('/Users/Laishumin/Datasets/globalterrorism.csv', encoding='ISO-8859-1',low_memory=False)
clean=df[['iyear','imonth','iday','region','specificity'
,'vicinity','crit1','crit2','crit3','doubtterr','multiple','success','suicide'
,'attacktype1','ingroup','guncertain1','weaptype1']]
df_dummies1= pd.get_dummies(clean, prefix='month', columns=['imonth'])
df_dummies2= pd.get_dummies(df_dummies1, prefix='region', columns=['region'])
df_dummies3= pd.get_dummies(df_dummies2, prefix='specificity', columns=['specificity'])
df_dummies4= pd.get_dummies(df_dummies3, prefix='attack_type', columns=['attacktype1'])
df_dummies5= pd.get_dummies(df_dummies4, prefix='main_weapon_type', columns=['weaptype1'])
data = df_dummies5
del data['iyear']
del data['iday']
del data['guncertain1']
del data['ingroup']
del data['doubtterr']
names = list(data.columns.values)
names
lift_multiple = []
for i in names:
num_Feature = 0
Count = 0
for sample in data[i]:
thing = data[i].astype(str).str.contains('1')
if (thing.iloc[Count] == True):
num_Feature += 1
Count += 1
else:
Count += 1
print("{0} ".format(num_Feature) + " from " + i)
rule_valid = 0
rule_invalid = 0
for j in range(len(data)):
if data.iloc[j][i] == 1:
if data.iloc[j].multiple == 1:
rule_valid += 1
else:
rule_invalid += 1
print("{0} cases of the rule being valid were discovered".format(rule_valid))
print("{0} cases of the rule being invalid were discovered".format(rule_invalid))
# Now we have all the information needed to compute Support and Confidence
support = rule_valid # The Support is the number of times the rule is discovered.
if (num_Feature == 0):
lift_multiple.append(0)
else:
confidence = (rule_valid) / (num_Feature)
lift = confidence / 0.13
lift_multiple.append(lift)
print(i + '-->Multiple')
print("The support is {0}, the confidence is {1:.3f}, and the lift is {2:.3f}.".format(support, confidence, lift))
print("As a percentage, the confidence is {0:.1f}%.".format(100 * confidence))
print("-----------------------------------------------------------------")
lift_multiple_pd = pd.DataFrame(
{'Lift':lift_multiple
},index=names)
lift_multiple_pd
graph = lift_multiple_pd.sort_values(['Lift'], ascending=[False])
graph
%matplotlib inline
graph.plot(kind='bar')
"""
Explanation: ASSOCIATION RULES
End of explanation
"""
import numpy as np
import seaborn as sns
import pandas as pd
sns.violinplot(x="weaptype1", y="success", data=df, palette="Set3")
sns.violinplot(x="propextent", y="multiple", data=df, palette="Set3")
sns.violinplot(x="imonth", y="multiple", data=df, palette="Set3")
sns.violinplot(x="property", y="multiple", data=df, palette="Set3")
"""
Explanation: Violin Plot Visualisations
End of explanation
"""
|
strawberryLoU/the_end_of_day_two | DefensiveProgramming_3.ipynb | mit | def test_range_overlap():
assert range_overlap([(-3.0, 5.0), (0.0, 4.5), (-1.5, 2.0)]) == (0.0, 2.0)
assert range_overlap([ (2.0, 3.0), (2.0, 4.0) ]) == (2.0, 3.0)
assert range_overlap([ (0.0, 1.0), (0.0, 2.0), (-1.0, 1.0) ]) == (0.0, 1.0)
"""
Explanation: # Defensive programming (2)
We have seen the basic idea that we can insert
assert statements into code, to check that the
results are what we expect, but how can we test
software more fully? Can doing this help us
avoid bugs in the first place?
One possible approach is test driven development.
Many people think this reduces the number of bugs in
software as it is written, but evidence for this in the
sciences is somewhat limited as it is not always easy
to say what the right answer should be before writing the
software. Having said that, the tests involved in test
driven development are certainly useful even if some of
them are written after the software.
We will look at a new (and quite difficult) problem,
finding the overlap between ranges of numbers. For
example, these could be the dates that different
sensors were running, and you need to find the
date ranges where all sensors recorded data before
running further analysis.
<img src="python-overlapping-ranges.svg">
Start off by imagining you have a working function range_overlap that takes
a list of tuples. Write some assert statments that would check if the answer from this
function is correct. Put these in a function. Think of different cases and
about edge cases (which may show a subtle bug).
End of explanation
"""
def test_range_overlap_no_overlap():
assert range_overlap([ (0.0, 1.0), (5.0, 6.0) ]) == None
assert range_overlap([ (0.0, 1.0), (1.0, 2.0) ]) == None
"""
Explanation: But what if there is no overlap? What if they just touch?
End of explanation
"""
def test_range_overlap_one_range():
assert range_overlap([ (0.0, 1.0) ]) == (0.0, 1.0)
"""
Explanation: What about the case of a single range?
End of explanation
"""
def range_overlap(ranges):
# Return common overlap among a set of [low, high] ranges.
lowest = -1000.0
highest = 1000.0
for (low, high) in ranges:
lowest = max(lowest, low)
highest = min(highest, high)
return (lowest, highest)
"""
Explanation: Then write a solution - one possible one is below.
End of explanation
"""
test_range_overlap()
test_range_overlap_one_range()
"""
Explanation: And test it...
End of explanation
"""
def pairs_overlap(rangeA, rangeB):
# Check if A starts after B ends and
# A ends before B starts. If both are
# false, there is an overlap.
# We are assuming (0.0, 1.0) and
# (1.0, 2.0) do not overlap. If these should
# overlap, swap >= for > and <= for <.
overlap = not ((rangeA[0] >= rangeB[1]) or
(rangeA[1] <= rangeB[0]))
return overlap
def find_overlap(rangeA, rangeB):
# Return the overlap between range
# A and B
if pairs_overlap(rangeA, rangeB):
low = max(rangeA[0], rangeB[0])
high = min(rangeA[1], rangeB[1])
return (low, high)
else:
return None
def range_overlap(ranges):
# Return common overlap among a set of
# [low, high] ranges.
if len(ranges) == 1:
# Special case of one range -
# overlaps with itself
return(ranges[0])
elif len(ranges) == 2:
# Just return from find_overlap
return find_overlap(ranges[0], ranges[1])
else:
# Range of A, B, C is the
# range of range(B,C) with
# A, etc. Do this by recursion...
overlap = find_overlap(ranges[-1], ranges[-2])
if overlap is not None:
# Chop off the end of ranges and
# replace with the overlap
ranges = ranges[:-2]
ranges.append(overlap)
# Now run again, with the smaller list.
return range_overlap(ranges)
else:
return None
test_range_overlap()
test_range_overlap_one_range()
test_range_overlap_no_overlap()
"""
Explanation: Should we add to the tests?
Can you write a version with fewer bugs? My attempt is below.
End of explanation
"""
|
peastman/deepchem | examples/tutorials/Modeling_Protein_Ligand_Interactions.ipynb | mit | !curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import conda_installer
conda_installer.install()
!/root/miniconda/bin/conda info -e
!pip install --pre deepchem
import deepchem
deepchem.__version__
"""
Explanation: Tutorial Part 13: Modeling Protein-Ligand Interactions
By Nathan C. Frey | Twitter and Bharath Ramsundar | Twitter
In this tutorial, we'll walk you through the use of machine learning and molecular docking methods to predict the binding energy of a protein-ligand complex. Recall that a ligand is some small molecule which interacts (usually non-covalently) with a protein. Molecular docking performs geometric calculations to find a “binding pose” with a small molecule interacting with a protein in a suitable binding pocket (that is, a region on the protein which has a groove in which the small molecule can rest).
The structure of proteins can be determined experimentally with techniques like Cryo-EM or X-ray crystallography. This can be a powerful tool for structure-based drug discovery. For more info on docking, read the AutoDock Vina paper and the deepchem.dock documentation. There are many graphical user and command line interfaces (like AutoDock) for performing molecular docking. Here, we show how docking can be performed programmatically with DeepChem, which enables automation and easy integration with machine learning pipelines.
As you work through the tutorial, you'll trace an arc including
1. Loading a protein-ligand complex dataset (PDBbind)
2. Performing programmatic molecular docking
3. Featurizing protein-ligand complexes with interaction fingerprints
4. Fitting a random forest model and predicting binding affinities
To start the tutorial, we'll use a simple pre-processed dataset file that comes in the form of a gzipped file. Each row is a molecular system, and each column represents a different piece of information about that system. For instance, in this example, every row reflects a protein-ligand complex, and the following columns are present: a unique complex identifier; the SMILES string of the ligand; the binding affinity (Ki) of the ligand to the protein in the complex; a Python list of all lines in a PDB file for the protein alone; and a Python list of all lines in a ligand file for the ligand alone.
Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
Setup
To run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment.
End of explanation
"""
!pip install -q mdtraj nglview
# !jupyter-nbextension enable nglview --py --sys-prefix # for jupyter notebook
# !jupyter labextension install nglview-js-widgets # for jupyter lab
import os
import numpy as np
import pandas as pd
import tempfile
from rdkit import Chem
from rdkit.Chem import AllChem
import deepchem as dc
from deepchem.utils import download_url, load_from_disk
"""
Explanation: Protein-ligand complex data
It is really helpful to visualize proteins and ligands when doing docking. Unfortunately, Google Colab doesn't currently support the Jupyter widgets we need to do that visualization. Install MDTraj and nglview on your local machine to view the protein-ligand complexes we're working with.
End of explanation
"""
data_dir = dc.utils.get_data_dir()
dataset_file = os.path.join(data_dir, "pdbbind_core_df.csv.gz")
if not os.path.exists(dataset_file):
print('File does not exist. Downloading file...')
download_url("https://s3-us-west-1.amazonaws.com/deepchem.io/datasets/pdbbind_core_df.csv.gz")
print('File downloaded...')
raw_dataset = load_from_disk(dataset_file)
raw_dataset = raw_dataset[['pdb_id', 'smiles', 'label']]
"""
Explanation: To illustrate the docking procedure, here we'll use a csv that contains SMILES strings of ligands as well as PDB files for the ligand and protein targets from PDBbind. Later, we'll use the labels to train a model to predict binding affinities. We'll also show how to download and featurize PDBbind to train a model from scratch.
End of explanation
"""
raw_dataset.head(2)
"""
Explanation: Let's see what raw_dataset looks like:
End of explanation
"""
from simtk.openmm.app import PDBFile
from pdbfixer import PDBFixer
from deepchem.utils.vina_utils import prepare_inputs
# consider one protein-ligand complex for visualization
pdbid = raw_dataset['pdb_id'].iloc[1]
ligand = raw_dataset['smiles'].iloc[1]
%%time
fixer = PDBFixer(pdbid=pdbid)
PDBFile.writeFile(fixer.topology, fixer.positions, open('%s.pdb' % (pdbid), 'w'))
p, m = None, None
# fix protein, optimize ligand geometry, and sanitize molecules
try:
p, m = prepare_inputs('%s.pdb' % (pdbid), ligand)
except:
print('%s failed PDB fixing' % (pdbid))
if p and m: # protein and molecule are readable by RDKit
print(pdbid, p.GetNumAtoms())
Chem.rdmolfiles.MolToPDBFile(p, '%s.pdb' % (pdbid))
Chem.rdmolfiles.MolToPDBFile(m, 'ligand_%s.pdb' % (pdbid))
"""
Explanation: Fixing PDB files
Next, let's get some PDB protein files for visualization and docking. We'll use the PDB IDs from our raw_dataset and download the pdb files directly from the Protein Data Bank using pdbfixer. We'll also sanitize the structures with RDKit. This ensures that any problems with the protein and ligand files (non-standard residues, chemical validity, etc.) are corrected. Feel free to modify these cells and pdbids to consider new protein-ligand complexes. We note here that PDB files are complex and human judgement is required to prepare protein structures for docking. DeepChem includes a number of docking utilities to assist you with preparing protein files, but results should be inspected before docking is attempted.
End of explanation
"""
import mdtraj as md
import nglview
from IPython.display import display, Image
"""
Explanation: Visualization
If you're outside of Colab, you can expand these cells and use MDTraj and nglview to visualize proteins and ligands.
End of explanation
"""
protein_mdtraj = md.load_pdb('3cyx.pdb')
ligand_mdtraj = md.load_pdb('ligand_3cyx.pdb')
"""
Explanation: Let's take a look at the first protein ligand pair in our dataset:
End of explanation
"""
v = nglview.show_mdtraj(ligand_mdtraj)
display(v) # interactive view outside Colab
"""
Explanation: We'll use the convenience function nglview.show_mdtraj in order to view our proteins and ligands. Note that this will only work if you uncommented the above cell, installed nglview, and enabled the necessary notebook extensions.
End of explanation
"""
view = nglview.show_mdtraj(protein_mdtraj)
display(view) # interactive view outside Colab
"""
Explanation: Now that we have an idea of what the ligand looks like, let's take a look at our protein:
End of explanation
"""
finder = dc.dock.binding_pocket.ConvexHullPocketFinder()
pockets = finder.find_pockets('3cyx.pdb')
len(pockets) # number of identified pockets
"""
Explanation: Molecular Docking
Ok, now that we've got our data and basic visualization tools up and running, let's see if we can use molecular docking to estimate the binding affinities between our protein ligand systems.
There are three steps to setting up a docking job, and you should experiment with different settings. The three things we need to specify are 1) how to identify binding pockets in the target protein; 2) how to generate poses (geometric configurations) of a ligand in a binding pocket; and 3) how to "score" a pose. Remember, our goal is to identify candidate ligands that strongly interact with a target protein, which is reflected by the score.
DeepChem has a simple built-in method for identifying binding pockets in proteins. It is based on the convex hull method. The method works by creating a 3D polyhedron (convex hull) around a protein structure and identifying the surface atoms of the protein as the ones closest to the convex hull. Some biochemical properties are considered, so the method is not purely geometrical. It has the advantage of having a low computational cost and is good enough for our purposes.
End of explanation
"""
vpg = dc.dock.pose_generation.VinaPoseGenerator()
"""
Explanation: Pose generation is quite complex. Luckily, using DeepChem's pose generator will install the AutoDock Vina engine under the hood, allowing us to get up and running generating poses quickly.
End of explanation
"""
!mkdir -p vina_test
%%time
complexes, scores = vpg.generate_poses(molecular_complex=('3cyx.pdb', 'ligand_3cyx.pdb'), # protein-ligand files for docking,
out_dir='vina_test',
generate_scores=True
)
"""
Explanation: We could specify a pose scoring function from deepchem.dock.pose_scoring, which includes things like repulsive and hydrophobic interactions and hydrogen bonding. Vina will take care of this, so instead we'll allow Vina to compute scores for poses.
End of explanation
"""
scores
"""
Explanation: We used the default value for num_modes when generating poses, so Vina will return the 9 lowest energy poses it found in units of kcal/mol.
End of explanation
"""
complex_mol = Chem.CombineMols(complexes[0][0], complexes[0][1])
"""
Explanation: Can we view the complex with both protein and ligand? Yes, but we'll need to combine the molecules into a single RDkit molecule.
End of explanation
"""
v = nglview.show_rdkit(complex_mol)
display(v)
"""
Explanation: Let's now visualize our complex. We can see that the ligand slots into a pocket of the protein.
End of explanation
"""
docker = dc.dock.docking.Docker(pose_generator=vpg)
posed_complex, score = next(docker.dock(molecular_complex=('3cyx.pdb', 'ligand_3cyx.pdb'),
use_pose_generator_scores=True))
"""
Explanation: Now that we understand each piece of the process, we can put it all together using DeepChem's Docker class. Docker creates a generator that yields tuples of posed complexes and docking scores.
End of explanation
"""
pdbids = raw_dataset['pdb_id'].values
ligand_smiles = raw_dataset['smiles'].values
%%time
for (pdbid, ligand) in zip(pdbids, ligand_smiles):
fixer = PDBFixer(url='https://files.rcsb.org/download/%s.pdb' % (pdbid))
PDBFile.writeFile(fixer.topology, fixer.positions, open('%s.pdb' % (pdbid), 'w'))
p, m = None, None
# skip pdb fixing for speed
try:
p, m = prepare_inputs('%s.pdb' % (pdbid), ligand, replace_nonstandard_residues=False,
remove_heterogens=False, remove_water=False,
add_hydrogens=False)
except:
print('%s failed sanitization' % (pdbid))
if p and m: # protein and molecule are readable by RDKit
Chem.rdmolfiles.MolToPDBFile(p, '%s.pdb' % (pdbid))
Chem.rdmolfiles.MolToPDBFile(m, 'ligand_%s.pdb' % (pdbid))
proteins = [f for f in os.listdir('.') if len(f) == 8 and f.endswith('.pdb')]
ligands = [f for f in os.listdir('.') if f.startswith('ligand') and f.endswith('.pdb')]
"""
Explanation: Modeling Binding Affinity
Docking is a useful, albeit coarse-grained tool for predicting protein-ligand binding affinities. However, it takes some time, especially for large-scale virtual screenings where we might be considering different protein targets and thousands of potential ligands. We might naturally ask then, can we train a machine learning model to predict docking scores? Let's try and find out!
We'll show how to download the PDBbind dataset. We can use the loader in MoleculeNet to get the 4852 protein-ligand complexes from the "refined" set or the entire "general" set in PDBbind. For simplicity, we'll stick with the ~100 complexes we've already processed to train our models.
Next, we'll need a way to transform our protein-ligand complexes into representations which can be used by learning algorithms. Ideally, we'd have neural protein-ligand complex fingerprints, but DeepChem doesn't yet have a good learned fingerprint of this sort. We do however have well-tuned manual featurizers that can help us with our challenge here.
We'll make use of two types of fingerprints in the rest of the tutorial, the CircularFingerprint and ContactCircularFingerprint. DeepChem also has voxelizers and grid descriptors that convert a 3D volume containing an arrangement of atoms into a fingerprint. These featurizers are really useful for understanding protein-ligand complexes since they allow us to translate complexes into vectors that can be passed into a simple machine learning algorithm. First, we'll create circular fingerprints. These convert small molecules into a vector of fragments.
End of explanation
"""
# Handle failed sanitizations
failures = set([f[:-4] for f in proteins]) - set([f[7:-4] for f in ligands])
for pdbid in failures:
proteins.remove(pdbid + '.pdb')
len(proteins), len(ligands)
pdbids = [f[:-4] for f in proteins]
small_dataset = raw_dataset[raw_dataset['pdb_id'].isin(pdbids)]
labels = small_dataset.label
fp_featurizer = dc.feat.CircularFingerprint(size=2048)
features = fp_featurizer.featurize([Chem.MolFromPDBFile(l) for l in ligands])
dataset = dc.data.NumpyDataset(X=features, y=labels, ids=pdbids)
train_dataset, test_dataset = dc.splits.RandomSplitter().train_test_split(dataset, seed=42)
"""
Explanation: We'll do some clean up to make sure we have a valid ligand file for every valid protein. The lines here will compare the PDB IDs between the ligand and protein files and remove any proteins that don't have corresponding ligands.
End of explanation
"""
# # Uncomment to featurize all of PDBBind's "refined" set
# pdbbind_tasks, (train_dataset, valid_dataset, test_dataset), transformers = dc.molnet.load_pdbbind(
# featurizer=fp_featurizer, set_name="refined", reload=True,
# data_dir='pdbbind_data', save_dir='pdbbind_data')
"""
Explanation: The convenience loader dc.molnet.load_pdbbind will take care of downloading and featurizing the pdbbind dataset under the hood for us. This will take quite a bit of time and compute, so the code to do it is commented out. Uncomment it and grab a cup of coffee if you'd like to featurize all of PDBbind's refined set. Otherwise, you can continue with the small dataset we constructed above.
End of explanation
"""
from sklearn.ensemble import RandomForestRegressor
from deepchem.utils.evaluate import Evaluator
import pandas as pd
seed = 42 # Set a random seed to get stable results
sklearn_model = RandomForestRegressor(n_estimators=100, max_features='sqrt')
sklearn_model.random_state = seed
model = dc.models.SklearnModel(sklearn_model)
model.fit(train_dataset)
"""
Explanation: Now, we're ready to do some learning!
To fit a deepchem model, first we instantiate one of the provided (or user-written) model classes. In this case, we have created a convenience class, SklearnModel, that wraps around any ML model available in scikit-learn so that it can interoperate with deepchem. To instantiate an SklearnModel, you pass in a model instance defining the type of model you would like to fit, in this case a RandomForestRegressor.
End of explanation
"""
# use Pearson correlation so metrics are > 0
metric = dc.metrics.Metric(dc.metrics.pearson_r2_score)
evaluator = Evaluator(model, train_dataset, [])
train_r2score = evaluator.compute_model_performance([metric])
print("RF Train set R^2 %f" % (train_r2score["pearson_r2_score"]))
evaluator = Evaluator(model, test_dataset, [])
test_r2score = evaluator.compute_model_performance([metric])
print("RF Test set R^2 %f" % (test_r2score["pearson_r2_score"]))
"""
Explanation: Note that the $R^2$ value for the test set indicates that the model isn't producing meaningful outputs. It turns out that predicting binding affinities is hard. This tutorial isn't meant to show how to create a state-of-the-art model for predicting binding affinities, but it gives you the tools to generate your own datasets with molecular docking, featurize complexes, and train models.
End of explanation
"""
# Compare predicted and true values
list(zip(model.predict(train_dataset), train_dataset.y))[:5]
list(zip(model.predict(test_dataset), test_dataset.y))[:5]
"""
Explanation: We're using a very small dataset and an overly simplistic representation, so it's no surprise that the test set performance is quite bad.
End of explanation
"""
fp_featurizer = dc.feat.ContactCircularFingerprint(size=2048)
features = fp_featurizer.featurize(zip(ligands, proteins))
dataset = dc.data.NumpyDataset(X=features, y=labels, ids=pdbids)
train_dataset, test_dataset = dc.splits.RandomSplitter().train_test_split(dataset, seed=42)
"""
Explanation: The protein-ligand complex view.
In the previous section, we featurized only the ligand. This time, let's see if we can do something sensible with our protein-ligand fingerprints that make use of our structural information. To start with, we need to re-featurize the dataset but using the contact fingerprint this time.
End of explanation
"""
seed = 42 # Set a random seed to get stable results
sklearn_model = RandomForestRegressor(n_estimators=100, max_features='sqrt')
sklearn_model.random_state = seed
model = dc.models.SklearnModel(sklearn_model)
model.fit(train_dataset)
"""
Explanation: Let's now train a simple random forest model on this dataset.
End of explanation
"""
metric = dc.metrics.Metric(dc.metrics.pearson_r2_score)
evaluator = Evaluator(model, train_dataset, [])
train_r2score = evaluator.compute_model_performance([metric])
print("RF Train set R^2 %f" % (train_r2score["pearson_r2_score"]))
evaluator = Evaluator(model, test_dataset, [])
test_r2score = evaluator.compute_model_performance([metric])
print("RF Test set R^2 %f" % (test_r2score["pearson_r2_score"]))
"""
Explanation: Let's see what our accuracy looks like!
End of explanation
"""
|
ypeleg/Deep-Learning-Keras-Tensorflow-PyCon-Israel-2017 | .ipynb_checkpoints/2.4 Transfer Learning & Fine-Tuning-checkpoint.ipynb | mit | import numpy as np
import datetime
np.random.seed(1337) # for reproducibility
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
from numpy import nan
import keras
print keras.__version__
now = datetime.datetime.now
"""
Explanation: Transfer Learning and Fine Tuning
Train a simple convnet on the first 5 digits [0..4] of the MNIST dataset.
Freeze convolutional layers and fine-tune dense layers for the classification of digits [5..9].
Using GPU (highly recommended)
-> If using theano backend:
THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32
End of explanation
"""
now = datetime.datetime.now
batch_size = 128
nb_classes = 5
nb_epoch = 5
# input image dimensions
img_rows, img_cols = 28, 28
# number of convolutional filters to use
nb_filters = 32
# size of pooling area for max pooling
pool_size = 2
# convolution kernel size
kernel_size = 3
if K.image_data_format() == 'channels_first':
input_shape = (1, img_rows, img_cols)
else:
input_shape = (img_rows, img_cols, 1)
def train_model(model, train, test, nb_classes):
X_train = train[0].reshape((train[0].shape[0],) + input_shape)
X_test = test[0].reshape((test[0].shape[0],) + input_shape)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(train[1], nb_classes)
Y_test = np_utils.to_categorical(test[1], nb_classes)
model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['accuracy'])
t = now()
model.fit(X_train, Y_train,
batch_size=batch_size, nb_epoch=nb_epoch,
verbose=1,
validation_data=(X_test, Y_test))
print('Training time: %s' % (now() - t))
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
"""
Explanation: Settings
End of explanation
"""
# the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# create two datasets one with digits below 5 and one with 5 and above
X_train_lt5 = X_train[y_train < 5]
y_train_lt5 = y_train[y_train < 5]
X_test_lt5 = X_test[y_test < 5]
y_test_lt5 = y_test[y_test < 5]
X_train_gte5 = X_train[y_train >= 5]
y_train_gte5 = y_train[y_train >= 5] - 5 # make classes start at 0 for
X_test_gte5 = X_test[y_test >= 5] # np_utils.to_categorical
y_test_gte5 = y_test[y_test >= 5] - 5
# define two groups of layers: feature (convolutions) and classification (dense)
feature_layers = [
Convolution2D(nb_filters, kernel_size, kernel_size,
border_mode='valid',
input_shape=input_shape),
Activation('relu'),
Convolution2D(nb_filters, kernel_size, kernel_size),
Activation('relu'),
MaxPooling2D(pool_size=(pool_size, pool_size)),
Dropout(0.25),
Flatten(),
]
classification_layers = [
Dense(128),
Activation('relu'),
Dropout(0.5),
Dense(nb_classes),
Activation('softmax')
]
# create complete model
model = Sequential(feature_layers + classification_layers)
# train model for 5-digit classification [0..4]
train_model(model,
(X_train_lt5, y_train_lt5),
(X_test_lt5, y_test_lt5), nb_classes)
# freeze feature layers and rebuild model
for l in feature_layers:
l.trainable = False
# transfer: train dense layers for new classification task [5..9]
train_model(model,
(X_train_gte5, y_train_gte5),
(X_test_gte5, y_test_gte5), nb_classes)
"""
Explanation: Dataset Preparation
End of explanation
"""
from keras.applications import VGG16
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
from keras.layers import Input, Flatten, Dense
from keras.models import Model
import numpy as np
#Get back the convolutional part of a VGG network trained on ImageNet
model_vgg16_conv = VGG16(weights='imagenet', include_top=False)
model_vgg16_conv.summary()
#Create your own input format (here 48x48x3)
inp = Input(shape=(48,48,3),name = 'image_input')
#Use the generated model
output_vgg16_conv = model_vgg16_conv(inp)
#Add the fully-connected layers
x = Flatten(name='flatten')(output_vgg16_conv)
x = Dense(4096, activation='relu', name='fc1')(x)
x = Dense(4096, activation='relu', name='fc2')(x)
x = Dense(5, activation='softmax', name='predictions')(x)
#Create your own model
my_model = Model(input=inp, output=x)
#In the summary, weights and layers from VGG part will be hidden, but they will be fit during the training
my_model.summary()
"""
Explanation: Your Turn
Try to Fine Tune a VGG16 Network
End of explanation
"""
import scipy
new_shape = (48,48)
X_train_new = np.empty(shape=(X_train_gte5.shape[0],)+(48,48,3))
for idx in xrange(X_train_gte5.shape[0]):
X_train_new[idx] = np.resize(scipy.misc.imresize(X_train_gte5[idx], (new_shape)), (48, 48, 3))
X_train_new[idx] = np.resize(X_train_new[idx], (48, 48, 3))
#X_train_new = np.expand_dims(X_train_new, axis=-1)
print X_train_new.shape
X_train_new = X_train_new.astype('float32')
X_train_new /= 255
print('X_train shape:', X_train_new.shape)
print(X_train_new.shape[0], 'train samples')
print(X_train_new.shape[0], 'test samples')
# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train_gte5, nb_classes)
Y_test = np_utils.to_categorical(y_test_gte5, nb_classes)
print y_train.shape
my_model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['accuracy'])
my_model.fit(X_train_new, Y_train,
batch_size=batch_size, nb_epoch=nb_epoch,
verbose=1)
#print('Training time: %s' % (now() - t))
#score = my_model.evaluate(X_test, Y_test, verbose=0)
#print('Test score:', score[0])
#print('Test accuracy:', score[1])
#train_model(my_model,
# (X_train_new, y_train_gte5),
# (X_test_gte5, y_test_gte5), nb_classes)
"""
Explanation: ```python
...
...
# Plugging new Layers
model.add(Dense(768, activation='sigmoid'))
model.add(Dropout(0.0))
model.add(Dense(768, activation='sigmoid'))
model.add(Dropout(0.0))
model.add(Dense(n_labels, activation='softmax'))
```
End of explanation
"""
|
hamnonlineng/hamnonlineng | examples/Example_5th_order_Hamiltonian-linear_programming.ipynb | bsd-3-clause | import hamnonlineng as hnle
"""
Explanation: Find frequencies that make only $(\hat{a}^2\hat{b}^2+\hat{a}\hat{b}\hat{d}^2+\hat{d}^4)\hat{c}^\dagger +h.c.$ resonant in the 5th order expansion of $\sin(\hat{a}+\hat{b}+\hat{c}+\hat{d}+h.c.)$
Here we use linear programming instead of constraint programming and search for any positively-valued frequencies (instead of just integer frequencies).
Import the "Hamiltonian-through-Nonlinearities Engineering" module (it can be installed from PyPI using pip).
End of explanation
"""
letters = 'abcd'
"""
Explanation: Set the letters you want to use for annihilation operators (4 modes in our case).
End of explanation
"""
resonant = [hnle.Monomial(1,'aabbC'), # First argument is the constant real factor in front of the operator
hnle.Monomial(1,'abddC'), # Second argument is the string representing the operators
hnle.Monomial(1,'Cdddd')]
"""
Explanation: Write down (or somehow generate) a list of the monomials that you want to be resonant.
End of explanation
"""
resonant
"""
Explanation: The convention for typing in or printing out is:
- lower 'a' represents $\hat{a}$
- capital 'A' represents $\hat{a}^\dagger$
- the hermitian conjugate is implicit, i.e. Monomial(1,'Aab') is $\hat{a}^\dagger\hat{a}\hat{b}+\hat{a}^\dagger\hat{a}\hat{b}^\dagger$
- the library sorts the expression to make it "canonical", and given that the presence of a hermitian conjugate is implicit, each monomial might print out as its conjugate, i.e. there is no difference between Monomial(1,'a') and Monomial(1,'A')
End of explanation
"""
op_sum = hnle.operator_sum(letters)
op_sum
"""
Explanation: Now generate the terms that you want to be off resonant: start with the sum $\hat{a}+\hat{b}+\hat{c}+\hat{d}+h.c.$.
End of explanation
"""
sine_exp = hnle.sin_terms(op_sum, 3) + hnle.sin_terms(op_sum, 5)
sine_exp_list = sine_exp.m
"""
Explanation: Generate the list of 3rd and 5th order terms in the expansion of $\sin(\hat{a}+\hat{b}+\hat{c}+\hat{d}+h.c.)$.
End of explanation
"""
off_resonant = hnle.drop_single_mode(
hnle.drop_definitely_offresonant(
hnle.drop_matching(sine_exp.m, resonant)))
off_resonant = list(off_resonant)
"""
Explanation: Filter out of the list:
- terms that match the terms we want to be resonant
- terms that are only annihilation or only creation operators (definitely off-resonant)
- terms that contain only one single mode
End of explanation
"""
len(off_resonant)
"""
Explanation: How many terms are left.
End of explanation
"""
res = hnle.solve_linearprog_pulp(resonant, off_resonant, letters, maxfreq=20, detune=0.1)
res
"""
Explanation: Finally, solve the constraints:
End of explanation
"""
|
ashkamath/VQA | VQA/gru/gru_small_bilinear.ipynb | mit | # don't reinvent the wheel
import h5py, json, spacy
import numpy as np
import cPickle as pickle
%matplotlib inline
import matplotlib.pyplot as plt
from model import LSTMModel
from utils import prepare_ques_batch, prepare_im_batch, get_batches_idx
"""
Explanation: Visual Question Answering with LSTM and VGG features
In this notebook, we build a VQA model with LSTM as the language model and the VGG-19 as our visual model. Since the full dataset is quite large, we load and play with a small portion of it on our local machine.
End of explanation
"""
# run `python -m spacy.en.download` to collect the embeddings (1st time only)
embeddings = spacy.en.English()
word_dim = 300
"""
Explanation: Word Embeddings
For word embeddings, we use the pre-trained word2vec provided by the spacy package
End of explanation
"""
h5_img_file_tiny = h5py.File('data/vqa_data_img_vgg_train_small.h5', 'r')
fv_im_tiny = h5_img_file_tiny.get('/images_train')
with open('data/qa_data_train_small.pkl', 'rb') as fp:
qa_data_tiny = pickle.load(fp)
json_file = json.load(open('data/vqa_data_prepro.json', 'r'))
ix_to_word = json_file['ix_to_word']
ix_to_ans = json_file['ix_to_ans']
vocab_size = len(ix_to_word)
print "Loading tiny dataset of %d image features and %d question/answer pairs for training." % (len(fv_im_tiny), len(qa_data_tiny))
"""
Explanation: Loading Tiny Dataset
Here we load a tiny dataset of 300 question/answer pairs and 100 images which is prepared using the script in Dataset Handling.ipynb
End of explanation
"""
questions, ques_len, im_ix, ans = zip(*qa_data_tiny)
nb_classes = 1000
max_ques_len = 26
X_ques = prepare_ques_batch(questions, ques_len, max_ques_len, embeddings, word_dim, ix_to_word)
X_im = prepare_im_batch(fv_im_tiny, im_ix)
y = np.zeros((len(ans), nb_classes))
y[np.arange(len(ans)), ans] = 1
"""
Explanation: In this dataset, one image is associated with multiple question/answer pairs (3 in this case). Therefore, we need to hand-bind the question/answer pairs with the corresponding image feature for training.
End of explanation
"""
model = LSTMModel()
model.build()
"""
Explanation: Overfit LSTM + VGG
Finally, we are getting to the fun part! Let's build our model...
End of explanation
"""
loss = model.fit(X_ques, X_im, y, nb_epoch=30, batch_size=1000)
plt.plot(loss.history['loss'], label='train_loss')
plt.plot(loss.history['acc'], label='train_acc')
plt.legend(loc='best')
"""
Explanation: Since the dataset we are using is tiny, we can fit the whole dataset to the convenience fit method and specify the batch_size. Note that this already ate up a lot of memory and it won't work for the large dataset.
End of explanation
"""
h5_img_file_test_tiny = h5py.File('data/vqa_data_img_vgg_test_small.h5', 'r')
fv_im_test_tiny = h5_img_file_test_tiny.get('/images_test')
with open('data/qa_data_test_small.pkl', 'rb') as fp:
qa_data_test_tiny = pickle.load(fp)
print "Loading tiny dataset of %d image features and %d question/answer pairs for testing" % (len(fv_im_test_tiny), len(qa_data_test_tiny))
questions, ques_len, im_ix, ans = zip(*qa_data_test_tiny)
X_ques_test = prepare_ques_batch(questions, ques_len, max_ques_len, embeddings, word_dim, ix_to_word)
X_im_test = prepare_im_batch(fv_im_test_tiny, im_ix)
y_test = np.zeros((len(ans), nb_classes))
y_test[np.arange(len(ans)), [494 if a > 1000 else a for a in ans]] = 1
loss, acc = model.evaluate(X_ques_test, X_im_test, y_test)
# GRU
print(loss, acc)
"""
Explanation: Let's see how far we can get with this overfitted model...
End of explanation
"""
|
bradkav/CEvNS | COHERENT.ipynb | mit | from __future__ import print_function
%matplotlib inline
import numpy as np
import matplotlib
#matplotlib.use('Agg')
import matplotlib.pyplot as pl
from scipy.integrate import quad
from scipy.interpolate import interp1d, UnivariateSpline,InterpolatedUnivariateSpline
from scipy.optimize import minimize
from tqdm import tqdm
#Change default font size so you don't need a magnifying glass
matplotlib.rc('font', **{'size' : 16})
"""
Explanation: Reproducing the COHERENT results - and New Physics constraints
Code for reproducing the CEvNS signal observed by COHERENT - see arXiv:1708.01294. Note that the COHERENT-2017 data are now publicly available (arXiv:1804.09459) - this notebook uses digitized results from the original 2017 paper.
Note that we neglect the axial charge of the nucleus, and thus the contribution from strange quarks. We also use a slightly different parametrisation of the Form Factor, compared to the COHERENT collaboration.
End of explanation
"""
import CEvNS
#help(CEvNS.xsec_CEvNS)
"""
Explanation: Import the CEvNS module (for calculating the signal spectrum and loading the neutrino fluxes)
End of explanation
"""
#Initialise neutrino_flux interpolation function
CEvNS.loadNeutrinoFlux("SNS")
#Plot neutrino flux
E_nu = np.logspace(0, np.log10(300),1000)
pl.figure()
pl.semilogy(E_nu, CEvNS.neutrino_flux_tot(E_nu))
pl.title(r"Neutrino flux at SNS", fontsize=12)
pl.xlabel(r"Neutrino energy, $E_\nu$ [MeV]")
pl.ylabel(r"$\Phi_\nu$ [cm$^{-2}$ s$^{-1}$ MeV$^{-1}$]")
pl.show()
"""
Explanation: Neutrino Flux @ SNS
Let's load the neutrino flux. Note that here we're only plotting the continuum. There is also a population of monochromatic (29.65 MeV) muon neutrinos which we add in separately in the code (because the flux is a delta-function, it's hard to model here).
End of explanation
"""
COHERENT_PE, COHERENT_eff = np.loadtxt("DataFiles/COHERENT_eff.txt", unpack=True)
effinterp = interp1d(COHERENT_PE, COHERENT_eff, bounds_error=False, fill_value=0.0)
def efficiency_single(x):
if (x > 4.9):
return effinterp(x)
else:
return 1e-10
efficiency = np.vectorize(efficiency_single)
PEvals = np.linspace(0, 50, 100)
pl.figure()
pl.plot(PEvals, efficiency(PEvals))
pl.xlabel("PE")
pl.ylabel("Efficiency")
pl.show()
"""
Explanation: COHERENT efficiency function
Load in the efficiency (as a function of photoelectrons, PE). Below ~5 PE it is set to an effectively zero value.
End of explanation
"""
#Nuclear properties for Cs and I
A_Cs = 133.0
Z_Cs = 55.0
A_I = 127.0
Z_I = 53.0
#Mass fractions
f_Cs = A_Cs/(A_Cs + A_I)
f_I = A_I/(A_Cs + A_I)
mass = 14.6 #target mass in kg
time = 308.1 #exposure time in days
PEperkeV = 1.17 #Number of PE per keV
#Get the differential rate function from the CEvNS module
#Note that this function allows for an extra vector mediator,
#but the default coupling is zero, so we'll forget about it
diffRate_CEvNS = CEvNS.differentialRate_CEvNS
#Differential rates (times efficiency) for the two target nuclei, per PE
dRdPE_Cs = lambda x: (1.0/PEperkeV)*efficiency(x)*mass*time*f_Cs*diffRate_CEvNS(x/PEperkeV, A_Cs, Z_Cs)
dRdPE_I = lambda x: (1.0/PEperkeV)*efficiency(x)*mass*time*f_I*diffRate_CEvNS(x/PEperkeV, A_I, Z_I)
#Calculate number of signal events in each bin in the Standard Model (SM)
PE_bins = np.linspace(0, 50, 26)
N_SM_Cs = np.zeros(25)
N_SM_I = np.zeros(25)
N_SM_tot = np.zeros(25)
for i in tqdm(range(25)):
N_SM_Cs[i] = quad(dRdPE_Cs, PE_bins[i], PE_bins[i+1], epsabs = 0.01)[0]
N_SM_I[i] = quad(dRdPE_I, PE_bins[i], PE_bins[i+1], epsabs = 0.01)[0]
N_SM_tot[i] = N_SM_Cs[i] + N_SM_I[i]
print("Total CEvNS events expected: ", np.sum(N_SM_tot))
"""
Explanation: COHERENT event rate
Calculate number of CEvNS signal events at COHERENT (in bins of 2 PE)
End of explanation
"""
COHERENT_data = np.loadtxt("DataFiles/COHERENT_data.txt", usecols=(1,))
COHERENT_upper = np.loadtxt("DataFiles/COHERENT_upper.txt", usecols=(1,)) - COHERENT_data
COHERENT_lower = COHERENT_data - np.loadtxt("DataFiles/COHERENT_lower.txt", usecols=(1,))
COHERENT_spect = np.loadtxt("DataFiles/COHERENT_spectrum.txt", usecols=(1,))
COHERENT_bins = np.arange(1,50,2)
"""
Explanation: Comparing with the COHERENT results
First, let's load in the observed data and calculated spectrum (digitized from arXiv:1708.01294).
End of explanation
"""
pl.figure(figsize=(10,6))
pl.step(PE_bins, np.append(N_SM_tot,0), 'g', linestyle="-", where = "post", label="CEvNS signal (this work)",linewidth=1.5)
pl.step(PE_bins, np.append(COHERENT_spect,0), 'g', linestyle="--", where = "post", label="CEvNS signal (1708.01294)",linewidth=1.5)
pl.axhline(0, linestyle='--', color = 'gray')
pl.errorbar(COHERENT_bins, COHERENT_data, fmt='ko', \
yerr = [COHERENT_lower, COHERENT_upper], label="COHERENT data",\
capsize=0.0)
pl.xlabel("Number of photoelectrons (PE)")
pl.ylabel("Res. counts / 2 PE")
pl.legend( fontsize=14)
pl.xlim(0, 50)
pl.ylim(-15, 35)
pl.savefig("plots/COHERENT_data.pdf", bbox_inches="tight")
pl.show()
"""
Explanation: Now plot the results:
End of explanation
"""
def chisq_generic(N_sig, alpha, beta):
#Beam-on backgrounds
N_BG = 6.0
#Number of measured events
N_meas = 142.0
#Statistical uncertainty
sig_stat = np.sqrt(N_meas + 2*405 + N_BG)
#Uncertainties
unc = (alpha/0.28)**2 + (beta/0.25)**2
return ((N_meas - N_sig*(1.0+alpha) - N_BG*(1.0+beta))**2)/sig_stat**2 + unc
#Calculate minimum chi-squared as a function of (alpha, beta) nuisance parameters
def minchisq_Nsig(Nsig):
minres = minimize(lambda x: chisq_generic(Nsig, x[0], x[1]), (0.0,0.0))
return minres.fun
Nsiglist= np.linspace(0, 1000,1001)
chi2list = [minchisq_Nsig(Ns) for Ns in Nsiglist]
delta_chi2 = (chi2list - np.min(chi2list))
pl.figure(figsize=(6,6))
pl.plot(Nsiglist, delta_chi2, linewidth=2.0)
pl.ylim(0, 25)
pl.axvline(np.sum(N_SM_tot), linestyle='--', color='k')
pl.text(172, 20, "SM prediction")
pl.ylabel(r"$\Delta \chi^2$")
pl.xlabel(r"CE$\nu$NS counts")
pl.savefig("plots/COHERENT_likelihood.pdf", bbox_inches="tight")
pl.show()
"""
Explanation: Fit to signal strength
Very simple fit to the number of CEvNS signal events, using only a 1-bin likelihood.
We start by defining the $\chi^2$, as given in arXiv:1708.01294. We use a generic form, so that we don't have to recalculate the number of signal events all the time...
End of explanation
"""
deltachi2_Nsig = interp1d(Nsiglist, delta_chi2, bounds_error=False, fill_value=delta_chi2[-1])
"""
Explanation: To speed things up later (so we don't have to do the minimization every time), we'll tabulate and interpolate the chi-squared as a function of the number of signal events. This works because we're using a simple chi-squared which depends only on the number of signal events:
End of explanation
"""
#Differential rates (times efficiency) for the two target nuclei, per PE
# For electron neutrinos ONLY
dRdPE_Cs_e = lambda x: (1.0/PEperkeV)*efficiency(x)*mass*time*f_Cs*diffRate_CEvNS(x/PEperkeV, A_Cs, Z_Cs, nu_flavor="e")
dRdPE_I_e = lambda x: (1.0/PEperkeV)*efficiency(x)*mass*time*f_I*diffRate_CEvNS(x/PEperkeV, A_I, Z_I, nu_flavor="e")
# For muon neutrinos ONLY
dRdPE_Cs_mu = lambda x: (1.0/PEperkeV)*efficiency(x)*mass*time*f_Cs*(diffRate_CEvNS(x/PEperkeV, A_Cs, Z_Cs, nu_flavor="mu")+ diffRate_CEvNS(x/PEperkeV, A_Cs, Z_Cs, nu_flavor="mub"))
dRdPE_I_mu = lambda x: (1.0/PEperkeV)*efficiency(x)*mass*time*f_I*(diffRate_CEvNS(x/PEperkeV, A_I, Z_I, nu_flavor="mu") + diffRate_CEvNS(x/PEperkeV, A_I, Z_I, nu_flavor="mub"))
#Now calculate bin-by-bin signal from electron neutrinos
bins_Cs_e = np.zeros(25)
bins_I_e = np.zeros(25)
for i in tqdm(range(25)):
bins_Cs_e[i] = quad(dRdPE_Cs_e, PE_bins[i], PE_bins[i+1], epsabs = 0.01)[0]
bins_I_e[i] = quad(dRdPE_I_e, PE_bins[i], PE_bins[i+1], epsabs = 0.01)[0]
print("Number of CEvNS events due to nu_e: ", np.sum(bins_Cs_e + bins_I_e))
#Now calculate bin-by-bin signal from muon neutrinos
bins_Cs_mu = np.zeros(25)
bins_I_mu = np.zeros(25)
for i in tqdm(range(25)):
bins_Cs_mu[i] = quad(dRdPE_Cs_mu, PE_bins[i], PE_bins[i+1], epsabs = 0.01)[0]
bins_I_mu[i] = quad(dRdPE_I_mu, PE_bins[i], PE_bins[i+1], epsabs = 0.01)[0]
print("Number of CEvNS events due to nu_mu: ", np.sum(bins_Cs_mu + bins_I_mu))
"""
Explanation: NSI constraints
Calculate constraints on NSI parameters. Here, we're just assuming that the flavor-conserving e-e NSI couplings are non-zero, so we have to calculate the contribution to the rate from only the electron neutrinos and then see how that changes:
End of explanation
"""
def NSI_corr(eps_uV, eps_dV, A, Z):
SIN2THETAW = 0.2387
#Calculate standard weak nuclear charge (squared)
Qsq = 4.0*((A - Z)*(-0.5) + Z*(0.5 - 2*SIN2THETAW))**2
#Calculate the modified nuclear charge from NSI
Qsq_NSI = 4.0*((A - Z)*(-0.5 + eps_uV + 2.0*eps_dV) + Z*(0.5 - 2*SIN2THETAW + 2*eps_uV + eps_dV))**2
return Qsq_NSI/Qsq
"""
Explanation: Flavour-conserving NSI
Now, let's calculate the correction to the CEvNS rate from flavor-conserving NSI:
End of explanation
"""
def deltachisq_NSI_ee(eps_uV, eps_dV):
#NB: bins_I and bins_Cs are calculated further up in the script (they are the SM signal prediction)
#Signal events from Iodine (with NSI correction only applying to electron neutrino events)
N_sig_I = (N_SM_I + (NSI_corr(eps_uV, eps_dV, A_I, Z_I) - 1.0)*bins_I_e)
#Now signal events from Caesium
N_sig_Cs = (N_SM_Cs + (NSI_corr(eps_uV, eps_dV, A_Cs, Z_Cs) - 1.0)*bins_Cs_e)
#Number of signal events
N_NSI = np.sum(N_sig_I + N_sig_Cs)
return deltachi2_Nsig(N_NSI)
"""
Explanation: Calculate simplified (single bin) chi-squared (see chi-squared expression around p.32 in COHERENT paper):
End of explanation
"""
Ngrid = 101
ulist = np.linspace(-1.0, 1.0, Ngrid)
dlist = np.linspace(-1.0, 1.0, Ngrid)
UL, DL = np.meshgrid(ulist, dlist)
delta_chi2_grid_ee = 0.0*UL
#Not very elegant loop
for i in tqdm(range(Ngrid)):
for j in range(Ngrid):
delta_chi2_grid_ee[i,j] = deltachisq_NSI_ee(UL[i,j], DL[i,j])
#Find best-fit point
ind_BF = np.argmin(delta_chi2_grid_ee)
BF = [UL.flatten()[ind_BF], DL.flatten()[ind_BF]]
print("Best fit point: ", BF)
np.savetxt("results/COHERENT_NSI_deltachi2_ee.txt", delta_chi2_grid_ee, header="101x101 grid, corresponding to (uV, dV) values between -1 and 1. Flavor-conserving ee NSI.")
"""
Explanation: Calculate the (minimum) chi-squared on a grid and save to file:
End of explanation
"""
pl.figure(figsize=(6,6))
#pl.contourf(DL, UL, delta_chi2_grid, levels=[0,1,2,3,4,5,6,7,8,9,10],cmap="Blues")
pl.contourf(DL, UL, delta_chi2_grid_ee, levels=[0,4.6],cmap="Blues")
#levels=[0,4.60]
#pl.colorbar()
pl.plot(0.0, 0.0,'k+', markersize=12.0, label="Standard Model")
pl.plot(BF[1], BF[0], 'ro', label="Best fit")
#pl.plot(-0.25, 0.5, 'ro')
pl.ylabel(r"$\epsilon_{ee}^{uV}$", fontsize=22.0)
pl.xlabel(r"$\epsilon_{ee}^{dV}$" ,fontsize=22.0)
pl.title(r"$90\%$ CL allowed regions", fontsize=16.0)
pl.legend(frameon=False, fontsize=12, numpoints=1)
pl.savefig("plots/COHERENT_NSI_ee.pdf", bbox_inches="tight")
pl.show()
"""
Explanation: Plot the 90% allowed regions:
End of explanation
"""
def NSI_corr_changing(eps_uV, eps_dV, A, Z):
SIN2THETAW = 0.2387
#Calculate standard weak nuclear charge (squared)
Qsq = 4.0*((A - Z)*(-0.5) + Z*(0.5 - 2*SIN2THETAW))**2
#Calculate the modified nuclear charge from NSI
Qsq_NSI = Qsq + 4.0*((A-Z)*(eps_uV + 2.0*eps_dV) + Z*(2.0*eps_uV + eps_dV))**2
return Qsq_NSI/Qsq
def deltachisq_NSI_emu(eps_uV, eps_dV):
#NB: bins_I and bins_Cs are calculated further up in the script (they are the SM signal prediction)
N_sig_I = (N_SM_I)*NSI_corr_changing(eps_uV, eps_dV, A_I, Z_I)
#Now signal events from Caesium
N_sig_Cs = (N_SM_Cs)*NSI_corr_changing(eps_uV, eps_dV, A_Cs, Z_Cs)
#Number of signal events
N_NSI = np.sum(N_sig_I + N_sig_Cs)
return deltachi2_Nsig(N_NSI)
"""
Explanation: Flavour-changing NSI ($e\mu$)
Now the correction to the CEvNS rate from flavor-changing NSI ($e\mu$-type):
End of explanation
"""
Ngrid = 101
ulist = np.linspace(-1.0, 1.0, Ngrid)
dlist = np.linspace(-1.0, 1.0, Ngrid)
UL, DL = np.meshgrid(ulist, dlist)
delta_chi2_grid_emu = 0.0*UL
#Not very elegant loop
for i in tqdm(range(Ngrid)):
for j in range(Ngrid):
delta_chi2_grid_emu[i,j] = deltachisq_NSI_emu(UL[i,j], DL[i,j])
#Find best-fit point
ind_BF = np.argmin(delta_chi2_grid_emu)
BF = [UL.flatten()[ind_BF], DL.flatten()[ind_BF]]
print("Best fit point: ", BF)
np.savetxt("results/COHERENT_NSI_deltachi2_emu.txt", delta_chi2_grid_emu, header="101x101 grid, corresponding to (uV, dV) values between -1 and 1.")
pl.figure(figsize=(6,6))
#pl.contourf(DL, UL, delta_chi2_grid, levels=[0,1,2,3,4,5,6,7,8,9,10],cmap="Blues")
pl.contourf(DL, UL, delta_chi2_grid_emu, levels=[0,4.6],cmap="Blues")
#levels=[0,4.60]
#pl.colorbar()
pl.plot(0.0, 0.0,'k+', markersize=12.0, label="Standard Model")
pl.plot(BF[1], BF[0], 'ro', label="Best fit")
#pl.plot(-0.25, 0.5, 'ro')
pl.ylabel(r"$\epsilon_{e\mu}^{uV}$", fontsize=22.0)
pl.xlabel(r"$\epsilon_{e\mu}^{dV}$" ,fontsize=22.0)
pl.title(r"$90\%$ CL allowed regions", fontsize=16.0)
pl.legend(frameon=False, fontsize=12, numpoints=1)
pl.savefig("plots/COHERENT_NSI_emu.pdf", bbox_inches="tight")
pl.show()
"""
Explanation: Calculate the $\Delta \chi^2$ over a grid and save the results to file
End of explanation
"""
def deltachisq_NSI_etau(eps_uV, eps_dV):
#NB: bins_I and bins_Cs are calculated further up in the script (they are the SM signal prediction)
#Signal events from Iodine (with NSI correction only applying to electron neutrino events)
N_sig_I = (N_SM_I + (NSI_corr_changing(eps_uV, eps_dV, A_I, Z_I) - 1.0)*bins_I_e)
#Now signal events from Caesium
N_sig_Cs = (N_SM_Cs + (NSI_corr_changing(eps_uV, eps_dV, A_Cs, Z_Cs) - 1.0)*bins_Cs_e)
#Number of signal events
N_NSI = np.sum(N_sig_I + N_sig_Cs)
return deltachi2_Nsig(N_NSI)
Ngrid = 101
ulist = np.linspace(-1.0, 1.0, Ngrid)
dlist = np.linspace(-1.0, 1.0, Ngrid)
UL, DL = np.meshgrid(ulist, dlist)
delta_chi2_grid_etau = 0.0*UL
#Not very elegant loop
for i in tqdm(range(Ngrid)):
for j in range(Ngrid):
delta_chi2_grid_etau[i,j] = deltachisq_NSI_etau(UL[i,j], DL[i,j])
#Find best-fit point
ind_BF = np.argmin(delta_chi2_grid_etau)
BF = [UL.flatten()[ind_BF], DL.flatten()[ind_BF]]
print("Best fit point: ", BF)
np.savetxt("results/COHERENT_NSI_deltachi2_etau.txt", delta_chi2_grid_etau, header="101x101 grid, corresponding to (uV, dV) values between -1 and 1.")
pl.figure(figsize=(6,6))
#pl.contourf(DL, UL, delta_chi2_grid, levels=[0,1,2,3,4,5,6,7,8,9,10],cmap="Blues")
pl.contourf(DL, UL, delta_chi2_grid_etau, levels=[0,4.6],cmap="Blues")
#levels=[0,4.60]
#pl.colorbar()
pl.plot(0.0, 0.0,'k+', markersize=12.0, label="Standard Model")
pl.plot(BF[1], BF[0], 'ro', label="Best fit")
#pl.plot(-0.25, 0.5, 'ro')
pl.ylabel(r"$\epsilon_{e\tau}^{uV}$", fontsize=22.0)
pl.xlabel(r"$\epsilon_{e\tau}^{dV}$" ,fontsize=22.0)
pl.title(r"$90\%$ CL allowed regions", fontsize=16.0)
pl.legend(frameon=False, fontsize=12, numpoints=1)
pl.savefig("plots/COHERENT_NSI_etau.pdf", bbox_inches="tight")
pl.show()
"""
Explanation: Flavour-changing NSI ($e\tau$)
Finally, allowed regions for Flavour-changing NSI ($e\tau$-type)
End of explanation
"""
#Calculate the number of neutrino magnetic moment scattering events
#assuming a universal magnetic moment (in units of 1e-12 mu_B)
diffRate_mag = np.vectorize(CEvNS.differentialRate_magnetic)
dRdPE_mag = lambda x: (1.0/PEperkeV)*efficiency(x)*mass*time*(f_Cs*diffRate_mag(x/PEperkeV, A_Cs, Z_Cs, 1e-12)\
+ f_I*diffRate_mag(x/PEperkeV, A_I, Z_I, 1e-12))
N_mag = quad(dRdPE_mag, 0, 50)[0]
print("Number of magnetic moment signal events (for mu_nu = 1e-12 mu_B):", N_mag)
def deltachisq_mag(mu_nu):
#Signal events is sum of standard CEvNS + magnetic moment events
N_sig = np.sum(N_SM_tot) + N_mag*(mu_nu/1e-12)**2
return deltachi2_Nsig(N_sig)
"""
Explanation: Limits on the neutrino magnetic moment
Now let's calculate a limit on the neutrino magnetic moment (again, from a crude single-bin $\chi^2$).
End of explanation
"""
Ngrid = 501
maglist = np.logspace(-12, -6, Ngrid)
deltachi2_list_mag = 0.0*maglist
#Not very elegant loop
for i in tqdm(range(Ngrid)):
deltachi2_list_mag[i] = deltachisq_mag(maglist[i])
upper_limit = maglist[deltachi2_list_mag > 2.706][0]
print("90% upper limit: ", upper_limit)
"""
Explanation: Scan over a grid:
End of explanation
"""
pl.figure(figsize=(6,6))
pl.semilogx(maglist, deltachi2_list_mag, linewidth=2.0)
#pl.ylim(0, 25)
pl.axhline(2.706, linestyle='--', color='k')
pl.axvline(upper_limit, linestyle=':', color='k')
pl.text(1e-11, 3, "90% CL")
pl.ylabel(r"$\Delta \chi^2$")
pl.xlabel(r"Neutrino magnetic moment, $\mu_{\nu} / \mu_B$")
pl.savefig("plots/COHERENT_magnetic.pdf", bbox_inches="tight")
pl.show()
"""
Explanation: Do some plotting:
End of explanation
"""
def tabulate_rate( m_med):
vector_rate = lambda x, gsq: (1.0/PEperkeV)*efficiency(x)*mass*time*(f_Cs*CEvNS.differentialRate_CEvNS(x/PEperkeV, A_Cs, Z_Cs,gsq,m_med)\
+ f_I*CEvNS.differentialRate_CEvNS(x/PEperkeV, A_I, Z_I, gsq,m_med))
alpha = 1.0
PE_min = 4.0
PE_max = 50.0
Nvals = 500
PEvals = np.logspace(np.log10(PE_min), np.log10(PE_max),Nvals)
Rvals_A = [np.sqrt(vector_rate(PEvals[i], 0)) for i in range(Nvals)]
Rvals_B = [(1.0/(4.0*alpha*Rvals_A[i]))*(vector_rate(PEvals[i], alpha) - vector_rate(PEvals[i], -alpha)) for i in range(Nvals)]
tabrate_A = InterpolatedUnivariateSpline(PEvals, Rvals_A, k = 1)
tabrate_B = InterpolatedUnivariateSpline(PEvals, Rvals_B, k = 1)
return tabrate_A, tabrate_B
def N_sig_vector(gsq, m_med):
integrand = lambda x: (1.0/PEperkeV)*efficiency(x)*mass*time*(f_Cs*CEvNS.differentialRate_CEvNS(x/PEperkeV, A_Cs, Z_Cs,gsq,m_med)\
+ f_I*CEvNS.differentialRate_CEvNS(x/PEperkeV, A_I, Z_I, gsq,m_med))
xlist = np.linspace(4,50,100)
integ_vals = np.vectorize(integrand)(xlist)
return np.trapz(integ_vals, xlist)
def N_sig_vector_tab(gsq, tabrate_A, tabrate_B):
integrand = lambda x: (tabrate_A(x) + tabrate_B(x)*gsq)**2.0
xlist = np.linspace(4,50,100)
integ_vals = np.vectorize(integrand)(xlist)
return np.trapz(integ_vals, xlist)
#return quad(integrand, 4.0, 50, epsabs=0.01)[0]
def tabulate_Nsig(tabrate_A, tabrate_B):
N_A = N_sig_vector_tab(0, tabrate_A, tabrate_B)
N_C = 0.5*(N_sig_vector_tab(1.0, tabrate_A, tabrate_B) + N_sig_vector_tab(-1.0, tabrate_A, tabrate_B))- N_A
N_B = N_sig_vector_tab(1.0, tabrate_A, tabrate_B) - N_A - N_C
return N_A, N_B, N_C
def N_sig_fulltab(gsq, Nsig_A, Nsig_B, Nsig_C):
return Nsig_A + gsq*Nsig_B + gsq**2*Nsig_C
#Calculate the number of signal events for a 1000 MeV Z', with coupling 1e-4 by doing:
rate_A, rate_B = tabulate_rate(1000)
N_A, N_B,N_C = tabulate_Nsig(rate_A, rate_B)
#N_sig_vector_tab(1e-4, rate_A, rate_B)
N_sig_fulltab(1e-4, N_A, N_B, N_C)
"""
Explanation: Limits on new vector mediators
First, let's calculate the total number of signal events at a given mediator mass and coupling...
It takes a while to recalculate the number of signal events for each mediator mass and coupling, so we'll do some rescaling and interpolation trickery:
End of explanation
"""
gsq_list = np.append(np.logspace(0, 2, 100),1e20)
m_list = np.sort(np.append(np.logspace(-2, 4,49), [1e-6,1e8]))
#Need to search for the limit in a narrow band of coupling values
g_upper = 1e-11*(50**2+m_list**2)
g_lower = 1e-13*(50**2+m_list**2)
deltachi2_vec_grid = np.zeros((51, 101))
for i in tqdm(range(len(m_list))):
rate_A, rate_B = tabulate_rate(m_list[i])
N_A, N_B,N_C = tabulate_Nsig(rate_A, rate_B)
for j, gsq in enumerate(gsq_list):
N_sig = N_sig_fulltab(gsq*g_lower[i], N_A, N_B, N_C)
deltachi2_vec_grid[i, j] = deltachi2_Nsig(N_sig)
mgrid, ggrid = np.meshgrid(m_list, gsq_list, indexing='ij')
ggrid *= 1e-13*(50**2 + mgrid**2)
np.savetxt("results/COHERENT_Zprime.txt", np.c_[mgrid.flatten(), ggrid.flatten(), deltachi2_vec_grid.flatten()])
pl.figure(figsize=(6,6))
pl.loglog(m_list, g_upper, 'k--')
pl.loglog(m_list, g_lower, 'k--')
pl.contourf(mgrid, ggrid, deltachi2_vec_grid, levels=[2.7,1e10],cmap="Blues")
pl.ylim(1e-10, 1e5)
#pl.colorbar()
pl.xlabel(r"$m_{Z'}$ [MeV]")
pl.ylabel(r"$g_{Z'}^2$")
pl.title("Blue region (and above) is excluded...", fontsize=12)
pl.savefig("plots/COHERENT_Zprime.pdf")
pl.show()
"""
Explanation: Now we scan over a grid in $g^2$ and $m_V$ to calculate the $\chi^2$ at each point:
End of explanation
"""
def calc_Nsig_scalar(m_med):
scalar_rate = lambda x: (1.0/PEperkeV)*efficiency(x)*mass*time*(f_Cs*CEvNS.differentialRate_scalar(x/PEperkeV, A_Cs, Z_Cs,1,m_med)\
+ f_I*CEvNS.differentialRate_scalar(x/PEperkeV, A_I, Z_I, 1,m_med))
xlist = np.linspace(4,50,100)
integ_vals = np.vectorize(scalar_rate)(xlist)
return np.trapz(integ_vals, xlist)
#return quad(scalar_rate, PE_min, PE_max)[0]
"""
Explanation: Limits on a new scalar mediator
Finally, let's look at limits on the couplings of a new scalar mediator $\phi$. We start by calculating the contribution to the number of signal events for a given mediator mass (this can be rescaled by the coupling $g_\phi^4$ later):
End of explanation
"""
m_list = np.logspace(-3, 7,50)
gsq_list = np.logspace(0, 4, 50)
#Again, need to search in a specific range of coupling values to find the limit...
g_upper = 1e-10*(50**2+m_list**2)
g_lower = 1e-14*(50**2+m_list**2)
deltachi2_scal_grid = np.zeros((len(m_list), len(gsq_list)))
for i in tqdm(range(len(m_list))):
Nsig_scalar = calc_Nsig_scalar(m_list[i])
for j in range(len(gsq_list)):
deltachi2_scal_grid[i,j] = deltachi2_Nsig(np.sum(N_SM_tot) + Nsig_scalar*(gsq_list[j]*g_lower[i])**2)
mgrid, ggrid = np.meshgrid(m_list, gsq_list, indexing='ij')
ggrid *= 1e-14*(50**2+mgrid**2)
np.savetxt("results/COHERENT_scalar.txt", np.c_[mgrid.flatten(), ggrid.flatten(), deltachi2_scal_grid.flatten()])
pl.figure(figsize=(6,6))
pl.loglog(m_list, g_upper, 'k--')
pl.loglog(m_list, g_lower, 'k--')
pl.contourf(mgrid, ggrid, deltachi2_scal_grid, levels=[2.7,1e10],cmap="Blues")
#pl.colorbar()
pl.xlabel(r"$m_{\phi}$ [MeV]")
pl.ylabel(r"$g_{\phi}^2$")
pl.title("Blue region (and above) is excluded...", fontsize=12)
pl.savefig("plots/COHERENT_scalar.pdf")
pl.show()
"""
Explanation: Now grid-scan to get the $\Delta \chi^2$:
End of explanation
"""
|
GoogleCloudPlatform/vertex-ai-samples | notebooks/community/migration/UJ11 HyperParameter Tuning Training Job with TensorFlow.ipynb | apache-2.0 | ! pip3 install -U google-cloud-aiplatform --user
"""
Explanation: Vertex SDK: Submit a HyperParameter tuning training job with TensorFlow
Installation
Install the latest (preview) version of Vertex SDK.
End of explanation
"""
! pip3 install google-cloud-storage
"""
Explanation: Install the Google cloud-storage library as well.
End of explanation
"""
import os
if not os.getenv("AUTORUN"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the Kernel
Once you've installed the Vertex SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
"""
Explanation: Before you begin
GPU run-time
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
Google Cloud SDK is already installed in Google Cloud Notebooks.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
"""
REGION = "us-central1" # @param {type: "string"}
"""
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend when possible, to choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You cannot use a Multi-Regional Storage bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see Region support for Vertex AI services
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on the resources created, you create a timestamp for each session and append it to the names of the resources created in this tutorial.
End of explanation
"""
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your Google Cloud account. This provides access
# to your Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Vertex, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this tutorial in a notebook locally, replace the string
# below with the path to your service account key and run this cell to
# authenticate your Google Cloud account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json
# Log in to your account on Google Cloud
! gcloud auth login
"""
Explanation: Authenticate your GCP account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
Note: If you are on a Vertex notebook and run the cell, the cell knows to skip executing the authentication steps.
End of explanation
"""
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
End of explanation
"""
! gsutil mb -l $REGION gs://$BUCKET_NAME
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil ls -al gs://$BUCKET_NAME
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
import os
import sys
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
"""
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex SDK
Import the Vertex SDK into our Python environment.
End of explanation
"""
# API Endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex AI location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
"""
Explanation: Vertex AI constants
Setup up the following constants for Vertex AI:
API_ENDPOINT: The Vertex AI API service endpoint for dataset, model, job, pipeline and endpoint services.
API_PREDICT_ENDPOINT: The Vertex AI API service endpoint for prediction.
PARENT: The Vertex AI location root path for dataset, model and endpoint resources.
End of explanation
"""
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
def create_job_client():
client = aip.JobServiceClient(client_options=client_options)
return client
clients = {}
clients["model"] = create_model_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
clients["job"] = create_job_client()
for client in clients.items():
print(client)
"""
Explanation: Clients
The Vertex SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (Vertex).
You will use several clients in this tutorial, so set them all up upfront.
Dataset Service for managed datasets.
Model Service for managed models.
Pipeline Service for training.
Endpoint Service for deployment.
Job Service for batch jobs and custom training.
Prediction Service for serving. Note: Prediction has a different service endpoint.
End of explanation
"""
# Make folder for python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\
tag_build =\n\
tag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\
# Requires TensorFlow Datasets\n\
setuptools.setup(\n\
install_requires=[\n\
'tensorflow_datasets==1.3.0',\n\
],\n\
packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\
Name: Hyperparameter Tuning - Boston Housing\n\
Version: 0.0.0\n\
Summary: Demonstration hyperparameter tuning script\n\
Home-page: www.google.com\n\
Author: Google\n\
Author-email: aferlitsch@gmail.com\n\
License: Public\n\
Description: Demo\n\
Platform: Vertex AI"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
"""
Explanation: Prepare a trainer script
Package assembly
End of explanation
"""
%%writefile custom/trainer/task.py
# Hyperparameter tuning for Boston Housing
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
from hypertune import HyperTune
import numpy as np
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default='/tmp/saved_model', type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.001, type=float,
help='Learning rate.')
parser.add_argument('--units', dest='units',
default=64, type=int,
help='Number of units.')
parser.add_argument('--epochs', dest='epochs',
default=20, type=int,
help='Number of epochs.')
parser.add_argument('--param-file', dest='param_file',
default='/tmp/param.txt', type=str,
help='Output file for parameters')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
def make_dataset():
# Scale each of the 13 Boston Housing features
def scale(feature):
max_val = np.max(feature)
feature = (feature / max_val).astype(float)  # np.float is deprecated in recent NumPy
return feature, max_val
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data(
path="boston_housing.npz", test_split=0.2, seed=113
)
params = []
for i in range(13):
# Scale feature columns (not sample rows); reuse the training-set maximum for the test set
x_train[:, i], max_val = scale(x_train[:, i])
x_test[:, i] = x_test[:, i] / max_val
params.append(max_val)
# store the normalization (max) value for each feature
with tf.io.gfile.GFile(args.param_file, 'w') as f:
f.write(str(params))
return (x_train, y_train), (x_test, y_test)
# Build the Keras model
def build_and_compile_dnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(args.units, activation='relu', input_shape=(13,)),
tf.keras.layers.Dense(args.units, activation='relu'),
tf.keras.layers.Dense(1, activation='linear')
])
model.compile(
loss='mse',
optimizer=tf.keras.optimizers.RMSprop(learning_rate=args.lr))
return model
model = build_and_compile_dnn_model()
# Instantiate the HyperTune reporting object
hpt = HyperTune()
# Reporting callback
class HPTCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
global hpt
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='val_loss',
metric_value=logs['val_loss'],
global_step=epoch)
# Train the model
BATCH_SIZE = 16
(x_train, y_train), (x_test, y_test) = make_dataset()
model.fit(x_train, y_train, epochs=args.epochs, batch_size=BATCH_SIZE, validation_split=0.1, callbacks=[HPTCallback()])
model.save(args.model_dir)
"""
Explanation: Task.py contents
End of explanation
"""
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz gs://$BUCKET_NAME/hpt_boston_housing.tar.gz
"""
Explanation: Store training script on your Cloud Storage bucket
End of explanation
"""
JOB_NAME = "hyperparameter_tuning_" + TIMESTAMP
WORKER_POOL_SPEC = [
{
"replica_count": 1,
"machine_spec": {"machine_type": "n1-standard-4", "accelerator_count": 0},
"python_package_spec": {
"executor_image_uri": "gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest",
"package_uris": ["gs://" + BUCKET_NAME + "/hpt_boston_housing.tar.gz"],
"python_module": "trainer.task",
"args": ["--model-dir=" + "gs://{}/{}".format(BUCKET_NAME, JOB_NAME)],
},
}
]
STUDY_SPEC = {
"metrics": [
{"metric_id": "val_loss", "goal": aip.StudySpec.MetricSpec.GoalType.MINIMIZE}
],
"parameters": [
{
"parameter_id": "lr",
"discrete_value_spec": {"values": [0.001, 0.01, 0.1]},
"scale_type": aip.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE,
},
{
"parameter_id": "units",
"integer_value_spec": {"min_value": 32, "max_value": 256},
"scale_type": aip.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE,
},
],
"algorithm": aip.StudySpec.Algorithm.RANDOM_SEARCH,
}
hyperparameter_tuning_job = aip.HyperparameterTuningJob(
display_name=JOB_NAME,
trial_job_spec={"worker_pool_specs": WORKER_POOL_SPEC},
study_spec=STUDY_SPEC,
max_trial_count=6,
parallel_trial_count=1,
)
print(
MessageToJson(
aip.CreateHyperparameterTuningJobRequest(
parent=PARENT, hyperparameter_tuning_job=hyperparameter_tuning_job
).__dict__["_pb"]
)
)
"""
Explanation: Train a model
projects.locations.hyperparameterTuningJob.create
Request
End of explanation
"""
request = clients["job"].create_hyperparameter_tuning_job(
parent=PARENT, hyperparameter_tuning_job=hyperparameter_tuning_job
)
"""
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"hyperparameterTuningJob": {
"displayName": "hyperparameter_tuning_20210226020029",
"studySpec": {
"metrics": [
{
"metricId": "val_loss",
"goal": "MINIMIZE"
}
],
"parameters": [
{
"parameterId": "lr",
"discreteValueSpec": {
"values": [
0.001,
0.01,
0.1
]
},
"scaleType": "UNIT_LINEAR_SCALE"
},
{
"parameterId": "units",
"integerValueSpec": {
"minValue": "32",
"maxValue": "256"
},
"scaleType": "UNIT_LINEAR_SCALE"
}
],
"algorithm": "RANDOM_SEARCH"
},
"maxTrialCount": 6,
"parallelTrialCount": 1,
"trialJobSpec": {
"workerPoolSpecs": [
{
"machineSpec": {
"machineType": "n1-standard-4"
},
"replicaCount": "1",
"pythonPackageSpec": {
"executorImageUri": "gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest",
"packageUris": [
"gs://migration-ucaip-trainingaip-20210226020029/hpt_boston_housing.tar.gz"
],
"pythonModule": "trainer.task",
"args": [
"--model-dir=gs://migration-ucaip-trainingaip-20210226020029/hyperparameter_tuning_20210226020029"
]
}
}
]
}
}
}
Call
End of explanation
"""
print(MessageToJson(request.__dict__["_pb"]))
"""
Explanation: Response
End of explanation
"""
# The full unique ID for the hyperparameter tuning job
hyperparameter_tuning_id = request.name
# The short numeric ID for the hyperparameter tuning job
hyperparameter_tuning_short_id = hyperparameter_tuning_id.split("/")[-1]
print(hyperparameter_tuning_id)
"""
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/hyperparameterTuningJobs/5264408897233354752",
"displayName": "hyperparameter_tuning_20210226020029",
"studySpec": {
"metrics": [
{
"metricId": "val_loss",
"goal": "MINIMIZE"
}
],
"parameters": [
{
"parameterId": "lr",
"discreteValueSpec": {
"values": [
0.001,
0.01,
0.1
]
},
"scaleType": "UNIT_LINEAR_SCALE"
},
{
"parameterId": "units",
"integerValueSpec": {
"minValue": "32",
"maxValue": "256"
},
"scaleType": "UNIT_LINEAR_SCALE"
}
],
"algorithm": "RANDOM_SEARCH"
},
"maxTrialCount": 6,
"parallelTrialCount": 1,
"trialJobSpec": {
"workerPoolSpecs": [
{
"machineSpec": {
"machineType": "n1-standard-4"
},
"replicaCount": "1",
"diskSpec": {
"bootDiskType": "pd-ssd",
"bootDiskSizeGb": 100
},
"pythonPackageSpec": {
"executorImageUri": "gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest",
"packageUris": [
"gs://migration-ucaip-trainingaip-20210226020029/hpt_boston_housing.tar.gz"
],
"pythonModule": "trainer.task",
"args": [
"--model-dir=gs://migration-ucaip-trainingaip-20210226020029/hyperparameter_tuning_20210226020029"
]
}
}
]
},
"state": "JOB_STATE_PENDING",
"createTime": "2021-02-26T02:02:02.787187Z",
"updateTime": "2021-02-26T02:02:02.787187Z"
}
End of explanation
"""
request = clients["job"].get_hyperparameter_tuning_job(name=hyperparameter_tuning_id)
"""
Explanation: projects.locations.hyperparameterTuningJob.get
Call
End of explanation
"""
print(MessageToJson(request.__dict__["_pb"]))
"""
Explanation: Response
End of explanation
"""
while True:
response = clients["job"].get_hyperparameter_tuning_job(
name=hyperparameter_tuning_id
)
if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
print("Study trials have not completed:", response.state)
if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
break
else:
print("Study trials have completed:", response.end_time - response.start_time)
break
time.sleep(20)
"""
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/hyperparameterTuningJobs/5264408897233354752",
"displayName": "hyperparameter_tuning_20210226020029",
"studySpec": {
"metrics": [
{
"metricId": "val_loss",
"goal": "MINIMIZE"
}
],
"parameters": [
{
"parameterId": "lr",
"discreteValueSpec": {
"values": [
0.001,
0.01,
0.1
]
},
"scaleType": "UNIT_LINEAR_SCALE"
},
{
"parameterId": "units",
"integerValueSpec": {
"minValue": "32",
"maxValue": "256"
},
"scaleType": "UNIT_LINEAR_SCALE"
}
],
"algorithm": "RANDOM_SEARCH"
},
"maxTrialCount": 6,
"parallelTrialCount": 1,
"trialJobSpec": {
"workerPoolSpecs": [
{
"machineSpec": {
"machineType": "n1-standard-4"
},
"replicaCount": "1",
"diskSpec": {
"bootDiskType": "pd-ssd",
"bootDiskSizeGb": 100
},
"pythonPackageSpec": {
"executorImageUri": "gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest",
"packageUris": [
"gs://migration-ucaip-trainingaip-20210226020029/hpt_boston_housing.tar.gz"
],
"pythonModule": "trainer.task",
"args": [
"--model-dir=gs://migration-ucaip-trainingaip-20210226020029/hyperparameter_tuning_20210226020029"
]
}
}
]
},
"state": "JOB_STATE_PENDING",
"createTime": "2021-02-26T02:02:02.787187Z",
"updateTime": "2021-02-26T02:02:02.787187Z"
}
Wait for the study to complete
End of explanation
"""
best = (None, None, None, None)
response = clients["job"].get_hyperparameter_tuning_job(name=hyperparameter_tuning_id)
for trial in response.trials:
    print(MessageToJson(trial.__dict__["_pb"]))
    # Keep track of the best (lowest val_loss) outcome, since the study goal is MINIMIZE
    try:
        val_loss = float(trial.final_measurement.metrics[0].value)
        if best[3] is None or val_loss < best[3]:
            best = (
                trial.id,
                float(trial.parameters[0].value),
                float(trial.parameters[1].value),
                val_loss,
            )
    except Exception:
        pass
print()
print("ID", best[0])
print("Learning Rate", best[1])
print("Units", best[2])
print("Validation Loss", best[3])
"""
Explanation: Review the results of the study
End of explanation
"""
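As an aside, the trial-selection step can be exercised offline on plain dictionaries shaped like the trial JSON shown in this notebook's example outputs. The `best_trial` helper and the sample data below are illustrative sketches only, not part of the Vertex AI client API:

```python
def best_trial(trials, metric_id="val_loss"):
    """Return the trial dict whose final measurement has the lowest value for metric_id."""
    best = None
    for trial in trials:
        # pull out the final value recorded for the requested metric
        value = next(m["value"] for m in trial["finalMeasurement"]["metrics"]
                     if m["metricId"] == metric_id)
        if best is None or value < best[0]:
            best = (value, trial)
    return best[1] if best is not None else None

# Hypothetical trials mirroring the shape of the example output
trials = [
    {"id": "1", "finalMeasurement": {"metrics": [{"metricId": "val_loss", "value": 46.6}]}},
    {"id": "5", "finalMeasurement": {"metrics": [{"metricId": "val_loss", "value": 24.9}]}},
]
print(best_trial(trials)["id"])  # -> 5
```

Because the study goal is MINIMIZE, the trial with the lowest final val_loss wins.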
delete_hpt_job = True
delete_bucket = True
# Delete the hyperparameter tuning job using the Vertex AI fully qualified identifier for the job
try:
if delete_hpt_job:
clients["job"].delete_hyperparameter_tuning_job(name=hyperparameter_tuning_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r gs://$BUCKET_NAME
"""
Explanation: Example output:
```
{
"id": "1",
"state": "SUCCEEDED",
"parameters": [
{
"parameterId": "lr",
"value": 0.1
},
{
"parameterId": "units",
"value": 80.0
}
],
"finalMeasurement": {
"stepCount": "19",
"metrics": [
{
"metricId": "val_loss",
"value": 46.61515110294993
}
]
},
"startTime": "2021-02-26T02:05:16.935353384Z",
"endTime": "2021-02-26T02:12:44Z"
}
{
"id": "2",
"state": "SUCCEEDED",
"parameters": [
{
"parameterId": "lr",
"value": 0.01
},
{
"parameterId": "units",
"value": 45.0
}
],
"finalMeasurement": {
"stepCount": "19",
"metrics": [
{
"metricId": "val_loss",
"value": 32.55313952376203
}
]
},
"startTime": "2021-02-26T02:15:31.357856840Z",
"endTime": "2021-02-26T02:24:18Z"
}
{
"id": "3",
"state": "SUCCEEDED",
"parameters": [
{
"parameterId": "lr",
"value": 0.1
},
{
"parameterId": "units",
"value": 70.0
}
],
"finalMeasurement": {
"stepCount": "19",
"metrics": [
{
"metricId": "val_loss",
"value": 42.709188321741614
}
]
},
"startTime": "2021-02-26T02:26:40.704476222Z",
"endTime": "2021-02-26T02:34:21Z"
}
{
"id": "4",
"state": "SUCCEEDED",
"parameters": [
{
"parameterId": "lr",
"value": 0.01
},
{
"parameterId": "units",
"value": 173.0
}
],
"finalMeasurement": {
"stepCount": "17",
"metrics": [
{
"metricId": "val_loss",
"value": 46.12480219399057
}
]
},
"startTime": "2021-02-26T02:37:45.275581053Z",
"endTime": "2021-02-26T02:51:07Z"
}
{
"id": "5",
"state": "SUCCEEDED",
"parameters": [
{
"parameterId": "lr",
"value": 0.01
},
{
"parameterId": "units",
"value": 223.0
}
],
"finalMeasurement": {
"stepCount": "19",
"metrics": [
{
"metricId": "val_loss",
"value": 24.875632611716664
}
]
},
"startTime": "2021-02-26T02:53:32.612612421Z",
"endTime": "2021-02-26T02:54:19Z"
}
{
"id": "6",
"state": "SUCCEEDED",
"parameters": [
{
"parameterId": "lr",
"value": 0.1
},
{
"parameterId": "units",
"value": 123.0
}
],
"finalMeasurement": {
"stepCount": "13",
"metrics": [
{
"metricId": "val_loss",
"value": 43.352300690441595
}
]
},
"startTime": "2021-02-26T02:56:47.323707459Z",
"endTime": "2021-02-26T03:03:49Z"
}
ID 5
Learning Rate 0.01
Units 223.0
Validation Loss 24.875632611716664
```
Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial.
End of explanation
"""
|
aldian/tensorflow | tensorflow/python/ops/numpy_ops/g3doc/TensorFlow_NumPy_Keras_and_Distribution_Strategy.ipynb | apache-2.0 | !pip install --quiet --upgrade tf-nightly
import tensorflow as tf
import tensorflow.experimental.numpy as tnp
# Creates 3 logical GPU devices for demonstrating distribution.
gpu_device = tf.config.list_physical_devices("GPU")[0]
tf.config.set_logical_device_configuration(
gpu_device, [tf.config.LogicalDeviceConfiguration(128)] * 3)
"""
Explanation: TensorFlow NumPy: Keras and Distribution Strategy
Overview
TensorFlow NumPy provides an implementation of a subset of the NumPy API on top of the TensorFlow backend. Please see the TF NumPy API documentation and
TensorFlow NumPy Guide.
This document shows how TensorFlow NumPy interoperates with TensorFlow's high-level APIs like Distribution Strategy and Keras.
Setup
End of explanation
"""
dense_layer = tf.keras.layers.Dense(5)
inputs = tnp.random.randn(2, 3).astype(tnp.float32)
outputs = dense_layer(inputs)
print("Shape:", outputs.shape)
print("Class:", outputs.__class__)
"""
Explanation: TF NumPy and Keras
TF NumPy can be used to create custom Keras layers. These layers interoperate with and behave like regular Keras layers. Here are some things to note to understand how these layers work.
Existing Keras layers can be invoked with ND Array inputs, in addition to other input types like tf.Tensor, np.ndarray, python literals, etc. All of these types are internally converted to a tf.Tensor before the layer's call method is invoked.
Existing Keras layers will continue to output tf.Tensor values. Custom layers could output ND Array or tf.Tensor.
Custom and existing Keras layers should be freely composable.
Check out the examples below that demonstrate the above.
ND Array inputs
Create and call an existing Keras layer with ND Array inputs. Note that the layer outputs a tf.Tensor.
End of explanation
"""
class ProjectionLayer(tf.keras.layers.Layer):
"""Linear projection layer using TF NumPy."""
def __init__(self, units):
super(ProjectionLayer, self).__init__()
self._units = units
def build(self, input_shape):
stddev = tnp.sqrt(self._units).astype(tnp.float32)
initial_value = tnp.random.randn(input_shape[1], self._units).astype(
tnp.float32) / stddev
# Note that TF NumPy can interoperate with tf.Variable.
self.w = tf.Variable(initial_value, trainable=True)
def call(self, inputs):
return tnp.matmul(inputs, self.w)
# Call with ndarray inputs
layer = ProjectionLayer(2)
tnp_inputs = tnp.random.randn(2, 4).astype(tnp.float32)
print("output:", layer(tnp_inputs))
# Call with tf.Tensor inputs
tf_inputs = tf.random.uniform([2, 4])
print("\noutput: ", layer(tf_inputs))
"""
Explanation: Custom Keras Layer
Create a new Keras layer as below using TensorFlow NumPy methods. Note that the layer's call method receives a tf.Tensor value as input. It can be converted to an ND array using tnp.asarray. However, this conversion may not be needed since TF NumPy APIs can handle tf.Tensor inputs.
End of explanation
"""
batch_size = 3
units = 5
model = tf.keras.Sequential([tf.keras.layers.Dense(units),
ProjectionLayer(2)])
print("Calling with ND Array inputs")
tnp_inputs = tnp.random.randn(batch_size, units).astype(tnp.float32)
output = model.call(tnp_inputs)
print("Output shape %s.\nOutput class: %s\n" % (output.shape, output.__class__))
print("Calling with tensor inputs")
tf_inputs = tf.convert_to_tensor(tnp_inputs)
output = model.call(tf_inputs)
print("Output shape %s.\nOutput class: %s" % (output.shape, output.__class__))
"""
Explanation: Composing layers
Next create a Keras model by composing the ProjectionLayer defined above with a Dense layer.
End of explanation
"""
# Initialize the strategy
gpus = tf.config.list_logical_devices("GPU")
print("Using following GPUs", gpus)
strategy = tf.distribute.MirroredStrategy(gpus)
"""
Explanation: Distributed Strategy: tf.distribution
TensorFlow NumPy Guide shows how tf.device API can be used to place individual operations on specific devices. Note that this works for remote devices as well.
TensorFlow also has higher level distribution APIs that make it easy to replicate computation across devices.
Here we will show how to place TensorFlow NumPy code in a Distribution Strategy context to easily perform replicated computation.
End of explanation
"""
@tf.function
def replica_fn():
replica_id = tf.distribute.get_replica_context().replica_id_in_sync_group
print("Running on device %s" % replica_id.device)
return tnp.asarray(replica_id) * 5
print(strategy.run(replica_fn).values)
"""
Explanation: Simple replication example
First try running a simple NumPy function in strategy context.
End of explanation
"""
# Test running the model in a distributed setting.
model = tf.keras.Sequential([tf.keras.layers.Dense(units), ProjectionLayer(2)])
@tf.function
def model_replica_fn():
inputs = tnp.random.randn(batch_size, units).astype(tnp.float32)
return model.call(inputs)
print("Outputs:\n", strategy.run(model_replica_fn).values)
"""
Explanation: Replicated model execution
Next run the model defined earlier under strategy scope.
End of explanation
"""
|
NII-cloud-operation/Jupyter-LC_wrapper | examples/Summarizing and Logging.ipynb | bsd-3-clause | !!from time import sleep
for i in range(0, 100):
print(i)
sleep(0.1)
"""
Explanation: Summarizing and Logging
An example of the Summarizing and Logging mode.
Enabling the Summarizing and Logging mode
To enable the Summarizing and Logging mode, you should add !! at the beginning of the code cell.
End of explanation
"""
%env lc_wrapper=4:4:4:4
!!from time import sleep
for i in range(0, 100):
print(i)
sleep(0.1)
"""
Explanation: You can configure the summarization settings via the environment variable lc_wrapper.
End of explanation
"""
!cat /notebooks/.log/20170704/20170704-071348-0190.log
"""
Explanation: The .log directory is created and the whole output is recorded in a log file in this directory.
The filename is shown in the output area, as above.
End of explanation
"""
def do_something():
return "output something"
do_something()
!!from time import sleep
for i in range(0, 100):
print(i)
sleep(0.1)
do_something()
!cat /notebooks/.log/20170704/20170704-071448-0119.log
"""
Explanation: Various Types of Execution Results
LC_wrapper records not only stream outputs but also execution results.
Plain Text in Execution Result
An execution result is recorded with stream outputs.
End of explanation
"""
!!from time import sleep
from datetime import datetime
import pandas as pd
items = []
for i in range(0, 100):
print(i)
sleep(0.1)
items.append((i, datetime.now()))
pd.DataFrame(items, columns=['Index', 'Datetime'])
!cat /notebooks/.log/20170704/20170704-071539-0790.log
"""
Explanation: HTML in Execution Result
An execution result can also contain HTML code...
End of explanation
"""
%matplotlib inline
!!from time import sleep
from datetime import datetime
import pandas as pd
items = []
for i in range(0, 100):
print(i)
sleep(0.1)
items.append((datetime.now(), i))
pd.DataFrame(items, columns=['Datetime', 'Index']).set_index('Datetime').plot()
!cat /notebooks/.log/20170704/20170704-071619-0567.log
"""
Explanation: Image in Execution Result
End of explanation
"""
!!from time import sleep
for i in range(0, 100):
print(i)
sleep(0.1)
# Always raises AssertionError
assert False
!cat /notebooks/.log/20170704/20170704-071647-0970.log
"""
Explanation: Errors
lc_wrapper can handle errors properly.
End of explanation
"""
|
QuantStack/quantstack-talks | 2019-01-10-ESRF/notebooks/01.0.ipywidgets.ipynb | bsd-3-clause | 10 * 10
def f(x):
print(x * x)
f(9)
from ipywidgets import *
from traitlets import dlink
interact(f, x=(0, 100));
"""
Explanation: Jupyter Interactive widgets
The notebook comes alive with interactive widgets:
Part of the Jupyter project
BSD Licensed
Installation for the legacy notebook:
bash
conda install -c conda-forge ipywidgets
Speeding up the bottleneck in the REPL
<img src="./images/Flow.svg"></img>
End of explanation
"""
slider = FloatSlider(
value=7.5,
min=5.0,
max=10.0,
step=0.1,
description='Input:',
)
slider
slider
slider.value = 9
text = FloatText(description='Value')
dlink((slider, 'value'), (text, 'value'))
text
slider
"""
Explanation: Interactive Jupyter widgets
End of explanation
"""
|
tbenthompson/tectosaur | examples/notebooks/fullspace_qd_plotter.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
import tectosaur as tct
import tectosaur.qd
import tectosaur.qd.plotting
tct.qd.configure(
gpu_idx = 0, # Which GPU to use if there are multiple. Best to leave as 0.
fast_plot = True, # Let's make fast, inexpensive figures. Set to false for higher resolution plots with latex fonts.
)
plt.style.use('default')
"""
Explanation: Quasidynamic earthquake simulation plotting
Here, we'll make some useful plots to see what happened in the QD simulation from fullspace_qd_run.ipynb.
First, let's import our tools!
End of explanation
"""
folder_name = 'data0'
data = tct.qd.load(folder_name, tct.qd.FullspaceModel)
"""
Explanation: Now, we load the data from the previous run. Check what folder was created! If you ran the simulation code multiple times, each time a new folder will be created in sequential order (data0, data1, data2, ...). This tct.qd.load function hides some of the data loading logic that was described at the end of fullspace_qd_run.ipynb.
End of explanation
"""
data.load_new_files()
"""
Explanation: It can be nice to make some figures while the simulation is still running. For long running, large simulations, it's expensive to reload all the data, so load_new_files() allows updating the data object with any new time steps that have been completed and saved. By default, results are saved in 100 time step chunks. Look in the data0 folder to see.
End of explanation
"""
qdp = tct.qd.plotting.QDPlotData(data)
"""
Explanation: Create the plotting object. This processes the data a bit to make fields like slip and velocity easier to plot.
End of explanation
"""
qdp.summary()
"""
Explanation: The summary() function makes four useful plots that show the overall evolution of the fault:
The minimum state variable value on the fault as a function of time.
The $log_{10}$ of the maximum slip rate on the fault as a function of time.
The time as a function of time step index.
The time step size as a function of time step index.
From this summary, we can see that as the fault evolved, there were some slow slip events of growing magnitude until, at approximately time 0.042, the fault ruptured for the first time. It ruptured again at time 0.047.
End of explanation
"""
qdp.nicefig(*qdp.V_info(99), dim = [0,2])
"""
Explanation: The qdp.V_info function provides the necessary values, levels, contour levels, and colormap to the qdp.nicefig function to make a handy figure of the state of the x component of slip rate at the 99th time step.
End of explanation
"""
video_name = qdp.qd_video(range(1, qdp.n_steps, 4), qdp.V_info, video_prefix = 'qd_video', dim = [0,2])
tct.qd.plotting.make_mp4(video_name)
"""
Explanation: Let's make a whole bunch of these figures and turn them into a video. We'll make a figure every 4th step and name the final video qd_video. This should create a qd_video0.mp4 file. Enjoy!
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.12/_downloads/plot_artifacts_correction_ssp.ipynb | bsd-3-clause | import numpy as np
import mne
from mne.datasets import sample
from mne.preprocessing import compute_proj_ecg, compute_proj_eog
# getting some data ready
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.pick_types(meg=True, ecg=True, eog=True, stim=True)
"""
Explanation: .. _tut_artifacts_correct_ssp:
Artifact Correction with SSP
End of explanation
"""
projs, events = compute_proj_ecg(raw, n_grad=1, n_mag=1, average=True)
print(projs)
ecg_projs = projs[-2:]
mne.viz.plot_projs_topomap(ecg_projs)
# Now for EOG
projs, events = compute_proj_eog(raw, n_grad=1, n_mag=1, average=True)
print(projs)
eog_projs = projs[-2:]
mne.viz.plot_projs_topomap(eog_projs)
"""
Explanation: Compute SSP projections
End of explanation
"""
raw.info['projs'] += eog_projs + ecg_projs
"""
Explanation: Apply SSP projections
MNE handles projections at the level of the measurement info,
so to register them, populate the 'projs' list that you find in raw.info
End of explanation
"""
events = mne.find_events(raw, stim_channel='STI 014')
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
# this can be highly data dependent
event_id = {'auditory/left': 1}
epochs_no_proj = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.5,
proj=False, baseline=(None, 0), reject=reject)
epochs_no_proj.average().plot(spatial_colors=True)
epochs_proj = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.5, proj=True,
baseline=(None, 0), reject=reject)
epochs_proj.average().plot(spatial_colors=True)
"""
Explanation: That was it. MNE will now apply the projections on demand at any later stage,
so watch out for proj parameters in functions, or apply them explicitly
with the .apply_proj method
Demonstrate SSP cleaning on some evoked data
End of explanation
"""
evoked = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.5,
proj='delayed', baseline=(None, 0),
reject=reject).average()
# set time instants in seconds (from 50 to 150ms in a step of 10ms)
times = np.arange(0.05, 0.15, 0.01)
evoked.plot_topomap(times, proj='interactive')
"""
Explanation: Looks cool, right? However, it is often not clear how many components you
should take, and unfortunately this can have bad consequences, as can be seen
interactively using the delayed SSP mode:
End of explanation
"""
|
DBWangGroupUNSW/COMP9318 | L4 - Optimal Histogram.ipynb | mit | LARGE_NUM = 1000000000.0
EMPTY = -1
DEBUG = 2
#DEBUG = 1
import numpy as np
def sse(arr):
if len(arr) == 0: # deal with arr == []
return 0.0
avg = np.average(arr)
val = sum( [(x-avg)*(x-avg) for x in arr] )
return val
def calc_depth(b):
return 5 - b
def v_opt_rec(xx, b):
mincost = LARGE_NUM
n = len(xx)
# check boundary condition:
if n < b:
return LARGE_NUM + 1
elif b == 1:
return sse(xx)
else: # the general case
if DEBUG > 1:
#print('.. BEGIN: input = {!s:<30}, b = {}'.format(xx, b))
print('..{}BEGIN: input = {!s:<30}, b = {}'.format(' '*calc_depth(b), xx, b))
for t in range(n):
prefix = xx[0 : t+1]
suffix = xx[t+1 : ]
cost = sse(prefix) + v_opt_rec(suffix, b - 1)
mincost = min(mincost, cost)
if DEBUG > 0:
#print('.. END: input = {!s:<32}, b = {}, mincost = {}'.format(xx, b, mincost))
print('..{}END: input = {!s:<32}, b = {}, mincost = {}'.format(' '*calc_depth(b), xx, b, mincost))
return mincost
"""
Explanation: Introduction
In this notebook, we experiment with the optimal histogram algorithm. We will implement a simple version based on recursion and you will do the hard job of implementing a dynamic programming-based version.
References:
* H. V. Jagadish, Nick Koudas, S. Muthukrishnan, Viswanath Poosala, Kenneth C. Sevcik, Torsten Suel: Optimal Histograms with Quality Guarantees. VLDB 1998: 275-286. (url: http://engineering.nyu.edu/~suel/papers/vopt.pdf)
* Dynamic Programming (wikipedia): https://en.wikipedia.org/wiki/Dynamic_programming
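The recurrence implemented by v_opt_rec can be written as (notation is ours, not the paper's):
$$\mathrm{OPT}(x_{1..n},\, b) \;=\; \min_{1 \le t \le n} \big[\, \mathrm{SSE}(x_{1..t}) + \mathrm{OPT}(x_{t+1..n},\, b-1) \,\big], \qquad \mathrm{OPT}(x_{1..n},\, 1) = \mathrm{SSE}(x_{1..n})$$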
End of explanation
"""
x = [7, 9, 13, 5]
b = 3
c = v_opt_rec(x, b)
print('optimal cost = {}'.format(c))
x = [1, 3, 9, 13, 17]
b = 4
c = v_opt_rec(x, b)
print('c = {}'.format(c))
x = [3, 1, 18, 9, 13, 17]
b = 4
c = v_opt_rec(x, b)
print('c = {}'.format(c))
x = [1, 2, 3, 4, 5, 6]
b = 4
c = v_opt_rec(x, b)
print('c = {}'.format(c))
"""
Explanation: Now, try to understand how the algorithm works -- feel free to modify the code to output more if you need. Specifically,
Observe and understand how the recursion works (set DEBUG = 2)
Observe and understand how many sub-problems are being solved again and again (set DEBUG = 1), especially when the input array is longer.
End of explanation
"""
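As a hint toward the dynamic-programming version, here is a sketch that removes the repeated sub-problems by memoizing the recursion on a (start, buckets) key. It is illustrative only (it is not the bottom-up table the exercise asks for), and it redefines the same LARGE_NUM sentinel used above so it is self-contained:

```python
from functools import lru_cache

LARGE_NUM = 1000000000.0  # same sentinel as above

def v_opt_memo(xx, b):
    """Same recurrence as v_opt_rec, but each (start, buckets) sub-problem is solved once."""
    n = len(xx)
    prefix = [0.0]     # running sums of x
    prefix_sq = [0.0]  # running sums of x**2
    for x in xx:
        prefix.append(prefix[-1] + x)
        prefix_sq.append(prefix_sq[-1] + x * x)

    def sse_range(i, j):
        # SSE of xx[i:j] in O(1): sum(x^2) - (sum(x))^2 / count
        if i >= j:
            return 0.0
        s = prefix[j] - prefix[i]
        return prefix_sq[j] - prefix_sq[i] - s * s / (j - i)

    @lru_cache(maxsize=None)
    def opt(start, buckets):
        if n - start < buckets:   # not enough items left to fill the buckets
            return LARGE_NUM + 1
        if buckets == 1:
            return sse_range(start, n)
        # first bucket is xx[start:t+1], recurse on the rest
        return min(sse_range(start, t + 1) + opt(t + 1, buckets - 1)
                   for t in range(start, n))

    return opt(0, b)
```

With memoization there are only O(n·b) distinct sub-problems, so the repeated calls you observed with DEBUG = 1 disappear.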
|
BBN-Q/Auspex | doc/examples/Example-SingleShot-Fid.ipynb | apache-2.0 | from QGL import *
from auspex.qubit import *
"""
Explanation: Example Q7: Single Shot Fidelity
This example notebook shows how to run single-shot fidelity experiments.
© Raytheon BBN Technologies 2019
End of explanation
"""
cl = ChannelLibrary("my_config")
pl = PipelineManager()
"""
Explanation: We use a pre-existing database containing a channel library and pipeline we have established.
End of explanation
"""
spec_an = cl.new_spectrum_analzyer("SpecAn", "ASRL/dev/ttyACM0::INSTR", cl["spec_an_LO"])
cal = MixerCalibration(q2, spec_an, mixer="measure")
cal.calibrate()
"""
Explanation: Calibrating Mixers
The APS2 requires mixers to upconvert to qubit and cavity frequencies. We must tune the offset of these mixers and the amplitude factors of the quadrature channels to ensure the best possible results. We repeat the definition of the spectrum analyzer here, assuming that the LO driving this instrument is present in the channel library as spec_an_LO.
End of explanation
"""
|
sangheestyle/ml2015project | howto/model11_GMM_fixed.ipynb | mit | import gzip
import pickle
from os import path
from collections import defaultdict
from numpy import sign
"""
Load buzz data as a dictionary.
You can pass a list for the data parameter so that you load only what you need.
"""
def load_buzz(root='../data', data=['train', 'test', 'questions'], format='pklz'):
buzz_data = {}
for ii in data:
file_path = path.join(root, ii + "." + format)
with gzip.open(file_path, "rb") as fp:
buzz_data[ii] = pickle.load(fp)
return buzz_data
"""
Explanation: Model10: GMM
A. Functions
There are four different kinds of functions.
Data reader: Reads data from file.
Feature functions (private): Functions which extract features are placed here. If you write a new feature function, add it here.
Feature function (public): We use only this function for feature extraction.
Utility functions: All the functions except those mentioned above are placed here.
Data reader
End of explanation
"""
from numpy import sign, abs
def _feat_basic(bd, group):
X = []
for item in bd[group].items():
qid = item[1]['qid']
q = bd['questions'][qid]
#item[1]['q_length'] = max(q['pos_token'].keys())
item[1]['q_length'] = len(q['question'].split())
item[1]['category'] = q['category'].lower()
item[1]['answer'] = q['answer'].lower()
X.append(item[1])
return X
def _feat_sign_val(data):
for item in data:
item['sign_val'] = sign(item['position'])
def _get_pos(bd, sign_val=None):
    # Note: the bd passed here is bd['train'], not the full buzz-data dict
unwanted_index = []
pos_uid = defaultdict(list)
pos_qid = defaultdict(list)
for index, key in enumerate(bd):
if sign_val and sign(bd[key]['position']) != sign_val:
unwanted_index.append(index)
else:
pos_uid[bd[key]['uid']].append(bd[key]['position'])
pos_qid[bd[key]['qid']].append(bd[key]['position'])
return pos_uid, pos_qid, unwanted_index
def _get_avg_pos(bd, sign_val=None):
pos_uid, pos_qid, unwanted_index = _get_pos(bd, sign_val)
avg_pos_uid = {}
avg_pos_qid = {}
if not sign_val:
sign_val = 1
for key in pos_uid:
pos = pos_uid[key]
avg_pos_uid[key] = sign_val * (sum(pos) / len(pos))
for key in pos_qid:
pos = pos_qid[key]
avg_pos_qid[key] = sign_val * (sum(pos) / len(pos))
return avg_pos_uid, avg_pos_qid, unwanted_index
def _feat_avg_pos(data, bd, group, sign_val):
avg_pos_uid, avg_pos_qid, unwanted_index = _get_avg_pos(bd['train'], sign_val=sign_val)
if group == 'train':
for index in sorted(unwanted_index, reverse=True):
del data[index]
for item in data:
if item['uid'] in avg_pos_uid:
item['avg_pos_uid'] = avg_pos_uid[item['uid']]
else:
vals = avg_pos_uid.values()
item['avg_pos_uid'] = sum(vals) / float(len(vals))
if item['qid'] in avg_pos_qid:
item['avg_pos_qid'] = avg_pos_qid[item['qid']]
else:
vals = avg_pos_qid.values()
item['avg_pos_qid'] = sum(vals) / float(len(vals))
# Response position can be longer than length of question
if item['avg_pos_uid'] > item['q_length']:
item['avg_pos_uid'] = item['q_length']
if item['avg_pos_qid'] > item['q_length']:
item['avg_pos_qid'] = item['q_length']
"""
Explanation: Feature functions(private)
End of explanation
"""
def featurize(bd, group, sign_val=None, extra=None):
# Basic features
# qid(string), uid(string), position(float)
    # 'answer' (string), 'position' (float), 'qid' (string), 'uid' (string)
X = _feat_basic(bd, group=group)
# Some extra features
if extra:
for func_name in extra:
func_name = '_feat_' + func_name
if func_name in ['_feat_avg_pos']:
globals()[func_name](X, bd, group=group, sign_val=sign_val)
else:
globals()[func_name](X)
if group == 'train':
y = []
for item in X:
y.append(item['position'])
del item['position']
return X, y
elif group == 'test':
return X
else:
raise ValueError(group, 'is not the proper type')
"""
Explanation: Feature function(public)
End of explanation
"""
import csv
def select(data, keys):
unwanted = data[0].keys() - keys
for item in data:
for unwanted_key in unwanted:
del item[unwanted_key]
return data
def write_result(test_set, predictions, file_name='guess.csv'):
predictions = sorted([[id, predictions[index]] for index, id in enumerate(test_set.keys())])
predictions.insert(0, ["id", "position"])
with open(file_name, "w", newline='') as fp:
writer = csv.writer(fp, delimiter=',')
writer.writerows(predictions)
"""
Explanation: Utility functions
End of explanation
"""
%matplotlib inline
import numpy as np
from scipy import linalg
import matplotlib.pyplot as plt
import matplotlib as mpl
from sklearn import mixture
def plot_gmm(X, models, n_components, covariance_type='diag',
figsize=(10, 20), suptitle=None, xlabel=None, ylabel=None):
color_iter = ['r', 'g', 'b', 'c', 'm', 'y', 'k', 'gray', 'pink', 'lime']
plt.figure(figsize=figsize)
plt.suptitle(suptitle, fontsize=20)
for i, model in enumerate(models):
mm = getattr(mixture, model)(n_components=n_components,
covariance_type=covariance_type)
mm.fit(X)
Y = mm.predict(X)
plt.subplot(len(models), 1, 1 + i)
for j, color in enumerate(color_iter):
plt.scatter(X[Y == j, 0], X[Y == j, 1], .7, color=color)
plt.title(model, fontsize=15)
plt.xlabel(xlabel, fontsize=12)
plt.ylabel(ylabel, fontsize=12)
plt.grid()
plt.show()
from collections import UserDict
import numpy as np
class DictDict(UserDict):
def __init__(self, bd):
UserDict.__init__(self)
self._set_bd(bd)
def sub_keys(self):
return self[list(self.keys())[0]].keys()
def select(self, sub_keys):
vals = []
for key in self:
vals.append([self[key][sub_key] for sub_key in sub_keys])
return np.array(vals)
def sub_append(self, sub_key, values):
for index, key in enumerate(self):
self[key][sub_key] = values[index]
class Users(DictDict):
def _set_bd(self, bd):
pos_uid, _, _ = _get_pos(bd['train'], sign_val=None)
for key in pos_uid:
u = np.array(pos_uid[key])
ave_pos_uid = sum(abs(u)) / float(len(u))
acc_ratio_uid = len(u[u > 0]) / float(len(u))
self[key] = {'ave_pos_uid': ave_pos_uid,
'acc_ratio_uid': acc_ratio_uid}
class Questions(DictDict):
def _set_bd(self, bd):
_, pos_qid, _ = _get_pos(bd['train'], sign_val=None)
for key in pos_qid:
u = np.array(pos_qid[key])
ave_pos_qid = sum(abs(u)) / float(len(u))
acc_ratio_qid = len(u[u > 0]) / float(len(u))
self[key] = bd['questions'][key]
self[key]['ave_pos_qid'] = ave_pos_qid
self[key]['acc_ratio_qid'] = acc_ratio_qid
users = Users(load_buzz())
questions = Questions(load_buzz())
X_pos_uid = users.select(['ave_pos_uid', 'acc_ratio_uid'])
X_pos_qid = questions.select(['ave_pos_qid', 'acc_ratio_qid'])
plot_gmm(X_pos_uid,
models=['GMM', 'VBGMM', 'DPGMM'],
n_components=8,
covariance_type='diag',
figsize=(10, 20),
suptitle='Classifying users',
xlabel='abs(position)',
ylabel='accuracy ratio')
plot_gmm(X_pos_qid,
models=['GMM', 'VBGMM', 'DPGMM'],
n_components=8,
covariance_type='diag',
figsize=(10, 20),
suptitle='Classifying questions',
xlabel='abs(position)',
ylabel='accuracy ratio')
# Question category
n_components = 8
gmm = mixture.GMM(n_components=n_components, covariance_type='diag')
gmm.fit(X_pos_qid)
pred_cat_qid = gmm.predict(X_pos_qid)
plt.hist(pred_cat_qid, bins=50, facecolor='g', alpha=0.75)
plt.xlabel("Category number")
plt.ylabel("Count")
plt.title("Question Category: " + str(n_components) + " categories")
plt.grid(True)
plt.show()
# User category
n_components = 8
gmm = mixture.GMM(n_components=n_components, covariance_type='diag')
gmm.fit(X_pos_uid)
pred_cat_uid = gmm.predict(X_pos_uid)
plt.hist(pred_cat_uid, bins=50, facecolor='g', alpha=0.75)
plt.xlabel("Category number")
plt.ylabel("Count")
plt.title("User Category: " + str(n_components) + " categories")
plt.grid(True)
plt.show()
from collections import Counter
users.sub_append('cat_uid', [str(x) for x in pred_cat_uid])
questions.sub_append('cat_qid', [str(x) for x in pred_cat_qid])
# Fall back to the most frequent category for test ids missing from the train set
most_pred_cat_uid = Counter(pred_cat_uid).most_common(1)[0][0]
most_pred_cat_qid = Counter(pred_cat_qid).most_common(1)[0][0]
print(most_pred_cat_uid)
print(most_pred_cat_qid)
print(users[1])
print(questions[1])
"""
Explanation: GMM
Classifying questions
features: avg_pos, accuracy rate
End of explanation
"""
regression_keys = ['category', 'q_length', 'qid', 'uid', 'answer', 'avg_pos_uid', 'avg_pos_qid']
X_train, y_train = featurize(load_buzz(), group='train', sign_val=None, extra=['sign_val', 'avg_pos'])
X_train = select(X_train, regression_keys)
def transform(X):
for index, item in enumerate(X):
uid = int(item['uid'])
qid = int(item['qid'])
# uid
if int(uid) in users:
item['acc_ratio_uid'] = users[uid]['acc_ratio_uid']
item['cat_uid'] = users[uid]['cat_uid']
else:
print('Not found uid:', uid)
acc = users.select(['acc_ratio_uid'])
item['acc_ratio_uid'] = sum(acc) / float(len(acc))
item['cat_uid'] = most_pred_cat_uid
# qid
if int(qid) in questions:
item['acc_ratio_qid'] = questions[qid]['acc_ratio_qid']
item['cat_qid'] = questions[qid]['cat_qid']
else:
print('Not found qid:', qid)
acc = questions.select(['acc_ratio_qid'])
item['acc_ratio_qid'] = sum(acc) / float(len(acc))
item['cat_qid'] = most_pred_cat_qid
item['uid'] = str(uid)
item['qid'] = str(qid)
transform(X_train)
X_train[1]
from sklearn.feature_extraction import DictVectorizer
vec = DictVectorizer()
X_train_dict_vec = vec.fit_transform(X_train)
import multiprocessing
from sklearn import linear_model
from sklearn.cross_validation import train_test_split, cross_val_score
import math
from numpy import abs, sqrt
regressor_names = """
LinearRegression
LassoCV
ElasticNetCV
"""
print ("=== Linear Cross validation RMSE scores:")
for regressor in regressor_names.split():
scores = cross_val_score(getattr(linear_model, regressor)(normalize=True, n_jobs=multiprocessing.cpu_count()-1),
X_train_dict_vec, y_train,
cv=2,
scoring='mean_squared_error'
)
print(regressor, sqrt(abs(scores)).mean())
"""
Explanation: B. Modeling
Select model
End of explanation
"""
regression_keys = ['category', 'q_length', 'qid', 'uid', 'answer', 'avg_pos_uid', 'avg_pos_qid']
X_train, y_train = featurize(load_buzz(), group='train', sign_val=None, extra=['avg_pos'])
X_train = select(X_train, regression_keys)
X_test = featurize(load_buzz(), group='test', sign_val=None, extra=['avg_pos'])
X_test = select(X_test, regression_keys)
transform(X_train)
transform(X_test)
X_train[1]
X_test[1]
vec = DictVectorizer()
vec.fit(X_train + X_test)
X_train = vec.transform(X_train)
X_test = vec.transform(X_test)
regressor = linear_model.ElasticNetCV(n_jobs=3, normalize=True)
regressor.fit(X_train, y_train)
print(regressor.coef_)
print(regressor.alpha_)
predictions = regressor.predict(X_test)
"""
Explanation: Training and testing model
End of explanation
"""
write_result(load_buzz()['test'], predictions)
"""
Explanation: Writing result
End of explanation
"""
|
opencb/opencga | opencga-client/src/main/python/notebooks/general-notebooks/pyopencga_basic_notebook_002-coverage.ipynb | apache-2.0 | # Initialize PYTHONPATH for pyopencga
import sys
import os
from pprint import pprint
cwd = os.getcwd()
print("current_dir: ...."+cwd[-10:])
base_modules_dir = os.path.dirname(cwd)
print("base_modules_dir: ...."+base_modules_dir[-10:])
sys.path.append(base_modules_dir)
from pyopencga.opencga_config import ConfigClient
from pyopencga.opencga_client import OpenCGAClient
import json
"""
Explanation: pyOpenCGA basic alignment and coverage usage
[NOTE] The server methods used by pyopencga client are defined in the following swagger URL:
- http://bioinfodev.hpc.cam.ac.uk/opencga-test/webservices
[NOTE] Current implemented methods are registered at the following spreadsheet:
- https://docs.google.com/spreadsheets/d/1QpU9yl3UTneqwRqFX_WAqCiCfZBk5eU-4E3K-WVvuoc/edit?usp=sharing
Loading pyOpenCGA
End of explanation
"""
## Reading user config/credentials to connect to server
user_config_json = "./__user_config.json"
with open(user_config_json,"r") as f:
user_credentials = json.loads(f.read())
print('User: {}***'.format(user_credentials["user"][:3]))
user = user_credentials["user"]
passwd = user_credentials["pwd"]
"""
Explanation: Setting credentials for LogIn
Credentials
Please add the credentials for OpenCGA login into a file in JSON format and read them from there,
e.g.:
file: __user_config.json
file_content: {"user":"xxx","pwd":"yyy"}
End of explanation
"""
## Creating ConfigClient
host = 'http://bioinfodev.hpc.cam.ac.uk/opencga-test'
cc = ConfigClient()
config_dict = cc.get_basic_config_dict(host)
print("Config information:\n",config_dict)
"""
Explanation: Creating ConfigClient for server connection configuration
End of explanation
"""
oc = OpenCGAClient(configuration=config_dict,
user=user,
pwd=passwd)
## Getting the session id / token
token = oc.session_id
print("Session token:\n{}...".format(token[:10]))
oc = OpenCGAClient(configuration=config_dict,
session_id=token)
"""
Explanation: LogIn with user credentials
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/text_classification/labs/text_classification_with_TFHub.ipynb | apache-2.0 | !pip install tensorflow-hub
!pip install tensorflow-datasets
import os
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.config.list_physical_devices("GPU") else "NOT AVAILABLE")
"""
Explanation: Classifying Text with TensorFlow Hub: Movie Reviews
Learning objectives
Build the model.
Configure the model.
Train and evaluate the model.
Introduction
This notebook classifies movie reviews as positive or negative using the text of the review. This is an example of binary—or two-class—classification, an important and widely applicable kind of machine learning problem.
The notebook demonstrates the basic application of transfer learning with TensorFlow Hub and Keras.
It uses the IMDB dataset that contains the text of 50,000 movie reviews from the Internet Movie Database. These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are balanced, meaning they contain an equal number of positive and negative reviews.
This notebook uses tf.keras, a high-level API to build and train models in TensorFlow, and tensorflow_hub, a library for loading trained models from TFHub in a single line of code. For a more advanced text classification tutorial using tf.keras, see the MLCC Text Classification Guide.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
End of explanation
"""
# Split the training set into 60% and 40% to end up with 15,000 examples
# for training, 10,000 examples for validation and 25,000 examples for testing.
train_data, validation_data, test_data = tfds.load(
name="imdb_reviews",
split=('train[:60%]', 'train[60%:]', 'test'),
as_supervised=True)
"""
Explanation: Download the IMDB dataset
The IMDB dataset is available on imdb reviews or on TensorFlow datasets. The following code downloads the IMDB dataset to your machine (or the colab runtime):
End of explanation
"""
train_examples_batch, train_labels_batch = next(iter(train_data.batch(10)))
train_examples_batch
"""
Explanation: Explore the data
Let's take a moment to understand the format of the data. Each example is a sentence representing the movie review and a corresponding label. The sentence is not preprocessed in any way. The label is an integer value of either 0 or 1, where 0 is a negative review, and 1 is a positive review.
Let's print first 10 examples.
End of explanation
"""
train_labels_batch
"""
Explanation: Let's also print the first 10 labels.
End of explanation
"""
embedding = "https://tfhub.dev/google/nnlm-en-dim50/2"
hub_layer = hub.KerasLayer(embedding, input_shape=[],
dtype=tf.string, trainable=True)
hub_layer(train_examples_batch[:3])
"""
Explanation: Build the model
The neural network is created by stacking layers—this requires three main architectural decisions:
How to represent the text?
How many layers to use in the model?
How many hidden units to use for each layer?
In this example, the input data consists of sentences. The labels to predict are either 0 or 1.
One way to represent the text is to convert sentences into embeddings vectors. Use a pre-trained text embedding as the first layer, which will have three advantages:
You don't have to worry about text preprocessing,
Benefit from transfer learning,
the embedding has a fixed size, so it's simpler to process.
For this example you use a pre-trained text embedding model from TensorFlow Hub called google/nnlm-en-dim50/2.
There are many other pre-trained text embeddings from TFHub that can be used in this notebook:
google/nnlm-en-dim128/2 - trained with the same NNLM architecture on the same data as google/nnlm-en-dim50/2, but with a larger embedding dimension. Larger-dimensional embeddings can improve performance on your task, but the model may take longer to train.
google/nnlm-en-dim128-with-normalization/2 - the same as google/nnlm-en-dim128/2, but with additional text normalization such as removing punctuation. This can help if the text in your task contains additional characters or punctuation.
google/universal-sentence-encoder/4 - a much larger model yielding 512 dimensional embeddings trained with a deep averaging network (DAN) encoder.
And many more! Find more text embedding models on TFHub.
Let's first create a Keras layer that uses a TensorFlow Hub model to embed the sentences, and try it out on a couple of input examples. Note that no matter the length of the input text, the output shape of the embeddings is: (num_examples, embedding_dimension).
End of explanation
"""
model = # TODO 1: Your code here
model.summary()
"""
Explanation: Let's now build the full model:
End of explanation
"""
# TODO 2: Your code here
"""
Explanation: The layers are stacked sequentially to build the classifier:
The first layer is a TensorFlow Hub layer. This layer uses a pre-trained Saved Model to map a sentence into its embedding vector. The pre-trained text embedding model that you are using (google/nnlm-en-dim50/2) splits the sentence into tokens, embeds each token and then combines the embedding. The resulting dimensions are: (num_examples, embedding_dimension). For this NNLM model, the embedding_dimension is 50.
This fixed-length output vector is piped through a fully-connected (Dense) layer with 16 hidden units.
The last layer is densely connected with a single output node.
Let's compile the model.
Loss function and optimizer
A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs logits (a single-unit layer with a linear activation), you'll use the binary_crossentropy loss function.
This isn't the only choice for a loss function, you could, for instance, choose mean_squared_error. But, generally, binary_crossentropy is better for dealing with probabilities—it measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and the predictions.
Later, when you are exploring regression problems (say, to predict the price of a house), you'll see how to use another loss function called mean squared error.
Now, configure the model to use an optimizer and a loss function:
End of explanation
"""
history = # TODO 3: Your code here
"""
Explanation: Train the model
Train the model for 10 epochs in mini-batches of 512 samples. This is 10 iterations over all samples in the x_train and y_train tensors. While training, monitor the model's loss and accuracy on the 10,000 samples from the validation set:
End of explanation
"""
results = model.evaluate(test_data.batch(512), verbose=2)
for name, value in zip(model.metrics_names, results):
print("%s: %.3f" % (name, value))
"""
Explanation: Evaluate the model
Let's see how the model performs. Two values will be returned: loss (a number representing our error; lower values are better) and accuracy.
End of explanation
"""
|