| Unnamed: 0 (int64, 0 to 15.9k) | cleaned_code (string, lengths 67 to 124k) | cleaned_prompt (string, lengths 168 to 30.3k) |
|---|---|---|
11,800
|
<ASSISTANT_TASK:>
Python Code:
#import the regular expression module and load the subject lines
import re
subjects = open("enronsubjects.txt").read().split("\n")
#subject lines that have dates, e.g. 12/01/99
[line for line in subjects if re.search(r"\d\d/\d\d/\d\d", line)]
[line for line in subjects if re.search("[aeiou][aeiou][aeiou][aeiou]", line)]
[line for line in subjects if re.search("F[wW]:", line)]
[line for line in subjects if re.search("^[Nn]ew [Yy]ork", line)]
[line for line in subjects if re.search(r"\boil\b", line)]
x = "this is \na test"
print(x)
x = "this is\t\t\tanother test"
print(x)
normal = "hello\nthere"
raw = r"hello\nthere"
print("normal:", normal)
print("raw:", raw)
[line for line in subjects if re.search(r"\b(?:[Cc]at|[kK]itty|[kK]itten)\b", line)]
all_subjects = open("enronsubjects.txt").read()
all_subjects[:1000]
#looking for domain names
[line for line in subjects if re.search(r"\b\w+\.(?:com|net|org)\b", line)]
#re.findall(r"\b\w+\.(?:com|net|org)\b", all_subjects)
#"will you pass the pepper?" re.search "yes"
#"will you pass the pepper?" re.findall "yes, here it is" *passes pepper*
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: define your own character classes
Step2: metacharacters
Step3: aside
Step4: metacharacters 3
Step5: more metacharacters
|
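The comments at the end of the regex cell above hint at the difference between `re.search` and `re.findall`; here is a minimal standalone sketch (the sample strings are invented for illustration, not taken from the Enron data):

```python
import re

# re.search answers "is there a match?" -- it returns a match object
# for the first occurrence (or None if nothing matches).
first_date = re.search(r"\d\d/\d\d/\d\d", "Fw: oil prices 12/01/99")
print(first_date.group())  # -> 12/01/99

# re.findall answers "give me every match" -- it returns a list of all
# matching substrings.
all_domains = re.findall(r"\b\w+\.(?:com|net|org)\b", "see enron.com and mail.org")
print(all_domains)  # -> ['enron.com', 'mail.org']
```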
11,801
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import numpy, pandas
from rep.utils import train_test_split
from sklearn.metrics import roc_auc_score
data = pandas.read_csv('toy_datasets/Higgs.csv', sep='\t')
labels = data['Label'].values
labels = labels == 's'
sample_weight = data['Weight'].values
train_data, test_data, train_labels, test_labels, train_weight, test_weight = train_test_split(data, labels, sample_weight)
list(data.columns)
features = list(set(data.columns) - set(['Weight', 'Label', 'EventId']))
from rep.report import metrics
def AMS(s, b):
    br = 10.0
    radicand = 2 * ((s + b + br) * numpy.log(1.0 + s / (b + br)) - s)
    return numpy.sqrt(radicand)
optimal_AMS = metrics.OptimalMetric(AMS, expected_s=692., expected_b=410999.)
probs_rand = numpy.ndarray((1000, 2))
probs_rand[:, 1] = numpy.random.random(1000)
probs_rand[:, 0] = 1 - probs_rand[:, 1]
labels_rand = numpy.random.randint(0, high=2, size=1000)
optimal_AMS.plot_vs_cut(labels_rand, probs_rand)
optimal_AMS(labels_rand, probs_rand)
from rep.metaml import GridOptimalSearchCV
from rep.metaml.gridsearch import RandomParameterOptimizer, FoldingScorer
from rep.estimators import SklearnClassifier
from sklearn.ensemble import AdaBoostClassifier
from collections import OrderedDict
# define grid parameters
grid_param = OrderedDict()
grid_param['n_estimators'] = [10, 20, 30]
grid_param['learning_rate'] = [0.1, 0.05]
# use random hyperparameter optimization algorithm
generator = RandomParameterOptimizer(grid_param)
# define folding scorer
scorer = FoldingScorer(optimal_AMS, folds=4, fold_checks=2)
grid_sk = GridOptimalSearchCV(SklearnClassifier(AdaBoostClassifier(), features=features), generator, scorer)
grid_sk.fit(data, labels)
grid_sk.generator.best_params_
grid_sk.generator.print_results()
def normed_weight(y, weight):
    weight[y == 1] *= sum(weight[y == 0]) / sum(weight[y == 1])
    return weight
from sklearn import clone
def generate_scorer(test, labels, test_weight=None):
    """Generate a scorer which calculates the metric on a fixed test dataset."""
    def custom(base_estimator, params, X, y, sample_weight=None):
        cl = clone(base_estimator)
        cl.set_params(**params)
        cl.fit(X, y)
        res = optimal_AMS(labels, cl.predict_proba(test), sample_weight=test_weight)
        return res
    return custom
# define grid parameters
grid_param = OrderedDict()
grid_param['n_estimators'] = [10, 20, 30]
grid_param['learning_rate'] = [0.1, 0.05]
grid_param['features'] = [features[:5], features[:10]]
# define random hyperparameter optimization algorithm
generator = RandomParameterOptimizer(grid_param)
# define specific scorer
scorer = generate_scorer(test_data, test_labels, test_weight)
grid = GridOptimalSearchCV(SklearnClassifier(clf=AdaBoostClassifier(), features=features), generator, scorer)
grid.fit(train_data, train_labels, train_weight)
len(train_data), len(test_data)
grid.generator.print_results()
from rep.report import ClassificationReport
from rep.data.storage import LabeledDataStorage
lds = LabeledDataStorage(test_data, test_labels, test_weight)
classifiers = {'grid_fold': grid_sk.fit_best_estimator(train_data[features], train_labels, train_weight),
'grid_test_dataset': grid.fit_best_estimator(train_data[features], train_labels, train_weight) }
report = ClassificationReport(classifiers, lds)
report.roc().plot()
report.metrics_vs_cut(AMS, metric_label='AMS').plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading data for Higgs Boson Challenge
Step2: Variables used in training
Step3: Metric definition
Step4: Compute threshold vs metric quality
Step5: The best quality
Step6: Hyperparameters optimization algorithms
Step7: Grid search with folding scorer
Step8: Print best parameters
Step9: Print all qualities for used parameters
Step10: Grid search with user-defined scorer
Step12: Define a scorer which will train the model on the full dataset and test it on the pre-defined test dataset
Step13: Print all tried combinations of parameters and quality
Step14: Results comparison
Step15: ROCs
Step16: Metric
|
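The AMS metric in the cell above reduces expected signal s and background b to a single significance number; this standalone numpy sketch re-implements the same formula so it can be checked in isolation (the s and b values are invented):

```python
import numpy

def AMS(s, b):
    # approximate median significance, regularised by the constant br
    br = 10.0
    radicand = 2 * ((s + b + br) * numpy.log(1.0 + s / (b + br)) - s)
    return numpy.sqrt(radicand)

# For b >> s, AMS approaches the naive significance s / sqrt(b)
print(AMS(10.0, 10000.0))          # ~0.0999
print(10.0 / numpy.sqrt(10000.0))  # 0.1
```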
11,802
|
<ASSISTANT_TASK:>
Python Code:
import string
print(string.ascii_uppercase)
if 'b' in string.ascii_uppercase:
    print("Yes, the letter is in string.ascii_uppercase")
else:
    print("No, the letter is not in string.ascii_uppercase")
print(string.ascii_lowercase)
print(string.whitespace)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Our initial example will use the uppercase letters that are in the string library.
Step2: Here we will check to see if a letter is in the string.ascii_uppercase constant.
Step3: Here are some other interesting constants in the string library.
|
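A detail worth making explicit about the membership test above: `in` on the string constants is case-sensitive. A small sketch:

```python
import string

print('b' in string.ascii_uppercase)          # False: 'b' is lowercase
print('b'.upper() in string.ascii_uppercase)  # True after converting case
# ascii_letters is simply the two cases concatenated
print(string.ascii_letters == string.ascii_lowercase + string.ascii_uppercase)  # True
```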
11,803
|
<ASSISTANT_TASK:>
Python Code:
counter = 1
while counter <= 10:
    print(counter)
    counter = counter + 1
print("end")
counter = 1
product = 1
while counter <= 5:
    product = product * counter
    print("counter: ", counter)
    print("product: ", product)
    counter = counter + 1
print(product)
a = 2
if a % 2 == 0:
    print("even")
else:
    print("odd")
counter = 1
product = 1
while counter <= 5:
    if counter % 2 == 0:
        print("counter is %d even" % counter)
        print("product = %d * %d" % (product, counter))
        product = product * counter
    print("counter: ", counter)
    print("product: ", product)
    counter = counter + 1
print(product)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise
Step2: Exercise
|
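The second while loop above accumulates product = 1 * 2 * ... * 5, i.e. 5! = 120; the same computation written with a for loop for comparison:

```python
product = 1
for counter in range(1, 6):  # counter runs 1, 2, 3, 4, 5
    product = product * counter
print(product)  # -> 120
```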
11,804
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
x = np.array([0, 1, 1, 1, 3, 1, 5, 5, 5])
y = np.array([0, 2, 3, 4, 2, 4, 3, 4, 5])
a = 1
b = 4
result = ((x == a) & (y == b)).argmax()
if x[result] != a or y[result] != b:
    result = -1
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
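The cell above relies on a subtle property of `argmax` on a boolean mask: it returns the first True position, but also returns 0 when the mask is all False, which is why the follow-up check is needed. Wrapped up as a function (the helper name is mine, not the notebook's):

```python
import numpy as np

def first_match(x, y, a, b):
    mask = (x == a) & (y == b)   # positions where both coordinates match
    idx = mask.argmax()          # first True -- but 0 if mask is all False
    return int(idx) if mask[idx] else -1

x = np.array([0, 1, 1, 1, 3, 1, 5, 5, 5])
y = np.array([0, 2, 3, 4, 2, 4, 3, 4, 5])
print(first_match(x, y, 1, 4))  # -> 3
print(first_match(x, y, 9, 9))  # -> -1
```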
11,805
|
<ASSISTANT_TASK:>
Python Code:
# Run this cell to set up the notebook, but please don't change it.
# These lines import the Numpy and Datascience modules.
import numpy as np
from datascience import *
# These lines do some fancy plotting magic.
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
import warnings
warnings.simplefilter('ignore', FutureWarning)
# These lines load the tests.
from client.api.assignment import load_assignment
tests = load_assignment('lab06.ok')
observations = Table.read_table("serial_numbers.csv")
num_observations = observations.num_rows
observations
def plot_serial_numbers(numbers):
    ...
    # Assuming the lines above produce a histogram, this next
    # line may make your histograms look nicer. Feel free to
    # delete it if you want.
    plt.ylim(0, .25)
plot_serial_numbers(observations)
def mean_based_estimator(nums):
    ...
mean_based_estimate = ...
mean_based_estimate
_ = tests.grade('q1_4')
max_estimate = ...
max_estimate
_ = tests.grade('q1_5')
# ???
N = ...
# Attempts to simulate one sample from the population of all serial
# numbers, returning an array of the sampled serial numbers.
def simulate_observations():
    # You'll get an error message if you try to call this
    # function, because we didn't define N properly!
    serial_numbers = Table().with_column("serial number", np.arange(1, N+1))
    return serial_numbers.sample(num_observations)
estimates = make_array()
for i in np.arange(5000):
    estimate = mean_based_estimator(simulate_observations())
    estimates = np.append(estimates, estimate)
Table().with_column("mean-based estimate", estimates).hist()
def simulate_resample():
    ...
# This is a little magic to make sure that you see the same results
# we did.
np.random.seed(123)
one_resample = simulate_resample()
one_resample
...
...
resample_0 = ...
...
mean_based_estimate_0 = ...
max_based_estimate_0 = ...
print("Mean-based estimate for resample 0:", mean_based_estimate_0)
print("Max-based estimate for resample 0:", max_based_estimate_0)
resample_1 = ...
...
mean_based_estimate_1 = ...
max_based_estimate_1 = ...
print("Mean-based estimate for resample 1:", mean_based_estimate_1)
print("Max-based estimate for resample 1:", max_based_estimate_1)
def simulate_estimates(original_table, sample_size, statistic, num_replications):
    # Our implementation of this function took 5 short lines of code.
    ...
# This should generate an empirical histogram of twice-mean estimates
# of N from samples of size 50 if N is 1000. This should be a bell-shaped
# curve centered at 1000 with most of its mass in [800, 1200]. To verify your
# answer, make sure that's what you see!
example_estimates = simulate_estimates(
    Table().with_column("serial number", np.arange(1, 1000+1)),
    50,
    mean_based_estimator,
    10000)
Table().with_column("mean-based estimate", example_estimates).hist(bins=np.arange(0, 1500, 25))
bootstrap_estimates = ...
...
left_end = ...
right_end = ...
print("Middle 95% of bootstrap estimates: [{:f}, {:f}]".format(left_end, right_end))
population = Table().with_column("serial number", np.arange(1, 150+1))
new_observations = ...
new_mean_based_estimate = ...
new_bootstrap_estimates = ...
...
new_left_end = ...
new_right_end = ...
print("Middle 95% of bootstrap estimates: [{:f}, {:f}]".format(new_left_end, new_right_end))
# For your convenience, you can run this cell to run all the tests at once!
import os
_ = [tests.grade(q[:-3]) for q in os.listdir("tests") if q.startswith('q')]
# Run this cell to submit your work *after* you have passed all of the test cells.
# It's ok to run this cell multiple times. Only your final submission will be scored.
!TZ=America/Los_Angeles jupyter nbconvert --output=".lab06_$(date +%m%d_%H%M)_submission.html" lab06.ipynb && echo "Submitted successfully!"
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Preliminaries
Step2: Question 1.2
Step3: Question 1.3
Step4: Question 1.5
Step5: Question 1.6
Step6: Since we don't know what the population looks like, we don't know N, and we can't run that simulation.
Step7: Let's make one resample.
Step8: Later, we'll use many resamples at once to see what estimates typically look like. We don't often pay attention to single resamples, so it's easy to misunderstand them. Let's examine some individual resamples before we start using them.
Step9: Question 2.3
Step10: You may find that the max-based estimates from the resamples are both exactly 135. You will probably find that the two mean-based estimates do differ from the sample mean-based estimate (and from each other).
Step11: Question 3.2
Step12: Question 3.4
Step13: Question 3.5
Step14: Question 3.7
|
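The lab above deliberately leaves the estimators as `...` blanks, so they are not filled in here; as background, the two classical serial-number ("German tank") estimators it builds on can be sketched independently (the data are invented):

```python
import numpy as np

observed = np.array([17, 68, 94, 127, 135])  # invented serial numbers

# Mean-based: serials uniform on 1..N have mean ~ N/2, so twice the
# sample mean estimates N.
mean_based = 2 * observed.mean()
# Max-based: the largest observed serial is a lower bound on N.
max_based = observed.max()

print(mean_based)  # -> 176.4
print(max_based)   # -> 135
```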
11,806
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import brfss
import thinkstats2
import thinkplot
df = brfss.ReadBrfss(nrows=None)
def SampleRows(df, nrows, replace=False):
    indices = np.random.choice(df.index, nrows, replace=replace)
    sample = df.loc[indices]
    return sample
sample = SampleRows(df, 5000)
heights, weights = sample.htm3, sample.wtkg2
thinkplot.Scatter(heights, weights, alpha=1)
thinkplot.Config(xlabel='Height (cm)',
ylabel='Weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
def Jitter(values, jitter=0.5):
    n = len(values)
    return np.random.normal(0, jitter, n) + values
heights = Jitter(heights, 1.4)
weights = Jitter(weights, 0.5)
thinkplot.Scatter(heights, weights, alpha=1.0)
thinkplot.Config(xlabel='Height (cm)',
ylabel='Weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
thinkplot.Scatter(heights, weights, alpha=0.1, s=10)
thinkplot.Config(xlabel='Height (cm)',
ylabel='Weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
thinkplot.HexBin(heights, weights)
thinkplot.Config(xlabel='Height (cm)',
ylabel='Weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
# Solution goes here
heights = Jitter(df.htm3, 1.4)
weights = Jitter(df.wtkg2, 1)
thinkplot.Scatter(heights, weights, alpha=0.05, s=2)
thinkplot.Config(axis=[135, 205, 25, 210])
cleaned = df.dropna(subset=['htm3', 'wtkg2'])
bins = np.arange(135, 210, 5)
indices = np.digitize(cleaned.htm3, bins)
groups = cleaned.groupby(indices)
for i, group in groups:
    print(i, len(group))
mean_heights = [group.htm3.mean() for i, group in groups]
cdfs = [thinkstats2.Cdf(group.wtkg2) for i, group in groups]
for percent in [75, 50, 25]:
    weight_percentiles = [cdf.Percentile(percent) for cdf in cdfs]
    label = '%dth' % percent
    thinkplot.Plot(mean_heights, weight_percentiles, label=label)
thinkplot.Config(xlabel='Height (cm)',
ylabel='Weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
# Solution goes here
cleaned.head()
bins = np.arange(140, 210, 10)
indices = np.digitize(cleaned.htm3, bins)
groups = cleaned.groupby(indices)
cdfs = [thinkstats2.Cdf(group.wtkg2) for i, group in groups]
thinkplot.Cdfs(cdfs)
thinkplot.Config(xlabel='weight', ylabel='cdf')
def Cov(xs, ys, meanx=None, meany=None):
    xs = np.asarray(xs)
    ys = np.asarray(ys)
    if meanx is None:
        meanx = np.mean(xs)
    if meany is None:
        meany = np.mean(ys)
    cov = np.dot(xs-meanx, ys-meany) / len(xs)
    return cov
heights, weights = cleaned.htm3, cleaned.wtkg2
Cov(heights, weights)
def Corr(xs, ys):
    xs = np.asarray(xs)
    ys = np.asarray(ys)
    meanx, varx = thinkstats2.MeanVar(xs)
    meany, vary = thinkstats2.MeanVar(ys)
    corr = Cov(xs, ys, meanx, meany) / np.sqrt(varx * vary)
    return corr
Corr(heights, weights)
np.corrcoef(heights, weights)
import pandas as pd
def SpearmanCorr(xs, ys):
    xranks = pd.Series(xs).rank()
    yranks = pd.Series(ys).rank()
    return Corr(xranks, yranks)
SpearmanCorr(heights, weights)
def SpearmanCorr(xs, ys):
    xs = pd.Series(xs)
    ys = pd.Series(ys)
    return xs.corr(ys, method='spearman')
SpearmanCorr(heights, weights)
Corr(cleaned.htm3, np.log(cleaned.wtkg2))
import first
live, firsts, others = first.MakeFrames()
live = live.dropna(subset=['agepreg', 'totalwgt_lb'])
# Solution goes here
mom_age = live.agepreg
weight = live.totalwgt_lb
thinkplot.Scatter(mom_age, weight)
thinkplot.Config(xlabel='mom\'s age', ylabel='birthweight')
# Solution goes here
print('correlation', Corr(mom_age, weight))
print('spearman', SpearmanCorr(mom_age, weight))
# Solution goes here
# Solution goes here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Scatter plots
Step2: The following function selects a random subset of a DataFrame.
Step3: I'll extract the height in cm and the weight in kg of the respondents in the sample.
Step4: Here's a simple scatter plot with alpha=1, so each data point is fully saturated.
Step5: The data fall in obvious columns because they were rounded off. We can reduce this visual artifact by adding some random noise to the data.
Step6: Heights were probably rounded off to the nearest inch, which is 2.8 cm, so I'll add random values from -1.4 to 1.4.
Step7: And here's what the jittered data look like.
Step8: The columns are gone, but now we have a different problem
Step9: That's better. This version of the figure shows the location and shape of the distribution most accurately. There are still some apparent columns and rows where, most likely, people reported their height and weight using rounded values. If that effect is important, this figure makes it apparent; if it is not important, we could use more aggressive jittering to minimize it.
Step10: In this case the binned plot does a pretty good job of showing the location and shape of the distribution. It obscures the row and column effects, which may or may not be a good thing.
Step11: Plotting percentiles
Step12: Then I'll divide the dataset into groups by height.
Step13: Here are the number of respondents in each group
Step14: Now we can compute the CDF of weight within each group.
Step15: And then extract the 25th, 50th, and 75th percentile from each group.
Step16: Exercise
Step17: Correlation
Step18: And here's an example
Step19: Covariance is useful for some calculations, but it doesn't mean much by itself. The coefficient of correlation is a standardized version of covariance that is easier to interpret.
Step20: The correlation of height and weight is about 0.51, which is a moderately strong correlation.
Step21: NumPy provides a function that computes correlations, too
Step22: The result is a matrix with self-correlations on the diagonal (which are always 1), and cross-correlations on the off-diagonals (which are always symmetric).
Step23: For heights and weights, Spearman's correlation is a little higher
Step24: A Pandas Series provides a method that computes correlations, and it offers spearman as one of the options.
Step25: The result is the same as for the one we wrote.
Step26: An alternative to Spearman's correlation is to transform one or both of the variables in a way that makes the relationship closer to linear, and then compute Pearson's correlation.
Step27: Exercises
|
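The Corr function in the cell above should return exactly ±1 for perfectly linear data, which gives a quick sanity check; a self-contained version:

```python
import numpy as np

def Corr(xs, ys):
    # Pearson correlation: covariance divided by the product of std devs
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    cov = np.dot(xs - xs.mean(), ys - ys.mean()) / len(xs)
    return cov / np.sqrt(xs.var() * ys.var())

x = np.arange(10.0)
print(Corr(x, 3 * x + 1))  # -> 1.0 (perfectly linear)
print(Corr(x, -2 * x))     # -> -1.0 (perfectly anti-linear)
```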
11,807
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.datasets import load_linnerud
linnerud = load_linnerud()
chinups = linnerud.data[:,0]
plt.hist( # complete
plt.hist( # complete
# complete
# complete
plt.hist(# complete
plt.hist(chinups, histtype = 'step')
# this is the code for the rug plot
plt.plot(chinups, np.zeros_like(chinups), '|', color='k', ms = 25, mew = 4)
# execute this cell
from sklearn.neighbors import KernelDensity
def kde_sklearn(data, grid, bandwidth = 1.0, **kwargs):
    kde_skl = KernelDensity(bandwidth = bandwidth, **kwargs)
    kde_skl.fit(data[:, np.newaxis])
    log_pdf = kde_skl.score_samples(grid[:, np.newaxis])  # sklearn returns log(density)
    return np.exp(log_pdf)
grid = # complete
PDFtophat = kde_sklearn( # complete
plt.plot( # complete
PDFtophat1 = # complete
# complete
# complete
# complete
PDFgaussian = # complete
PDFepanechnikov = # complete
x = np.arange(0, 6*np.pi, 0.1)
y = np.cos(x)
plt.plot(x,y, lw = 2)
plt.xlabel('X')
plt.ylabel('Y')
plt.xlim(0, 6*np.pi)
import seaborn as sns
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x,y, lw = 2)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_xlim(0, 6*np.pi)
sns.set_style( # complete
# complete
# default color palette
current_palette = sns.color_palette()
sns.palplot(current_palette)
# set palette to colorblind
sns.set_palette("colorblind")
current_palette = sns.color_palette()
sns.palplot(current_palette)
iris = sns.load_dataset("iris")
iris
# note - hist, kde, and rug all set to True, set to False to turn them off
with sns.axes_style("dark"):
    sns.distplot(iris['petal_length'], bins=20, hist=True, kde=True, rug=True)
plt.scatter( # complete
with sns.axes_style("darkgrid"):
    xexample = np.random.normal(loc = 0.2, scale = 1.1, size = 10000)
    yexample = np.random.normal(loc = -0.1, scale = 0.9, size = 10000)
    plt.scatter(xexample, yexample)
# hexbin w/ bins = "log" returns the log of counts/bin
# mincnt = 1 displays only hexpix with at least 1 source present
with sns.axes_style("darkgrid"):
    plt.hexbin(xexample, yexample, bins = "log", cmap = "viridis", mincnt = 1)
    plt.colorbar()
with sns.axes_style("darkgrid"):
    sns.kdeplot(xexample, yexample, shade=False)
sns.jointplot(x=iris['petal_length'], y=iris['petal_width'])
sns.jointplot( # complete
sns.pairplot(iris[["sepal_length", "sepal_width", "petal_length", "petal_width"]])
sns.pairplot(iris, vars = ["sepal_length", "sepal_width", "petal_length", "petal_width"],
hue = "species", diag_kind = 'kde')
g = sns.PairGrid(iris, vars = ["sepal_length", "sepal_width", "petal_length", "petal_width"],
hue = "species", diag_sharey=False)
g.map_lower(sns.kdeplot)
g.map_upper(plt.scatter, edgecolor='white')
g.map_diag(sns.kdeplot, lw=3)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem 1a
Step2: Already with this simple plot we see a problem - the choice of bin centers and number of bins suggest that there is a 0% probability that middle aged men can do 10 chinups. Intuitively this seems incorrect, so lets examine how the histogram changes if we change the number of bins or the bin centers.
Step3: These small changes significantly change the output PDF. With fewer bins we get something closer to a continuous distribution, while shifting the bin centers reduces the probability to zero at 9 chinups.
Step4: Ending the lie
Step5: Of course, even rug plots are not a perfect solution. Many of the chinup measurements are repeated, and those instances cannot be easily isolated above. One (slightly) better solution is to vary the transparency of the rug "whiskers" using alpha = 0.3 in the whiskers plot call. But this too is far from perfect.
Step6: Problem 1e
Step7: In this representation, each "block" has a height of 0.25. The bandwidth is too narrow to provide any overlap between the blocks. This choice of kernel and bandwidth produces an estimate that is essentially a histogram with a large number of bins. It gives no sense of continuity for the distribution. Now, we examine the difference (relative to histograms) upon changing the width of the blocks.
Step8: It turns out blocks are not an ideal representation for continuous data (see discussion on histograms above). Now we will explore the resulting PDF from other kernels.
Step9: So, what is the optimal choice of bandwidth and kernel? Unfortunately, there is no hard and fast rule, as every problem will likely have a different optimization. Typically, the choice of bandwidth is far more important than the choice of kernel. In the case where the PDF is likely to be gaussian (or close to gaussian), then Silverman's rule of thumb can be used
Step10: Seaborn
Step11: These plots look identical, but it is possible to change the style with seaborn.
Step12: The folks behind seaborn have thought a lot about color palettes, which is a good thing. Remember - the choice of color for plots is one of the most essential aspects of visualization. A poor choice of colors can easily mask interesting patterns or suggest structure that is not real. To learn more about what is available, see the seaborn color tutorial.
Step13: which we will now change to colorblind, which is clearer to those who are colorblind.
Step14: Now that we have covered the basics of seaborn (and the above examples truly only scratch the surface of what is possible), we will explore the power of seaborn for higher dimension data sets. We will load the famous Iris data set, which measures 4 different features of 3 different types of Iris flowers. There are 150 different flowers in the data set.
Step15: Now that we have a sense of the data structure, it is useful to examine the distribution of features. Above, we went to great pains to produce histograms, KDEs, and rug plots. seaborn handles all of that effortlessly with the distplot function.
Step16: Of course, this data set lives in a 4D space, so plotting more than univariate distributions is important (and as we will see tomorrow this is particularly useful for visualizing classification results). Fortunately, seaborn makes it very easy to produce handy summary plots.
Step17: Of course, when there are many many data points, scatter plots become difficult to interpret. As in the example below
Step18: Here, we see that there are many points, clustered about the origin, but we have no sense of the underlying density of the distribution. 2D histograms, such as plt.hist2d(), can alleviate this problem. I prefer to use plt.hexbin() which is a little easier on the eyes (though note - these histograms are just as subject to the same issues discussed above).
Step19: While the above plot provides a significant improvement over the scatter plot by providing a better sense of the density near the center of the distribution, the binedge effects are clearly present. An even better solution, like before, is a density estimate, which is easily built into seaborn via the kdeplot function.
Step20: This plot is much more appealing (and informative) than the previous two. For the first time we can clearly see that the distribution is not actually centered on the origin. Now we will move back to the Iris data set.
Step21: But! Histograms and scatter plots can be problematic as we have discussed many times before.
Step22: That is much nicer than what was presented above. However - we still have a problem in that our data live in 4D, but we are (mostly) limited to 2D projections of that data. One way around this is via the seaborn version of a pairplot, which plots the distribution of every variable in the data set against each other. (Here is where the integration with pandas DataFrames becomes so powerful.)
Step23: For data sets where we have classification labels, we can even color the various points using the hue option, and produce KDEs along the diagonal with diag_type = 'kde'.
Step24: Even better - there is an option to create a PairGrid which allows fine tuned control of the data as displayed above, below, and along the diagonal. In this way it becomes possible to avoid having symmetric redundancy, which is not all that informative. In the example below, we will show scatter plots and contour plots simultaneously.
|
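Step 9 mentions Silverman's rule of thumb for picking a KDE bandwidth but the notebook never writes it out; here is a sketch of the standard formula (assumed here, not taken from the notebook):

```python
import numpy as np

def silverman_bandwidth(data):
    # h = 0.9 * min(sample std, IQR / 1.34) * n**(-1/5)
    data = np.asarray(data, dtype=float)
    n = len(data)
    iqr = np.subtract(*np.percentile(data, [75, 25]))  # 75th minus 25th percentile
    return 0.9 * min(data.std(ddof=1), iqr / 1.34) * n ** (-0.2)

rng = np.random.default_rng(0)
sample = rng.normal(size=1000)
print(silverman_bandwidth(sample))  # narrows as n grows; roughly 0.2 here
```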
11,808
|
<ASSISTANT_TASK:>
Python Code:
# Author: Tommy Clausner <tommy.clausner@gmail.com>
#
# License: BSD (3-clause)
import os
import os.path as op
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
sample_dir = op.join(data_path, 'MEG', 'sample')
subjects_dir = op.join(data_path, 'subjects')
fname_src = op.join(subjects_dir, 'sample', 'bem', 'sample-oct-6-src.fif')
fname_fwd = op.join(sample_dir, 'sample_audvis-meg-oct-6-fwd.fif')
fname_fsaverage_src = os.path.join(subjects_dir, 'fsaverage', 'bem',
'fsaverage-ico-5-src.fif')
fname_stc = os.path.join(sample_dir, 'sample_audvis-meg')
# Read stc from file
stc = mne.read_source_estimate(fname_stc, subject='sample')
src_orig = mne.read_source_spaces(fname_src)
print(src_orig) # n_used=4098, 4098
fwd = mne.read_forward_solution(fname_fwd)
print(fwd['src']) # n_used=3732, 3766
print([len(v) for v in stc.vertices])
src_to = mne.read_source_spaces(fname_fsaverage_src)
print(src_to[0]['vertno']) # special, np.arange(10242)
morph = mne.compute_source_morph(stc, subject_from='sample',
subject_to='fsaverage', src_to=src_to,
subjects_dir=subjects_dir)
stc_fsaverage = morph.apply(stc)
# Define plotting parameters
surfer_kwargs = dict(
hemi='lh', subjects_dir=subjects_dir,
clim=dict(kind='value', lims=[8, 12, 15]), views='lateral',
initial_time=0.09, time_unit='s', size=(800, 800),
smoothing_steps=5)
# As spherical surface
brain = stc_fsaverage.plot(surface='sphere', **surfer_kwargs)
# Add title
brain.add_text(0.1, 0.9, 'Morphed to fsaverage (spherical)', 'title',
font_size=16)
brain_inf = stc_fsaverage.plot(surface='inflated', **surfer_kwargs)
# Add title
brain_inf.add_text(0.1, 0.9, 'Morphed to fsaverage (inflated)', 'title',
font_size=16)
stc_fsaverage = mne.compute_source_morph(stc,
subjects_dir=subjects_dir).apply(stc)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup paths
Step2: Load example data
Step3: Setting up SourceMorph for SourceEstimate
Step4: We also need to specify the set of vertices to morph to. This can be done
Step5: Apply morph to (Vector) SourceEstimate
Step6: Plot results
Step7: As inflated surface
Step8: Reading and writing SourceMorph from and to disk
|
11,809
|
<ASSISTANT_TASK:>
Python Code:
#installing pandas libraries
!pip install pandas-datareader
!pip install --upgrade html5lib==1.0b8
#There is a bug in the latest version of html5lib so install an earlier version
#Restart kernel after installing html5lib
import pandas as pd #pandas library
from pandas_datareader import data #data readers (google, html, etc.)
#The following line ensures that graphs are rendered in the notebook
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt #Plotting library
import datetime as dt #datetime for timeseries support
pd.DataFrame([[1,2,3],[1,2,3]],columns=['A','B','C'])
df = pd.DataFrame([['r1','00','01','02'],['r2','10','11','12'],['r3','20','21','22']],columns=['row_label','A','B','C'])
print(id(df))
df.set_index('row_label',inplace=True)
print(id(df))
df
data = {'nationality': ['UK', 'China', 'US', 'UK', 'Japan', 'China', 'UK', 'UK', 'Japan', 'US'],
'age': [25, 30, 15, np.nan, 25, 22, np.nan,45 ,18, 33],
'type': [1, 3, 2, 3, 2, 3, 1, 1, 2, 1],
'diabetes': ['yes', 'yes', 'no', 'yes', 'no', 'no', 'no', 'yes', 'no', 'no']}
labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
df=pd.DataFrame(data=data,index=labels)
#print(df[df['age'].between(20,30)])
#print(df.groupby('nationality').mean()['age'])
#print(df.sort_values(by=['age','type'],ascending=[False,True]))
df['nationality'] = df['nationality'].replace('US','United States')
print(df)
df.ix[1]  # note: .ix was removed in pandas 1.0; use .loc (labels) or .iloc (positions)
df['B']
df.loc['r1']
df.iloc[0]
df[['B','A']] #Note that the column identifiers are in a list
df.loc['r2','B']
df.loc['r2']['A']
print(df)
print(df.loc['r1':'r2'])
df.loc['r1':'r2','B':'C']
#df_list = pd.read_html('http://www.bloomberg.com/markets/currencies/major')
df_list = pd.read_html('http://www.waihuipaijia.cn/'
, encoding='utf-8')
print(len(df_list))
df = df_list[0]
print(df)
df.set_index('Currency',inplace=True)
print(df)
df.loc['EUR-CHF','Value']
eur_usd = df.loc['EUR-USD']['Change'] #This is chained indexing
df.loc['EUR-USD']['Change'] = 1.0 #Here we are changing a value in a copy of the dataframe
print(eur_usd)
print(df.loc['EUR-USD']['Change']) #Neither eur_usd, nor the dataframe are changed
eur_usd = df.loc['EUR-USD','Change'] #eur_usd points to the value inside the dataframe
df.loc['EUR-USD','Change'] = 1.0 #Change the value in the view
print(eur_usd) #eur_usd is changed (because it points to the view)
print(df.loc['EUR-USD']['Change']) #The dataframe has been correctly updated
from pandas_datareader import data
import datetime as dt
start=dt.datetime(2017, 1, 1)
end=dt.datetime.today()
print(start,end)
df = data.DataReader('IBM', 'google', start, end)
df
df['UP']=np.where(df['Close']>df['Open'],1,0)
df
df.describe()
df['UP'].sum()/df['UP'].count()
df['Close'].pct_change() #One timeperiod percent change
n=2
df['Close'].pct_change(n) #n timeperiods percent change
n=13
df['Close'].pct_change(n).mean()
df['Close'].pct_change(n).rolling(21)
n=13
df['Close'].pct_change(n).rolling(21).mean()
ma_8 = df['Close'].pct_change(n).rolling(window=8).mean()
ma_13= df['Close'].pct_change(n).rolling(window=13).mean()
ma_21= df['Close'].pct_change(n).rolling(window=21).mean()
ma_34= df['Close'].pct_change(n).rolling(window=34).mean()
ma_55= df['Close'].pct_change(n).rolling(window=55).mean()
ma_8.plot()
ma_34.plot()
import datetime
import pandas_datareader as data
start = datetime.datetime(2015,7,1)
end = datetime.datetime(2016,6,1)
solar_df = data.DataReader(['FSLR', 'TAN','RGSE','SCTY'],'google', start=start,end=end)['Close']
solar_df
rets = solar_df.pct_change()
print(rets)
import matplotlib.pyplot as plt
plt.scatter(rets.FSLR,rets.TAN)
plt.scatter(rets.RGSE,rets.TAN)
plt.scatter(rets.SCTY,rets.TAN)
solar_corr = rets.corr()
print(solar_corr)
plt.scatter(rets.mean(), rets.std())
plt.xlabel('Expected returns')
plt.ylabel('Standard deviations')
for label, x, y in zip(rets.columns, rets.mean(), rets.std()):
plt.annotate(
label,
xy = (x, y), xytext = (20, -20),
textcoords = 'offset points', ha = 'right', va = 'bottom',
bbox = dict(boxstyle = 'round,pad=0.5', fc = 'yellow', alpha = 0.5),
arrowprops = dict(arrowstyle = '->', connectionstyle = 'arc3,rad=0'))
plt.show()
import numpy as np
import statsmodels.api as sm
X=solar_df[['FSLR','RGSE','SCTY']]
X = sm.add_constant(X)
y=solar_df['TAN']
model = sm.OLS(y,X,missing='drop')
result = model.fit()
print(result.summary())
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(y)
ax.plot(result.fittedvalues)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h2>Imports</h2>
Step2: <h2>The structure of a dataframe</h2>
Step3: <h3>Accessing columns and rows</h3>
Step4: <h3>Getting column data</h3>
Step5: <h3>Getting row data</h3>
Step6: <h3>Getting a row by row number</h3>
Step7: <h3>Getting multiple columns</h3>
Step8: <h3>Getting a specific cell</h3>
Step9: <h3>Slicing</h3>
Step10: <h2>Pandas datareader</h2>
Step11: <h4>The page contains only one table so the read_html function returns a list of one element</h4>
Step12: <h4>Note that the read_html function has automatically detected the header columns</h4>
Step13: <h4>Now we can use .loc to extract specific currency rates</h4>
Step14: <h3>Working with views and copies</h3>
Step15: <h2>Getting historical stock prices from Google financs</h2>
Step16: <h2>Datareader documentation</h2>
Step17: <h3>Get summary statistics</h3>
Step18: <h4>Calculate the percentage of days that the stock has closed higher than its open</h4>
Step19: <h4>Calculate percent changes</h4>
Step20: <h3>NaN support</h3>
Step21: <h3>Rolling windows</h3>
Step22: <h4>Calculate something on the rolling windows</h4>
Step23: <h4>Calculate several moving averages and graph them</h4>
Step24: <h2>Linear regression with pandas</h2>
Step25: <h4>Let's calculate returns (the 1 day percent change)</h4>
Step26: <h4>Let's visualize the relationship between each stock and the ETF</h4>
Step27: <h4>The correlation matrix</h4>
Step28: <h3>Basic risk analysis</h3>
Step29: <h2>Regressions</h2>
Step30: <h4>Finally plot the fitted line with the actual y values</h4>
|
11,810
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.special import expit
line = np.linspace(-3, 3, 100)
plt.figure(figsize=(10,8))
plt.plot(line, np.tanh(line), label="tanh")
plt.plot(line, np.maximum(line, 0), label="relu")
plt.plot(line, expit(line), label='sigmoid')
plt.legend(loc="best")
plt.xlabel("x")
plt.ylabel("relu(x), tanh(x), sigmoid(x)")
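For intuition, the single-hidden-layer computation described later in this notebook (a tanh hidden layer followed by a linear read-out) can be sketched directly in NumPy. The weight names `w`, `b`, `v`, `b_out` below are illustrative only, not taken from scikit-learn:

```python
import numpy as np

# Hypothetical weights for a 2-feature input, 3 tanh hidden units,
# and a single regression output (names are illustrative only).
def forward(x, w, b, v, b_out):
    h = np.tanh(w @ x + b)   # hidden activations, each in (-1, 1)
    return v @ h + b_out     # linear read-out y_hat

rng = np.random.default_rng(0)
x = rng.normal(size=2)
w = rng.normal(size=(3, 2))
b = rng.normal(size=3)
v = rng.normal(size=3)
print(forward(x, w, b, v, 0.1))
```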
# Tweak some colormap stuff for Matplotlib
from matplotlib.colors import ListedColormap
cm2 = ListedColormap(['#0000aa', '#ff2020'])
# Helper function for classification plots
def plot_2d_separator(classifier, X, fill=False, ax=None, eps=None, alpha=1, cm=cm2, linewidth=None, threshold=None,
linestyle="solid"):
# binary?
if eps is None:
eps = X.std() / 2.
if ax is None:
ax = plt.gca()
x_min, x_max = X[:, 0].min() - eps, X[:, 0].max() + eps
y_min, y_max = X[:, 1].min() - eps, X[:, 1].max() + eps
xx = np.linspace(x_min, x_max, 100)
yy = np.linspace(y_min, y_max, 100)
X1, X2 = np.meshgrid(xx, yy)
X_grid = np.c_[X1.ravel(), X2.ravel()]
try:
decision_values = classifier.decision_function(X_grid)
levels = [0] if threshold is None else [threshold]
fill_levels = [decision_values.min()] + levels + [decision_values.max()]
except AttributeError:
# no decision_function
decision_values = classifier.predict_proba(X_grid)[:, 1]
levels = [.5] if threshold is None else [threshold]
fill_levels = [0] + levels + [1]
if fill:
ax.contourf(X1, X2, decision_values.reshape(X1.shape), levels=fill_levels, alpha=alpha, cmap=cm)
else:
ax.contour(X1, X2, decision_values.reshape(X1.shape), levels=levels, colors="black", alpha=alpha, linewidths=linewidth,
linestyles=linestyle, zorder=5)
ax.set_xlim(x_min, x_max)
ax.set_ylim(y_min, y_max)
ax.set_xticks(())
ax.set_yticks(())
# Helper function for classification plots
import matplotlib as mpl
from matplotlib.colors import colorConverter
def discrete_scatter(x1, x2, y=None, markers=None, s=10, ax=None,
labels=None, padding=.2, alpha=1, c=None, markeredgewidth=None):
    """Adaptation of matplotlib.pyplot.scatter to plot classes or clusters.

    Parameters
    ----------
    x1 : nd-array
        input data, first axis
    x2 : nd-array
        input data, second axis
    y : nd-array
        input data, discrete labels
    cmap : colormap
        Colormap to use.
    markers : list of string
        List of markers to use, or None (which defaults to 'o').
    s : int or float
        Size of the marker
    padding : float
        Fraction of the dataset range to use for padding the axes.
    alpha : float
        Alpha value for all points.
    """
if ax is None:
ax = plt.gca()
if y is None:
y = np.zeros(len(x1))
unique_y = np.unique(y)
if markers is None:
markers = ['o', '^', 'v', 'D', 's', '*', 'p', 'h', 'H', '8', '<', '>'] * 10
if len(markers) == 1:
markers = markers * len(unique_y)
if labels is None:
labels = unique_y
# lines in the matplotlib sense, not actual lines
lines = []
current_cycler = mpl.rcParams['axes.prop_cycle']
for i, (yy, cycle) in enumerate(zip(unique_y, current_cycler())):
mask = y == yy
# if c is none, use color cycle
if c is None:
color = cycle['color']
elif len(c) > 1:
color = c[i]
else:
color = c
# use light edge for dark markers
if np.mean(colorConverter.to_rgb(color)) < .4:
markeredgecolor = "grey"
else:
markeredgecolor = "black"
lines.append(ax.plot(x1[mask], x2[mask], markers[i], markersize=s,
label=labels[i], alpha=alpha, c=color,
markeredgewidth=markeredgewidth,
markeredgecolor=markeredgecolor)[0])
if padding != 0:
pad1 = x1.std() * padding
pad2 = x2.std() * padding
xlim = ax.get_xlim()
ylim = ax.get_ylim()
ax.set_xlim(min(x1.min() - pad1, xlim[0]), max(x1.max() + pad1, xlim[1]))
ax.set_ylim(min(x2.min() - pad2, ylim[0]), max(x2.max() + pad2, ylim[1]))
return lines
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
X, y = make_moons(n_samples=100, noise=0.25, random_state=3)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)
mlp = MLPClassifier(hidden_layer_sizes=[100], activation='relu', solver='lbfgs', random_state=0).fit(X_train, y_train)
plt.figure(figsize=(10,6))
plot_2d_separator(mlp, X_train, fill=True, alpha=.3)
discrete_scatter(X_train[:, 0], X_train[:, 1], y_train)
plt.xlabel("Feature 0")
plt.ylabel("Feature 1")
mlp = MLPClassifier(hidden_layer_sizes=[10], activation='relu', solver='lbfgs', random_state=0).fit(X_train, y_train)
plt.figure(figsize=(10,6))
plot_2d_separator(mlp, X_train, fill=True, alpha=.3)
discrete_scatter(X_train[:, 0], X_train[:, 1], y_train)
plt.xlabel("Feature 0")
plt.ylabel("Feature 1")
# using two hidden layers, with 10 units each
mlp = MLPClassifier(hidden_layer_sizes=[10, 10], activation='relu', solver='lbfgs', random_state=0).fit(X_train, y_train)
plt.figure(figsize=(10,6))
plot_2d_separator(mlp, X_train, fill=True, alpha=.3)
discrete_scatter(X_train[:, 0], X_train[:, 1], y_train)
plt.xlabel("Feature 0")
plt.ylabel("Feature 1")
# using two hidden layers, with 10 units each, now with tanh nonlinearity
# using two hidden layers, with 10 units each
mlp = MLPClassifier(hidden_layer_sizes=[10, 10], activation='tanh', solver='lbfgs', random_state=0).fit(X_train, y_train)
plt.figure(figsize=(10,6))
plot_2d_separator(mlp, X_train, fill=True, alpha=.3)
discrete_scatter(X_train[:, 0], X_train[:, 1], y_train)
plt.xlabel("Feature 0")
plt.ylabel("Feature 1")
fig, axes = plt.subplots(2, 4, figsize=(20, 8))
for axx, n_hidden_nodes in zip(axes, [10, 100]):
for ax, alpha in zip(axx, [0.0001, 0.01, 0.1, 1]):
mlp = MLPClassifier(hidden_layer_sizes=[n_hidden_nodes, n_hidden_nodes], activation='relu', solver='lbfgs',
alpha=alpha, random_state=0)
mlp.fit(X_train, y_train)
plot_2d_separator(mlp, X_train, fill=True, alpha=.3, ax=ax)
discrete_scatter(X_train[:, 0], X_train[:, 1], y_train, ax=ax)
ax.set_title("n_hidden=[{}, {}]\nalpha={:.4f}".format(n_hidden_nodes, n_hidden_nodes, alpha))
fig, axes = plt.subplots(2, 4, figsize=(20, 8))
for i, ax in enumerate(axes.ravel()):
mlp = MLPClassifier(hidden_layer_sizes=[100, 100], solver='lbfgs', random_state=i)
mlp.fit(X_train, y_train)
plot_2d_separator(mlp, X_train, fill=True, alpha=.3, ax=ax)
discrete_scatter(X_train[:, 0], X_train[:, 1], y_train, ax=ax)
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
print("Cancer data per-feature maxima:\n{}".format(cancer.data.max(axis=0)))
X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, random_state=0)
mlp = MLPClassifier(random_state=42)
mlp.fit(X_train, y_train)
print("Accuracy on training set: {:.2f}".format(mlp.score(X_train, y_train)))
print("Accuracy on test set: {:.2f}".format(mlp.score(X_test, y_test)))
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
mlp = MLPClassifier(random_state=0)
mlp.fit(X_train_scaled, y_train)
print("Accuracy on training set: {:.3f}".format(mlp.score(X_train_scaled, y_train)))
print("Accuracy on test set: {:.3f}".format(mlp.score(X_test_scaled, y_test)))
mlp = MLPClassifier(max_iter=250, random_state=0)
mlp.fit(X_train_scaled, y_train)
print("Accuracy on training set: {:.3f}".format(mlp.score(X_train_scaled, y_train)))
print("Accuracy on test set: {:.3f}".format(mlp.score(X_test_scaled, y_test)))
mlp = MLPClassifier(max_iter=1000, alpha=1, random_state=0)
mlp.fit(X_train_scaled, y_train)
print("Accuracy on training set: {:.3f}".format(mlp.score(X_train_scaled, y_train)))
print("Accuracy on test set: {:.3f}".format(mlp.score(X_test_scaled, y_test)))
# Here's the Sequential model
from keras.models import Sequential
model = Sequential()
# Stacking layers is as easy as .add()
from keras.layers import Dense, Dropout
model.add(Dense(100, input_dim=30, init='uniform', activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(100, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
# Once your model looks good, configure its learning process with .compile()
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
# You can now iterate on your training data in batches
model.fit(X_train, y_train, nb_epoch=20, batch_size=32)
# Evaluate your performance in one line
loss_and_metrics = model.evaluate(X_test, y_test, batch_size=32)
print(loss_and_metrics)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: For a small neural network with a single hidden layer with three nodes, the full formula for computing ลท in the case of regression would be (when using a tanh nonlinearity)
Step3: As you can see, the neural network learned a very nonlinear but relatively smooth decision boundary. We used solver='lbfgs', which we will discuss later.
Step4: With only 10 hidden units, the decision boundary looks somewhat more ragged. The default nonlinearity is relu, shown above. With a single hidden layer, this means the decision function will be made up of 10 straight line segments. If we want a smoother decision boundary, we could add more hidden units, as shown two figures above, add a second hidden layer, or use the tanh or logistic nonlinearity
Step5: Finally, we can also control the complexity of a neural network by using an l2 penalty to shrink the weights toward zero, as we did in ridge regression and the linear classifiers. The parameter for this in the MLPClassifier is alpha (as in the linear regression models), and itโs set to a very low value (little regularization) by default. The figure below shows the effect of different values of alpha on the two_moons dataset, using two hidden layers of 10 or 100 units each
Step6: As you probably have realized by now, there are many ways to control the complexity of a neural network
Step7: To get a better understanding of neural networks on real-world data, letโs apply the MLPClassifier to the Breast Cancer dataset which is built into scikit-learn. We start with the default parameters
Step8: The accuracy of the MLP is quite good, but not as good as some other models. This is likely due to scaling of the data. Neural networks expect all input features to vary in a similar way, and ideally to have a mean of 0, and a variance of 1. We must rescale our data so that it fulfills these requirements. We can do this using semi-automatically using the StandardScaler.
Step9: The results are much better after scaling, and already quite competitive. We got a warning from the model, though, that tells us that the maximum number of iterations has been reached. This is part of the default adam solver for learning the model, and tells us that we should increase the number of iterations
Step10: Increasing the number of iterations only slightly increased the training and generalization performance. Still, the model is performing quite well. As there is some gap between the training and the test performance, we might try to decrease the modelโs complexity to get better generalization performance. Here, we choose to increase the alpha parameter (quite aggressively, from 0.0001 to 1) to add stronger regularization of the weights
Step11: This didn't help, but the performance is already excellent.
Step12: If you need to, you can further configure your optimizer. A core principle of Keras is to make things reasonably simple, while allowing the user to be fully in control when they need to (the ultimate control being the easy extensibility of the source code).
|
11,811
|
<ASSISTANT_TASK:>
Python Code:
from numpy.linalg import pinv
from Orange.classification import Learner, Model
class LinearRegression(Learner):
def fit(self, X, Y, W=None):
coef = pinv(X.T.dot(X)).dot(X.T).dot(Y)
return LinearRegressionModel(coef)
class LinearRegressionModel(Model):
def __init__(self, coef):
self.coef = coef
def predict(self, X):
return X.dot(self.coef)
import Orange
housing = Orange.data.Table('housing')
learners = [Orange.regression.MeanLearner(), Orange.regression.LinearRegressionLearner(), LinearRegression()]
res = Orange.evaluation.CrossValidation(housing, learners)
Orange.evaluation.RMSE(res)
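The intercept extension suggested in the accompanying text (adding a column of ones to the input features) can be sketched with plain NumPy on toy data. `X_demo` and `y_demo` are made up for illustration; they are not the housing dataset:

```python
import numpy as np
from numpy.linalg import pinv

# Toy data generated from y = 2x + 1 (illustrative only).
X_demo = np.array([[0.0], [1.0], [2.0], [3.0]])
y_demo = np.array([1.0, 3.0, 5.0, 7.0])

# Prepend a column of ones so the pseudo-inverse solution
# also fits an intercept term.
X_aug = np.hstack([np.ones((X_demo.shape[0], 1)), X_demo])
coef = pinv(X_aug.T.dot(X_aug)).dot(X_aug.T).dot(y_demo)
print(coef)  # approximately [1., 2.]: intercept, then slope
```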
from time import time
class TimedLearner(Learner):
def __init__(self, learner):
self.learner = learner
self.time = 0
def fit_storage(self, data):
t = time()
model = self.learner(data)
model.time = time() - t
self.time += model.time
return model
tl = TimedLearner(Orange.regression.LinearRegressionLearner())
m1 = tl(housing)
print(m1.time)
m2 = tl(housing)
print(m2.time)
print(tl.time)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note that the above simplified version of linear regression does not fit the intercept and ignores instance weights.
Step2: We see that the error is much lower than predicting the mean value, but slightly higher than from the included LinearRegressionLearner from Orange using scikit-learn. Try adding a column of ones to the existing input features to allow model bias and check the improvement.
Step3: This time we did not need to write a Model class since we return the same model instance of the model as the base learner. An additional attribute time is added to the model containing the time in seconds used to fit it. This time is also added to the cumulative time used to fit all models and stored as an attribute of the learner.
|
11,812
|
<ASSISTANT_TASK:>
Python Code:
import yaml
# Set `PATH` to include the directory containing TFX CLI and skaffold.
PATH=%env PATH
%env PATH=/home/jupyter/.local/bin:{PATH}
!python -c "import tfx; print('TFX version: {}'.format(tfx.__version__))"
!python -c "import kfp; print('KFP version: {}'.format(kfp.__version__))"
%pip install --upgrade --user tfx==0.25.0
%pip install --upgrade --user kfp==1.0.4
%cd pipeline
!ls -la
# Use the following command to identify the GCS bucket for metadata and pipeline storage.
!gsutil ls
#TODO: Set your environment resource settings here for GCP_REGION, ARTIFACT_STORE_URI, ENDPOINT, and CUSTOM_SERVICE_ACCOUNT.
GCP_REGION = 'us-central1'
ARTIFACT_STORE_URI = 'gs://dougkelly-sandbox-kubeflowpipelines-default' #Change
ENDPOINT = '6f857f6a72ef2a99-dot-us-central2.pipelines.googleusercontent.com' #Change
CUSTOM_SERVICE_ACCOUNT = 'tfx-tuner-caip-service-account@dougkelly-sandbox.iam.gserviceaccount.com' #Change
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
# Set your resource settings as environment variables. These override the default values in pipeline/config.py.
%env GCP_REGION={GCP_REGION}
%env ARTIFACT_STORE_URI={ARTIFACT_STORE_URI}
%env CUSTOM_SERVICE_ACCOUNT={CUSTOM_SERVICE_ACCOUNT}
%env PROJECT_ID={PROJECT_ID}
PIPELINE_NAME = 'tfx_covertype_continuous_training'
MODEL_NAME = 'tfx_covertype_classifier'
DATA_ROOT_URI = 'gs://workshop-datasets/covertype/small'
CUSTOM_TFX_IMAGE = 'gcr.io/{}/{}'.format(PROJECT_ID, PIPELINE_NAME)
RUNTIME_VERSION = '2.3'
PYTHON_VERSION = '3.7'
USE_KFP_SA=False
ENABLE_TUNING=False
%env PIPELINE_NAME={PIPELINE_NAME}
%env MODEL_NAME={MODEL_NAME}
%env DATA_ROOT_URI={DATA_ROOT_URI}
%env KUBEFLOW_TFX_IMAGE={CUSTOM_TFX_IMAGE}
%env RUNTIME_VERSION={RUNTIME_VERSION}
%env PYTHON_VERSION={PYTHON_VERSION}
%env USE_KFP_SA={USE_KFP_SA}
%env ENABLE_TUNING={ENABLE_TUNING}
!tfx pipeline compile --engine kubeflow --pipeline_path runner.py
# TODO: Your code here to use the TFX CLI to deploy your pipeline image to AI Platform Pipelines.
!tfx pipeline create \
--pipeline_path=runner.py \
--endpoint={ENDPOINT} \
--build_target_image={CUSTOM_TFX_IMAGE}
# TODO: your code here to trigger a pipeline run with the TFX CLI
!tfx run create --pipeline_name={PIPELINE_NAME} --endpoint={ENDPOINT}
!tfx run list --pipeline_name {PIPELINE_NAME} --endpoint {ENDPOINT}
RUN_ID='[YOUR RUN ID]'
!tfx run status --pipeline_name {PIPELINE_NAME} --run_id {RUN_ID} --endpoint {ENDPOINT}
ENABLE_TUNING=True
%env ENABLE_TUNING={ENABLE_TUNING}
!tfx pipeline compile --engine kubeflow --pipeline_path runner.py
#TODO: your code to update your pipeline
!tfx pipeline update --pipeline_path runner.py --endpoint {ENDPOINT}
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Validate lab package version installation
Step2: Note
Step3: Note
Step4: The config.py module configures the default values for the environment specific settings and the default values for the pipeline runtime parameters.
Step5: CUSTOM_SERVICE_ACCOUNT - In the gcp console Click on the Navigation Menu. Navigate to IAM & Admin, then to Service Accounts and use the service account starting with prefix - 'tfx-tuner-caip-service-account'. This enables CloudTuner and the Google Cloud AI Platform extensions Tuner component to work together and allows for distributed and parallel tuning backed by AI Platform Vizier's hyperparameter search algorithm. Please see the lab setup README for setup instructions.
Step6: Set the compile time settings to first create a pipeline version without hyperparameter tuning
Step7: Compile your pipeline code
Step8: Note
Step9: Hint
Step10: To view the status of existing pipeline runs
Step11: To retrieve the status of a given run
Step12: Important
Step13: Compile your pipeline code
Step14: Deploy your pipeline container to AI Platform Pipelines with the TFX CLI
|
11,813
|
<ASSISTANT_TASK:>
Python Code:
tocrawl = []
def crawl(url):
html = download(url)
page = parse(html)
urls = extract_links(page)
    tocrawl.extend(urls)  # extend, not append: urls is a list of links
return tocrawl
starter_url = "www.example.com"
tocrawl = crawl(starter_url)
while len(tocrawl) != 0:
for url in tocrawl:
crawl(url)
#!/usr/bin/python
# coding: utf-8
import csv
import requests
from bs4 import BeautifulSoup
# specify the parser when creating the soup, e.g. BeautifulSoup(html, "html.parser")
def download(url):
    '''Download a page from a url and return the html as text.'''
    # the requests module downloads the page;
    # we store the result of the download
    # in a variable called response
    response = requests.get(url)
    # response is produced by the requests module,
    # so it has specific attributes documented on the module's page,
    # such as the HTTP status code of the page (cf. HTTP error codes)
    # and the text or json body
    if response.status_code == 200:
        return response.text
# here the function to write one line into a CSV file
def write_csv(filename, line):
    # open the file called filename as a csv file:
    # 'a' to append, 'r' to read, 'w' to write
    with open(filename, 'a') as csvfile:
        # hand the file to csv's writer,
        # specifying the delimiter, here ";"
        spamwriter = csv.writer(csvfile, delimiter=';')
        # line must be a list of elements, written [] in Python;
        # each element contained in line goes into its own column
        spamwriter.writerow(line)
    return
# here the function to extract links from a url
def extract_links(url):
    # download the page content
    # with a function we already wrote above
    html = download(url)
    # store every link contained in the page in a list
    links = []
    soup = BeautifulSoup(html, "html.parser")
    # find_all returns a **list**
    # of BeautifulSoup elements
    # that can be manipulated with BeautifulSoup's functions.
    # Here we need the url held in the tag a href="{{lien}}",
    # which we store in our list as in a table.
    # We want every tag of type a, with no particular filter
    tag_link_list = soup.find_all("a")
    # tag_link_list is a list of parsed tag elements,
    # so we loop over its contents
    for element in tag_link_list:
        # the page url is stored in the body of the tag,
        # e.g. <a href="http://lemonde.fr">Site du Monde</a>;
        # to retrieve it we use BeautifulSoup's get function
        lien = element.get("href")
        # add the link to the list
        links.append(lien)
    return links
# here the function that extracts the interesting information
# for one particular site
def extract_data(url):
    # download the page content
    # with a function we already wrote above
    html = download(url)
    # parse the html with BeautifulSoup
    soup = BeautifulSoup(html, "html.parser")
    # store all the future information
    # in a list called data
    data = []
    # to extract the data we use BeautifulSoup's functions:
    # find_all returns a list of elements,
    # find returns the first element.
    # Say we want to extract all the videos from this page
    # http://www.bbc.com/news/science_and_environment
    # they live in the following tag:
    # <div id="comp-candy-asset-munger" class="distinct-component-group container-condor wide-only">
    # we can cut out the part we care about
    # using find, which returns the first tag,
    # and the id, which identifies this particular tag
    colonne_videos = soup.find("div", {"id": "comp-candy-asset-munger"})
    # we can then apply BeautifulSoup's methods inside this part:
    # get every image using find_all
    img_tags = colonne_videos.find_all("img", {"class": "responsive-image__inner-for-label"})
    # this is again a list of tags;
    # we grab the source of each image, stored in src
    for img in img_tags:
        image_source = img.get("src")
        data.append(image_source)
    return data
#### Example of a simple crawl at depth 1
Here the base code to extract every link from a starting page:
we start from an initial url, download it, and extract its links into a list.
url_de_depart = 'http://www.bbc.com/news/science_and_environment'
liens_page0 = extract_links(url_de_depart)
liens1 = []
# loop over the urls found on the starting page
for page in liens_page0:
    # for each link found one level deeper
    for lien in extract_links(page):
        # add it to the list
        liens1.append(lien)
nb_liens1 = len(liens1)
print(nb_liens1)
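The same two-level expansion can be checked with a stubbed link function, which is useful for testing the loop logic without any network access (the link graph below is invented):

```python
# Fake link graph standing in for extract_links (urls are made up).
FAKE_LINKS = {
    "start": ["a", "b"],
    "a": ["b", "c"],
    "b": ["c"],
    "c": [],
}

def fake_extract_links(url):
    return FAKE_LINKS.get(url, [])

level0 = fake_extract_links("start")       # links found on the start page
level1 = []
for page in level0:
    for lien in fake_extract_links(page):  # one level deeper
        level1.append(lien)
print(level1)  # -> ['b', 'c', 'c']
```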
url_de_depart = 'http://www.bbc.com'
# list of urls still to process
to_do = []
# list of urls already processed
done = []
starter = extract_links(url_de_depart)
# loop over the urls of the starting page
for page in starter:
    # for each link
    for lien in extract_links(page):
        # add it to the to-do list
        to_do.append(lien)
    # record that this page's links have already been extracted
    done.append(page)
# while, like for, is a loop instruction in python:
# it keeps executing its body as long as the condition holds,
# here as long as the to_do list is not empty
while len(to_do) != 0:
    # remove duplicates and already-processed urls with set()
    uniq_to_do = set(to_do) - set(done)
    to_do = []
    for lien in uniq_to_do:
        for new_link in extract_links(lien):
            if new_link not in done:
                to_do.append(new_link)
        done.append(lien)
# in the end the crawled urls are stored in the done list
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Obviously it is a little more complicated than that...
Step2: Next, define the encoding so accented characters are handled
Step3: a module for writing csv files
Step4: Imports go at the top of the script
Step5: a module for walking through the html
Step6: For BeautifulSoup, we specify the parser type we are going to use
Step7: Next we write the functions we need
Step8: The advantage of writing a function is that it can be reused as many times as needed
Step9: All of these functions are universal and apply to any html page
Step10: The list of all the urls from this starting page is stored in liens1
Step11: Simple crawl of urls
|
11,814
|
<ASSISTANT_TASK:>
Python Code:
%%writefile ComplaintDistribution.py
from mrjob.job import MRJob
class ComplaintDistribution(MRJob):
def mapper(self, _, lines):
line = lines[:30]
if "Debt collection" in line:
self.increment_counter('Complaint', 'Debt collection', 1)
elif "Mortgage" in line:
self.increment_counter('Complaint', 'Mortgage', 1)
else:
self.increment_counter('Complaint', 'Other', 1)
if __name__ == "__main__":
ComplaintDistribution.run()
%%time
!python ComplaintDistribution.py Temp_data/Consumer_Complaints.csv
%%writefile SimpleCounters.py
from mrjob.job import MRJob
class SimpleCounters(MRJob):
def mapper_init(self):
self.increment_counter("Mappers", "Count", 1)
def mapper(self, _, lines):
self.increment_counter("Mappers", "Tasks", 1)
for word in lines.split():
yield (word, 1)
def reducer_init(self):
self.increment_counter("Reducers", "Count", 1)
def reducer(self, word, count):
self.increment_counter("Reducers", "Tasks", 1)
yield (word, sum(count))
if __name__ == "__main__":
SimpleCounters.run()
!echo "foo foo quux labs foo bar quux" | python SimpleCounters.py --jobconf mapred.map.tasks=2 --jobconf mapred.reduce.tasks=2
%%writefile IssueCounter.py
from mrjob.job import MRJob
import csv
import sys
class IssueCounter(MRJob):
def mapper(self, _, lines):
self.increment_counter("Mappers", "Tasks", 1)
terms = list(csv.reader([lines]))[0]
yield (terms[3], 1)
    def reducer(self, word, count):
        self.increment_counter("Reducers", "Tasks", 1)
        counts = list(count)  # materialize the generator once; sum(count) after list(count) would be 0
        self.increment_counter("Reducers", "Lines processed", len(counts))
        yield (word, sum(counts))
if __name__ == "__main__":
IssueCounter.run()
!cat Temp_data/Consumer_Complaints.csv | python IssueCounter.py | head -n 1
# We can easily confirm the first hypothesis
!wc -l Temp_data/Consumer_Complaints.csv
%%writefile IssueCounterCombiner.py
from mrjob.job import MRJob
from mrjob.step import MRStep
import csv
import sys
class IssueCounterCombiner(MRJob):
def mapper(self, _, lines):
self.increment_counter("Mappers", "Tasks", 1)
terms = list(csv.reader([lines]))[0]
yield (terms[3], 1)
def combiner(self, word, count):
self.increment_counter("Combiners", "Tasks", 1)
yield (word, sum(count))
    def reducer(self, word, count):
        self.increment_counter("Reducers", "Tasks", 1)
        counts = list(count)  # materialize the generator once; sum(count) after list(count) would be 0
        self.increment_counter("Reducers", "Lines processed", len(counts))
        yield (word, sum(counts))
if __name__ == "__main__":
IssueCounterCombiner.run()
%%writefile python_mr_driver.py
from IssueCounterCombiner import IssueCounterCombiner
mr_job = IssueCounterCombiner(args=['Temp_data/Consumer_Complaints.csv'])
with mr_job.make_runner() as runner:
runner.run()
print(runner.counters())
# for line in runner.stream_output():
# print(mr_job.parse_output_line(line))
results = !python python_mr_driver.py
results
%%writefile Top50.py
from mrjob.job import MRJob
from mrjob.step import MRStep
import csv
import sys
def order_key(order_in_reducer, key_name):
number_of_stars = order_in_reducer//10 + 1
number = str(order_in_reducer%10)
return "%s %s" % ("*"*number_of_stars+number, key_name)
class Top50(MRJob):
    SORT_VALUES = True
def mapper_get_issue(self, _, lines):
terms = list(csv.reader([lines]))[0]
issue = terms[3]
if issue == "":
issue = "<blank>"
yield (issue, 1)
def combiner_count_issues(self, word, count):
yield (word, sum(count))
def reducer_init_totals(self):
self.issue_counts = []
def reducer_count_issues(self, word, count):
issue_count = sum(count)
self.issue_counts.append(int(issue_count))
yield (word, issue_count)
def reducer_final_emit_counts(self):
yield (order_key(1, "Total"), sum(self.issue_counts))
yield (order_key(2, "40th"), sorted(self.issue_counts)[-40])
def reducer_init(self):
self.increment_counter("Reducers", "Count", 1)
self.var = {}
def reducer(self, word, count):
if word.startswith("*"):
_, term = word.split()
self.var[term] = next(count)
else:
total = sum(count)
if total >= self.var["40th"]:
yield (word, (total/self.var["Total"], total))
def mapper_sort(self, key, value):
value[0] = 1-float(value[0])
yield value, key
def reducer_sort(self, key, value):
key[0] = round(1-float(key[0]),3)
yield key, next(value)
def steps(self):
mr_steps = [MRStep(mapper=self.mapper_get_issue,
combiner=self.combiner_count_issues,
reducer_init=self.reducer_init_totals,
reducer=self.reducer_count_issues,
reducer_final=self.reducer_final_emit_counts),
MRStep(reducer_init=self.reducer_init,
reducer=self.reducer),
MRStep(mapper=self.mapper_sort,
reducer=self.reducer_sort)
]
return mr_steps
if __name__ == "__main__":
Top50.run()
!head -n 3001 Temp_data/Consumer_Complaints.csv | python Top50.py --jobconf mapred.reduce.tasks=1
!head -n 10 Temp_data/ProductPurchaseData.txt
%%writefile ProductPurchaseStats.py
from mrjob.job import MRJob
from mrjob.step import MRStep
import sys
import heapq
class TopList(list):
def __init__(self, max_size):
"""Just like a list, except the append method adds the new value to the
list only if it is larger than the smallest value (or if the size of
the list is less than max_size). If each element of the list is an int
or float, uses that value for comparison. If the first element is a
list or tuple, uses the first element of the list or tuple for the
comparison."""
self.max_size = max_size
def _get_key(self, x):
return x[0] if isinstance(x, (list, tuple)) else x
def append(self, val):
key=lambda x: x[0] if isinstance(x, (list, tuple)) else x
if len(self) < self.max_size:
heapq.heappush(self, val)
elif self._get_key(self[0]) < self._get_key(val):
heapq.heapreplace(self, val)
def final_sort(self):
return sorted(self, key=self._get_key, reverse=True)
class ProductPurchaseStats(MRJob):
def mapper_init(self):
self.largest_basket = 0
self.total_items = 0
def mapper(self, _, lines):
products = lines.split()
n_products = len(products)
self.total_items += n_products
if n_products > self.largest_basket:
self.largest_basket = n_products
for prod in products:
yield (prod, 1)
def mapper_final(self):
self.increment_counter("product stats", "largest basket", self.largest_basket)
yield ("*** Total", self.total_items)
def combiner(self, keys, values):
yield keys, sum(values)
def reducer_init(self):
self.top50 = TopList(50)
self.total = 0
def reducer(self, key, values):
value_count = sum(values)
if key == "*** Total":
self.total = value_count
else:
self.increment_counter("product stats", "unique products")
self.top50.append([value_count, value_count/self.total, key])
def reducer_final(self):
for counts, relative_rate, key in self.top50.final_sort():
yield key, (counts, round(relative_rate,3))
if __name__ == "__main__":
ProductPurchaseStats.run()
!cat Temp_data/ProductPurchaseData.txt | python ProductPurchaseStats.py --jobconf mapred.reduce.tasks=1
%%writefile PairsRecommender.py
from mrjob.job import MRJob
import heapq
import sys
def all_itemsets_of_size_two(array, key=None, return_type="string", concat_val=" "):
"""Generator that yields all valid itemsets of size two
where each combo is returned in an order sorted by key.
key = None defaults to standard sorting.
return_type: can be "string" or "tuple". If "string",
concatenates values with concat_val and returns string.
If tuple, returns a tuple with two elements."""
array = sorted(array, key=key)
for index, item in enumerate(array):
for other_item in array[index:]:
if item != other_item:
if return_type == "string":
yield "%s%s%s" % (str(item), concat_val, str(other_item))
else:
yield (item, other_item)
class TopList(list):
def __init__(self, max_size):
"""Just like a list, except the append method adds the new value to the
list only if it is larger than the smallest value (or if the size of
the list is less than max_size). If each element of the list is an int
or float, uses that value for comparison. If the first element is a
list or tuple, uses the first element of the list or tuple for the
comparison."""
self.max_size = max_size
def _get_key(self, x):
return x[0] if isinstance(x, (list, tuple)) else x
def append(self, val):
key=lambda x: x[0] if isinstance(x, (list, tuple)) else x
if len(self) < self.max_size:
heapq.heappush(self, val)
elif self._get_key(self[0]) < self._get_key(val):
heapq.heapreplace(self, val)
def final_sort(self):
return sorted(self, key=self._get_key, reverse=True)
class PairsRecommender(MRJob):
def mapper_init(self):
self.total_baskets = 0
def mapper(self, _, lines):
self.total_baskets += 1
products = lines.split()
self.increment_counter("job stats", "number of items", len(products))
for itemset in all_itemsets_of_size_two(products):
self.increment_counter("job stats", "number of item combos")
yield (itemset, 1)
def mapper_final(self):
self.increment_counter("job stats", "number of baskets", self.total_baskets)
yield ("*** Total", self.total_baskets)
def combiner(self, key, values):
self.increment_counter("job stats", "number of keys fed to combiner")
yield key, sum(values)
def reducer_init(self):
self.top_values = TopList(50)
self.total_baskets = 0
def reducer(self, key, values):
values_sum = sum(values)
if key == "*** Total":
self.total_baskets = values_sum
elif values_sum >= 100:
self.increment_counter("job stats", "number of unique itemsets >= 100")
basket_percent = values_sum/self.total_baskets
self.top_values.append([values_sum, round(basket_percent,3), key])
else:
self.increment_counter("job stats", "number of unique itemsets < 100")
def reducer_final(self):
for values_sum, basket_percent, key in self.top_values.final_sort():
yield key, (values_sum, basket_percent)
if __name__ == "__main__":
PairsRecommender.run()
%%time
!cat Temp_data/ProductPurchaseData.txt | python PairsRecommender.py --jobconf mapred.reduce.tasks=1
!system_profiler SPHardwareDataType
%%writefile StripesRecommender.py
from mrjob.job import MRJob
from collections import Counter
import sys
import heapq
def all_itemsets_of_size_two_stripes(array, key=None):
"""Generator that yields all valid itemsets of size two,
where each combo is returned as a stripe.
key = None defaults to standard sorting."""
array = sorted(array, key=key)
for index, item in enumerate(array[:-1]):
yield (item, {val:1 for val in array[index+1:]})
class TopList(list):
def __init__(self, max_size):
"""Just like a list, except the append method adds the new value to the
list only if it is larger than the smallest value (or if the size of
the list is less than max_size). If each element of the list is an int
or float, uses that value for comparison. If the first element is a
list or tuple, uses the first element of the list or tuple for the
comparison."""
self.max_size = max_size
def _get_key(self, x):
return x[0] if isinstance(x, (list, tuple)) else x
def append(self, val):
key=lambda x: x[0] if isinstance(x, (list, tuple)) else x
if len(self) < self.max_size:
heapq.heappush(self, val)
elif self._get_key(self[0]) < self._get_key(val):
heapq.heapreplace(self, val)
def final_sort(self):
return sorted(self, key=self._get_key, reverse=True)
class StripesRecommender(MRJob):
def mapper_init(self):
self.basket_count = 0
def mapper(self, _, lines):
self.basket_count += 1
products = lines.split()
for item, value in all_itemsets_of_size_two_stripes(products):
yield item, value
def mapper_final(self):
yield ("*** Total", {"total": self.basket_count})
def combiner(self, keys, values):
values_sum = Counter()
for val in values:
values_sum += Counter(val)
yield keys, dict(values_sum)
def reducer_init(self):
self.top = TopList(50)
def reducer(self, keys, values):
values_sum = Counter()
for val in values:
values_sum += Counter(val)
if keys == "*** Total":
self.total = values_sum["total"]
else:
for k, v in values_sum.items():
if v >= 100:
self.top.append([v, round(v/self.total,3), keys+" "+k])
def reducer_final(self):
for count, perc, key in self.top.final_sort():
yield key, (count, perc)
if __name__ == "__main__":
StripesRecommender.run()
%%time
!cat Temp_data/ProductPurchaseData.txt | python StripesRecommender.py --jobconf mapred.reduce.tasks=1
set([1,2,3])
[1,2,3][:-1]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: HW 3.2 Analyze the performance of your Mappers, Combiners and Reducers using Counters
Step2: Please use multiple mappers and reducers for these jobs (at least 2 mappers and 2 reducers).
Step3: Mapper tasks = 312913. The mapper was called this many times because that is how many lines there are in the file.
Step4: Perform a word count analysis of the Issue column of the Consumer Complaints Dataset using a Mapper, Reducer, and standalone combiner (i.e., not an in-memory combiner) based WordCount, using user-defined Counters to count how many times the mapper, combiner, and reducer are called. What are the values of your user-defined Mapper Counter and Reducer Counter after completing your word count job?
Step5: Although the same number of map and reduce tasks were called, because 146 combiner tasks ran, my hypothesis was that the number of observations read by the reducers would be lower. I went back and included a counter that kept track of the lines passed over the network. With the combiner, only 146 observations were passed over the network. This equals the number of times the combiner was called (which makes sense because combiners act as map-side reducers: each one processes a different key of the data and outputs a single line).
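The effect the counters measured can be sketched without Hadoop at all. The toy pipeline below is illustrative only (the issue names and data are made up, and it uses plain Python rather than mrjob): the combiner collapses duplicate keys on the map side, so far fewer records cross the network to the reducers.

```python
from collections import Counter

def map_issues(lines):
    # Map phase: emit (issue, 1) for every input line.
    return [(line.strip() or "<blank>", 1) for line in lines]

def combine(pairs):
    # Combiner: map-side aggregation collapses duplicate keys
    # before anything is sent over the network.
    counts = Counter()
    for key, value in pairs:
        counts[key] += value
    return list(counts.items())

lines = ["Billing", "Billing", "Fraud", "Billing", "Fraud", "Loan"]
mapped = map_issues(lines)
combined = combine(mapped)

# Without the combiner, one record per input line crosses the network;
# with it, only one record per distinct key does.
print(len(mapped), len(combined))  # 6 3
```

The same ratio explains the counter values above: records shipped to reducers drop from one-per-line to one-per-distinct-key-per-mapper.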
Step7: 3.2.1
Step10: 3.3.1 OPTIONAL
Step13: HW3.5
Step14: The pairs operation took 1 minute 30 seconds. The stripes operation took 24 seconds, which is about a quarter of the time for pairs.
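The timing gap is easy to rationalize by counting what each pattern emits. A minimal sketch on a toy basket (not the real dataset) shows that pairs emits one record per co-occurring pair while stripes emits one record per item, so stripes generates far fewer key-value records for the shuffle:

```python
from itertools import combinations

basket = ["A", "B", "C", "D"]
items = sorted(basket)

# Pairs pattern: one record per co-occurring pair -> n*(n-1)/2 records.
pairs = [((a, b), 1) for a, b in combinations(items, 2)]

# Stripes pattern: one record per item, carrying a dict of co-occurrences.
stripes = []
for i, item in enumerate(items[:-1]):
    stripes.append((item, {other: 1 for other in items[i + 1:]}))

print(len(pairs), len(stripes))  # 6 3
```

For large baskets the gap grows quadratically versus linearly, which is consistent with the roughly 4x speedup observed above.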
|
11,815
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
iowadf= pd.read_csv("Class05_iowa_data.csv")
iowadf.head()
# The sales data looks like it isn't a float like we want it to be (the presence of a $ in front is my clue that there may be something wrong.) Let's look at the data types to be sure.
iowadf.dtypes
# Sure enough. We need to get the real values from that column. We'll create a new column for that data and use the regex parser to get the number.
iowadf['SalesToStore'] = iowadf['Total Sales to Store (Dollars)'].str.extract(r"\$(.*)", expand=True)
# We also need to convert the output to a float.
iowadf['SalesToStore'] = iowadf['SalesToStore'].astype(float)
print(iowadf.dtypes)
iowadf.head()
# We also need to convert the date. We'll try this and see if it parses the date correctly.
iowadf["SalesDate"] = pd.to_datetime(iowadf["Date"])
print(iowadf.dtypes)
iowadf.head()
# Let's do a bit more data exploration here to see if there are any other issues. For example, let's see if there are any NA values in the dataframe. That may indicate a problem. We'll first check the entire dataframe.
# This looks for any null value then combines them together. It will only be true if there are any null values.
iowadf.isnull().values.any()
#Let's see how many data points have null values. We'll look at the total rows and
print("Initial rows: {}".format(len(iowadf.index)))
iowadfcl = iowadf.dropna()
print("Cleaned rows: {}".format(len(iowadfcl.index)))
iowadfcl.isnull().values.any()
storegroups = iowadfcl.groupby('Store Number')
storegroups.count()
storesum = storegroups.sum()
# Let's sort this list by the sales number in descending order:
storelist = storesum.sort_values('SalesToStore',ascending=False)
# And list the top stores:
storelist.head(10)
subsetrows = iowadfcl['Store Number']==2633
subsetrows.head()
topstoredf = iowadfcl[subsetrows]
topstoredf.head()
topstoredf = topstoredf.drop(['Date', 'Store Number', 'City', 'Zip Code', 'County', 'Total Sales to Store (Dollars)'], axis=1)
topstoredf.sort_values('SalesDate',inplace=True)
topstoredf.head()
topstoredf.loc[2640]
topstoredf.reset_index(inplace=True,drop=True)
topstoredf.head()
topstoredf.loc[0]
import matplotlib.pyplot as plt
# We will plot the data values and set the linestyle to 'None' which will not plot the line. We also want to show the individual data points, so we set the marker.
plt.plot(topstoredf['SalesDate'].values, topstoredf['SalesToStore'].values, linestyle='None', marker='o')
# autofmt_xdate() tells the computer that it should treat the x-values as dates and format them appropriately. This is a figure function, so we use gcf() to "get current figure"
plt.gcf().autofmt_xdate()
djiadf = pd.read_csv('Class05_DJIA_data.csv',parse_dates=[0])
print(djiadf.dtypes)
djiadf.head()
# We'll rename the second column to something a little more descriptive
djiadf.rename(columns={'VALUE':'DowJonesAvg'},inplace=True)
#Let's check for problems:
djiadf.isnull().values.any()
# Looks like there are two problem rows. Let's identify which rows by indexing the dataframe on the isnull() data
djiadf[djiadf['DowJonesAvg'].isnull()]
# These were both holidays and shouldn't be included in the data set anyway! We can drop them in place, modifying our dataframe.
djiadf.dropna(inplace=True)
djiadf.isnull().values.any()
# We will plot the data values and set the linestyle to 'None' which will not plot the line. We also want to show the individual data points, so we set the marker.
plt.plot(djiadf['DATE'].values, djiadf['DowJonesAvg'].values, linestyle='None', marker='o')
# autofmt_xdate() tells the computer that it should treat the x-values as dates and format them appropriately. This is a figure function, so we use gcf() to "get current figure"
plt.gcf().autofmt_xdate()
raw_data = {
'patient_id': ['A', 'B', 'C', 'D', 'E'],
'first_name': ['Alex', 'Amy', 'Allen', 'Alice', 'Ayoung'],
'last_name': ['Anderson', 'Ackerman', 'Ali', 'Aoni', 'Atiches'],
'visit_number': [1,2,3,4,5],}
df_a = pd.DataFrame(raw_data, columns = ['patient_id', 'first_name', 'last_name','visit_number'])
df_a
raw_data = {
'doctor_id': ['G', 'H', 'I', 'A', 'B'],
'first_name': ['Billy', 'Brian', 'Bran', 'Bryce', 'Betty'],
'last_name': ['Bonder', 'Black', 'Balwner', 'Brice', 'Btisan'],
'visit_number': [4,5,6,7,8]}
df_b = pd.DataFrame(raw_data, columns = ['doctor_id', 'first_name', 'last_name','visit_number'])
df_b
pd.concat([df_a, df_b])
pd.merge(df_a,df_b,on='visit_number')
pd.merge(df_a,df_b,left_on='patient_id',right_on='doctor_id')
pd.merge(df_a,df_b,on='visit_number', how='left')
pd.merge(df_a,df_b,on='visit_number', how='right')
pd.merge(df_a,df_b,on='visit_number', how='outer')
storeDJ = pd.merge(topstoredf, djiadf, left_on='SalesDate',right_on='DATE',how='left')
storeDJ.head()
storeDJ.plot.scatter(x='DowJonesAvg',y='SalesToStore')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We only lose 25 rows out of 13,000. I'm going to go with that; it simplifies further computations.
Step2: We now want the sum of all of the SalesToStore values in each group. We can do this easily since it is the only numerical value in the group. We apply the sum() function of the group to get the sum. There are a handful of other built-in functions. You can get a list of the functions by pressing the <TAB> key after typing storegroups. Try it to see the list of functions. Documentation for these functions is provided on the DataFrame document page.
Step3: Sorting Data
Step4: Subsetting Data
Step5: We then index the dataframe based on those rows. We get a dataframe that only has data from the store we want.
Step6: There is a lot of duplicated information here. We only really care about the SalesDate and the SalesToStore column. So we'll drop the other columns in this dataframe and assign it the same name (effectively dropping the columns).
Step7: We now want to sort the data by date so that it is listed in chronological order. We will use the sort_values() function (in place). We tell pandas which column to sort by.
Step8: If we want to get the first entry in this new dataframe, we have to address it by its index, which in this case is 2640 and not 0.
Step9: Resetting the index
Step10: Now we get the first entry by addressing the dataframe at location 0.
Step11: Before we move on, let's look at this data. It is always a good idea to have some idea of what the data look like!
Step12: A Second dataset
Step13: And, of course, let's plot it to see what it looks like.
Step14: Joining Data
Step15: How do we tack df_b on the end of df_a? This is useful if you have pieces of a dataset that you want to combine together. Note that pandas doesn't, by default, reset the index for the new dataframe. We'd have to do that afterwards if we want to. Notice that there are now NaN values in the table! The first dataframe didn't have the doctor_id column, so pandas filled in those values with NaN. Same for the second dataframe and the patient_id values. We could drop those columns if we wanted to.
Step16: What if we want to join together the rows that have the same visit_number? Looking at the table, we see that Billy Bonder and Alice Aoni have the same visit_number. Let's merge the two dataframes together to see how that works.
Step17: So we now have a much smaller dataframe that only has the rows where visit_number was the same in both. Note that pandas has also changed the column names; because there were duplicate names, it appends suffixes to keep column addressing unique. Look at the pandas documentation to see how to change the suffixes that pandas appends.
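For instance, the overlapping column names get `_x`/`_y` suffixes by default, and the `suffixes` keyword lets you pick your own. A small sketch (using simplified toy frames, not the exact ones above):

```python
import pandas as pd

df_a = pd.DataFrame({"patient_id": ["D", "G"],
                     "name": ["Alice", "Billy"],
                     "visit_number": [4, 5]})
df_b = pd.DataFrame({"doctor_id": ["A", "B"],
                     "name": ["Bryce", "Betty"],
                     "visit_number": [4, 5]})

# suffixes replaces the default ("_x", "_y") on overlapping column names.
merged = pd.merge(df_a, df_b, on="visit_number",
                  suffixes=("_patient", "_doctor"))
print(list(merged.columns))
```

The overlapping `name` column comes out as `name_patient` and `name_doctor`, which is usually much easier to read downstream.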
Step18: What if we want to keep all of the entries in the left dataframe and only add in values on the right where they match? We can do that, too!
Step19: Not surprisingly, we can do the right-side, too.
Step20: Finally, we can do both and include all the data from both dataframes, matching where possible. That is called an "outer" join.
Step21: Choosing the Merge Index
Step22: Now we can do what we wanted to do
|
11,816
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
plt.rcParams["figure.figsize"] = (20,5) # This can be the default, or else, you can also specify this every time you generate a graph
import vincent
vincent.core.initialize_notebook()
# Note: You will have to unzip the file, as Github has a file size limit of 100mb
df = pd.read_csv("Data/database.csv")
df.head()
df.shape
df.set_index(["Record ID"],inplace = True)
df.head()
df.apply(lambda x: sum(x.isnull()),axis=0)
df.describe()
df["Weapon"].head()
plt.rcParams["figure.figsize"] = (20,4)
df["Weapon"].value_counts().plot(kind = "bar")
plt.title('Deaths Attributable to Various Weapons')
plt.rcParams["figure.figsize"] = (10,10)
df["Weapon"].value_counts().plot(kind = "bar")
plt.title('Deaths Attributable to Various Weapons')
plt.rcParams["figure.figsize"] = (20,4)
plt.yscale('log', nonposy='clip')
df["Weapon"].value_counts().plot(kind = "bar")
plt.title('Deaths Attributable to Various Weapons')
df[df["Crime Solved"] != "Yes"].shape
unsolved = df[df["Crime Solved"] != "Yes"]
unsolved.head()
unsolved.describe()
plt.rcParams["figure.figsize"] = (20,7)
unsolved['Year'].value_counts().sort_index(ascending=True).plot(kind='line')
plt.title('Number of Unsolved Homicides: 1980 to 2014')
dict_states = {'Alaska':'AK','Alabama':'AL','Arkansas':'AR','Arizona':'AZ', 'California':'CA', 'Colorado':'CO', 'Connecticut':'CT',
'District of Columbia':'DC', 'Delaware':'DE', 'Florida':'FL', 'Georgia':'GA', 'Hawaii':'HI', 'Iowa':'IA',
'Idaho':'ID', 'Illinois':'IL', 'Indiana':'IN', 'Kansas':'KS', 'Kentucky':'KY', 'Louisiana':'LA',
'Massachusetts':'MA', 'Maryland':'MD', 'Maine':'ME', 'Michigan':'MI', 'Minnesota':'MN', 'Missouri':'MO',
'Mississippi':'MS', 'Montana':'MT', 'North Carolina':'NC', 'North Dakota':'ND', 'Nebraska':'NE',
'New Hampshire':'NH', 'New Jersey':'NJ', 'New Mexico':'NM', 'Nevada':'NV', 'New York':'NY', 'Ohio':'OH',
'Oklahoma':'OK', 'Oregon':'OR', 'Pennsylvania':'PA', 'Puerto Rico':'PR', 'Rhode Island':'RI',
'South Carolina':'SC', 'South Dakota':'SD', 'Tennessee':'TN', 'Texas':'TX', 'Utah':'UT',
'Virginia':'VA', 'Vermont':'VT', 'Washington':'WA', 'Wisconsin':'WI', 'West Virginia':'WV', 'Wyoming':'WY'}
abb_st = [val for val in dict_states.values()]
len(abb_st)
plt.rcParams["figure.figsize"] = (20,7)
ax = sns.countplot(x="State", hue="Weapon", data=unsolved[unsolved["Weapon"]=="Handgun"])
ax.set_xticklabels(abb_st)
plt.title("Unsolved Homicides Caused By Handguns")
unsolved['Weapon'].value_counts()
plt.rcParams["figure.figsize"] = (15,10)
unsolved['Weapon'].value_counts().plot(kind='bar')
bar = vincent.Bar(unsolved['State'].value_counts())
bar
bar.x_axis_properties(label_angle = 180+90, label_align = "right")
bar.legend(title = "Unsolved Homicides: Weapons Involved")
# Team_Before = df['Punches Before'].groupby(df['Team'])
rel = unsolved['Weapon'].groupby(unsolved['Victim Sex'])
rel.size().plot(kind='bar')
unsolved["Month"].value_counts().plot(kind="bar")
unsolved["Agency Type"].value_counts().plot(kind="bar")
#plt.yscale('log', nonposy='clip')
unsolved["Crime Type"].unique()
pot_sk = unsolved[unsolved["Crime Type"] == "Murder or Manslaughter"]
pot_sk.head()
pot_sk.shape
pot_sk["City"].value_counts().head(10).plot(kind="bar")
plt.title("Top 10 Cities: Unsolved Murders or Manslaughters")
pot_sk["City"].value_counts().tail(10).plot(kind="bar")
plt.title("Bottom 10 Cities: Unsolved Murders or Manslaughters")
pot_sk.groupby("City").filter(lambda x: len(x)>5)
two_or_more = pot_sk.groupby("City").filter(lambda x: len(x)>5)
two_or_more["City"].value_counts().tail(10).plot(kind="bar")
df["Relationship"].unique()
known = df[df["Relationship"] != "Unknown"]
known.head()
known["Relationship"].value_counts()
plt.rcParams["figure.figsize"] = (20,5)
known["Relationship"].value_counts().plot(kind="bar")
plt.title("Relationsip of Victim to Perpetrator")
plt.yscale('log', nonposy='clip')
df.head(2)
df["Perpetrator Race"].unique()
df.columns
pd.pivot_table(known,index=["Victim Race","Perpetrator Race"],values=["Victim Count"],aggfunc=[np.sum])
#columns=["Product"],aggfunc=[np.sum])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import the CSV file
Step2: Basic Exploration
Step3: Our dataset seems really clean, without any missing values, which is wonderful!
Step4: Weapons Used
Step5: Can the scale we deploy for graphics affect our perception of the nature of the data? Let's check different configurations.
Step6: Unsolved Crimes
Step7: A significant majority of victims in unsolved homicides are male.
Step8: Agency Type
Step9: Removing Death by Negligence
Step10: Where are potential serial killers hiding? In plain sight in large cities, or in small towns?
Step11: Some of the smaller cities have just 1 unsolved homicide. Serial killers are defined as those having at least 3 victims. Let's put the threshold at 5 unsolved homicides for the city.
Step12: Exploring Relationship Between Victims and Perpetrators
Step13: Race
|
11,817
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Chris Holdgraf <choldgraf@gmail.com>
# Jona Sassenhagen <jona.sassenhagen@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
# License: BSD (3-clause)
import mne
import numpy as np
import matplotlib.pyplot as plt
# Load the data from the internet
path = mne.datasets.kiloword.data_path() + '/kword_metadata-epo.fif'
epochs = mne.read_epochs(path)
# The metadata exists as a Pandas DataFrame
print(epochs.metadata.head(10))
av1 = epochs['Concreteness < 5 and WordFrequency < 2'].average()
av2 = epochs['Concreteness > 5 and WordFrequency > 2'].average()
joint_kwargs = dict(ts_args=dict(time_unit='s'),
topomap_args=dict(time_unit='s'))
av1.plot_joint(show=False, **joint_kwargs)
av2.plot_joint(show=False, **joint_kwargs)
words = ['film', 'cent', 'shot', 'cold', 'main']
epochs['WORD in {}'.format(words)].plot_image(show=False)
epochs['cent'].average().plot(show=False, time_unit='s')
# Create two new metadata columns
metadata = epochs.metadata
is_concrete = metadata["Concreteness"] > metadata["Concreteness"].median()
metadata["is_concrete"] = np.where(is_concrete, 'Concrete', 'Abstract')
is_long = metadata["NumberOfLetters"] > 5
metadata["is_long"] = np.where(is_long, 'Long', 'Short')
epochs.metadata = metadata
query = "is_long == '{0}' & is_concrete == '{1}'"
evokeds = dict()
for concreteness in ("Concrete", "Abstract"):
for length in ("Long", "Short"):
subset = epochs[query.format(length, concreteness)]
evokeds["/".join((concreteness, length))] = list(subset.iter_evoked())
# For the actual visualisation, we store a number of shared parameters.
style_plot = dict(
colors={"Long": "Crimson", "Short": "Cornflowerblue"},
linestyles={"Concrete": "-", "Abstract": ":"},
split_legend=True,
ci=.68,
show_sensors='lower right',
show_legend='lower left',
truncate_yaxis="max_ticks",
picks=epochs.ch_names.index("Pz"),
)
fig, ax = plt.subplots(figsize=(6, 4))
mne.viz.plot_compare_evokeds(evokeds, axes=ax, **style_plot)
plt.show()
letters = epochs.metadata["NumberOfLetters"].unique().astype(int).astype(str)
evokeds = dict()
for n_letters in letters:
evokeds[n_letters] = epochs["NumberOfLetters == " + n_letters].average()
style_plot["colors"] = {n_letters: int(n_letters)
for n_letters in letters}
style_plot["cmap"] = ("# of Letters", "viridis_r")
del style_plot['linestyles']
fig, ax = plt.subplots(figsize=(6, 4))
mne.viz.plot_compare_evokeds(evokeds, axes=ax, **style_plot)
plt.show()
evokeds = dict()
query = "is_concrete == '{0}' & NumberOfLetters == {1}"
for concreteness in ("Concrete", "Abstract"):
for n_letters in letters:
subset = epochs[query.format(concreteness, n_letters)]
evokeds["/".join((concreteness, n_letters))] = subset.average()
style_plot["linestyles"] = {"Concrete": "-", "Abstract": ":"}
fig, ax = plt.subplots(figsize=(6, 4))
mne.viz.plot_compare_evokeds(evokeds, axes=ax, **style_plot)
plt.show()
data = epochs.get_data()
metadata = epochs.metadata.copy()
epochs_new = mne.EpochsArray(data, epochs.info, metadata=metadata)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can use this metadata attribute to select subsets of Epochs. This
Step2: Next we'll choose a subset of words to keep.
Step3: Note that traditional epochs sub-selection still works. The traditional
Step4: Below we'll show a more involved example that leverages the metadata
Step5: Now we can quickly extract (and plot) subsets of the data. For example, to
Step6: To compare words which are 4, 5, 6, 7 or 8 letters long
Step7: And finally, for the interaction between concreteness and continuous length
Step8: <div class="alert alert-info"><h4>Note</h4><p>Creating an
|
11,818
|
<ASSISTANT_TASK:>
Python Code:
IMAGE_PATH = "datasets/CIFAR10"
import os, subprocess
from urllib.request import urlretrieve
dataFile = "test.zip"
if not os.path.isdir(IMAGE_PATH):
os.makedirs(IMAGE_PATH)
urlretrieve("https://mmlspark.azureedge.net/datasets/CIFAR10/test.zip",
IMAGE_PATH + ".zip")
print(subprocess.check_output(
"ip=\"%s\"; cd \"$ip\" && unzip -q \"../$(basename $PWD).zip\"" % IMAGE_PATH,
stderr = subprocess.STDOUT, shell = True)
.decode("utf-8"))
%%local
IMAGE_PATH = "/datasets/CIFAR10/test"
import subprocess
if subprocess.call(["hdfs", "dfs", "-test", "-d", IMAGE_PATH]):
from urllib import urlretrieve
urlretrieve("https://mmlspark.azureedge.net/datasets/CIFAR10/test.zip", "/tmp/test.zip")
print subprocess.check_output(
"rm -rf /tmp/CIFAR10 && mkdir -p /tmp/CIFAR10 && unzip /tmp/test.zip -d /tmp/CIFAR10",
stderr=subprocess.STDOUT, shell=True)
print subprocess.check_output(
"hdfs dfs -mkdir -p %s" % IMAGE_PATH,
stderr=subprocess.STDOUT, shell=True)
print subprocess.check_output(
"hdfs dfs -copyFromLocal -f /tmp/CIFAR10/test/011*.png %s"%IMAGE_PATH,
stderr=subprocess.STDOUT, shell=True)
IMAGE_PATH = "/datasets/CIFAR10/test"
import mmlspark
import numpy as np
from mmlspark import toNDArray
images = spark.readImages(IMAGE_PATH, recursive = True, sampleRatio = 0.1).cache()
images.printSchema()
print(images.count())
from PIL import Image
data = images.take(3) # take first three rows of the dataframe
im = data[2][0] # the image is in the first column of a given row
print("image type: {}, number of fields: {}".format(type(im), len(im)))
print("image path: {}".format(im.path))
print("height: {}, width: {}, OpenCV type: {}".format(im.height, im.width, im.type))
arr = toNDArray(im) # convert to numpy array
Image.fromarray(arr, "RGB") # display the image inside notebook
print(images.count())
from mmlspark import ImageTransformer
tr = (ImageTransformer() # images are resized and then cropped
.setOutputCol("transformed")
.resize(height = 200, width = 200)
.crop(0, 0, height = 180, width = 180) )
small = tr.transform(images).select("transformed")
im = small.take(3)[2][0] # take third image
Image.fromarray(toNDArray(im), "RGB") # display the image inside notebook
from pyspark.sql.functions import udf
from mmlspark import ImageSchema, toNDArray, toImage
def u(row):
array = toNDArray(row) # convert Image to numpy ndarray[height, width, 3]
array[:,:,2] = 0
return toImage(array) # numpy array back to Spark Row structure
noBlueUDF = udf(u,ImageSchema)
noblue = small.withColumn("noblue", noBlueUDF(small["transformed"])).select("noblue")
im = noblue.take(3)[2][0] # take third image
Image.fromarray(toNDArray(im), "RGB") # display the image inside notebook
from mmlspark import UnrollImage
unroller = UnrollImage().setInputCol("noblue").setOutputCol("unrolled")
unrolled = unroller.transform(noblue).select("unrolled")
vector = unrolled.take(1)[0][0]
print(type(vector))
len(vector.toArray())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The images are loaded from the directory (for fast prototyping, consider loading a fraction of
Step2: When collected from the DataFrame, the image data are stored in a Row, which is Spark's way
Step3: Use ImageTransformer for the basic image manipulation
Step4: For the advanced image manipulations, use Spark UDFs.
Step5: Images could be unrolled into the dense 1D vectors suitable for CNTK evaluation.
|
11,819
|
<ASSISTANT_TASK:>
Python Code:
plt.imshow(plt.imread('./res/find_connected.png'))
plt.figure(figsize=(12,8))
plt.imshow(plt.imread('./res/fig21_1.png'))
# Exercises
plt.figure(figsize=(15,8))
plt.imshow(plt.imread('./res/fig21_2.png'))
# Exercises
plt.imshow(plt.imread('./res/fig21_4.png'))
plt.imshow(plt.imread('./res/fig21_5.png'))
# Pseudocode for disjoint-set forests
plt.imshow(plt.imread('./res/forest.png'))
# Exercises
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 21.2 Linked-list representation of disjoint sets
Step2: MAKE-SET and FIND-SET require $O(1)$ time.
Step3: 21.3 Disjoint-set forests
Step4: path compression
Step5: When we use both union by rank and path compression, the worst-case running time is $O(m \alpha(n))$, where $\alpha(n)$ is a very slowly growing function, which we define in Section 21.4.
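Since the cells above only display figures, here is a compact sketch of the forest representation with both heuristics. The class and method names are my own choices (loosely following CLRS terminology), not code from the book:

```python
class DisjointSet:
    """Disjoint-set forest with union by rank and path compression."""

    def __init__(self, n):
        self.parent = list(range(n))  # each node starts as its own root
        self.rank = [0] * n           # upper bound on tree height

    def find_set(self, x):
        # Path compression: point every node on the path at the root.
        if self.parent[x] != x:
            self.parent[x] = self.find_set(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        # Union by rank: attach the shorter tree under the taller one.
        rx, ry = self.find_set(x), self.find_set(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

ds = DisjointSet(5)
ds.union(0, 1)
ds.union(1, 2)
print(ds.find_set(0) == ds.find_set(2), ds.find_set(3) == ds.find_set(4))
# True False
```

With both heuristics, a sequence of $m$ operations on $n$ elements runs in $O(m \alpha(n))$ time, matching the bound quoted above.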
|
11,820
|
<ASSISTANT_TASK:>
Python Code:
def parse(line):
"""Parses a line from the colors dataset."""
items = tf.string_split([line], ",").values
rgb = tf.string_to_number(items[1:], out_type=tf.float32) / 255.0
color_name = items[0]
chars = tf.one_hot(tf.decode_raw(color_name, tf.uint8), depth=256)
length = tf.cast(tf.shape(chars)[0], dtype=tf.int64)
return rgb, chars, length
def set_static_batch_shape(batch_size):
def apply(rgb, chars, length):
rgb.set_shape((batch_size, None))
chars.set_shape((batch_size, None, 256))
length.set_shape((batch_size,))
return rgb, chars, length
return apply
def load_dataset(data_dir, url, batch_size, training=True):
"""Loads the colors data at path into a tf.PaddedDataset."""
path = tf.keras.utils.get_file(os.path.basename(url), url, cache_dir=data_dir)
dataset = tf.data.TextLineDataset(path)
dataset = dataset.skip(1)
dataset = dataset.map(parse)
dataset = dataset.cache()
dataset = dataset.repeat()
if training:
dataset = dataset.shuffle(buffer_size=3000)
dataset = dataset.padded_batch(
batch_size, padded_shapes=((None,), (None, 256), ()))
# To simplify the model code, we statically set as many of the shapes that we
# know.
dataset = dataset.map(set_static_batch_shape(batch_size))
return dataset
@autograph.convert()
class RnnColorbot(tf.keras.Model):
"""RNN Colorbot model."""
def __init__(self):
super(RnnColorbot, self).__init__()
self.lower_cell = tf.contrib.rnn.LSTMBlockCell(256)
self.upper_cell = tf.contrib.rnn.LSTMBlockCell(128)
self.relu_layer = tf.layers.Dense(3, activation=tf.nn.relu)
def _rnn_layer(self, chars, cell, batch_size, training):
"""A single RNN layer.
Args:
chars: A Tensor of shape (max_sequence_length, batch_size, input_size)
cell: An object of type tf.contrib.rnn.LSTMBlockCell
batch_size: Int, the batch size to use
training: Boolean, whether the layer is used for training
Returns:
A Tensor of shape (max_sequence_length, batch_size, output_size)."""
hidden_outputs = tf.TensorArray(tf.float32, 0, True)
state, output = cell.zero_state(batch_size, tf.float32)
for ch in chars:
cell_output, (state, output) = cell.call(ch, (state, output))
hidden_outputs.append(cell_output)
hidden_outputs = autograph.stack(hidden_outputs)
if training:
hidden_outputs = tf.nn.dropout(hidden_outputs, 0.5)
return hidden_outputs
def build(self, _):
Creates the model variables. See keras.Model.build().
self.lower_cell.build(tf.TensorShape((None, 256)))
self.upper_cell.build(tf.TensorShape((None, 256)))
self.relu_layer.build(tf.TensorShape((None, 128)))
self.built = True
def call(self, inputs, training=False):
The RNN model code. Uses Eager.
The model consists of two RNN layers (made by lower_cell and upper_cell),
followed by a fully connected layer with ReLU activation.
Args:
inputs: A tuple (chars, length)
training: Boolean, whether the layer is used for training
Returns:
A Tensor of shape (batch_size, 3) - the model predictions.
chars, length = inputs
batch_size = chars.shape[0]
seq = tf.transpose(chars, (1, 0, 2))
seq = self._rnn_layer(seq, self.lower_cell, batch_size, training)
seq = self._rnn_layer(seq, self.upper_cell, batch_size, training)
# Grab just the end-of-sequence from each output.
indices = (length - 1, range(batch_size))
indices = tf.stack(indices, 1)
sequence_ends = tf.gather_nd(seq, indices)
return self.relu_layer(sequence_ends)
@autograph.convert()
def loss_fn(labels, predictions):
return tf.reduce_mean((predictions - labels) ** 2)
def model_fn(features, labels, mode, params):
Estimator model function.
chars = features['chars']
sequence_length = features['sequence_length']
inputs = (chars, sequence_length)
# Create the model. Simply using the AutoGraph-ed class just works!
colorbot = RnnColorbot()
colorbot.build(None)
if mode == tf.estimator.ModeKeys.TRAIN:
predictions = colorbot(inputs, training=True)
loss = loss_fn(labels, predictions)
learning_rate = params['learning_rate']
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
global_step = tf.train.get_global_step()
train_op = optimizer.minimize(loss, global_step=global_step)
return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
elif mode == tf.estimator.ModeKeys.EVAL:
predictions = colorbot(inputs)
loss = loss_fn(labels, predictions)
return tf.estimator.EstimatorSpec(mode, loss=loss)
elif mode == tf.estimator.ModeKeys.PREDICT:
predictions = colorbot(inputs)
predictions = tf.minimum(predictions, 1.0)
return tf.estimator.EstimatorSpec(mode, predictions=predictions)
def input_fn(data_dir, data_url, params, training=True):
An input function for training
batch_size = params['batch_size']
# load_dataset defined above
dataset = load_dataset(data_dir, data_url, batch_size, training=training)
# Package the pipeline end in a format suitable for the estimator.
labels, chars, sequence_length = dataset.make_one_shot_iterator().get_next()
features = {
'chars': chars,
'sequence_length': sequence_length
}
return features, labels
params = {
'batch_size': 64,
'learning_rate': 0.01,
}
train_url = "https://raw.githubusercontent.com/random-forests/tensorflow-workshop/master/archive/extras/colorbot/data/train.csv"
test_url = "https://raw.githubusercontent.com/random-forests/tensorflow-workshop/master/archive/extras/colorbot/data/test.csv"
data_dir = "tmp/rnn/data"
regressor = tf.estimator.Estimator(
model_fn=model_fn,
params=params)
regressor.train(
input_fn=lambda: input_fn(data_dir, train_url, params),
steps=100)
eval_results = regressor.evaluate(
input_fn=lambda: input_fn(data_dir, test_url, params, training=False),
steps=2
)
print('Eval loss at step %d: %s' % (eval_results['global_step'], eval_results['loss']))
def predict_input_fn(color_name):
An input function for prediction.
_, chars, sequence_length = parse(color_name)
# We create a batch of a single element.
features = {
'chars': tf.expand_dims(chars, 0),
'sequence_length': tf.expand_dims(sequence_length, 0)
}
return features, None
def draw_prediction(color_name, pred):
pred = pred * 255
pred = pred.astype(np.uint8)
plt.axis('off')
plt.imshow(pred)
plt.title(color_name)
plt.show()
def predict_with_estimator(color_name, regressor):
predictions = regressor.predict(
input_fn=lambda:predict_input_fn(color_name))
pred = next(predictions)
predictions.close()
pred = np.minimum(pred, 1.0)
pred = np.expand_dims(np.expand_dims(pred, 0), 0)
draw_prediction(color_name, pred)
tb = widgets.TabBar(["RNN Colorbot"])
while True:
with tb.output_to(0):
try:
color_name = six.moves.input("Give me a color name (or press 'enter' to exit): ")
except (EOFError, KeyboardInterrupt):
break
if not color_name:
break
with tb.output_to(0):
tb.clear_tab()
predict_with_estimator(color_name, regressor)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Case study
Step7: To show the use of control flow, we write the RNN loop by hand, rather than using a pre-built RNN model.
Step9: We will now create the model function for the custom Estimator.
Step11: We'll create an input function that will feed our training and eval data.
Step12: We now have everything in place to build our custom estimator and use it for training and eval!
Step14: And here's the same estimator used for inference.
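The hand-written per-timestep loop in `_rnn_layer` above can be mimicked in plain numpy. This sketch uses a vanilla tanh RNN cell instead of the `LSTMBlockCell`, and the weight shapes are illustrative:

```python
import numpy as np

def rnn_forward(xs, W_x, W_h, b):
    """Run a vanilla tanh RNN over a (T, batch, input_size) sequence,
    returning the stacked (T, batch, hidden_size) hidden states."""
    batch, hidden = xs.shape[1], W_h.shape[0]
    h = np.zeros((batch, hidden))
    outputs = []
    for x in xs:  # iterate over time steps, like the autograph loop over chars
        h = np.tanh(x @ W_x + h @ W_h + b)
        outputs.append(h)
    return np.stack(outputs)
```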
|
11,821
|
<ASSISTANT_TASK:>
Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
! pip3 install -U google-cloud-storage $USER_FLAG
! pip3 install $USER kfp --upgrade
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
! python3 -c "import kfp; print('KFP SDK version: {}'.format(kfp.__version__))"
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
REGION = "[your-region]" # @param {type:"string"}
if REGION == "[your-region]":
REGION = "us-central1"
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
BUCKET_URI = f"gs://{BUCKET_NAME}"
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]":
    BUCKET_URI = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
! gsutil mb -l $REGION $BUCKET_URI
! gsutil ls -al $BUCKET_URI
SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"}
if (
SERVICE_ACCOUNT == ""
or SERVICE_ACCOUNT is None
or SERVICE_ACCOUNT == "[your-service-account]"
):
# Get your GCP project id from gcloud
shell_output = !gcloud auth list 2>/dev/null
SERVICE_ACCOUNT = shell_output[2].replace("*", "").strip()
print("Service Account:", SERVICE_ACCOUNT)
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_URI
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_URI
import google.cloud.aiplatform as aip
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
PIPELINE_ROOT = "{}/pipeline_root/intro".format(BUCKET_URI)
from typing import NamedTuple
from kfp import dsl
from kfp.v2 import compiler
from kfp.v2.dsl import component
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_URI)
@component(output_component_file="hw.yaml", base_image="python:3.9")
def hello_world(text: str) -> str:
print(text)
return text
@component(packages_to_install=["google-cloud-storage"])
def two_outputs(
text: str,
) -> NamedTuple(
"Outputs",
[
("output_one", str), # Return parameters
("output_two", str),
],
):
# the import is not actually used for this simple example, but the import
# is successful, as it was included in the `packages_to_install` list.
from google.cloud import storage # noqa: F401
o1 = f"output one from text: {text}"
o2 = f"output two from text: {text}"
print("output one: {}; output_two: {}".format(o1, o2))
return (o1, o2)
@component
def consumer(text1: str, text2: str, text3: str):
print(f"text1: {text1}; text2: {text2}; text3: {text3}")
@dsl.pipeline(
name="hello-world-v2",
description="A simple intro pipeline",
pipeline_root=PIPELINE_ROOT,
)
def pipeline(text: str = "hi there"):
hw_task = hello_world(text)
two_outputs_task = two_outputs(text)
consumer_task = consumer( # noqa: F841
hw_task.output,
two_outputs_task.outputs["output_one"],
two_outputs_task.outputs["output_two"],
)
from kfp.v2 import compiler # noqa: F811
compiler.Compiler().compile(pipeline_func=pipeline, package_path="intro_pipeline.json")
DISPLAY_NAME = "intro_" + TIMESTAMP
job = aip.PipelineJob(
display_name=DISPLAY_NAME,
template_path="intro_pipeline.json",
pipeline_root=PIPELINE_ROOT,
)
job.run()
job.delete()
if not os.getenv("IS_TESTING"):
from kfp.v2.google.client import AIPlatformClient # noqa: F811
api_client = AIPlatformClient(project_id=PROJECT_ID, region=REGION)
# adjust time zone and cron schedule as necessary
response = api_client.create_schedule_from_job_spec(
job_spec_path="intro_pipeline.json",
schedule="2 * * * *",
time_zone="America/Los_Angeles", # change this as necessary
parameter_values={"text": "Hello world!"},
# pipeline_root=PIPELINE_ROOT # this argument is necessary if you did not specify PIPELINE_ROOT as part of the pipeline definition.
)
if not os.getenv("IS_TESTING"):
response = api_client.create_run_from_job_spec(
job_spec_path="intro_pipeline.json",
pipeline_root=PIPELINE_ROOT,
service_account=SERVICE_ACCOUNT, # <-- CHANGE to use non-default service account
)
job = aip.PipelineJob(
display_name="intro_" + TIMESTAMP,
template_path="intro_pipeline.json",
enable_caching=False,
)
job.run()
job.delete()
! curl -X GET -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-Type: application/json" https://{API_ENDPOINT}/v1beta1/projects/{PROJECT_ID}/locations/{REGION}/pipelineJobs
output = ! curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-Type: application/json; charset=utf-8" https://{API_ENDPOINT}/v1beta1/projects/{PROJECT_ID}/locations/{REGION}/pipelineJobs --data "@intro_pipeline.json"
PIPELINE_RUN_ID = output[5].split("/")[-1].split('"')[0]
print(output)
! curl -X GET -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-Type: application/json" https://{API_ENDPOINT}/v1beta1/projects/{PROJECT_ID}/locations/{REGION}/pipelineJobs/{PIPELINE_RUN_ID}
! curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-Type: application/json" https://{API_ENDPOINT}/v1beta1/projects/{PROJECT_ID}/locations/{REGION}/pipelineJobs/{PIPELINE_RUN_ID}:cancel
! curl -X DELETE -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-Type: application/json" https://{API_ENDPOINT}/v1beta1/projects/{PROJECT_ID}/locations/{REGION}/pipelineJobs/{PIPELINE_RUN_ID}
delete_pipeline = True
delete_bucket = True
try:
if delete_pipeline and "DISPLAY_NAME" in globals():
pipelines = aip.PipelineJob.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
pipeline = pipelines[0]
aip.PipelineJob.delete(pipeline.resource_name)
print("Deleted pipeline:", pipeline)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_URI" in globals():
! gsutil rm -r $BUCKET_URI
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Install the latest GA version of KFP SDK library as well.
Step3: Restart the kernel
Step4: Check the versions of the packages you installed. The KFP SDK version should be >=1.6.
Step5: Before you begin
Step6: Region
Step7: Timestamp
Step8: Authenticate your Google Cloud account
Step9: Create a Cloud Storage bucket
Step10: Only if your bucket doesn't already exist
Step11: Finally, validate access to your Cloud Storage bucket by examining its contents
Step12: Service Account
Step13: Set service account access for Vertex AI Pipelines
Step14: Set up variables
Step15: Vertex AI constants
Step16: Vertex AI Pipelines constants
Step17: Additional imports.
Step18: Initialize Vertex AI SDK for Python
Step19: Define Python function-based pipeline components
Step20: As you'll see below, compilation of this component creates a task factory function, called hello_world, that you can use in defining a pipeline step.
Step21: Define the consumer component
Step22: Define a pipeline that uses the components
Step23: Compile the pipeline
Step24: Run the pipeline
Step25: Click on the generated link to see your run in the Cloud Console.
Step26: Recurring pipeline runs
Step27: Once the scheduled job is created, you can see it listed in the Cloud Scheduler panel in the Console.
Step28: Pipeline step caching
Step29: Delete the pipeline job
Step30: Using the Pipelines REST API
Step31: Create a pipeline job
Step32: Get a pipeline job from its ID
Step33: Cancel a pipeline job given its ID
Step34: Delete a pipeline job given its ID
Step35: Cleaning up
|
11,822
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
# Module with the neural net classes
import DNN
import Solvers
N = 100
data = np.concatenate((np.random.multivariate_normal(mean=[0, 0], cov=[[0.5, 0],[0, 0.5]], size=N),
np.random.multivariate_normal(mean=[4, 4], cov=[[0.5, 0],[0, 0.5]], size=N),
np.random.multivariate_normal(mean=[0, 4], cov=[[0.5, 0],[0, 0.5]], size=N),
np.random.multivariate_normal(mean=[4, 0], cov=[[0.5, 0],[0, 0.5]], size=N)),
axis=0)
# Arrays are explicitly defined as (N x 1) for convenience, allowing generalization to networks with multidimensional outputs
labels = np.concatenate((np.ones((2*N, 1)), np.zeros((2*N, 1))), axis=0)
plt.plot(data[labels[:,0] == 1, 0], data[labels[:,0] == 1, 1], 'r+', label='label = 1')
# overplotting is the default in modern matplotlib (plt.hold was removed in 3.0)
plt.plot(data[labels[:,0] == 0, 0], data[labels[:,0] == 0, 1], 'bo', label='label = 0')
plt.legend()
plt.show()
# instantiate an empty network
my_net = DNN.Net()
# add layers to my_net in a bottom up fashion
my_net.addLayer(DNN.Layer(n_in=2, n_out=2, activation='relu'))
my_net.addLayer(DNN.Layer(n_in=2, n_out=1, activation='sigmoid'))
# create solver object for training the feedforward network
solver_params = {'lr_rate': 0.001,
'momentum': 0.9}
my_solver = DNN.SGDSolver(solver_params)
#my_solver = DNN.NAGSolver(solver_params)
#my_solver = DNN.RMSPropSolver(solver_params)
#my_solver = DNN.AdaGradSolver(solver_params)
#my_solver = Solvers.AdaDeltaSolver(solver_params)
# instantiate a NetTrainer to learn parameters of my_net using the my_solver
train_params = {'net': my_net,
'loss_func': 'xent',
'batch_size': 10,
'max_iter': 80000,
'train_data': data,
'label_data': labels,
'solver': my_solver,
'print_interval': 10000}
my_trainer = DNN.NetTrainer(train_params)
my_trainer.train()
my_net.forward(data)
pred_labels = np.reshape(my_net.Xout > 0.5, -1)
## plot data point with the predicted labels
plt.plot(data[pred_labels, 0], data[pred_labels, 1], 'r+')
# overplotting is the default in modern matplotlib (plt.hold was removed in 3.0)
plt.plot(data[np.logical_not(pred_labels), 0], data[np.logical_not(pred_labels), 1], 'bo')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will demonstrate the nonlinear representation capabilities of the multilayer feedforward network with the XOR problem. First, let's create a small dataset with samples from positive and negative classes.
Step2: Defining the network
Step3: Choosing a solver and setting up the net trainer
Step4: The NetTrainer takes a Net object along with the solver and adds a loss function that will be employed during training. The main function of the NetTrainer is to manage calls that propagate training data forward, backpropagate the errors (cost gradients), and perform parameter updates. The trainer requires the training data and target values (aka labels), Solver and Net objects, and additional information such as the number of iterations, batch sizes, and display of current objective values during training.
Step5: Training the network
Step6: Checking the results
|
11,823
|
<ASSISTANT_TASK:>
Python Code:
import json
from pybbn.graph.variable import Variable
from pybbn.graph.node import BbnNode
from pybbn.graph.edge import Edge, EdgeType
from pybbn.graph.dag import Bbn
a = BbnNode(Variable(0, 'a', ['t', 'f']), [0.2, 0.8])
b = BbnNode(Variable(1, 'b', ['t', 'f']), [0.1, 0.9, 0.9, 0.1])
bbn = Bbn().add_node(a).add_node(b) \
.add_edge(Edge(a, b, EdgeType.DIRECTED))
# serialize to JSON file
s = json.dumps(Bbn.to_dict(bbn))
with open('simple-bbn.json', 'w') as f:
f.write(s)
print(bbn)
# deserialize from JSON file
with open('simple-bbn.json', 'r') as f:
d = json.loads(f.read())
bbn = Bbn.from_dict(d)
print(bbn)
from pybbn.pptc.inferencecontroller import InferenceController
from pybbn.graph.jointree import JoinTree
a = BbnNode(Variable(0, 'a', ['t', 'f']), [0.2, 0.8])
b = BbnNode(Variable(1, 'b', ['t', 'f']), [0.1, 0.9, 0.9, 0.1])
bbn = Bbn().add_node(a).add_node(b) \
.add_edge(Edge(a, b, EdgeType.DIRECTED))
jt = InferenceController.apply(bbn)
with open('simple-join-tree.json', 'w') as f:
d = JoinTree.to_dict(jt)
j = json.dumps(d, sort_keys=True, indent=2)
f.write(j)
print(jt)
with open('simple-join-tree.json', 'r') as f:
j = f.read()
d = json.loads(j)
jt = JoinTree.from_dict(d)
jt = InferenceController.apply_from_serde(jt)
print(jt)
# you have built a BBN
a = BbnNode(Variable(0, 'a', ['t', 'f']), [0.2, 0.8])
b = BbnNode(Variable(1, 'b', ['t', 'f']), [0.1, 0.9, 0.9, 0.1])
bbn = Bbn().add_node(a).add_node(b) \
.add_edge(Edge(a, b, EdgeType.DIRECTED))
# you have built a junction tree from the BBN
# let's call this "original" junction tree the left-hand side (lhs) junction tree
lhs_jt = InferenceController.apply(bbn)
# you may just update the CPTs with the original junction tree structure
# the algorithm to find/build the junction tree is avoided
# the CPTs are updated
rhs_jt = InferenceController.reapply(lhs_jt, {0: [0.3, 0.7], 1: [0.2, 0.8, 0.8, 0.2]})
# let's print out the marginal probabilities and see how things changed
# print the marginal probabilities for the lhs junction tree
print('lhs probabilities')
for node in lhs_jt.get_bbn_nodes():
potential = lhs_jt.get_bbn_potential(node)
print(node)
print(potential)
print('>')
# print the marginal probabilities for the rhs junction tree
print('rhs probabilities')
for node in rhs_jt.get_bbn_nodes():
potential = rhs_jt.get_bbn_potential(node)
print(node)
print(potential)
print('>')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Deserializing
Step2: Serde a join tree
Step3: Deserializing
Step4: Updating the conditional probability tables (CPTs) of a BBN nodes in a junction tree
|
11,824
|
<ASSISTANT_TASK:>
Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
REGION = "us-central1" # @param {type: "string"}
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
! gsutil mb -l $REGION $BUCKET_NAME
! gsutil ls -al $BUCKET_NAME
import google.cloud.aiplatform as aip
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
IMPORT_FILE = "gs://cloud-samples-data/ai-platform/covid/bigquery-public-covid-nyt-us-counties-train.csv"
count = ! gsutil cat $IMPORT_FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $IMPORT_FILE | head
heading = ! gsutil cat $IMPORT_FILE | head -n1
label_column = str(heading).split(",")[-1].split("'")[0]
print("Label Column Name", label_column)
if label_column is None:
raise Exception("label column missing")
dataset = aip.TimeSeriesDataset.create(
display_name="NY Times COVID Database" + "_" + TIMESTAMP,
gcs_source=[IMPORT_FILE],
)
label_column = "mean_temp"
time_column = "date"
time_series_id_column = "county"
print(dataset.resource_name)
TRANSFORMATIONS = [
{"auto": {"column_name": "date"}},
{"auto": {"column_name": "state_name"}},
{"auto": {"column_name": "county_fips_code"}},
{"auto": {"column_name": "confirmed_cases"}},
{"auto": {"column_name": "deaths"}},
]
label_column = "deaths"
time_column = "date"
time_series_identifier_column = "county"
dag = aip.AutoMLForecastingTrainingJob(
display_name="train-iowa-liquor-sales-automl_1",
optimization_objective="minimize-rmse",
column_transformations=TRANSFORMATIONS,
)
model = dag.run(
dataset=dataset,
target_column=label_column,
time_column=time_column,
time_series_identifier_column=time_series_identifier_column,
available_at_forecast_columns=[time_column],
unavailable_at_forecast_columns=[label_column],
time_series_attribute_columns=["state_name", "county_fips_code", "confirmed_cases"],
forecast_horizon=30,
# context_window=30,
data_granularity_unit="day",
data_granularity_count=1,
weight_column=None,
budget_milli_node_hours=1000,
model_display_name="covid_" + TIMESTAMP,
predefined_split_column_name=None,
)
# Get model resource ID
models = aip.Model.list(filter="display_name=covid_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
HEADING = "date,county,state_name,county_fips_code,confirmed_cases,deaths"
INSTANCE_1 = "2020-10-13,Adair,Iowa,19001,103,null"
INSTANCE_2 = "2020-10-29,Adair,Iowa,19001,197,null"
import tensorflow as tf
gcs_input_uri = BUCKET_NAME + "/test.csv"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
f.write(HEADING + "\n")
f.write(str(INSTANCE_1) + "\n")
f.write(str(INSTANCE_2) + "\n")
print(gcs_input_uri)
! gsutil cat $gcs_input_uri
batch_predict_job = model.batch_predict(
job_display_name="covid_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_NAME,
instances_format="csv",
predictions_format="csv",
sync=False,
)
print(batch_predict_job)
batch_predict_job.wait()
import tensorflow as tf
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
tags = list()
for prediction_result in prediction_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
for line in gfile.readlines():
print(line)
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
# Delete the AutoML or Pipeline trainig job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
# Delete the custom trainig job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Step3: Before you begin
Step4: Region
Step5: Timestamp
Step6: Authenticate your Google Cloud account
Step7: Create a Cloud Storage bucket
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Step11: Initialize Vertex SDK for Python
Step12: Tutorial
Step13: Quick peek at your data
Step14: Create the Dataset
Step15: Create and run training pipeline
Step16: Run the training pipeline
Step17: Review model evaluation scores
Step18: Send a batch prediction request
Step19: Make the batch input file
Step20: Make the batch prediction request
Step21: Wait for completion of batch prediction job
Step22: Get the predictions
Step23: Cleaning up
|
11,825
|
<ASSISTANT_TASK:>
Python Code:
import arviz as az
import stan
import numpy as np
import matplotlib.pyplot as plt
# enable PyStan on Jupyter IDE
import nest_asyncio
nest_asyncio.apply()
np.random.seed(26)
xdata = np.linspace(0, 50, 100)
b0, b1, sigma = -2, 1, 3
ydata = np.random.normal(loc=b1 * xdata + b0, scale=sigma)
plt.plot(xdata, ydata)
refit_lr_code =
data {
// Define data for fitting
int<lower=0> N;
vector[N] x;
vector[N] y;
// Define excluded data. It will not be used when fitting.
int<lower=0> N_ex;
vector[N_ex] x_ex;
vector[N_ex] y_ex;
}
parameters {
real b0;
real b1;
real<lower=0> sigma_e;
}
model {
b0 ~ normal(0, 10);
b1 ~ normal(0, 10);
sigma_e ~ normal(0, 10);
for (i in 1:N) {
y[i] ~ normal(b0 + b1 * x[i], sigma_e); // use only data for fitting
}
}
generated quantities {
vector[N] log_lik;
vector[N_ex] log_lik_ex;
vector[N] y_hat;
for (i in 1:N) {
// calculate log likelihood and posterior predictive, there are
// no restrictions on adding more generated quantities
log_lik[i] = normal_lpdf(y[i] | b0 + b1 * x[i], sigma_e);
y_hat[i] = normal_rng(b0 + b1 * x[i], sigma_e);
}
for (j in 1:N_ex) {
// calculate the log likelihood of the excluded data given data_for_fitting
log_lik_ex[j] = normal_lpdf(y_ex[j] | b0 + b1 * x_ex[j], sigma_e);
}
}
data_dict = {
"N": len(ydata),
"y": ydata,
"x": xdata,
# No excluded data in initial fit
"N_ex": 0,
"x_ex": [],
"y_ex": [],
}
sm = stan.build(program_code=refit_lr_code, data=data_dict)
sample_kwargs = {"num_samples": 1000, "num_chains": 4}
fit = sm.sample(**sample_kwargs)
dims = {"y": ["time"], "x": ["time"], "log_likelihood": ["time"], "y_hat": ["time"]}
idata_kwargs = {
"posterior_predictive": ["y_hat"],
"observed_data": "y",
"constant_data": "x",
"log_likelihood": ["log_lik", "log_lik_ex"],
"dims": dims,
}
idata = az.from_pystan(posterior=fit, posterior_model=sm, **idata_kwargs)
class LinearRegressionWrapper(az.PyStanSamplingWrapper):
def sel_observations(self, idx):
xdata = self.idata_orig.constant_data.x.values
ydata = self.idata_orig.observed_data.y.values
mask = np.full_like(xdata, True, dtype=bool)
mask[idx] = False
N_obs = len(mask)
N_ex = np.sum(~mask)
observations = {
"N": int(N_obs - N_ex),
"x": xdata[mask],
"y": ydata[mask],
"N_ex": int(N_ex),
"x_ex": xdata[~mask],
"y_ex": ydata[~mask],
}
return observations, "log_lik_ex"
loo_orig = az.loo(idata, pointwise=True)
loo_orig
loo_orig.pareto_k[[13, 42, 56, 73]] = np.array([0.8, 1.2, 2.6, 0.9])
pystan_wrapper = LinearRegressionWrapper(
refit_lr_code, idata_orig=idata, sample_kwargs=sample_kwargs, idata_kwargs=idata_kwargs
)
loo_relooed = az.reloo(pystan_wrapper, loo_orig=loo_orig)
loo_relooed
loo_orig
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For the example we will use a linear regression.
Step3: Now we will write the Stan code, keeping in mind that it must be able to compute the pointwise log likelihood on excluded data, that is, data which is not used to fit the model. Thus, the backbone of the code must look like
Step4: We have defined a dictionary sample_kwargs that will be passed to the SamplingWrapper in order to make sure that all
Step5: We will create a subclass of {class}~arviz.PyStanSamplingWrapper. Therefore, instead of having to implement all functions required by {func}~arviz.reloo we only have to implement sel_observations. As explained in its docs, it takes one argument which are the indices of the data to be excluded and returns modified_observed_data which is passed as data to sampling function of PyStan model and excluded_observed_data which is used to retrieve the log likelihood of the excluded data (as passing the excluded data would make no sense).
Step6: In this case, the Leave-One-Out Cross Validation (LOO-CV) approximation using Pareto Smoothed Importance Sampling (PSIS) works for all observations, so we will use modify loo_orig in order to make {func}~arviz.reloo believe that PSIS failed for some observations. This will also serve as a validation of our wrapper, as the PSIS LOO-CV already returned the correct value.
Step7: We initialize our sampling wrapper
Step8: And eventually, we can use this wrapper to call az.reloo, and compare the results with the PSIS LOO-CV results.
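The kept/excluded split performed by sel_observations can be sketched with plain numpy. This is a hypothetical, simplified helper for illustration only — the function and key names here are assumptions, not the ArviZ or PyStan API:

```python
import numpy as np

def sel_observations(xdata, ydata, idx):
    # Split the dataset into the part used to refit the model and the
    # excluded part whose out-of-sample log likelihood is evaluated.
    mask = np.isin(np.arange(len(xdata)), idx)
    kept = {"N": int((~mask).sum()), "x": xdata[~mask], "y": ydata[~mask]}
    excluded = {"N_ex": int(mask.sum()), "x_ex": xdata[mask], "y_ex": ydata[mask]}
    return kept, excluded

xdata = np.arange(10, dtype=float)
ydata = 2.0 * xdata + 1.0
kept, excluded = sel_observations(xdata, ydata, [3])
```

The real wrapper passes the kept part as data for the refit and uses the excluded part only to evaluate the pointwise log likelihood.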
|
11,826
|
<ASSISTANT_TASK:>
Python Code:
### import flame module
from flame import Machine
### specify lattice file location
lat_file = "LS1FS1_lattice.lat"
### read lattice file in
with open(lat_file, 'rb') as inf:
# create lattice data object M
M = Machine(inf)
### Initialize simulation parameters
# states
S = M.allocState({})
### run flame;
# 0, len(M); beginning to end of the lattice
# observe=range(len(M)); return states data at each element
# propagation results assigned to 'results'
results = M.propagate(S, 0, len(M), observe=range(0,len(M)))
### plot energy
# extract from 'results'
pos = [results[i][1].pos for i in range(len(M))]
ek = [results[i][1].ref_IonEk for i in range(len(M))]
# plot reference energy
plt.plot(pos,ek)
plt.title('reference energy history\n')
plt.xlabel('$z$ [m]')
plt.ylabel('energy [eV/u]')
plt.show()
### plot x, y centroid and rms of overall beam
# extract from 'results'
pos = [results[i][1].pos for i in range(len(M))]
x,y = np.array([[results[i][1].moment0_env[j] for i in range(len(M))] for j in [0,2]])
xrms,yrms = np.array([[results[i][1].moment0_rms[j] for i in range(len(M))] for j in [0,2]])
# plot x,y centroid
plt.plot(pos,x,label='$x$')
plt.plot(pos,y,label='$y$')
plt.title('centroid orbit history of overall beam')
plt.xlabel('$z$ [m]')
plt.ylabel('centroid [mm]')
plt.legend(loc='best')
plt.show()
# plot x, y rms
plt.plot(pos,xrms,label='$x$')
plt.plot(pos,yrms,label='$y$')
plt.title('rms size history of overall beam')
plt.xlabel('$z$ [m]')
plt.ylabel('rms [mm]')
plt.legend(loc='upper left')
plt.show()
### python object data are easy to manage
# plot x centroid and rms
plt.plot(pos,x,label='$x$ centroid')
plt.fill_between(pos, x+xrms, x-xrms, alpha=0.2, label='$x$ envelope')
plt.title('horizontal beam envelope history')
plt.xlabel('$z$ [m]')
plt.ylabel('$x$ [mm]')
plt.legend(loc='upper left')
plt.show()
# plot y centroid and rms
plt.plot(pos,y,'g',label='$y$ centroid')
plt.title('vertical beam envelope history')
plt.fill_between(pos, y+yrms, y-yrms,facecolor='g', alpha=0.2, label='$y$ envelope')
plt.xlabel('$z$ [m]')
plt.ylabel('$y$ [mm]')
plt.legend(loc='upper left')
plt.show()
### plot RMS x, y of each charge state
# extract from results
n_i = len(M.conf()['IonChargeStates'])
n_s = len(M.conf()['Stripper_IonChargeStates'])
pos, x, y, xrms, yrms = [[[] for _ in range(n_i+n_s)] for _ in range(5)]
for i in range(len(M)):
j,o = [n_i,0] if n_i == len(results[i][1].IonZ) else [n_s,n_i]
for k in range(j):
pos[k+o].append(results[i][1].pos)
x[k+o].append(results[i][1].moment0[0,k])
y[k+o].append(results[i][1].moment0[2,k])
xrms[k+o].append(np.sqrt(results[i][1].moment1[0,0,k]))
yrms[k+o].append(np.sqrt(results[i][1].moment1[2,2,k]))
# plot x centroid and rms
cs = ['33','34','76','77','78','79','80']
for i in range(n_i + n_s):
plt.plot(pos[i],x[i],color=colors[i],label='U$^{'+cs[i]+'}$')
plt.fill_between(pos[i],np.array(x[i])+np.array(xrms[i]),
np.array(x[i])-np.array(xrms[i]),
alpha=0.2,facecolor=colors[i])
plt.title('horizontal beam envelope history')
plt.xlabel('$z$ [m]')
plt.ylabel('$x$ [mm]')
plt.legend(loc = 'upper left', ncol = 4, fontsize=17)
plt.show()
# plot y centroid and rms
for i in range(n_i + n_s):
plt.plot(pos[i],y[i],color=colors[i],label='U$^{'+cs[i]+'}$')
plt.fill_between(pos[i],np.array(y[i])+np.array(yrms[i]),
np.array(y[i])-np.array(yrms[i]),
alpha=0.2,facecolor=colors[i])
plt.title('vertical beam envelope history')
plt.xlabel('$z$ [m]')
plt.ylabel('$y$ [mm]')
plt.legend(loc = 'upper left', ncol = 4, fontsize=17)
plt.show()
# index of point A (solenoid:ls1_cb06_sol3_d1594_47)
point_A = [i for i in range(len(M)) if M.conf(i)['name']=='ls1_cb06_sol3_d1594_48'][0]
# Simulate from initial point(LS1 entrance) to point A
S2 = M.allocState({})
resultsA = M.propagate(S2, 0, point_A-1, observe = range(point_A))
### Simulate from A to B(FS1 exit) with changing solenoid strength
# Try 3 Bz [T] cases for the solenoid
bzcase = [5.0, 7.5, 10.0]
# make array for results
resultsB = [[] for _ in range(len(bzcase))]
for i,bz in enumerate(bzcase):
# Set new solenoid parameter
M.reconfigure(point_A,{'B': bz})
# clone beam parameter of point S
S3 = S2.clone()
# Simulate from A to B and store results
end_B = len(M)
resultsB[i] = M.propagate(S3, point_A, end_B, observe = range(end_B))
# extract from results
pos_A = [resultsA[i][1].pos for i in range(point_A-1)]
x_A,y_A = np.array([[resultsA[i][1].moment0_env[j] for i in range(point_A-1)] for j in [0,2]])
xrms_A,yrms_A = np.array([[resultsA[i][1].moment0_rms[j] for i in range(point_A-1)] for j in [0,2]])
dlen = len(resultsB)
pos_B, x_B, y_B, xrms_B, yrms_B = [[[] for _ in range(dlen)] for _ in range(5)]
for k in range(3):
clen = len(resultsB[k])
pos_B[k] = [resultsB[k][i][1].pos for i in range(clen)]
x_B[k], y_B[k] = np.array([[resultsB[k][i][1].moment0_env[j] for i in range(clen)] for j in [0,2]])
xrms_B[k], yrms_B[k] = np.array([[resultsB[k][i][1].moment0_rms[j] for i in range(clen)] for j in [0,2]])
# plot x centroid and rms
plt.plot(pos_A,x_A,'k')
plt.fill_between(pos_A, x_A+xrms_A, x_A-xrms_A, facecolor='k',alpha=0.2)
for k in range(dlen):
plt.plot(pos_B[k], x_B[k], color=colors[k], label='$B_z$='+str(bzcase[k]))
plt.fill_between(pos_B[k], x_B[k]+xrms_B[k], x_B[k]-xrms_B[k], facecolor=colors[k], alpha=0.2)
plt.title('Horizontal beam envelope history')
plt.xlabel('$z$ [m]')
plt.ylabel('$x$ [mm]')
plt.legend(loc='lower left', ncol = 3, fontsize=18)
plt.show()
# plot y centroid and rms
plt.plot(pos_A,y_A,'k')
plt.fill_between(pos_A, y_A+yrms_A, y_A-yrms_A, facecolor='k',alpha=0.2)
for k in range(dlen):
plt.plot(pos_B[k],y_B[k], color=colors[k], label='$B_z$='+str(bzcase[k]))
plt.fill_between(pos_B[k], y_B[k]+yrms_B[k], y_B[k]-yrms_B[k], facecolor=colors[k], alpha=0.2)
plt.title('Vertical beam envelope history')
plt.xlabel('$z$ [m]')
plt.ylabel('$y$ [mm]')
plt.legend(loc='lower left', ncol = 3, fontsize=18)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: - Plot energy history
Step2: - plot x, y centroid and rms of overall beam
Step3: - python object data are easy to manage
Step4: - plot beam envelope of each charge state
Step5: Advanced usage
Step6: - plot beam envelope for 3 Bz cases
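The envelope-extraction pattern used throughout this notebook can be shown without FLAME itself. MockState below is a hypothetical stand-in exposing the attribute names used above (pos, moment0_env, moment0_rms) so the centroid ± rms bookkeeping stands on its own:

```python
import numpy as np

# Hypothetical stand-in for a FLAME beam state returned by propagate().
class MockState:
    def __init__(self, pos, moment0_env, moment0_rms):
        self.pos = pos
        self.moment0_env = moment0_env
        self.moment0_rms = moment0_rms

# Mocked (index, state) pairs, mimicking the structure of `results` above.
results = [(i, MockState(0.5 * i,
                         np.array([1.0 * i, 0.0, 2.0 * i, 0.0]),
                         np.array([0.1 * i, 0.0, 0.2 * i, 0.0])))
           for i in range(5)]

pos = [s.pos for _, s in results]
x = np.array([s.moment0_env[0] for _, s in results])     # horizontal centroid
xrms = np.array([s.moment0_rms[0] for _, s in results])  # horizontal rms size
upper, lower = x + xrms, x - xrms                        # envelope bounds, as plotted above
```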
|
11,827
|
<ASSISTANT_TASK:>
Python Code:
a = {'x': 1, 'z': 3}
b = {'y': 2, 'z': 4}
# look up keys across both dicts (search a first; if not found, fall back to b)
from collections import ChainMap
c = ChainMap(a,b)
print(c['x'])
print(c['y'])
print(c['z'])
len(c)
list(c.keys())
list(c.values())
c['z'] = 10
c['w'] = 80
del c['x']
c_old = ChainMap(a,b)
c_old
type(c_old)
values = ChainMap()
values['x'] = 1
# Add a new mapping
values = values.new_child()
values['x'] = 2
# Add a new mapping
values = values.new_child()
values['x'] = 3
values
values['x']
# Discard last mapping
values = values.parents
values['x']
# Discard last mapping
values = values.parents
values['x']
values
a = {'x':1,'z':3}
b = {'y':2,'z':4}
merged = dict(b)
merged.update(a)
print(merged['x'],'\n\n',merged['y'],'\n\n',merged['z'])
a['x'] = 19
merged['x']
a = {'x': 1, 'z': 3}
b = {'y': 2, 'z': 4}
merged = ChainMap(a,b)
merged['x']
a['x'] = 43
merged['x'] # Notice change to merged dict
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A ChainMap takes several dicts and treats them logically as a single dict. The dicts are not actually merged together; the ChainMap class simply keeps an internal list holding these dicts and redefines a number of common dict operations to scan that list. Most dict operations work as usual.
Step2: If a key appears more than once (here 'z'), the value from the mapping where it first appears wins and is returned, so c['z'] always returns the corresponding value from dict a rather than the one from b.<br>Update and delete operations always affect the first dict in the list.
Step3: Note why some of the keys above can no longer be found even though nothing was reordered: the cells execute in timeline order, and the earlier deletion of 'x' mutated dict a, which c_old sees live.
Step4: ChainMap is very useful for modeling scoped values such as a language's variable scopes (globals, locals); it makes this kind of thing easy.
Step5: As an alternative to ChainMap, consider using update to merge the two dicts.
Step6: Although this works, it requires creating a completely separate dict object (possibly destroying the structure of the existing dicts), and if one of the source dicts is later updated, that update is not reflected in the merged dict.
Step7: ChainMap uses the original dicts and does not create a new one, so changes to the original dicts affect the ChainMap itself.
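The three behaviors described above — first-mapping precedence, writes going to the first dict, and the live view — can be checked in a few lines of standard-library Python:

```python
from collections import ChainMap

a = {'x': 1, 'z': 3}
b = {'y': 2, 'z': 4}
c = ChainMap(a, b)

first_z = c['z']   # duplicate key: the first mapping (a) wins
c['w'] = 80        # writes always go to the first dict
a['x'] = 99        # ChainMap is a live view, not a merged copy
live_x = c['x']
```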
|
11,828
|
<ASSISTANT_TASK:>
Python Code:
from crpropa import *
## settings for MHD model (must be set according to model)
filename_bfield = "clues_primordial.dat" ## filename of the magnetic field
gridOrigin = Vector3d(0,0,0) ## origin of the 3D data, preferably at boxOrigin
gridSize = 1024 ## size of uniform grid in data points
size = 249.827*Mpc ## physical edgelength of volume in Mpc
b_factor = 1. ## global renormalization factor for the field
## settings of simulation
boxOrigin = Vector3d( 0, 0, 0,) ## origin of the full box of the simulation
boxSize = Vector3d( size, size, size ) ## end of the full box of the simulation
## settings for computation
minStep = 10.*kpc ## minimum length of single step of calculation
maxStep = 4.*Mpc ## maximum length of single step of calculation
tolerance = 1e-2 ## tolerance for error in iterative calculation of propagation step
spacing = size/(gridSize) ## resolution, physical size of single cell
m = ModuleList()
## instead of computing propagation without Lorentz deflection via
# m.add(SimplePropagation(minStep,maxStep))
## initiate grid to hold field values
vgrid = Grid3f( gridOrigin, gridSize, spacing )
## load values to the grid
loadGrid( vgrid, filename_bfield, b_factor )
## use grid as magnetic field
bField = MagneticFieldGrid( vgrid )
## add propagation module to the simulation to activate deflection in supplied field
m.add(PropagationCK( bField, tolerance, minStep, maxStep))
#m.add(DeflectionCK( bField, tolerance, minStep, maxStep)) ## this was used in older versions of CRPropa
m.add( PeriodicBox( boxOrigin, boxSize ) )
m.add( MaximumTrajectoryLength( 400*Mpc ) )
source = Source()
source.add( SourceUniformBox( boxOrigin, boxSize ))
filename_density = "mass-density_clues.dat" ## filename of the density field
source = Source()
## initialize grid to hold field values
mgrid = ScalarGrid( gridOrigin, gridSize, spacing )
## load values to grid
loadGrid( mgrid, filename_density )
## add source module to simulation
source.add( SourceDensityGrid( mgrid ) )
import numpy as np
filename_halos = 'clues_halos.dat'
# read data from file
data = np.loadtxt(filename_halos, unpack=True, skiprows=39)
sX = data[0]
sY = data[1]
sZ = data[2]
mass_halo = data[5]
## find only those mass halos inside the provided volume (see Hackstein et al. 2018 for more details)
Xdown= sX >= 0.25
Xup= sX <= 0.75
Ydown= sY >= 0.25
Yup= sY <= 0.75
Zdown= sZ >= 0.25
Zup= sZ <= 0.75
insider= Xdown*Xup*Ydown*Yup*Zdown*Zup
## transform relative positions to physical positions within given grid
sX = (sX[insider]-0.25)*2*size
sY = (sY[insider]-0.25)*2*size
sZ = (sZ[insider]-0.25)*2*size
## collect all sources in the multiple sources container
smp = SourceMultiplePositions()
for i in range(0,len(sX)):
pos = Vector3d( sX[i], sY[i], sZ[i] )
smp.add( pos, 1. )
## add collected sources
source = Source()
source.add( smp )
## use isotropic emission from all sources
source.add( SourceIsotropicEmission() )
## set particle type to be injected
A, Z = 1, 1 # proton
source.add( SourceParticleType( nucleusId(A,Z) ) )
## set injected energy spectrum
Emin, Emax = 1*EeV, 1000*EeV
specIndex = -1
source.add( SourcePowerLawSpectrum( Emin, Emax, specIndex ) )
filename_output = 'data/output_MW.txt'
obsPosition = Vector3d(0.5*size,0.5*size,0.5*size) # position of observer, MW is in center of constrained simulations
obsSize = 800*kpc ## physical size of observer sphere
## initialize observer that registers particles that enter into sphere of given size around its position
obs = Observer()
obs.add( ObserverSmallSphere( obsPosition, obsSize ) )
## write registered particles to output file
obs.onDetection( TextOutput( filename_output ) )
## choose to not further follow particles paths once detected
obs.setDeactivateOnDetection(True)
## add observer to module list
m.add(obs)
N = 1000
m.showModules() ## optional, see summary of loaded modules
m.setShowProgress(True) ## optional, see progress during runtime
m.run(source, N, True) ## perform simulation with N particles injected from source
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: to make use of periodicity of the provided data grid, use
Step2: to not follow particles forever, use
Step3: Uniform injection
Step4: Injection following density field
Step5: Mass Halo injection
Step6: additional source properties
Step7: Observer
Step8: finally run the simulation by
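The idea behind SourceDensityGrid — injecting sources with probability proportional to a density grid — can be sketched without CRPropa. This is a small numpy illustration under that assumption, not the CRPropa implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
density = np.zeros((4, 4))
density[1, 2] = 3.0   # overdense cell, should collect ~75% of the sources
density[0, 0] = 1.0
p = (density / density.sum()).ravel()

# Draw source cells with probability proportional to the density.
cells = rng.choice(p.size, size=1000, p=p)
iy, ix = np.unravel_index(cells, density.shape)
frac_dense = float(np.mean((iy == 1) & (ix == 2)))
```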
|
11,829
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
def random_line(m, b, sigma, size=10):
Create a line y = m*x + b + N(0,sigma**2) between x=[-1.0,1.0]
Parameters
----------
m : float
The slope of the line.
b : float
The y-intercept of the line.
sigma : float
The standard deviation of the y direction normal distribution noise.
size : int
The number of points to create for the line.
Returns
-------
x : array of floats
The array of x values for the line with `size` points.
y : array of floats
The array of y values for the lines with `size` points.
x=np.linspace(-1.0,1.0,size)
if sigma==0.0: #worked with Jack Porter to find N(o,sigma) and to work out sigma 0.0 case also explained to him list comprehension
y=np.array([i*m+b for i in x]) #creates an array of y values
    else:
        # N=1/(sigma*np.pi**.5)*np.exp(-(x**2)/(2*sigma**2)) #incorrectly thought this would need to be the N(0,sigma)
        y=np.array([i*m+b+np.random.normal(0,sigma) for i in x]) #sigma is already the standard deviation, so pass it to np.random.normal directly; creates a noisy y value for each x
return x,y
# plt.plot(x,y,'b' )
# plt.box(False)
# plt.axvline(x=0,linewidth=.2,color='k')
# plt.axhline(y=0,linewidth=.2,color='k')
# ax=plt.gca()
# ax.get_xaxis().tick_bottom()
# ax.get_yaxis().tick_left()
m = 0.0; b = 1.0; sigma=0.0; size=3
x, y = random_line(m, b, sigma, size)
assert len(x)==len(y)==size
assert list(x)==[-1.0,0.0,1.0]
assert list(y)==[1.0,1.0,1.0]
sigma = 1.0
m = 0.0; b = 0.0
size = 500
x, y = random_line(m, b, sigma, size)
assert np.allclose(np.mean(y-m*x-b), 0.0, rtol=0.1, atol=0.1)
assert np.allclose(np.std(y-m*x-b), sigma, rtol=0.1, atol=0.1)
def ticks_out(ax):
Move the ticks to the outside of the box.
ax.get_xaxis().set_tick_params(direction='out', width=1, which='both')
ax.get_yaxis().set_tick_params(direction='out', width=1, which='both')
def plot_random_line(m, b, sigma, size=10, color='red'):
Plot a random line with slope m, intercept b and size points.
x,y=random_line(m,b,sigma,size) #worked with Jack Porter, before neither of us reassigned x,y
plt.plot(x,y,color )
plt.box(False)
plt.axvline(x=0,linewidth=.2,color='k')
plt.axhline(y=0,linewidth=.2,color='k')
plt.xlim(-1.1,1.1)
plt.ylim(-10.0,10.0)
ax=plt.gca()
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
plt.xlabel('x')
plt.ylabel('y')
plt.title('Line w/ Gaussian Noise')
plot_random_line(5.0, -1.0, 2.0, 50)
assert True # use this cell to grade the plot_random_line function
interact(plot_random_line, m=(-10.0,10.0,0.1),b=(-5.0,5.0,0.1),sigma=(0.0,5.0,0.1),size=(10,100,10), color={'red':'r','green':'g','blue':'b'})
assert True # use this cell to grade the plot_random_line interact
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Line with Gaussian noise
Step5: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function
Step6: Use interact to explore the plot_random_line function using
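The statistical contract that the grading cells above check can be verified directly. This sketch re-implements the noisy-line model (it does not call the notebook's random_line) and confirms the residuals have mean ≈ 0 and standard deviation ≈ sigma:

```python
import numpy as np

rng = np.random.default_rng(42)
m, b, sigma, size = 2.0, -1.0, 0.5, 5000
x = np.linspace(-1.0, 1.0, size)
y = m * x + b + rng.normal(0.0, sigma, size)   # y = m*x + b + N(0, sigma**2)
resid = y - (m * x + b)                        # isolate the noise term
```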
|
11,830
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from jyquickhelper import add_notebook_menu
add_notebook_menu()
import numpy as np
l = [1, 42, 18 ]
a = np.array(l)
print(a)
print(a.dtype)
print(a.ndim)
print(a.shape)
print(a.size)
a
b = np.array(l, dtype=float)
print(b)
print(b.dtype)
l[0] = 1.0
bb = np.array(l)
print(bb)
print(bb.dtype)
a[0] = 2.5
a
aa = a.astype(float)
aa[0] = 2.5
aa
c = np.array([range(5), range(5,10), range(5)])
print(c)
print("ndim:{}".format(c.ndim))
print("shape:{}".format(c.shape))
print(c.transpose()) #same as c.T
print("shape transposed:{}".format(c.T.shape))
print(c.flatten())
print("ndim flattened:{}".format(c.flatten().ndim))
print(c)
print(c[1,3])
print(c[1,:3])
print(c[:,4])
print(c[1], c[1].shape)
print(c[1][:3])
ar = np.arange(1,10) # arange is the equivalent of range but returns a numpy array
print('ar = ',ar)
idx = np.array([1, 4, 3, 2, 1, 7, 3])
print('idx = ',idx)
print("ar[idx] =", ar[idx])
print('######')
idx_bool = np.ones(ar.shape, dtype=bool)
idx_bool[idx] = False
print('idx_bool = ', idx_bool)
print('ar[idx_bool] = ', ar[idx_bool])
print('######', 'What happens in each of the following cases?', '######' )
try:
    print('ar[np.array([True, True, False, True])] = ', ar[np.array([True, True, False, True])])
except Exception as e:
    # the expression ar[[True, True, False, True]] raises an error since numpy 1.13
    print("Error", e)
list_python = range(10)
list_python[[True, True, False, True]] # raises an exception
list_python[[2, 3, 2, 7]] # raises an exception
d = np.arange(1, 6, 0.5)
d
e = d
e[[0,2, 4]] = - np.pi
e
d
d = np.linspace(1,5.5,10) # Side question: how does this differ from np.arange with a float step?
f = d.copy()
f[:4] = -np.e # this is Euler's number, not the array e ;)
print(f)
print(d)
print('d = ',d)
slice_of_d = d[2:5]
print('\nslice_of_d = ', slice_of_d)
slice_of_d[0] = np.nan
print('\nd = ', d)
mask = np.array([2, 3, 4])
fancy_indexed_subarray = d[mask]
print('\nfancy_indexed_subarray = ', fancy_indexed_subarray)
fancy_indexed_subarray[0] = -2
print('\nd = ', d)
g = np.arange(12)
print(g)
g.reshape((4,3))
g.reshape((4,3), order='F')
np.zeros_like(g)
np.ones_like(g)
np.concatenate((g, np.zeros_like(g))) # note the syntax: the input is a tuple!
gmat = g.reshape((1, len(g)))
np.concatenate((gmat, np.ones_like(gmat)), axis=0)
np.concatenate((gmat, np.ones_like(gmat)), axis=1)
np.hstack((g, g))
np.vstack((g,g))
#Exo1a-1:
#Exo1a-2:
#Exo1B:
#Exo1C:
a = np.ones((3,2))
b = np.arange(6).reshape(a.shape)
print(a)
b
print( (a + b)**2 )
print( np.abs( 3*a - b ) )
f = lambda x: np.exp(x-1)
print( f(b) )
b
1/b
c = np.ones(6)
c
b+c # raises an exception
c = np.arange(3).reshape((3,1))
print(b,c, sep='\n')
b+c
a = np.zeros((3,3))
a[:,0] = -1
b = np.array(range(3))
print(a + b)
print(b.shape)
print(b[:,np.newaxis].shape)
print(b[np.newaxis,:].shape)
print( a + b[np.newaxis,:] )
print( a + b[:,np.newaxis] )
print(b[:,np.newaxis]+b[np.newaxis,:])
print(b + b)
c = np.arange(10).reshape((2,-1)) # Note: -1 acts as a wildcard!
print(c)
print(c.sum())
print(c.sum(axis=0))
print(np.sum(c, axis=1))
print(np.all(c[0] < c[1]))
print(c.min(), c.max())
print(c.min(axis=1))
A = np.tril(np.ones((3,3)))
A
b = np.diag([1,2, 3])
b
print(A.dot(b))
print(A*b)
print(A.dot(A))
print(np.linalg.det(A))
inv_A = np.linalg.inv(A)
print(inv_A)
print(inv_A.dot(A))
x = np.linalg.solve(A, np.diag(b))
print(np.diag(b))
print(x)
print(A.dot(x))
np.linalg.eig(A)
np.linalg.eigvals(A)
m = np.matrix(' 1 2 3; 4 5 6; 7 8 9')
a = np.arange(1,10).reshape((3,3))
print(m)
print(a)
print(m[0], a[0])
print(m[0].shape, a[0].shape)
m * m
a * a
m * a # matrix has higher priority than arrays
print(m**2)
print(a**2)
m[0,0]= -1
print("det", np.linalg.det(m), "rank",np.linalg.matrix_rank(m))
print(m.I*m)
a[0,0] = -1
print("det", np.linalg.det(a), "rank",np.linalg.matrix_rank(a))
print(a.dot(np.linalg.inv(a)))
np.random.randn(4,3)
N = int(1e7)
from random import normalvariate
%timeit [normalvariate(0,1) for _ in range(N)]
%timeit np.random.randn(N)
def bowl_peak(x,y):
return x*np.exp(-x**2-y**2)+(x**2+y**2)/20
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
from matplotlib import cm #colormaps
min_val = -2
max_val = 2
fig = plt.figure()
ax = fig.gca(projection='3d')
x_axis = np.linspace(min_val,max_val,100)
y_axis = np.linspace(min_val,max_val,100)
X, Y = np.meshgrid(x_axis, y_axis, copy=False, indexing='xy')
Z = bowl_peak(X,Y)
#X, Y, Z = axes3d.get_test_data(0.05)
ax.plot_surface(X, Y, Z, rstride=5, cstride=5, alpha=0.2)
cset = ax.contour(X, Y, Z, zdir='z', offset=-0.5, cmap=cm.coolwarm)
cset = ax.contour(X, Y, Z, zdir='x', offset=min_val, cmap=cm.coolwarm)
cset = ax.contour(X, Y, Z, zdir='y', offset=max_val, cmap=cm.coolwarm)
ax.set_xlabel('X')
ax.set_xlim(min_val, max_val)
ax.set_ylabel('Y')
ax.set_ylim(min_val, max_val)
ax.set_zlabel('Z')
ax.set_zlim(-0.5, 0.5)
from scipy import optimize
x0 = np.array([-0.5, 0])
fun = lambda x: bowl_peak(x[0],x[1])
methods = [ 'Nelder-Mead', 'CG', 'BFGS', 'Powell', 'COBYLA', 'L-BFGS-B' ]
for m in methods:
optim_res = optimize.minimize(fun, x0, method=m)
print("---\nMethod:{}\n".format(m),optim_res, "\n")
for i in range(4):
optim_res = optimize.minimize(fun, x0, method='BFGS')
print("---\nMethod:{} - Test:{}\n".format(m,i),optim_res, "\n")
for m in methods:
print("Method:{}:".format(m))
%timeit optim_res = optimize.minimize(fun, x0, method=m)
print('############')
def shifted_scaled_bowlpeak(x,a,b,c):
return (x[0]-a)*np.exp(-((x[0]-a)**2+(x[1]-b)**2))+((x[0]-a)**2+(x[0]-b)**2)/c
a = 2
b = 3
c = 10
optim_res = optimize.minimize(shifted_scaled_bowlpeak, x0, args=(a,b,c), method='BFGS')
print(optim_res)
print('#######')
optim_res = optimize.minimize(lambda x:shifted_scaled_bowlpeak(x,a,b,c), x0, method='BFGS')
print(optim_res)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Numpy arrays
Step2: Creating an array
Step3: The dtype can be given explicitly when creating the array. Otherwise, Numpy selects the dtype automatically.
Step4: Assigning a float into an int array casts the float to int, and does not change the dtype of the array.
Step5: Casting to another type can be forced with astype
Step6: From a list of lists, we get a two-dimensional array.
Step7: Indexing, Slicing, Fancy indexing
Step8: Indexing of multidimensional arrays works with tuples.
Step9: If we do not use a pair on a 2d array, we get back a 1d array
Step10: We can also index with an array (or a python list) of booleans or integers (a mask). This is called fancy indexing. An integer mask selects the elements to extract through the list of their indices; repeating an element's index repeats that element in the extracted array.
Step11: Why is it called fancy indexing? Try indexing python lists the same way...
Step12: View versus Copy
Step13: An important point is that an array is not copied when it is assigned or sliced.
Step14: If we do not want to modify $d$ indirectly, we must work on a copy of $d$ (deep copy).
Step15: This point matters because it is a classic source of silent errors
Step16: Shape manipulation
Step17: By default, reshape enumerates in the C-language order (also called "row first"); we can specify that we want the Fortran order ("column first"). Users of Matlab and R are used to "column-first". See the wikipedia article
Step18: We can use -1 on one dimension; it acts as a wildcard
Step19: We can also concatenate or stack different arrays horizontally/vertically.
Step20: Exercise 1
Step21: Array manipulation and operations
Step22: Arithmetic operations with scalars, or between arrays, are performed element by element.
Step23: Note that division by zero does not raise an error but introduces the value inf
Step24: Broadcasting
Step25: The previous operation works because numpy performs what is called broadcasting of c
Step26: On the other hand, it can sometimes be useful to specify the dimension along which to broadcast; we then explicitly add a dimension
Step27: Reductions
Step28: Linear algebra
Step29: We saw that multiplications between arrays are performed element by element.
Step30: We can compute the inverse or the determinant of $A$
Step31: ... solve linear systems of the form $Ax = b$...
Step32: ... or obtain the eigenvalues of $A$.
Step33: Numpy Matrix
Step34: Matrix also overloads the * and ** operators to replace elementwise operations with matrix operations.
Step35: The syntax is lighter for doing matrix computations
Step36: Random number generation and statistics
Step37: To convince ourselves that numpy.random is more efficient than python's base random module, we draw a large number of standard Gaussian samples, in pure python and via numpy.
Step38: Exercise 2
Step39: We then look for an example in the matplotlib gallery to plot it
Step40: We see that the minimum lies near $[-\frac{1}{2}, 0]$. We will use this point to initialize the optimization.
Step41: We find a minimum of $-0.4052$ at $[-0.669, 0.000]$ for all the methods that converge. Note the exit message of 'CG', which means the gradient no longer varies enough; the point found is nevertheless the sought optimum. Also note the number of function evaluations (nfev) for each method, and the number of gradient evaluations (njev) for the gradient-based methods.
Step42: We will measure the computation time needed by each method.
Step43: We can also pass extra arguments to the function being optimized, for example the data when maximizing a log-likelihood. Here is an example
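The broadcasting rule discussed in Steps 24–26 can be summarized in one small example: a (3,1) column plus a (1,4) row broadcasts to a (3,4) outer sum with no explicit loop.

```python
import numpy as np

col = np.arange(3).reshape(3, 1)  # shape (3, 1)
row = np.arange(4).reshape(1, 4)  # shape (1, 4)
outer = col + row                 # broadcast to shape (3, 4): outer[i, j] = i + j
```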
|
11,831
|
<ASSISTANT_TASK:>
Python Code:
#load libraries
import pandas as pd
import numpy as np
#Supervised learning
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
#Load data set
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
cancer_df = pd.DataFrame(cancer.data)  # keep the Bunch in `cancer`; later cells use cancer.data / cancer.target
cancer_df.head()
#Split data set in train 75% and test 25%
X_train, X_test, y_train, y_test = train_test_split(
cancer.data, cancer.target, test_size=0.25, stratify=cancer.target, random_state=66)
print("X_train shape: {}".format(X_train.shape))
print("y_train shape: {}".format(y_train.shape))
print("X_test shape: {}".format(X_test.shape))
print("y_test shape: {}".format(y_test.shape))
list(cancer.target_names)
list(cancer.feature_names)
## Create an SVM classifier and train it on 75% of the data set.
svc =SVC(probability=True)
svc.fit(X_train, y_train)
## Create an SVM classifier and train it on 70% of the data set.
#clf = SVC(probability=True)
#clf.fit(X_train, y_train)
# Analyze accuracy of predictions on 25% of the holdout test sample.
classifier_score_test = svc.score(X_test, y_test)
classifier_score_train = svc.score(X_train, y_train)
print('The classifier accuracy on the test set is {:.2f}'.format(classifier_score_test))
print('The classifier accuracy on the training set is {:.2f}'.format(classifier_score_train))
print("Accuracy on training set: {:.2f}".format(svc.score(X_train, y_train)))
#print("Accuracy on test set: {:.2f}".format(svc.score(X_test, y_test)))
# import Matplotlib (scientific plotting library)
import matplotlib.pyplot as plt
# allow plots to appear within the notebook
%matplotlib inline
plt.plot(X_train.min(axis=0), 'o', label="min")
plt.plot(X_train.max(axis=0), '^', label="max")
plt.legend(loc=4)
plt.xlabel("Feature index")
plt.ylabel("Feature magnitude")
plt.yscale("log")
# compute the minimum value per feature on the training set
min_on_training = X_train.min(axis=0)
# compute the range of each feature (max - min) on the training set
range_on_training = (X_train - min_on_training).max(axis=0)
# subtract the min, and divide by range
# afterward, min=0 and max=1 for each feature
X_train_scaled = (X_train - min_on_training) / range_on_training
print("Minimum for each feature\n{}".format(X_train_scaled.min(axis=0)))
print("Maximum for each feature\n {}".format(X_train_scaled.max(axis=0)))
# use THE SAME transformation on the test set,
# using min and range of the training set (see Chapter 3 for details)
X_test_scaled = (X_test - min_on_training) / range_on_training
svc = SVC()
svc.fit(X_train_scaled, y_train)
print("Accuracy on training set: {:.3f}".format(
svc.score(X_train_scaled, y_train)))
print("Accuracy on test set: {:.3f}".format(svc.score(X_test_scaled, y_test)))
from sklearn.preprocessing import MinMaxScaler
# preprocessing using 0-1 scaling
scaler = MinMaxScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# learning an SVM on the scaled training data
svm =SVC()
svm.fit(X_train_scaled, y_train)
# scoring on the scaled test set
print("Scaled test set accuracy: {:.2f}".format(
svm.score(X_test_scaled, y_test)))
# preprocessing using zero mean and unit variance scaling
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# learning an SVM on the scaled training data
svm.fit(X_train_scaled, y_train)
# scoring on the scaled test set
print("SVM test accuracy: {:.2f}".format(svm.score(X_test_scaled, y_test)))
from sklearn.grid_search import GridSearchCV
from sklearn import cross_validation
from sklearn.cross_validation import KFold, cross_val_score
from sklearn.preprocessing import StandardScaler
# Test options and evaluation metric
num_folds = 10
num_instances = len(X_train)
seed = 7
scoring = 'accuracy'
# Tune scaled SVM
scaler = StandardScaler().fit(X_train)
rescaledX = scaler.transform(X_train)
c_values = [0.1, 0.3, 0.5, 0.7, 0.9, 1.0, 1.3, 1.5, 1.7, 2.0]
kernel_values = [ 'linear' , 'poly' , 'rbf' , 'sigmoid' ]
param_grid = dict(C=c_values, kernel=kernel_values)
model = SVC()
kfold = cross_validation.KFold(n=num_instances, n_folds=num_folds, random_state=seed)
grid = GridSearchCV(estimator=model, param_grid=param_grid, scoring=scoring, cv=kfold)
grid_result = grid.fit(rescaledX, y_train)
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
for params, mean_score, scores in grid_result.grid_scores_:
print("%f (%f) with: %r" % (scores.mean(), scores.std(), params))
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import svm, datasets
def decision_plot(X_train, y_train, n_neighbors, weights):
h = .02 # step size in the mesh
Xtrain = X_train[:, :2] # we only take the first two features.
#================================================================
# Create color maps
#================================================================
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
#================================================================
# we create an instance of SVM and fit out data.
# We do not scale ourdata since we want to plot the support vectors
#================================================================
C = 1.0 # SVM regularization parameter
svm = SVC(kernel='linear', random_state=0, gamma=0.2, C=C).fit(Xtrain, y_train)
rbf_svc = SVC(kernel='rbf', gamma=0.7, C=C).fit(Xtrain, y_train)
poly_svc = SVC(kernel='poly', degree=3, C=C).fit(Xtrain, y_train)
#lin_svc = svm.LinearSVC(C=C).fit(Xtrain, y_train)
#================================================================
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, m_max]x[y_min, y_max].
#================================================================
x_min, x_max = Xtrain[:, 0].min() - 1, Xtrain[:, 0].max() + 1
y_min, y_max = Xtrain[:, 1].min() - 1, Xtrain[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1),
np.arange(y_min, y_max, 0.1))
Z = svm.predict(np.c_[xx.ravel(), yy.ravel()])
#================================================================
# Put the result into a color plot
#================================================================
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot also the training points
plt.scatter(Xtrain[:, 0], Xtrain[:, 1], c=y_train, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
#plt.title("2-Class classification (k = %i, weights = '%s')"
# % (n_neighbors, weights))
plt.show()
%matplotlib inline
plt.rcParams['figure.figsize'] = (15, 9)
plt.rcParams['axes.titlesize'] = 'large'
# create a mesh to plot in
x_min, x_max = Xtrain[:, 0].min() - 1, Xtrain[:, 0].max() + 1
y_min, y_max = Xtrain[:, 1].min() - 1, Xtrain[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1),
np.arange(y_min, y_max, 0.1))
# title for the plots
titles = ['SVC with linear kernel',
'LinearSVC (linear kernel)',
'SVC with RBF kernel',
'SVC with polynomial (degree 3) kernel']
for i, clf in enumerate((svm, rbf_svc, poly_svc)):
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
plt.subplot(2, 2, i + 1)
plt.subplots_adjust(wspace=0.4, hspace=0.4)
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.coolwarm, alpha=0.8)
# Plot also the training points
plt.scatter(Xtrain[:, 0], Xtrain[:, 1], c=y_train, cmap=plt.cm.coolwarm)
plt.xlabel('mean radius')
plt.ylabel('mean texture')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xticks(())
plt.yticks(())
plt.title(titles[i])
plt.show()
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import StandardScaler
# prepare the model
scaler = StandardScaler().fit(X_train)
rescaledX = scaler.transform(X_train)
model = SVC(kernel='linear', random_state=0, gamma=0.2, C=0.3, probability=True)  # probability=True is required for predict_proba below
model.fit(rescaledX, y_train)
# estimate accuracy on validation dataset
rescaledtestX = scaler.transform(X_test)
predictions = model.predict(rescaledtestX)
n_classes = cancer.target_names.shape[0]
print(accuracy_score(y_test, predictions))
print(confusion_matrix(y_test, predictions, labels=range(n_classes)))
print(classification_report(y_test, predictions, target_names=cancer.target_names ))
#print the first 25 true and predicted responses
# print('True:', y_test[0:25])
print('Pred:', predictions[0:25])
#print the first 10 predicted response
svm.predict(X_test_scaled)[0:10]
# Plot the receiver operating characteristic curve (ROC).
from sklearn.metrics import roc_curve, auc
plt.figure(figsize=(20,10))
probas_ = model.predict_proba(rescaledtestX)  # use the scaled test set the model was trained against
fpr, tpr, thresholds = roc_curve(y_test, probas_[:, 1])
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, lw=1, label='ROC fold (area = %0.2f)' % (roc_auc))
plt.plot([0, 1], [0, 1], '--', color=(0.6, 0.6, 0.6), label='Random')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
#plt.axes().set_aspect(1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The model overfits quite substantially, with a perfect score on the training set and only 63% accuracy on the test set.
Step2: Preprocessing data for SVM -Rescaling the data
Step3: MinMaxScaler
Step4: Tuning the parameters
Step5: Scaling the data made a huge difference! Now we are actually in an underfitting regime, where training and test set performance are quite similar but less close to 100% accuracy. From here, we can try increasing either C or gamma to fit a more complex model. For example
Step6:
Step7: Metrics computed from Confusion matrix
Step8: Receiver operating characteristic (ROC) curve.
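As an aside, the scale-then-tune SVM pattern this row demonstrates can be sketched in a few self-contained lines against the modern `sklearn.model_selection` API (the row's code uses the long-deprecated `sklearn.grid_search` / `sklearn.cross_validation` modules). The synthetic data and parameter grid below are illustrative choices, not taken from the notebook:

```python
# Minimal sketch of scaling + grid search for an SVM (illustrative data/grid).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Putting the scaler inside a Pipeline keeps test-fold statistics from
# leaking into the cross-validation folds during the grid search.
pipe = Pipeline([("scale", StandardScaler()), ("svm", SVC())])
grid = GridSearchCV(pipe, {"svm__C": [0.1, 1.0, 10.0]}, cv=3)
grid.fit(X_train, y_train)
print(grid.best_params_, round(grid.score(X_test, y_test), 3))
```

The pipeline variant is the usual fix for the subtle leakage in the row above, where `StandardScaler` is fit once on the full training set before cross-validation.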
|
11,832
|
<ASSISTANT_TASK:>
Python Code:
import io, os, sys, types
from IPython import get_ipython
from IPython.nbformat import current
from IPython.core.interactiveshell import InteractiveShell
def find_notebook(fullname, path=None):
"""find a notebook, given its fully qualified name and an optional path

This turns "foo.bar" into "foo/bar.ipynb"
and tries turning "Foo_Bar" into "Foo Bar" if Foo_Bar
does not exist.
"""
name = fullname.rsplit('.', 1)[-1]
if not path:
path = ['']
for d in path:
nb_path = os.path.join(d, name + ".ipynb")
if os.path.isfile(nb_path):
return nb_path
# let import Notebook_Name find "Notebook Name.ipynb"
nb_path = nb_path.replace("_", " ")
if os.path.isfile(nb_path):
return nb_path
class NotebookLoader(object):
"""Module Loader for IPython Notebooks"""
def __init__(self, path=None):
self.shell = InteractiveShell.instance()
self.path = path
def load_module(self, fullname):
"""import a notebook as a module"""
path = find_notebook(fullname, self.path)
print ("importing IPython notebook from %s" % path)
# load the notebook object
with io.open(path, 'r', encoding='utf-8') as f:
nb = current.read(f, 'json')
# create the module and add it to sys.modules
# if name in sys.modules:
# return sys.modules[name]
mod = types.ModuleType(fullname)
mod.__file__ = path
mod.__loader__ = self
mod.__dict__['get_ipython'] = get_ipython
sys.modules[fullname] = mod
# extra work to ensure that magics that would affect the user_ns
# actually affect the notebook module's ns
save_user_ns = self.shell.user_ns
self.shell.user_ns = mod.__dict__
try:
for cell in nb.worksheets[0].cells:
if cell.cell_type == 'code' and cell.language == 'python':
# transform the input to executable Python
code = self.shell.input_transformer_manager.transform_cell(cell.input)
# run the code in the module
exec(code, mod.__dict__)
finally:
self.shell.user_ns = save_user_ns
return mod
class NotebookFinder(object):
"""Module finder that locates IPython Notebooks"""
def __init__(self):
self.loaders = {}
def find_module(self, fullname, path=None):
nb_path = find_notebook(fullname, path)
if not nb_path:
return
key = path
if path:
# lists aren't hashable
key = os.path.sep.join(path)
if key not in self.loaders:
self.loaders[key] = NotebookLoader(path)
return self.loaders[key]
sys.meta_path.append(NotebookFinder())
ls nbpackage
from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import HtmlFormatter
from IPython.display import display, HTML
formatter = HtmlFormatter()
lexer = PythonLexer()
# publish the CSS for pygments highlighting
display(HTML("""
<style type='text/css'>
%s
</style>
""" % formatter.get_style_defs()))
def show_notebook(fname):
"""display a short summary of the cells of a notebook"""
with io.open(fname, 'r', encoding='utf-8') as f:
nb = current.read(f, 'json')
html = []
for cell in nb.worksheets[0].cells:
html.append("<h4>%s cell</h4>" % cell.cell_type)
if cell.cell_type == 'code':
html.append(highlight(cell.input, lexer, formatter))
else:
html.append("<pre>%s</pre>" % cell.source)
display(HTML('\n'.join(html)))
show_notebook(os.path.join("nbpackage", "mynotebook.ipynb"))
from nbpackage import mynotebook
mynotebook.foo()
mynotebook.has_ip_syntax()
ls nbpackage/nbs
show_notebook(os.path.join("nbpackage", "nbs", "other.ipynb"))
from nbpackage.nbs import other
other.bar(5)
import shutil
from IPython.utils.path import get_ipython_package_dir
utils = os.path.join(get_ipython_package_dir(), 'utils')
shutil.copy(os.path.join("nbpackage", "mynotebook.ipynb"),
os.path.join(utils, "inside_ipython.ipynb")
)
from IPython.utils import inside_ipython
inside_ipython.whatsmyname()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Import hooks typically take the form of two objects
Step5: Notebook Loader
Step7: The Module Finder
Step8: Register the hook
Step9: After this point, my notebooks should be importable.
Step12: So I should be able to import nbimp.mynotebook.
Step13: So my notebook has a heading cell and some code cells,
Step14: Hooray, it imported! Does it work?
Step15: Hooray again!
Step16: Notebooks in packages
Step17: Note that the __init__.py is necessary for nb to be considered a package,
Step18: So now we have importable notebooks, from both the local directory and inside packages.
Step19: and import the notebook from IPython.utils
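As an aside, the `sys.meta_path` mechanism this row builds on can be shown with a tiny self-contained hook written against the modern `importlib` API (the `IPython.nbformat` machinery above is long deprecated). The "virtual module" and its source string below are invented purely for illustration:

```python
# Minimal meta-path import hook: import a module whose source lives in a dict.
import importlib.abc
import importlib.util
import sys

SOURCES = {"greeting_mod": "def hello():\n    return 'hi'\n"}

class DictFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    def find_spec(self, fullname, path, target=None):
        if fullname in SOURCES:
            return importlib.util.spec_from_loader(fullname, self)
        return None  # let the other finders handle everything else

    def create_module(self, spec):
        return None  # fall back to default module creation

    def exec_module(self, module):
        # execute the stored source in the fresh module's namespace
        exec(SOURCES[module.__name__], module.__dict__)

sys.meta_path.append(DictFinder())
import greeting_mod
print(greeting_mod.hello())  # prints: hi
```

The notebook loader in this row follows the same finder/loader split, except that `exec_module`'s job is done cell by cell after converting each code cell to executable Python.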
|
11,833
|
<ASSISTANT_TASK:>
Python Code:
from csp import *
%psource CSP
s = UniversalDict(['R','G','B'])
s[5]
%psource different_values_constraint
%pdoc parse_neighbors
%psource MapColoringCSP
australia, usa, france
%psource queen_constraint
%psource NQueensCSP
eight_queens = NQueensCSP(8)
import copy
class InstruCSP(CSP):
def __init__(self, variables, domains, neighbors, constraints):
super().__init__(variables, domains, neighbors, constraints)
self.assingment_history = []
def assign(self, var, val, assignment):
super().assign(var,val, assignment)
self.assingment_history.append(copy.deepcopy(assignment))
def unassign(self, var, assignment):
super().unassign(var,assignment)
self.assingment_history.append(copy.deepcopy(assignment))
def make_instru(csp):
return InstruCSP(csp.variables, csp.domains, csp.neighbors,
csp.constraints)
neighbors = {
0: [6, 11, 15, 18, 4, 11, 6, 15, 18, 4],
1: [12, 12, 14, 14],
2: [17, 6, 11, 6, 11, 10, 17, 14, 10, 14],
3: [20, 8, 19, 12, 20, 19, 8, 12],
4: [11, 0, 18, 5, 18, 5, 11, 0],
5: [4, 4],
6: [8, 15, 0, 11, 2, 14, 8, 11, 15, 2, 0, 14],
7: [13, 16, 13, 16],
8: [19, 15, 6, 14, 12, 3, 6, 15, 19, 12, 3, 14],
9: [20, 15, 19, 16, 15, 19, 20, 16],
10: [17, 11, 2, 11, 17, 2],
11: [6, 0, 4, 10, 2, 6, 2, 0, 10, 4],
12: [8, 3, 8, 14, 1, 3, 1, 14],
13: [7, 15, 18, 15, 16, 7, 18, 16],
14: [8, 6, 2, 12, 1, 8, 6, 2, 1, 12],
15: [8, 6, 16, 13, 18, 0, 6, 8, 19, 9, 0, 19, 13, 18, 9, 16],
16: [7, 15, 13, 9, 7, 13, 15, 9],
17: [10, 2, 2, 10],
18: [15, 0, 13, 4, 0, 15, 13, 4],
19: [20, 8, 15, 9, 15, 8, 3, 20, 3, 9],
20: [3, 19, 9, 19, 3, 9]
}
coloring_problem = MapColoringCSP('RGBY', neighbors)
coloring_problem1 = make_instru(coloring_problem)
result = backtracking_search(coloring_problem1)
result # A dictonary of assingments.
coloring_problem1.nassigns
len(coloring_problem1.assingment_history)
%psource mrv
%psource num_legal_values
%psource CSP.nconflicts
%psource lcv
solve_simple = copy.deepcopy(usa)
solve_parameters = copy.deepcopy(usa)
backtracking_search(solve_simple)
backtracking_search(solve_parameters, order_domain_values=lcv, select_unassigned_variable=mrv, inference=mac )
solve_simple.nassigns
solve_parameters.nassigns
%matplotlib inline
import networkx as nx
import matplotlib.pyplot as plt
import matplotlib
import time
from collections import defaultdict  # used below for default node colors
def make_update_step_function(graph, instru_csp):
def draw_graph(graph):
# create networkx graph
G=nx.Graph(graph)
# draw graph
pos = nx.spring_layout(G,k=0.15)
return (G, pos)
G, pos = draw_graph(graph)
def update_step(iteration):
# here iteration is the index of the assingment_history we want to visualize.
current = instru_csp.assingment_history[iteration]
# We convert the particular assingment to a default dict so that the color for nodes which
# have not been assigned defaults to black.
current = defaultdict(lambda: 'Black', current)
# Now we use colors in the list and default to black otherwise.
colors = [current[node] for node in G.node.keys()]
# Finally drawing the nodes.
nx.draw(G, pos, node_color=colors, node_size=500)
labels = {label:label for label in G.node}
# Labels shifted by offset so as to not overlap nodes.
label_pos = {key:[value[0], value[1]+0.03] for key, value in pos.items()}
nx.draw_networkx_labels(G, label_pos, labels, font_size=20)
# show graph
plt.show()
return update_step # <-- this is a function
def make_visualize(slider):
''' Takes an input a slider and returns
callback function for timer and animation
'''
def visualize_callback(Visualize, time_step):
if Visualize is True:
for i in range(slider.min, slider.max + 1):
slider.value = i
time.sleep(float(time_step))
return visualize_callback
step_func = make_update_step_function(neighbors, coloring_problem1)
matplotlib.rcParams['figure.figsize'] = (18.0, 18.0)
import ipywidgets as widgets
from IPython.display import display
iteration_slider = widgets.IntSlider(min=0, max=len(coloring_problem1.assingment_history)-1, step=1, value=0)
w=widgets.interactive(step_func,iteration=iteration_slider)
display(w)
visualize_callback = make_visualize(iteration_slider)
visualize_button = widgets.ToggleButton(description="Visualize", value=False)
time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])
a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)
display(a)
def label_queen_conflicts(assingment,grid):
''' Mark grid with queens that are under conflict. '''
for col, row in assingment.items(): # check each queen for conflict
row_conflicts = {temp_col:temp_row for temp_col,temp_row in assingment.items()
if temp_row == row and temp_col != col}
up_conflicts = {temp_col:temp_row for temp_col,temp_row in assingment.items()
if temp_row+temp_col == row+col and temp_col != col}
down_conflicts = {temp_col:temp_row for temp_col,temp_row in assingment.items()
if temp_row-temp_col == row-col and temp_col != col}
# Now marking the grid.
for col, row in row_conflicts.items():
grid[col][row] = 3
for col, row in up_conflicts.items():
grid[col][row] = 3
for col, row in down_conflicts.items():
grid[col][row] = 3
return grid
def make_plot_board_step_function(instru_csp):
'''ipywidgets interactive function supports
single parameter as input. This function
creates and return such a function by taking
in input other parameters.
'''
n = len(instru_csp.variables)
def plot_board_step(iteration):
''' Add Queens to the Board.'''
data = instru_csp.assingment_history[iteration]
grid = [[(col+row+1)%2 for col in range(n)] for row in range(n)]
grid = label_queen_conflicts(data, grid) # Update grid with conflict labels.
# color map of fixed colors
cmap = matplotlib.colors.ListedColormap(['white','lightsteelblue','red'])
bounds=[0,1,2,3] # 0 for white 1 for black 2 onwards for conflict labels (red).
norm = matplotlib.colors.BoundaryNorm(bounds, cmap.N)
fig = plt.imshow(grid, interpolation='nearest', cmap = cmap,norm=norm)
plt.axis('off')
fig.axes.get_xaxis().set_visible(False)
fig.axes.get_yaxis().set_visible(False)
# Place the Queens Unicode Symbol
for col, row in data.items():
fig.axes.text(row, col, u"\u265B", va='center', ha='center', family='Dejavu Sans', fontsize=32)
plt.show()
return plot_board_step
twelve_queens_csp = NQueensCSP(12)
backtracking_instru_queen = make_instru(twelve_queens_csp)
result = backtracking_search(backtracking_instru_queen)
backtrack_queen_step = make_plot_board_step_function(backtracking_instru_queen) # Step Function for Widgets
matplotlib.rcParams['figure.figsize'] = (8.0, 8.0)
matplotlib.rcParams['font.family'].append(u'Dejavu Sans')
iteration_slider = widgets.IntSlider(min=0, max=len(backtracking_instru_queen.assingment_history)-1, step=1, value=0)
w=widgets.interactive(backtrack_queen_step,iteration=iteration_slider)
display(w)
visualize_callback = make_visualize(iteration_slider)
visualize_button = widgets.ToggleButton(description="Visualize", value=False)
time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])
a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)
display(a)
conflicts_instru_queen = make_instru(twelve_queens_csp)
result = min_conflicts(conflicts_instru_queen)
conflicts_step = make_plot_board_step_function(conflicts_instru_queen)
iteration_slider = widgets.IntSlider(min=0, max=len(conflicts_instru_queen.assingment_history)-1, step=1, value=0)
w=widgets.interactive(conflicts_step,iteration=iteration_slider)
display(w)
visualize_callback = make_visualize(iteration_slider)
visualize_button = widgets.ToggleButton(desctiption = "Visualize", value = False)
time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])
a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)
display(a)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Review
Step2: The _ init _ method parameters specify the CSP. Variables can be passed as a list of strings or integers. Domains are passed as a dict whose keys specify the variables and whose values specify the domains. If variables are passed as an empty list, they are extracted from the keys of the domain dictionary. Neighbors is a dict of variables that essentially describes the constraint graph: each variable key has as its value a list of the variables it is constrained with. The constraints parameter should be a function f(A, a, B, b) that returns true if neighbors A, B satisfy the constraint when they have values A=a, B=b. We have additional attributes like nassigns, which is incremented each time an assignment is made by the assign method. You can read more about the methods and parameters in the class docstring. We will talk more about them as we encounter their use. Let us jump to an example.
Step3: For our CSP we also need to define a constraint function f(A, a, B, b). Here, what we need is that neighboring variables must not have the same color. This is defined in the function different_values_constraint of the module.
Step4: The CSP class takes neighbors in the form of a Dict. The module specifies a simple helper function named parse_neighbors which takes input in the form of strings and returns a Dict of the form compatible with the CSP class.
Step5: The MapColoringCSP function creates and returns a CSP with the above constraint function and states. The variables are the keys of the neighbors dict and the constraint is the one specified by the different_values_constraint function. australia, usa and france are three CSPs that have been created using MapColoringCSP. australia corresponds to Figure 6.1 in the book.
Step6: NQueens
Step7: The NQueensCSP method implements methods that support solving the problem via min_conflicts which is one of the techniques for solving CSPs. Because min_conflicts hill climbs the number of conflicts to solve the CSP assign and unassign are modified to record conflicts. More details about the structures rows, downs, ups which help in recording conflicts are explained in the docstring.
Step8: The _ init _ method takes only one parameter n the size of the problem. To create an instance we just pass the required n into the constructor.
Step9: Helper Functions
Step10: Next, we define make_instru which takes an instance of CSP and returns an InstruCSP instance.
Step11: We will now use a graph defined as a dictionary for plotting purposes in our Graph Coloring Problem. The keys are the nodes and their corresponding values are the nodes they are connected to.
Step12: Now we are ready to create an InstruCSP instance for our problem. We are doing this for an instance of MapColoringProblem class which inherits from the CSP Class. This means that our make_instru function will work perfectly for it.
Step13: Backtracking Search
Step14: Let us also check the number of assignments made.
Step15: Now let us check the total number of assignments and unassignments, which is the length of our assignment history.
Step16: Now let us explore the optional keyword arguments that the backtracking_search function takes. These optional arguments help speed up the assignment further. Along with these, we will also point out methods in the CSP class that help make this work.
Step17: Another ordering-related parameter, order_domain_values, governs the value ordering. Here we select the Least Constraining Value, which is implemented by the function lcv. The idea is to select the value which rules out the fewest values in the remaining variables. The intuition behind selecting the lcv is that it leaves a lot of freedom to assign values later. The combination of mrv and lcv makes sense because we need to assign every variable anyway, but for values we might as well first try the ones most likely to succeed; so for variables, we face the hard ones first.
Step18: Finally, the third parameter inference can make use of one of the two techniques called Arc Consistency or Forward Checking. The details of these methods can be found in the Section 6.3.2 of the book. In short the idea of inference is to detect the possible failure before it occurs and to look ahead to not make mistakes. mac and forward_checking implement these two techniques. The CSP methods support_pruning, suppose, prune, choices, infer_assignment and restore help in using these techniques. You can know more about these by looking up the source code.
Step19: Graph Coloring Visualization
Step20: The ipython widgets we will be using require the plots in the form of a step function such that there is a graph corresponding to each value. We define the make_update_step_function which returns such a function. It takes as inputs the neighbors/graph along with an instance of the InstruCSP. This will be clearer with the example below. If this sounds confusing, do not worry; this is not part of the core material and our only goal is to help you visualize how the process works.
Step21: Finally let us plot our problem. We first use the function above to obtain a step function.
Step22: Next we set the canvas size.
Step23: Finally, our plot using the ipywidgets slider and matplotlib. You can move the slider to experiment and see the coloring change. It is also possible to move the slider using arrow keys or to jump to the value by directly editing the number with a double click. The Visualize Button will automatically animate the slider for you. The Extra Delay Box allows you to set a time delay of up to one second for each time step.
Step24: NQueens Visualization
Step25: Now let us visualize a solution obtained via backtracking. We make use of the previously defined make_instru function for keeping a history of steps.
Step26: Now finally we set some matplotlib parameters to adjust how our plot will look. The font is necessary because the Black Queen Unicode character is not a part of all fonts. You can move the slider to experiment and observe how the queens are assigned. It is also possible to move the slider using arrow keys or to jump to the value by directly editing the number with a double click. The Visualize Button will automatically animate the slider for you. The Extra Delay Box allows you to set a time delay of up to one second for each time step.
Step27: Now let us finally repeat the above steps for min_conflicts solution.
Step28: The visualization has the same features as the above, but here it also highlights conflicts by labeling the conflicted queens with a red background.
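As an aside, the backtracking search that this row instruments can be sketched as a compact, dependency-free solver for the same map-coloring problem. The function and variable names below are mine, not aima-python's, and the map is the standard Australia example:

```python
# Minimal backtracking search for map coloring (illustrative, not aima-python's code).
def backtrack(assignment, variables, domains, neighbors):
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)  # first unassigned variable
    for value in domains[var]:
        # different_values_constraint: no neighbor may already hold this color
        if all(assignment.get(n) != value for n in neighbors[var]):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, neighbors)
            if result is not None:
                return result
            del assignment[var]  # undo the assignment and try the next value
    return None  # dead end: trigger backtracking in the caller

neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
             "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
             "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": []}
variables = list(neighbors)
domains = {v: ["R", "G", "B"] for v in variables}
solution = backtrack({}, variables, domains, neighbors)
print(solution)
```

The InstruCSP class above adds exactly one thing to this loop: it snapshots the assignment dict on every assign/unassign so the history can be replayed in the widgets.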
|
11,834
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from sklearn import datasets
from sklearn import linear_model
import matplotlib.pyplot as plt
import sklearn
print sklearn.__version__
# boston data
boston = datasets.load_boston()
y = boston.target
' '.join(dir(boston))
boston['feature_names']
regr = linear_model.LinearRegression()
lm = regr.fit(boston.data, y)
predicted = regr.predict(boston.data)
fig, ax = plt.subplots()
ax.scatter(y, predicted)
ax.plot([y.min(), y.max()], [y.min(), y.max()], 'k--', lw=4)
ax.set_xlabel('$Measured$', fontsize = 20)
ax.set_ylabel('$Predicted$', fontsize = 20)
plt.show()
lm.intercept_, lm.coef_, lm.score(boston.data, y)
import pandas as pd
df = pd.read_csv('/Users/chengjun/github/cjc2016/data/tianya_bbs_threads_list.txt', sep = "\t", header=None)
df=df.rename(columns = {0:'title', 1:'link', 2:'author',3:'author_page', 4:'click', 5:'reply', 6:'time'})
df[:2]
def randomSplit(dataX, dataY, num):
dataX_train = []
dataX_test = []
dataY_train = []
dataY_test = []
import random
test_index = random.sample(range(len(df)), num)
for k in range(len(dataX)):
if k in test_index:
dataX_test.append([dataX[k]])
dataY_test.append(dataY[k])
else:
dataX_train.append([dataX[k]])
dataY_train.append(dataY[k])
return dataX_train, dataX_test, dataY_train, dataY_test,
import numpy as np
# Use only one feature
data_X = df.reply
# Split the data into training/testing sets
data_X_train, data_X_test, data_y_train, data_y_test = randomSplit(np.log(df.click+1), np.log(df.reply+1), 20)
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(data_X_train, data_y_train)
# Explained variance score: 1 is perfect prediction
print 'Variance score: %.2f' % regr.score(data_X_test, data_y_test)
# Plot outputs
plt.scatter(data_X_test, data_y_test, color='black')
plt.plot(data_X_test, regr.predict(data_X_test), color='blue', linewidth=3)
plt.show()
# The coefficients
print 'Coefficients: \n', regr.coef_
# The mean square error
print "Residual sum of squares: %.2f" % np.mean((regr.predict(data_X_test) - data_y_test) ** 2)
from sklearn.cross_validation import cross_val_score
regr = linear_model.LinearRegression()
scores = cross_val_score(regr, [[c] for c in df.click], df.reply, cv = 3)
scores.mean()
from sklearn.cross_validation import cross_val_score
x = [[c] for c in np.log(df.click +0.1)]
y = np.log(df.reply+0.1)
regr = linear_model.LinearRegression()
scores = cross_val_score(regr, x, y , cv = 3)
scores.mean()
repost = []
for i in df.title:
if u'่ฝฌ่ฝฝ' in i.decode('utf8'):
repost.append(1)
else:
repost.append(0)
data_X = [[df.click[i], df.reply[i]] for i in range(len(df))]
data_X[:3]
from sklearn.linear_model import LogisticRegression
df['repost'] = repost
model = LogisticRegression()
model.fit(data_X, df.repost)
model.score(data_X,df.repost)
def randomSplitLogistic(dataX, dataY, num):
dataX_train = []
dataX_test = []
dataY_train = []
dataY_test = []
import random
test_index = random.sample(range(len(df)), num)
for k in range(len(dataX)):
if k in test_index:
dataX_test.append(dataX[k])
dataY_test.append(dataY[k])
else:
dataX_train.append(dataX[k])
dataY_train.append(dataY[k])
return dataX_train, dataX_test, dataY_train, dataY_test,
# Split the data into training/testing sets
data_X_train, data_X_test, data_y_train, data_y_test = randomSplitLogistic(data_X, df.repost, 20)
# Create linear regression object
log_regr = LogisticRegression()
# Train the model using the training sets
log_regr.fit(data_X_train, data_y_train)
# Explained variance score: 1 is perfect prediction
print 'Variance score: %.2f' % log_regr.score(data_X_test, data_y_test)
logre = LogisticRegression()
scores = cross_val_score(logre, data_X,df.repost, cv = 3)
scores.mean()
from sklearn import naive_bayes
' '.join(dir(naive_bayes))
#Import Library of Gaussian Naive Bayes model
from sklearn.naive_bayes import GaussianNB
import numpy as np
#assigning predictor and target variables
x= np.array([[-3,7],[1,5], [1,2], [-2,0], [2,3], [-4,0], [-1,1], [1,1], [-2,2], [2,7], [-4,1], [-2,7]])
Y = np.array([3, 3, 3, 3, 4, 3, 3, 4, 3, 4, 4, 4])
#Create a Gaussian Classifier
model = GaussianNB()
# Train the model using the training sets
model.fit(x[:8], Y[:8])
#Predict Output
predicted= model.predict([[1,2],[3,4]])
print predicted
model.score(x[8:], Y[8:])
data_X_train, data_X_test, data_y_train, data_y_test = randomSplit(df.click, df.reply, 20)
# Train the model using the training sets
model.fit(data_X_train, data_y_train)
#Predict Output
predicted= model.predict(data_X_test)
print predicted
model.score(data_X_test, data_y_test)
from sklearn.cross_validation import cross_val_score
model = GaussianNB()
scores = cross_val_score(model, [[c] for c in df.click], df.reply, cv = 5)
scores.mean()
from sklearn import tree
model = tree.DecisionTreeClassifier(criterion='gini')
data_X_train, data_X_test, data_y_train, data_y_test = randomSplitLogistic(data_X, df.repost, 20)
model.fit(data_X_train,data_y_train)
model.score(data_X_train,data_y_train)
# Predict
model.predict(data_X_test)
# crossvalidation
scores = cross_val_score(model, data_X, df.repost, cv = 3)
scores.mean()
from sklearn import svm
# Create SVM classification object
model=svm.SVC()
' '.join(dir(svm))
data_X_train, data_X_test, data_y_train, data_y_test = randomSplitLogistic(data_X, df.repost, 20)
model.fit(data_X_train,data_y_train)
model.score(data_X_train,data_y_train)
# Predict
model.predict(data_X_test)
# crossvalidation
scores = []
cvs = [3, 5, 10, 25, 50, 75, 100]
for i in cvs:
score = cross_val_score(model, data_X, df.repost, cv = i)
scores.append(score.mean() ) # Try to tune cv
plt.plot(cvs, scores, 'b-o')
plt.xlabel('$cv$', fontsize = 20)
plt.ylabel('$Score$', fontsize = 20)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Use sklearn to run a logistic regression
Step2: Use sklearn to implement Naive Bayes prediction
Step3: naive_bayes.GaussianNB Gaussian Naive Bayes (GaussianNB)
Step4: cross-validation
Step5: Use sklearn to implement a decision tree
Step6: Use sklearn to implement SVM (support vector machines)
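As an aside, the `cross_val_score` pattern used throughout this row can be shown in a few self-contained lines against the modern `sklearn.model_selection` API (the row's code imports from the removed `sklearn.cross_validation` module). The synthetic data below is illustrative, not the Tianya BBS dataset:

```python
# Minimal cross-validation sketch on synthetic data (illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=4, random_state=42)
model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5)  # one accuracy score per fold
print(np.round(scores.mean(), 3))
```

As in the row's cv-tuning loop, averaging the per-fold scores gives a more stable estimate of generalization than a single train/test split.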
|
11,835
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'giss-e2-1h', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "open shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
11,836
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
# Import helper module
from helpers import ex02
# Load one-way BLAST results into a data frame called data_fwd
data_fwd = ex02.read_data("pseudomonas_blastp/B728a_vs_NCIMB_11764.tab")
# Show first few lines of the loaded data
data_fwd.head()
# Show descriptive statistics for the table
data_fwd.describe()
# Plot a histogram of alignment lengths for the BLAST data
data_fwd.alignment_length.hist(bins=100)
# Plot a histogram of percentage identity for the BLAST data
data_fwd.identity.hist(bins=100)
# Plot a histogram of query_coverage for the BLAST data
data_fwd.query_coverage.hist(bins=100)
# Plot a histogram of percentage coverage for the BLAST data
data_fwd.subject_coverage.hist(bins=100)
# Plot 2D histogram of subject sequence (match) coverage against query
# sequence coverage
ex02.plot_hist2d(data_fwd.query_coverage, data_fwd.subject_coverage,
"one-way query COV", "one-way subject COV",
"one-way coverage comparison")
ex02.plot_hist2d(data_fwd.query_coverage, data_fwd.identity,
"one-way query COV", "one-way match PID",
"one-way coverage/identity comparison")
# Load one-way BLAST results into a data frame called data_fwd
data_rev = ex02.read_data("pseudomonas_blastp/NCIMB_11764_vs_B728a.tab")
# Calculate RBBH for the two Pseudomonas datasets
# This returns three dataframes: df1 and df2 are the forward and reverse BLAST
# results (filtered, if any filters were used), and rbbh is the dataframe of
# reciprocal best BLAST hits
df1, df2, rbbh = ex02.find_rbbh(data_fwd, data_rev)
# Peek at the first few lines of the RBBH results
rbbh.head()
# Show summary statistics for RBBH
rbbh.describe()
# Report the size of each of the forward and reverse input, and rbbh output dataframes
s = '\n'.join(["Forward BLAST input: {0} proteins",
"Reverse BLAST input: {1} proteins",
"RBBH output: {2} proteins"])
print(s.format(len(data_fwd), len(data_rev), len(rbbh)))
print("(min difference = {0})".format(min(len(data_fwd), len(data_rev)) - len(rbbh)))
# Histogram of forward match percentage identity (one-way)
data_fwd.identity.hist(bins=100)
# Histogram of forward match percentage identity (RBBH)
rbbh.identity_x.hist(bins=100)
# Plot 2D histograms of query coverage against subject coverage for the
# one-way forward matches, and those retained after calculating RBBH
ex02.plot_hist2d(data_fwd.query_coverage, data_fwd.subject_coverage,
"one-way query COV", "one-way subject COV",
"one-way coverage comparison")
ex02.plot_hist2d(rbbh.query_coverage_x, rbbh.subject_coverage_x,
"RBBH (fwd) query COV", "RBBH (fwd) subject COV",
"RBBH coverage comparison")
# Calculate ID and coverage-filtered RBBH for the two Pseudomonas datasets
# This returns three dataframes: df1_filtered and df2_filtered are the
# filtered forward and reverse BLAST results, and rbbh_filtered is the
# dataframe of reciprocal best BLAST hits
df1_filtered, df2_filtered, rbbh_filtered = ex02.find_rbbh(data_fwd, data_rev, pid=40, cov=70)
# Histogram of forward match percentage identity (RBBH, filtered)
rbbh_filtered.identity_x.hist(bins=100)
# Plot 2D histograms of query coverage against subject coverage for the
# one-way forward matches retained after calculating RBBH and
# filtering on percentage identity and coverage
ex02.plot_hist2d(rbbh_filtered.query_coverage_x, rbbh_filtered.subject_coverage_x,
"filtered RBBH (fwd) query COV", "filtered_RBBH (fwd) subject COV",
"filtered RBBH coverage comparison")
# Read feature locations for each Pseudomonas file
features = ex02.read_genbank("pseudomonas/GCF_000988485.1_ASM98848v1_genomic.gbff",
"pseudomonas/GCF_000293885.2_ASM29388v3_genomic.gbff")
# Write a .crunch file of filtered RBBH for the Pseudomonas comparisons
ex02.write_crunch(rbbh_filtered, features,
fwd="GCF_000988485.1_ASM98848v1_genomic",
rev="GCF_000293885.2_ASM29388v3_genomic",
outdir="pseudomonas_blastp",
filename="B728a_rbbh_NCIMB_11764.crunch")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The first thing we do is load in the BLASTP output we generated, so that we can plot some of the key features. We do that using the ex02.read_data() function in the cell below. This puts the data into a dataframe called data_fwd.
Step2: <div class="alert alert-warning">
Step3: There are 5265 rows in this table, one for each of the query protein sequences in the P. syringae B728a annotation.
Step4: <div class="alert alert-warning">
Step5: <div class="alert alert-warning">
Step6: <div class="alert alert-warning">
Step7: We can inspect the dataframe of RBBH using the .head() and .describe() methods, by executing the cells below.
Step8: It is inevitable that the RBBH set will have the same or fewer protein pairs in it than the number of proteins in the smallest of the forward and reverse protein sets. But how many proteins have been filtered in this comparison? We can find out by executing the cell below.
Step9: <div class="alert alert-warning">
Step10: <div class="alert alert-warning">
Step11: <div class="alert alert-warning">
Step12: Visualising RBBH with ACT
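The reciprocal-best-hit logic behind ex02.find_rbbh can be sketched with plain pandas: keep each query's single best forward hit, each query's single best reverse hit, and intersect the two. This is an illustrative toy with made-up hit tables, not the helper module's actual implementation.

```python
import pandas as pd

# Toy one-way BLAST tables: query, subject, bitscore (hypothetical data)
fwd = pd.DataFrame({"query": ["a", "a", "b"],
                    "subject": ["x", "y", "y"],
                    "bitscore": [50.0, 90.0, 70.0]})
rev = pd.DataFrame({"query": ["x", "y", "y"],
                    "subject": ["a", "a", "b"],
                    "bitscore": [40.0, 88.0, 30.0]})

def best_hits(df):
    # Keep the single highest-scoring hit per query sequence
    return df.sort_values("bitscore", ascending=False).drop_duplicates("query")

def find_rbbh_sketch(fwd, rev):
    f, r = best_hits(fwd), best_hits(rev)
    # A pair is reciprocal when A's best hit is B and B's best hit is A
    return pd.merge(f, r, left_on=["query", "subject"],
                    right_on=["subject", "query"], suffixes=("_x", "_y"))

rbbh = find_rbbh_sketch(fwd, rev)
```

Here only the a/y pair survives: b's best forward hit is y, but y's best reverse hit is a, so b is dropped.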
|
11,837
|
<ASSISTANT_TASK:>
Python Code:
from astropy.io import ascii, fits
import pylab as plt
%matplotlib inline
from astropy import wcs
import numpy as np
import xidplus
from xidplus import moc_routines
import pickle
xidplus.__path__[0]
#Folder containing maps
imfolder=xidplus.__path__[0]+'/../test_files/'
pswfits=imfolder+'cosmos_itermap_lacey_07012015_simulated_observation_w_noise_PSW_hipe.fits.gz'#SPIRE 250 map
pmwfits=imfolder+'cosmos_itermap_lacey_07012015_simulated_observation_w_noise_PMW_hipe.fits.gz'#SPIRE 350 map
plwfits=imfolder+'cosmos_itermap_lacey_07012015_simulated_observation_w_noise_PLW_hipe.fits.gz'#SPIRE 500 map
#Folder containing prior input catalogue
catfolder=xidplus.__path__[0]+'/../test_files/'
#prior catalogue
prior_cat='lacey_07012015_MillGas.ALLVOLS_cat_PSW_COSMOS_test.fits'
#output folder
output_folder='./'
#-----250-------------
hdulist = fits.open(pswfits)
im250phdu=hdulist[0].header
im250hdu=hdulist[1].header
im250=hdulist[1].data*1.0E3 #convert to mJy
nim250=hdulist[2].data*1.0E3 #convert to mJy
w_250 = wcs.WCS(hdulist[1].header)
pixsize250=3600.0*w_250.wcs.cd[1,1] #pixel size (in arcseconds)
hdulist.close()
#-----350-------------
hdulist = fits.open(pmwfits)
im350phdu=hdulist[0].header
im350hdu=hdulist[1].header
im350=hdulist[1].data*1.0E3 #convert to mJy
nim350=hdulist[2].data*1.0E3 #convert to mJy
w_350 = wcs.WCS(hdulist[1].header)
pixsize350=3600.0*w_350.wcs.cd[1,1] #pixel size (in arcseconds)
hdulist.close()
#-----500-------------
hdulist = fits.open(plwfits)
im500phdu=hdulist[0].header
im500hdu=hdulist[1].header
im500=hdulist[1].data*1.0E3 #convert to mJy
nim500=hdulist[2].data*1.0E3 #convert to mJy
w_500 = wcs.WCS(hdulist[1].header)
pixsize500=3600.0*w_500.wcs.cd[1,1] #pixel size (in arcseconds)
hdulist.close()
hdulist = fits.open(catfolder+prior_cat)
fcat=hdulist[1].data
hdulist.close()
inra=fcat['RA']
indec=fcat['DEC']
# select only sources with 100micron flux greater than 50 microJy
sgood=fcat['S100']>0.050
inra=inra[sgood]
indec=indec[sgood]
from astropy.coordinates import SkyCoord
from astropy import units as u
c = SkyCoord(ra=[150.74]*u.degree, dec=[2.03]*u.degree)
import pymoc
moc=pymoc.util.catalog.catalog_to_moc(c,100,15)
#---prior250--------
prior250=xidplus.prior(im250,nim250,im250phdu,im250hdu, moc=moc)#Initialise with map, uncertainty map, wcs info and primary header
prior250.prior_cat(inra,indec,prior_cat)#Set input catalogue
prior250.prior_bkg(-5.0,5)#Set prior on background (assumes Gaussian pdf with mu and sigma)
#---prior350--------
prior350=xidplus.prior(im350,nim350,im350phdu,im350hdu, moc=moc)
prior350.prior_cat(inra,indec,prior_cat)
prior350.prior_bkg(-5.0,5)
#---prior500--------
prior500=xidplus.prior(im500,nim500,im500phdu,im500hdu, moc=moc)
prior500.prior_cat(inra,indec,prior_cat)
prior500.prior_bkg(-5.0,5)
#pixsize array (size of pixels in arcseconds)
pixsize=np.array([pixsize250,pixsize350,pixsize500])
#point response function for the three bands
prfsize=np.array([18.15,25.15,36.3])
#use Gaussian2DKernel to create prf (requires stddev rather than fwhm hence fwhm/2.355)
from astropy.convolution import Gaussian2DKernel
##---------fit using Gaussian beam-----------------------
prf250=Gaussian2DKernel(prfsize[0]/2.355,x_size=101,y_size=101)
prf250.normalize(mode='peak')
prf350=Gaussian2DKernel(prfsize[1]/2.355,x_size=101,y_size=101)
prf350.normalize(mode='peak')
prf500=Gaussian2DKernel(prfsize[2]/2.355,x_size=101,y_size=101)
prf500.normalize(mode='peak')
pind250=np.arange(0,101,1)*1.0/pixsize[0] #get 250 scale in terms of pixel scale of map
pind350=np.arange(0,101,1)*1.0/pixsize[1] #get 350 scale in terms of pixel scale of map
pind500=np.arange(0,101,1)*1.0/pixsize[2] #get 500 scale in terms of pixel scale of map
prior250.set_prf(prf250.array,pind250,pind250)#requires PRF as 2d grid, and x and y bins for grid (in pixel scale)
prior350.set_prf(prf350.array,pind350,pind350)
prior500.set_prf(prf500.array,pind500,pind500)
print('fitting '+ str(prior250.nsrc)+' sources \n')
print('using ' + str(prior250.snpix)+', '+ str(prior350.snpix)+' and '+ str(prior500.snpix)+' pixels')
prior250.get_pointing_matrix()
prior350.get_pointing_matrix()
prior500.get_pointing_matrix()
prior250.upper_lim_map()
prior350.upper_lim_map()
prior500.upper_lim_map()
%%time
from xidplus.stan_fit import SPIRE
fit=SPIRE.all_bands(prior250,prior350,prior500,iter=1000)
posterior=xidplus.posterior_stan(fit,[prior250,prior350,prior500])
xidplus.save([prior250,prior350,prior500],posterior,'test')
%%time
from xidplus.pyro_fit import SPIRE
fit_pyro=SPIRE.all_bands([prior250,prior350,prior500],n_steps=10000,lr=0.001,sub=0.1)
posterior_pyro=xidplus.posterior_pyro(fit_pyro,[prior250,prior350,prior500])
xidplus.save([prior250,prior350,prior500],posterior_pyro,'test_pyro')
plt.semilogy(posterior_pyro.loss_history)
%%time
from xidplus.numpyro_fit import SPIRE
fit_numpyro=SPIRE.all_bands([prior250,prior350,prior500])
posterior_numpyro=xidplus.posterior_numpyro(fit_numpyro,[prior250,prior350,prior500])
xidplus.save([prior250,prior350,prior500],posterior_numpyro,'test_numpyro')
prior250.bkg
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set image and catalogue filenames
Step2: Load in images, noise maps, header info and WCS information
Step3: Load in catalogue you want to fit (and make any cuts)
Step4: XID+ uses Multi Order Coverage (MOC) maps for cutting down maps and catalogues so they cover the same area. It can also take in MOCs as selection functions to carry out additional cuts. Let's use the Python module pymoc to create a MOC, centered on a specific position we are interested in. We will use a HEALPix order of 15 (the resolution
Step5: XID+ is built around two Python classes: a prior class and a posterior class. There should be a prior class for each map being fitted. It is initiated with a map, noise map, primary header and map header and can be set with a MOC. It also requires an input prior catalogue and point spread function.
Step6: Set PRF. For SPIRE, the PRF can be assumed to be Gaussian with a FWHM of 18.15, 25.15, 36.3 '' for 250, 350 and 500 $\mathrm{\mu m}$ respectively. Lets use the astropy module to construct a Gaussian PRF and assign it to the three XID+ prior classes.
Step7: Before fitting, the prior classes need to take the PRF and calculate how much each source contributes to each pixel. This process provides what we call a pointing matrix. Lets calculate the pointing matrix for each prior class
Step8: The default prior on flux is a uniform distribution, with a minimum and maximum of 0.00 and 1000.0 $\mathrm{mJy}$ respectively for each source. Running the function upper_lim_map resets the upper limit to the maximum flux value (plus a 5 sigma background value) found in the map in which the source makes a contribution.
Step9: Now fit using the XID+ interface to pystan
Step10: Initialise the posterior class with the fit object from pystan, and save alongside the prior classes
Step11: Alternatively, you can fit with the pyro backend.
Step12: You can fit with the numpyro backend.
|
11,838
|
<ASSISTANT_TASK:>
Python Code:
%%writefile ../../user_models/cylinder_Bscan_2D.in
#title: B-scan from a metal cylinder buried in a dielectric half-space
#domain: 0.240 0.210 0.002
#dx_dy_dz: 0.002 0.002 0.002
#time_window: 3e-9
#material: 6 0 1 0 half_space
#waveform: ricker 1 1.5e9 my_ricker
#hertzian_dipole: z 0.040 0.170 0 my_ricker
#rx: 0.080 0.170 0
#src_steps: 0.002 0 0
#rx_steps: 0.002 0 0
#box: 0 0 0 0.240 0.170 0.002 half_space
#cylinder: 0.120 0.080 0 0.120 0.080 0.002 0.010 pec
import os
from gprMax.gprMax import api
filename = os.path.join(os.pardir, os.pardir, 'user_models', 'cylinder_Bscan_2D.in')
api(filename, n=60, geometry_only=False)
%run -m tools.outputfiles_merge user_models/cylinder_Bscan_2D
%matplotlib inline
import os
from tools.plot_Bscan import get_output_data, mpl_plot
filename = os.path.join(os.pardir, os.pardir, 'user_models', 'cylinder_Bscan_2D_merged.out')
rxnumber = 1
rxcomponent = 'Ez'
outputdata, dt = get_output_data(filename, rxnumber, rxcomponent)
plt = mpl_plot(outputdata, dt, rxnumber, rxcomponent)
# Change from the default 'seismic' colormap
#plt.set_cmap('gray')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The differences between this input file and the one from the A-scan are the x coordinates of the source and receiver, and the commands needed to move the source and receiver. The source and receiver are offset by 40mm from each other as before, but they are now shifted to a starting position for the scan. The #src_steps command is used to move every source in the model by specified steps each time the model is run. Similarly, the #rx_steps command is used to move every receiver in the model by specified steps each time the model is run. Note that the same functionality can be achieved by using a block of Python code in the input file to move the source and receiver individually (for further details see the Python section of the User Guide).
Step2: View the results
Step3: You should see a combined output file cylinder_Bscan_2D_merged.out. The tool will ask you if you want to delete the original single A-scan output files or keep them.
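As a quick, illustrative sanity check of the geometry above: with #src_steps and #rx_steps of 2mm and n=60 model runs, both antennas stay well inside the 240mm domain.

```python
# Values taken from the cylinder_Bscan_2D.in file above
domain_x = 0.240                     # model width in metres
src_start, rx_start = 0.040, 0.080   # starting x positions
step, n_traces = 0.002, 60           # #src_steps / #rx_steps and number of runs

src_positions = [src_start + i * step for i in range(n_traces)]
rx_positions = [rx_start + i * step for i in range(n_traces)]
```

The final source and receiver positions are 0.158m and 0.198m, so the receiver never reaches the right-hand edge of the domain.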
|
11,839
|
<ASSISTANT_TASK:>
Python Code:
data = pd.read_csv('bracket-05.tsv', sep='\t')
data = data.\
query('rd1_win > 0').\
rename(columns=dict(rd1_win=1, rd2_win=2, rd3_win=3, rd4_win=4, rd5_win=5, rd6_win=6, rd7_win=7))\
[['team_name', 'team_seed', 1, 2, 3, 4, 5, 6, 7]]
data.head()
data[8] = 0
for col in range(8, 1, -1):
data[col] = data[col-1] - data[col]
data = data.drop(labels=1, axis=1)
data = data.rename(columns=dict(zip(range(2,9), range(7))))
data.head()
rounds = range(7)
scores = [0, 1, 2, 3, 5, 8, 13]
cumscores = np.cumsum(scores)
prices = {'1': 500,
'2': 300,
'3': 225,
'4': 175,
'5': 125,
'6': 125,
'7': 95,
'8': 85,
'9': 60,
'10': 65,
'11': 60,
'11a': 60,
'11b': 60,
'12': 55,
'13': 25,
'14': 20,
'15': 5,
'16': 1,
'16a': 1,
'16b': 1}
budget = 2000
n = len(data)
data['price'] = [prices[seed] for seed in data.team_seed]
def get_expected_score(team):
return sum(team[r]*cumscores[r] for r in rounds)
data['expected_score'] = data.apply(get_expected_score, axis=1)
def get_variance(team):
return sum(team[r]*(cumscores[r]-team['expected_score'])**2 for r in rounds)
data['variance'] = data.apply(get_variance, axis=1)
data['efficiency'] = data.expected_score/data.price
cols = ['team_name', 'team_seed', 'price', 'expected_score', 'variance', 'efficiency']
data[cols].sort(columns=['efficiency'], ascending=False).head()
from cvxopt import matrix
from cvxopt.glpk import ilp
def solve_binary_program(eps):
"""Uses the integer linear program solver ilp from glpk:
(status, x) = ilp(c, G, h, A, b, I, B)
minimize c'*x
subject to G*x <= h
A*x = b
x[k] is integer for k in I
x[k] is binary for k in B
c nx1 dense 'd' matrix with n>=1
G mxn dense or sparse 'd' matrix with m>=1
h mx1 dense 'd' matrix
A pxn dense or sparse 'd' matrix with p>=0
b px1 dense 'd' matrix
I set of indices of integer variables
B set of indices of binary variables
"""
c = data.expected_score - eps*data.variance
c = matrix(c)
G = matrix(data.price[:, np.newaxis].T, tc='d')
h = matrix(budget, tc='d')
A = matrix(np.zeros((1, n)), tc='d')
b = matrix(0.)
I = set(range(n))
B = set(range(n))
(status, x) = ilp(-c, G, h, A, b, I, B)
if status != 'optimal':
raise
return x
def solve_and_display(eps=0):
x = solve_binary_program(eps)
print('number of teams', sum(x))
data['selected'] = x
expected_score = data[data.selected == 1].expected_score.sum()
total_variance = data[data.selected == 1].variance.sum()
print('expected score %.2f' % expected_score)
print('total variance %.2f' % total_variance)
return data\
[data.selected == 1]\
[['team_name', 'team_seed', 'price', 'expected_score', 'variance', 'efficiency']].\
sort(columns='price', ascending=False)
solve_and_display()
solve_and_display(eps=.03)
solve_and_display(eps=.05)
solve_and_display(eps=.1)
solve_and_display(eps=.4)
import matplotlib.pyplot as plt
%matplotlib inline
f = lambda eps: sum(solve_binary_program(eps))
eps = np.linspace(0, .75)
num_teams = [f(_) for _ in eps]
plt.plot(eps, num_teams)
plt.ylim(0, 40)
plt.xlabel('Risk penalty $\epsilon$')
plt.ylabel('Optimal number of teams');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The numbered columns represent the probability that a team will win in that round of the tournament. This of course means that they had to win all previous rounds, so you can see that the numbers are always decreasing from left to right.
Step2: Now we set up the data for the scoring rules of the pool
Step3: A few quantities which we'll be interested in are
Step5: Choosing the optimal set of teams
Step6: Results
Step7: The maximum expected score solution
Step8: With just a little bit of risk penalty, Villanova drops out of the optimal set. The extra 500 of budget is spent on Gonzaga, North Carolina, and UC Irvine, with only a tiny loss of expected score.
Step9: Increasing the risk penalty typically increases the number of teams in the optimal set, spreading the eggs across many baskets to mitigate risk. But the relationship isn't strictly monotone as we see here, going down from 17 to 16 teams after increasing $\epsilon$ from .03 to .05
Step10: As we increase the risk penalty even further, the optimization problem no longer really suits our purpose. It becomes so afraid of risk that it spends far below the budget, choosing mostly terrible teams that are likely to lose in the first round, contributing very little uncertainty to our result, but also very little value.
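The objective the ILP maximises above is expected_score - eps*variance subject to the price budget; on a toy table small enough to enumerate, the same trade-off can be sketched by brute force (names and numbers below are made up).

```python
from itertools import combinations

# Toy teams: (name, price, expected_score, variance) -- made-up numbers
teams = [("A", 500, 20.0, 25.0), ("B", 300, 12.0, 1.0),
         ("C", 200, 9.0, 1.0), ("D", 100, 4.0, 0.5)]

def best_subset(teams, budget, eps):
    # Enumerate every subset, keep the feasible one with the best
    # mean-variance objective: sum(score) - eps * sum(variance)
    best, best_val = (), float("-inf")
    for r in range(len(teams) + 1):
        for combo in combinations(teams, r):
            if sum(t[1] for t in combo) > budget:
                continue          # over budget: infeasible
            val = sum(t[2] for t in combo) - eps * sum(t[3] for t in combo)
            if val > best_val:
                best, best_val = combo, val
    return {t[0] for t in best}
```

With eps=0 the high-scoring but volatile team A is chosen; raising eps to 0.5 pushes the selection onto the low-variance teams, mirroring the behaviour seen in the notebook.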
|
11,840
|
<ASSISTANT_TASK:>
Python Code:
#Implements functional expansions
from functions.FE import FE
#Evaluates accuracy in a dataset for a particular classifier
from fitness import Classifier
#Implements gafe using DEAP toolbox
import ga
from sklearn.preprocessing import MinMaxScaler
import numpy as np
import pandas as pd
iris = pd.read_csv("data/iris.data", sep=",")
#Isolate the attributes columns
irisAtts = iris.drop("class", 1)
#Isolate the class column
target = iris["class"]
scaledIris = MinMaxScaler().fit_transform(irisAtts)
bestSingleMatch = {'knn': [(1,5) for x in range(4)], 'cart': [(3,2) for x in range(4)], 'svm': [(7,4) for x in range(4)]}
functionalExp = FE()
for cl in ['knn', 'cart', 'svm']:
#Folds are the number of folds used in crossvalidation
#Jobs are the number of CPUS used in crossvalidation and some classifiers training step.
#You can also change some classifier parameters, such as k_neigh for neighbors in knn, C in svm and others.
#If you do not specify, it will use the articles default.
model = Classifier(cl, target, folds=10, jobs=6)
#The class internally normalizes data, so no need to send normalized data when classifying
#accuracy without expanding
print("original accuracy " + cl + " " + str(model.getAccuracy(irisAtts)))
#Expand the scaled data
expandedData = functionalExp.expandMatrix(scaledIris, bestSingleMatch[cl])
print("single match expansion accuracy " + cl + " " + str(model.getAccuracy(expandedData)))
#If scaled is False, it will scale data in range [0,1]
gafe = ga.GAFE(model, scaledIris, target, scaled=True)
#Specify how many iterations of GAFE you wish with n_iter
#Note that this is a slow method, so have patience if n_iter is high
avg, bestPair = gafe.runGAFE(n_population=21, n_iter=1, verbose=True)
print("gafe " + cl + " " + str(avg) )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import modules from scikit-learn, numpy and pandas to help us deal with the data
Step2: Load data using pandas. We will use the famous Iris Dataset
Step3: Prior to expanding the data, scale all values to the interval [0,1] for better results
Step4: If we didn't use GAFE, after testing 49 (7*7) combinations of FE-ES this configuration would be the best for each classifier. Note we are applying the same FE-ES pair for every data column
Step5: Now let's calculate the accuracy results for original data, single match and GAFE.
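What MinMaxScaler does here can be sketched column-wise with plain numpy — each feature is mapped to [0,1] via (x - min) / (max - min); this is an illustration, not scikit-learn's implementation.

```python
import numpy as np

def minmax_scale(X):
    # Column-wise (x - min) / (max - min), mapping each feature to [0, 1]
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo)

# Toy two-feature matrix
X = np.array([[1.0, 10.0], [2.0, 30.0], [3.0, 20.0]])
scaled = minmax_scale(X)
```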
|
11,841
|
<ASSISTANT_TASK:>
Python Code:
!find export/probs/
%%bash
LOCAL_DIR=$(find export/probs | head -2 | tail -1)
BUCKET=ai-analytics-solutions-kfpdemo
gsutil rm -rf gs://${BUCKET}/mlpatterns/batchserving
gsutil cp -r $LOCAL_DIR gs://${BUCKET}/mlpatterns/batchserving
gsutil ls gs://${BUCKET}/mlpatterns/batchserving
%%bigquery
CREATE OR REPLACE MODEL mlpatterns.imdb_sentiment
OPTIONS(model_type='tensorflow', model_path='gs://ai-analytics-solutions-kfpdemo/mlpatterns/batchserving/*')
%%bigquery
SELECT * FROM ML.PREDICT(MODEL mlpatterns.imdb_sentiment,
(SELECT 'This was very well done.' AS reviews)
)
%%bigquery preds
SELECT * FROM ML.PREDICT(MODEL mlpatterns.imdb_sentiment,
(SELECT consumer_complaint_narrative AS reviews
FROM `bigquery-public-data`.cfpb_complaints.complaint_database
WHERE consumer_complaint_narrative IS NOT NULL
)
)
preds[:3]
# what does a "positive" complaint look like?
preds.sort_values(by='positive_review_probability', ascending=False).iloc[1]['reviews']
# what does a "typical" complaint look like?
preds.sort_values(by='positive_review_probability', ascending=False).iloc[len(preds)//2]['reviews']
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load model into BigQuery for batch serving
Step2: Now, do it at scale, on consumer complaints about financial products and services
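The inspection pattern used above — sort by predicted probability, then take the top row for the most "positive" complaint and the middle row for a "typical" one — can be sketched on a toy frame:

```python
import pandas as pd

# Toy predictions frame (hypothetical probabilities)
preds = pd.DataFrame({"reviews": ["r1", "r2", "r3", "r4", "r5"],
                      "positive_review_probability": [0.9, 0.1, 0.5, 0.7, 0.3]})
ranked = preds.sort_values("positive_review_probability", ascending=False)
most_positive = ranked.iloc[0]["reviews"]           # highest probability
typical = ranked.iloc[len(ranked) // 2]["reviews"]  # middle of the ranking
```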
|
11,842
|
<ASSISTANT_TASK:>
Python Code:
import os
import zipfile
from math import sqrt
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid')
%matplotlib inline
# Put files in current direction into a list
files_list = [f for f in os.listdir('.') if os.path.isfile(f)]
# Filenames of unzipped files
unzip_files = ['kc_house_train_data.csv','kc_house_test_data.csv', 'kc_house_data.csv']
# If upzipped file not in files_list, unzip the file
for filename in unzip_files:
if filename not in files_list:
zip_file = filename + '.zip'
unzipping = zipfile.ZipFile(zip_file)
unzipping.extractall()
unzipping.close
# Dictionary with the correct dtypes for the DataFrame columns
dtype_dict = {'bathrooms':float, 'waterfront':int, 'sqft_above':int, 'sqft_living15':float,
'grade':int, 'yr_renovated':int, 'price':float, 'bedrooms':float, 'zipcode':str,
'long':float, 'sqft_lot15':float, 'sqft_living':float, 'floors':str,
'condition':int, 'lat':float, 'date':str, 'sqft_basement':int, 'yr_built':int,
'id':str, 'sqft_lot':int, 'view':int}
# Loading sales data, sales training data, and test_data into DataFrames
sales = pd.read_csv('kc_house_data.csv', dtype = dtype_dict)
train_data = pd.read_csv('kc_house_train_data.csv', dtype = dtype_dict)
test_data = pd.read_csv('kc_house_test_data.csv', dtype = dtype_dict)
# Looking at head of training data DataFrame
train_data.head()
def get_numpy_data(input_df, features, output):
input_df['constant'] = 1.0 # Adding column 'constant' to input DataFrame with all values = 1.0
features = ['constant'] + features # Adding constant' to List of features
feature_matrix = input_df.as_matrix(columns=features) # Convert DataFrame w/ columns in features list to np.ndarray
output_array = input_df[output].values # Convert column with output feature into np.array
return(feature_matrix, output_array)
def predict_output(feature_matrix, weights):
predictions = np.dot(feature_matrix, weights)
return predictions
def feature_derivative(errors, feature):
derivative = 2.0*np.dot(errors, feature)
return derivative
def regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance):
converged = False
weights = np.array(initial_weights) # Initializing the weights to be the initial weights
while not converged:
predictions = predict_output(feature_matrix, weights) # Finding predicted output w/ weights and feature_matrix
errors = predictions - output # Computing error of predicted output and actual output for each data point
gradient_sum_squares = 0 # initialize the gradient sum of squares
# While we haven't reached the tolerance, update the weight of each feature
# Looping over each feature
for i in range(len(weights)): # loop over each weight
der_feat_i = feature_derivative(errors, feature_matrix[:,i]) # Cost function derivative for feature i
gradient_sum_squares += der_feat_i**2.0 # Add derivative^2 to grad. magnitude (for assessing convergence)
weights[i] = weights[i] - step_size*der_feat_i # Update weight[i] by subtr. step_size * der. weight[i]
# Compute square-root of gradient sum of squares to get the gradient magnigude:
gradient_magnitude = sqrt(gradient_sum_squares)
if gradient_magnitude < tolerance:
converged = True
return(weights)
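As an illustrative sanity check of the update rule above, the same loop can be run standalone on toy data whose true weights are known (here y = 3 + 2x, so gradient descent should recover roughly [3, 2]); this miniature mirrors the functions above but is not part of the assignment code.

```python
import numpy as np

# Toy data generated from known weights: y = 3 + 2*x (constant column first)
X = np.column_stack([np.ones(50), np.linspace(0.0, 1.0, 50)])
y = 3.0 + 2.0 * X[:, 1]

weights = np.zeros(2)
step_size, tolerance = 0.01, 1e-6
while True:
    errors = X.dot(weights) - y           # predictions minus observed output
    gradient = 2.0 * X.T.dot(errors)      # derivative of RSS for every weight
    weights -= step_size * gradient       # one gradient-descent update
    if np.sqrt((gradient**2).sum()) < tolerance:
        break
```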
simple_features = ['sqft_living']
my_output = 'price'
(simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output)
initial_weights = np.array([-47000., 1.])
step_size = 7e-12
tolerance = 2.5e7
weights_model_1 = regression_gradient_descent(simple_feature_matrix, output, initial_weights, step_size, tolerance)
round(weights_model_1[1], 1)
# First, from test data, create a feature matrix and output vector
(test_model_1_feature_matrix, test_output_model_1) = get_numpy_data(test_data, simple_features, my_output)
test_model_1_predictions = predict_output(test_model_1_feature_matrix, weights_model_1)
round(test_model_1_predictions[0] , 0)
RSS_test_model_1 = sum( (test_model_1_predictions-test_data['price'].values)**2.0 )
# Creating feature matrix with 'sqft_living' feature and output vector with 'price' feature
X_model_1 = train_data[ ['sqft_living'] ]
y_model_1 = train_data['price']
# Creating a LinearRegression Object. Then, performing linear regression on feature matrix and output vector
lin_reg_model_1 = LinearRegression()
lin_reg_model_1.fit(X_model_1, y_model_1)
# Creating x-vector for plotting. Then, defining line with weights from gradient descent and sklearn
x_vect_simple_reg = np.arange(0,14000+1,1)
y_model_1_grad_desc = weights_model_1[0] + weights_model_1[1]*x_vect_simple_reg
y_model_1_sklearn = lin_reg_model_1.intercept_ + lin_reg_model_1.coef_[0]*x_vect_simple_reg
plt.figure(figsize=(8,6))
plt.plot(train_data['sqft_living'], train_data['price'],'.',label= 'House Price Data')
plt.hold(True)
plt.plot(x_vect_simple_reg, y_model_1_grad_desc, label= 'Gradient Descent')
plt.plot(x_vect_simple_reg, y_model_1_sklearn, '--' , label= 'Sklearn Library')
plt.hold(False)
plt.legend(loc='upper left', fontsize=16)
plt.xlabel('Living Area (ft^2)', fontsize=18)
plt.ylabel('House Price ($)', fontsize=18)
plt.title('Simple Linear Regression', fontsize=18)
plt.axis([0.0, 14000.0, 0.0, 8000000.0])
plt.show()
plt.figure(figsize=(12,8))
plt.subplot(1, 2, 1)
plt.plot(train_data['sqft_living'], train_data['price'],'.')
plt.xlabel('Living Area (ft^2)', fontsize=16)
plt.ylabel('House Price ($)', fontsize=16)
plt.subplot(1, 2, 2)
plt.plot(train_data['sqft_living15'], train_data['price'],'.')
plt.xlabel('Ave. ft^2 of 15 nearest neighbors', fontsize=16)
plt.ylabel('House Price ($)', fontsize=16)
plt.show()
# sqft_living15 is the average squarefeet for the nearest 15 neighbors.
model_features = ['sqft_living', 'sqft_living15']
my_output = 'price'
(feature_matrix, output) = get_numpy_data(train_data, model_features, my_output)
initial_weights = np.array([-100000., 1., 1.])
step_size = 4e-12
tolerance = 1e9
weights_model_2_mulp_reg = regression_gradient_descent(feature_matrix, output,
initial_weights, step_size, tolerance)
# First, creating the feature matrix and output array. Then, calculating predictions for Test data set/
(test_2_feature_matrix, test_output) = get_numpy_data(test_data, model_features, my_output)
test_2_feat_predictions = predict_output(test_2_feature_matrix, weights_model_2_mulp_reg)
round(test_2_feat_predictions[0] , 0)
test_data['price'][0]
# Creating feature matrix with ['sqft_living', 'sqft_living15'] features and output vector with 'price' feature
X_model_2 = train_data[ ['sqft_living', 'sqft_living15'] ]
y_model_2 = train_data['price']
# Creating a LinearRegression Object. Then, performing linear regression on feature matrix and output vector
lin_reg_model_2 = LinearRegression()
lin_reg_model_2.fit(X_model_2, y_model_2)
print 'Grad Desc Weights = Intercept: %.3e, sqft_living feat: %.3e, sqft_living15 feat: %.3e' % (weights_model_2_mulp_reg[0], weights_model_2_mulp_reg[1], weights_model_2_mulp_reg[2])
print 'Sklearn Weights = Intercept: %.3e, sqft_living feat: %.3e, sqft_living15 feat: %.3e' % (lin_reg_model_2.intercept_, lin_reg_model_2.coef_[0], lin_reg_model_2.coef_[1])
diff_model_1_house_1_price = abs(test_model_1_predictions[0] - test_data['price'][0])
diff_model_2_house_1_price = abs(test_2_feat_predictions[0] - test_data['price'][0])
if diff_model_1_house_1_price < diff_model_2_house_1_price:
print 'Model 1 closer to true price for 1st house'
else:
print 'Model 2 closer to true price for 1st house'
RSS_test_model_2 = sum( (test_2_feat_predictions-test_data['price'].values)**2.0 )
print RSS_test_model_2
if RSS_test_model_1 < RSS_test_model_2:
print 'RSS lower for Model 1'
else:
print 'RSS lower for Model 2'
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Unzipping files with house sales data
Step2: Loading Sales data, Sales training data, and Sales test data
Step3: Convert to DataFrame data to np matrix and np array
Step4: Predicting output given regression weights
Step5: Computing the Derivative
Step6: Gradient Descent
Step7: A few things to note before we run the gradient descent. Since the gradient is a sum over all the data points and involves a product of an error and a feature, the gradient itself will be very large, because the features (square feet) and the output (prices) are large. So while you might expect "tolerance" to be small, small is only relative to the size of the features.
Step8: Next, run gradient descent with the above parameters to determine the weights of each feature.
Step9: Quiz Question
Step10: Use newly estimated weights and predict_output() function to compute the predictions on the TEST data.
Step11: Now, compute predictions using test_model_1_feature_matrix and weights from above.
Step12: Quiz Question
Step13: Now, with the predictions on test data, compute the RSS (Residual Sum of Squares) on the test data set.
Step14: Comparing Model 1 Gradient Descent with Sklearn Library
Step15: Figure below shows that weights from Gradient Descent and Sklearn library are in good agreement
Step16: Model 2
Step17: The following code produces the weights for a second model with the following parameters
Step18: Use the parameters above to determine the weights for model 2.
Step19: Use the newly determined weights and the predict_output function to compute the predictions on the TEST data.
Step20: Quiz Question
Step21: What is the actual price for the 1st house in the test data set?
Step22: Comparing Model 2 Gradient Descent with Sklearn Library
Step23: Weights from gradient descent and Sklearn in good agreement
Step24: Now, use your predictions and the output to compute the RSS for model 2 on TEST data.
Step25: Quiz Question
|
11,843
|
<ASSISTANT_TASK:>
Python Code:
def number_to_words(n):
"""Given a number n between 1 and 1000 inclusive, return the number spelled out in English words."""
n=str(n)
key = {1:'one', 2:'two', 3:'three', 4:'four', 5:'five', 6:'six', 7:'seven', 8:'eight', 9:'nine', 10:'ten', 11:'eleven', 12:'twelve', 13:'thirteen', 14:'fourteen', 15:'fifteen', 16:'sixteen', 17:'seventeen', 18:'eighteen', 19:'nineteen', 20:'twenty', 30:'thirty', 40:'forty', 50:'fifty', 60:'sixty', 70:'seventy', 80:'eighty', 90:'ninety'}
if len(n)==4: # '1000'
return 'one thousand'
elif len(n)==3: # 3-digit numbers
if int(n[1])==0 and int(n[2])==0: # 'n00'
return key[int(n[0])]+' hundred'
elif int(n[1])==0 and not int(n[2])==0: # 'n0l'
return key[int(n[0])]+' hundred and '+key[int(n[2])]
elif not int(n[1])==0 and int(n[2])==0: # 'nm0'
return key[int(n[0])]+' hundred and '+key[int(n[1])*10]
elif not int(n[1])==0 and not int(n[2])==0: # 'nml'
if int(n[1])==1: # 'n1l'
return key[int(n[0])]+' hundred and '+key[int(n[1]+n[2])]
elif not int(n[1])==1:
return key[int(n[0])]+' hundred and '+key[int(n[1])*10]+'-'+key[int(n[2])]
elif len(n)==2: # 2-digit numbers
if int(n[1])==0: # 'n0'
return key[int(n[0])*10]
elif not int(n[1])==0: # 'nm'
if int(n[0])==1: # '1m'
return key[int(n[0]+n[1])]
elif not int(n[0])==1:
return key[int(n[0])*10]+'-'+key[int(n[1])]
elif len(n)==1: # 1-digit numbers
return key[int(n)]
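A quick hand check of the letter-counting convention used later in this notebook: spell a number out and count only the alphabetic characters, so spaces and hyphens are excluded. For 342, for example:

```python
phrase = "three hundred and forty-two"   # 342 spelled out by hand
letters = sum(ch.isalpha() for ch in phrase)
print(letters)  # 23: spaces and the hyphen are not counted
```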
p = range(1,11)
type(p)
assert number_to_words(1000)=='one thousand'
assert number_to_words(593)=='five hundred and ninety-three'
assert number_to_words(111)=='one hundred and eleven'
assert number_to_words(67)=='sixty-seven'
assert number_to_words(14)=='fourteen'
assert number_to_words(2)=='two'
assert True # use this for grading the number_to_words tests.
l="I am a-string"
len(''.join(l.split()))
def count_letters(n):
"""Count the number of letters used to write out the words for the number n."""
phi = number_to_words(n)
count = len(''.join(phi.split()))
for i in range(len(phi)):
if phi[i]=='-':
count -= 1
return count
assert count_letters(1000)==11
assert count_letters(342)==23
assert count_letters(115)==20
assert count_letters(21)==9
assert True # use this for grading the count_letters tests.
total=0
for i in range(1,1001):
total += count_letters(i)
total
assert True # use this for grading the answer to the original question.
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Project Euler
Step2: Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.
Step4: Now define a count_letters(n) that returns the number of letters used to write out the words for the number n.
Step5: Now write a set of assert tests for your count_letters function that verifies that it is working as expected.
Step6: Finally, use your count_letters function to solve the original question.
|
11,844
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import pylab
# Required imports
from wikitools import wiki
from wikitools import category
# import nltk
import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
from test_helper import Test
import collections
from pyspark.mllib.clustering import LDA, LDAModel
from pyspark.mllib.linalg import Vectors
# import gensim
# import numpy as np
#ย import lda
# import lda.datasets
site = wiki.Wiki("https://en.wikipedia.org/w/api.php")
# Select a category with a reasonable number of articles (>100)
cat = "Economics"
# cat = "Pseudoscience"
print cat
# Loading category data. This may take a while
print "Loading category data. This may take a while..."
cat_data = category.Category(site, cat)
corpus_titles = []
corpus_text = []
for n, page in enumerate(cat_data.getAllMembersGen()):
print "\r Loading article {0}".format(n + 1),
corpus_titles.append(page.title)
corpus_text.append(page.getWikiText())
n_art = len(corpus_titles)
print "\nLoaded " + str(n_art) + " articles from category " + cat
# n = 5
# print corpus_titles[n]
# print corpus_text[n]
corpusRDD = sc.parallelize(corpus_text, 4)
print "\nRDD created with {0} elements".format(corpusRDD.count())
Test.assertTrue(corpusRDD.count() >= 100,
"Your corpus_tokens has less than 100 articles. Consider using a larger dataset")
# You can comment this if the package is already available.
# Select option "d) Download", and identifier "punkt"
# nltk.download()
# You can comment this if the package is already available.
# Select option "d) Download", and identifier "stopwords"
# nltk.download()
stopwords_en = stopwords.words('english')
print "The stopword list contains {0} elements: ".format(len(stopwords_en))
print stopwords_en
def getTokenList(doc, stopwords_en):
# scode: tokens = <FILL IN> # Tokenize docs
tokens = word_tokenize(doc.decode('utf-8'))
# scode: tokens = <FILL IN> # Remove non-alphanumeric tokens and normalize to lowercase
tokens = [t.lower() for t in tokens if t.isalnum()]
# scode: tokens = <FILL IN> # Remove stopwords
tokens = [t for t in tokens if t not in stopwords_en]
return tokens
Test.assertEquals(getTokenList('The rain in spain stays mainly in the plane', stopwords_en),
[u'rain', u'spain', u'stays', u'mainly', u'plane'],
'getTokenList does not return the expected results')
# scode: corpus_tokensRDD = <FILL IN>
corpus_tokensRDD = (corpusRDD
.map(lambda x: getTokenList(x, stopwords_en))
.cache())
# print "\n Let's check tokens after cleaning:"
print corpus_tokensRDD.take(1)[0][0:30]
Test.assertEquals(corpus_tokensRDD.count(), n_art,
"The number of documents in the original set does not correspond to the size of corpus_tokensRDD")
Test.assertTrue(all([c==c.lower() for c in corpus_tokensRDD.take(1)[0]]), 'Capital letters have not been removed')
Test.assertTrue(all([c.isalnum() for c in corpus_tokensRDD.take(1)[0]]),
'Non alphanumeric characters have not been removed')
Test.assertTrue(len([c for c in corpus_tokensRDD.take(1)[0] if c in stopwords_en])==0,
'Stopwords have not been removed')
# Select stemmer.
stemmer = nltk.stem.SnowballStemmer('english')
# scode: corpus_stemRDD = <FILL IN>
corpus_stemRDD = corpus_tokensRDD.map(lambda x: [stemmer.stem(token) for token in x])
print "\nLet's check the first tokens from document 0 after stemming:"
print corpus_stemRDD.take(1)[0][0:30]
Test.assertTrue((len([c for c in corpus_stemRDD.take(1)[0] if c!=stemmer.stem(c)])
< 0.1*len(corpus_stemRDD.take(1)[0])),
'It seems that stemming has not been applied properly')
# You can comment this if the package is already available.
# Select option "d) Download", and identifier "wordnet"
# nltk.download()
wnl = WordNetLemmatizer()
# scode: corpus_lemmatRDD = <FILL IN>
corpus_lemmatRDD = (corpus_tokensRDD
.map(lambda x: [wnl.lemmatize(token) for token in x]))
print "\nLet's check the first tokens from document 0 after stemming:"
print corpus_lemmatRDD.take(1)[0][0:30]
# corpus_wcRDD = <FILL IN>
corpus_wcRDD = (corpus_stemRDD
.map(collections.Counter)
.map(lambda x: [(t, x[t]) for t in x]))
print corpus_wcRDD.take(1)[0][0:30]
Test.assertTrue(corpus_wcRDD.count() == n_art, 'List corpus_clean does not contain the expected number of articles')
Test.assertTrue(corpus_wcRDD.flatMap(lambda x: x).map(lambda x: x[1]).sum()== corpus_stemRDD.map(len).sum(),
'The total token count in the output RDD is not consistent with the total number of input tokens')
# scode: wcRDD = < FILL IN >
wcRDD = (corpus_wcRDD
.flatMap(lambda x: x)
.reduceByKey(lambda x, y: x + y))
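The flatMap + reduceByKey pair above is a distributed word count; the same aggregation can be sketched in plain Python with collections.Counter (toy documents, no Spark — the token names are illustrative):

```python
from collections import Counter

docs = [["econ", "market", "econ"], ["market", "trade"]]   # two toy documents
per_doc = [Counter(d) for d in docs]                       # per-document counts, like corpus_wcRDD
total = sum(per_doc, Counter())                            # merge step, like reduceByKey(lambda x, y: x + y)
print(total)
```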
print wcRDD.take(30)
# Token Dictionary:
n_tokens = wcRDD.count()
# scode: TD = wcRDD.<FILL IN>
TD = wcRDD.takeOrdered(n_tokens, lambda x: -x[1])
# scode: D = <FIll IN> # Extract tokens from TD
D = map(lambda x: x[0], TD)
# scode: token_count = <FILL IN> # Extract token counts from TD
token_count = map(lambda x: x[1], TD)
# ALTERNATIVELY:
TD_RDD = wcRDD.sortBy(lambda x: -x[1])
D_RDD = TD_RDD.map(lambda x: x[0])
token_countRDD = TD_RDD.map(lambda x: x[1])
print TD
# SORTED TOKEN FREQUENCIES (II):
# plt.rcdefaults()
# Example data
n_bins = 25
y_pos = range(n_bins-1, -1, -1)
hot_tokens = D[0:n_bins]
z = [float(t)/n_art for t in token_count[0:n_bins]]
plt.barh(y_pos, z, align='center', alpha=0.4)
plt.yticks(y_pos, hot_tokens)
plt.xlabel('Average number of occurrences per article')
plt.title('Token distribution')
plt.show()
# INVERTED INDEX: EXAMPLE:
# D = ['token1', 'token2', 'token3', 'token4']
# D[1] = 'token2'
# invD = {'token1': 0, 'token2': 1, 'token3': 2, 'token4': 3}
# invD['token2'] = 1
# Compute inverse dictionary
# scode: invD = <FILL IN>
invD = dict(zip(D, xrange(n_tokens)))
### ALTERNATIVELY:
# invD_RDD = D_RDD.zipWithIndex() ### Tuples (token, index)
# Compute RDD replacing tokens by token_ids
# scode: corpus_sparseRDD = <FILL IN>
corpus_sparseRDD = corpus_wcRDD.map(lambda x: [(invD[t[0]], t[1]) for t in x])
# Convert the list of tuples into a Vectors.sparse object.
corpus_sparseRDD = corpus_sparseRDD.map(lambda x: Vectors.sparse(n_tokens, x))
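The two mappings above (token -> id via the inverse dictionary, then (token, count) -> (id, count)) can be checked in plain Python with a hypothetical three-token dictionary (toy names; the real D and invD are left untouched):

```python
toy_D = ["economi", "market", "trade"]              # stand-in for the real dictionary D
toy_invD = dict(zip(toy_D, range(len(toy_D))))      # token -> integer id, like invD
doc_wc = [("market", 3), ("trade", 1)]              # one document's (token, count) pairs
sparse_doc = [(toy_invD[t], c) for t, c in doc_wc]  # same mapping as corpus_sparseRDD
print(sparse_doc)  # [(1, 3), (2, 1)]
```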
corpus4lda = corpus_sparseRDD.zipWithIndex().map(lambda x: [x[1], x[0]]).cache()
print "Training LDA: this might take a while..."
# scode: ldaModel = LDA.<FILL IN>
ldaModel = LDA.train(corpus4lda, k=3)
# Output topics. Each is a distribution over words (matching word count vectors)
print("Learned topics (as distributions over vocab of " + str(ldaModel.vocabSize()) + " words):")
topics = ldaModel.topicsMatrix()
n_bins = 25
# Example data
y_pos = range(n_bins-1, -1, -1)
pylab.rcParams['figure.figsize'] = 16, 8 # Set figure size
for i in range(3):
topic = ldaModel.describeTopics(maxTermsPerTopic=n_bins)[i]
tokens = [D[n] for n in topic[0]]
weights = topic[1]
plt.subplot(1, 3, i+1)
plt.barh(y_pos, weights, align='center', alpha=0.4)
plt.yticks(y_pos, tokens)
plt.xlabel('Average number of occurrences per article')
plt.title('Token distribution')
from pyspark.mllib.common import callMLlibFunc, JavaModelWrapper
from pyspark.mllib.linalg.distributed import RowMatrix
class SVD(JavaModelWrapper):
"""Wrapper around the SVD scala case class."""
@property
def U(self):
"""Returns a RowMatrix whose columns are the left singular vectors of the SVD if computeU was set to be True."""
u = self.call("U")
if u is not None:
return RowMatrix(u)
@property
def s(self):
"""Returns a DenseVector with singular values in descending order."""
return self.call("s")
@property
def V(self):
"""Returns a DenseMatrix whose columns are the right singular vectors of the SVD."""
return self.call("V")
def computeSVD(row_matrix, k, computeU=False, rCond=1e-9):
"""Computes the singular value decomposition of the RowMatrix.
The given row matrix A of dimension (m X n) is decomposed into U * s * V^T where
* s: DenseVector consisting of square root of the eigenvalues (singular values) in descending order.
* U: (m X k) (left singular vectors) is a RowMatrix whose columns are the eigenvectors of (A X A')
* V: (n X k) (right singular vectors) is a Matrix whose columns are the eigenvectors of (A' X A)
:param k: number of singular values to keep. We might return less than k if there are numerically zero singular values.
:param computeU: Whether or not to compute U. If set to be True, then U is computed by A * V * sigma^-1
:param rCond: the reciprocal condition number. All singular values smaller than rCond * sigma(0) are treated as zero, where sigma(0) is the largest singular value.
:returns: SVD object
"""
java_model = row_matrix._java_matrix_wrapper.call("computeSVD", int(k), computeU, float(rCond))
return SVD(java_model)
from pyspark.ml.feature import *
from pyspark.mllib.linalg import Vectors
data = [(Vectors.dense([0.0, 1.0, 0.0, 7.0, 0.0]),), (Vectors.dense([2.0, 0.0, 3.0, 4.0, 5.0]),), (Vectors.dense([4.0, 0.0, 0.0, 6.0, 7.0]),)]
df = sqlContext.createDataFrame(data,["features"])
pca_extracted = PCA(k=2, inputCol="features", outputCol="pca_features")
model = pca_extracted.fit(df)
features = model.transform(df) # this create a DataFrame with the regular features and pca_features
# We can now extract the pca_features to prepare our RowMatrix.
pca_features = features.select("pca_features").rdd.map(lambda row : row[0])
mat = RowMatrix(pca_features)
# Once the RowMatrix is ready we can compute our Singular Value Decomposition
svd = computeSVD(mat,2,True)
print svd.s
# DenseVector([9.491, 4.6253])
print svd.U.rows.collect()
# [DenseVector([0.1129, -0.909]), DenseVector([0.463, 0.4055]), DenseVector([0.8792, -0.0968])]
print svd.V
# DenseMatrix(2, 2, [-0.8025, -0.5967, -0.5967, 0.8025], 0)
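The identity behind computeSVD can be verified locally with NumPy, no Spark needed: the returned factors must reconstruct the input matrix, with the singular values in descending order. A sketch on the same toy matrix:

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0, 7.0, 0.0],
              [2.0, 0.0, 3.0, 4.0, 5.0],
              [4.0, 0.0, 0.0, 6.0, 7.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(np.allclose(A, U @ np.diag(s) @ Vt))  # True: U * diag(s) * V^T == A
```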
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Corpus acquisition.
Step2: You can try with any other category. Take into account that the behavior of topic modelling algorithms may depend on the number of documents available for the analysis. Select a category with at least 100 articles. You can browse the wikipedia category tree here, https
Step3: Now, we have stored the whole text collection in two lists
Step4: Now, we will load the text collection into an RDD
Step5: 2. Corpus Processing
Step6: Also, we need to load a list of English stopwords. Now select the identifier "stopwords"
Step7: You can check the stopword list. This is a standard python list of strings. We could modify it by removing words or adding new ones if required.
Step8: Task
Step9: Task
Step10: 2.2. Stemming / Lemmatization
Step11: Alternatively, we can apply lemmatization. For English texts, we can use the lemmatizer from NLTK, which is based on WordNet. If you have not used WordNet before, you will likely need to download it from nltk
Step12: Task
Step13: One of the advantages of the lemmatizer method is that the result of lemmatization is still a true word, which is more advisable for the presentation of text processing results.
Step14: At this point, we have got a representation of documents as list of tuples (token, word_count) in corpus_wcRDD. From this RDD, we can compute a dictionary containing all tokens in the corpus as keys, and their respective number of occurrences as values.
Step15: Task
Step16: We can visualize the token distribution using D and token_count, for the most frequent terms
Step17: 3. Latent Dirichlet Allocation
Step18: The only remaining step consists of adding an identifier to each document of the corpus.
Step19: That's all. We can now call the LDA algorithm.
Step20: The whole topics matrix can be computed using the .topicsMatrix() method.
Step21: Alternatively, we can use the .describeTopics method, which returns the most relevant terms for each topic and is more useful for a graphical plot.
Step27: Exercise
|
11,845
|
<ASSISTANT_TASK:>
Python Code:
!pip install -q opencv-python
import os
import tensorflow.compat.v2 as tf
import tensorflow_hub as hub
import numpy as np
import cv2
from IPython import display
import math
# Load the model once from TF-Hub.
hub_handle = 'https://tfhub.dev/deepmind/mil-nce/s3d/1'
hub_model = hub.load(hub_handle)
def generate_embeddings(model, input_frames, input_words):
"""Generate embeddings from the model for video frames and input words."""
# Input_frames must be normalized in [0, 1] and of the shape Batch x T x H x W x 3
vision_output = model.signatures['video'](tf.constant(tf.cast(input_frames, dtype=tf.float32)))
text_output = model.signatures['text'](tf.constant(input_words))
return vision_output['video_embedding'], text_output['text_embedding']
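Downstream, retrieval is just a dot product between the two embedding spaces; the ranking logic can be exercised without loading the model, using deterministic stand-in embeddings:

```python
import numpy as np

video_embd = np.eye(3, 8)          # three deterministic stand-in video embeddings
text_embd = np.zeros((1, 8))
text_embd[0, 2] = 1.0              # a query embedding aligned with video 2
scores = text_embd @ video_embd.T  # the same dot-product scoring used further below
best = int(scores[0].argmax())
print(best)  # 2: the aligned video ranks first
```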
# @title Define video loading and visualization functions { display-mode: "form" }
# Utilities to open video files using CV2
def crop_center_square(frame):
y, x = frame.shape[0:2]
min_dim = min(y, x)
start_x = (x // 2) - (min_dim // 2)
start_y = (y // 2) - (min_dim // 2)
return frame[start_y:start_y+min_dim,start_x:start_x+min_dim]
def load_video(video_url, max_frames=32, resize=(224, 224)):
path = tf.keras.utils.get_file(os.path.basename(video_url)[-128:], video_url)
cap = cv2.VideoCapture(path)
frames = []
try:
while True:
ret, frame = cap.read()
if not ret:
break
frame = crop_center_square(frame)
frame = cv2.resize(frame, resize)
frame = frame[:, :, [2, 1, 0]]
frames.append(frame)
if len(frames) == max_frames:
break
finally:
cap.release()
frames = np.array(frames)
if len(frames) < max_frames:
n_repeat = int(math.ceil(max_frames / float(len(frames))))
frames = frames.repeat(n_repeat, axis=0)
frames = frames[:max_frames]
return frames / 255.0
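The repeat-padding branch at the end of load_video can be checked on a dummy "clip" shorter than max_frames. Note that ndarray.repeat duplicates each frame in place (0,0,0,0,1,1,...) rather than looping the clip from the start:

```python
import math
import numpy as np

frames = np.arange(10)   # ten stand-in "frames"
max_frames = 32
n_repeat = int(math.ceil(max_frames / float(len(frames))))  # same formula as load_video
padded = frames.repeat(n_repeat, axis=0)[:max_frames]
print(padded.shape, padded[:5])  # (32,) [0 0 0 0 1]
```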
def display_video(urls):
html = '<table>'
html += '<tr><th>Video 1</th><th>Video 2</th><th>Video 3</th></tr><tr>'
for url in urls:
html += '<td>'
html += '<img src="{}" height="224">'.format(url)
html += '</td>'
html += '</tr></table>'
return display.HTML(html)
def display_query_and_results_video(query, urls, scores):
"""Display a text query and the top result videos and scores."""
sorted_ix = np.argsort(-scores)
html = ''
html += '<h2>Input query: <i>{}</i> </h2><div>'.format(query)
html += 'Results: <div>'
html += '<table>'
html += '<tr><th>Rank #1, Score:{:.2f}</th>'.format(scores[sorted_ix[0]])
html += '<th>Rank #2, Score:{:.2f}</th>'.format(scores[sorted_ix[1]])
html += '<th>Rank #3, Score:{:.2f}</th></tr><tr>'.format(scores[sorted_ix[2]])
for idx in sorted_ix:
url = urls[idx]
html += '<td>'
html += '<img src="{}" height="224">'.format(url)
html += '</td>'
html += '</tr></table>'
return html
# @title Load example videos and define text queries { display-mode: "form" }
video_1_url = 'https://upload.wikimedia.org/wikipedia/commons/b/b0/YosriAirTerjun.gif' # @param {type:"string"}
video_2_url = 'https://upload.wikimedia.org/wikipedia/commons/e/e6/Guitar_solo_gif.gif' # @param {type:"string"}
video_3_url = 'https://upload.wikimedia.org/wikipedia/commons/3/30/2009-08-16-autodrift-by-RalfR-gif-by-wau.gif' # @param {type:"string"}
video_1 = load_video(video_1_url)
video_2 = load_video(video_2_url)
video_3 = load_video(video_3_url)
all_videos = [video_1, video_2, video_3]
query_1_video = 'waterfall' # @param {type:"string"}
query_2_video = 'playing guitar' # @param {type:"string"}
query_3_video = 'car drifting' # @param {type:"string"}
all_queries_video = [query_1_video, query_2_video, query_3_video]
all_videos_urls = [video_1_url, video_2_url, video_3_url]
display_video(all_videos_urls)
# Prepare video inputs.
videos_np = np.stack(all_videos, axis=0)
# Prepare text input.
words_np = np.array(all_queries_video)
# Generate the video and text embeddings.
video_embd, text_embd = generate_embeddings(hub_model, videos_np, words_np)
# Scores between video and text is computed by dot products.
all_scores = np.dot(text_embd, tf.transpose(video_embd))
# Display results.
html = ''
for i, words in enumerate(words_np):
html += display_query_and_results_video(words, all_videos_urls, all_scores[i, :])
html += '<br>'
display.HTML(html)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Import the TF-Hub model
Step4: Demonstrate text-to-video retrieval
|
11,846
|
<ASSISTANT_TASK:>
Python Code:
a = 1
a
b = 'pew'
b
%matplotlib inline
import matplotlib.pyplot as plt
from pylab import *
x = linspace(0, 5, 10)
y = x ** 2
figure()
plot(x, y, 'r')
xlabel('x')
ylabel('y')
title('title')
show()
import numpy as np
num_points = 130
y = np.random.random(num_points)
plt.plot(y)
%%latex
\begin{align}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{align}
import re
text = 'foo bar\t baz \tqux'
re.split('\s+', text)
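A small extension of the split above: with a capturing group in the pattern, re.split keeps the delimiter runs in the result (raw strings are used here to keep the backslashes literal):

```python
import re

text = 'foo bar\t baz \tqux'
print(re.split(r'\s+', text))    # ['foo', 'bar', 'baz', 'qux']
print(re.split(r'(\s+)', text))  # delimiters kept: ['foo', ' ', 'bar', '\t ', 'baz', ' \t', 'qux']
```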
from plotly import __version__
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
from plotly.graph_objs import Scatter, Figure, Layout
init_notebook_mode(connected=True)
iplot([{"x": [1, 2, 3], "y": [3, 1, 6]}])
from bokeh.plotting import figure, output_notebook, show
output_notebook()
p = figure()
p.line([1, 2, 3, 4, 5], [6, 7, 2, 4, 5], line_width=2)
show(p)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This is some text, here comes some latex
Step2: Apos?
Step3: Javascript plots
Step4: bokeh
|
11,847
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
ydata = np.genfromtxt('dataForNathan.csv', delimiter=',')[:-1]
xdata = np.arange(ydata.size)+1
plt.figure(figsize=(7,7)); plt.xlim(0,64)
plt.plot(xdata, ydata); plt.scatter(xdata,ydata, c='k')
plt.show()
import scipy.special as sp
def dog(x, p):
FR = 4.0
dt = 1.0/FR
a1 = p[0]/p[2]
b1 = dt/p[2]
a2 = p[1]/p[3]
b2 = dt/p[3]
c1 = a1*np.log(b1) - sp.gammaln(a1) + np.log(p[4])
c2 = a2*np.log(b2) - sp.gammaln(a2) - np.log(p[5])
g1 = np.exp(c1 + (a1 - 1.0)*np.log(x) - b1*x)
g2 = np.exp(c2 + (a2 - 1.0)*np.log(x) - b2*x)
y = g1 - g2
return (y,g1,g2)
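Each component of dog is a Gamma density assembled in log space (gammaln keeps the normalizer from overflowing). A minimal standard-library check of that construction, with math.lgamma standing in for scipy's gammaln:

```python
import math

def gamma_pdf(x, a, b):
    # exp(a*log(b) - lgamma(a) + (a - 1)*log(x) - b*x): the same log-space form as dog()
    return math.exp(a * math.log(b) - math.lgamma(a) + (a - 1.0) * math.log(x) - b * x)

# A proper density should integrate to ~1; crude Riemann sum for Gamma(shape=4, rate=1).
dx = 0.01
total = sum(gamma_pdf(i * dx, 4.0, 1.0) * dx for i in range(1, 5000))
print(total)  # close to 1.0
```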
params = np.array([4.0,8.0,1.5,0.89,.93,3.3])
y,g1,g2 = dog(xdata, params)
fig = plt.figure(figsize=(7,7))
plt.plot(xdata,y,label='y')
plt.plot(xdata,g1,label='g1')
plt.plot(xdata,g2,label='g2')
plt.plot(xdata,np.zeros(y.size),c='k')
plt.legend()
plt.xlim(0,64)
plt.figure(figsize=(50,50))
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: He is choosing to model the function as the difference of Gamma distributions
|
11,848
|
<ASSISTANT_TASK:>
Python Code:
import os
from gensim import corpora, models
%load_ext memory_profiler
import scipy
scipy.show_config()
MODELS_DIR = "../Data/models/lda_standard"
num_topics = 10
dictionary = corpora.Dictionary.load(os.path.join(MODELS_DIR,'twentyNewsGroup.dict'))
corpus = corpora.MmCorpus(os.path.join(MODELS_DIR, 'corpora.mm'))
%timeit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=num_topics)
%memit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=num_topics)
%timeit lda = models.ldamulticore.LdaMulticore(corpus=corpus, id2word=dictionary, num_topics=num_topics, workers = 1)
%timeit lda = models.ldamulticore.LdaMulticore(corpus=corpus, id2word=dictionary, num_topics=num_topics, workers = 2)
%timeit lda = models.ldamulticore.LdaMulticore(corpus=corpus, id2word=dictionary, num_topics=num_topics, workers = 3)
%timeit lda = models.ldamulticore.LdaMulticore(corpus=corpus, id2word=dictionary, num_topics=num_topics, workers = 4)
%memit lda = models.ldamulticore.LdaMulticore(corpus=corpus, id2word=dictionary, num_topics=num_topics, workers = 1)
%memit lda = models.ldamulticore.LdaMulticore(corpus=corpus, id2word=dictionary, num_topics=num_topics, workers = 2)
%memit lda = models.ldamulticore.LdaMulticore(corpus=corpus, id2word=dictionary, num_topics=num_topics, workers = 3)
%memit lda = models.ldamulticore.LdaMulticore(corpus=corpus, id2word=dictionary, num_topics=num_topics, workers = 4)
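Outside IPython, the same wall-clock comparison can be scripted with the standard timeit module. A dummy workload is shown; substituting the LdaMulticore call from above is the obvious swap:

```python
import timeit

def workload():
    # stand-in for models.ldamulticore.LdaMulticore(corpus=..., workers=...)
    return sum(i * i for i in range(10000))

elapsed = timeit.timeit(workload, number=5)  # total seconds across 5 runs
print(elapsed)
```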
%timeit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=10, iterations = 100)
%memit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=10, iterations = 100)
%timeit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=10, iterations = 300)
%memit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=10, iterations = 300)
%timeit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=10, iterations = 1000)
%memit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=10, iterations = 1000)
%timeit lda = models.ldamulticore.LdaMulticore(corpus=corpus, id2word=dictionary, num_topics=10, workers = 3, iterations = 100)
%memit lda = models.ldamulticore.LdaMulticore(corpus=corpus, id2word=dictionary, num_topics=10, workers = 3, iterations = 100)
%timeit lda = models.ldamulticore.LdaMulticore(corpus=corpus, id2word=dictionary, num_topics=10, workers = 3, iterations = 300)
%memit lda = models.ldamulticore.LdaMulticore(corpus=corpus, id2word=dictionary, num_topics=10, workers = 3, iterations = 300)
%timeit lda = models.ldamulticore.LdaMulticore(corpus=corpus, id2word=dictionary, num_topics=10, workers = 3, iterations = 1000)
%memit lda = models.ldamulticore.LdaMulticore(corpus=corpus, id2word=dictionary, num_topics=10, workers = 3, iterations = 1000)
%timeit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=5, iterations=100)
%memit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=5, iterations=100)
%timeit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=20, iterations=100)
%memit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=20, iterations=100)
%timeit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=40, iterations=100)
%memit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=40, iterations=100)
%timeit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=60, iterations=100)
%memit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=60, iterations=100)
%timeit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=80, iterations=100)
%memit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=80, iterations=100)
#default number of passes is 1
%timeit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=5, passes=1)
%memit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=5, passes=1)
%timeit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=5, passes=2)
%memit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=5, passes=2)
%timeit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=5, alpha='symmetric', iterations=1000)
%memit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=5, alpha='symmetric', iterations=1000)
%timeit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=5, alpha='asymmetric', iterations=1000)
%memit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=5, alpha='asymmetric', iterations=1000)
%timeit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=5, alpha='auto', iterations=1000)
%memit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=5, alpha='auto', iterations=1000)
%timeit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=5, chunksize=2000, iterations=1000)
%memit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=5, chunksize=2000, iterations=1000)
%timeit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=5, chunksize=3000, iterations=1000)
%memit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=5, chunksize=3000, iterations=1000)
%timeit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=5, chunksize=4000, iterations=1000)
%memit lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=5, chunksize=4000, iterations=1000)
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
sns.set(style="white")
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 4))
d = { 'time' : pd.Series([74, 63, 53.2, 51.7, 50.8], index=range(5)),
      'peak_memory' : pd.Series([310.88, 462.82, 416.68, 465.73, 502.21], index=range(5)),
      'memory_increment' : pd.Series([65.73, 198.18, 127.80, 195.93, 92.29], index=range(5)),
      'algo' : pd.Series(['lda', 'lda-m(1)', 'lda-m(2)', 'lda-m(3)', 'lda-m(4)'], index=range(5))}
data = pd.DataFrame(d)
plt.suptitle('Default settings; # of topics 10', fontsize=20)
ax1.set_title('algo vs execution time')
ax1.set_xlabel('Algo')
ax1.set_ylabel('Time in sec')
data = pd.DataFrame(d)
sns.barplot(x='algo', y='time', data=data, palette='PuBu', ax=ax1)
ax2.set_title('algo vs memory usage')
ax2.set_xlabel('Algo')
ax2.set_ylabel('Memory in MB')
sns.barplot(x='algo', y='peak_memory', data=data, palette='Greens', ax=ax2)
f1, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 4))
d1 = { 'time' : pd.Series([84, 91, 96, 63, 91, 127], index=range(6)),
       'memory' : pd.Series([469.88, 330.81, 331.52, 488.07, 595.02, 542.75], index=range(6)),
       'iterations' : pd.Series([100, 300, 1000, 100, 300, 1000], index=range(6)),
       'algo' : pd.Series(['lda', 'lda', 'lda', 'lda-m(3)', 'lda-m(3)', 'lda-m(3)'], index=range(6))}
data1 = pd.DataFrame(d1)
plt.suptitle('Number of iterations', fontsize=20)
ax1.set_title('algo vs execution time')
ax1.set_xlabel('Algo')
ax1.set_ylabel('Time in sec')
sns.barplot(x='algo', y='time', hue='iterations', data=data1, palette='Purples', ax=ax1)
ax2.set_title('algo vs memory usage')
ax2.set_xlabel('Algo')
ax2.set_ylabel('Memory in MB')
sns.barplot(x='algo', y='memory', hue='iterations', data=data1, palette='pastel', ax=ax2)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 4))
d = { 'time' : pd.Series([68, 113, 172, 226, 273], index=range(5)),
      'memory' : pd.Series([448.07, 491.55, 563.61, 706.84, 889.00], index=range(5)),
      'topics' : pd.Series([5, 20, 40, 60, 80], index=range(5))}
data = pd.DataFrame(d)
plt.suptitle('Number of topics', fontsize=20)
ax1.set_title('Execution time')
ax1.set_xlabel('Algo')
ax1.set_ylabel('Time in sec')
sns.barplot(x='topics', y='time', data=data, palette=sns.cubehelix_palette(5), ax=ax1)
ax2.set_title('Memory usage')
ax2.set_xlabel('Algo')
ax2.set_ylabel('Memory in MB')
sns.barplot(x='topics', y='memory', data=data, ax=ax2)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 4))
d = { 'time' : pd.Series([57.8, 98], index=range(2)),
      'memory' : pd.Series([425.43, 282.23], index=range(2)),
      'passes' : pd.Series([1, 2], index=range(2))}
data = pd.DataFrame(d)
plt.suptitle('Number of passes', fontsize=20)
ax1.set_title('Execution time')
ax1.set_xlabel('Algo')
ax1.set_ylabel('Time in sec')
sns.barplot(x='passes', y='time', data=data, palette='pastel', ax=ax1)
ax2.set_title('Memory usage')
ax2.set_xlabel('Algo')
ax2.set_ylabel('Memory in MB')
sns.barplot(x='passes', y='memory', data=data, ax=ax2)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 4))
d = { 'time' : pd.Series([81, 80, 78], index=range(3)),
      'memory' : pd.Series([282.17, 282.23, 282.47], index=range(3)),
      'alpha' : pd.Series(['symmetric', 'asymmetric', 'auto'], index=range(3))}
data = pd.DataFrame(d)
plt.suptitle('Alpha', fontsize=20)
ax1.set_title('Execution time')
ax1.set_xlabel('Algo')
ax1.set_ylabel('Time in sec')
sns.barplot(x='alpha', y='time', data=data, palette='Reds', ax=ax1)
ax2.set_title('Memory usage')
ax2.set_xlabel('Algo')
ax2.set_ylabel('Memory in MB')
sns.barplot(x='alpha', y='memory', data=data, ax=ax2)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 4))
d = { 'time' : pd.Series([81, 94, 112], index=range(3)),
      'memory' : pd.Series([282.43, 294.43, 305.25], index=range(3)),
      'chunk size' : pd.Series([2000, 3000, 4000], index=range(3))}
data = pd.DataFrame(d)
plt.suptitle('Chunk size', fontsize=20)
ax1.set_title('Execution time')
ax1.set_xlabel('Algo')
ax1.set_ylabel('Time in sec')
sns.barplot(x='chunk size', y='time', data=data, palette='Blues', ax=ax1)
ax2.set_title('Memory usage')
ax2.set_xlabel('Algo')
ax2.set_ylabel('Memory in MB')
sns.barplot(x='chunk size', y='memory', data=data, palette=sns.cubehelix_palette(3, start=.5, rot=-.75), ax=ax2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Default settings
Step2: Additional parameters
Step3: Testing LDA with iterations with 3 workers
Step4: Number of topics
Step5: Number of passes
Step6: Alpha
Step7: Chunksize
Step8: Visualization
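The `%timeit`/`%memit` magics used above only exist inside IPython. A rough plain-Python equivalent can be built from the standard library — this is a sketch, and the workload lambda below is only a stand-in for the `models.LdaModel(...)` call, which requires gensim:

```python
import timeit
import tracemalloc

def benchmark(fit, repeat=3):
    """Time a training callable and report its peak Python-level memory,
    roughly mirroring what %timeit / %memit do inside IPython."""
    best = min(timeit.repeat(fit, number=1, repeat=repeat))
    tracemalloc.start()
    fit()
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return best, peak / 2**20  # (seconds, peak MiB)

# Stand-in workload; in the notebook this would be the LdaModel training call.
secs, peak_mib = benchmark(lambda: [i * i for i in range(100_000)])
```

Swapping the stand-in for `lambda: models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=5)` would approximate the measurements above, up to the difference between `tracemalloc` (Python allocations only) and `%memit` (whole-process memory).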
|
11,849
|
<ASSISTANT_TASK:>
Python Code:
# Install jdk8
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
import os
# Set environment variable JAVA_HOME.
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
!update-alternatives --set java /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
!java -version
# Install latest release version of analytics-zoo
# Installing analytics-zoo from pip will automatically install pyspark, bigdl, and their dependencies.
!pip install --pre --upgrade analytics-zoo
# Install python dependencies
!pip install tensorflow==1.15.0
import os
import zipfile
import argparse
import numpy as np
import tensorflow as tf
from bigdl.dataset import base
from sklearn.model_selection import train_test_split
from zoo.orca import init_orca_context, stop_orca_context
from zoo.orca import OrcaContext
from zoo.orca.learn.tf.estimator import Estimator
from zoo.orca.data import SharedValue
import zoo.orca.data.pandas
# recommended to set it to True when running Analytics Zoo in Jupyter notebook
OrcaContext.log_output = True # (this will display terminal's stdout and stderr in the Jupyter notebook).
cluster_mode = "local"
if cluster_mode == "local":
init_orca_context(cluster_mode="local", cores=4) # run in local mode
elif cluster_mode == "yarn":
init_orca_context(cluster_mode="yarn-client", num_nodes=2, cores=2, driver_memory="6g") # run on Hadoop YARN cluster
# Download and extract movielens 1M data.
url = 'http://files.grouplens.org/datasets/movielens/ml-1m.zip'
local_file = base.maybe_download('ml-1m.zip', '.', url)
if not os.path.exists('./ml-1m'):
zip_ref = zipfile.ZipFile(local_file, 'r')
zip_ref.extractall('.')
zip_ref.close()
# Read in the dataset, and do a little preprocessing
rating_files="./ml-1m/ratings.dat"
new_rating_files="./ml-1m/ratings_new.dat"
if not os.path.exists(new_rating_files):
fin = open(rating_files, "rt")
fout = open(new_rating_files, "wt")
for line in fin:
# replace :: to : for spark 2.4 support
fout.write(line.replace('::', ':'))
fin.close()
fout.close()
full_data = zoo.orca.data.pandas.read_csv(new_rating_files, sep=':', header=None,
names=['user', 'item', 'label'], usecols=[0, 1, 2],
dtype={0: np.int32, 1: np.int32, 2: np.int32})
user_set = set(full_data['user'].unique())
item_set = set(full_data['item'].unique())
min_user_id = min(user_set)
max_user_id = max(user_set)
min_item_id = min(item_set)
max_item_id = max(item_set)
print(min_user_id, max_user_id, min_item_id, max_item_id)
# update label starting from 0. That's because ratings go from 1 to 5, while the matrix columns go from 0 to 4
def update_label(df):
df['label'] = df['label'] - 1
return df
# run Python codes on each partition in a data-parallel fashion using `XShards.transform_shard`
full_data = full_data.transform_shard(update_label)
# split to train/test dataset
def split_train_test(data):
train, test = train_test_split(data, test_size=0.2, random_state=100)
return train, test
train_data, test_data = full_data.transform_shard(split_train_test).split()
class NCF(object):
def __init__(self, embed_size, user_size, item_size):
self.user = tf.placeholder(dtype=tf.int32, shape=(None,))
self.item = tf.placeholder(dtype=tf.int32, shape=(None,))
self.label = tf.placeholder(dtype=tf.int32, shape=(None,))
# GMF part starts
with tf.name_scope("GMF"):
user_embed_GMF = tf.contrib.layers.embed_sequence(self.user, vocab_size=user_size + 1,
embed_dim=embed_size)
item_embed_GMF = tf.contrib.layers.embed_sequence(self.item, vocab_size=item_size + 1,
embed_dim=embed_size)
GMF = tf.multiply(user_embed_GMF, item_embed_GMF)
# MLP part starts
with tf.name_scope("MLP"):
user_embed_MLP = tf.contrib.layers.embed_sequence(self.user, vocab_size=user_size + 1,
embed_dim=embed_size)
item_embed_MLP = tf.contrib.layers.embed_sequence(self.item, vocab_size=item_size + 1,
embed_dim=embed_size)
interaction = tf.concat([user_embed_MLP, item_embed_MLP], axis=-1)
layer1_MLP = tf.layers.dense(inputs=interaction, units=embed_size * 2)
layer1_MLP = tf.layers.dropout(layer1_MLP, rate=0.2)
layer2_MLP = tf.layers.dense(inputs=layer1_MLP, units=embed_size)
layer2_MLP = tf.layers.dropout(layer2_MLP, rate=0.2)
layer3_MLP = tf.layers.dense(inputs=layer2_MLP, units=embed_size // 2)
layer3_MLP = tf.layers.dropout(layer3_MLP, rate=0.2)
# Concate the two parts together
with tf.name_scope("concatenation"):
concatenation = tf.concat([GMF, layer3_MLP], axis=-1)
self.logits = tf.layers.dense(inputs=concatenation, units=5)
self.logits_softmax = tf.nn.softmax(self.logits)
self.class_number = tf.argmax(self.logits_softmax, 1)
with tf.name_scope("loss"):
self.loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=self.label, logits=self.logits, name='loss'))
with tf.name_scope("optimzation"):
self.optim = tf.train.AdamOptimizer(1e-3, name='Adam')
self.optimizer = self.optim.minimize(self.loss)
embedding_size=16
model = NCF(embedding_size, max_user_id, max_item_id)
batch_size=1280
epochs=1
model_dir='./'
# create an Estimator.
estimator = Estimator.from_graph(
inputs=[model.user, model.item],
outputs=[model.class_number],
labels=[model.label],
loss=model.loss,
optimizer=model.optim,
model_dir=model_dir,
metrics={"loss": model.loss})
# fit the Estimator
estimator.fit(data=train_data,
              batch_size=batch_size,
              epochs=epochs,
              feature_cols=['user', 'item'],
              label_cols=['label'],
              validation_data=test_data)
checkpoint_path = os.path.join(model_dir, "NCF.ckpt")
estimator.save_tf_checkpoint(checkpoint_path)
estimator.shutdown()
# predict using the Estimator
def predict(predict_data, user_size, item_size):
tf.reset_default_graph()
with tf.Session() as sess:
model = NCF(embedding_size, user_size, item_size)
saver = tf.train.Saver(tf.global_variables())
checkpoint_path = os.path.join(model_dir, "NCF.ckpt")
saver.restore(sess, checkpoint_path)
estimator = Estimator.from_graph(
inputs=[model.user, model.item],
outputs=[model.class_number],
sess=sess,
model_dir=model_dir
)
predict_result = estimator.predict(predict_data, feature_cols=['user', 'item'])
predictions = predict_result.collect()
assert 'prediction' in predictions[0]
print(predictions[0]['prediction'])
predict(test_data, max_user_id, max_item_id)
stop_orca_context()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install Analytics Zoo
Step2: Data-Parallel Pandas with XShards for Distributed Deep Learning
Step3: Init Orca Context
Step4: Data Preprocessing with XShards
Step5: Read movive len csv to XShards of Pandas Dataframe.
Step6: Use XShards to process large-scale dataset with existing Pyhton codes in a distributed and data-parallel fashion.
Step7: Define NCF Model
Step8: Fit with Orca Estimator
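The GMF branch defined in Step7 reduces to an element-wise product of the user and item embeddings. A pure-Python sketch with hypothetical toy embedding values (embed_size=4; illustrative numbers, not taken from the trained model) shows the interaction:

```python
# Hypothetical toy embeddings (embed_size=4) for one user and one item.
user_embed = [0.5, -1.0, 2.0, 0.0]
item_embed = [2.0, 0.5, -1.0, 3.0]

# GMF interaction: element-wise product, mirroring tf.multiply(user_embed_GMF, item_embed_GMF).
gmf = [u * v for u, v in zip(user_embed, item_embed)]

# Classic matrix factorization would score with the sum of this vector;
# NCF instead concatenates it with the MLP branch and learns a final dense layer.
mf_score = sum(gmf)
```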
|
11,850
|
<ASSISTANT_TASK:>
Python Code:
print('"{}" = "{}"'.format('A', ord('A')))
print('"{}" = "{}"'.format('a', ord('a')))
print('"{}" = "{}"'.format(88, chr(88)))
print('"{}" = "{}"'.format(112, chr(112)))
for n in range(5):
print(n)
for char in ['p', 'y', 't', 'h', 'o', 'n']:
print(char)
for char in "python":
print(char)
for char in "python":
print('"{}" = "{}"'.format(char, ord(char)))
word = "python"
total = 0
for char in word:
total += ord(char)
print('"{}" = "{}"'.format(word, total))
word = "python"
all = []
for char in word:
all.append(ord(char))
print(all)
print('"{}" = "{}"'.format(word, sum(all)))
list(map(str.upper, "python"))
list(map(ord, "python"))
sum(map(ord, "python"))
for line in open('gettysburg.txt'):
print(line)
for line in open('gettysburg.txt'):
print(line, end='')
for line in open('gettysburg.txt'):
print(line.rstrip())
for line in open('gettysburg.txt'):
for word in line.rstrip().split():
print(word)
for x in "a8,X.b!G":
print('"{}" = "{}"'.format(x, str.isalpha(x)))
for x in "a8,X.b!G":
print('"{}" = "{}"'.format(x, x.isalpha()))
list(filter(str.isalpha, "a8,X.b!G"))
list(filter(lambda char: char.isalpha(), "a8,X.b!G"))
list(filter(lambda x: x % 2 == 0, range(10)))
''.join(filter(str.isalpha, "a8,X.b!G"))
for line in open('gettysburg.txt'):
for word in line.rstrip().split():
print(''.join(filter(str.isalpha, word)))
for line in open('gettysburg.txt'):
for word in line.rstrip().split():
clean = ''.join(filter(str.isalpha, word))
print('"{}" = "{}"'.format(clean, sum(map(ord, clean))))
for line in map(str.rstrip, open('gettysburg.txt')):
for word in map(lambda w: ''.join(filter(str.isalpha, w)), line.split()):
print('"{}" = "{}"'.format(word, sum(map(ord, word))))
def onlychars(word):
return ''.join(filter(str.isalpha, word))
def word2num(word):
return sum(map(ord, word))
for line in map(str.rstrip, open('gettysburg.txt')):
for word in map(onlychars, line.split()):
print('"{}" = "{}"'.format(word, word2num(word)))
from collections import defaultdict
def onlychars(word):
return ''.join(filter(str.isalpha, word))
file = '/usr/share/dict/words'
num2word = defaultdict(list)
for line in map(str.rstrip, open(file)):
for word in map(onlychars, line.split()):
num = sum(map(ord, word))
num2word[num].append(word)
count_per_n = []
for n, wordlist in num2word.items():
count_per_n.append((len(wordlist), n))
top10 = list(reversed(sorted(count_per_n)))[:10]
for (num_of_words, n) in top10:
print('"{}" = {}'.format(n, ', '.join(num2word[n])))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To implement an ASCII version of gematria in Python, we need to turn each letter into a number and add them all together. So, to start, note that Python can use a for loop to cycle through all the members of a list (in order)
Step2: A "word" is simply a list of characters, so we can iterate over it just like a list of numbers
Step3: Let's print the ordinal (ASCII) value instead
Step4: Now let's create a variable to hold the running sum of the values
Step5: Another way could be to create another list to hold the values and then use the sum function
Step6: We can use a map function to transform all the characters via the ord function. This is interesting because map is a function that takes another function as its first argument. The second is a list of items to feed into the function. The result is the transformed list. For instance, we can use the str.upper function to turn each letter (e.g., "p") into the upper-case version ("P"). NB
Step7: Now we can sum those numbers
Step8: Now let's think about how we could apply this to all the words in a file. As above, we can use a for loop to iterate over all the lines in a file
Step9: The original is single-spaced, so why is this printing double-spaced? The for loop reads each "line" which is a string of text up to and including a newline ("\n"). The print by default adds a newline, so we either need to print(line, end='') to indicate we don't want anything at the end
Step10: Or we need to use the rstrip function to "strip" whitespace off the "r"ight side of the line
Step11: We can use the split function to get all the words for each line and a for loop to iterate over those
Step12: We want to get rid of anything that is not character like the punctuation. There is a function in the str library called isalpha that returns True or False
Step13: Each x in the loop is itself a string, so we can call the method directly on the variable
Step14: Similar to what we saw above with the map function, we can use filter to find all the characters in a string which are True for isalpha. filter is another "higher-order function" that takes another function for its first argument (called the "predicate") and a list as the second argument. Whereas map returns all the elements of the list transformed by the function, filter returns only those for which the predicate is true.
Step15: The first argument for map and filter is called the "lambda," and sometimes you will see it written out explicitly like so
Step16: Here is a way to find only even numbers
Step17: Let's turn that list of characters back into a word with the join function
Step18: So, going back to our Gettysburg example, here is a list of all the words without punctuation
Step19: Now let's print the sum of the ord values for each cleaned-up word
Step20: Notice that we are calling rstrip for every line, so we could easily move that into a map, and the "cleaning" code can likewise be moved into a map
Step21: At this point, we have arguably sacrificed readability for the sake of using map and filter -- another instance of "just because you can doesn't mean you should"!
Step22: With this, I hope you now understand what is meant by a "higher-order function" (functions that take other functions as arguments) and how they can streamline your code.
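The readability trade-off mentioned in the last two steps can be seen by putting the explicit-loop and `filter` styles side by side; both clean a word identically:

```python
def clean_loop(word):
    # Explicit-loop style: keep only alphabetic characters.
    out = []
    for ch in word:
        if ch.isalpha():
            out.append(ch)
    return ''.join(out)

def clean_functional(word):
    # Higher-order-function style: the same cleaning via filter.
    return ''.join(filter(str.isalpha, word))

for w in ["a8,X.b!G", "nation's", "war."]:
    assert clean_loop(w) == clean_functional(w)
```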
|
11,851
|
<ASSISTANT_TASK:>
Python Code:
import sympy as sy
import numpy as np
from sympy import *
r = Symbol('r')
I = integrate(exp(-2*r**2)*r**2,(r,0,+oo))
C = sqrt(1/I)
print(latex(simplify(C)))
E = C**2*integrate((-2*r**4+3*r**2-r)*exp(-2*r**2),(r,0,oo))
print('Expected value is %0.4f Ha.'%E)
# Hydrogen atom energy equation is given in class notes
n=1
E_true = -1/(2*n**2) # unit Ha
print('The ture value is %0.4f Ha. So the expected value is greater than the true value.' %E_true)
gamma = symbols('gamma',positive=True) # gamma must be positive, otherwise R10 would grow as r increases.
r = symbols("r",positive=True)
C = sqrt(1/integrate(exp(-2*gamma*r**2)*r**2,(r,0,oo)))
E = C**2*integrate((-2*gamma**2*r**4+3*gamma*r**2-r)*exp(-2*gamma*r**2),(r,0,oo)) # expectation value of energy as a function of gamma
gamma_best=solve(diff(E,gamma),gamma)
print("Expectation of energy:");print(E)
print("Best value of gamma is: %s, which equals to %f."% (gammabest,8/(9*np.pi)))
import math
gamma_best = 8/(9*np.pi)
E_best = 8*math.sqrt(2)*gamma_best**(3/2)*(-1/(4*gamma_best) + 3*math.sqrt(2)*math.sqrt(np.pi)/(32*math.sqrt(gamma_best)))/math.sqrt(np.pi)
print("Energy with the best gamma: %0.3f eV."%E_best)
import matplotlib.pyplot as plt
gamma = np.linspace(0.001,1.5,10000)
E = []
for x in gamma:
E.append(8*math.sqrt(2)*x**(3/2)*(-1/(4*x) + 3*math.sqrt(2)*math.sqrt(np.pi)/(32*math.sqrt(x)))/math.sqrt(np.pi))
plt.plot(gamma,E)
plt.xlabel("Gamma")
plt.ylabel("Energy(eV)")
plt.axvline(x=8/(9*np.pi),color='k',linestyle='--')
plt.axhline(y=E_best,color='k',linestyle='--')
plt.annotate('Lowest energy spot', xy=(8/(9*np.pi), E_best), xytext=(0.5,-0.2), arrowprops=dict(facecolor='black'))
plt.show()
# From http://www.genstrom.net/public/biology/common/en/em_spectrum.html
print("The 1s energies become increasingly negative with inceasing Z. Light must become increasingly energetic to kick out one of them.")
hc = 1239.8 #eV*nm
E = [13.23430*27.212, 20.01336*27.212, 28.13652*27.212, 37.71226*27.212,48.64339*27.212,30.45968*27.212] # eV
lamb = [] #nm
for e in E:
lamb.append(hc/e)
print(lamb,"nm.\nThey corresponds to X-rays.")
hc = 1239.8 #eV*nm
E = [0.15147*27.212, 0.43435*27.212, 0.87602*27.212, 1.30625*27.212,1.89967*27.212,1.34278*27.212] # eV
lamb = [] #nm
for e in E:
lamb.append(hc/e)
print(lamb,"nm.\nThey corresponds to UVs")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: So the normalized 1s wavefunction is $\tilde{R}_{10}(r) = \frac{2}{\sqrt[4]{\pi}} 2^{\frac{3}{4}} e^{-r^2} = (\frac{128}{\pi}) ^ {\frac{1}{4}} e^{-r^2} $.
Step2: 3. What does the variational principle say about the expectation value of the energy of your guess as you vary a parameter $\gamma$ in your guess, $R_{10}=e^{-\gamma r^2}$? Suggest a strategy for determining the "best" $\gamma$.
Step3: Many-electrons means many troubles
Step4: 10. Why, qualitatively, do the energies vary as they do?
|
11,852
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import featuretools as ft
from featuretools.selection import (
remove_highly_correlated_features,
remove_highly_null_features,
remove_single_value_features,
)
from featuretools.primitives import NaturalLanguage
from featuretools.demo.flight import load_flight
es = load_flight(nrows=50)
es
fm, features = ft.dfs(entityset=es,
target_dataframe_name="trip_logs",
cutoff_time=pd.DataFrame({
'trip_log_id':[30, 1, 2, 3, 4],
'time':pd.to_datetime(['2016-09-22 00:00:00']*5)
}),
trans_primitives=[],
agg_primitives=[],
max_depth=2)
fm
ft.selection.remove_highly_null_features(fm)
remove_highly_null_features(fm, pct_null_threshold=.2)
fm
new_fm, new_features = remove_single_value_features(fm, features=features)
new_fm
set(features) - set(new_features)
new_fm, new_features = remove_single_value_features(fm, features=features, count_nan_as_value=True)
new_fm
set(features) - set(new_features)
fm, features = ft.dfs(entityset=es,
target_dataframe_name="trip_logs",
trans_primitives=['negate'],
agg_primitives=[],
max_depth=3)
fm.head()
new_fm, new_features = remove_highly_correlated_features(fm, features=features)
new_fm.head()
set(features) - set(new_features)
new_fm, new_features = remove_highly_correlated_features(fm, features=features, pct_corr_threshold=.9)
new_fm.head()
set(features) - set(new_features)
new_fm, new_features = remove_highly_correlated_features(fm, features=features, features_to_check=['air_time', 'distance', 'flights.distance_group'])
new_fm.head()
set(features) - set(new_features)
new_fm, new_features = remove_highly_correlated_features(fm, features=features, features_to_keep=['air_time', 'distance', 'flights.distance_group'])
new_fm.head()
set(features) - set(new_features)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Remove Highly Null Features
Step2: We look at the above feature matrix and decide to remove the highly null features
Step3: Notice that calling remove_highly_null_features didn't remove every feature that contains a null value. By default, we only remove features where the percentage of null values in the calculated feature matrix is above 95%. If we want to lower that threshold, we can set the pct_null_threshold parameter ourselves.
Step4: Remove Single Value Features
Step5: Now that we have the features definitions for the updated feature matrix, we can see that the features that were removed are
Step6: With the function used as it is above, null values are not considered when counting a feature's unique values. If we'd like to consider NaN its own value, we can set count_nan_as_value to True and we'll see flights.carrier and flights.flight_num back in the matrix.
Step7: The features that were removed are
Step8: Remove Highly Correlated Features
Step9: Note that we have some pretty clear correlations here between all the features and their negations.
Step10: The features that were removed are
Step11: Change the correlation threshold
Step12: The features that were removed are
Step13: Check a Subset of Features
Step14: The features that were removed are
Step15: Protect Features from Removal
Step16: The features that were removed are
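The pruning that `remove_highly_correlated_features` performs can be sketched in plain Python — a simplified stand-in for the library's behavior (Pearson correlation on numeric columns, dropping the later column of each pair above the threshold), not its actual implementation:

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient of two equal-length numeric sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    if sx == 0 or sy == 0:
        return 0.0  # constant column: treat as uncorrelated
    return cov / (sx * sy)

def drop_correlated(columns, threshold=0.95):
    """columns: dict of name -> list of values. Drops the later column of
    each pair whose |Pearson r| exceeds the threshold (simplified sketch)."""
    names = list(columns)
    dropped = set()
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if a not in dropped and b not in dropped:
                if abs(pearson(columns[a], columns[b])) > threshold:
                    dropped.add(b)
    return [n for n in names if n not in dropped]
```

For example, a perfectly anti-correlated copy of a column (like the negated features above) is dropped, while an uncorrelated column survives.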
|
11,853
|
<ASSISTANT_TASK:>
Python Code:
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
from collections import Counter
import numpy as np
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
# TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
for rev, label in zip(reviews, labels):
if (label == "POSITIVE"):
for word in rev.split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in rev.split(" "):
negative_counts[word] += 1
total_counts[word] += 1
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
# Create Counter object to store positive/negative ratios
pos_neg_ratios = Counter()
# TODO: Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times
for word, count in total_counts.most_common():
if (count >= 100):
pos_neg_ratios[word] = positive_counts[word] / float(negative_counts[word]+1)
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
# TODO: Convert ratios to logs
for word, ratio in pos_neg_ratios.most_common():
if (ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log(1/(ratio + 0.01))
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
# TODO: Create set named "vocab" containing all of the words from all of the reviews
vocab = set(total_counts.keys())
vocab_size = len(vocab)
print(vocab_size)
from IPython.display import Image
Image(filename='sentiment_network_2.png')
# TODO: Create layer_0 matrix with dimensions 1 by vocab_size, initially filled with zeros
layer_0 = np.zeros((1, vocab_size))
layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png')
# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
word2index
def update_input_layer(review):
    """Modify the global layer_0 to represent the vector form of review.
    The element at a given index of layer_0 should represent
    how many times the given word occurs in the review.
    Args:
        review(string) - the string of the review
    Returns:
        None
    """
global layer_0
# clear out previous state by resetting the layer to be all 0s
layer_0 *= 0
# TODO: count how many times each word is used in the given review and store the results in layer_0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
def get_target_for_label(label):
    """Convert a label to `0` or `1`.
    Args:
        label(string) - Either "POSITIVE" or "NEGATIVE".
    Returns:
        `0` or `1`.
    """
# TODO: Your code here
return 0 + (label == "POSITIVE")
labels[0]
get_target_for_label(labels[0])
labels[1]
get_target_for_label(labels[1])
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
        """Create a SentimentNetwork with the given settings
        Args:
            reviews(list) - List of reviews used for training
            labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
            hidden_nodes(int) - Number of nodes to create in the hidden layer
            learning_rate(float) - Learning rate to use while training
        """
# Assign a seed to our random number generator to ensure we get
# reproducable results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
for rev in reviews:
review_vocab.update(rev.split(" "))
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
label_vocab.update(labels)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0, self.output_nodes**-0.5, (self.hidden_nodes, self.output_nodes))
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1, input_nodes))
def update_input_layer(self, review):
# TODO: You can copy most of the code you wrote for update_input_layer
# earlier in this notebook.
#
# However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE
# THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS.
# For example, replace "layer_0 *= 0" with "self.layer_0 *= 0"
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
if (word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] += 1
def get_target_for_label(self, label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
return 0 + (label == "POSITIVE")
def sigmoid(self, x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1. / (1. + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
        # where "output" is the original output from the sigmoid function
return output * (1 - output)
def train(self, training_reviews, training_labels):
        # make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
self.update_input_layer(review)
layer_1 = self.layer_0.dot(self.weights_0_1)
output_layer = self.sigmoid(layer_1.dot(self.weights_1_2))
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
output_layer_error = output_layer - self.get_target_for_label(label)
output_layer_delta = output_layer_error * self.sigmoid_output_2_derivative(output_layer)
layer_1_error = output_layer_delta.dot(self.weights_1_2.T)
layer_1_delta = layer_1_error
self.weights_0_1 -= self.learning_rate * self.layer_0.T.dot(layer_1_delta)
self.weights_1_2 -= self.learning_rate * layer_1.T.dot(output_layer_delta)
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
if (output_layer >= 0.5 and label == "POSITIVE"):
correct_so_far += 1
elif (output_layer < 0.5 and label == "NEGATIVE"):
correct_so_far += 1
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
        """
        Attempts to predict the labels for the given testing_reviews,
        and uses the test_labels to calculate the accuracy of those predictions.
        """
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
        """
        Returns a POSITIVE or NEGATIVE prediction for the given review.
        """
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
self.update_input_layer(review.lower())
layer_1 = self.layer_0.dot(self.weights_0_1)
output_layer = self.sigmoid(layer_1.dot(self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
        # sigmoid output is always in (0, 1), so no abs() is needed here
        if output_layer[0] >= 0.5:
            return "POSITIVE"
        else:
            return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.test(reviews[-1000:],labels[-1000:])
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common()
# TODO: -Copy the SentimentNetwork class from Project 3 lesson
# -Modify it to reduce noise, like in the video
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
        """
        Create a SentimentNetwork with the given settings
        Args:
            reviews(list) - List of reviews used for training
            labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
            hidden_nodes(int) - Number of nodes to create in the hidden layer
            learning_rate(float) - Learning rate to use while training
        """
# Assign a seed to our random number generator to ensure we get
        # reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
for rev in reviews:
review_vocab.update(rev.split(" "))
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
label_vocab.update(labels)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0, self.output_nodes**-0.5, (self.hidden_nodes, self.output_nodes))
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1, input_nodes))
def update_input_layer(self, review):
# TODO: You can copy most of the code you wrote for update_input_layer
# earlier in this notebook.
#
# However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE
# THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS.
# For example, replace "layer_0 *= 0" with "self.layer_0 *= 0"
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
if (word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self, label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
return 0 + (label == "POSITIVE")
def sigmoid(self, x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1. / (1. + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
        # where "output" is the original output from the sigmoid function
return output * (1 - output)
def train(self, training_reviews, training_labels):
        # make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
self.update_input_layer(review)
layer_1 = self.layer_0.dot(self.weights_0_1)
output_layer = self.sigmoid(layer_1.dot(self.weights_1_2))
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
output_layer_error = output_layer - self.get_target_for_label(label)
output_layer_delta = output_layer_error * self.sigmoid_output_2_derivative(output_layer)
layer_1_error = output_layer_delta.dot(self.weights_1_2.T)
layer_1_delta = layer_1_error
self.weights_0_1 -= self.learning_rate * self.layer_0.T.dot(layer_1_delta)
self.weights_1_2 -= self.learning_rate * layer_1.T.dot(output_layer_delta)
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
if (output_layer >= 0.5 and label == "POSITIVE"):
correct_so_far += 1
elif (output_layer < 0.5 and label == "NEGATIVE"):
correct_so_far += 1
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
        """
        Attempts to predict the labels for the given testing_reviews,
        and uses the test_labels to calculate the accuracy of those predictions.
        """
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
        """
        Returns a POSITIVE or NEGATIVE prediction for the given review.
        """
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
self.update_input_layer(review.lower())
layer_1 = self.layer_0.dot(self.weights_0_1)
output_layer = self.sigmoid(layer_1.dot(self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
        # sigmoid output is always in (0, 1), so no abs() is needed here
        if output_layer[0] >= 0.5:
            return "POSITIVE"
        else:
            return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
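As a quick sanity check (a sketch added here for illustration, not part of the original notebook), the index-sum shortcut above should produce exactly the same hidden-layer values as the full matrix multiplication — it just skips the zero rows:

```python
import numpy as np

np.random.seed(1)
weights_0_1 = np.random.randn(10, 5)

# dense version: a mostly-zero input vector times the whole weight matrix
layer_0 = np.zeros(10)
indices = [4, 9]
layer_0[indices] = 1
full = layer_0.dot(weights_0_1)

# sparse version: just sum the weight rows for the active indices
sparse = np.zeros(5)
for index in indices:
    sparse += weights_0_1[index]

assert np.allclose(full, sparse)
```

This equivalence is what lets the next version of the network drop `layer_0` entirely and work directly with word indices.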
# TODO: -Copy the SentimentNetwork class from Project 4 lesson
# -Modify it according to the above instructions
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
        """
        Create a SentimentNetwork with the given settings
        Args:
            reviews(list) - List of reviews used for training
            labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
            hidden_nodes(int) - Number of nodes to create in the hidden layer
            learning_rate(float) - Learning rate to use while training
        """
# Assign a seed to our random number generator to ensure we get
        # reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
for rev in reviews:
review_vocab.update(rev.split(" "))
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
label_vocab.update(labels)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Hidden layer
self.layer_1 = np.zeros((1, hidden_nodes))
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0, self.output_nodes**-0.5, (self.hidden_nodes, self.output_nodes))
def get_target_for_label(self, label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
return 0 + (label == "POSITIVE")
def sigmoid(self, x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1. / (1. + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
        # where "output" is the original output from the sigmoid function
return output * (1 - output)
def train(self, training_reviews_raw, training_labels):
training_reviews = list()
for review in training_reviews_raw:
word_index = set()
for word in review.split(" "):
if (word in self.word2index.keys()):
word_index.add(self.word2index[word])
training_reviews.append(list(word_index))
        # make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
output_layer = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
output_layer_error = output_layer - self.get_target_for_label(label)
output_layer_delta = output_layer_error * self.sigmoid_output_2_derivative(output_layer)
layer_1_error = output_layer_delta.dot(self.weights_1_2.T)
layer_1_delta = layer_1_error
for index in review:
self.weights_0_1[index] -= self.learning_rate * layer_1_delta[0]
self.weights_1_2 -= self.learning_rate * self.layer_1.T.dot(output_layer_delta)
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
if (output_layer >= 0.5 and label == "POSITIVE"):
correct_so_far += 1
elif (output_layer < 0.5 and label == "NEGATIVE"):
correct_so_far += 1
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
        """
        Attempts to predict the labels for the given testing_reviews,
        and uses the test_labels to calculate the accuracy of those predictions.
        """
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
        """
        Returns a POSITIVE or NEGATIVE prediction for the given review.
        """
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
output_layer = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
        # sigmoid output is always in (0, 1), so no abs() is needed here
        if output_layer[0] >= 0.5:
            return "POSITIVE"
        else:
            return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000]*3,labels[:-1000]*3)
mlp.test(reviews[-1000:],labels[-1000:])
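To see roughly where this version's speedup comes from, here is a small timing sketch; the sizes below are assumptions for illustration, not the notebook's real vocabulary, and the exact timings will vary by machine:

```python
import numpy as np
import time

# With a large vocabulary but only ~100 distinct words per review,
# the sparse update touches ~100 weight rows instead of multiplying
# the full input vector against every row.
vocab_size, hidden, words_in_review = 70000, 10, 100
np.random.seed(1)
weights = np.random.randn(vocab_size, hidden)
indices = np.random.choice(vocab_size, words_in_review, replace=False)

layer_0 = np.zeros(vocab_size)
layer_0[indices] = 1

t0 = time.time()
dense = layer_0.dot(weights)        # full matrix multiply
t_dense = time.time() - t0

t0 = time.time()
sparse = np.zeros(hidden)
for i in indices:
    sparse += weights[i]            # only the active rows
t_sparse = time.time() - t0

assert np.allclose(dense, sparse)
```

Both paths give identical hidden-layer values; only the amount of arithmetic differs.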
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x: x[1], pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x: x[1], frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
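The two cutoffs the next version of the network applies — drop rare words (`min_count`) and drop frequent-but-neutral words whose positive/negative log-ratio sits near zero (`polarity_cutoff`) — can be sketched on toy counts (the words and numbers here are made up for illustration):

```python
import numpy as np
from collections import Counter

positive_counts = Counter({"great": 40, "movie": 30, "the": 100, "dull": 2})
negative_counts = Counter({"great": 4,  "movie": 28, "the": 95,  "dull": 30})
total_counts = positive_counts + negative_counts

min_count, polarity_cutoff = 20, 0.5
vocab = set()
for word, count in total_counts.items():
    if count < min_count:
        continue                    # too rare to trust
    ratio = np.log(positive_counts[word] / float(negative_counts[word] + 1))
    if abs(ratio) >= polarity_cutoff:
        vocab.add(word)             # strongly positive or strongly negative

# "the" and "movie" are frequent but neutral, so they are pruned;
# "great" and "dull" carry a clear sentiment signal, so they survive.
```

Shrinking the vocabulary this way removes noise from the input and makes every forward pass cheaper at the same time.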
# TODO: -Copy the SentimentNetwork class from Project 5 lesson
# -Modify it according to the above instructions
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, min_count = 10, polarity_cutoff = 0.1, hidden_nodes = 10, learning_rate = 0.1):
        """
        Create a SentimentNetwork with the given settings
        Args:
            reviews(list) - List of reviews used for training
            labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
            hidden_nodes(int) - Number of nodes to create in the hidden layer
            learning_rate(float) - Learning rate to use while training
        """
# Assign a seed to our random number generator to ensure we get
        # reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels, min_count, polarity_cutoff)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels, min_count, polarity_cutoff):
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
pos_neg_ratios = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
for word, count in list(total_counts.most_common()):
if(count >= 50):
pos_neg_ratios[word] = positive_counts[word] / float(negative_counts[word]+1)
for word, ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio + 0.01)))
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
for rev in reviews:
for word in rev.split(" "):
if(total_counts[word] > min_count):
if(word in pos_neg_ratios.keys()):
if (np.abs(pos_neg_ratios[word]) >= polarity_cutoff):
review_vocab.add(word)
else:
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
label_vocab.update(labels)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Hidden layer
self.layer_1 = np.zeros((1, hidden_nodes))
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0, self.output_nodes**-0.5, (self.hidden_nodes, self.output_nodes))
def get_target_for_label(self, label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
return 0 + (label == "POSITIVE")
def sigmoid(self, x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1. / (1. + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
        # where "output" is the original output from the sigmoid function
return output * (1 - output)
def train(self, training_reviews_raw, training_labels):
training_reviews = list()
for review in training_reviews_raw:
word_index = set()
for word in review.split(" "):
if (word in self.word2index.keys()):
word_index.add(self.word2index[word])
training_reviews.append(list(word_index))
        # make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
output_layer = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
output_layer_error = output_layer - self.get_target_for_label(label)
output_layer_delta = output_layer_error * self.sigmoid_output_2_derivative(output_layer)
layer_1_error = output_layer_delta.dot(self.weights_1_2.T)
layer_1_delta = layer_1_error
for index in review:
self.weights_0_1[index] -= self.learning_rate * layer_1_delta[0]
self.weights_1_2 -= self.learning_rate * self.layer_1.T.dot(output_layer_delta)
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
            if (output_layer >= 0.5 and label == "POSITIVE"):
                correct_so_far += 1
            elif (output_layer < 0.5 and label == "NEGATIVE"):
                correct_so_far += 1
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
        """Attempts to predict the labels for the given testing_reviews,
        and uses the test_labels to calculate the accuracy of those predictions."""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
        """Returns a POSITIVE or NEGATIVE prediction for the given review."""
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
output_layer = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
        if (output_layer >= 0.5):  # sigmoid output is already in (0, 1), no abs() needed
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000]*10,labels[:-1000]*10)
mlp.test(reviews[-1000:],labels[-1000:])
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize))
p.scatter(x="x1", y="x2", size=8, source=source,color=colors_list)
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note
Step2: Lesson
Step3: Project 1
Step4: We'll create three Counter objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
Step5: TODO
Step6: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
Step7: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
Step8: Examine the ratios you've calculated for a few words
Step9: Looking closely at the values you just calculated, we see the following
Step10: Examine the new ratios you've calculated for the same words from before
Step11: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.
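To make the behavior described in Step11 concrete, here is a small numpy sketch of how the log of a positive-to-negative count ratio behaves (the counts are illustrative, not taken from the dataset):

```python
import numpy as np

assert np.log(100 / 10.0) > 0      # positive-leaning word -> log ratio above 0
assert np.log(10 / 100.0) < 0      # negative-leaning word -> log ratio below 0
assert np.log(50 / 50.0) == 0.0    # perfectly neutral word -> exactly 0
# opposite sentiments of equal strength get symmetric magnitudes
assert np.isclose(np.log(100 / 10.0), -np.log(10 / 100.0))
```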
Step12: End of Project 1.
Step13: Project 2
Step14: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074
Step15: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer.
Step16: TODO
Step17: Run the following cell. It should display (1, 74074)
Step18: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
Step20: TODO
Step21: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.
Step23: TODO
Step24: Run the following two cells. They should print out'POSITIVE' and 1, respectively.
Step25: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.
Step29: End of Project 2.
Step30: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1.
Step31: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
Step32: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
Step33: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network.
Step34: That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network.
Step35: With a learning rate of 0.001, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
Step39: Project 4
Step40: Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1.
Step41: That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.
Step42: End of Project 4.
Step46: Project 5
Step47: Run the following cell to recreate the network and train it once again.
Step48: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
Step49: End of Project 5.
Step53: Project 6
Step54: Run the following cell to train your network with a small polarity cutoff.
Step55: And run the following cell to test its performance. It should be
Step56: Run the following cell to train your network with a much larger polarity cutoff.
Step57: And run the following cell to test its performance.
Step58: End of Project 6.
|
11,854
|
<ASSISTANT_TASK:>
Python Code:
print("Happy birthday to you.")
print("Happy birthday to you.")
print("Happy birthday, dear Chris.")
print("Happy birthday to you.")
print("Happy birthday to you.")
print("Happy birthday to you.")
print("Happy birthday, dear Thomas.")
print("Happy birthday to you.")
def birthdaySong(name):
print("Happy birthday to you.")
print("Happy birthday to you.")
print("Happy birthday, dear ", name, ".", sep = "")
print("Happy birthday to you.")
birthdaySong(name = "Thomas")
def birthdaySong(name = "Chris"):
for i in range(4):
if (i != 2):
print("Happy birthday to you.")
else:
print("Happy birthday, dear ", name, ".", sep = "")
birthdaySong()
def f1():
return()
f1()
def f2():
return(5)
f2()
def sumPy(x):
out = 0.0
for i in x:
out += i
return(out)
sumPy(range(0, 6))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: But what if we wanted to reuse this code to congratulate someone else, e.g. named Thomas? There are basically two (fundamentally different) approaches here
Step2: Or – and here's where the high art of programming actually begins – write a function that takes a certain input variable (i.e. the name), performs some processing steps (i.e. put the single lines together), and returns the result (i.e. the complete song).
Step3: Using our newly gained knowledge about <a href="https
Step4: Please note that in the latter version, we also set a default value for name in order to save some typing work on Chris's birthday. In the end, it is totally up to you which of the two versions you prefer, i.e. four print statements in a row or rather the nested for/if-else solution.
Step5: returns an empty tuple, whereas
Step6: returns an int object. Accordingly, a custom sum function that takes a sequence of numbers as input could roughly look as follows
|
11,855
|
<ASSISTANT_TASK:>
Python Code:
from enum import Enum
import itertools
import random
from collections import Counter
import numpy as np
from plotting import *
from multiprocessing import Pool
from tqdm import tqdm_notebook
%matplotlib inline
class Party(Enum):
D = 1
R = 2
color_trans = {Party.D:'blue', Party.R:'red'}
class Justice:
def __init__(self, party):
self.party = party
self.term = random.randint(0,40)
def __str__(self):
return self.__repr__()
def __repr__(self):
return "{party}-{term}".format(party=self.party.name,term=self.term)
class Bench:
SIZE = 9
def __init__(self):
self.seats = [None] * self.SIZE
def fill_seats(self, party):
# loop through all seats
for i in range(self.SIZE):
if self.seats[i] is None:
# if seat is empty, add new
# justice of the correct party
self.seats[i] = Justice(party)
def add_years(self, num_years):
for i in range(self.SIZE):
if self.seats[i] is not None:
# for occupied seats, remove the given
# number of years from their remaining
# term. If their term is less than 0
# this means their seat should now
# be empty again.
self.seats[i].term -= num_years
if self.seats[i].term <= 0:
self.seats[i] = None
def breakdown(self):
c = Counter([s.party.name if s is not None else "" for s in self.seats])
return tuple(c[k] if k in c else 0 for k in [""] + [e.name for e in Party])
def __repr__(self):
return "\n".join(map(str,self.seats))
def simulate(years):
president_party = None
senate_party = None
bench = Bench()
for year in range(years+1):
bench.add_years(1)
if year % 2 == 0:
senate_party = random.choice(list(Party))
if year % 4 == 0:
president_party = random.choice(list(Party))
if president_party == senate_party:
bench.fill_seats(president_party)
yield year, bench.breakdown(), president_party, senate_party
def run_simulation(sim_years):
years, benches, president_parties, senate_parties = zip(*list(simulate(sim_years)))
bench_stacks = np.row_stack(zip(*benches))
vacancies = bench_stacks[0]
mean = np.cumsum(vacancies) / (np.asarray(years) + 1)
return years, bench_stacks, president_parties, senate_parties, mean
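`run_simulation` uses two compact idioms: `zip(*list(...))` to transpose the generator's per-year tuples into per-field sequences, and `np.cumsum(...) / (years + 1)` for a cumulative moving average. A minimal sketch with toy values:

```python
import numpy as np

steps = [(0, 2), (1, 0), (2, 1)]      # e.g. (year, vacancies) pairs from a generator
years, vac = zip(*steps)              # transpose list-of-tuples into per-field tuples
assert years == (0, 1, 2) and vac == (2, 0, 1)

mean = np.cumsum(vac) / (np.asarray(years) + 1)   # cumulative moving average
assert np.allclose(mean, [2.0, 1.0, 1.0])
```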
sim_years = 200
years, bench_stacks, president_parties, senate_parties, _ = run_simulation(sim_years)
stacked_plot_bench_over_time_with_parties(years,
bench_stacks,
president_parties,
senate_parties,
color_trans,
Party)
sim_years = 1000
years, bench_stacks, _, _, mean = run_simulation(sim_years)
stacked_plot_bench_over_time(years, bench_stacks, mean, color_trans, Party)
sim_years = 50000
sample_size = 1000
results=[]
with Pool(processes=4) as p:
with tqdm_notebook(total=sample_size) as pbar:
for r in p.imap_unordered(run_simulation, itertools.repeat(sim_years,sample_size)):
results.append(r)
pbar.update(1)
years, _, _, _, means = zip(*results)
plot_sims(years[0], means, [0.5, 0.9])
class Party(Enum):
D = 1
R = 2
G = 3
color_trans = {Party.D:'blue', Party.R:'red', Party.G:'green'}
sim_years = 200
years, bench_stacks, president_parties, senate_parties, mean = run_simulation(sim_years)
stacked_plot_bench_over_time_with_parties(years,
bench_stacks,
president_parties,
senate_parties,
color_trans,
Party)
sim_years = 1000
years, bench_stacks, president_parties, senate_parties, mean = run_simulation(sim_years)
stacked_plot_bench_over_time(years, bench_stacks, mean, color_trans, Party)
sim_years = 50000
sample_size = 1000
results=[]
with Pool(processes=4) as p:
with tqdm_notebook(total=sample_size) as pbar:
for r in p.imap_unordered(run_simulation, itertools.repeat(sim_years,sample_size)):
results.append(r)
pbar.update(1)
years, _, _, _, means = zip(*results)
plot_sims(years[0], means, [1, 1.8])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we'll need a way to track which party the Senate & President are part of. For now, let's just stick with the two major parties and create a Party enumeration. Enumerations can group and give names to related constants in the code. This can help the code be more understandable when reading it.
Step2: We'll also make a class to represent each justice. When a new Justice is created for a party, they'll be given a randomly generated term of somewhere between 0 and 40 years.
Step3: Our last class is Bench. This class will represent the bench that contains the Justices currently on the Supreme Court. When the Bench is first formed, it will be empty. We care about modifying the Bench in a few ways
Step4: Last but not least, simulate is where the magic happens. This function loops over a supplied number of years, first determining if any judges have left their position. After that, it randomly picks the winning parties for any elections that are happening. After the elections, if the government is aligned, empty seats on the bench should be filled by that party.
Step5: run_simulation will execute the simulation for the supplied number of years, and post-process the data to return the following information
Step6: First, let's look at the result of our simulated supreme court over 200 years. Along the bottom, the parties of the Senate and President are shown. The height of each stack represents the number of seats that party holds, and the white space indicates vacancies.
Step7: During the periods of alignment, the vacancies (white spaces) are filled. This just serves as visual confirmation that our simulation got that aspect correct. We can see that seats are continuously being vacated and filled, so we can't learn much from just this one plot.
Step8: This simulation shows that we should expect a little less than 1 vacancy, about 0.7, per year. It also illustrates that as more data is added, the cumulative moving average becomes less variable.
Step9: This distribution shows that we should still expect to see ~0.71 vacancies per year over the long run, but it wouldn't be surprising to see 0.68 or 0.73 vacancies.
Step10: Unsurprisingly, adding more parties into the mix while still requiring an aligned government looks like it leads to even more vacancies. But, how do the numbers shake out?
|
11,856
|
<ASSISTANT_TASK:>
Python Code:
with open('input.txt', 'rt') as f:
moves = next(f).rstrip().split(',')
import re
import numpy as np
import copy
def shuffle(p, moves):
s = copy.copy(p)
for move in moves:
spin = re.search('s(\d+)', move)
swapx = re.search('x(\d+)\/(\d+)', move)
swapp = re.search('p(\w)\/(\w)', move)
if spin:
s = np.roll(s, int(spin.group(1)))
if swapx:
a = int(swapx.group(1))
b = int(swapx.group(2))
s[a], s[b] = s[b], s[a]
if swapp:
a = swapp.group(1)
b = swapp.group(2)
a = ''.join(s).index(a)
b = ''.join(s).index(b)
s[a], s[b] = s[b], s[a]
return ''.join(s)
assert(shuffle(list('abcde'), ['s1', 'x3/4', 'pe/b']) == 'baedc')
shuffle(list('abcdefghijklmnop'), moves)
from itertools import count
def least_fixed_point(s, moves):
a = s
b = shuffle(list(s), moves)
visited = [s]
for i in count():
if b not in visited:
visited.append(b)
a = b
b = shuffle(list(b), moves)
else:
return a, i
def iterated_dances(s, moves, N):
for i in range(N):
s = shuffle(list(s), moves)
return s
least_fixed_point(list('abcde'), ['s1', 'x3/4', 'pe/b'])
iterated_dances(list('abcde'), ['s1', 'x3/4', 'pe/b'], 4)
s = 'abcdefghijklmnop'
least_fixed_point(list(s), moves)
iterated_dances(list(s), moves, (10 ** 9) % 60)
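The final call relies on the dance being periodic: composing the same permutation k times depends only on k modulo the cycle length found by `least_fixed_point`. A small numpy sketch of that idea, using a 5-element rotation (period 5) as the repeated move:

```python
import numpy as np

arr = np.arange(5)
state = arr.copy()
for _ in range(12):              # apply the same move 12 times...
    state = np.roll(state, 1)
short = arr.copy()
for _ in range(12 % 5):          # ...which equals applying it 12 mod 5 times
    short = np.roll(short, 1)
assert (state == short).all()
```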
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Test
Step2: Solution
Step3: Part 2
Step4: Test
Step5: Solution
|
11,857
|
<ASSISTANT_TASK:>
Python Code:
class PixWord2Vec:
# vocabulary indexing
index2word = None
word2indx = None
# embeddings vector
embeddings = None
    # Normalized embeddings vector
final_embeddings = None
# hidden layer's weight and bias
softmax_weights = None
softmax_biases = None
    # This model is not trained here: it must first be filled from a
    # pre-trained Word2Vec model (loaded from the pickle file below).
import pickle
pixword = pickle.load(open("./pixword_cnn_word2vec.pk"))
import numpy as np
import random
import tensorflow as tf
import json
from pyspark import StorageLevel
vocabulary_size = len(pixword.index2word)
print "vocabulary_size" , vocabulary_size
pixword.embeddings.shape
import math
append_size = 1000
batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
num_sampled = 64 # Number of negative classes to sample (used by sampled_softmax_loss below).
graph = tf.Graph()
with graph.as_default():
np.random.seed(0)
# doc(tags or category) batch size , this is key !!! And this batch size cant be too large !!
append_size = 1000
# Input data.
train_dataset = tf.placeholder(tf.int32, shape=[None])
train_labels = tf.placeholder(tf.int32, shape=[None, 1])
# Variables.
embeddings = tf.Variable(np.append(pixword.embeddings,
np.random.randn(append_size,128)).reshape(vocabulary_size+append_size,128).astype('float32'))
softmax_weights = tf.Variable(np.append(pixword.embeddings,
np.random.randn(append_size,128)).reshape(vocabulary_size+append_size,128).astype('float32'))
softmax_biases = tf.Variable(np.append(pixword.softmax_biases,[0]*append_size).astype('float32'))
# Model.
# Look up embeddings for inputs.
embed = tf.nn.embedding_lookup(embeddings, train_dataset)
# Compute the softmax loss, using a sample of the negative labels each time.
loss = tf.reduce_mean(
tf.nn.sampled_softmax_loss(softmax_weights, softmax_biases, embed,
train_labels, num_sampled, vocabulary_size))
# Optimizer.
optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)
# Compute the similarity between minibatch examples and all embeddings.
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
normalized_embeddings = embeddings / norm
init = tf.global_variables_initializer()
session = tf.Session(graph=graph)
session.run(init)
def train(batch_data,batch_labels):
feed_dict = {train_dataset : batch_data, train_labels : batch_labels}
_, l = session.run([optimizer, loss], feed_dict=feed_dict)
return l
def searchByVec(vec,final_embeddings,scope=5):
sim = np.dot(final_embeddings,vec)
for index in sim.argsort()[-scope:][::-1][1:]:
print pixword.index2word[index],sim[index]
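`searchByVec` assumes the rows of `final_embeddings` are L2-normalized, so a plain dot product is the cosine similarity. A small self-contained sketch of that property (random toy vectors):

```python
import numpy as np

rng = np.random.RandomState(0)
emb = rng.randn(4, 3)
norm = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # unit-length rows
sims = norm.dot(norm[0])                                 # cosine similarity vs row 0
assert abs(sims[0] - 1.0) < 1e-9                         # self-similarity is 1
assert np.all(np.abs(sims) <= 1.0 + 1e-9)                # cosine is bounded by [-1, 1]
```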
cate_vec = []
count = 0
def tags2vec(words_set):
np.random.seed(0)
session.run(init)
    if len(words_set) > append_size: raise ValueError("words_set exceeds append_size")
cat_data = []
cat_label = []
for index , words in enumerate(words_set):
for w in words :
if w not in pixword.word2indx :
continue
wi = pixword.word2indx[w]
cat_data.append(vocabulary_size+index)
cat_label.append([wi])
for _ in range(20):
train(cat_data,cat_label)
final_embeddings = session.run(normalized_embeddings)
return (final_embeddings[vocabulary_size:vocabulary_size+index+1],final_embeddings[:vocabulary_size])
words = [u'ๆ้', u'ๅฐๆฑ']
avg_vec = np.average([pixword.final_embeddings[pixword.word2indx[w]] for w in words],0)
for w in words:
print "#{}#".format(w.encode('utf-8'))
searchByVec(pixword.final_embeddings[pixword.word2indx[w]] ,pixword.final_embeddings)
print
# Simply take the Vector Mean of these words
print "AVG Vector"
searchByVec(avg_vec,pixword.final_embeddings,scope=20)
print
# Suppose a document contains these tags; the new keywords found with the newly generated vector are as follows
print "Tag Vector"
result = tags2vec([words])
searchByVec(result[0][0],result[1],scope=20)
# read raw data
def checkInVoc(tlist):
r = []
for t in tlist :
if t in pixword.word2indx:
r.append(t)
return r
def merge(x):
x[0]['tags'] = x[1]
return x[0]
test_set = sc.textFile("./data/cuted_test/").map(
json.loads).map(
lambda x : (x,x['tags']) ).mapValues(
checkInVoc).filter(
lambda x : len(x[1])>1)
test_set.persist(StorageLevel.DISK_ONLY)
!rm -rvf ./data/cuted_and_tags/
import json
test_set.map(merge).map(json.dumps).saveAsTextFile("./data/cuted_and_tags/")
import os

class MySentences(object):
    def __init__(self, dirname):
        self.dirname = dirname
def __iter__(self):
for fname in os.listdir(self.dirname):
if 'crc' in fname : continue
if fname.startswith('_'):continue
for line in open(os.path.join(self.dirname, fname)):
yield line
sc.textFile("./data/cuted_and_tags/").count()
def toVector(docs,tags_set,f):
res_vecs = tags2vec(tags_set)
if len(docs) != len(res_vecs[0]):
print len(docs) , len(res_vecs)
        raise ValueError("docs and vector counts do not match")
for index,d in enumerate(docs):
d['tag_vec'] = [float(i) for i in list(res_vecs[0][index])]
for d in docs:
jstr = json.dumps(d)
f.write(jstr+'\n')
!rm ./data/cuted_and_vec.json
f = open('./data/cuted_and_vec.json','w')
docs = []
tags_set = []
for doc in MySentences("./data/cuted_and_tags/"):
js_objects = json.loads(doc)
docs.append(js_objects)
tags_set.append(js_objects['tags'])
if len(docs) == 1000:
toVector(docs,tags_set,f)
docs = []
tags_set = []
print '*',
toVector(docs,tags_set,f)
def loadjson(x):
try:
return json.loads(x)
except:
return None
jsondoc = sc.textFile(
"./data/cuted_and_vec.json").map(
loadjson).filter(
lambda x : x!=None)
from operator import add
import json
def loadjson(x):
try:
return json.loads(x)
except:
return None
url_vecs = np.array(jsondoc.map(
lambda x: np.array(x['tag_vec'])).collect())
url_vecs.shape
urls = jsondoc.collect()
def search(wvec,final_embeddings,cate):
# wvec = final_embeddings[windex]
sim = np.dot(final_embeddings,wvec)
result = []
for index in sim.argsort()[-1000:][::-1][1:]:
if urls[index]['category'] == cate and sim[index]>0.9 :
print urls[index]['url'],sim[index],
for tag in urls[index]['tags']:
print tag,
print
return sim
index = np.random.randint(10000)
print urls[index]['url'],urls[index]['category'],
for tag in urls[index]['tags']:
print tag,
print
print
print "########ไปฅไธๆฏ็จ Tag Vecotr ๆๆพๅบไพ็ URL #########"
sim = search(url_vecs[index],url_vecs,urls[index]['category'])
print
print
print "########ไปฅไธๆฏ็ดๆฅ็จ็ฌฌไธๅ Tag ็ดๆฅไฝๆฏๅฐ็็ตๆ,ๆๆๅฅฝ้ๅธธๅค #########"
count = 0
for _,u in enumerate(urls):
for t in u['tags']:
if t == urls[index]['tags'][0] :
count = count + 1
print u['url']
for tt in u['tags']:
print tt,
print
break
if count > 500 : break
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ่ณๆๅ่็
Step2: ่จญ่จ Graph
Step3: Build Category2Vec
Step4: ๆธฌ่ฉฆ Category Vec
Step5: ้ๅง่ฝๆๆๅ้
Step6: Load TagVectors
Step7: ้ฒ่ก้จๆฉๆฝๆจฃ้ฉ่ญ
|
11,858
|
<ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.optimizers import SGD
from keras.utils import np_utils
def unpickle(file):
import cPickle
fo = open(file, 'rb')
dict = cPickle.load(fo)
fo.close()
return dict
def load_single_NORB_train_val(PATH, i):
print "Cargando batch training set",i,"..."
f = os.path.join(PATH, 'data_batch_%d' % (i, ))
datadict = unpickle(f)
X = datadict['data'].T
Y = np.array(datadict['labels'])
Z = np.zeros((X.shape[0], X.shape[1] + 1))
Z[:,:-1] = X
Z[:, -1] = Y
np.random.shuffle(Z)
Xtr = Z[5832:,0:-1]
Ytr = Z[5832:,-1]
Xval = Z[:5832,0:-1]
Yval = Z[:5832,-1]
print "Cargado"
return Xtr, Ytr, Xval, Yval
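The loader above stacks the labels as an extra column of `Z` before shuffling so that each label stays with its row. A minimal check of that idiom on toy data, where each row is labeled with its own sum so the alignment is verifiable:

```python
import numpy as np

X = np.arange(12.0).reshape(4, 3)
Y = X.sum(axis=1)                   # label each row with its sum
Z = np.column_stack([X, Y])
np.random.shuffle(Z)                # shuffles rows in place
# after shuffling, every label still matches its row
assert np.allclose(Z[:, :-1].sum(axis=1), Z[:, -1])
```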
def load_NORB_test(PATH):
print "Cargando testing set..."
xts = []
yts = []
for b in range(11, 13):
f = os.path.join(PATH, 'data_batch_%d' % (b, ))
datadict = unpickle(f)
X = datadict['data'].T
Y = np.array(datadict['labels'])
Z = np.zeros((X.shape[0], X.shape[1] + 1))
Z[:,:-1] = X
Z[:, -1] = Y
np.random.shuffle(Z)
xts.append(Z[0:,0:-1])
yts.append(Z[:,-1])
Xts = np.concatenate(xts)
Yts = np.concatenate(yts)
del xts,yts
print "Cargado."
return Xts, Yts
# Feed-forward MLP model
def get_ff_model(activation, n_classes):
model = Sequential()
model.add(Dense(4000, input_dim=2048, activation=activation))
model.add(Dense(2000, activation=activation))
model.add(Dense(n_classes, activation='softmax'))
sgd = SGD(lr=0.1, decay=0.0)
model.compile(optimizer=sgd,
loss='binary_crossentropy',
metrics=['accuracy'])
return model
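The softmax output layer above expects one-hot targets (what `np_utils.to_categorical` produces in Keras). A plain-numpy equivalent, for illustration only:

```python
import numpy as np

def to_one_hot(labels, n_classes):
    out = np.zeros((len(labels), n_classes))
    out[np.arange(len(labels)), labels] = 1.0   # set one column per row
    return out

assert np.array_equal(to_one_hot([0, 2, 1], 3), np.eye(3)[[0, 2, 1]])
assert np.allclose(to_one_hot([0, 2, 1], 3).sum(axis=1), 1.0)  # exactly one 1 per row
```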
# Set ranges to split the training set for the unsupervised scenario
def split_train(X, Y, theta):
    # n_s is the number of examples whose labels we do know
n_s = int(theta * n_tr)
    # Split the training set
X_s = X[0: n_s]
Y_s = Y[0: n_s]
X_ns = X[n_s: ]
return X_s, Y_s, X_ns
def scale_data(X, normalize=True, myrange=None):
from sklearn.preprocessing import MinMaxScaler, StandardScaler
if normalize and not myrange:
print "Normalizing data (mean 0, std 1)"
return StandardScaler().fit_transform(X)
elif isinstance(myrange, tuple):
print "Scaling data to range", myrange
return X * (myrange[1] - myrange[0]) + myrange[0]
else:
return "Error while scaling data."
(Xtr, Ytr, Xval, Yval) = load_single_NORB_train_val(".", 1)
%matplotlib inline
img = Xtr[25][0:1024].reshape((32,32))
plt.imshow(img, cmap='gray', interpolation='nearest')
plt.title("Original dataset image", fontsize=16)
plt.show()
img_scaled_2 = scale_data(img, normalize=False, myrange=(-1,1))
plt.title("Scaled to (-1, 1) image", fontsize=16)
plt.imshow(img_scaled_2, cmap='gray', interpolation='nearest')
plt.show()
img_scaled_01 = scale_data(img, normalize=True)
plt.imshow(img_scaled_01, cmap='gray', interpolation='nearest')
plt.title("Normalized image", fontsize=16)
plt.show()
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
thetas = np.linspace(0.1, 1, 10)
plt.figure(figsize=(15,8))
ff_score = np.array([[0.34965633492870829, 0.85337219132808007], [0.30873315344310387, 0.86633515952791207],
[0.28638169108870831, 0.8779120964645849], [0.28662411729384862, 0.87917810043802969],
[0.27735552345804965, 0.88243598899764453], [0.25611561523247078, 0.89160094224172037],
[0.25466067208978765, 0.89329561710725591], [0.23046189319056207, 0.90432671244947349],
[0.23262660608841529, 0.9034007855362689], [0.24611747253683636, 0.90140890138957075]])
ff_loss = ff_score[:,0]
min_loss = np.min(ff_loss)
print "Min loss:",min_loss
ff_accuracy = ff_score[:,1]
plt.title(u'Error of the FF ReLu network as the labeled training set size varies', fontsize=20)
plt.xlabel(r'$ \theta _s = n_s/n_{tr}$', fontsize=20)
#plt.ylabel(u'Test error', fontsize=20)
plt.xticks(thetas)
plt.xlim((0.1, 1))
plt.ylim((0, 1))
plt.plot(thetas, ff_loss, 'ro-', lw=2, label="Loss")
plt.plot(thetas, ff_accuracy, 'bo-', lw=2, label="Accuracy")
plt.plot([0.1, 1.0], [min_loss, min_loss], 'k--', lw=2)
plt.grid()
plt.legend(loc='best')
plt.show()
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
thetas = np.linspace(0.1, 1, 10)
plt.figure(figsize=(17,8))
# RBM score with 4000,2000 hidden units
scoreRBM = np.array([[4.4196084509047955, 0.72383116337947229], [4.4055034453445341, 0.72456847458301099],
[4.4394828855419028, 0.72233082052570641], [4.3964983934053672, 0.72311099884400476],
[0.48378892842838628, 0.83260457615262029], [0.45056016894443207, 0.83313326793115983],
[0.43085911105526165, 0.83389629889662864],[0.29784013472614995, 0.87206218527586532],
[0.22757617234340544, 0.90586420740365003], [0.24125085612158553, 0.90226624000546696]])
# RBM score with 512,100 hidden units
scoreRBM2 = np.array([[0.45062856095624559, 0.83333331346511841], [4.4528440643403426, 0.72222222402099068],
[0.45072658859775883, 0.83333331346511841], [0.4505750023911928, 0.83333331346511841],
[0.4429623542662674, 0.82581160080571892], [0.31188208550094682, 0.86670381540051866],
[0.25056480781516299, 0.89664495450192849], [0.23156057456198476, 0.90414952838330931],
[0.21324420206690148, 0.91417467884402548],[0.22067879932703877, 0.91151407051356237]])
# AE score with 4000,2000 hidden units, lr=1e-1
scoreAE = np.array([[4.423457844859942, 0.72402835125295228], [4.4528440647287137, 0.72222222367350131],
[4.4528440647287137, 0.72222222367350131], [4.4528440647287137, 0.72222222367350131],
[4.4528440647287137, 0.72222222367350131], [4.4528440665888036, 0.72222222393922841],
[4.4528440670384954, 0.722222223775704], [4.4528440630321473, 0.72222222410275283],
[0.30934715830596082, 0.90771034156548469], [0.24611747253683636, 0.90140890138957075]])
# AE score with 512,100 hidden units, lr=1e-3
scoreAE2 = np.array([ [0.31381310524600359, 0.87419124327814623], [0.29862671588044182, 0.88824588954653105],
[0.29122977651079507, 0.89595336747063203],[0.28631529953734458, 0.89848251762417941],
[0.26520682642347659, 0.90412380808992476],[0.29133136388273245, 0.90601852678263306],
[0.28742324732237945, 0.90006859361389535],[0.28288218034693235, 0.90822188775930224],
[0.26280671141278994, 0.91445188456315885], [0.22067879932703877, 0.91151407051356237]])
lossRBM = scoreRBM[:,0]
accuracyRBM = scoreRBM[:,1]
lossRBM2 = scoreRBM2[:,0]
accuracyRBM2 = scoreRBM2[:,1]
lossAE = scoreAE[:,0]
accuracyAE = scoreAE[:,1]
lossAE2 = scoreAE2[:,0]
accuracyAE2 = scoreAE2[:,1]
plt.title(u'RBM vs AE', fontsize=20)
plt.xlabel(r'$ \theta _s = n_s/n_{tr}$', fontsize=20)
plt.xticks(thetas)
plt.xlim((0.1, 1))
plt.plot(thetas, lossRBM, 'o-',lw=2, label="RBM(4000,2000, lr=1e-2)")
plt.plot(thetas, lossRBM2, 'o-', lw=2, label="RBM(512,100, lr=1e-2)")
plt.plot(thetas, lossAE, 'o-', lw=2, label=r"AE(4000,2000,lr=1e-1)")
plt.plot(thetas, lossAE2, 'o-', lw=2, label="AE(512,100,lr=1e-3)")
plt.plot([0.1, 1.0], [min_loss, min_loss], 'k--', lw=2)
plt.plot(thetas, ff_loss, 'ko-', lw=2, label="Raw FF ReLu")
plt.ylabel("Loss", fontsize=20)
plt.grid()
plt.legend(loc='best', fontsize=16)
plt.show()
plt.figure(figsize=(15,8))
plt.title(u'RBM vs AE (range [0,1])', fontsize=20)
plt.xlabel(r'$ \theta _s = n_s/n_{tr}$', fontsize=20)
plt.xticks(thetas)
plt.yticks(np.arange(0, 1.1, 0.1))
plt.xlim((0.1, 1))
plt.ylim((0, 1))
plt.plot(thetas, lossRBM, 'o-',lw=2, label="RBM(4000,2000, lr=1e-2)")
plt.plot(thetas, lossRBM2, 'o-', lw=2, label="RBM(512,100, lr=1e-2)")
plt.plot(thetas, lossAE, 'o-', lw=2, label=r"AE(4000,2000,lr=1e-1)")
plt.plot(thetas, lossAE2, 'o-', lw=2, label="AE(512,100,lr=1e-3)")
plt.plot(thetas, ff_loss, 'ko-', lw=2, label="Raw FF ReLu")
#plt.plot([0.1, 1.0], [min_loss, min_loss], 'k--', lw=2)
plt.ylabel("Loss", fontsize=20)
plt.grid()
plt.legend(loc='upper left', fontsize=16)
plt.show()
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
thetas = np.linspace(0.1, 1, 10)
plt.figure(figsize=(17,8))
score_RBM_512_sig = np.array([[0.45771518255620991, 0.83333331346511841],[0.45365564825092486, 0.83333331346511841],
[0.38187248474408569, 0.83758285605899263], [0.34743673724361246, 0.84777375865620352],
[0.31626026423083331, 0.85903920216897223], [0.30048906122716285, 0.86761259483825037],
[0.28214155361235183, 0.8757115920752655], [0.26172314131113766, 0.88501657923716737],
[0.25423361747608675, 0.88900034828686425], [0.23126906758322494, 0.90095736967356277]]
)
score_AE_512_sig = np.array([[0.36305900167454092, 0.84831959977030591],[0.33650884347289434, 0.85545838133665764],
[0.2976713113037075, 0.8675240031015562],[0.27910095958782688, 0.87922668276132376],
[0.26177046708864848, 0.88473937295591876], [0.26421231387402805, 0.88531093294278751],
[0.24143604994706799, 0.89582762480885891],[0.26421231387402805, 0.88531093294278751],
[0.22964162562190668, 0.90080876522716669], [0.23126906758322494, 0.90095736967356277]])
loss_RBM_512_sig = score_RBM_512_sig[:,0]
loss_AE_512_sig = score_AE_512_sig[:,0]
plt.title(u'RBM vs AE Sigmoid', fontsize=20)
plt.xlabel(r'$ \theta _s = n_s/n_{tr}$', fontsize=20)
plt.xticks(thetas)
plt.yticks(np.arange(0, 1.1, 0.1))
plt.xlim((0.1, 1))
plt.plot(thetas, loss_RBM_512_sig, 'ro-',lw=2, label="RBM(512,100, lr=1e-2)")
plt.plot(thetas, loss_AE_512_sig, 'bo-',lw=2, label="AE(512,100, lr=1e-2)")
#plt.plot(thetas, ff_loss, 'ko-', lw=2, label="Raw FF")
plt.plot([0.1, 1.0], [min_loss, min_loss], 'k--', lw=2)
plt.ylabel("Loss", fontsize=20)
plt.grid()
plt.legend(loc='best', fontsize=16)
plt.show()
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
thetas = np.linspace(0.1, 1, 10)
plt.figure(figsize=(17,8))
score_RBM_512_tanh = np.array([[0.47132537652967071, 0.83333617125846071], [0.43773106490204361, 0.81772403969890628],
[0.39085262340579097, 0.83759428964434668], [0.3556465181906609, 0.84698787069942072],
[0.3184757399882493, 0.86379743537090115], [0.30265503884286749, 0.87056469970442796],
[0.25681082942537115, 0.89028635624136943], [0.2447317991679118, 0.89441301461069023],
[0.2497562807307557, 0.89397577249505067], [0.2468455718116086, 0.89485025849443733]])
score_AE_512_tanh = np.array([[0.35625298988631071, 0.85218620511852661], [0.32329397273744331, 0.8609053455957496],
[0.29008143559987409, 0.87735196781460967], [0.28377419225207245, 0.87887231570212443],
[0.2652835471910745, 0.88701417956332607], [0.26240534692509571, 0.88862026109780468],
[0.26505667790617227, 0.88871742608166204], [0.25295013446802855, 0.89182956670046831],
[0.25422075993875848, 0.89163523609909667], [0.25069631986889801, 0.8929641120473053]]
)
loss_RBM_512_tanh = score_RBM_512_tanh[:,0]
loss_AE_512_tanh = score_AE_512_tanh[:,0]
plt.title(u'RBM vs AE Tanh', fontsize=20)
plt.xlabel(r'$ \theta _s = n_s/n_{tr}$', fontsize=20)
plt.xticks(thetas)
plt.xlim((0.1, 1))
plt.plot(thetas, loss_RBM_512_tanh, 'ro-',lw=2, label="RBM(512,100, lr=1e-2)")
plt.plot(thetas, loss_AE_512_tanh, 'bo-',lw=2, label="AE(512,100, lr=1e-2)")
#plt.plot(thetas, ff_loss, 'ko-', lw=2, label="Raw FF")
#plt.plot([0.1, 1.0], [min_loss, min_loss], 'k--', lw=2)
plt.ylabel("Loss", fontsize=20)
plt.grid()
plt.legend(loc='best', fontsize=16)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: b) Function to scale data to the (-1,1) range, or alternatively to normalize it.
Step2: Next, a batch of the dataset is loaded and images converted under both scaling schemes are displayed.
Step3: c) Training of an FF network varying the number of training batches used.
Step4: As expected, we observe that as the amount of data increases the network is able to learn and achieve good performance. The dashed line marks the lowest loss obtained (~0.204), using 8 of 10 batches.
Step5: In general, dimensionality reduction helps considerably in quickly obtaining good results on the test set. There are nevertheless instabilities, for example during training of the FF network pretrained with an RBM, that cause spikes in the loss.
Step6: This plot shows the effect of pretraining more clearly: the more data available for pretraining, the better the initial score. As the factor $\theta$ grows, the effect disappears.
Step7: In general, the autoencoder-pretrained version works better than the RBM-pretrained one, given of course the parameters used to build those networks. Pretraining could not beat the minimum achieved by the FF ReLU net without pretraining.
|
11,859
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
#from skimage.io import imread
import matplotlib.gridspec as gridspec
plt.rcParams['image.interpolation'] = 'none'
plt.rcParams['image.cmap'] = 'gray'
figsize(4,4)
size = 256
img = np.zeros((size,size), dtype=np.uint8)
t = linspace(start=0, stop=50*pi, endpoint=False, num=size)
x,y = meshgrid(t, t)
img[:,:] = 127 + 127*sin(x)
imshow(img);
F = fft2(img)
# scale image for viewing - do not take log of zero
F_pow = np.abs(F)
F_pow = log(F_pow.clip(1))
fig, axs = subplots(ncols=2, figsize=(14,5))
plt.setp(axs, xticks=[], yticks=[])
im0 = axs[0].imshow(img)
colorbar(im0, ax=axs[0])
im1 = axs[1].imshow(fftshift(F_pow))
colorbar(im1);
numpy.clip?
img = imread('unstained/u8v0ch1z0.png')
imshow(img);
def angle_ft_line_fit(img, threshold=0.999, debug=False):
"""Calculate preferred orientation in image with a line fit in FT.
Parameters
----------
threshold : float
Percentage of pixels to exclude from the FT when thresholding,
e.g. (1 - 0.999) * 512**2 = approx. 262 pixels kept.
Returns
-------
float
Angle
"""
from skimage.exposure import cumulative_distribution
from scipy.stats import linregress
# FT power spectrum
F = np.abs(fftshift(fft2(img)))
# do not calculate log(0)
F[F!=0], F[F==0] = log(F[F!=0]), log(F[F!=0].min())
# threshold
cdf = cumulative_distribution(F)
limit = np.where(cdf[0] > threshold)[0].min()
threshold_value = cdf[1][limit]
F = F > threshold_value
# points
y,x = np.where(F)
# cases
dx = abs(x.max()-x.min())
dy = abs(y.max()-y.min())
if dx==0:
# we have a vertical line
angle = 90
b = [0, 1]
# solve y=mx+c by least-squares regression
elif dx < dy:
# linregress is imprecise for dx < dy => swap x,y
m,c,r,pv,err = linregress(y,x)
b = (1/m, -c/m)
# calculate angle (assume line goes through center)
angle = (90 - arctan(b[0]) / pi * 180) % 180
else:
m,c,r,pv,err = linregress(y,x)
b = (m,c)
angle = (90 - arctan(b[0]) / pi * 180) % 180
# show image, FT and fit
if debug:
f, ax = subplots(ncols=2, figsize=(8,4))
ax[0].imshow(img)
ax[1].imshow(F)
# add calculated line
# polynomial generator
p = np.poly1d(b)
height, width = img.shape
if angle != 90:
line = ([0, width], [p(0), p(width)])
else:
line = ([width//2, width//2], [0,height])
ax[1].plot(*line)
ax[1].set_title('ang: {:3.0f} r:{:0.2} err:{:0.2}'
.format(angle,r,err))
ax[1].set_xlim(0,width)
ax[1].set_ylim(height,0)
return angle
print('angle_ft_line_fit defined')
from itertools import product
from skimage.filter import threshold_otsu
bs = 100 # block size
iy,ix = img.shape
by, bx = iy//bs, ix//bs # blocks
bsy, bsx = iy//by, ix//bx # block size
count = 0
f, axs = subplots(nrows=3, ncols=4, figsize=(10,8))
for j,i in product(range(by), range(bx)):
x,y = j*bs, i*bs
temp_img = img[y:y+bs, x:x+bs]
if temp_img.shape[0] < 50 or temp_img.shape[1] < 50:
continue
mean = temp_img.mean()
if mean <= 0:
continue
ot = threshold_otsu(temp_img)
if ot < 1:
continue
if count >= 12:
break
if (i < 2 # row
or i > 40
or j < 2 #column
or j > 40):
continue
ax = axs[count//4, count%4]
ax.imshow(temp_img)
ax.set_title('m: {:1.1f}, t:{:1.1f}'.format(mean, ot))
count += 1
if count == 6: # pick out image for manual debug
ii = np.copy(temp_img)
angle_ft_line_fit(temp_img, threshold=0.99, debug=True)
imshow(ii)
F = log(abs(fftshift(fft2(ii))))
imshow(F>7);
# Switch the arguments of linregress: xx,yy -> yy,xx
from scipy.stats import linregress
yy,xx = np.where(F>6)
m,c,r,pv,err = linregress(yy,xx)
print(m,c,r)
print(pv,err)
arctan(m)/pi*180 % 180
imshow(F)
plot(range(100), np.arange(100)/m-c/m)
xlim(0, 100)
ylim(100,0)
figsize(3,3);
def angle_histogram(arg):
# work around for me not knowing how to dview.map multiple arguments
threshold, filename = arg
from itertools import product
from skimage.filter import threshold_otsu
from skimage.io import imread
img = imread(filename)
bs = 100 # approx block size
iy, ix = img.shape
by, bx = iy//bs, ix//bs # blocks
bsy, bsx = iy//by, ix//bx # block size, spread spare pixels
h = np.zeros(180) # histogram
for j,i in product(range(by), range(bx)):
x,y = j*bsx, i*bsy # pos
temp_img = img[y:y+bsy, x:x+bsx]
mean = temp_img.mean()
# small image
if temp_img.shape[0] < 50 or temp_img.shape[1] < 50:
continue
# emtpy image
if mean == 0:
continue
# threshold below noise-threshold
ot = threshold_otsu(temp_img)
if (ot < 1):
continue
angle = angle_ft_line_fit(temp_img, threshold=threshold)
angle = int(angle)
h[angle] += 1
# make plot (and save it)
fig, axs = plt.subplots(ncols=2, figsize=(16,8))
axs[0].imshow(img)
axs[1].plot(h)
fn = filename.replace('ed/u', 'ed/angles-ft-line-fit-u')
fn = fn.replace('.png', str(threshold) + '.png')
#fig.savefig(fn)
angle_histogram((0.9, '/notebooks/TFY4500/unstained/u1v1ch1z0.png'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Use this image with clear direction of fibers.
Step3: The function we want to make better
Step4: Per block optimation
Step5: Threshold
Step6: Switch the arguments of linregress xx,yy -> yy,xx
Step7: We are solving $mx+c=y$, but with x and y switched
Step8: Do the whole image and calc histogram
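The angle estimation walked through above can be sketched as follows (the helper names and the toy point sets are illustrative, not from the notebook). The key detail from Steps 6-7 is that for steep point clouds (dx < dy) it is numerically better to regress x on y and invert the slope than to fit y = mx + c directly.

```python
import math

def fit_slope(xs, ys):
    # Ordinary least-squares slope of y on x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def line_angle(xs, ys):
    # Angle in degrees in [0, 180), matching the notebook's convention
    # angle = (90 - arctan(m)) % 180
    dx = max(xs) - min(xs)
    dy = max(ys) - min(ys)
    if dx == 0:
        return 90.0  # exactly vertical line
    if dx < dy:
        # steep line: regress x on y, then invert the slope
        m = 1.0 / fit_slope(ys, xs)
    else:
        m = fit_slope(xs, ys)
    return (90.0 - math.degrees(math.atan(m))) % 180.0

# a perfect 45-degree line of FT peaks
xs = [0, 1, 2, 3, 4]
ys = [0, 1, 2, 3, 4]
print(round(line_angle(xs, ys), 6))  # 45.0
```

scipy's linregress adds the r-value and standard error used for the debug titles, but the slope-to-angle conversion is the same.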
|
11,860
|
<ASSISTANT_TASK:>
Python Code:
# Additional Libraries
%matplotlib inline
import matplotlib.pyplot as plt
# Import relevant libraries:
import time
import numpy as np
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
from sklearn.naive_bayes import BernoulliNB
from sklearn.naive_bayes import MultinomialNB
from sklearn.naive_bayes import GaussianNB
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import log_loss
from sklearn.linear_model import LogisticRegression
from sklearn import svm
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
# Import Meta-estimators
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import GradientBoostingClassifier
# Import Calibration tools
from sklearn.calibration import CalibratedClassifierCV
# Set random seed and format print output:
np.random.seed(0)
np.set_printoptions(precision=3)
# Data path to your local copy of Kalvin's "x_data.csv", which was produced by the negated cell above
data_path = "./data/x_data_3.csv"
df = pd.read_csv(data_path, header=0)
x_data = df.drop('category', 1)
y = df.category.as_matrix()
# Impute missing values with mean values:
#x_complete = df.fillna(df.mean())
x_complete = x_data.fillna(x_data.mean())
X_raw = x_complete.as_matrix()
# Scale the data between 0 and 1:
X = MinMaxScaler().fit_transform(X_raw)
####
#X = np.around(X, decimals=2)
####
# Shuffle data to remove any underlying pattern that may exist. Must re-run random seed step each time:
np.random.seed(0)
shuffle = np.random.permutation(np.arange(X.shape[0]))
X, y = X[shuffle], y[shuffle]
# Due to difficulties with log loss and set(y_pred) needing to match set(labels), we will remove the extremely rare
# crimes from the data for quality issues.
X_minus_trea = X[np.where(y != 'TREA')]
y_minus_trea = y[np.where(y != 'TREA')]
X_final = X_minus_trea[np.where(y_minus_trea != 'PORNOGRAPHY/OBSCENE MAT')]
y_final = y_minus_trea[np.where(y_minus_trea != 'PORNOGRAPHY/OBSCENE MAT')]
# Separate training, dev, and test data:
test_data, test_labels = X_final[800000:], y_final[800000:]
dev_data, dev_labels = X_final[700000:800000], y_final[700000:800000]
train_data, train_labels = X_final[100000:700000], y_final[100000:700000]
calibrate_data, calibrate_labels = X_final[:100000], y_final[:100000]
# Create mini versions of the above sets
mini_train_data, mini_train_labels = X_final[:20000], y_final[:20000]
mini_calibrate_data, mini_calibrate_labels = X_final[19000:28000], y_final[19000:28000]
mini_dev_data, mini_dev_labels = X_final[49000:60000], y_final[49000:60000]
# Create list of the crime type labels. This will act as the "labels" parameter for the log loss functions that follow
crime_labels = list(set(y_final))
crime_labels_mini_train = list(set(mini_train_labels))
crime_labels_mini_dev = list(set(mini_dev_labels))
crime_labels_mini_calibrate = list(set(mini_calibrate_labels))
print(len(crime_labels), len(crime_labels_mini_train), len(crime_labels_mini_dev),len(crime_labels_mini_calibrate))
#print(len(train_data),len(train_labels))
#print(len(dev_data),len(dev_labels))
print(len(mini_train_data),len(mini_train_labels))
print(len(mini_dev_data),len(mini_dev_labels))
#print(len(test_data),len(test_labels))
print(len(mini_calibrate_data),len(mini_calibrate_labels))
#print(len(calibrate_data),len(calibrate_labels))
tuned_DT_calibrate_isotonic = RandomForestClassifier(min_impurity_split=1,
n_estimators=100,
bootstrap= True,
max_features=15,
criterion='entropy',
min_samples_leaf=10,
max_depth=None
).fit(train_data, train_labels)
ccv_isotonic = CalibratedClassifierCV(tuned_DT_calibrate_isotonic, method = 'isotonic', cv = 'prefit')
ccv_isotonic.fit(calibrate_data, calibrate_labels)
ccv_predictions = ccv_isotonic.predict(dev_data)
ccv_prediction_probabilities_isotonic = ccv_isotonic.predict_proba(dev_data)
working_log_loss_isotonic = log_loss(y_true = dev_labels, y_pred = ccv_prediction_probabilities_isotonic, labels = crime_labels)
print("Multi-class Log Loss with RF and calibration with isotonic is:", working_log_loss_isotonic)
pd.DataFrame(np.amax(ccv_prediction_probabilities_isotonic, axis=1)).hist()
#clf_probabilities, clf_predictions, labels
def error_analysis_calibration(buckets, clf_probabilities, clf_predictions, labels):
"""
inputs:
clf_probabilities = clf.predict_proba(dev_data)
clf_predictions = clf.predict(dev_data)
labels = dev_labels
"""
#buckets = [0.05, 0.15, 0.3, 0.5, 0.8]
#buckets = [0.15, 0.25, 0.3, 1.0]
correct = [0 for i in buckets]
total = [0 for i in buckets]
lLimit = 0
uLimit = 0
for i in range(len(buckets)):
uLimit = buckets[i]
for j in range(clf_probabilities.shape[0]):
if (np.amax(clf_probabilities[j]) > lLimit) and (np.amax(clf_probabilities[j]) <= uLimit):
if clf_predictions[j] == labels[j]:
correct[i] += 1
total[i] += 1
lLimit = uLimit
print(sum(correct))
print(sum(total))
print(correct)
print(total)
#here we report the classifier accuracy for each posterior probability bucket
accuracies = []
for k in range(len(buckets)):
print(1.0*correct[k]/total[k])
accuracies.append(1.0*correct[k]/total[k])
print('p(pred) <= %.13f total = %3d correct = %3d accuracy = %.3f' \
%(buckets[k], total[k], correct[k], 1.0*correct[k]/total[k]))
plt.plot(buckets,accuracies)
plt.title("Calibration Analysis")
plt.xlabel("Posterior Probability")
plt.ylabel("Classifier Accuracy")
return buckets, accuracies
#i think you'll need to look at how the posteriors are distributed in order to set the best bins in 'buckets'
pd.DataFrame(np.amax(bestLRPredictionProbabilities, axis=1)).hist()
buckets = [0.15, 0.25, 0.3, 1.0]
calibration_buckets, calibration_accuracies = error_analysis_calibration(buckets, clf_probabilities=bestLRPredictionProbabilities, \
clf_predictions=bestLRPredictions, \
labels=mini_dev_labels)
def error_analysis_classification_report(clf_predictions, labels):
"""
inputs:
clf_predictions = clf.predict(dev_data)
labels = dev_labels
"""
print('Classification Report:')
report = classification_report(labels, clf_predictions)
print(report)
return report
# use a distinct name to avoid shadowing sklearn's classification_report function
clf_report = error_analysis_classification_report(clf_predictions=bestLRPredictions, \
labels=mini_dev_labels)
crime_labels_mini_dev
def error_analysis_confusion_matrix(label_names, clf_predictions, labels):
"""
inputs:
clf_predictions = clf.predict(dev_data)
labels = dev_labels
"""
cm = pd.DataFrame(confusion_matrix(labels, clf_predictions, labels=label_names))
cm.columns=label_names
cm.index=label_names
cm.to_csv(path_or_buf="./confusion_matrix.csv")
#print(cm)
return cm
error_analysis_confusion_matrix(label_names=crime_labels_mini_dev, clf_predictions=bestLRPredictions, \
labels=mini_dev_labels)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Local, individual load of updated data set (with weather data integrated) into training, development, and test subsets.
Step2: The Best RF Classifier
Step4: Error Analysis
Step6: Error Analysis
Step8: Error Analysis
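The evaluation metric used throughout this section is multi-class log loss. A minimal hand-rolled version (illustrative only — the notebook itself calls sklearn.metrics.log_loss with the labels parameter, and the crime labels here are toy examples) shows what is being averaged inside the calibration buckets:

```python
import math

def multiclass_log_loss(y_true, probs, labels, eps=1e-15):
    # probs[i][j] = predicted probability that sample i belongs to labels[j]
    idx = {lab: j for j, lab in enumerate(labels)}
    total = 0.0
    for yi, pi in zip(y_true, probs):
        p = min(max(pi[idx[yi]], eps), 1 - eps)  # clip to avoid log(0)
        total += -math.log(p)
    return total / len(y_true)

labels = ["ASSAULT", "THEFT", "OTHER"]
y_true = ["THEFT", "ASSAULT"]
probs = [[0.2, 0.7, 0.1],   # confident and correct: small penalty
         [0.5, 0.3, 0.2]]   # less confident but correct: larger penalty
print(round(multiclass_log_loss(y_true, probs, labels), 4))  # 0.5249
```

Because the penalty is -log of the probability assigned to the true class, miscalibrated but accurate posteriors still score badly — which is exactly why the isotonic CalibratedClassifierCV step above helps.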
|
11,861
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib as mpl
import pandas as pd
import json
import csv
import os
import re
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn import svm
from sklearn.linear_model import SGDClassifier
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
from sklearn.pipeline import Pipeline
from sklearn import datasets, linear_model
from sklearn.linear_model import LinearRegression
import statsmodels.api as sm
from scipy import stats
from statsmodels.sandbox.regression.predstd import wls_prediction_std
from pymongo import MongoClient
from datetime import datetime
def plot_coefficients(classifier, feature_names, top_features=20):
coef = classifier.coef_.ravel()[0:200]
top_positive_coefficients = np.argsort(coef)[-top_features:]
top_negative_coefficients = np.argsort(coef)[:top_features]
top_coefficients = np.hstack([top_negative_coefficients, top_positive_coefficients])
#create plot
plt.figure(figsize=(15, 5))
colors = ['red' if c < 0 else 'blue' for c in coef[top_coefficients]]
plt.bar(np.arange(2 * top_features), coef[top_coefficients], color=colors)
feature_names = np.array(feature_names)
plt.xticks(np.arange(1, 1 + 2 * top_features), feature_names[top_coefficients], rotation=60, ha='right')
plt.show()
#def bayesian_average()
#This is the main folder where all the modules and JSON files are stored on my computer.
#You need to change this to the folder path specific to your computer
file_directory = "/Users/ed/yelp-classification/"
reviews_file = "cleaned_reviews_states_2010.json"
biz_file = "cleaned_business_data.json"
#This is a smaller subset of our overall Yelp data
#I randomly chose 5000 reviews from each state and filed them into the JSON file
#Note that for the overall dataset, we have about 2 million reviews.
#That's why we need to use a data management system like MongoDB in order to hold all our data
#and to more efficiently manipulate it
reviews_json = json.load(open(file_directory+reviews_file))
biz_json = json.load(open(file_directory+biz_file))
for key in reviews_json.keys():
reviews_json[key] = reviews_json[key][0:5000]
#Let's see how reviews_json is set up
#changed this for python 3
print(reviews_json.keys())
reviews_json['OH'][0]
#We can see that on the highest level, the dictionary keys are the different states
#Let's look at the first entry under Ohio
print(reviews_json['OH'][0]['useful'])
#So for each review filed under Ohio, we have many different attributes to choose from
#Let's look at what the review and rating was for the first review filed under Ohio
print(reviews_json['OH'][0]['text'])
print(reviews_json['OH'][0]['stars'])
#We want to split up reviews between text and labels for each state
reviews = []
stars = []
cool = []
useful = []
funny = []
compliment = []
cunumber = []
for key in reviews_json.keys():
for review in reviews_json[key]:
reviews.append(review['text'])
stars.append(review['stars'])
cool.append(review['cool'])
useful.append(review['useful'])
funny.append(review['funny'])
compliment.append(review['funny']+review['useful']+review['cool'])
cunumber.append(review['useful']+review['cool'])
#Just for demonstration, let's pick out the same review example as above but from our respective lists
print(reviews[0])
print(stars[0])
print(cool[0])
print(useful[0])
print(funny[0])
reviews_json['OH'][1]['cool']+1
#added 'low_memory=False' after I got a warning about mixed data types
harvard_dict = pd.read_csv('HIV-4.csv',low_memory=False)
negative_words = list(harvard_dict.loc[harvard_dict['Negativ'] == 'Negativ']['Entry'])
positive_words = list(harvard_dict.loc[harvard_dict['Positiv'] == 'Positiv']['Entry'])
#Use word dictionary from Hu and Liu (2004)
#had to use encoding = "ISO-8859-1" to avoid error
negative_words = open('negative-words.txt', 'r',encoding = "ISO-8859-1").read()
negative_words = negative_words.split('\n')
positive_words = open('positive-words.txt', 'r',encoding = "ISO-8859-1").read()
positive_words = positive_words.split('\n')
total_words = negative_words + positive_words
total_words = list(set(total_words))
review_length = []
negative_percent = []
positive_percent = []
for review in reviews:
length_words = len(review.split())
# lowercase before the membership test: the sentiment word lists are lowercase
neg_words = [x.lower() for x in review.split() if x.lower() in negative_words]
pos_words = [x.lower() for x in review.split() if x.lower() in positive_words]
negative_percent.append(float(len(neg_words))/float(length_words))
positive_percent.append(float(len(pos_words))/float(length_words))
review_length.append(length_words)
regression_df = pd.DataFrame({'stars':stars, 'review_length':review_length, 'neg_percent': negative_percent, 'positive_percent': positive_percent})
use_df = pd.DataFrame({'useful':cunumber, 'review_length':review_length, 'neg_percent': negative_percent, 'positive_percent': positive_percent})
use_df2 = pd.DataFrame({'useful':cunumber, 'review_length':review_length})
#Standardize dependent variables
std_vars = ['neg_percent', 'positive_percent', 'review_length']
for var in std_vars:
len_std = regression_df[var].std()
len_mu = regression_df[var].mean()
regression_df[var] = [(x - len_mu)/len_std for x in regression_df[var]]
#The R-Squared from using the Harvard Dictionary is 0.1 but with the Hu & Liu word dictionary
X = np.column_stack((regression_df.review_length,regression_df.neg_percent, regression_df.positive_percent))
y = regression_df.stars
X = sm.add_constant(X)
est = sm.OLS(y, X)
est2 = est.fit()
print(est2.summary())
# Same regression, but with the cool+useful vote count as the dependent variable:
X = np.column_stack((regression_df.review_length,regression_df.neg_percent, regression_df.positive_percent))
y = use_df2.useful
X = sm.add_constant(X)
est = sm.OLS(y, X)
est2 = est.fit()
print(est2.summary())
x = np.array(regression_df.stars)
#beta = [3.3648, -0.3227 , 0.5033]
y = [int(round(i)) for i in list(est2.fittedvalues)]
y = np.array(y)
errors = np.subtract(x,y)
np.sum(errors)
# fig, ax = plt.subplots(figsize=(5,5))
# ax.plot(x, x, 'b', label="data")
# ax.plot(x, y, 'o', label="ols")
# #ax.plot(x, est2.fittedvalues, 'r--.', label="OLS")
# #ax.plot(x, iv_u, 'r--')
# #ax.plot(x, iv_l, 'r--')
# ax.legend(loc='best');
#Do a QQ plot of the data
fig = sm.qqplot(errors)
plt.show()
star_hist = pd.DataFrame({'Ratings':stars})
star_hist.plot.hist()
cooluse_hist = pd.DataFrame({'Ratings':cunumber})
cooluse_hist.plot.hist(range=[0, 6])
df_list = []
states = list(reviews_json.keys())
for state in states:
stars_state = []
for review in reviews_json[state]:
stars_state.append(review['stars'])
star_hist = pd.DataFrame({'Ratings':stars_state})
df_list.append(star_hist)
for i in range(0, len(df_list)):
print(states[i] + " Rating Distribution")
df_list[i].plot.hist()
plt.show()
#First let's separate out our dataset into a training sample and a test sample
#We specify a training sample percentage of 80% of our total dataset. This is just a rule of thumb
training_percent = 0.8
train_reviews = reviews[0:int(len(reviews)*training_percent)]
test_reviews = reviews[int(len(reviews)*training_percent):len(reviews)]
train_ratings = stars[0:int(len(stars)*training_percent)]
test_ratings = stars[int(len(stars)*training_percent):len(stars)]
vectorizer = CountVectorizer(analyzer = "word", \
tokenizer = None, \
preprocessor = None, \
stop_words = None, \
vocabulary = total_words, \
max_features = 200)
train_data_features = vectorizer.fit_transform(train_reviews)
test_data_features = vectorizer.fit_transform(test_reviews)
output = pd.DataFrame( data={"Reviews": test_reviews, "Rating": test_ratings} )
#Let's do the same exercise as above but use TF-IDF, you can learn more about TF-IDF here:
#https://nlp.stanford.edu/IR-book/html/htmledition/tf-idf-weighting-1.html
tf_transformer = TfidfTransformer(use_idf=True)
train_data_features = tf_transformer.fit_transform(train_data_features)
# fit the IDF weights on the training set only, then apply them to the test set
test_data_features = tf_transformer.transform(test_data_features)
lin_svm = lin_svm.fit(train_data_features, train_ratings)
lin_svm_result = lin_svm.predict(test_data_features)
output['lin_svm'] = lin_svm_result
output['Accurate'] = np.where(output['Rating'] == output['lin_svm'], 1, 0)
accurate_percentage = float(sum(output['Accurate']))/float(len(output))
print(accurate_percentage)
#Here we plot the features with the highest absolute value coefficient weight
plot_coefficients(lin_svm, vectorizer.get_feature_names())
# random_forest = Pipeline([('vect', vectorizer),
# ('tfidf', TfidfTransformer()),
# ('clf', RandomForestClassifier())])
# random_forest.set_params(clf__n_estimators=100, clf__criterion='entropy').fit(train_reviews, train_ratings)
# output['random_forest'] = random_forest.predict(test_reviews)
# output['Accurate'] = np.where(output['Rating'] == output['random_forest'], 1, 0)
# accurate_percentage = float(sum(output['Accurate']))/float(len(output))
# print accurate_percentage
# bagged_dt = Pipeline([('vect', vectorizer),
# ('tfidf', TfidfTransformer()),
# ('clf', BaggingClassifier())])
# bagged_dt.set_params(clf__n_estimators=100, clf__n_jobs=1).fit(train_reviews, train_ratings)
# output['bagged_dt'] = bagged_dt.predict(test_reviews)
# output['Accurate'] = np.where(output['Rating'] == output['bagged_dt'], 1, 0)
# accurate_percentage = float(sum(output['Accurate']))/float(len(output))
# print accurate_percentage
multi_logit = Pipeline([('vect', vectorizer),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB())])
multi_logit.set_params(clf__alpha=1, clf__fit_prior = True, clf__class_prior = None).fit(train_reviews, train_ratings)
output['multi_logit'] = multi_logit.predict(test_reviews)
output['Accurate'] = np.where(output['Rating'] == output['multi_logit'], 1, 0)
accurate_percentage = float(sum(output['Accurate']))/float(len(output))
print(accurate_percentage)
random_forest = Pipeline([('vect', vectorizer),
('tfidf', TfidfTransformer()),
('clf', RandomForestClassifier())])
random_forest.set_params(clf__n_estimators=100, clf__criterion='entropy').fit(train_reviews, train_ratings)
output['random_forest'] = random_forest.predict(test_reviews)
output['Accurate'] = np.where(output['Rating'] == output['random_forest'], 1, 0)
accurate_percentage = float(sum(output['Accurate']))/float(len(output))
print(accurate_percentage)
# the bagged_dt cell above is commented out, so evaluate the fitted random_forest instead
print(metrics.confusion_matrix(test_ratings, random_forest.predict(test_reviews), labels = [1, 2, 3, 4, 5]))
for review in reviews_json[list(reviews_json.keys())[0]]:
print(type(review['date']))
break
reviews_json.keys()
latitude_list = []
longitude_list = []
stars_list = []
count_list = []
state_list = []
for biz in biz_json:
stars_list.append(biz['stars'])
latitude_list.append(biz['latitude'])
longitude_list.append(biz['longitude'])
count_list.append(biz['review_count'])
state_list.append(biz['state'])
biz_df = pd.DataFrame({'ratings':stars_list, 'latitude':latitude_list, 'longitude': longitude_list, 'review_count': count_list, 'state':state_list})
states = [u'OH', u'NC', u'WI', u'IL', u'AZ', u'NV']
cmap, norm = mpl.colors.from_levels_and_colors([1, 2, 3, 4, 5], ['red', 'orange', 'yellow', 'green', 'blue'], extend = 'max')
for state in states:
state_df = biz_df[biz_df.state == state]
state_df_filt = state_df[(np.abs(state_df.longitude-state_df.longitude.mean()) <= 2*state_df.longitude.std()) \
& (np.abs(state_df.latitude-state_df.latitude.mean()) <= 2*state_df.latitude.std())]
plt.ylim(min(state_df_filt.latitude), max(state_df_filt.latitude))
plt.xlim(min(state_df_filt.longitude), max(state_df_filt.longitude))
plt.scatter(state_df_filt.longitude, state_df_filt.latitude, c=state_df_filt.ratings, cmap=cmap, norm=norm)
plt.show()
print(state)
for state in states:
state_df = biz_df[biz_df.state == state]
state_df_filt = state_df[(np.abs(state_df.longitude-state_df.longitude.mean()) <= 2*state_df.longitude.std()) \
& (np.abs(state_df.latitude-state_df.latitude.mean()) <= 2*state_df.latitude.std())]
state_df_filt['longitude'] = (state_df_filt.longitude - state_df.longitude.mean())/state_df.longitude.std()
state_df_filt['latitude'] = (state_df_filt.latitude - state_df.latitude.mean())/state_df.latitude.std()
state_df_filt['review_count'] = (state_df_filt.review_count - state_df.review_count.mean())/state_df.review_count.std()
X = np.column_stack((state_df_filt.longitude, state_df_filt.latitude, state_df_filt.review_count))
y = state_df_filt.ratings
est = sm.OLS(y, X)
est2 = est.fit()
print(est2.summary())
print state
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This is a function that we'll use later to plot the results of a linear SVM classifier
Step2: Load in the sample JSON file and view its contents
Step3: Now, let's create two lists for all the reviews in Ohio
Step4: Let's take a look at the following regression (information is correlated with review length)
Step5: Let's try using dictionary sentiment categories as dependent variables
Step6: NOTE
Step7: Let's plot the overall distribution of ratings aggregated across all of the states
Step8: Let's plot the rating distribution of reviews within each of the states.
Step9: Now let's try to build a simple linear support vector machine
Step10: In order to use the machine learning algorithms in scikit-learn, we first have to initialize a CountVectorizer object. We can use this object to create a matrix representation of each of our words. There are many options that we can specify when we initialize our CountVectorizer object (see documentation for full list) but they essentially all relate to how the words are represented in the final matrix.
Step11: Create dataframe to hold our results from the classification algorithms
Step12: Lets call a linear SVM instance from SK Learn have it train on our subset of reviews. We'll output the results to an output dataframe and then calculate a total accuracy percentage.
Step13: SKLearn uses what's known as a pipeline. Instead of having to declare each of these objects on their own and pass them into each other, we can just create one object with all the necessary options specified and then use that to run the algorithm. For each pipeline below, we specify the vectorizer to be the CountVectorizer object we have defined above, set it to use tfidf, and then specify the classifier that we want to use.
Step14: Test results using all of the states
Step15: Each row and column corresponds to a rating number. For example, element (1,1) is the number of 1 star reviews that were correctly classified. Element (1,2) is the number of 1 star reviews that were incorrectly classified as 2 stars. Therefore, the sum of the diagonal represents the total number of correctly classified reviews. As you can see, the bagged decision tree classifier is classifying many four starred reviews as five starred reviews and vice versa.
Step16: We draw a heat map for each state below. Latitude is on the Y axis and Longitude is on the X axis. The color coding is as follows
Step17: We run the following linear regression model for each of the states
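To make the standardize-then-regress recipe in the last two steps concrete, here is a hedged, self-contained sketch on synthetic data. The column names and coefficients below are invented for illustration; the notebook itself runs statsmodels OLS on the actual Yelp columns.

```python
import numpy as np

def zscore(col):
    # Standardize a column: subtract its mean, divide by its standard deviation.
    return (col - col.mean()) / col.std()

rng = np.random.RandomState(0)
n = 200
longitude = rng.uniform(-90, -80, n)
latitude = rng.uniform(35, 45, n)
review_count = rng.poisson(20, n).astype(float)

# Synthetic ratings with a known linear structure plus noise.
ratings = 3.0 + 0.5 * zscore(longitude) - 0.2 * zscore(latitude) + rng.normal(0, 0.1, n)

# Stack the standardized predictors column-wise, as the notebook does,
# and add an intercept column before solving the least-squares problem.
X = np.column_stack([np.ones(n), zscore(longitude), zscore(latitude), zscore(review_count)])
coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
print(coef)  # approximately [3.0, 0.5, -0.2, 0.0]
```

Recovering the coefficients baked into the synthetic data is the sanity check that standardization makes easy; the same pattern applies per state in the notebook.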
|
11,862
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
from random import normalvariate, uniform, weibullvariate
# Make several sets of data; one randomly sampled
# from a normal distribution and others that aren't.
n = 100
d_norm = [normalvariate(0,1) for x in range(n)]
d_unif = [uniform(0,1) for x in range(n)]
d_weib = [weibullvariate(1,1.5) for x in range(n)]
fig,ax = plt.subplots(1,1,figsize=(5,5))
bins = 20
xmin,xmax = -3,3
ax.hist(d_norm,histtype='step',bins=bins,range=(xmin,xmax),lw=2,
color='red',label='normal')
ax.hist(d_unif,histtype='step',bins=bins,range=(xmin,xmax),lw=2,
color='green',label='uniform')
ax.hist(d_weib,histtype='step',bins=bins,range=(xmin,xmax),lw=2,
color='blue',label='Weibull')
ax.legend(loc='upper left',fontsize=10);
from scipy.stats import norm,probplot
dists = (d_norm,d_unif,d_weib)
labels = ('Normal','Uniform','Weibull')
fig,axarr = plt.subplots(1,3,figsize=(14,4))
for d,ax,l in zip(dists,axarr.ravel(),labels):
probplot(d, dist=norm, plot=ax)
ax.set_title(l)
from scipy.stats import anderson
for d,l in zip(dists,labels):
a2, crit, sig = anderson(d,dist='norm')
if a2 > crit[2]:
print "Anderson-Darling value for {:7} is A^2={:.3f}; reject H0 at 95%.".format(l,a2)
else:
print "Anderson-Darling value for {:7} is A^2={:.3f}; cannot reject H0 at 95%.".format(l,a2)
from numpy.random import binomial
# Monte Carlo solution
N = 100000
p_girl = 0.5
p_boy = 1 - p_girl
n_girl = 0
n_boy = 0
for i in range(N):
has_girl = False
while not has_girl:
child = binomial(1,p_girl)
if child:
n_girl += 1
has_girl = True
else:
n_boy += 1
n_child = n_girl + n_boy
print "Gender ratio is {:.1f}%/{:.1f}% boy/girl.".format(n_boy * 100./n_child, n_girl * 100./n_child)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Make probability plots
Step2: Interesting. Normal distribution follows the quantiles well and has the highest $R^2$ value, but both the uniform and Weibull distributions aren't very different. Need to temper what I think of as a convincing $R^2$ value.
Step3: Note that critical and significance values are always the same in the Anderson-Darling test regardless of the input. The A^2 value must be compared to them; if the test statistic is greater than the critical value at a given significance, then the null hypothesis is rejected with that level of confidence.
Step4: Practice problems
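For the practice problem above, a Python 3 restatement of the Monte Carlo cell using only the standard library (the original cell uses numpy.random.binomial and Python 2 print statements). The punch line is that the "stop after the first girl" rule does not move the population ratio off 50/50:

```python
import random

random.seed(42)

def simulate_families(n_families, p_girl=0.5):
    # Each family has children until its first girl, then stops.
    boys = girls = 0
    for _ in range(n_families):
        while True:
            if random.random() < p_girl:
                girls += 1
                break
            boys += 1
    return boys, girls

boys, girls = simulate_families(100000)
boy_fraction = boys / (boys + girls)
print("boy fraction: %.3f" % boy_fraction)  # close to 0.5
```

Each family contributes exactly one girl and a geometrically distributed number of boys with mean one, so the fractions balance out.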
|
11,863
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
# import lsst sims maf modules
import lsst.sims.maf
import lsst.sims.maf.db as db
import lsst.sims.maf.metrics as lsst_metrics
import lsst.sims.maf.slicers as slicers
import lsst.sims.maf.stackers as stackers
import lsst.sims.maf.plots as plots
import lsst.sims.maf.metricBundles as metricBundles
# import macho modules
import metrics
# make it so that autoreload of modules works
from IPython import get_ipython
ipython = get_ipython()
if '__IPYTHON__' in globals():
ipython.magic('load_ext autoreload')
ipython.magic('autoreload 2')
%matplotlib inline
dir = '/data/des40.a/data/marcelle/lsst-gw/OperationsSimulatorBenchmarkSurveys/'
opsdb = db.OpsimDatabase(dir+'minion_1016_sqlite.db')
outDir = 'notebook_output'
# Initially let's just look at the number of observations in r-band over the first 10 years with default kwargs
sql = 'filter="r" and night < %i' % (365.25*10)
# Calculate the median gap between consecutive observations within a night, in hours.
metric_intranightgap = lsst_metrics.IntraNightGapsMetric(reduceFunc=np.median)
# Calculate the median gap between consecutive observations between nights, in days.
metric_internightgap = lsst_metrics.InterNightGapsMetric(reduceFunc=np.median)
# Uniformity of time between consecutive visits on short time scales:
'''
timeCol : str, optional
The column containing the 'time' value. Default expMJD.
minNvisits : int, optional
The minimum number of visits required within the time interval (dTmin to dTmax).
Default 100.
dTmin : float, optional
The minimum dTime to consider (in days). Default 40 seconds.
dTmax : float, optional
The maximum dTime to consider (in days). Default 30 minutes.
'''
metric_rapidrevisit = lsst_metrics.RapidRevisitMetric(timeCol='expMJD', minNvisits=10,
dTmin=40.0 / 60.0 / 60.0 / 24.0, dTmax=30.0 / 60.0 / 24.0)
# Number of revisits with time spacing less than 24 hours
metric_nrevisit24hr = lsst_metrics.NRevisitsMetric(dT=24*60)
# Use the custom metric in the macho metrics file, which asks whether the light curve
# allows a detection of a mass solar_mass lens
detectable = metrics.massMetric(mass=30.)
# Let's look at the metric results in the galactic coordinate frame
slicer = slicers.HealpixSlicer(latCol='galb', lonCol='gall', nside=16)
#plotFuncs = [plots.HealpixSkyMap()] # only plot the sky maps for now
# Customize the plot format
plotDict_intranightgap = {'colorMin':0, 'colorMax': 1., 'cbarFormat': '%0.2f'} # Set the max on the color bar
plotDict_internightgap = {'colorMin':0,'colorMax': 10.} # Set the max on the color bar
plotDict_rapidrevisit = {'cbarFormat': '%0.2f'}
plotDict_nrevisit24hr = {'colorMin':0,'colorMax': 300.}
plotDict_detectable = {'colorMin':0,'colorMax': 1.}
# Create the MAF bundles for each plot
bundle_intranightgap = metricBundles.MetricBundle(metric_intranightgap, slicer, sql, plotDict=plotDict_intranightgap)#, plotFuncs=plotFuncs)
bundle_internightgap = metricBundles.MetricBundle(metric_internightgap, slicer, sql, plotDict=plotDict_internightgap)#, plotFuncs=plotFuncs)
bundle_rapidrevisit = metricBundles.MetricBundle(metric_rapidrevisit, slicer, sql, plotDict=plotDict_rapidrevisit)#, plotFuncs=plotFuncs)
bundle_nrevisit24hr = metricBundles.MetricBundle(metric_nrevisit24hr, slicer, sql, plotDict=plotDict_nrevisit24hr)#, plotFuncs=plotFuncs)
bundle_detectable = metricBundles.MetricBundle(detectable, slicer, sql, plotDict=plotDict_detectable)#, plotFuncs=plotFuncs)
# Create the query bundle dictionary to run all of the queries in the same run
bdict = {'intragap':bundle_intranightgap, 'intergap':bundle_internightgap,
'rapidrevisit':bundle_rapidrevisit, 'nrevisit24hr':bundle_nrevisit24hr,
'detectable':bundle_detectable}
bg = metricBundles.MetricBundleGroup(bdict, opsdb, outDir=outDir)
# Run the queries
bg.runAll()
# Create the plots
bg.plotAll(closefigs=False)
outDir ='LightCurve'
dbFile = 'minion_1016_sqlite.db'
resultsDb = db.ResultsDb(outDir=outDir)
filters = ['u','g','r','i','z','y']
colors={'u':'cyan','g':'g','r':'y','i':'r','z':'m', 'y':'k'}
# Set RA, Dec for a single point in the sky. in radians. Galactic Center.
ra = np.radians(266.4168)
dec = np.radians(-29.00)
# SNR limit (Don't use points below this limit)
snrLimit = 5.
# Demand this many points above SNR limit before plotting LC
nPtsLimit = 6
# The pass metric just passes data straight through.
metric = metrics.PassMetric(cols=['filter','fiveSigmaDepth','expMJD'])
slicer = slicers.UserPointsSlicer(ra,dec,lonCol='ditheredRA',latCol='ditheredDec')
sql = ''
bundle = metricBundles.MetricBundle(metric,slicer,sql)
bg = metricBundles.MetricBundleGroup({0:bundle}, opsdb,
outDir=outDir, resultsDb=resultsDb)
bg.runAll()
bundle.metricValues.data[0]['filter']
dayZero = bundle.metricValues.data[0]['expMJD'].min()
for fname in filters:
good = np.where(bundle.metricValues.data[0]['filter'] == fname)
plt.scatter(bundle.metricValues.data[0]['expMJD'][good]- dayZero,
bundle.metricValues.data[0]['fiveSigmaDepth'][good],
c = colors[fname], label=fname)
plt.xlabel('Day')
plt.ylabel('5$\sigma$ depth')
plt.legend(scatterpoints=1, loc="upper left", bbox_to_anchor=(1,1))
# Set RA, Dec for a single point in the sky. in radians. LMC.
ra = np.radians(80.8942)
dec = np.radians(-69.756)
# SNR limit (Don't use points below this limit)
snrLimit = 5.
# Demand this many points above SNR limit before plotting LC
nPtsLimit = 6
# The pass metric just passes data straight through.
metric = metrics.PassMetric(cols=['filter','fiveSigmaDepth','expMJD'])
slicer = slicers.UserPointsSlicer(ra,dec,lonCol='ditheredRA',latCol='ditheredDec')
sql = ''
bundle = metricBundles.MetricBundle(metric,slicer,sql)
bg = metricBundles.MetricBundleGroup({0:bundle}, opsdb,
outDir=outDir, resultsDb=resultsDb)
bg.runAll()
bundle.metricValues.data[0]['filter']
dayZero = bundle.metricValues.data[0]['expMJD'].min()
for fname in filters:
good = np.where(bundle.metricValues.data[0]['filter'] == fname)
plt.scatter(bundle.metricValues.data[0]['expMJD'][good]- dayZero,
bundle.metricValues.data[0]['fiveSigmaDepth'][good],
c = colors[fname], label=fname)
plt.xlabel('Day')
plt.ylabel('5$\sigma$ depth')
plt.legend(scatterpoints=1, loc="upper left", bbox_to_anchor=(1,1))
import numpy as np
import matplotlib.pyplot as plt
# import lsst sims maf modules
import lsst.sims.maf
import lsst.sims.maf.db as db
import lsst.sims.maf.metrics as lsst_metrics
import lsst.sims.maf.slicers as slicers
import lsst.sims.maf.stackers as stackers
import lsst.sims.maf.plots as plots
import lsst.sims.maf.metricBundles as metricBundles
# import macho modules
import metrics
# make it so that autoreload of modules works
from IPython import get_ipython
ipython = get_ipython()
if '__IPYTHON__' in globals():
ipython.magic('load_ext autoreload')
ipython.magic('autoreload 2')
%matplotlib inline
dir = '/data/des40.a/data/marcelle/lsst-gw/OperationsSimulatorBenchmarkSurveys/'
opsdb = db.OpsimDatabase(dir+'minion_1016_sqlite.db')
outDir = 'notebook_output'
# Look at the number of observations in i-band over the first nyears years
nyears = 5.
mass = 30.
sql = 'filter="i" and night < %i' % (365.25*nyears)
# Use the custom metric in the macho metrics file, which asks whether the light curve
# allows a detection of a mass solar_mass lens
detectable = metrics.massMetric(mass=mass)
# Let's look at the metric results in the galactic coordinate frame
slicer = slicers.HealpixSlicer(latCol='galb', lonCol='gall', nside=32)
plotDict_detectable = {'colorMin':0,'colorMax': 1.}
bundle_detectable = metricBundles.MetricBundle(detectable, slicer, sql, plotDict=plotDict_detectable)
# Create the query bundle dictionary to run all of the queries in the same run
bdict = {'detectable':bundle_detectable}
bg = metricBundles.MetricBundleGroup(bdict, opsdb, outDir=outDir)
# Run the queries
bg.runAll()
# Create the plots
bg.plotAll(closefigs=False)
dir = '/data/des40.a/data/marcelle/lsst-gw/OperationsSimulatorBenchmarkSurveys/'
opsdb = db.OpsimDatabase(dir+'astro_lsst_01_1064_sqlite.db')
outDir = 'notebook_output'
# Look at the number of observations in i-band over the first nyears years
nyears = 5.
mass = 30.
sql = 'filter="i" and night < %i' % (365.25*nyears)
# Use the custom metric in the macho metrics file, which asks whether the light curve
# allows a detection of a mass solar_mass lens
detectable = metrics.massMetric(mass=mass)
# Let's look at the metric results in the galactic coordinate frame
slicer = slicers.HealpixSlicer(latCol='galb', lonCol='gall', nside=32)
plotDict_detectable = {'colorMin':0,'colorMax': 1.}
bundle_detectable = metricBundles.MetricBundle(detectable, slicer, sql, plotDict=plotDict_detectable)
# Create the query bundle dictionary to run all of the queries in the same run
bdict = {'detectable':bundle_detectable}
bg = metricBundles.MetricBundleGroup(bdict, opsdb, outDir=outDir)
# Run the queries
bg.runAll()
# Create the plots
bg.plotAll(closefigs=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: General Input
Step2: SQL Query
Step3: Metrics
Step4: Slicer
Step5: Plot functions and customization
Step6: Bundles
Step7: Plot a light curve
Step8: Note that something doesn't seem right about the light curve above, since there is >1 mag of extinction towards the center of the Milky Way for bluer bands, yet these are the same 5sigma magnitude depths as towards the LMC (see below).
Step9: Mass metric example
Step10: minion_1016_sqlite.db is the baseline LSST cadence from 2016
Step11: astro_lsst_01_1064.sqlite.db is the "hacked" rolling cadence from the SN group and Rahul Biswas.
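For intuition about what the gap metrics used in these cells measure, here is a toy, dependency-free sketch. It is only an illustration of the idea, not the lsst.sims.maf implementation:

```python
import statistics

def median_internight_gap(mjds):
    # Collapse visit times to one entry per observing night, then take the
    # median gap (in days) between consecutive nights -- roughly the idea
    # behind InterNightGapsMetric(reduceFunc=np.median).
    nights = sorted({int(mjd) for mjd in mjds})
    gaps = [b - a for a, b in zip(nights, nights[1:])]
    return statistics.median(gaps)

# Visits on nights 59000, 59001, 59004 and 59010 (some nights have several visits).
mjds = [59000.1, 59000.2, 59001.3, 59004.0, 59004.5, 59010.2]
print(median_internight_gap(mjds))  # gaps are 1, 3, 6 days -> median 3
```

The real metric evaluates this kind of statistic per HEALPix cell over the opsim visit table; the toy version just makes the reduction explicit.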
|
11,864
|
<ASSISTANT_TASK:>
Python Code:
import usau.reports
import usau.fantasy
from IPython.display import display, HTML
import pandas as pd
pd.options.display.width = 200
pd.options.display.max_colwidth = 200
pd.options.display.max_columns = 200
def display_url_column(df):
    """Helper for formatting url links."""
df.url = df.url.apply(lambda url: "<a href='{base}{url}'>Match Report Link</a>"
.format(base=usau.reports.USAUResults.BASE_URL, url=url))
display(HTML(df.to_html(escape=False)))
# Read data from csv files
usau.reports.d1_college_nats_men_2016.load_from_csvs()
usau.reports.d1_college_nats_women_2016.load_from_csvs()
display_url_column(pd.concat([usau.reports.d1_college_nats_men_2016.missing_tallies,
usau.reports.d1_college_nats_women_2016.missing_tallies])
[["Score", "Gs", "As", "Ds", "Ts", "Team", "Opponent", "url"]])
men_matches = usau.reports.d1_college_nats_men_2016.match_results
women_matches = usau.reports.d1_college_nats_women_2016.match_results
display_url_column(pd.concat([men_matches[(men_matches.Ts == 0) & (men_matches.Gs > 0)],
women_matches[(women_matches.Ts == 0) & (women_matches.Gs > 0)]])
[["Score", "Gs", "As", "Ds", "Ts", "Team", "Opponent", "url"]])
# Read last year's data from csv files
usau.reports.d1_college_nats_men_2015.load_from_csvs()
usau.reports.d1_college_nats_women_2015.load_from_csvs()
display_url_column(pd.concat([usau.reports.d1_college_nats_men_2015.missing_tallies,
usau.reports.d1_college_nats_women_2015.missing_tallies])
[["Score", "Gs", "As", "Ds", "Ts", "Team", "Opponent", "url"]])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Stats Quality for 2016 D-I College Nationals
Step2: Since we should already have the data downloaded as csv files in this repository, we will not need to re-scrape the data. Omit this cell to directly download from the USAU website (may be slow).
Step3: Let's take a look at the games for which the sum of the player goals/assists is less than the final score of the game
Step4: All in all, not too bad! A few of the women's consolation games are missing player statistics, and there are several other games for which a couple of goals or assists were missed. For missing assists, it is technically possible that there were one or more callahans scored in those games, but obviously that's not the case with all ~14 missing assists. Surprisingly, there were 10 more assists recorded by the statkeepers than goals; I would have guessed that assists would be harder to keep track of.
Step5: This implies that there was a pretty good effort made to keep up with counting turns and Ds. By contrast, see how many teams did not keep track of Ds and turns last year (2015)!
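As a toy version of the tally check described above, with plain dictionaries standing in for the real usau.reports match records (the field names here are assumptions for illustration):

```python
def missing_tallies(match):
    # Player goal/assist tallies should sum to the team's final score;
    # anything left over was not recorded by the statkeepers.
    missing_goals = match["score"] - sum(p["goals"] for p in match["players"])
    missing_assists = match["score"] - sum(p["assists"] for p in match["players"])
    return missing_goals, missing_assists

match = {"score": 15,
         "players": [{"goals": 7, "assists": 6},
                     {"goals": 6, "assists": 8}]}
print(missing_tallies(match))  # (2, 1): two goals and one assist unaccounted for
```

The notebook's missing_tallies property does the analogous bookkeeping across every game at Nationals.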
|
11,865
|
<ASSISTANT_TASK:>
Python Code:
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print 'Training data shape: ', X_train.shape
print 'Training labels shape: ', y_train.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print X_train.shape, X_test.shape
from cs231n.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print dists.shape
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Let's compare how fast the implementations are
def time_function(f, *args):
    """Call a function f with args and return the time (in seconds) that it took to execute."""
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print 'Two loop version took %f seconds' % two_loop_time
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print 'One loop version took %f seconds' % one_loop_time
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print 'No loop version took %f seconds' % no_loop_time
# you should see significantly faster performance with the fully vectorized implementation
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
X_train_folds = np.array_split(X_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
for k in k_choices:
for i in xrange(num_folds):
num_test_k = X_train.shape[0] * (num_folds - 1) / num_folds
X_train_k = np.delete(X_train_folds, i, 0).reshape(num_test_k, X_train.shape[1])
y_train_k = np.delete(y_train_folds, i, 0).reshape(num_test_k,)
X_test_k = X_train_folds[i]
y_test_k = y_train_folds[i]
classifier_k = KNearestNeighbor()
classifier_k.train(X_train_k, y_train_k)
dists_k = classifier_k.compute_distances_no_loops(X_test_k)
y_test_pred_k = classifier_k.predict_labels(dists_k, k=k)
num_correct_k = np.sum(y_test_pred_k == y_test_k)
        accuracy_k = float(num_correct_k) / len(y_test_k)
if k not in k_to_accuracies:
k_to_accuracies[k] = []
k_to_accuracies[k].append(accuracy_k)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print 'k = %d, accuracy = %f' % (k, accuracy)
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 10
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps
Step2: Inline Question #1
Step3: You should expect to see approximately 27% accuracy. Now let's try out a larger k, say k = 5
Step5: You should expect to see a slightly better performance than with k = 1.
Step6: Cross-validation
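One detail worth spelling out: the fully vectorized distance computation exercised above typically rests on the expansion ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b. A self-contained sketch of that trick (an illustration, not necessarily the assignment's exact solution):

```python
import numpy as np

def pairwise_l2(X_test, X_train):
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b, evaluated for every pair
    # at once via broadcasting -- no Python loops.
    test_sq = np.sum(X_test ** 2, axis=1)[:, np.newaxis]    # (num_test, 1)
    train_sq = np.sum(X_train ** 2, axis=1)[np.newaxis, :]  # (1, num_train)
    cross = X_test.dot(X_train.T)                           # (num_test, num_train)
    return np.sqrt(np.maximum(test_sq + train_sq - 2 * cross, 0))

rng = np.random.RandomState(0)
A, B = rng.randn(4, 3), rng.randn(5, 3)
brute = np.array([[np.linalg.norm(a - b) for b in B] for a in A])
print(np.allclose(pairwise_l2(A, B), brute))  # True
```

The np.maximum guard clips the tiny negative values that floating-point cancellation can produce before the square root.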
|
11,866
|
<ASSISTANT_TASK:>
Python Code:
import gambit
gambit.__version__
g = gambit.Game.read_game("poker.efg")
g
g.players
g.players["Alice"]
g.players["Alice"].infosets
g.players.chance
g.players.chance.infosets
g.players.chance.infosets[0].actions
deal = g.players.chance.infosets[0]
deal.actions["A"].prob
deal.actions["K"].prob
result = gambit.nash.lcp_solve(g)
len(result)
result[0]
result[0][g.players["Alice"]]
result[0][g.players["Bob"]]
result[0].payoff(g.players["Alice"])
result[0].payoff(g.players["Bob"])
result[0].payoff(g.players["Bob"].infosets[0].actions[0])
result[0].payoff(g.players["Bob"].infosets[0].actions[1])
result[0].belief(g.players["Bob"].infosets[0].members[0])
result[0].belief(g.players["Bob"].infosets[0].members[1])
g.players["Alice"].strategies
g.players["Bob"].strategies
import IPython.display; IPython.display.HTML(g.write('html'))
print g.write('sgame')
msp = result[0].as_strategy()
msp
msp.payoff(g.players["Alice"])
msp.strategy_values(g.players["Alice"])
import pandas
probs = [ gambit.Rational(i, 20) for i in xrange(1, 20) ]
results = [ ]
for prob in probs:
g.players.chance.infosets[0].actions[0].prob = prob
g.players.chance.infosets[0].actions[1].prob = 1-prob
result = gambit.nash.lcp_solve(g)[0]
results.append({ "prob": prob,
"alice_payoff": result.payoff(g.players["Alice"]),
"bluff": result[g.players["Alice"].infosets[1].actions[0]],
"belief": result.belief(g.players["Bob"].infosets[0].members[1]) })
df = pandas.DataFrame(results)
df
import pylab
%matplotlib inline
pylab.plot(df.prob, df.bluff, '-')
pylab.xlabel("Probability Alice gets ace")
pylab.ylabel("Probability Alice bluffs with king")
pylab.show()
pylab.plot(df.prob, df.alice_payoff, '-')
pylab.xlabel("Probability Alice gets ace")
pylab.ylabel("Alice's equilibrium payoff")
pylab.show()
pylab.plot(df.prob, df.belief, '-')
pylab.xlabel("Probability Alice gets ace")
pylab.ylabel("Bob's equilibrium belief")
pylab.ylim(0,1)
pylab.show()
deal.actions[0].prob = gambit.Rational(1,2)
deal.actions[1].prob = gambit.Rational(1,2)
g.outcomes["Alice wins big"]
g.outcomes["Alice wins big"][0] = 3
g.outcomes["Alice wins big"][1] = -3
g.outcomes["Bob wins big"][0] = -3
g.outcomes["Bob wins big"][1] = 3
result = gambit.nash.lcp_solve(g)
len(result)
result[0]
result[0].payoff(g.players["Alice"])
result[0].belief(g.players["Bob"].infosets[0].members[0])
print g.write('nfg')
print g.write('gte')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Gambit version 16.0.0 is the current development version. You can get it from http
Step2: Inspecting a game
Step3: Gambit's .efg format is a serialisation of an extensive game. The format looks somewhat dated (and indeed it was finalised in 1994), but is fast
Step4: The game offers a "Pythonic" interface. Most objects in a game can be accessed via iterable collections.
Step5: All objects have an optional text label, which can be used to retrieve it from the collection
Step6: In this game, Alice has two information sets
Step7: The chance or nature player is a special player in the players collection.
Step8: Gambit does sorting of the objects in each collection, so indexing collections by integer indices also works reliably if you save and load a game again.
Step9: We can assign particular game objects to variables for convenient referencing. In this case, we will explore the strategic effects of changing the relative probabilities of the Ace and King cards.
Step10: In the original version of the game, it was assumed that the Ace and King cards were equally likely to be dealt.
Step11: Computing Nash equilibria
Step12: The result of this method is a list of (mixed) behaviour profiles. (Future
Step13: A behaviour profile looks like a nested list. Entries are of the form profile[player][infoset][action].
Step14: We can compute various interesting quantities about behaviour profiles. Most interesting is perhaps the payoff to each player; because this is a constant-sum game, this is the value of the game.
Step15: Bob is randomising at his information set, so he must be indifferent between his actions there. We can check this.
Step16: As we teach our students, the key to understanding this game is that Alice plays so as to manipulate Bob's beliefs about the likelihood she has the Ace. We can examine Bob's beliefs over the nodes (members) of his one information set.
Step17: Construction of the reduced normal form
Step18: We can also do a quick visualisation of the payoff matrix of the game using the built-in HTML output (plus Jupyter's inline rendering of HTML!)
Step19: Bonus note
Step20: We can convert our behaviour profile to a corresponding mixed strategy profile. This is indexable as a nested list with elements [player][strategy].
Step21: Of course, Alice will receive the same expected payoff from this mixed strategy profile as she would in the original behaviour profile.
Step22: We can also ask what the expected payoffs to each of the strategies are. Alice's last two strategies correspond to folding when she has the Ace, which is dominated.
Step23: Automating/scripting analysis
Step24: As a final experiment, we can also change the payoff structure instead of the probability of the high card. How would the equilibrium change if a Raise/Meet required putting 2 into the pot instead of 1?
Step25: The outcomes member of the game lists all of the outcomes. An outcome can appear at multiple nodes. Outcomes, like all other objects, can be given text labels for easy reference.
Step26: Once again, solve the revised game using Lemke's algorithm on the sequence form.
Step27: The value of the game to Alice is now higher
Step28: Bob's equilibrium belief about Alice's hand is also different of course, as he now is indifferent between meeting and passing Alice's raise when he thinks the chance she has the Ace is 2/3 (instead of 3/4 before).
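The arithmetic behind this belief shift is easy to verify by hand. A minimal stdlib sketch (this is not the Gambit API, just the indifference condition, assuming that meeting a raise of size k risks the pot of 1 + k while passing forfeits only the ante of 1):

```python
from fractions import Fraction

def indifference_belief(k):
    # Bob is indifferent when -p*(1+k) + (1-p)*(1+k) = -1, where p is the
    # probability he assigns to Alice holding the Ace at his information set.
    return Fraction(2 + k, 2 + 2 * k)

print(indifference_belief(1))  # 3/4 with the original raise size of 1
print(indifference_belief(2))  # 2/3 when a raise costs 2
```

Both values match the equilibrium beliefs reported in this notebook.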
Step29: Serialising the game in other formats
Step30: Also, we can write the game out in the XML format used by Game Theory Explorer
|
11,867
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import sklearn
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_digits
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
digits = load_digits()
X_digits = digits.data
y_digits = digits.target
logistic = LogisticRegression()
pca = PCA()
pipe = Pipeline(steps=[('pca', pca), ('logistic', logistic)])
pipe.fit(X_digits, y_digits)
pipe.predict(X_digits[:1])
from sklearn.grid_search import GridSearchCV
n_components = [20, 40, 64] # number of components in PCA
Cs = np.logspace(-4, 0, 3) # Inverse of regularization strength
penalty = ["l1", "l2"] # Norm used by the Logistic regression penalization
class_weight = [None, "balanced"] # Weights associated with classes
estimator = GridSearchCV(pipe,
{"pca__n_components": n_components,
"logistic__C": Cs,
"logistic__class_weight": class_weight,
"logistic__penalty": penalty
}, n_jobs=8, cv=5)
estimator.fit(X_digits, y_digits)
estimator.grid_scores_
print(estimator.best_score_)
print(estimator.best_params_)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Finding the best model
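Under the hood, GridSearchCV enumerates the Cartesian product of the parameter grid and keeps the best-scoring combination. A minimal stdlib sketch of that search, with a hypothetical toy_score standing in for the cross-validated accuracy a real run would compute:

```python
import itertools

param_grid = {"n_components": [20, 40, 64], "C": [0.01, 0.1, 1.0]}

def toy_score(params):
    # stand-in for cross_val_score; real code would fit and evaluate a model
    return -abs(params["n_components"] - 40) - abs(params["C"] - 0.1)

keys = sorted(param_grid)
candidates = [dict(zip(keys, values))
              for values in itertools.product(*(param_grid[k] for k in keys))]
best = max(candidates, key=toy_score)
print(best)  # {'C': 0.1, 'n_components': 40}
```

In the notebook above, the same role is played by the pipeline's cross-validation score over n_components, C, penalty and class_weight.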
|
11,868
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import time
import machine_learning_helper as machine_learning_helper
import metrics_helper as metrics_helper
import sklearn.neighbors, sklearn.linear_model, sklearn.ensemble, sklearn.naive_bayes
from sklearn.model_selection import KFold, train_test_split, ShuffleSplit
from sklearn import model_selection
from sklearn import ensemble
from xgboost.sklearn import XGBClassifier
import scipy as sp
import xgboost as xgb
import matplotlib.pyplot as plt
% matplotlib inline
from sklearn.model_selection import learning_curve
from sklearn import linear_model, datasets
import os
dataFolder = 'cleaned_data'
resultFolder = 'results'
filenameAdress_train_user = 'cleaned_train_user.csv'
filenameAdress_test_user = 'cleaned_test_user.csv'
filenameAdress_time_mean_user_id = 'time_mean_user_id.csv'
filenameAdress_time_total_user_id = 'time_total_user_id.csv'
filenameAdress_total_action_user_id = 'total_action_user_id.csv'
df_train_users = pd.read_csv(os.path.join(dataFolder, filenameAdress_train_user))
df_test_users = pd.read_csv(os.path.join(dataFolder, filenameAdress_test_user))
df_time_mean_user_id = pd.read_csv(os.path.join(dataFolder, filenameAdress_time_mean_user_id))
df_time_total_user_id = pd.read_csv(os.path.join(dataFolder, filenameAdress_time_total_user_id))
df_total_action_user_id = pd.read_csv(os.path.join(dataFolder, filenameAdress_total_action_user_id))
df_total_action_user_id.columns = ['id','action']
df_sessions = pd.merge(df_time_mean_user_id, df_time_total_user_id, on='id', how='outer')
df_sessions = pd.merge(df_sessions, df_total_action_user_id, on='id', how='outer')
df_sessions.columns = ['id','time_mean_user','time_total_user','action']
y_labels, label_enc = machine_learning_helper.buildTargetMat(df_train_users)
X_train, X_test = machine_learning_helper.buildFeatsMat(df_train_users, df_test_users, df_sessions)
#X_train = X_train[200000:201000]
#y_labels = y_labels[200000:201000]
X_train_sparse = sp.sparse.csr_matrix(X_train.values)
cv = model_selection.KFold(n_splits=5, random_state=None, shuffle=True)
number_trees = [125, 300, 500, 600 ]
max_depth = [5, 8, 12, 16, 20]
rf_score_trees = []
rf_score_depth = []
rf_param_trees = []
rf_param_depth = []
#Loop for hyperparameter number_trees
for number_trees_idx, number_trees_value in enumerate(number_trees):
print('number_trees_idx: ',number_trees_idx+1,'/',len(number_trees),', value: ', number_trees_value)
# Random forest
rand_forest_model = ensemble.RandomForestClassifier(n_estimators=number_trees_value, max_depth=14)
#Scores
scores = model_selection.cross_val_score(rand_forest_model, X_train_sparse, y_labels, cv=cv, verbose = 10, n_jobs = 12, scoring=metrics_helper.ndcg_scorer)
rf_score_trees.append(scores.mean())
rf_param_trees.append(number_trees_value)
print('Mean NDCG for this number_trees = ', scores.mean())
# best number of trees from above
print()
print('best NDCG:')
print(np.max(rf_score_trees))
print('best parameter num_trees:')
idx_best = np.argmax(rf_score_trees)
best_num_trees_RF = rf_param_trees[idx_best]
print(best_num_trees_RF)
#Loop for hyperparameter max_depth
for max_depth_idx, max_depth_value in enumerate(max_depth):
print('max_depth_idx: ',max_depth_idx+1,'/',len(max_depth),', value: ', max_depth_value)
# Random forest
rand_forest_model = ensemble.RandomForestClassifier(n_estimators=best_num_trees_RF, max_depth=max_depth_value)
#Scores
scores = model_selection.cross_val_score(rand_forest_model, X_train_sparse, y_labels, cv=cv, verbose = 10, n_jobs = 12, scoring=metrics_helper.ndcg_scorer)
rf_score_depth.append(scores.mean())
rf_param_depth.append(max_depth_value)
    print('Mean NDCG for this max_depth = ', scores.mean())
# best max_depth from above
print()
print('best NDCG:')
print(np.max(rf_score_depth))
print('best parameter max_depth:')
idx_best = np.argmax(rf_score_depth)
best_max_depth_RF = rf_param_depth[idx_best]
print(best_max_depth_RF)
best_num_trees_RF = 600
best_max_depth_RF = 16
rand_forest_model = ensemble.RandomForestClassifier(n_estimators=best_num_trees_RF, max_depth=best_max_depth_RF)
rand_forest_model.fit(X_train_sparse,y_labels)
y_pred1 = rand_forest_model.predict_proba(X_test)
id_test = df_test_users['id']
cts1,idsubmission1 = machine_learning_helper.get5likelycountries(y_pred1, id_test)
ctsSubmission1 = label_enc.inverse_transform(cts1)
# Save to csv
df_submission1 = pd.DataFrame(np.column_stack((idsubmission1, ctsSubmission1)), columns=['id', 'country'])
df_submission1.to_csv(os.path.join(resultFolder, 'submission_country_dest_RF.csv'),index=False)
learning_rates = [0.001, 0.01, 0.05,0.1, 0.2]
max_depth = [3, 5, 7, 9, 12]
n_estimators = [20,30,50,75,100]
gamma = [0,0.3, 0.5, 0.7, 1]
best_gamma_XCG, best_num_estimators_XCG,best_num_depth_XCG, best_learning_rate_XCG = machine_learning_helper.CrossVal_XGB(X_train_sparse, y_labels, cv,max_depth,n_estimators,learning_rates,gamma)
best_learning_rate_XCG = 0.1
best_num_depth_XCG = 5
best_gamma_XCG = 0.7
best_num_estimators_XCG = 75
XGB_model = XGBClassifier(max_depth=best_num_depth_XCG, learning_rate=best_learning_rate_XCG, n_estimators=best_num_estimators_XCG,objective='multi:softprob',
subsample=0.5, colsample_bytree=0.5, gamma = best_gamma_XCG)
XGB_model.fit(X_train,y_labels, eval_metric=metrics_helper.ndcg_scorer)
y_pred2 = XGB_model.predict_proba(X_test)
id_test = df_test_users['id']
cts2,idsubmission2 = machine_learning_helper.get5likelycountries(y_pred2, id_test)
ctsSubmission2 = label_enc.inverse_transform(cts2)
df_submission2 = pd.DataFrame(np.column_stack((idsubmission2, ctsSubmission2)), columns=['id', 'country'])
df_submission2.to_csv(os.path.join(resultFolder, 'submission_country_dest_XGB.csv'),index=False)
# Build 1st layer training matrix, text matrix, target vector
y_labels_binary, X_train_layer1, X_test_layer1 = machine_learning_helper.buildFeatsMatBinary(df_train_users, df_test_users, df_sessions)
#y_labels_binary = y_labels_binary[0:1000]
#X_train_layer1 = X_train_layer1[0:1000]
y_labels_binary = y_labels_binary.astype(np.int8)
# Build 1st layer model
# Cross validation with parameter C
C = [0.1, 1.0, 10, 100, 1000]
logistic_score_C = []
logistic_param_C = []
#Loop for hyperparameter
for C_idx, C_value in enumerate(C):
print('C_idx: ',C_idx+1,'/',len(C),', value: ', C_value)
# Logistic
model = linear_model.LogisticRegression(C = C_value)
#Scores
scores = model_selection.cross_val_score(model, X_train_layer1, y_labels_binary, cv=cv, verbose = 10, scoring='f1', n_jobs = 12)
logistic_score_C.append(scores.mean())
logistic_param_C.append(C_value)
print('Mean f1 for this C = ', scores.mean())
# best C from above
print()
print('best f1:')
print(np.max(logistic_score_C))
print('best parameter C:')
idx_best = np.argmax(logistic_score_C)
best_C_logistic = logistic_param_C[idx_best]
print(best_C_logistic)
# Build model with best parameter from cross validation
logreg_layer1 = linear_model.LogisticRegression(C = best_C_logistic)
logreg_layer1.fit(X_train_layer1, y_labels_binary)
score_training = logreg_layer1.predict(X_train_layer1)
# 1st layer model prediction
prediction_layer_1 = logreg_layer1.predict(X_test_layer1)
from sklearn import metrics
metrics.accuracy_score(y_labels_binary,score_training)
# Build 2nd layer training matrix, text matrix, target vector
#df_train_users.reset_index(inplace=True,drop=True)
#y_labels, label_enc = machine_learning_helper.buildTargetMat(df_train_users)
#y_labels = y_labels[0:1000]
#X_train_layer1 = X_train_layer1[0:1000]
X_train_layer2 = X_train_layer1
X_train_layer2['meta_layer_1'] = pd.Series(y_labels_binary).astype(np.int8)
X_test_layer2 = X_test_layer1
X_test_layer2['meta_layer_1'] = pd.Series(prediction_layer_1).astype(np.int8)
learning_rates = [0.001, 0.01, 0.05,0.1, 0.2]
max_depth = [3, 5, 7, 9, 12]
n_estimators = [20,30,50,75,100]
gamma = [0,0.3, 0.5, 0.7, 1]
cv2 = model_selection.KFold(n_splits=5, random_state=None, shuffle=True)
best_gamma_XCG, best_num_estimators_XCG,best_num_depth_XCG, best_learning_rate_XCG = machine_learning_helper.CrossVal_XGB(X_train_layer2, y_labels, cv2,max_depth,n_estimators,learning_rates,gamma)
best_learning_rate_XCG = 0.1
best_num_depth_XCG = 5
best_gamma_XCG = 0.7
best_num_estimators_XCG = 50
XGB_model = XGBClassifier(max_depth=best_num_depth_XCG, learning_rate=best_learning_rate_XCG, n_estimators=best_num_estimators_XCG,objective='multi:softprob',
subsample=0.5, colsample_bytree=0.5, gamma = best_gamma_XCG)
XGB_model.fit(X_train_layer2,y_labels, eval_metric=metrics_helper.ndcg_scorer)
y_pred2 = XGB_model.predict_proba(X_test_layer2)
id_test = df_test_users['id']
cts2,idsubmission2 = machine_learning_helper.get5likelycountries(y_pred2, id_test)
ctsSubmission2 = label_enc.inverse_transform(cts2)
df_submission2 = pd.DataFrame(np.column_stack((idsubmission2, ctsSubmission2)), columns=['id', 'country'])
df_submission2.to_csv(os.path.join(resultFolder, 'submission_country_dest_stacking.csv'),index=False)
# Create the sub models
estimators = []
model1 = ensemble.RandomForestClassifier(max_depth=best_max_depth_RF, n_estimators= best_num_trees_RF)
estimators.append(('random_forest', model1))
model2 = XGBClassifier(max_depth=best_num_depth_XCG,learning_rate=best_learning_rate_XCG,n_estimators= best_num_estimators_XCG,
objective='multi:softprob',
subsample=0.5, colsample_bytree=0.5, gamma = best_gamma_XCG)
estimators.append(('xgb', model2))
model3 = XGB_model
estimators.append(('2layer', model3))
# Create Voting classifier
finalModel = ensemble.VotingClassifier(estimators,voting='soft')
# Run cross validation score
results = model_selection.cross_val_score(finalModel, X_train, y_labels, cv=cv, scoring = metrics_helper.ndcg_scorer, verbose = 10, n_jobs=12)
print("Voting Classifier Cross Validation Score found:")
print(results.mean())
finalModel.fit(X_train,y_labels)
y_pred1 = finalModel.predict_proba(X_test)
id_test = df_test_users['id']
cts1,idsubmission1 = machine_learning_helper.get5likelycountries(y_pred1, id_test)
ctsSubmission1 = label_enc.inverse_transform(cts1)
df_submission1 = pd.DataFrame(np.column_stack((idsubmission1, ctsSubmission1)), columns=['id', 'country'])
df_submission1.to_csv(os.path.join(resultFolder, 'submission_country_dest_Voting.csv'),index=False)
model = XGBClassifier(max_depth=5, learning_rate=0.1, n_estimators=75,objective='multi:softprob',
subsample=0.5, colsample_bytree=0.5, gamma=0.7 )
model.fit(X_train,y_labels)
machine_learning_helper.plotFeaturesImportance(model,X_train)
fig, ax = plt.subplots(figsize=(15, 10))
xgb.plot_importance(model,height=0.7, ax=ax)
machine_learning_helper.plotFeaturesImportance(XGB_model,X_train_layer2)
fig, ax = plt.subplots(figsize=(15, 10))
xgb.plot_importance(XGB_model,height=0.7, ax=ax)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read .csv files
Step2: Construct sessions data frame
Step3: 1. From data frame to matrix
Step4: 2. From data frame to matrix
Step5: For Memory purpose, the train matrix is formatted in sparse
Step6: 3. Cross validation setup
Step7: 4. Machine Learning
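The cross-validation loops in this section score models with metrics_helper.ndcg_scorer. Assuming that implements NDCG@5 with binary relevance (a single true destination per user), a minimal stdlib sketch of the metric:

```python
import math

def ndcg_at_k(predicted, truth, k=5):
    # Hedged sketch of NDCG with binary relevance: full credit if the true
    # destination is ranked first, log-discounted credit further down,
    # and 0 if it is absent from the top k.
    for rank, label in enumerate(predicted[:k], start=1):
        if label == truth:
            return 1.0 / math.log2(rank + 1)
    return 0.0

print(ndcg_at_k(["NDF", "US", "FR"], "US"))  # second place: 1/log2(3)
```

A prediction that ranks the true country first scores 1.0; lower ranks are discounted, and a miss in the top 5 scores 0.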
Step8: Random forest 600 trees, 16 depth
Step9: Model 2
Step10: XGboost - learning_rate = 0.1, gamma =1, depth = 7, estimators = 75
Step11: Model 3
Step12: Training accuracy
Step13: Layer 2
Step14: 2 layers stack model - learning_rate = 0.1, gamma =0.7, depth = 5, estimators = 75
Step15: 4. Voting Model
Step16: Voting classifier
Step17: 5. Evaluating features importance
Step18: The figure above shows the 20 most important features following the NDCG score. The age feature is by far the most important one.
|
11,869
|
<ASSISTANT_TASK:>
Python Code:
# Copyright 2019 The Google AI Language Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
! pip install tapas-table-parsing
! gsutil cp "gs://tapas_models/2020_10_07/tapas_tabfact_inter_masklm_medium_reset.zip" "tapas_model.zip" && unzip tapas_model.zip
! mv tapas_tabfact_inter_masklm_medium_reset tapas_model
import tensorflow.compat.v1 as tf
import os
import shutil
import csv
import pandas as pd
import IPython
tf.get_logger().setLevel('ERROR')
from tapas.utils import tf_example_utils
from tapas.protos import interaction_pb2
from tapas.utils import number_annotation_utils
import math
os.makedirs('results/tabfact/tf_examples', exist_ok=True)
os.makedirs('results/tabfact/model', exist_ok=True)
with open('results/tabfact/model/checkpoint', 'w') as f:
f.write('model_checkpoint_path: "model.ckpt-0"')
for suffix in ['.data-00000-of-00001', '.index', '.meta']:
shutil.copyfile(f'tapas_model/model.ckpt{suffix}', f'results/tabfact/model/model.ckpt-0{suffix}')
max_seq_length = 512
vocab_file = "tapas_model/vocab.txt"
config = tf_example_utils.ClassifierConversionConfig(
vocab_file=vocab_file,
max_seq_length=max_seq_length,
max_column_id=max_seq_length,
max_row_id=max_seq_length,
strip_column_names=False,
add_aggregation_candidates=False,
)
converter = tf_example_utils.ToClassifierTensorflowExample(config)
def convert_interactions_to_examples(tables_and_queries):
Calls Tapas converter to convert interaction to example.
for idx, (table, queries) in enumerate(tables_and_queries):
interaction = interaction_pb2.Interaction()
for position, query in enumerate(queries):
question = interaction.questions.add()
question.original_text = query
question.id = f"{idx}-0_{position}"
for header in table[0]:
interaction.table.columns.add().text = header
for line in table[1:]:
row = interaction.table.rows.add()
for cell in line:
row.cells.add().text = cell
number_annotation_utils.add_numeric_values(interaction)
for i in range(len(interaction.questions)):
try:
yield converter.convert(interaction, i)
except ValueError as e:
print(f"Can't convert interaction: {interaction.id} error: {e}")
def write_tf_example(filename, examples):
with tf.io.TFRecordWriter(filename) as writer:
for example in examples:
writer.write(example.SerializeToString())
def predict(table_data, queries):
table = [list(map(lambda s: s.strip(), row.split("|")))
for row in table_data.split("\n") if row.strip()]
examples = convert_interactions_to_examples([(table, queries)])
write_tf_example("results/tabfact/tf_examples/test.tfrecord", examples)
write_tf_example("results/tabfact/tf_examples/dev.tfrecord", [])
! python -m tapas.run_task_main \
--task="TABFACT" \
--output_dir="results" \
--noloop_predict \
--test_batch_size={len(queries)} \
--tapas_verbosity="ERROR" \
--compression_type= \
--reset_position_index_per_cell \
--init_checkpoint="tapas_model/model.ckpt" \
--bert_config_file="tapas_model/bert_config.json" \
--mode="predict" 2> error
results_path = "results/tabfact/model/test.tsv"
all_results = []
df = pd.DataFrame(table[1:], columns=table[0])
display(IPython.display.HTML(df.to_html(index=False)))
print()
with open(results_path) as csvfile:
reader = csv.DictReader(csvfile, delimiter='\t')
for row in reader:
supported = int(row["pred_cls"])
all_results.append(supported)
score = float(row["logits_cls"])
position = int(row['position'])
if supported:
print("> SUPPORTS:", queries[position])
else:
print("> REFUTES:", queries[position])
return all_results
# Based on TabFact table 2-1610384-4.html.csv
result = predict(
tournament | wins | top - 10 | top - 25 | events | cuts made
masters tournament | 0 | 0 | 1 | 3 | 2
us open | 0 | 0 | 0 | 4 | 3
the open championship | 0 | 0 | 0 | 2 | 1
pga championship | 0 | 1 | 1 | 4 | 2
totals | 0 | 1 | 2 | 13 | 8
, ["The most frequently occurring number of events is 4", "The most frequently occurring number of events is 3"])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Running a Tapas fine-tuned checkpoint
Step2: Fetch models fom Google Storage
Step3: Imports
Step5: Load checkpoint for prediction
Step7: Predict
|
11,870
|
<ASSISTANT_TASK:>
Python Code:
# Load the network. This network, while in reality a directed graph,
# is intentionally converted to an undirected one for simplification.
G = cf.load_physicians_network()
# Make a Circos plot of the graph
from nxviz import CircosPlot
c = CircosPlot(G)
c.draw()
# Example code.
def in_triangle(G, node):
Returns whether a given node is present in a triangle relationship or not.
# Then, iterate over every pair of the node's neighbors.
for nbr1, nbr2 in combinations(G.neighbors(node), 2):
# Check to see if there is an edge between the node's neighbors.
# If there is an edge, then the given node is present in a triangle.
if G.has_edge(nbr1, nbr2):
# We return because any triangle that is present automatically
# satisfies the problem requirements.
return True
return False
in_triangle(G, 3)
nx.triangles(G, 3)
# Possible answer
def get_triangles(G, node):
neighbors1 = set(G.neighbors(node))
triangle_nodes = set()
triangle_nodes.add(node)
Fill in the rest of the code below.
for nbr1, nbr2 in combinations(neighbors1, 2):
if G.has_edge(nbr1, nbr2):
triangle_nodes.add(nbr1)
triangle_nodes.add(nbr2)
return triangle_nodes
# Verify your answer with the following funciton call. Should return something of the form:
# {3, 9, 11, 41, 42, 67}
get_triangles(G, 3)
# Then, draw out those nodes.
nx.draw(G.subgraph(get_triangles(G, 3)), with_labels=True)
# Compare for yourself that those are the only triangles that node 3 is involved in.
neighbors3 = list(G.neighbors(3))
neighbors3.append(3)
nx.draw(G.subgraph(neighbors3), with_labels=True)
def get_open_triangles(G, node):
There are many ways to represent this. One may choose to represent
only the nodes involved in an open triangle; this is not the
approach taken here.
    Rather, we have code that explicitly enumerates every open triangle present.
open_triangle_nodes = []
neighbors = list(G.neighbors(node))
for n1, n2 in combinations(neighbors, 2):
if not G.has_edge(n1, n2):
open_triangle_nodes.append([n1, node, n2])
return open_triangle_nodes
# # Uncomment the following code if you want to draw out each of the triplets.
# nodes = get_open_triangles(G, 2)
# for i, triplet in enumerate(nodes):
# fig = plt.figure(i)
# nx.draw(G.subgraph(triplet), with_labels=True)
print(get_open_triangles(G, 3))
len(get_open_triangles(G, 3))
list(nx.find_cliques(G))[0:20]
def maximal_cliques_of_size(size, G):
    # Defensive programming check.
    assert isinstance(size, int), "size has to be an integer"
    assert size >= 2, "cliques are of size 2 or greater."
    return [i for i in list(nx.find_cliques(G)) if len(i) == size]
maximal_cliques_of_size(2, G)[0:20]
ccsubgraph_nodes = list(nx.connected_components(G))
ccsubgraph_nodes
# Start by labelling each node in the master graph G by some number
# that represents the subgraph that contains the node.
for i, nodeset in enumerate(ccsubgraph_nodes):
for n in nodeset:
G.nodes[n]['subgraph'] = i
c = CircosPlot(G, node_color='subgraph', node_order='subgraph')
c.draw()
plt.savefig('images/physicians.png', dpi=300)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Question
Step3: In reality, NetworkX already has a function that counts the number of triangles that any given node is involved in. This is probably more useful than knowing whether a node is present in a triangle or not, but the above code was simply for practice.
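The same count can be reproduced with nothing but a plain adjacency-set dict. A minimal stdlib sketch of the computation nx.triangles performs for a single node (the toy graph here is purely illustrative):

```python
from itertools import combinations

adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}

def triangle_count(adj, node):
    # count pairs of the node's neighbours that are themselves connected
    return sum(1 for u, v in combinations(adj[node], 2) if v in adj[u])

print(triangle_count(adj, 1))  # 1: nodes {1, 2, 3} form the only triangle
print(triangle_count(adj, 4))  # 0
```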
Step5: Exercise
Step7: Friend Recommendation
Step8: Triangle closure is also the core idea behind social networks' friend recommendation systems; of course, it's definitely more complicated than what we've implemented here.
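A minimal stdlib sketch of that idea: recommend the pairs of a user's friends who are not yet connected to each other, i.e. close the open triangles through that user (the toy adjacency dict is purely illustrative):

```python
from itertools import combinations

adj = {"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"a"}}

def recommend(adj, node):
    # pairs of node's friends that are not already friends with each other
    return [(u, v) for u, v in combinations(sorted(adj[node]), 2)
            if v not in adj[u]]

print(recommend(adj, "a"))  # [('b', 'd'), ('c', 'd')]
```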
Step9: Exercise
Step10: Connected Components
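A minimal stdlib sketch of what nx.connected_components computes, via breadth-first search over an adjacency-set dict (the toy graph is purely illustrative):

```python
from collections import deque

adj = {1: {2}, 2: {1}, 3: {4}, 4: {3}, 5: set()}

def connected_components(adj):
    seen, components = set(), []
    for start in adj:
        if start in seen:
            continue
        queue, comp = deque([start]), set()
        while queue:
            node = queue.popleft()
            if node in comp:
                continue
            comp.add(node)
            queue.extend(adj[node] - comp)  # enqueue unvisited neighbours
        seen |= comp
        components.append(comp)
    return components

print(connected_components(adj))  # [{1, 2}, {3, 4}, {5}]
```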
Step11: Exercise
|
11,871
|
<ASSISTANT_TASK:>
Python Code:
# As usual, a bit of setup
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
returns relative error
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print '%s: ' % k, v.shape
# Check the training-time forward pass by checking means and variances
# of features both before and after batch normalization
# Simulate the forward pass for a two-layer network
N, D1, D2, D3 = 200, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(W1)).dot(W2)
print 'Before batch normalization:'
print ' means: ', a.mean(axis=0)
print ' stds: ', a.std(axis=0)
# Means should be close to zero and stds close to one
print 'After batch normalization (gamma=1, beta=0)'
a_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})
print ' mean: ', a_norm.mean(axis=0)
print ' std: ', a_norm.std(axis=0)
# Now means should be close to beta and stds close to gamma
gamma = np.asarray([1.0, 2.0, 3.0])
beta = np.asarray([11.0, 12.0, 13.0])
a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})
print 'After batch normalization (nontrivial gamma, beta)'
print ' means: ', a_norm.mean(axis=0)
print ' stds: ', a_norm.std(axis=0)
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, D1, D2, D3 = 200, 50, 60, 3
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
bn_param = {'mode': 'train'}
gamma = np.ones(D3)
beta = np.zeros(D3)
for t in xrange(50):
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
batchnorm_forward(a, gamma, beta, bn_param)
bn_param['mode'] = 'test'
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print 'After batch normalization (test-time):'
print ' means: ', a_norm.mean(axis=0)
print ' stds: ', a_norm.std(axis=0)
# Gradient check batchnorm backward pass
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: batchnorm_forward(x, gamma, beta, bn_param)[0]
fb = lambda b: batchnorm_forward(x, gamma, beta, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = batchnorm_backward(dout, cache)
print 'dx error: ', rel_error(dx_num, dx)
print 'dgamma error: ', rel_error(da_num, dgamma)
print 'dbeta error: ', rel_error(db_num, dbeta)
N, D = 100, 500
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
out, cache = batchnorm_forward(x, gamma, beta, bn_param)
t1 = time.time()
dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)
t2 = time.time()
dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)
t3 = time.time()
print 'dx difference: ', rel_error(dx1, dx2)
print 'dgamma difference: ', rel_error(dgamma1, dgamma2)
print 'dbeta difference: ', rel_error(dbeta1, dbeta2)
print 'speedup: %.2fx' % ((t2 - t1) / (t3 - t2))
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print 'Running check with reg = ', reg
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64,
use_batchnorm=True)
loss, grads = model.loss(X, y)
print 'Initial loss: ', loss
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))
if reg == 0: print
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 2e-2
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
bn_solver.train()
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
solver.train()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label='baseline')
plt.plot(bn_solver.loss_history, 'o', label='batchnorm')
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label='baseline')
plt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label='baseline')
plt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(10, 9)
plt.show()
# Try training deep nets with and without batchnorm across a range of weight scales
hidden_dims = [50, 50, 50, 50, 50, 50, 50]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
bn_solvers = {}
solvers = {}
weight_scales = np.logspace(-4, 0, num=20)
for i, weight_scale in enumerate(weight_scales):
print 'Running weight scale %d / %d' % (i + 1, len(weight_scales))
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
bn_solver.train()
bn_solvers[weight_scale] = bn_solver
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
solver.train()
solvers[weight_scale] = solver
# Plot results of weight scale experiment
best_train_accs, bn_best_train_accs = [], []
best_val_accs, bn_best_val_accs = [], []
final_train_loss, bn_final_train_loss = [], []
for ws in weight_scales:
best_train_accs.append(max(solvers[ws].train_acc_history))
bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))
best_val_accs.append(max(solvers[ws].val_acc_history))
bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))
final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))
bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))
plt.subplot(3, 1, 1)
plt.title('Best val accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best val accuracy')
plt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
plt.title('Best train accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best training accuracy')
plt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')
plt.legend()
plt.subplot(3, 1, 3)
plt.title('Final training loss vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Final training loss')
plt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')
plt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')
plt.legend()
#plt.gcf().set_size_inches(10, 15)
plt.gcf().set_size_inches(9,7)
#fg = plt.figure()
#plt.savefig('vsvs.png')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Batch Normalization
Step2: Batch normalization
Step3: Batch Normalization
Step4: Batch Normalization
Step5: Fully Connected Nets with Batch Normalization
Step6: Batchnorm for deep networks
Step7: Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.
Step8: Batch normalization and initialization
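A minimal NumPy sketch of the training-time normalization the steps above refer to (assuming the standard per-feature mean/variance formulation; `batchnorm_forward` here is a simplified stand-in, not the assignment's full API with learnable running statistics):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the batch, then scale and shift.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.randn(64, 10) * 5.0 + 3.0   # deliberately badly scaled input
out = batchnorm_forward(x, gamma=np.ones(10), beta=np.zeros(10))
print(out.mean(axis=0).round(6), out.std(axis=0).round(3))
```

After normalization each feature has (approximately) zero mean and unit standard deviation regardless of the input scale, which is why batchnorm reduces sensitivity to the weight initialization scale.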
|
11,872
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error, mean_absolute_error
# initialise & fit a ridge regression model with alpha set to 1
# if the model is overfitting, increase the alpha value
model = Ridge(alpha=1)
model.fit(X_train, y_train)
# create dictionary that contains the feature coefficients
coef = dict(zip(X_train.columns, model.coef_.T))
print(coef)
# make prediction for test data
y_pred = model.predict(X_test)
# evaluate performance
print('RMSE:',mean_squared_error(y_test, y_pred, squared = False))
print('MAE:',mean_absolute_error(y_test, y_pred))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
11,873
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import xgboost
import shap
N = 40000
M = 2
# randomly create binary features for (is_young, and is_female)
X = (np.random.randn(N,2) > 0) * 1
# force the first sample to be a young boy
X[0,0] = 1
X[0,1] = 0
# you survive only if you are young or female
y = ((X[:,0] + X[:,1]) > 0) * 1
model = xgboost.XGBRegressor(n_estimators=100, learning_rate=0.1)
model.fit(X, y)
model.predict(X)
explainer = shap.TreeExplainer(model, X, feature_dependence="independent")
shap_values = explainer.shap_values(X[:1,:])
print("explainer.expected_value:", explainer.expected_value.round(4))
print("SHAP values for (is_young = True, is_female = False):", shap_values[0].round(4))
print("model output:", (explainer.expected_value + shap_values[0].sum()).round(4))
explainer = shap.TreeExplainer(model, X[y == 0,:], feature_dependence="independent")
shap_values = explainer.shap_values(X[:1,:])
print("explainer.expected_value:", explainer.expected_value.round(4))
print("SHAP values for (is_young = True, is_female = False):", shap_values[0].round(4))
print("model output:", (explainer.expected_value + shap_values[0].sum()).round(4))
explainer = shap.TreeExplainer(model, X[y == 1,:], feature_dependence="independent")
shap_values = explainer.shap_values(X[:1,:])
print("explainer.expected_value:", explainer.expected_value.round(4))
print("SHAP values for (is_young = True, is_female = False):", shap_values[0].round(4))
print("model output:", (explainer.expected_value + shap_values[0].sum()).round(4))
explainer = shap.TreeExplainer(model, np.ones((1,M)), feature_dependence="independent")
shap_values = explainer.shap_values(X[:10,:])
shap_values[0:3].round(4)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a dataset following an OR function
Step2: Train an XGBoost model to mimic this OR function
Step3: Explain the prediction for a young boy
Step4: Using only negative examples for the background distribution
Step5: Using only positive examples for the background distribution
Step6: Using young women for the background distribution
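The background-dependence illustrated above can be checked by hand. Below is a hedged brute-force Shapley computation for the two-feature OR function, independent of xgboost/shap (the function and variable names are my own, not part of the shap API):

```python
def f(young, female):
    # The function the model above approximates: survive if young OR female.
    return 1.0 if (young or female) else 0.0

def shapley_two_features(x, background):
    # Exact Shapley values for a 2-feature game: average each feature's
    # marginal contribution over both orderings, with "absent" features
    # replaced by the background value (interventional expectation with
    # a single background sample).
    def val(subset):
        args = [x[i] if i in subset else background[i] for i in (0, 1)]
        return f(*args)
    phi0 = 0.5 * ((val({0}) - val(set())) + (val({0, 1}) - val({1})))
    phi1 = 0.5 * ((val({1}) - val(set())) + (val({0, 1}) - val({0})))
    return phi0, phi1

# A young boy explained against an "old man" background:
print(shapley_two_features((1, 0), (0, 0)))   # (1.0, 0.0)
# Against a "young woman" background both attributions vanish:
print(shapley_two_features((1, 0), (1, 1)))   # (0.0, 0.0)
```

The second case matches the last experiment: with young women as the background, the expected value already equals the model output, so all SHAP values are zero.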
|
11,874
|
<ASSISTANT_TASK:>
Python Code:
import sys
print('{0[0]}.{0[1]}'.format(sys.version_info))
pi = 3.1416
radio = 5
area= pi * radio**2
print(area)
color_list_1 = set(["White", "Black", "Red"])
color_list_2 = set(["Red", "Green"])
color_list_1 - color_list_2
path = 'C:/Users/Margarita/Documents/Mis_documentos/Biologia_EAFIT/Semestre_IX/Programacion/'
size = len (path)
guardar = ""
for i in range(3,size):
if path[i] != '/':
guardar = guardar + path[i]
else:
print(guardar)
guardar = ""
my_list = [5,7,8,9,17]
sum_list = sum (my_list)
print(sum_list)
elemento_a_insertar = 'E'
my_list = [1, 2, 3, 4]
elemento_a_insertar = 'E'
my_list = [1, 2, 3, 4]
size = len (my_list)
carpeta = []
for i in range(size):
carpeta = carpeta + [elemento_a_insertar,my_list[i]]
my_list = carpeta
print (my_list)
N = 3
my_list = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n']
N=3
lista=[]
listaa = []
my_list = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n']
size = len(my_list)
for i in range(N):
lista = lista + [listaa]
for i in range (size):
lista[i%N] = lista[i%N] + [my_list[i]]
print(lista)
list_of_lists = [ [1,2,3], [4,5,6], [10,11,12], [7,8,9] ]
list_of_lists = [ [1,2,3], [4,5,6], [10,11,12], [7,8,9] ]
size = len(list_of_lists)
carpeta = list_of_lists[1]
for i in range(size):
if sum(list_of_lists[i]) > sum(carpeta):
carpeta = list_of_lists[i]
print(carpeta)
N = 5
N = 5
diccio = {}
for i in range(1,N+1):
diccio [i]= i**2
print(diccio)
dictionary_list=[{1:10, 2:20} , {3:30, 4:40}, {5:50,6:60}]
dictionary_list=[{1:10, 2:20} , {3:30, 4:40}, {5:50,6:60}]
final= {}
for i in dictionary_list:
for k in i:
final[k] = i[k]
print(final)
dictionary_list=[{'numero': 10, 'cantidad': 5} , {'numero': 12, 'cantidad': 3}, {'numero': 5, 'cantidad': 45}]
def diferencia_conjuntos(color_list_1, color_list_2):
print (color_list_1 - color_list_2)
# Implement the function
diferencia_conjuntos(
color_list_1 = set(["White", "Black", "Red"]) ,
color_list_2 = set(["Red", "Green"]))
def max_list_of_lists(list_of_lists):
size = len(list_of_lists)
carpeta = list_of_lists[1]
for i in range(size):
if sum(list_of_lists[i]) > sum(carpeta):
carpeta = list_of_lists[i]
print(carpeta)
# Implement the function
list_of_lists = [ [1,2,3], [4,5,6], [10,11,12], [7,8,9] ]
max_list_of_lists (list_of_lists)
def diccionario_cuadradovalor(N):
diccio = {}
final = {}
for i in range(1,N+1):
final = diccio [i]= i**2
print(diccio)
# Implement the function:
N = 5
diccionario_cuadradovalor(N)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Compute the area of a circle of radius 5
Step2: 3. Write code that prints all the colors that are in color_list_1 and not present in color_list_2
Step3: 4. Print one line for each folder that makes up the path where Python is running
Step4: List handling
Step5: 6. Insert elemento_a_insertar before each element of my_list
Step6: The expected output is a list like this
Step7: 7. Split my_list into a list of lists every N elements
Step8: Expected output
Step9: 8. Find the list inside list_of_lists whose elements have the largest sum
Step10: Expected output
Step11: Dictionary handling
Step12: Expected output
Step13: 10. Concatenate the dictionaries in dictionary_list to create a new one
Step14: Expected output
Step15: 11. Add a new key "cuadrado" with the value of "numero" of each dictionary squared
Step16: Expected output
Step17: 13. Define and call a function that receives a list of lists as a parameter and solves problem 8
Step18: 14. Define and call a function that receives a parameter N and solves problem 9
|
11,875
|
<ASSISTANT_TASK:>
Python Code:
from itertools import accumulate, islice
def cubocta():
# Classic Generator: Cuboctahedral / Icosahedral #s
# https://oeis.org/A005901
yield 1 # nuclear ball
f = 1
while True:
elem = 10 * f * f + 2 # f for frequency
yield elem # <--- pause / resume here
f += 1
def cummulative(n):
# https://oeis.org/A005902 (crystal ball sequence)
yield from islice(accumulate(cubocta()),0,n)
print("{:=^30}".format(" Crystal Ball Sequence "))
print("{:^10} {:^10}".format("Layers", "Points"))
for f, out in enumerate(cummulative(30),start=1):
print("{:>10} {:>10}".format(f, out))
from itertools import islice
def pascal():
row = [1]
while True:
yield row
row = [i+j for i,j in zip([0]+row, row+[0])]
print("{0:=^60}".format(" Pascal's Triangle "))
print()
for r in islice(pascal(),0,11):
print("{:^60}".format("".join(map(lambda n: "{:>5}".format(n), r))))
from IPython.display import YouTubeVideo
YouTubeVideo("9xUBhhM4vbM")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Oregon Curriculum Network <br />
Step3: Octet Truss
Step4: Each number in Pascal's Triangle may be understood as the number of unique pathways to that position, were falling balls introduced through the top and allowed to fall left or right to the next row down. This apparatus is sometimes called a Galton Board.
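The pathway-counting claim can be checked by brute force: enumerating every left/right path of a falling ball reproduces a row of Pascal's Triangle (a small self-contained sketch; `galton_paths` is an illustrative name, and `pascal` is redefined here so the block runs on its own):

```python
from itertools import islice, product

def pascal():
    # Same generator as above, redefined so this sketch is self-contained.
    row = [1]
    while True:
        yield row
        row = [i + j for i, j in zip([0] + row, row + [0])]

def galton_paths(rows):
    # Brute-force path count: slot index = number of rightward bounces.
    counts = [0] * (rows + 1)
    for path in product((0, 1), repeat=rows):
        counts[sum(path)] += 1
    return counts

row5 = list(islice(pascal(), 6))[-1]
print(row5, galton_paths(5))  # both [1, 5, 10, 10, 5, 1]
```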
|
11,876
|
<ASSISTANT_TASK:>
Python Code:
y_sum = [0] * len(vol[0,:,0])
for i in range(len(vol[0,:,0])):
y_sum[i] = sum(sum(vol[:,i,:]))
ax = sns.barplot(x=range(len(y_sum)), y=y_sum, color="b")
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
from scipy.signal import argrelextrema
def local_minima(a):
return argrelextrema(a, np.less)
whole_volume_minima = local_minima(np.array(y_sum))
whole_volume_minima
CHUNK_SIZE = 25
sections = [(i*CHUNK_SIZE, (i+1)*CHUNK_SIZE) for i in range(len(vol[:,0,0]) / CHUNK_SIZE)]
histogram = {}
for s in sections:
section = vol[s[0]:s[1]]
histogram[s] = [0] * len(vol[0,:,0])
for i in range(len(vol[0,:,0])):
histogram[s][i] = sum(sum(vol[s[0]:s[1],i,:]))
h_local_minima = []
for t, h in histogram.iteritems():
h_local_minima.extend([i for i in local_minima(np.array(h))])
total_histogram = [item for sublist in h_local_minima for item in sublist]
sns.distplot(total_histogram, bins=26)
sns.distplot(total_histogram, bins=15)
scatterable = []
i = 0
for h in h_local_minima:
[scatterable.append([i, m]) for m in h]
i += 1
plt.scatter(x=[s[0] for s in scatterable], y=[s[1] for s in scatterable])
from sklearn.cluster import KMeans
plt.scatter([0] * len(total_histogram), total_histogram)
NUM_CLUSTERS = 3
k3cluster = KMeans(n_clusters=NUM_CLUSTERS)
total_histogram.sort()
clusters_for_th = k3cluster.fit_predict(np.array(total_histogram).reshape(-1, 1))
clusters = { n: [] for n in range(NUM_CLUSTERS) }
for i in range(len(total_histogram)):
clusters[clusters_for_th[i]].append(total_histogram[i])
clusters
cluster_means = [np.mean(v) for _, v in clusters.iteritems() ]
cluster_means
from PIL import Image
import urllib, cStringIO
file = cStringIO.StringIO(urllib.urlopen("http://openconnecto.me/ocp/ca/bock11/image/xy/7/350,850/50,936/2917/").read())
img = Image.open(file)
img_array = np.array(img)
cluster_means = np.array(cluster_means)
cluster_means_mapped = (cluster_means / vol.shape[1]) * img_array.shape[0]
for i in cluster_means_mapped:
img_array[[i, i+10, i-10], :] -= 50
Image.fromarray(img_array)
yflat = np.amax(vol, axis=1)
frame_y = pd.DataFrame(yflat)
sns.heatmap(frame_y)
processkmeans = KMeans()
print yflat
processkmeans.fit_predict(yflat)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Above, we see a histogram of y_sum that indicates a local minimum at the 12th layer of y-sampling, which co-locates with where we anticipate the boundary between layers I and II. Here is the biological substantiation:
Step2: Now let's examine smaller chunks of the volume
Step3: This coincides with our understanding that our sample space extends midway into layer 4, but covers all of layers 1, 2, and 3.
Step4: Now we can get the centroids from these clusters. I wish I understood what I was doing.
Step5: Now we can assume that the means of these clusters are the actual boundaries between cortex.
Step6: 4. Verifying our statistical boundaries against an image
Step7: 5. Finding Descending Processes in Cortex
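The boundary-finding steps above hinge on local minima of the summed-intensity histogram. As a dependency-free sketch equivalent to `argrelextrema(np.array(a), np.less)[0]` for interior points (the helper name is mine):

```python
def local_minima(a):
    # Indices i where a[i] is strictly smaller than both neighbours --
    # endpoints are excluded, matching argrelextrema's behavior.
    return [i for i in range(1, len(a) - 1) if a[i - 1] > a[i] < a[i + 1]]

print(local_minima([3, 2, 1, 2, 3, 2, 3, 4]))  # [2, 5]
```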
|
11,877
|
<ASSISTANT_TASK:>
Python Code:
import math
import shutil
import numpy as np
import pandas as pd
import tensorflow as tf
print(tf.__version__)
tf.logging.set_verbosity(tf.logging.INFO)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
df = pd.read_csv("https://storage.googleapis.com/ml_universities/california_housing_train.csv", sep=",")
df.head()
df.describe()
df['num_rooms'] = df['total_rooms'] / df['households']
df.describe()
# Split into train and eval
np.random.seed(seed=1) #makes split reproducible
msk = np.random.rand(len(df)) < 0.8
traindf = df[msk]
evaldf = df[~msk]
OUTDIR = './housing_trained'
def train_and_evaluate(output_dir, num_train_steps):
estimator = #TODO: Use LinearRegressor estimator
#Add rmse evaluation metric
def rmse(labels, predictions):
pred_values = tf.cast(predictions['predictions'],tf.float64)
return {'rmse': tf.metrics.root_mean_squared_error(labels, pred_values)}
estimator = tf.contrib.estimator.add_metrics(estimator,rmse)
train_spec=tf.estimator.TrainSpec(
input_fn = ,#TODO: use tf.estimator.inputs.pandas_input_fn
max_steps = num_train_steps)
eval_spec=tf.estimator.EvalSpec(
input_fn = ,#TODO: use tf.estimator.inputs.pandas_input_fn
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
# Run training
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = 100)
SCALE = 100000
OUTDIR = './housing_trained'
def train_and_evaluate(output_dir, num_train_steps):
estimator = #TODO
#Add rmse evaluation metric
def rmse(labels, predictions):
pred_values = tf.cast(predictions['predictions'],tf.float64)
return {'rmse': tf.metrics.root_mean_squared_error(labels*SCALE, pred_values*SCALE)}
estimator = tf.contrib.estimator.add_metrics(estimator,rmse)
train_spec=tf.estimator.TrainSpec(
input_fn = ,#TODO
max_steps = num_train_steps)
eval_spec=tf.estimator.EvalSpec(
input_fn = ,#TODO
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
# Run training
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = 100)
SCALE = 100000
OUTDIR = './housing_trained'
def train_and_evaluate(output_dir, num_train_steps):
myopt = #TODO: use tf.train.FtrlOptimizer and set learning rate
estimator = tf.estimator.LinearRegressor(
model_dir = output_dir,
feature_columns = [tf.feature_column.numeric_column('num_rooms')],
optimizer = myopt)
#Add rmse evaluation metric
def rmse(labels, predictions):
pred_values = tf.cast(predictions['predictions'],tf.float64)
return {'rmse': tf.metrics.root_mean_squared_error(labels*SCALE, pred_values*SCALE)}
estimator = tf.contrib.estimator.add_metrics(estimator,rmse)
train_spec=tf.estimator.TrainSpec(
input_fn = ,#TODO: make sure to specify batch_size
max_steps = num_train_steps)
eval_spec=tf.estimator.EvalSpec(
input_fn = ,#TODO
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
# Run training
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = 100)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, we'll load our data set.
Step2: Examine the data
Step3: In this exercise, we'll be trying to predict median_house_value. It will be our label (sometimes also called a target). Can we use total_rooms as our input feature? What's going on with the values for that feature?
Step4: Build the first model
Step5: 1. Scale the output
Step6: 2. Change learning rate and batch size
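A toy illustration of the label-scaling idea above (SCALE = 100000): dividing the target by a constant rescales the learned weight by the same constant, so predictions must be multiplied back. This is a plain-NumPy sketch under my own names, not the estimator API:

```python
import numpy as np

def sgd_mse(x, y, lr, steps=500):
    # Fit y ~ w * x by plain gradient descent on mean squared error.
    w = 0.0
    for _ in range(steps):
        grad = 2 * np.mean((w * x - y) * x)
        w -= lr * grad
    return w

SCALE = 100000.0
x = np.random.RandomState(0).rand(100)
y_raw = SCALE * x                        # house-value-like magnitudes
w_scaled = sgd_mse(x, y_raw / SCALE, lr=0.1)
print(round(w_scaled, 4))                # ~1.0; predict with w_scaled * x * SCALE
```

Because the update rule is linear in the target, the raw-target fit is exactly SCALE times the scaled-target fit; scaling mainly makes losses and RMSE values human-readable, as in the notebook's rmse metric.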
|
11,878
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import ipywidgets as widgets
from traitlets import Unicode, validate
class HelloWidget(widgets.DOMWidget):
_view_name = Unicode('HelloView').tag(sync=True)
_view_module = Unicode('hello').tag(sync=True)
%%javascript
define('hello', ["jupyter-js-widgets"], function(widgets) {
});
%%javascript
require.undef('hello');
define('hello', ["jupyter-js-widgets"], function(widgets) {
// Define the HelloView
var HelloView = widgets.DOMWidgetView.extend({
});
return {
HelloView: HelloView
}
});
%%javascript
require.undef('hello');
define('hello', ["jupyter-js-widgets"], function(widgets) {
var HelloView = widgets.DOMWidgetView.extend({
// Render the view.
render: function() {
this.el.textContent = 'Hello World!';
},
});
return {
HelloView: HelloView
};
});
HelloWidget()
class HelloWidget(widgets.DOMWidget):
_view_name = Unicode('HelloView').tag(sync=True)
_view_module = Unicode('hello').tag(sync=True)
value = Unicode('Hello World!').tag(sync=True)
%%javascript
require.undef('hello');
define('hello', ["jupyter-js-widgets"], function(widgets) {
var HelloView = widgets.DOMWidgetView.extend({
render: function() {
this.el.textContent = this.model.get('value');
},
});
return {
HelloView : HelloView
};
});
%%javascript
require.undef('hello');
define('hello', ["jupyter-js-widgets"], function(widgets) {
var HelloView = widgets.DOMWidgetView.extend({
render: function() {
this.value_changed();
this.model.on('change:value', this.value_changed, this);
},
value_changed: function() {
this.el.textContent = this.model.get('value');
},
});
return {
HelloView : HelloView
};
});
w = HelloWidget()
w
w.value = 'test'
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Building a Custom Widget - Hello World
Step2: sync=True traitlets
Step3: Define the view
Step4: Render method
Step5: Test
Step6: Making the widget stateful
Step7: Accessing the model from the view
Step8: Dynamic updates
Step9: Test
|
11,879
|
<ASSISTANT_TASK:>
Python Code:
import torch as T
import torch.autograd
import numpy as np
'''
Define a scalar variable, set requires_grad to be true to add it to backward path for computing gradients
It is actually very simple to use backward()
first define the computation graph, then call backward()
'''
x = T.randn(1, 1, requires_grad=True) #x is a leaf created by user, thus grad_fn is none
print('x', x)
#define an operation on x
y = 2 * x
print('y', y)
#define one more operation to check the chain rule
z = y ** 3
print('z', z)
#yes, it is just as simple as this to compute gradients:
z.backward()
print('z gradient:', z.grad)
print('y gradient:', y.grad)
print('x gradient:', x.grad, 'Requires gradient?', x.grad.requires_grad) # note that x.grad is also a tensor
x = T.randn(1, 1, requires_grad=True) #x is a leaf created by user, thus grad_fn is none
print('x', x)
#define an operation on x
y = 2 * x
#define one more operation to check the chain rule
z = y ** 3
z.backward(retain_graph=True)
print('Keeping the default value of grad_tensors gives')
print('z gradient:', z.grad)
print('y gradient:', y.grad)
print('x gradient:', x.grad)
x.grad.data.zero_()
z.backward(T.Tensor([[1]]), retain_graph=True)
print('Set grad_tensors to 1 gives')
print('z gradient:', z.grad)
print('y gradient:', y.grad)
print('x gradient:', x.grad)
x.grad.data.zero_()
z.backward(T.Tensor([[0.1]]), retain_graph=True)
print('Set grad_tensors to 0.1 gives')
print('z gradient:', z.grad)
print('y gradient:', y.grad)
print('x gradient:', x.grad)
x.grad.data.zero_()
z.backward(T.FloatTensor([[0.5]]), retain_graph=True)
print('Modifying the default value of grad_variables to 0.1 gives')
print('z gradient', z.grad)
print('y gradient', y.grad)
print('x gradient', x.grad)
x = T.randn(2, 2, requires_grad=True) #x is a leaf created by user, thus grad_fn is none
print('x', x)
#define an operation on x
y = 2 * x
#define one more operation to check the chain rule
z = y ** 3
print('z shape:', z.size())
z.backward(T.FloatTensor([[1, 1], [1, 1]]), retain_graph=True)
print('x gradient for its all elements:\n', x.grad)
print()
x.grad.data.zero_() #the gradient for x will be accumulated, it needs to be cleared.
z.backward(T.FloatTensor([[0, 1], [0, 1]]), retain_graph=True)
print('x gradient for the second column:\n', x.grad)
print()
x.grad.data.zero_()
z.backward(T.FloatTensor([[1, 1], [0, 0]]), retain_graph=True)
print('x gradient for the first row:\n', x.grad)
x = T.randn(2, 2, requires_grad=True) #x is a leaf created by user, thus grad_fn is none
print('x', x)
#define an operation on x
y = 2 * x
#print('y', y)
#define one more operation to check the chain rule
z = y ** 3
out = z.mean()
print('out', out)
out.backward(retain_graph=True)
print('x gradient:\n', x.grad)
x.grad.data.zero_()
out.backward(T.FloatTensor([[1, 1], [1, 1]]), retain_graph=True)
print('x gradient', x.grad)
x = T.randn(2, 2, requires_grad=True) #x is a leaf created by user, thus grad_fn is none
print('x', x)
#define an operation on x
y = 2 * x
#print('y', y)
#define one more operation to check the chain rule
z = y ** 3
out = z.mean()
print('out', out)
out.backward() #without setting retain_graph to be true, it is alright for first time of backward.
print('x gradient', x.grad)
x.grad.data.zero_()
out.backward() #Now we get complaint saying that no graph is available for tracing back.
print('x gradient', x.grad)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simplicity of using backward()
Step2: The simple operations define a forward path $z=(2x)^3$; $z$ will be the final output tensor whose gradient we would like to compute
Step3: The gradients of both $y$ and $z$ are None, since the function returns the gradient for the leaves, which is $x$ in this case. At the very beginning, I was assuming something like this
Step4: Testing the explicit default value, which should give the same result. For the same graph, which is retained, DO NOT forget to zero the gradient before recalculating the gradients.
Step5: Then what about other values? Let's try 0.1 and 0.5.
Step6: It looks like the elements of grad_tensors act as scaling factors. Now let's set $x$ to be a $2\times 2$ matrix. Note that $z$ will also be a matrix. (Always use the latest version; backward has been improved a lot since earlier versions and has become much easier to understand.)
Step7: We can clearly see the gradients of $z$ are computed w.r.t. each dimension of $x$, because the operations are all element-wise.
Step8: We will get complaints if grad_tensors is specified for the scalar function.
Step9: What is retain_graph doing?
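As a sanity check on the chain rule used throughout, the analytic derivative of $z=(2x)^3$ is $24x^2$, which a finite difference confirms without torch (the helper names are mine):

```python
def z(x):
    return (2 * x) ** 3

def dz_dx(x):
    # Chain rule: d/dx (2x)^3 = 3 * (2x)^2 * 2 = 24 * x^2
    return 24 * x ** 2

x0 = 0.7
fd = (z(x0 + 1e-6) - z(x0 - 1e-6)) / 2e-6  # central finite difference
print(dz_dx(x0), round(fd, 4))  # both ~ 11.76
```

This is exactly the value `x.grad` holds after `z.backward()` with the default unit `grad_tensors`.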
|
11,880
|
<ASSISTANT_TASK:>
Python Code:
edges = set([(1, 2), (3, 1), (3, 2), (2, 4)])
edges = set([(1, 2), (3, 1), (3, 2), (2, 4)])
edges_list = [i[0] for i in edges] + [i[1] for i in edges]
nodes = set(edges_list)
edges_number = len(edges)
nodes_number = len(nodes)
print "Nรบmero de nodos: " + str(nodes_number)
print "Nรบmero de enlaces: " + str(edges_number)
# Now using NetworkX
import networkx as nx
G = nx.Graph()
G.add_edges_from(edges)
print "Nรบmero de nodos: " + str(G.number_of_nodes())
print "Nรบmero de aristas: " + str(G.number_of_edges())
# Own code
import numpy as np
edges = set([(1,2), (3, 1), (3, 2), (2, 4)])
def adj_matrix_dgraph(edges):
edges_list = [i[0] for i in edges] + [i[1] for i in edges]
nodes = set(edges_list)
# create matrix
matrix = np.zeros((len(nodes),len(nodes)))
for edge in edges:
matrix[edge[0] - 1,edge[1] -1] = 1
return matrix
def adj_matrix(edges):
edges_list = [i[0] for i in edges] + [i[1] for i in edges]
nodes = set(edges_list)
# create matrix
matrix = np.zeros((len(nodes),len(nodes)))
for edge in edges:
i = edge[0]-1
j = edge[1]-1
matrix[i,j] = 1
matrix[j,i] = 1
return matrix
print "matriz para grafo dirigido:\n" + str(adj_matrix_dgraph(edges))
print "\n"
print "matriz para grafo no dirigido:\n" + str(adj_matrix(edges))
# Solution with NetworkX
import networkx as nx
G = nx.Graph()
G.add_edges_from(edges)
matrix = nx.adjacency_matrix(G)
print matrix
DG = nx.DiGraph()
DG.add_edges_from(edges)
print "\n"
print (nx.adjacency_matrix(DG))
import numpy as np
# The entered datasets correspond to non-directed graphs.
# Information about the dataset can be found at:
# http://snap.stanford.edu/data/egonets-Facebook.html
edges1 = np.genfromtxt('0.edges', dtype="int", delimiter=" ")
edges2 = np.genfromtxt('348.edges', dtype="int", delimiter=" ")
edges3 = np.genfromtxt('414.edges', dtype="int", delimiter=" ")
def edges_to_nodes(edges):
edges_list = [i[0] for i in edges] + [i[1] for i in edges]
nodes = set(edges_list)
return nodes
def edge_rate(edges):
nodes = edges_to_nodes(edges)
n = len(nodes)
print ("len(n) = %d" %(n))
# For a non-directed graph, excluding reflexive relations
possible_edges = (n*(n-1))/2
print ("possible_edges=%d" % (possible_edges))
result = float(len(edges))/possible_edges
return result
def edge_rate_dgraph(edges):
nodes = edges_to_nodes(edges)
n = len(nodes)
# For a directed graph, including reflexive relations
possible_edges = n**2
result = float(len(edges))/possible_edges
return result
print (edge_rate(edges1))
print (edge_rate(edges2))
print (edge_rate(edges3))
# With networkx
import networkx as nx
G1 = nx.read_edgelist('0.edges', delimiter=" ")
G2 = nx.read_edgelist('348.edges', delimiter=" ")
G3 = nx.read_edgelist('414.edges', delimiter=" ")
def possible_edges(graph):
nodes = graph.number_of_nodes()
return (nodes*(nodes-1))/2
print ("possible_edges(G1)=%d" % (possible_edges(G1)))
def edge_rate_nx(graph):
return float(graph.number_of_edges())/float(possible_edges(graph))
print ("\n")
print (edge_rate_nx(G1))
print (edge_rate_nx(G2))
print (edge_rate_nx(G3))
# Without NetworkX
import numpy as np
def edges_to_nodes(edges):
edges_list = [i[0] for i in edges] + [i[1] for i in edges]
nodes = set(edges_list)
print ("len(nodes)=%d" %(len(nodes)))
return nodes
# The entered datasets correspond to non-directed graphs.
# Information about the dataset can be found at:
# http://snap.stanford.edu/data/egonets-Facebook.html
edges1 = np.genfromtxt('0.edges', dtype="int", delimiter=" ")
print (len(edges1))
edges2 = np.genfromtxt('348.edges', dtype="int", delimiter=" ")
print (len(edges2))
edges3 = np.genfromtxt('414.edges', dtype="int", delimiter=" ")
print (len(edges3))
# Assuming there aren't repeated elements in the dataset
def number_of_zeroes(edges):
n = len(edges_to_nodes(edges))
zeroes = n**2 - len(edges)
return zeroes
def number_of_zeroes_dgraph(edges):
n = len(edges_to_nodes(edges))
zeroes = n**2 - len(edges)
return zeroes
print ("number_of_zeroes(edges1)=%d" %(number_of_zeroes(edges1)))
print ("number_of_zeroes(edges2)=%d" %(number_of_zeroes(edges2)))
print ("number_of_zeroes(edges3)=%d" %(number_of_zeroes(edges3)))
# With NetworkX
import networkx as nx
# The selected datasets are non-directed graphs, so their adjacency matrix is symmetric.
# For undirected graphs NetworkX stores only the edges of one of the matrix's triangles (upper or lower).
G1 = nx.read_edgelist('0.edges', delimiter=" ")
print (len(G1.edges()))
G2 = nx.read_edgelist('348.edges', delimiter=" ")
print (len(G2.edges()))
G3 = nx.read_edgelist('414.edges', delimiter=" ")
print (len(G3.edges()))
N1 = len(G1.nodes())
N2 = len(G2.nodes())
N3 = len(G3.nodes())
def zeroes(graph):
N = len(graph.nodes())
result = N**2 - 2*len(graph.edges())
print ("zeroes=%d" %(result))
return result
zeroes(G1)
zeroes(G2)
zeroes(G3)
import numpy as np
network1 = set([(1,'a'),(3,'b'), (4,'d'),(5,'b'),(1,'b'), (2,'d'), (1,'d'), (3,'c')])
def projection_u(edges):
edges_list = list(edges)
result = []
for i in range(0,len(edges_list)):
for j in range(i+1, len(edges_list)):
if edges_list[i][1] == edges_list[j][1]:
tup = (edges_list[i][0], edges_list[j][0])
result.append(tup)
return set(result)
print (projection_u(network1))
def projection_v(edges):
edges_list = list(edges)
result = []
for i in range(0,len(edges_list)):
for j in range(i+1, len(edges_list)):
if edges_list[i][0] == edges_list[j][0]:
tup = (edges_list[i][1], edges_list[j][1])
result.append(tup)
return set(result)
print (projection_v(network1))
N = 5
routemap = [('St. Louis', 'Miami'),
('St. Louis', 'San Diego'),
('St. Louis', 'Chicago'),
('San Diego', 'Chicago'),
('San Diego', 'San Francisco'),
('San Diego', 'Minneapolis'),
('San Diego', 'Boston'),
('San Diego', 'Portland'),
('San Diego', 'Seattle'),
('Tulsa', 'New York'),
('Tulsa', 'Dallas'),
('Phoenix', 'Cleveland'),
('Phoenix', 'Denver'),
('Phoenix', 'Dallas'),
('Chicago', 'New York'),
('Chicago', 'Los Angeles'),
('Miami', 'New York'),
('Miami', 'Philadelphia'),
('Miami', 'Denver'),
('Boston', 'Atlanta'),
('Dallas', 'Cleveland'),
('Dallas', 'Albuquerque'),
('Philadelphia', 'Atlanta'),
('Denver', 'Minneapolis'),
('Denver', 'Cleveland'),
('Albuquerque', 'Atlanta'),
('Minneapolis', 'Portland'),
('Los Angeles', 'Seattle'),
('San Francisco', 'Portland'),
('San Francisco', 'Seattle'),
('San Francisco', 'Cleveland'),
('Seattle', 'Portland')]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercises: Graphs, Paths & Components
Step6: Exercise - Adjacency matrix
Step12: Exercise - Sparseness
Step20: In the adjacency matrix of each of the chosen networks, how many zeros are there?
Step21: Exercise - Bipartite networks
Step22: Exercise - Paths
Step23: Create a graph of N nodes with the maximum possible diameter
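For the maximum-diameter exercise, a sketch of one answer: among connected graphs on N nodes, a simple path attains the largest possible diameter, N − 1 (the helper names are mine; no NetworkX required):

```python
def path_graph_edges(n):
    # A path 1-2-...-n: the diameter of a connected graph on n nodes
    # is at most n - 1, and the path graph attains that bound.
    return [(i, i + 1) for i in range(1, n)]

N = 5
print(path_graph_edges(N), "diameter =", N - 1)
```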
|
11,881
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Joan Massich <mailsik@gmail.com>
#
# License: BSD Style.
import os.path as op
import mne
from mne.channels.montage import get_builtin_montages
from mne.datasets import fetch_fsaverage
from mne.viz import set_3d_title, set_3d_view
for current_montage in get_builtin_montages():
montage = mne.channels.make_standard_montage(current_montage)
info = mne.create_info(
ch_names=montage.ch_names, sfreq=100., ch_types='eeg')
info.set_montage(montage)
sphere = mne.make_sphere_model(r0='auto', head_radius='auto', info=info)
fig = mne.viz.plot_alignment(
# Plot options
show_axes=True, dig='fiducials', surfaces='head',
bem=sphere, info=info)
set_3d_view(figure=fig, azimuth=135, elevation=80)
set_3d_title(figure=fig, title=current_montage)
subjects_dir = op.dirname(fetch_fsaverage())
for current_montage in get_builtin_montages():
montage = mne.channels.make_standard_montage(current_montage)
# Create dummy info
info = mne.create_info(
ch_names=montage.ch_names, sfreq=100., ch_types='eeg')
info.set_montage(montage)
fig = mne.viz.plot_alignment(
# Plot options
show_axes=True, dig='fiducials', surfaces='head', mri_fiducials=True,
subject='fsaverage', subjects_dir=subjects_dir, info=info,
coord_frame='mri',
trans='fsaverage', # transform from head coords to fsaverage's MRI
)
set_3d_view(figure=fig, azimuth=135, elevation=80)
set_3d_title(figure=fig, title=current_montage)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Check all montages against a sphere
Step2: Check all montages against fsaverage
|
11,882
|
<ASSISTANT_TASK:>
Python Code:
# Import everything that we are going to need... but not more
import pandas as pd
import xarray as xr
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap, cm
%matplotlib inline
DF=pd.DataFrame.from_items([('A', [1, 2, 3]), ('B', [4, 5, 6])],
orient='index', columns=['one', 'two', 'three'])
DF.mean(0)
pd_s=pd.Series(range(3), index=list('abc'), name='foo')
print(pd_s)
print()
#convert 1D series to ND-aware DataArray
print(xr.DataArray(pd_s))
# Naughty datasets might require decode_cf=False
# Here it just needed decode_times=False
naughty_data = xr.open_dataset(
'http://iridl.ldeo.columbia.edu/SOURCES/.OSU/.PRISM/.monthly/dods',
decode_times=False)
naughty_data
GETM = xr.open_dataset('../data/cefas_GETM_nwes.nc4')
GETM
GETM.dims
print(type(GETM.coords['latc']))
GETM.coords['latc'].shape
# List name of dataset attributes
GETM.attrs.keys()
# List variable names
GETM.data_vars.keys()
temp=GETM['temp']
print(type( temp ))
temp.shape
# print varaible attributes
for at in temp.attrs:
print(at+':\t\t',end=' ')
print(temp.attrs[at])
temp[0,0,90,100]
#positional by integer
print( temp[0,2,:,:].shape )
# positional by label
print( temp.loc['1996-02-02T01:00:00',:,:,:].shape )
# by name and integer
print( temp.isel(level=1,latc=90,lonc=100).shape )
# by name and label
print( temp.sel(time='1996-02-02T01:00:00').shape )
#temp.loc
#GETM.sel(level=1)['temp']
GETM['temp'].sel(level=1,lonc=-5.0,latc=-50.0, method='nearest')
try:
GETM['temp'].sel(level=1,lonc=-5.0,latc=-50.0, method='nearest',tolerance=0.5)
except KeyError:
print('ERROR: outside tolerance of '+str(0.5))
# Define a general mapping function using basemap
def do_map(var,title,units):
latc=GETM.coords['latc'].values
lonc=GETM.coords['lonc'].values
# create figure and axes instances
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)  # add_axes() needs a rect argument; add_subplot creates the axes
# create polar stereographic Basemap instance.
m = Basemap(projection='stere', lon_0=0.,lat_0=60.,
llcrnrlat=49,urcrnrlat=60,
llcrnrlon=-10,urcrnrlon=15,resolution='l')
# boundaries resolution can be 'c','l','i','h' or 'f'
m.drawcoastlines(linewidth=0.5)
m.fillcontinents(color='0.8')
parallels = np.arange(-45,70,5)
m.drawparallels(parallels,labels=[1,0,0,0],fontsize=10)
meridians = np.arange(-15,20,5)
m.drawmeridians(meridians,labels=[0,0,0,1],fontsize=10)
# create arrays of coordinates for contourf
lon2d,lat2d=np.meshgrid(lonc,latc)
# draw filled contours.
m.contourf(lon2d,lat2d,var,50,latlon=True)
# add colorbar.
cbar = m.colorbar(cmap=plt.cm.coolwarm,location='right')
cbar.set_label(units)
# add title
plt.title(title)
# Extract attributes
units=GETM['temp'].attrs['units']
var_long_name=GETM['temp'].attrs['long_name']
# and plot
do_map(var=time_ave.sel(level=21),
units=units,
title='Time averaged '+var_long_name)
# But often, this will do
time_ave.sel(level=21).plot()
top=GETM['temp'].isel(time=0,level=4)
bottom=GETM['temp'].isel(time=0,level=0)
diff=top-bottom
diff.plot()
# average over time
time_ave = GETM['temp'].mean('time')
#average over time and level (vertical)
timelev_ave=GETM['temp'].mean(['time','level'])
timelev_ave.plot()
#zonal average (vertical)
timelon_ave=GETM['temp'].mean(['time','lonc']).isel(level=4)
timelon_ave.plot()
ds=GETM[['temp']].mean(['time','level'])
ds.to_netcdf('../data/temp_avg_level_time.nc')
print(type( GETM[['temp']]) )
print(type( GETM['temp']) )
# bathy = GETM
# bedtemp=GETM
# plt.scatter( , ,marker='.')
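One possible fill-in for the exercise above, as a sketch: the variable names ('bathymetry' and a bed-level 'temp') and the synthetic stand-in dataset are assumptions, not GETM's actual contents.

```python
import numpy as np
import xarray as xr
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

# Sketch with a synthetic stand-in for GETM; the variable names are assumptions.
rng = np.random.RandomState(0)
ds = xr.Dataset({
    'bathymetry': (('latc', 'lonc'), rng.rand(5, 6) * 100.0),
    'temp': (('latc', 'lonc'), rng.rand(5, 6) * 10.0),
})
bathy = ds['bathymetry'].values.ravel()
bedtemp = ds['temp'].values.ravel()
plt.scatter(bathy, bedtemp, marker='.')
plt.xlabel('depth')
plt.ylabel('bed temperature')
```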
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The main advantages of using xarray versus plain netCDF4 are
Step2: ...or import local dataset
Step3: Extract variable from dataset
Step4: Access variable attributes
Step5: Accessing data values
Step6: Indexing and selecting data
Step7: Define selection using nearest value
Step8: Plotting
Step9: Arithmetic operations
Step10: Calculate average along a dimension
Step11: A dataset can easily be saved to a netCDF file
Step12: Exercise
|
11,883
|
<ASSISTANT_TASK:>
Python Code:
import scipy
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn.cross_validation as cv
# Extra plotting functionality
import visplots
from sklearn import preprocessing, metrics
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.grid_search import GridSearchCV, RandomizedSearchCV
from scipy.stats.distributions import randint
from multilayer_perceptron import multilayer_perceptron
%matplotlib inline
wine = pd.read_csv("data/wine.csv", sep=",")
header = wine.columns.values
### Write your code here ###
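One possible fill-in for the placeholder above, as a sketch: a tiny synthetic frame stands in for the real wine DataFrame.

```python
import pandas as pd

# Sketch: inspect the first few rows with head(); wine_demo is a stand-in.
wine_demo = pd.DataFrame({'fixed acidity': [7.4, 7.8, 7.8],
                          'quality': [0, 0, 1]})
print(wine_demo.head())
```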
# Convert to numpy array
npArray = np.array(wine)
X = npArray[:,:-1]
y = npArray[:,-1].astype(int)
print "X dimensions:", ### Write your code here ###
print "y dimensions:", ### Write your code here ###
yFreq = scipy.stats.itemfreq(y)
print yFreq
### Write your code here ###
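One possible fill-in for the auto-scaling step, as a sketch: a random matrix stands in for the real X here.

```python
import numpy as np
from sklearn import preprocessing

# Sketch: auto-scale the features to zero mean and unit variance.
X_demo = np.random.RandomState(0).rand(100, 5) * 10.0
X_demo = preprocessing.scale(X_demo)
print(X_demo.mean(axis=0).round(6))  # ~0 for every column
print(X_demo.std(axis=0).round(6))   # ~1 for every column
```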
f0 = 0
f1 = 1
plt.figure(figsize=(8, 5))
plt.scatter(X[y==0, f0], X[y==0, f1], color = 'b', edgecolors='black', label='Low Quality')
plt.scatter(X[y==1, f0], X[y==1, f1], color = 'r', edgecolors='black', label='High Quality')
plt.xlabel(header[f0])
plt.ylabel(header[f1])
plt.legend()
plt.show()
XTrain, XTest, yTrain, yTest = cv.train_test_split(X, y, test_size= 0.3, random_state=1)
print "XTrain dimensions:", XTrain.shape
print "yTrain dimensions:", yTrain.shape
print "XTest dimensions:", XTest.shape
print "yTest dimensions:", yTest.shape
# Build the classifier
knn3 = KNeighborsClassifier(n_neighbors=3)
# Train (fit) the model
knn3.fit(XTrain, yTrain)
# Test (predict)
yPredK3 = knn3.predict(XTest)
# Report the performance metrics
print metrics.classification_report(yTest, yPredK3)
print "Overall Accuracy:", round(metrics.accuracy_score(yTest, yPredK3), 2)
# Check the arguments of the function
help(visplots.knnDecisionPlot)
# Visualise the boundary
visplots.knnDecisionPlot(XTrain, yTrain, XTest, yTest, n_neighbors= 3, weights="uniform")
############################################
# Write your code here
# 1. Build the KNN classifier for larger K
# 2. Train (fit) the model
# 3. Test (predict)
# 4. Report the performance metrics
############################################
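One possible fill-in for the larger-K exercise, as a sketch: synthetic data stands in for XTrain/XTest, and K = 99 follows the odd-K suggestion in the text.

```python
import numpy as np
from sklearn import metrics
from sklearn.neighbors import KNeighborsClassifier

# Sketch with synthetic stand-in data for the larger-K KNN exercise.
rng = np.random.RandomState(1)
XTr = rng.rand(300, 4); yTr = (XTr[:, 0] > 0.5).astype(int)
XTe = rng.rand(100, 4); yTe = (XTe[:, 0] > 0.5).astype(int)

knn99 = KNeighborsClassifier(n_neighbors=99)
knn99.fit(XTr, yTr)
yPredK99 = knn99.predict(XTe)
print(metrics.classification_report(yTe, yPredK99))
print("Overall Accuracy:", round(metrics.accuracy_score(yTe, yPredK99), 2))
```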
### Write your code here ###
# Build the classifier with two parameters
knnW3 = KNeighborsClassifier(n_neighbors=3, weights='distance')
# Train (fit) the model
knnW3.fit(XTrain, yTrain)
# Test (predict)
predictedW3 = knnW3.predict(XTest)
# Report the performance metrics
print metrics.classification_report(yTest, predictedW3)
print "Overall Accuracy:", round(metrics.accuracy_score(yTest, predictedW3), 2)
# Define the parameters to be optimised and their values/ranges
n_neighbors = np.arange(1, 51, 2) # odd numbers of neighbors used
weights = ['uniform','distance']
# Construct a dictionary of hyperparameters
parameters = [{'n_neighbors': n_neighbors, 'weights': weights}]
# Conduct a grid search with 10-fold cross-validation using the dictionary of parameters
grid = GridSearchCV(KNeighborsClassifier(), parameters, cv=10)
grid.fit(XTrain, yTrain)
# Print the optimal parameters
bestNeighbors = grid.best_params_['n_neighbors']
bestWeight = grid.best_params_['weights']
print "Best parameters found: n_neighbors=", bestNeighbors, "and weight=", bestWeight
# grid_scores_ contains parameter settings and scores
scores = [x[1] for x in grid.grid_scores_]
scores = np.array(scores).reshape(len(n_neighbors), len(weights))
scores = np.transpose(scores)
# Make a heatmap with the performance
plt.figure(figsize=(12, 6))
plt.imshow(scores, interpolation='nearest', origin='lower', cmap=plt.cm.get_cmap('jet_r'))
plt.xticks(np.arange(len(n_neighbors)), n_neighbors)
plt.yticks(np.arange(len(weights)), weights)
plt.xlabel('Number of K nearest neighbors')
plt.ylabel('Weights')
# Add the colorbar
cbar = plt.colorbar()
cbar.set_label('Classification Accuracy', rotation=270, labelpad=20)
plt.show()
# Build the classifier using the optimal parameters detected by grid search
knn = KNeighborsClassifier(n_neighbors = bestNeighbors, weights = bestWeight)
# Train (fit) the model
knn.fit(XTrain, yTrain)
# Test (predict)
yPredKnn = knn.predict(XTest)
# Report the performance metrics
print metrics.classification_report(yTest, yPredKnn)
print "Overall Accuracy:", round(metrics.accuracy_score(yTest, yPredKnn), 2)
param_dist = {'n_neighbors': randint(1,200)}
random_search = RandomizedSearchCV(KNeighborsClassifier(), param_distributions=param_dist, n_iter=20)
random_search.fit(XTrain, yTrain)
print "Best parameters: n_neighbors=", random_search.best_params_['n_neighbors']
neig = [score_tuple[0]['n_neighbors'] for score_tuple in random_search.grid_scores_]
res = [score_tuple[1] for score_tuple in random_search.grid_scores_]
plt.scatter(neig, res)
plt.xlabel('Number of K nearest neighbors')
plt.ylabel('Classification Accuracy')
plt.xlim(0,200)
plt.show()
#############################################################
# Write your code here
# 1. Build the RF classifier using the default parameters
# 2. Train (fit) the model
# 3. Test (predict)
# 4. Report the performance metrics
#############################################################
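One possible fill-in for the default random-forest exercise, as a sketch with synthetic stand-in data.

```python
import numpy as np
from sklearn import metrics
from sklearn.ensemble import RandomForestClassifier

# Sketch: RF with default parameters on synthetic stand-in data.
rng = np.random.RandomState(2)
XTr = rng.rand(300, 4); yTr = (XTr[:, 0] + XTr[:, 1] > 1.0).astype(int)
XTe = rng.rand(100, 4); yTe = (XTe[:, 0] + XTe[:, 1] > 1.0).astype(int)

rf = RandomForestClassifier(random_state=2)
rf.fit(XTr, yTr)
yPredRF = rf.predict(XTe)
print("Overall Accuracy:", round(metrics.accuracy_score(yTe, yPredRF), 2))
```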
# Check the arguments of the function
help(visplots.rfDecisionPlot)
### Write your code here ###
# View the list of arguments to be optimised
help(RandomForestClassifier())
# Parameters you could investigate include:
n_estimators = [5, 10, 20, 50, 100]
max_depth = [5, 10, 15]
# Also, you may choose any of the following
# max_features = [1, 3, 10]
# min_samples_split = [1, 3, 10]
# min_samples_leaf = [1, 3, 10]
# bootstrap = [True, False]
# criterion = ["gini", "entropy"]
##############################################################################################
# Write your code here
# 1. Construct a dictionary of hyperparameters (see task 4.3)
# 2. Conduct a grid search with 10-fold cross-validation using the dictionary of parameters
# 3. Print the optimal parameters
##############################################################################################
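One possible fill-in for the grid-search step, as a sketch mirroring the KNN example in task 4.3: synthetic data and a reduced parameter grid stand in for the real ones, and the import falls back to `sklearn.model_selection` on newer scikit-learn versions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
try:
    from sklearn.grid_search import GridSearchCV  # this tutorial's sklearn version
except ImportError:
    from sklearn.model_selection import GridSearchCV  # newer sklearn versions

# Sketch: dictionary of hyperparameters plus a cross-validated grid search.
rng = np.random.RandomState(3)
XTr = rng.rand(120, 4); yTr = (XTr[:, 0] > 0.5).astype(int)

parameters = [{'n_estimators': [5, 10, 20], 'max_depth': [5, 10]}]
grid = GridSearchCV(RandomForestClassifier(random_state=3), parameters, cv=3)
grid.fit(XTr, yTr)
print("Best parameters found:", grid.best_params_)
```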
####################################################################################
# Write your code here
# 1. Build the classifier using the optimal parameters detected by grid search
# 2. Train (fit) the model
# 3. Test (predict)
# 4. Report the performance metrics
####################################################################################
###################################################################
# Write your code here
# 1. Build a linear SVM classifier using the default parameters
# 2. Train (fit) the model
# 3. Test (predict)
# 4. Report the performance metrics
##################################################################
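One possible fill-in for the linear-SVM exercise, as a sketch with synthetic stand-in data.

```python
import numpy as np
from sklearn import metrics
from sklearn.svm import SVC

# Sketch: linear-kernel SVM on synthetic stand-in data.
rng = np.random.RandomState(4)
XTr = rng.rand(200, 4); yTr = (XTr[:, 0] > 0.5).astype(int)
XTe = rng.rand(80, 4); yTe = (XTe[:, 0] > 0.5).astype(int)

svmLinear = SVC(kernel='linear')
svmLinear.fit(XTr, yTr)
yPredSVM = svmLinear.predict(XTe)
print("Overall Accuracy:", round(metrics.accuracy_score(yTe, yPredSVM), 2))
```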
# Check the arguments of the function
help(visplots.svmDecisionPlot)
### Write your code here ###
#################################################################
# Write your code here
# 1. Build the RBF SVM classifier using the default parameters
# 2. Train (fit) the model
# 3. Test (predict)
# 4. Report the performance metrics
#################################################################
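One possible fill-in for the RBF-SVM exercise, as a sketch: only the kernel changes relative to the linear case (and 'rbf' is also SVC's default).

```python
import numpy as np
from sklearn import metrics
from sklearn.svm import SVC

# Sketch: RBF-kernel SVM on synthetic stand-in data.
rng = np.random.RandomState(5)
XTr = rng.rand(200, 4); yTr = (XTr[:, 0] > 0.5).astype(int)
XTe = rng.rand(80, 4); yTe = (XTe[:, 0] > 0.5).astype(int)

svmRBF = SVC(kernel='rbf')
svmRBF.fit(XTr, yTr)
yPredRBF = svmRBF.predict(XTe)
print("Overall Accuracy:", round(metrics.accuracy_score(yTe, yPredRBF), 2))
```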
# Check the arguments of the function
help(visplots.svmDecisionPlot)
### Write your code here ###
# Define the parameters to be optimised and their values/ranges
# Range for gamma and Cost hyperparameters
g_range = 2. ** np.arange(-15, 5, step=2)
C_range = 2. ** np.arange(-5, 15, step=2)
##############################################################################################
# Write your code here
# 1. Construct a dictionary of hyperparameters (see task 4.3)
# 2. Conduct a grid search with 10-fold cross-validation using the dictionary of parameters
# 3. Print the optimal parameters (don't forget to use np.log2() this time)
##############################################################################################
##########################################
# Write your code here
# 1. Fix the scores
# 2. Make a heatmap with the performance
# 3. Add the colorbar
##########################################
####################################################################################
# Write your code here
# 1. Build the classifier using the optimal parameters detected by grid search
# 2. Train (fit) the model
# 3. Test (predict)
# 4. Report the performance metrics
####################################################################################
#############################################################################
# Write your code here
# 1. Build the Logistic Regression classifier using the default parameters
# 2. Train (fit) the model
# 3. Test (predict)
# 4. Report the performance metrics
#############################################################################
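One possible fill-in for the logistic-regression exercise, as a sketch with synthetic stand-in data and default parameters.

```python
import numpy as np
from sklearn import metrics
from sklearn.linear_model import LogisticRegression

# Sketch: logistic regression with defaults on synthetic stand-in data.
rng = np.random.RandomState(6)
XTr = rng.rand(200, 4); yTr = (XTr[:, 0] > 0.5).astype(int)
XTe = rng.rand(80, 4); yTe = (XTe[:, 0] > 0.5).astype(int)

logreg = LogisticRegression()
logreg.fit(XTr, yTr)
yPredLR = logreg.predict(XTe)
print("Overall Accuracy:", round(metrics.accuracy_score(yTe, yPredLR), 2))
```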
# Check the arguments of the function
help(visplots.logregDecisionPlot)
### Write your code here ###
# Define the parameters to be optimised and their values/ranges
# Range for pen and C hyperparameters
pen = ['l1','l2']
C_range = 2. ** np.arange(-5, 15, step=2)
##############################################################################################
# Write your code here
# 1. Construct a dictionary of hyperparameters (see task 4.3)
# 2. Conduct a grid search with 10-fold cross-validation using the dictionary of parameters
# 3. Print the optimal parameters
##############################################################################################
##########################################
# Write your code here
# 1. Fix the scores
# 2. Make a heatmap with the performance
# 3. Add the colorbar
##########################################
####################################################################################
# Write your code here
# 1. Build the classifier using the optimal parameters detected by grid search
# 2. Train (fit) the model
# 3. Test (predict)
# 4. Report the performance metrics
####################################################################################
help(multilayer_perceptron.MultilayerPerceptronClassifier)
#####################################################################################
# Write your code here
# 1. Build the Neural Net classifier classifier ... you can use parameters such as
# activation='logistic', hidden_layer_sizes=2, learning_rate_init=.5
# 2. Train (fit) the model
# 3. Test (predict)
# 4. Report the performance metrics
#####################################################################################
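One possible fill-in for the neural-net exercise, as a sketch: scikit-learn's MLPClassifier stands in for the bundled multilayer_perceptron module (an assumption), with the parameters the comments above suggest.

```python
import numpy as np
from sklearn import metrics
from sklearn.neural_network import MLPClassifier  # stand-in for the bundled module

# Sketch: small logistic-activation network on synthetic stand-in data.
rng = np.random.RandomState(7)
XTr = rng.rand(200, 4); yTr = (XTr[:, 0] > 0.5).astype(int)
XTe = rng.rand(80, 4); yTe = (XTe[:, 0] > 0.5).astype(int)

nnet = MLPClassifier(activation='logistic', hidden_layer_sizes=(2,),
                     learning_rate_init=0.5, max_iter=500, random_state=7)
nnet.fit(XTr, yTr)
yPredNN = nnet.predict(XTe)
print("Overall Accuracy:", round(metrics.accuracy_score(yTe, yPredNN), 2))
```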
# Check the arguments of the function
help(visplots.nnDecisionPlot)
### Write your code here ###
### Try arguments such as hidden_layer = 2 or (2,3,6) and learning_rate = .5
# Define the parameters to be optimised and their values/ranges
# Range for gamma and Cost hyperparameters
layer_size_range = [(3,2),(10,10),(2,2,2),10,5] # different network shapes
learning_rate_range = np.linspace(.1,1,3)
##############################################################################################
# Write your code here
# 1. Construct a dictionary of hyperparameters (see task 4.3)
# 2. Conduct a grid search with 10-fold cross-validation using the dictionary of parameters
# 3. Print the optimal parameters
##############################################################################################
##########################################
# Write your code here
# 1. Fix the scores
# 2. Make a heatmap with the performance
# 3. Add the colorbar
##########################################
####################################################################################
# Write your code here
# 1. Build the classifier using the optimal parameters detected by grid search
# 2. Train (fit) the model
# 3. Test (predict)
# 4. Report the performance metrics
####################################################################################
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Exploring and pre-processing data
Step2: At this point, you should try to explore the first few rows of the imported wine DataFrame using the "head" function from the pandas package (http
Step3: In order to feed the data into our classification models, the imported wine DataFrame needs to be converted into a numpy array. For more information on numpy arrays, see http
Step4: It is always a good practice to check the dimensionality of the imported data prior to constructing any classification model to check that you really have imported all the data and imported it in the correct way (e.g. one common mistake is to get the separator wrong and end up with only one column). <br/> Try printing the size of the input matrix X and class vector y using the "shape" command
Step5: Based on the class vector y, the wine samples are classified into two distinct categories
Step6: It is usually advisable to scale your data prior to fitting a classification model. The main advantage of scaling is to avoid attributes of greater numeric ranges dominating those in smaller numeric ranges. For the purposes of this case study, we are applying auto-scaling on the whole X dataset. (Auto-scaling
Step7: You can visualise the relationship between two variables (features) using a simple scatter plot. This step can give you a good first indication of the ML model model to apply and its complexity (linear vs. non-linear). At this stage, letโs plot the first two variables against each other
Step8: You can change the values of f0 and f1 to values of your own choice in order to investigate the relationship between different features.
Step9: XTrain and yTrain are the two arrays you use to train your model. XTest and yTest are the two arrays that you use to evaluate your model. By default, scikit-learn splits the data so that 25% of it is used for testing, but you can also specify the proportion of data you want to use for training and testing (in this case, 30% is used for testing).
Step10: 4. KNN
Step11: We can visualise the classification boundary created by the KNN classifier using the built-in function visplots.knnDecisionPlot. For easier visualisation, only the test samples are depicted in the plot. Remember though that the decision boundary has been built using the training data! <br/>
Step12: Let us try a larger value of K, for instance K = 99 or another number of your own choice; remember, it is good practice to select an odd number for K in a binary classification problem to avoid ties. Can you generate the KNN model and print the metrics for a larger K using as guidance the previous example?
Step13: Visualise the boundaries as before using the K neighbors of your choice and the knnDecisionPlot command from visplots. What do you observe?
Step14: Answer
Step15: 4.3 Tuning KNN
Step16: <br/> Let us graphically represent the results of the grid search using a heatmap
Step17: When evaluating the resulting model it is important to do it on held-out samples that were not seen during the grid search process (XTest). <Br/>
Step18: Randomized search on hyperparameters
Step19: 5. Get your hands dirty
Step20: We can visualise the classification boundary created by the random forest using the visplots.rfDecisionPlot function. You can check the arguments passed in this function by using the help command. For easier visualisation, only the test samples have been included in the plot. And remember that the decision boundary has been built using the training data!
Step21: Tuning for Random Forests
Step22: Finally, testing our independent XTest dataset using the optimised model
Step23: 5.2 Support Vector Machines (SVMs)
Step24: We can visualise the classification boundary created by the linear SVM using the visplots.svmDecisionPlot function. You can check the arguments passed in this function by using the help command. For easier visualisation, only the test samples have been included in the plot. And remember that the decision boundary has been built using the training data!
Step25: Tuning
Step26: We can visualise the classification boundary created by the RBF SVM using the visplots.svmDecisionPlot function. You can check the arguments passed in this function by using the help command. For easier visualisation, only the test samples have been included in the plot. And remember that the decision boundary has been built using the training data!
Step27: Hyperparameter Tuning for non-linear SVMs
Step28: Plot the results of the grid search using a heatmap (see task 4.3).
Step29: Finally, testing our independent XTest dataset using the optimised model
Step30: 5.3 Logistic Regression
Step31: We can visualise the classification boundary created by the logistic regression model using the built-in function visplots.logregDecisionPlot. <br/> As with the above examples, only the test samples have been included in the plot. Remember that the decision boundary has been built using the training data!
Step32: Tuning Logistic Regression
Step33: Plot the results of the grid search with a heatmap (see task 4.3)
Step34: Finally, testing our independent XTest dataset using the optimised model
Step35: For more details on cross-validating and tuning logistic regression models, see
Step36: We can visualise the classification boundary of the neural network using the built-in visualisation function visplots.nnDecisionPlot. As with the above examples, only the test samples have been included in the plot. And remember that the decision boundary has been built using the training data!
Step37: Tuning Neural Nets
Step38: Plot the results of the grid search using a heatmap (see task 4.3).
Step39: Finally, testing our independent XTest dataset using the optimised model
|
11,884
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import io
from scipy import integrate
string = '''
Time A
2017-12-18-19:54:40 -50187.0
2017-12-18-19:54:45 -60890.5
2017-12-18-19:54:50 -28258.5
2017-12-18-19:54:55 -8151.0
2017-12-18-19:55:00 -9108.5
2017-12-18-19:55:05 -12047.0
2017-12-18-19:55:10 -19418.0
2017-12-18-19:55:15 -50686.0
2017-12-18-19:55:20 -57159.0
2017-12-18-19:55:25 -42847.0
'''
df = pd.read_csv(io.StringIO(string), sep = '\s+')
df.Time = pd.to_datetime(df.Time, format='%Y-%m-%d-%H:%M:%S')
df = df.set_index('Time')
integral_df = df.rolling('25S').apply(integrate.trapz)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
11,885
|
<ASSISTANT_TASK:>
Python Code:
# Import necessary packages
import tensorflow as tf
import tqdm
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Import MNIST data so we have something for our experiments
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("../gan_mnist/MNIST_data/", one_hot=True) # GK: changed to the relevant folder
class NeuralNet:
def __init__(self, initial_weights, activation_fn, use_batch_norm):
"""
Initializes this object, creating a TensorFlow graph using the given parameters.
:param initial_weights: list of NumPy arrays or Tensors
Initial values for the weights for every layer in the network. We pass these in
so we can create multiple networks with the same starting weights to eliminate
training differences caused by random initialization differences.
The number of items in the list defines the number of layers in the network,
and the shapes of the items in the list define the number of nodes in each layer.
e.g. Passing in 3 matrices of shape (784, 256), (256, 100), and (100, 10) would
create a network with 784 inputs going into a hidden layer with 256 nodes,
followed by a hidden layer with 100 nodes, followed by an output layer with 10 nodes.
:param activation_fn: Callable
The function used for the output of each hidden layer. The network will use the same
activation function on every hidden layer and no activation function on the output layer.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
:param use_batch_norm: bool
Pass True to create a network that uses batch normalization; False otherwise
Note: this network will not use batch normalization on layers that do not have an
activation function.
"""
# Keep track of whether or not this network uses batch normalization.
self.use_batch_norm = use_batch_norm
self.name = "With Batch Norm" if use_batch_norm else "Without Batch Norm"
# Batch normalization needs to do different calculations during training and inference,
# so we use this placeholder to tell the graph which behavior to use.
self.is_training = tf.placeholder(tf.bool, name="is_training")
# This list is just for keeping track of data we want to plot later.
# It doesn't actually have anything to do with neural nets or batch normalization.
self.training_accuracies = []
# Create the network graph, but it will not actually have any real values until after you
# call train or test
self.build_network(initial_weights, activation_fn)
def build_network(self, initial_weights, activation_fn):
"""
Build the graph. The graph still needs to be trained via the `train` method.
:param initial_weights: list of NumPy arrays or Tensors
See __init__ for description.
:param activation_fn: Callable
See __init__ for description.
"""
self.input_layer = tf.placeholder(tf.float32, [None, initial_weights[0].shape[0]])
layer_in = self.input_layer
for weights in initial_weights[:-1]:
layer_in = self.fully_connected(layer_in, weights, activation_fn)
self.output_layer = self.fully_connected(layer_in, initial_weights[-1])
def fully_connected(self, layer_in, initial_weights, activation_fn=None):
"""
Creates a standard, fully connected layer. Its number of inputs and outputs will be
defined by the shape of `initial_weights`, and its starting weight values will be
taken directly from that same parameter. If `self.use_batch_norm` is True, this
layer will include batch normalization, otherwise it will not.
:param layer_in: Tensor
The Tensor that feeds into this layer. It's either the input to the network or the output
of a previous layer.
:param initial_weights: NumPy array or Tensor
Initial values for this layer's weights. The shape defines the number of nodes in the layer.
e.g. Passing in a matrix of shape (784, 256) would create a layer with 784 inputs and 256
outputs.
:param activation_fn: Callable or None (default None)
The non-linearity used for the output of the layer. If None, this layer will not include
batch normalization, regardless of the value of `self.use_batch_norm`.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
"""
# Since this class supports both options, only use batch normalization when
# requested. However, do not use it on the final layer, which we identify
# by its lack of an activation function.
if self.use_batch_norm and activation_fn:
# Batch normalization uses weights as usual, but does NOT add a bias term. This is because
# its calculations include gamma and beta variables that make the bias term unnecessary.
# (See later in the notebook for more details.)
weights = tf.Variable(initial_weights)
linear_output = tf.matmul(layer_in, weights)
# Apply batch normalization to the linear combination of the inputs and weights
batch_normalized_output = tf.layers.batch_normalization(linear_output, training=self.is_training)
# Now apply the activation function, *after* the normalization.
return activation_fn(batch_normalized_output)
else:
# When not using batch normalization, create a standard layer that multiplies
# the inputs and weights, adds a bias, and optionally passes the result
# through an activation function.
weights = tf.Variable(initial_weights)
biases = tf.Variable(tf.zeros([initial_weights.shape[-1]]))
linear_output = tf.add(tf.matmul(layer_in, weights), biases)
return linear_output if not activation_fn else activation_fn(linear_output)
def train(self, session, learning_rate, training_batches, batches_per_sample, save_model_as=None):
"""
Trains the model on the MNIST training dataset.
:param session: Session
Used to run training graph operations.
:param learning_rate: float
Learning rate used during gradient descent.
:param training_batches: int
Number of batches to train.
:param batches_per_sample: int
How many batches to train before sampling the validation accuracy.
:param save_model_as: string or None (default None)
Name to use if you want to save the trained model.
"""
# This placeholder will store the target labels for each mini batch
labels = tf.placeholder(tf.float32, [None, 10])
# Define loss and optimizer
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=self.output_layer))
# Define operations for testing
correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
if self.use_batch_norm:
# If we don't include the update ops as dependencies on the train step, the
# tf.layers.batch_normalization layers won't update their population statistics,
# which will cause the model to fail at inference time
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
else:
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
# Train for the appropriate number of batches. (tqdm is only for a nice timing display)
for i in tqdm.tqdm(range(training_batches)):
# We use batches of 60 just because the original paper did. You can use any size batch you like.
batch_xs, batch_ys = mnist.train.next_batch(60)
session.run(train_step, feed_dict={self.input_layer: batch_xs,
labels: batch_ys,
self.is_training: True})
# Periodically test accuracy against the 5k validation images and store it for plotting later.
if i % batches_per_sample == 0:
test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,
labels: mnist.validation.labels,
self.is_training: False})
self.training_accuracies.append(test_accuracy)
# After training, report accuracy against test data
test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.validation.images,
labels: mnist.validation.labels,
self.is_training: False})
print('{}: After training, final accuracy on validation set = {}'.format(self.name, test_accuracy))
# If you want to use this model later for inference instead of having to retrain it,
# just construct it with the same parameters and then pass this file to the 'test' function
if save_model_as:
tf.train.Saver().save(session, save_model_as)
def test(self, session, test_training_accuracy=False, include_individual_predictions=False, restore_from=None):
"""
Tests a trained model on the MNIST testing dataset.
:param session: Session
Used to run the testing graph operations.
:param test_training_accuracy: bool (default False)
If True, perform inference with batch normalization using batch mean and variance;
if False, perform inference with batch normalization using estimated population mean and variance.
Note: in real life, *always* perform inference using the population mean and variance.
This parameter exists just to support demonstrating what happens if you don't.
:param include_individual_predictions: bool (default False)
This function always performs an accuracy test against the entire test set. But if this parameter
is True, it performs an extra test, doing 200 predictions one at a time, and displays the results
and accuracy.
:param restore_from: string or None (default None)
Name of a saved model if you want to test with previously saved weights.
"""
# This placeholder will store the true labels for each mini batch
labels = tf.placeholder(tf.float32, [None, 10])
# Define operations for testing
correct_prediction = tf.equal(tf.argmax(self.output_layer, 1), tf.argmax(labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# If provided, restore from a previously saved model
if restore_from:
tf.train.Saver().restore(session, restore_from)
# Test against all of the MNIST test data
test_accuracy = session.run(accuracy, feed_dict={self.input_layer: mnist.test.images,
labels: mnist.test.labels,
self.is_training: test_training_accuracy})
print('-'*75)
print('{}: Accuracy on full test set = {}'.format(self.name, test_accuracy))
# If requested, perform tests predicting individual values rather than batches
if include_individual_predictions:
predictions = []
correct = 0
# Do 200 predictions, 1 at a time
for i in range(200):
# This is a normal prediction using an individual test case. However, notice
# we pass `test_training_accuracy` to `feed_dict` as the value for `self.is_training`.
# Remember that will tell it whether it should use the batch mean & variance or
# the population estimates that were calculated while training the model.
pred, corr = session.run([tf.argmax(self.output_layer,1), accuracy],
feed_dict={self.input_layer: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
self.is_training: test_training_accuracy})
correct += corr
predictions.append(pred[0])
print("200 Predictions:", predictions)
print("Accuracy on 200 samples:", correct/200)
def plot_training_accuracies(*args, **kwargs):
Displays a plot of the accuracies calculated during training to demonstrate
how many iterations it took for the model(s) to converge.
:param args: One or more NeuralNet objects
You can supply any number of NeuralNet objects as unnamed arguments
and this will display their training accuracies. Be sure to call `train`
the NeuralNets before calling this function.
:param kwargs:
You can supply any named parameters here, but `batches_per_sample` is the only
one we look for. It should match the `batches_per_sample` value you passed
to the `train` function.
fig, ax = plt.subplots()
batches_per_sample = kwargs['batches_per_sample']
for nn in args:
ax.plot(range(0,len(nn.training_accuracies)*batches_per_sample,batches_per_sample),
nn.training_accuracies, label=nn.name)
ax.set_xlabel('Training steps')
ax.set_ylabel('Accuracy')
ax.set_title('Validation Accuracy During Training')
ax.legend(loc=4)
ax.set_ylim([0,1])
plt.yticks(np.arange(0, 1.1, 0.1))
plt.grid(True)
plt.show()
def train_and_test(use_bad_weights, learning_rate, activation_fn, training_batches=50000, batches_per_sample=500):
Creates two networks, one with and one without batch normalization, then trains them
with identical starting weights, layers, batches, etc. Finally tests and plots their accuracies.
:param use_bad_weights: bool
If True, initialize the weights of both networks to wildly inappropriate weights;
if False, use reasonable starting weights.
:param learning_rate: float
Learning rate used during gradient descent.
:param activation_fn: Callable
The function used for the output of each hidden layer. The network will use the same
activation function on every hidden layer and no activate function on the output layer.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
:param training_batches: (default 50000)
Number of batches to train.
:param batches_per_sample: (default 500)
How many batches to train before sampling the validation accuracy.
# Use identical starting weights for each network to eliminate differences in
# weight initialization as a cause for differences seen in training performance
#
# Note: The networks will use these weights to define the number of and shapes of
# its layers. The original batch normalization paper used 3 hidden layers
# with 100 nodes in each, followed by a 10 node output layer. These values
# build such a network, but feel free to experiment with different choices.
# However, the input size should always be 784 and the final output should be 10.
if use_bad_weights:
# These weights should be horrible because they have such a large standard deviation
weights = [np.random.normal(size=(784,100), scale=5.0).astype(np.float32),
np.random.normal(size=(100,100), scale=5.0).astype(np.float32),
np.random.normal(size=(100,100), scale=5.0).astype(np.float32),
np.random.normal(size=(100,10), scale=5.0).astype(np.float32)
]
else:
# These weights should be good because they have such a small standard deviation
weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,10), scale=0.05).astype(np.float32)
]
# Just to make sure the TensorFlow's default graph is empty before we start another
# test, because we don't bother using different graphs or scoping and naming
# elements carefully in this sample code.
tf.reset_default_graph()
# build two versions of same network, 1 without and 1 with batch normalization
nn = NeuralNet(weights, activation_fn, False)
bn = NeuralNet(weights, activation_fn, True)
# train and test the two models
with tf.Session() as sess:
tf.global_variables_initializer().run()
nn.train(sess, learning_rate, training_batches, batches_per_sample)
bn.train(sess, learning_rate, training_batches, batches_per_sample)
nn.test(sess)
bn.test(sess)
# Display a graph of how validation accuracies changed during training
# so we can compare how the models trained and when they converged
plot_training_accuracies(nn, bn, batches_per_sample=batches_per_sample)
train_and_test(False, 0.01, tf.nn.relu)
train_and_test(False, 0.01, tf.nn.relu, 2000, 50)
train_and_test(False, 0.01, tf.nn.sigmoid)
train_and_test(False, 1, tf.nn.relu)
train_and_test(False, 1, tf.nn.relu)
train_and_test(False, 1, tf.nn.sigmoid)
train_and_test(False, 1, tf.nn.sigmoid, 2000, 50)
train_and_test(False, 2, tf.nn.relu)
train_and_test(False, 2, tf.nn.sigmoid)
train_and_test(False, 2, tf.nn.sigmoid, 2000, 50)
train_and_test(True, 0.01, tf.nn.relu)
train_and_test(True, 0.01, tf.nn.sigmoid)
train_and_test(True, 1, tf.nn.relu)
train_and_test(True, 1, tf.nn.sigmoid)
train_and_test(True, 2, tf.nn.relu)
train_and_test(True, 2, tf.nn.sigmoid)
train_and_test(True, 1, tf.nn.relu)
train_and_test(True, 2, tf.nn.relu)
def fully_connected(self, layer_in, initial_weights, activation_fn=None):
Creates a standard, fully connected layer. Its number of inputs and outputs will be
defined by the shape of `initial_weights`, and its starting weight values will be
taken directly from that same parameter. If `self.use_batch_norm` is True, this
layer will include batch normalization, otherwise it will not.
:param layer_in: Tensor
The Tensor that feeds into this layer. It's either the input to the network or the output
of a previous layer.
:param initial_weights: NumPy array or Tensor
Initial values for this layer's weights. The shape defines the number of nodes in the layer.
e.g. Passing in a matrix of shape (784, 256) would create a layer with 784 inputs and 256
outputs.
:param activation_fn: Callable or None (default None)
The non-linearity used for the output of the layer. If None, this layer will not include
batch normalization, regardless of the value of `self.use_batch_norm`.
e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.
if self.use_batch_norm and activation_fn:
# Batch normalization uses weights as usual, but does NOT add a bias term. This is because
# its calculations include gamma and beta variables that make the bias term unnecessary.
weights = tf.Variable(initial_weights)
linear_output = tf.matmul(layer_in, weights)
num_out_nodes = initial_weights.shape[-1]
# Batch normalization adds additional trainable variables:
# gamma (for scaling) and beta (for shifting).
gamma = tf.Variable(tf.ones([num_out_nodes]))
beta = tf.Variable(tf.zeros([num_out_nodes]))
# These variables will store the mean and variance for this layer over the entire training set,
# which we assume represents the general population distribution.
# By setting `trainable=False`, we tell TensorFlow not to modify these variables during
# back propagation. Instead, we will assign values to these variables ourselves.
pop_mean = tf.Variable(tf.zeros([num_out_nodes]), trainable=False)
pop_variance = tf.Variable(tf.ones([num_out_nodes]), trainable=False)
# Batch normalization requires a small constant epsilon, used to ensure we don't divide by zero.
# This is the default value TensorFlow uses.
epsilon = 1e-3
def batch_norm_training():
# Calculate the mean and variance for the data coming out of this layer's linear-combination step.
# The [0] defines an array of axes to calculate over.
batch_mean, batch_variance = tf.nn.moments(linear_output, [0])
# Calculate a moving average of the training data's mean and variance while training.
# These will be used during inference.
# Decay should be some number less than 1. tf.layers.batch_normalization uses the parameter
# "momentum" to accomplish this and defaults it to 0.99
decay = 0.99
train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
# The 'tf.control_dependencies' context tells TensorFlow it must calculate 'train_mean'
# and 'train_variance' before it calculates the 'tf.nn.batch_normalization' layer.
# This is necessary because those two operations are not actually in the graph
# connecting the linear_output and batch_normalization layers,
# so TensorFlow would otherwise just skip them.
with tf.control_dependencies([train_mean, train_variance]):
return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance, beta, gamma, epsilon)
def batch_norm_inference():
# During inference, use our estimated population mean and variance to normalize the layer
return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance, beta, gamma, epsilon)
# Use `tf.cond` as a sort of if-check. When self.is_training is True, TensorFlow will execute
# the operation returned from `batch_norm_training`; otherwise it will execute the graph
# operation returned from `batch_norm_inference`.
batch_normalized_output = tf.cond(self.is_training, batch_norm_training, batch_norm_inference)
# Pass the batch-normalized layer output through the activation function.
# The literature states there may be cases where you want to perform the batch normalization *after*
# the activation function, but it is difficult to find any uses of that in practice.
return activation_fn(batch_normalized_output)
else:
# When not using batch normalization, create a standard layer that multiplies
# the inputs and weights, adds a bias, and optionally passes the result
# through an activation function.
weights = tf.Variable(initial_weights)
biases = tf.Variable(tf.zeros([initial_weights.shape[-1]]))
linear_output = tf.add(tf.matmul(layer_in, weights), biases)
return linear_output if not activation_fn else activation_fn(linear_output)
def batch_norm_test(test_training_accuracy):
:param test_training_accuracy: bool
If True, perform inference with batch normalization using batch mean and variance;
if False, perform inference with batch normalization using estimated population mean and variance.
weights = [np.random.normal(size=(784,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,100), scale=0.05).astype(np.float32),
np.random.normal(size=(100,10), scale=0.05).astype(np.float32)
]
tf.reset_default_graph()
# Train the model
bn = NeuralNet(weights, tf.nn.relu, True)
# First train the network
with tf.Session() as sess:
tf.global_variables_initializer().run()
bn.train(sess, 0.01, 2000, 2000)
bn.test(sess, test_training_accuracy=test_training_accuracy, include_individual_predictions=True)
batch_norm_test(True)
batch_norm_test(False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step6: Neural network classes for testing
Step9: There are quite a few comments in the code, so those should answer most of your questions. However, let's take a look at the most important lines.
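Before walking through those lines, here is a framework-free sketch of the core operation they implement. This is not the TensorFlow code from the cells above — just the normalize-scale-shift math, with invented toy batch values:

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-3):
    # Normalize each feature over the batch axis, then scale by gamma and shift by beta.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Toy batch: 4 samples, 2 features (values are made up for illustration)
x = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0], [4.0, 40.0]])
out = batch_norm_forward(x, gamma=np.ones(2), beta=np.zeros(2))
print(out.mean(axis=0))  # ~[0, 0]: each feature is centered
print(out.var(axis=0))   # ~[1, 1]: and near unit variance, up to eps
```

With gamma=1 and beta=0 this is pure normalization; the trainable gamma and beta let the network undo it where that helps.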
Step10: Comparisons between identical networks, with and without batch normalization
Step11: As expected, both networks train well and eventually reach similar test accuracies. However, notice that the model with batch normalization converges slightly faster than the other network, reaching accuracies over 90% almost immediately and nearing its max accuracy in 10 or 15 thousand iterations. The other network takes about 3 thousand iterations to reach 90% and doesn't near its best accuracy until 30 thousand or more iterations.
Step12: As you can see, using batch normalization produces a model with over 95% accuracy in only 2000 batches, and it was above 90% at somewhere around 500 batches. Without batch normalization, the model takes 1750 iterations just to hit 80%, whereas the network with batch normalization hits that mark after around 200 iterations! (Note
Step13: With the number of layers we're using and this small learning rate, using a sigmoid activation function takes a long time to start learning. It eventually starts making progress, but it took over 45 thousand batches just to get over 80% accuracy. Using batch normalization gets to 90% in around one thousand batches.
Step14: Now we're using ReLUs again, but with a larger learning rate. The plot shows how training started out pretty normally, with the network with batch normalization starting out faster than the other. But the higher learning rate bounces the accuracy around a bit more, and at some point the accuracy in the network without batch normalization just completely crashes. It's likely that too many ReLUs died off at this point because of the high learning rate.
Step15: In both of the previous examples, the network with batch normalization manages to gets over 98% accuracy, and get near that result almost immediately. The higher learning rate allows the network to train extremely fast.
Step16: In this example, we switched to a sigmoid activation function. It appears to handle the higher learning rate well, with both networks achieving high accuracy.
Step17: As you can see, even though these parameters work well for both networks, the one with batch normalization gets over 90% in 400 or so batches, whereas the other takes over 1700. When training larger networks, these sorts of differences become more pronounced.
Step18: With this very large learning rate, the network with batch normalization trains fine and almost immediately manages 98% accuracy. However, the network without normalization doesn't learn at all.
Step19: Once again, using a sigmoid activation function with the larger learning rate works well both with and without batch normalization.
Step20: In the rest of the examples, we use really bad starting weights. That is, normally we would use very small values close to zero. However, in these examples we choose random values with a standard deviation of 5. If you were really training a neural network, you would not want to do this. But these examples demonstrate how batch normalization makes your network much more resilient.
Step21: As the plot shows, without batch normalization the network never learns anything at all. But with batch normalization, it actually learns pretty well and gets to almost 80% accuracy. The starting weights obviously hurt the network, but you can see how well batch normalization does in overcoming them.
Step22: Using a sigmoid activation function works better than the ReLU in the previous example, but without batch normalization it would take a tremendously long time to train the network, if it ever trained at all.
Step23: The higher learning rate used here allows the network with batch normalization to surpass 90% in about 30 thousand batches. The network without it never gets anywhere.
Step24: Using sigmoid works better than ReLUs for this higher learning rate. However, you can see that without batch normalization, the network takes a long time to train, bounces around a lot, and spends a long time stuck at 90%. The network with batch normalization trains much more quickly, seems to be more stable, and achieves a higher accuracy.
Step25: We've already seen that ReLUs do not do as well as sigmoids with higher learning rates, and here we are using an extremely high rate. As expected, without batch normalization the network doesn't learn at all. But with batch normalization, it eventually achieves 90% accuracy. Notice, though, how its accuracy bounces around wildly during training - that's because the learning rate is really much too high, so the fact that this worked at all is a bit of luck.
Step26: In this case, the network with batch normalization trained faster and reached a higher accuracy. Meanwhile, the high learning rate makes the network without normalization bounce around erratically and have trouble getting past 90%.
Step27: When we used these same parameters earlier, we saw the network with batch normalization reach 92% validation accuracy. This time we used different starting weights, initialized using the same standard deviation as before, and the network doesn't learn at all. (Remember, an accuracy around 10% is what the network gets if it just guesses the same value all the time.)
Step29: When we trained with these parameters and batch normalization earlier, we reached 90% validation accuracy. However, this time the network almost starts to make some progress in the beginning, but it quickly breaks down and stops learning.
Step31: This version of fully_connected is much longer than the original, but once again has extensive comments to help you understand it. Here are some important points
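One of those points — the running estimates of the population mean and variance maintained with `tf.assign` — boils down to a plain exponential moving average. A minimal sketch with invented per-batch means (the `decay` value mirrors the constant used in `batch_norm_training`):

```python
decay = 0.99               # same role as the `decay` constant in batch_norm_training
pop_mean = 0.0             # population estimate starts at 0, like tf.zeros
batch_means = [2.0, 2.1, 1.9, 2.05]   # invented per-batch means

for bm in batch_means:
    # Identical update rule to:
    #   tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
    pop_mean = pop_mean * decay + bm * (1 - decay)

print(pop_mean)  # creeps slowly from 0 toward the typical batch mean
```

Because decay is close to 1, the estimate moves slowly — after only four batches it is still near zero — which is why the population statistics are only trustworthy after many training batches.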
Step32: In the following cell, we pass True for test_training_accuracy, which performs the same batch normalization that we normally perform during training.
Step33: As you can see, the network guessed the same value every time! But why? Because during training, a network with batch normalization adjusts the values at each layer based on the mean and variance of that batch. The "batches" we are using for these predictions have a single input each time, so their values are the means, and their variances will always be 0. That means the network will normalize the values at any layer to zero. (Review the equations from before to see why a value that is equal to the mean would always normalize to zero.) So we end up with the same result for every input we give the network, because it's the value the network produces when it applies its learned weights to zeros at every layer.
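The zero-output effect described above is easy to reproduce with plain NumPy: with a "batch" of one, the batch mean equals the input and the batch variance is zero, so normalization maps everything to zero (and then to beta, which starts at zero). The input values below are arbitrary:

```python
import numpy as np

x = np.array([[5.0, -3.0, 100.0]])         # a "batch" that holds a single sample
mean, var = x.mean(axis=0), x.var(axis=0)  # mean == x itself, var == 0
x_hat = (x - mean) / np.sqrt(var + 1e-3)   # the numerator is exactly zero everywhere
print(x_hat)  # [[0. 0. 0.]] -- any single-sample input normalizes to zero
```

Swap in any other single sample and the result is identical, which is exactly why every one-at-a-time prediction came out the same.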
|
11,886
|
<ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.2,<2.3"
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('lc', times=np.linspace(0,20,501))
b.run_compute(detach=True, model='mymodel')
b['mymodel']
print(b['mymodel'].status)
b.save('test_detach.bundle')
b = phoebe.Bundle.open('test_detach.bundle')
print(b['mymodel'].status)
b['mymodel'].attach()
b['mymodel']
axs, artists = b['mymodel'].plot(show=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: Now we'll add datasets
Step3: Run Compute
Step4: If we then try to access the model, we see that there is instead a single parameter that is a placeholder - this parameter stores information on how to check the progress of the run_compute job and how to load the resulting model once it's complete
Step5: Re-attaching to a Job
Step6: If we want, we can even save the Bundle and load it later to retrieve the results. In this case where the job is being run in a different Python thread but on the same machine, you cannot, however, exit Python or restart your machine.
Step7: And at any point we can choose to "re-attach". If the job isn't yet complete, we'll be in a wait loop until it is. Once the job is complete, the new model will be loaded and accessible.
|
11,887
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline --no-import-all
#plt.rc('text', usetex=True)
plt.rcParams['figure.figsize'] = (6.0, 6.0)
#plt.rcParams['savefig.dpi'] = 60
import george
from george.kernels import ExpSquaredKernel, My2ExpLEEKernel, MySignificanceKernel
from scipy.stats import chi2, norm
length_scale_of_correaltion=3.
ratio_of_length_scales=10.
kernel1 = ExpSquaredKernel(length_scale_of_correaltion, ndim=1)
kernel2 = ExpSquaredKernel(ratio_of_length_scales**2*length_scale_of_correaltion, ndim=1)
kernel12 = 0.5*kernel1+0.5*kernel2
# Create the Gaussian process
# gp = george.GP(kernel)
gp1 = george.GP(kernel1, solver=george.HODLRSolver) #faster
gp2 = george.GP(kernel2, solver=george.HODLRSolver) #faster
gp12 = george.GP(kernel12, solver=george.HODLRSolver) #faster
a1,b1,a2,b2, dummy = 1.e-2,0.1,-1.e-2,1.1, 1e3
newkernel1 = My2ExpLEEKernel(a1=a1,b1=b1,a2=0.,b2=dummy,\
l1=length_scale_of_correaltion,\
l2=ratio_of_length_scales*length_scale_of_correaltion)
newkernel2 = My2ExpLEEKernel(a1=0.,b1=dummy,a2=a2,b2=b2,\
l1=length_scale_of_correaltion,\
l2=ratio_of_length_scales*length_scale_of_correaltion)
newkernel12 = My2ExpLEEKernel(a1=a1,b1=b1,a2=a2,b2=b2,\
l1=length_scale_of_correaltion,\
l2=ratio_of_length_scales*length_scale_of_correaltion)
#build equivalent combined kernel from basic parts and operations
# this is almost it, but still need to divide by ((1./sig11/sig11 + 1./sig21/sig21))
# and current implementation of MySignificanceKernel should be 1/sig not sig
#sig1kernel = MySignificanceKernel(a=a1,b=b1)
#sig2kernel = MySignificanceKernel(a=a2,b=b2)
#newkernel12 = sig1kernel*kernel1+sig2kernel*kernel2
# made up sensitivity curves
def sigma1(x):
return a1*x+b1
def sigma2(x):
return a2*x+b2
gp1 = george.GP(newkernel1) #,solver=george.HODLRSolver)
gp2 = george.GP(newkernel2) #,solver=george.HODLRSolver)
gp12 = george.GP(newkernel12) #,solver=george.HODLRSolver)
n_scan_points=250
x = np.linspace(0,100,n_scan_points)
# slow part: pre-compute internal stuff for the GP
gp1.compute(x)
gp2.compute(x)
gp12.compute(x)
# evaluate one realization of the GP
z1 = gp1.sample(x)
z2 = gp2.sample(x)
z12 = gp12.sample(x)
# plot the chi-square random field
plt.plot(x,z1, color='red')
plt.plot(x,z2, color='red')
plt.plot(x,z12)
plt.ylabel(r'$z(\nu)$')
plt.xlabel(r'$\nu$')
def q_to_pvalue(q):
return (1.-chi2.cdf(q, 1))/2 #divide by 2 for 1-sided test
def pvalue_to_significance(p):
return -norm.ppf(p)
def significance_to_pvalue(Z):
return 1.-norm.cdf(Z)
def num_upcrossings(z):
count number of times adjacent bins change between 0,1
return np.sum((z-np.roll(z,1))**2)/2
def global_pvalue(u,u0, n):
#return (1.-chi2.cdf(u, 1))/2. + np.exp(-(u-u0)/2)*n #1-sided p-value
return (1.-chi2.cdf(u, 1)) + np.exp(-(u-u0)/2)*n # 2-sided p-value
u1 = 0.1
n_samples = 1000
n_plots = 3
plt.figure(figsize=(9,n_plots*3))
z_array = gp1.sample(x,n_samples)
n_up = np.zeros(n_samples)
sig1 = sigma1(x)
sig2 = sigma2(x)
for scan_no, z in enumerate(z_array):
scan = (z/sig1)**2
exc1 = (scan>u1) + 0. #add 0. to convert from bool to double
n_up[scan_no] = num_upcrossings(exc1)
if scan_no < n_plots:
plt.subplot(n_plots,2,2*scan_no+1)
plt.plot(x,scan)
plt.plot([0,100],[u1,u1], c='r')
plt.subplot(n_plots,2,2*scan_no+2)
plt.plot(x,exc1)
plt.ylim(-.1,1.1)
print('experiment %d has %d upcrossings' %(scan_no, n_up[scan_no]))
n_av = np.mean(n_up)
print("average number of upcrossings in %d experiments is %f" %(n_samples, n_av))
u = np.linspace(5,25,100)
global_p = global_pvalue(u,u1,n_av)
n_samples = 10000
z_array = gp1.sample(x,n_samples)
q_max = np.zeros(n_samples)
for scan_no, z in enumerate(z_array):
scan = (z/sig1)**2
q_max[scan_no] = np.max(scan)
bins, edges, patches = plt.hist(q_max, bins=30)
icdf = 1.-np.cumsum(bins/n_samples)
icdf = np.hstack((1.,icdf))
icdf_error = np.sqrt(np.cumsum(bins))/n_samples
icdf_error = np.hstack((0.,icdf_error))
plt.xlabel('$q_{max}$')
plt.ylabel('counts / bin')
# plot the p-value
plt.plot(edges,icdf, c='r', label='toys')
plt.errorbar(edges,icdf,yerr=icdf_error)
plt.plot(u, global_p, label='prediction')
plt.xlabel('$u$')
plt.ylabel('$P(q_{max} >u)$')
plt.legend(('prediction','toys'))
#plt.ylabel('P(q>u)')
plt.ylim(1E-3,10)
plt.xlim(0,25)
plt.semilogy()
plt.plot(x,sig1)
plt.plot(x,sig2)
plt.ylim(0,4)
n_samples = 10000
z_array1 = gp1.sample(x,n_samples)
z_array2 = gp2.sample(x,n_samples)
n_av1, n_av2, n_av12 = 0., 0., 0.
q_max = np.zeros((n_samples,3))
q_10 = np.zeros((n_samples,3))
n_plots = 3
plt.figure(figsize=(9,n_plots*3))
scan_no=0
for z1, z2 in zip(z_array1,z_array2):
scan1 = (z1/sig1)**2
scan2 = (z2/sig2)**2
scan12 = ((z1/sig1**2 + z2/sig2**2 )/(sig1**-2+sig2**-2))**2 # This is where the combination happens
scan12 = scan12*(sig1**-2 + sig2**-2)
if scan_no==0:
print(sig1[5], sig2[5], scan1[5], scan2[5], scan12[5])
exc1 = (scan1>u1) + 0. #add 0. to convert from bool to double
exc2 = (scan2>u1) + 0. #add 0. to convert from bool to double
exc12 = (scan12>u1) + 0. #add 0. to convert from bool to double
if scan_no < n_plots:
aspect = 1.
#plt.subplot(n_plots,3,3*scan_no+1)
plt.subplot(n_plots,1,1*scan_no+1)
plt.plot(x,scan1, c='r', label='search 1')
#plt.subplot(n_plots,3,3*scan_no+2)
plt.subplot(n_plots,1,1*scan_no+1)
plt.plot(x,scan2, c='g', label='search 2')
#plt.subplot(n_plots,3,3*scan_no+3)
plt.subplot(n_plots,1,1*scan_no+1)
plt.plot(x,scan12, c='b', label='combined')
plt.legend(('search 1', 'search 2', 'combined'))
q_max[scan_no,:] = [np.max(scan1), np.max(scan2), np.max(scan12)]
q_10[scan_no,:] = [scan1[10],scan2[10], scan12[10]]
#print num_upcrossings(exc1)
n_av1 += 1.*num_upcrossings(exc1)/n_samples
n_av2 += 1.*num_upcrossings(exc2)/n_samples
n_av12 += 1.*num_upcrossings(exc12)/n_samples
scan_no +=1
print "n_av search 1, search 2, combined = ", n_av1, n_av2, n_av12
#Simple scaling:
print "check simple scailing rule: prediction=%f, observed=%f" %(np.sqrt((n_av1**2+n_av2**2)/2), n_av12)
z_array12 = gp12.sample(x,n_samples)
q12_max = np.zeros((n_samples))
n_up = np.zeros(n_samples)
for scan_no, z12 in enumerate(z_array12):
scan12 = (z12)**2 * (sig1**-2 + sig2**-2) #divide out the variance of the combined
q12_max[scan_no] = np.max(scan12)
n_up[scan_no] = num_upcrossings((scan12 > u1)+0.)
print("average number of upcrossings for combined GP = %f" %(np.mean(n_up)))
bins, edges, patches = plt.hist(q_max[:,2], bins=50, alpha=0.1, color='r', label='explicit combination')
bins, edges, patches = plt.hist(q12_max, bins=edges, alpha=0.1, color='b', label='predicted')
plt.ylabel('counts/bin')
plt.xlabel('$q_{max}$')
plt.legend(('explicit combination', 'predicted'))
u = np.linspace(5,25,100)
global_p = global_pvalue(u,u1,np.mean(n_up))
icdf = 1.-np.cumsum(bins/n_samples)
icdf = np.hstack((1.,icdf))
icdf_error = np.sqrt(np.cumsum(bins))/n_samples
icdf_error = np.hstack((0.,icdf_error))
plt.plot(edges,icdf, c='r', label='toys')
plt.errorbar(edges,icdf,yerr=icdf_error)
plt.plot(u, global_p, label='prediction')
plt.xlabel('$u$')
plt.ylabel('$P(q_{max} >u)$')
plt.legend(('prediction','toys'))
#plt.ylabel('P(q>u)')
plt.ylim(1E-3,10)
plt.xlim(0,25)
plt.semilogy()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Now let's histogram the values of the random field.
Step3: Define the threshold for counting upcrossings
Step4: Check the code to count upcrossings and the LEE correction is working
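The `np.roll` trick inside `num_upcrossings` can be sanity-checked by hand on a short boolean sequence. The helper is redefined locally here so the check is self-contained, and the excursion pattern is a toy example:

```python
import numpy as np

def num_upcrossings(z):
    # Transitions between 0 and 1 (counted cyclically via np.roll) come in
    # up/down pairs, so half the transition count is the number of upcrossings.
    return np.sum((z - np.roll(z, 1)) ** 2) / 2

exc = np.array([0, 1, 1, 0, 1, 0])  # toy excursion set above some threshold
print(num_upcrossings(exc))          # 2.0 -- upcrossings at indices 1 and 4
```

Here the sequence crosses upward twice (index 1 and index 4) and downward twice, giving 4 transitions and hence 2 upcrossings.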
Step5: Make prediction for global p-value for q_max distribution
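In formula form, the `global_pvalue` function above implements (a sketch of) the standard upcrossing bound; the notation below is mine, chosen to match the code's variables `u`, `u0`, and `n` (the mean upcrossing count at the low reference level):

```latex
P\!\left(q_{\max} > u\right) \;\approx\; P\!\left(\chi^2_{1} > u\right) \;+\; \langle N(u_0)\rangle \, e^{-(u-u_0)/2}
```

The first term is the chance of exceeding $u$ at any single fixed point of the scan; the second scales the expected number of upcrossings counted at the low threshold $u_0$ up to the level $u$, which is exactly the `np.exp(-(u-u0)/2)*n` piece in the code.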
Step6: Generate many toy experiments (via the Gaussian Process), find maximum local significance for each, and check the prediction for the LEE-corrected global p-value
Step7: Wow! That was awesome! Go math!
Step8: Now let's do some experiments combining two searches
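The two-line construction of `scan12` in the combination loop is ordinary inverse-variance weighting. A small NumPy check, with made-up signal values and sensitivities, that it equals the chi-square of the weighted mean:

```python
import numpy as np

def scan12_as_coded(z1, z2, s1, s2):
    # Literal transcription of the two scan12 lines in the combination loop
    c = ((z1 / s1**2 + z2 / s2**2) / (s1**-2 + s2**-2)) ** 2
    return c * (s1**-2 + s2**-2)

def chi2_of_weighted_mean(z1, z2, s1, s2):
    # Inverse-variance weighted mean, squared and divided by its own variance
    w1, w2 = 1.0 / s1**2, 1.0 / s2**2
    mu = (w1 * z1 + w2 * z2) / (w1 + w2)
    return mu**2 * (w1 + w2)

z1, z2, s1, s2 = 0.7, -0.2, 0.5, 1.1   # made-up measurements and sensitivities
print(scan12_as_coded(z1, z2, s1, s2))
print(chi2_of_weighted_mean(z1, z2, s1, s2))  # the two agree
```

So the combined test statistic is just the familiar weighted-average significance, which is why it behaves like a single chi-square field with the combined sensitivity.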
Step9: Now let's test the prediction that gp12 has the same behavior as the explicit combination of search 1 and search 2.
Step10: Compare $q_{max}$ distribution from direct combination with the prediction from gp12
|
11,888
|
<ASSISTANT_TASK:>
Python Code:
# handy graph library for python
import igraph
# science
import numpy as np
from collections import defaultdict
# plot things
import tabulate
import matplotlib.pyplot as plt
%matplotlib inline
# get some toy graph data so we can demonstrate these properties
network = igraph.Nexus.get("kaptail")['KAPFTI1']
network.summary()
# visualize the graph. if you can't get Cairo to work, don't worry about it.
# if you are worried about it anyway, google "cairo py2cairo" and sort this mess out for yourself
igraph.plot(network)
# Using canned methods from igraph won't be very illustrative, so for the most part, I'll use the adjacency matrix
adj_mat = np.array(network.get_adjacency().data)
num_nodes = adj_mat.shape[0]
fig, axes = plt.subplots(1, figsize = (5,5))
axes.set_title("Adjacency Matrix (black = edge)")
axes.matshow(adj_mat, cmap = 'gist_heat_r')
# these are just constants that we'll want to use
# Get some 1s we'll need repeatedly
ones = np.ones((num_nodes, 1))
normed_ones = np.divide(ones, np.linalg.norm(ones))
zeros = np.zeros((num_nodes, 1))
identity = np.eye(num_nodes)
# total number of nodes in the graph = size of the adjacency matrix
num_nodes = adj_mat.shape[0]
# calculate the in- and out- degree
outdegree = np.sum(adj_mat,1)
indegree = np.sum(adj_mat,0)
# graph the vertices ordered by in + out degree
names_and_degrees = {network.vs["name"][i]: {"in": indegree[i], "out": outdegree[i]} for i in range(0,num_nodes)}
sorted_names_and_degrees = sorted(names_and_degrees.items(), key = lambda x: -1*(x[1]["in"] + x[1]["out"]))
sorted_indegree = [x[1]["in"] for x in sorted_names_and_degrees]
sorted_outdegree = [x[1]["out"] for x in sorted_names_and_degrees]
sorted_names = [x[0] for x in sorted_names_and_degrees]
fig, axes = plt.subplots(3,1, figsize = (16,20))
plt.subplots_adjust(bottom = .1)
axes[0].set_title("Degree for each node", size = 16)
axes[0].bar([x + .4 for x in range(0,num_nodes)], sorted_indegree, color = 'darkorange', width = .8)
axes[0].bar([x + .4 for x in range(0,num_nodes)], sorted_outdegree, bottom = sorted_indegree, color = 'navajowhite', width = .8)
axes[0].legend(["in-degree", "out-degree"])
axes[0].set_ylim(0,max(outdegree + indegree) + 1)
axes[0].set_ylabel("degree", size = 16)
axes[0].set_xticks([x + .9 for x in range(0,num_nodes)])
_ = axes[0].set_xticklabels(sorted_names, rotation = 90)
# graph the vertices sorted by the indegree
sorted_by_indegrees = sorted(names_and_degrees.items(), key = lambda x: -1*(x[1]["in"]))
axes[1].bar([x + .4 for x in range(0,num_nodes)], [x[1]["in"] for x in sorted_by_indegrees], color = 'darkorange', width = .8)
axes[1].set_ylim(0,max(outdegree + indegree) + 1)
axes[1].set_ylabel("indegree", size = 16)
axes[1].legend(["in-degree"])
axes[1].set_xticks([x + .9 for x in range(0,num_nodes)])
_ = axes[1].set_xticklabels([x[0] for x in sorted_by_indegrees], rotation = 90)
# graph the vertices ordered by the out degree
sorted_by_outdegrees = sorted(names_and_degrees.items(), key = lambda x: -1*(x[1]["out"]))
axes[2].bar([x + .4 for x in range(0,num_nodes)], [x[1]["out"] for x in sorted_by_outdegrees], color = 'navajowhite', width = .8)
axes[2].set_ylim(0,max(outdegree + indegree) + 1)
axes[2].set_xlabel("vertex", size = 16)
axes[2].set_ylabel("indegree", size = 16)
axes[2].legend(["out-degree"])
axes[2].set_xticks([x + .9 for x in range(0,num_nodes)])
_ = axes[2].set_xticklabels([x[0] for x in sorted_by_outdegrees], rotation = 90)
# I'm also going to compute total degree, scaled from 0 to 1, so that I can compare with other centrality metrics later
max_degree = float(max([x[1]["in"] + x[1]["out"] for x in sorted_names_and_degrees]))
scaled_degree = {x[0]: (x[1]["in"] + x[1]["out"])/max_degree for x in sorted_names_and_degrees}
sorted_degree_centrality = sorted(scaled_degree.items(), key = lambda x:-x[1])
#max_degree = float(max([x[1]["in"] for x in sorted_names_and_degrees]))
#scaled_degree = {x[0]: (x[1]["in"])/max_degree for x in sorted_names_and_degrees}
#sorted_degree_centrality = sorted(scaled_degree.items(), key = lambda x:-x[1])
# set some initial guess for the centralities. giving every node the same value is a good start
eigenvector_centrality = normed_ones
# give an initial error value and a tolerance to stop iterating
err = 100
tol = .01
while err > tol:
# calculate x' with Ax
eigenvector_centrality_new_unnormed = np.dot(adj_mat, eigenvector_centrality)
# norm your x values (only the proportions matter, if you don't normalize the values blow up)
eigenvector_centrality_new = np.divide(eigenvector_centrality_new_unnormed,
np.linalg.norm(eigenvector_centrality_new_unnormed))
# calculate the error
err = sum(abs(np.subtract(eigenvector_centrality_new, eigenvector_centrality)))
# set the new centrality vector
eigenvector_centrality = eigenvector_centrality_new
# sort and scale the centrality metric from 0 to 1
sorted_eigenvector_centrality = sorted(zip(network.vs["name"],
[x/max(eigenvector_centrality) for x in eigenvector_centrality]),
key = lambda x: -x[1])
dict_eigenvector_centrality = dict(sorted_eigenvector_centrality)
# plot things
fig, axes = plt.subplots(2,1, figsize = (16,12))
plt.subplots_adjust(bottom = .1)
axes[0].set_title("Eigenvector Centrality", size = 16)
axes[0].bar([x + .4 for x in range(0,num_nodes)], [x[1] for x in sorted_eigenvector_centrality],
color = 'royalblue', width = .8)
axes[0].set_ylabel("Eigenvector Centrality Values", size = 16)
axes[0].set_xticks([x + .9 for x in range(0,num_nodes)])
_ = axes[0].set_xticklabels([x[0] for x in sorted_eigenvector_centrality], rotation = 90)
node_ordering = sorted_degree_centrality #sorted_eigenvector_centrality
axes[1].plot(range(0,num_nodes), [dict_eigenvector_centrality[x[0]] for x in node_ordering], color = 'royalblue')
axes[1].plot(range(0,num_nodes), [scaled_degree[x[0]] for x in node_ordering], color = 'navajowhite')
axes[1].plot(range(0,num_nodes), [dict_eigenvector_centrality[x[0]] for x in node_ordering], color = 'royalblue', marker = 'o')
axes[1].plot(range(0,num_nodes), [scaled_degree[x[0]] for x in node_ordering], color = 'navajowhite', marker = 'o', alpha = .5)
axes[1].set_ylabel("Eigenvector Centrality vs Degree", size = 16)
axes[1].set_xticks([x for x in range(0,num_nodes)])
axes[1].grid()
axes[1].set_xlim(-0.5,38.5)
axes[1].set_ylim(0,1.05)
axes[1].legend(["eigenvector centrality", "degree centrality"])
_ = axes[1].set_xticklabels([x[0] for x in node_ordering], rotation = 90)
# matrix formulation. this is only guaranteed to give a meaningful ranking when the
# adjacency matrix is irreducible (i.e., the graph is strongly connected)
# get the eigenvalues and vectors
e_vals, e_vecs = np.linalg.eig(adj_mat)
# find the principal eigenvector
largest_eval_at = e_vals.argsort()[-1]
# centralities proportional to the principal eigenvector, barring some (potentially negative) constant factor
matrix_eigen = sorted(zip(network.vs["name"], e_vecs[:,largest_eval_at]), key = lambda x: -1*abs(x[1]))
print(tabulate.tabulate([[x[0], np.real(x[1])] for x in matrix_eigen][0:8]))
tree = igraph.Graph(directed = True)
tree.add_vertices([0,1,2,3,4,5,6])
tree.add_edges([(0,1),(0,2),(1,3),(1,4),(2,5),(2,6)])
tree.vs["label"] = [0,1,2,3,4,5,6]
igraph.plot(tree, layout = tree.layout("sugiyama"), bbox=(0, 0, 150, 150), vertex_label_size = 1)
# I'll use the built-in eigenvector centrality here for brevity, but it's the same algorithm
tree.eigenvector_centrality()
alpha_values = np.arange(0.219,.4,.0005)
det_values = [np.linalg.det(np.subtract(adj_mat, 1.0/a * np.eye(adj_mat.shape[0]))) for a in alpha_values]
f, axes = plt.subplots(1,2, figsize = (14,5))
axes[0].plot(alpha_values, [0 for a in alpha_values], 'k')
axes[1].plot(alpha_values, [0 for a in alpha_values], 'k')
axes[0].plot(alpha_values, det_values)
axes[1].plot([0.21988], [0], 'ro')
axes[1].plot(alpha_values, det_values)
xlim0 = axes[0].set_xlim(0.219,.32)
xlim1 = axes[1].set_xlim(0.219,.23)
alpha_index = np.where(np.array(det_values) > 0)[0][0] - 1
katz_alpha = alpha_values[alpha_index] * .9
print("The chosen alpha value is:", katz_alpha)
# iterative formulation
katz_centrality = zeros
err = 100
tol = .01
while err > tol:
katz_centrality_new = katz_alpha * np.dot(adj_mat, katz_centrality) + ones
err = sum(abs(np.subtract(katz_centrality_new, katz_centrality)))
katz_centrality = katz_centrality_new
sorted_katz_centrality = sorted(zip(network.vs["name"],
[x[0]/max(katz_centrality)[0] for x in katz_centrality]),
key = lambda x: -x[1])
dict_katz_centrality = dict(sorted_katz_centrality)
fig, axes = plt.subplots(2,1, figsize = (16,12))
plt.subplots_adjust(bottom = .1)
# plot katz centrality
axes[0].set_title("Katz Centrality", size = 16)
axes[0].bar([x + .4 for x in range(0,num_nodes)], [x[1] for x in sorted_katz_centrality],
color = 'limegreen', width = .8)
axes[0].set_ylabel("Katz Centrality Values", size = 16)
axes[0].set_xticks([x + .9 for x in range(0,num_nodes)])
_ = axes[0].set_xticklabels([x[0] for x in sorted_katz_centrality], rotation = 90)
# compare with other measures
fade = .3
node_ordering = sorted_degree_centrality #sorted_katz_centrality
axes[1].plot(range(0,num_nodes), [dict_katz_centrality[x[0]] for x in node_ordering], color = 'limegreen')
axes[1].plot(range(0,num_nodes), [dict_eigenvector_centrality[x[0]] for x in node_ordering], color = 'royalblue', alpha = fade)
axes[1].plot(range(0,num_nodes), [scaled_degree[x[0]] for x in node_ordering], color = 'navajowhite', alpha = fade)
axes[1].plot(range(0,num_nodes), [dict_katz_centrality[x[0]] for x in node_ordering], color = 'limegreen', marker = 'o')
axes[1].plot(range(0,num_nodes), [dict_eigenvector_centrality[x[0]] for x in node_ordering], color = 'royalblue', marker = 'o', alpha = fade)
axes[1].plot(range(0,num_nodes), [scaled_degree[x[0]] for x in node_ordering], color = 'navajowhite', marker = 'o', alpha = fade)
axes[1].set_ylabel("Katz vs Eigenvector vs Degree", size = 16)
axes[1].set_xticks([x for x in range(0,num_nodes)])
axes[1].grid()
axes[1].set_xlim(-0.5,38.5)
axes[1].set_ylim(0,1.05)
axes[1].legend(["katz centrality", "eigenvector centrality", "degree centrality"])
_ = axes[1].set_xticklabels([x[0] for x in node_ordering], rotation = 90)
# in case you aren't convinced, here is the matrix formulation
matrix_form_katz_centrality = np.dot(np.linalg.inv(identity - katz_alpha*adj_mat), ones)
sorted_matrix_form_katz_centrality = sorted(zip(network.vs["name"], [x[0] for x in matrix_form_katz_centrality]),
key = lambda x: -x[1])
print(tabulate.tabulate([[x[0],x[1]] for x in sorted_matrix_form_katz_centrality][0:8]))
# and for the tree graph case
tree_adj_mat = np.array(tree.get_adjacency().data)
katz_centrality_tree = np.dot(np.linalg.inv(np.eye(len(tree.vs)) - katz_alpha*tree_adj_mat),
np.ones(len(tree.vs)))
katz_centrality_tree = sorted(zip(tree.vs["label"], [x for x in katz_centrality_tree]),
key = lambda x: -x[1])
print(tabulate.tabulate([[x[0],x[1]] for x in katz_centrality_tree][0:8]))
igraph.plot(tree, layout = tree.layout("sugiyama"), bbox=(0, 0, 150, 150), vertex_label_size = 1)
# we need the out-degree vector with no zeros in it, so that we don't divide by zero.
# it's okay to replace zeros with ones (on a copy, so the original outdegree array is
# left untouched), because the matrix multiplication will zero them out again
outdegree_no_zeros = outdegree.copy()
outdegree_no_zeros[outdegree_no_zeros == 0] = 1
Degree = np.diag(outdegree_no_zeros)
outdegree_no_zeros = outdegree_no_zeros.reshape(num_nodes,1)
# iterative formulation
pagerank = ones
err = 100
tol = .001
pagerank_alpha = .85
while err > tol:
pagerank_new = (pagerank_alpha * np.dot(adj_mat, np.divide(pagerank, outdegree_no_zeros))) + ones
err = sum(abs(np.subtract(pagerank_new, pagerank)))
pagerank = pagerank_new
sorted_pagerank = sorted(zip(network.vs["name"], [x/float(max(pagerank)) for x in pagerank]), key = lambda x:-x[1])
dict_pagerank = dict(sorted_pagerank)
# plotting code, ignore
fig, axes = plt.subplots(2,1, figsize = (16,12))
plt.subplots_adjust(bottom = .1)
# plot katz centrality
axes[0].set_title("PageRank Centrality", size = 16)
axes[0].bar([x + .4 for x in range(0,num_nodes)], [x[1] for x in sorted_pagerank],
color = 'purple', width = .8)
axes[0].set_ylabel("PageRank Centrality Values", size = 16)
axes[0].set_xticks([x + .9 for x in range(0,num_nodes)])
_ = axes[0].set_xticklabels([x[0] for x in sorted_pagerank], rotation = 90)
# compare with other measures
fade = .3
node_ordering = sorted_degree_centrality #sorted_pagerank
axes[1].plot(range(0,num_nodes), [dict_pagerank[x[0]] for x in node_ordering], color = 'purple')
axes[1].plot(range(0,num_nodes), [dict_katz_centrality[x[0]] for x in node_ordering], color = 'limegreen', alpha = fade)
axes[1].plot(range(0,num_nodes), [dict_eigenvector_centrality[x[0]] for x in node_ordering], color = 'royalblue', alpha = fade)
axes[1].plot(range(0,num_nodes), [scaled_degree[x[0]] for x in node_ordering], color = 'navajowhite', alpha = fade)
axes[1].plot(range(0,num_nodes), [dict_pagerank[x[0]] for x in node_ordering], color = 'purple', marker = 'o')
axes[1].plot(range(0,num_nodes), [dict_katz_centrality[x[0]] for x in node_ordering], color = 'limegreen', marker = 'o', alpha = fade)
axes[1].plot(range(0,num_nodes), [dict_eigenvector_centrality[x[0]] for x in node_ordering], color = 'royalblue', marker = 'o', alpha = fade)
axes[1].plot(range(0,num_nodes), [scaled_degree[x[0]] for x in node_ordering], color = 'navajowhite', marker = 'o', alpha = fade)
axes[1].set_ylabel("PageRank vs Katz vs Eigenvector vs Degree", size = 16)
axes[1].set_xticks([x for x in range(0,num_nodes)])
axes[1].grid()
axes[1].set_xlim(-0.5,38.5)
axes[1].set_ylim(0,1.05)
axes[1].legend(["pagerank centrality", "katz centrality", "eigenvector centrality", "degree centrality"])
_ = axes[1].set_xticklabels([x[0] for x in node_ordering], rotation = 90)
# matrix formulation, in case you didn't believe my hasty non-derivation
D_minus_alpha_a = np.subtract(Degree, pagerank_alpha * adj_mat)
D_minus_alpha_a_inv = np.linalg.inv(D_minus_alpha_a)
pagerank_matrix_method = np.dot(np.dot(Degree, D_minus_alpha_a_inv), ones)
print(tabulate.tabulate([[x[0], x[1]] for x in
sorted(zip(network.vs["name"], pagerank_matrix_method), key = lambda x:-x[1])[0:8]]))
normed_ones = np.divide(np.ones(num_nodes), np.linalg.norm(np.ones(num_nodes)))
err = 100
tol = .001
hub_score = normed_ones
authority_score = normed_ones
alpha = 1
beta= 1
while err > tol:
# find the new hub and authority scores
authority_score_new_unnormed = alpha * np.dot(adj_mat, hub_score)
hub_score_new_unnormed = beta * np.dot(np.transpose(adj_mat), authority_score)
# norm the scores (we only care about proportional values anyway)
authority_score_new = np.divide(authority_score_new_unnormed, np.linalg.norm(authority_score_new_unnormed))
hub_score_new = np.divide(hub_score_new_unnormed, np.linalg.norm(hub_score_new_unnormed))
# find error and update
err = sum(abs(np.subtract(hub_score_new, hub_score))) + sum(abs(np.subtract(authority_score_new, authority_score)))
authority_score = authority_score_new
hub_score = hub_score_new
dict_authority = dict(zip(network.vs["name"], [x/max(authority_score) for x in authority_score] ))
dict_hub = dict(zip(network.vs["name"], [x/max(hub_score) for x in hub_score] ))
sorted_authority_score = sorted(dict_authority.items(), key = lambda x: -x[1])
fig, axes = plt.subplots(2,1, figsize = (16,12))
plt.subplots_adjust(bottom = .1)
# plot katz centrality
axes[0].set_title("Hub/Authority Centrality", size = 16)
axes[0].bar([x + .2 for x in range(0,num_nodes)], [x[1] for x in sorted_authority_score], color = 'indianred', width = .4)
axes[0].bar([x + .6 for x in range(0,num_nodes)], [dict_hub[x[0]] for x in sorted_authority_score], color = 'brown', width = .4)
axes[0].set_ylabel("Hub/Authority Centrality Values", size = 16)
axes[0].set_xticks([x + .8 for x in range(0,num_nodes)])
_ = axes[0].set_xticklabels([x[0] for x in sorted_authority_score], rotation = 90)
axes[0].legend(["authority score", "hub score"])
# compare with other measures
fade = .3
node_ordering = sorted_degree_centrality #sorted_pagerank
axes[1].plot(range(0,num_nodes), [dict_authority[x[0]] for x in node_ordering], color = 'indianred')
axes[1].plot(range(0,num_nodes), [dict_hub[x[0]] for x in node_ordering], color = 'brown')
axes[1].plot(range(0,num_nodes), [dict_pagerank[x[0]] for x in node_ordering], color = 'purple', alpha = fade)
axes[1].plot(range(0,num_nodes), [dict_katz_centrality[x[0]] for x in node_ordering], color = 'limegreen', alpha = fade)
axes[1].plot(range(0,num_nodes), [dict_eigenvector_centrality[x[0]] for x in node_ordering], color = 'royalblue', alpha = fade)
axes[1].plot(range(0,num_nodes), [scaled_degree[x[0]] for x in node_ordering], color = 'navajowhite', alpha = fade)
axes[1].plot(range(0,num_nodes), [dict_authority[x[0]] for x in node_ordering], color = 'indianred', marker = 'o')
axes[1].plot(range(0,num_nodes), [dict_hub[x[0]] for x in node_ordering], color = 'brown', marker = 'o')
axes[1].plot(range(0,num_nodes), [dict_pagerank[x[0]] for x in node_ordering], color = 'purple', marker = 'o', alpha = fade)
axes[1].plot(range(0,num_nodes), [dict_katz_centrality[x[0]] for x in node_ordering], color = 'limegreen', marker = 'o', alpha = fade)
axes[1].plot(range(0,num_nodes), [dict_eigenvector_centrality[x[0]] for x in node_ordering], color = 'royalblue', marker = 'o', alpha = fade)
axes[1].plot(range(0,num_nodes), [scaled_degree[x[0]] for x in node_ordering], color = 'navajowhite', marker = 'o', alpha = fade)
axes[1].set_ylabel("Hub/Authority vs PageRank vs Katz vs Eigenvector vs Degree", size = 12)
axes[1].set_xticks([x for x in range(0,num_nodes)])
axes[1].grid()
axes[1].set_xlim(-0.5,38.5)
axes[1].set_ylim(0,1.05)
axes[1].legend(["authority score", "hub score",
"pagerank centrality", "katz centrality", "eigenvector centrality", "degree centrality"])
_ = axes[1].set_xticklabels([x[0] for x in node_ordering], rotation = 90)
# Let's look at how hub and authority scores are correlated
auth_hub = [(y[0][0], y[0][1], y[1])
for y in zip(sorted_authority_score,[dict_hub[x[0]] for x in sorted_authority_score])]
plt.figure(figsize = (10,8))
plt.plot([x[1] for x in auth_hub], [x[2] for x in auth_hub], 'o', color = 'indianred')
plt.plot([0,1],[0,1], 'k--')
plt.ylim(-.05,1.05)
plt.xlim(-.05,1.05)
for label, x, y in auth_hub:
plt.annotate(label, (x - .02,y + .011), size = 8)
plt.grid()
plt.title("Authority score vs hub score", size = 16)
plt.ylabel("Authority Score", size = 16)
_ = plt.xlabel("Hub Score", size = 16)
# it doesn't make sense to calculate these metrics on components that aren't connected, so I'll get the
# strongly connected component to use in the next few metrics
connected_components = network.clusters(mode = 'STRONG')
giant_component = max(zip([len(c) for c in connected_components], connected_components), key = lambda x:x[0])[1]
connected_subnetwork = network.subgraph(giant_component)
unconnected_nodes = list(set(network.vs["name"]) - set(connected_subnetwork.vs["name"]))
connected_subnetwork.summary()
num_connected_nodes = len(connected_subnetwork.vs)
adj_mat_connected_subnetwork = np.array(connected_subnetwork.get_adjacency().data)
outdegree_connected_subnetwork = np.sum(adj_mat_connected_subnetwork,1)
indegree_connected_subnetwork = np.sum(adj_mat_connected_subnetwork,0)
closeness_centrality = []
for vertex in connected_subnetwork.vs:
closeness_centrality.append(num_connected_nodes / float(sum(connected_subnetwork.shortest_paths(vertex)[0])))
dict_closeness_centrality = defaultdict(float)
for i in range(0, num_connected_nodes):
dict_closeness_centrality[connected_subnetwork.vs["name"][i]] = closeness_centrality[i]/max(closeness_centrality)
for unconnected_node in unconnected_nodes:
dict_closeness_centrality[unconnected_node] = 0
sorted_closeness = sorted(dict_closeness_centrality.items(), key = lambda x: -x[1])
fig, axes = plt.subplots(2,1, figsize = (16,12))
plt.subplots_adjust(bottom = .1)
# plot centrality
axes[0].set_title("Closeness Centrality", size = 16)
axes[0].bar([x + .4 for x in range(0,num_nodes)], [x[1] for x in sorted_closeness],
color = 'dimgray', width = .8)
axes[0].set_ylabel("Closeness Centrality Values", size = 16)
axes[0].set_xticks([x + .8 for x in range(0,num_nodes)])
_ = axes[0].set_xticklabels([x[0] for x in sorted_closeness], rotation = 90)
# compare with other measures
fade = .3
node_ordering = sorted_degree_centrality #sorted_pagerank
axes[1].plot(range(0,num_nodes), [dict_closeness_centrality[x[0]] for x in node_ordering], color = 'dimgray')
axes[1].plot(range(0,num_nodes), [dict_authority[x[0]] for x in node_ordering], color = 'indianred', alpha = fade)
axes[1].plot(range(0,num_nodes), [dict_hub[x[0]] for x in node_ordering], color = 'brown', alpha = fade)
axes[1].plot(range(0,num_nodes), [dict_pagerank[x[0]] for x in node_ordering], color = 'purple', alpha = fade)
axes[1].plot(range(0,num_nodes), [dict_katz_centrality[x[0]] for x in node_ordering], color = 'limegreen', alpha = fade)
axes[1].plot(range(0,num_nodes), [dict_eigenvector_centrality[x[0]] for x in node_ordering], color = 'royalblue', alpha = fade)
axes[1].plot(range(0,num_nodes), [scaled_degree[x[0]] for x in node_ordering], color = 'navajowhite', alpha = fade)
axes[1].plot(range(0,num_nodes), [dict_closeness_centrality[x[0]] for x in node_ordering], color = 'dimgray', marker = 'o')
axes[1].plot(range(0,num_nodes), [dict_authority[x[0]] for x in node_ordering], color = 'indianred', marker = 'o', alpha = fade)
axes[1].plot(range(0,num_nodes), [dict_hub[x[0]] for x in node_ordering], color = 'brown', marker = 'o', alpha = fade)
axes[1].plot(range(0,num_nodes), [dict_pagerank[x[0]] for x in node_ordering], color = 'purple', marker = 'o', alpha = fade)
axes[1].plot(range(0,num_nodes), [dict_katz_centrality[x[0]] for x in node_ordering], color = 'limegreen', marker = 'o', alpha = fade)
axes[1].plot(range(0,num_nodes), [dict_eigenvector_centrality[x[0]] for x in node_ordering], color = 'royalblue', marker = 'o', alpha = fade)
axes[1].plot(range(0,num_nodes), [scaled_degree[x[0]] for x in node_ordering], color = 'navajowhite', marker = 'o', alpha = fade)
axes[1].set_ylabel("Closeness vs Hub/Authority vs PageRank vs Katz vs Eigenvector vs Degree", size = 12)
axes[1].set_xticks([x for x in range(0,num_nodes)])
axes[1].grid()
axes[1].set_xlim(-0.5,38.5)
axes[1].set_ylim(0,1.05)
axes[1].legend(["closeness centrality", "authority score", "hub score",
"pagerank centrality", "katz centrality", "eigenvector centrality", "degree centrality"])
_ = axes[1].set_xticklabels([x[0] for x in node_ordering], rotation = 90)
shortest_paths_all_pairs = {}
for vertex in range(0,num_connected_nodes):
for dest in range(0,num_connected_nodes):
shortest_paths_all_pairs[(vertex, dest)] = connected_subnetwork.get_all_shortest_paths(vertex, to = dest)
betweenness_sums = defaultdict(int)
for v in range(0,num_connected_nodes):
for i in range(0,num_connected_nodes):
for j in range(0,num_connected_nodes):
n_v_vector = [int(v in path) for path in shortest_paths_all_pairs[(i,j)]]
betweenness_sums[v] += float(sum(n_v_vector))/float(len(n_v_vector))
betweenness = [x[1] for x in sorted(betweenness_sums.items(), key = lambda x:x[0])]
print(tabulate.tabulate([[x[0], x[1]] for x in
sorted(zip(connected_subnetwork.vs["name"], betweenness), key = lambda x: -x[1])][0:8]))
# random walk betweenness
iterations = 50
give_up_after = 2*num_connected_nodes
paths = defaultdict(list)
# random walk between every pair of vertices
for vertex in range(0,num_connected_nodes):
for dest in range(0,num_connected_nodes):
for i in range(0,iterations):
# start at the first vertex
current_vertex = vertex
path = [current_vertex]
# now wander around
steps = 0
while (current_vertex != dest) and (steps < give_up_after):
steps += 1
# choose a random neighbour of the current vertex to walk to
options = np.where(adj_mat_connected_subnetwork[current_vertex] != 0)[0]
# dead end
if options.size == 0:
path = []
break
else:
current_vertex = np.random.choice(options)
path.append(current_vertex)
paths[(vertex,dest)].append(path)
flow_betweenness_sums = defaultdict(int)
for v in range(0,num_connected_nodes):
for i in range(0,num_connected_nodes):
for j in range(0,num_connected_nodes):
n_v_vector = [int(v) in path for path in paths[(i,j)]]
flow_betweenness_sums[v] += float(sum(n_v_vector))/float(len(n_v_vector))
flow_betweenness = [x[1] for x in sorted(flow_betweenness_sums.items(), key = lambda x:x[0])]
dict_betweenness = dict(zip(connected_subnetwork.vs["name"], [x/max(betweenness) for x in betweenness]))
dict_flow_betweenness = dict(zip(connected_subnetwork.vs["name"], [x/max(flow_betweenness) for x in flow_betweenness]))
for unconnected_node in unconnected_nodes:
dict_betweenness[unconnected_node] = 0
dict_flow_betweenness[unconnected_node] = 0
sorted_flow_betweenness = sorted(dict_flow_betweenness.items(), key = lambda x: -x[1])
sorted_betweenness = sorted(dict_betweenness.items(), key = lambda x: -x[1])
print(tabulate.tabulate([[x[0], x[1]] for x in
sorted(zip(connected_subnetwork.vs["name"], flow_betweenness), key = lambda x: -x[1])][0:8]))
fig, axes = plt.subplots(2,1, figsize = (16,12))
plt.subplots_adjust(bottom = .1)
# plot centrality
axes[0].set_title("Betweenness Centrality", size = 16)
axes[0].bar([x + .2 for x in range(0,num_nodes)], [x[1] for x in sorted_betweenness], color = 'cyan', width = .4)
axes[0].bar([x + .6 for x in range(0,num_nodes)], [dict_flow_betweenness[x[0]] for x in sorted_betweenness], color = 'darkcyan', width = .4)
axes[0].set_ylabel("Betweenness Centrality Values", size = 16)
axes[0].set_xticks([x + .8 for x in range(0,num_nodes)])
_ = axes[0].set_xticklabels([x[0] for x in sorted_betweenness], rotation = 90)
axes[0].legend(["betweenness by geodesic distances", "betweenness by random walk"])
# compare with other measures
fade = .3
node_ordering = sorted_degree_centrality #sorted_pagerank
axes[1].plot(range(0,num_nodes), [dict_betweenness[x[0]] for x in node_ordering], color = 'cyan')
axes[1].plot(range(0,num_nodes), [dict_flow_betweenness[x[0]] for x in node_ordering], color = 'darkcyan')
axes[1].plot(range(0,num_nodes), [dict_closeness_centrality[x[0]] for x in node_ordering], color = 'dimgray', alpha = fade)
#axes[1].plot(range(0,num_nodes), [dict_authority[x[0]] for x in node_ordering], color = 'indianred', alpha = fade)
#axes[1].plot(range(0,num_nodes), [dict_hub[x[0]] for x in node_ordering], color = 'brown', alpha = fade)
#axes[1].plot(range(0,num_nodes), [dict_pagerank[x[0]] for x in node_ordering], color = 'purple', alpha = fade)
#axes[1].plot(range(0,num_nodes), [dict_katz_centrality[x[0]] for x in node_ordering], color = 'limegreen', alpha = fade)
#axes[1].plot(range(0,num_nodes), [dict_eigenvector_centrality[x[0]] for x in node_ordering], color = 'royalblue', alpha = fade)
#axes[1].plot(range(0,num_nodes), [scaled_degree[x[0]] for x in node_ordering], color = 'navajowhite', alpha = fade)
axes[1].plot(range(0,num_nodes), [dict_betweenness[x[0]] for x in node_ordering], color = 'cyan', marker = 'o')
axes[1].plot(range(0,num_nodes), [dict_flow_betweenness[x[0]] for x in node_ordering], color = 'darkcyan', marker = 'o')
axes[1].plot(range(0,num_nodes), [dict_closeness_centrality[x[0]] for x in node_ordering], color = 'dimgray', marker = 'o', alpha = fade)
#axes[1].plot(range(0,num_nodes), [dict_authority[x[0]] for x in node_ordering], color = 'indianred', marker = 'o', alpha = fade)
#axes[1].plot(range(0,num_nodes), [dict_hub[x[0]] for x in node_ordering], color = 'brown', marker = 'o', alpha = fade)
#axes[1].plot(range(0,num_nodes), [dict_pagerank[x[0]] for x in node_ordering], color = 'purple', marker = 'o', alpha = fade)
#axes[1].plot(range(0,num_nodes), [dict_katz_centrality[x[0]] for x in node_ordering], color = 'limegreen', marker = 'o', alpha = fade)
#axes[1].plot(range(0,num_nodes), [dict_eigenvector_centrality[x[0]] for x in node_ordering], color = 'royalblue', marker = 'o', alpha = fade)
#axes[1].plot(range(0,num_nodes), [scaled_degree[x[0]] for x in node_ordering], color = 'navajowhite', marker = 'o', alpha = fade)
axes[1].set_ylabel("Betweenness vs Flow Betweenness vs Closeness", size = 12)
axes[1].set_xticks([x for x in range(0,num_nodes)])
axes[1].grid()
axes[1].set_xlim(-0.5,38.5)
axes[1].set_ylim(0,1.05)
axes[1].legend(["betweenness", "flow betweenness", "closeness centrality"])#, "authority score", "hub score", "pagerank centrality", "katz centrality", "eigenvector centrality", "degree centrality"])
_ = axes[1].set_xticklabels([x[0] for x in node_ordering], rotation = 90)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A quick graph vocabulary refresher
Step2: degree
Step3: Degree centrality
Step4: Eigenvector Centrality
Step5: One potential problem with eigenvector centrality is that only vertices in the strongly connected component (vertices from which you can reach the rest of the network, and which can be reached from anywhere on the network) can have a non-zero eigenvector centrality.
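To make this concrete, here is a minimal sketch (the three-node directed path is a hypothetical example, using the same unnormalised power-iteration update as the notebook's loop): the adjacency matrix of an acyclic graph is nilpotent, so repeated multiplication drives every vertex's score to zero.

```python
import numpy as np

# Hypothetical acyclic graph: a directed path 0 -> 1 -> 2.
# No vertex sits on a directed cycle, so none belongs to a strongly
# connected component of size greater than one.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)

x = np.ones(3)
for _ in range(3):
    x = A @ x  # one unnormalised power-iteration step

# A is nilpotent (A @ A @ A is the zero matrix), so every
# eigenvector-centrality score has collapsed to zero.
print(x)
```

Normalising at each step, as the notebook does, cannot rescue this: the iterate becomes exactly zero, which is precisely the failure that motivates adding a constant term in Katz centrality.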
Step6: Katz Centrality
Step7: Calculate Katz centrality with the chosen $\alpha$ using the iterative formulation
Step8: This is how Katz centrality would behave on the directed acyclic graph for which Eigenvector centrality failed
Step9: In Katz centrality (similar to eigenvector centrality), if a vertex is very important, it makes the vertices that it points to important. This could be undesirable in the case where very important vertices also point to many other vertices, as is the case with Google, which points to many websites, none of which are necessarily particularly important.
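A tiny sketch of the PageRank remedy for this (the four-node graph and the damping value 0.85 below are illustrative assumptions, not taken from the notebook): each vertex divides its score among its out-links instead of granting it whole to every neighbour, so a hub that points everywhere confers only a diluted endorsement.

```python
import numpy as np

# Hypothetical graph: node 0 is a hub pointing to everyone else;
# nodes 1 and 2 chain towards 3; node 3 points back to 0.
A = np.array([[0, 1, 1, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
out_deg = np.maximum(A.sum(axis=1), 1)  # guard against dangling nodes

alpha = 0.85
x = np.ones(4)
for _ in range(200):
    # score arriving at j = sum over edges i -> j of x_i / outdeg(i)
    x = alpha * (A.T @ (x / out_deg)) + (1 - alpha) * np.ones(4)

print(x)
```

Because node 0's endorsement is worth only a third to each of its three targets, node 3 (backed by both 0 and 2) ends up ranked above node 1, even though the hub points to both.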
Step10: Hubs and Authorities
Step11: Distance-based centrality metrics
Step12: Closeness Centrality
Step13: You might notice (by looking at the histogram) that the actual range of values for the closeness centrality score is quite low, and the difference between the most central vertices is quite small. If you remember a previous discussion on small-diameter social graphs, this is because the average distance between any two nodes on a random graph tends to be both relatively constant and quite low.
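The "small average distance" claim is easy to check numerically. The sketch below (the parameters n = 200, p = 0.05 and the BFS helper are my own illustrative choices) measures hop distances from one vertex of an Erdos-Renyi random graph; the mean stays near log(n)/log(np), which is why closeness scores bunch so tightly together.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)
n, p = 200, 0.05
A = rng.random((n, n)) < p
A = np.triu(A, 1)
A = A | A.T  # symmetric undirected adjacency matrix, no self-loops

def bfs_distances(src):
    """Hop distances from src; -1 marks unreachable vertices."""
    dist = np.full(n, -1)
    dist[src] = 0
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in np.flatnonzero(A[u]):
            if dist[v] < 0:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

d = bfs_distances(0)
reachable = d[d > 0]
print(reachable.mean())  # typically around 2-3 for these parameters
```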
Step14: An alternative to the shortest distance formulation
|
11,889
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy as sp
import scipy.stats
import matplotlib.pyplot as plt
import seaborn as sns
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
fig = plt.figure()
ax = Axes3D(fig)
x = [1,0,0]
y = [0,1,0]
z = [0,0,1]
verts = [zip(x, y,z)]
ax.add_collection3d(Poly3DCollection(verts, edgecolor="k", lw=5, alpha=0.4))
ax.text(1, 0, 0, "(1,0,0)", position=(0.7,0.1))
ax.text(0, 1, 0, "(0,1,0)", position=(0,1.04))
ax.text(0, 0, 1, "(0,0,1)", position=(-0.2,0))
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
ax.set_xticks([0, 1])
ax.set_yticks([0, 1])
ax.set_zticks([0, 1])
ax.view_init(20, -20)
plt.show()
def plot_triangle(X, kind):
n1 = np.array([1, 0, 0])
n2 = np.array([0, 1, 0])
n3 = np.array([0, 0, 1])
n12 = (n1 + n2)/2
m1 = np.array([1, -1, 0])
m2 = n3 - n12
m1 = m1/np.linalg.norm(m1)
m2 = m2/np.linalg.norm(m2)
X1 = (X-n12).dot(m1)
X2 = (X-n12).dot(m2)
g = sns.jointplot(X1, X2, kind=kind, xlim=(-0.8,0.8), ylim=(-0.45,0.9))
g.ax_joint.axis("equal")
plt.show()
X1 = np.random.rand(1000, 3)
X1 = X1/X1.sum(axis=1)[:, np.newaxis]
plot_triangle(X1, kind="scatter")
plot_triangle(X1, kind="hex")
X2 = sp.stats.dirichlet((1,1,1)).rvs(1000)
plot_triangle(X2, kind="scatter")
plot_triangle(X2, kind="hex")
def project(x):
n1 = np.array([1, 0, 0])
n2 = np.array([0, 1, 0])
n3 = np.array([0, 0, 1])
n12 = (n1 + n2)/2
m1 = np.array([1, -1, 0])
m2 = n3 - n12
m1 = m1/np.linalg.norm(m1)
m2 = m2/np.linalg.norm(m2)
return np.dstack([(x-n12).dot(m1), (x-n12).dot(m2)])[0]
def project_reverse(x):
n1 = np.array([1, 0, 0])
n2 = np.array([0, 1, 0])
n3 = np.array([0, 0, 1])
n12 = (n1 + n2)/2
m1 = np.array([1, -1, 0])
m2 = n3 - n12
m1 = m1/np.linalg.norm(m1)
m2 = m2/np.linalg.norm(m2)
return x[:,0][:, np.newaxis] * m1 + x[:,1][:, np.newaxis] * m2 + n12
eps = np.finfo(float).eps * 10
X = project([[1-eps,0,0], [0,1-eps,0], [0,0,1-eps]])
import matplotlib.tri as mtri
triang = mtri.Triangulation(X[:,0], X[:,1], [[0, 1, 2]])
refiner = mtri.UniformTriRefiner(triang)
triang2 = refiner.refine_triangulation(subdiv=6)
XYZ = project_reverse(np.dstack([triang2.x, triang2.y, 1-triang2.x-triang2.y])[0])
pdf = sp.stats.dirichlet((1,1,1)).pdf(XYZ.T)
plt.tricontourf(triang2, pdf)
plt.axis("equal")
plt.show()
pdf = sp.stats.dirichlet((3,4,2)).pdf(XYZ.T)
plt.tricontourf(triang2, pdf)
plt.axis("equal")
plt.show()
pdf = sp.stats.dirichlet((16,24,14)).pdf(XYZ.T)
plt.tricontourf(triang2, pdf)
plt.axis("equal")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The following function draws the generated points so that they can be viewed on the two-dimensional triangle (the simplex).
Step2: If you approach this problem naively by generating three independent uniform random variables between 0 and 1 and normalizing them so that their sum is 1, the resulting points concentrate near the center of the triangle, as the figure below shows. In other words, the samples are not uniformly distributed over the simplex.
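A quick numerical check of this claim (the sample size and the 0.15 radius below are arbitrary illustrative choices): normalised uniforms pile up near the centroid of the simplex, while Dirichlet(1, 1, 1) samples do not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000

# Naive approach: three independent uniforms, rescaled to sum to 1.
X = rng.random((n, 3))
X = X / X.sum(axis=1, keepdims=True)

# Flat Dirichlet: genuinely uniform over the 2-simplex.
Y = rng.dirichlet((1, 1, 1), size=n)

center = np.array([1/3, 1/3, 1/3])
# Fraction of samples falling close to the centroid of the triangle.
near_naive = np.mean(np.linalg.norm(X - center, axis=1) < 0.15)
near_flat = np.mean(np.linalg.norm(Y - center, axis=1) < 0.15)
print(near_naive, near_flat)  # the naive samples cluster near the centre
```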
Step3: The Dirichlet distribution with $\alpha=(1,1,1)$, however, does generate uniformly distributed samples, as shown below.
Step4: When $\alpha$ is not $(1,1,1)$, the distribution can be made to concentrate around particular locations, as shown below. This property can be applied to Bayesian estimation problems for the parameters of a multinomial distribution.
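To illustrate that Bayesian use (the prior and the observed counts below are hypothetical; note that a concentration such as $\alpha=(16,24,14)$ can be read as exactly this kind of posterior): the Dirichlet is conjugate to the multinomial, so updating a Dir(alpha) prior with observed counts c gives a Dir(alpha + c) posterior.

```python
import numpy as np

# Hypothetical 3-category experiment (e.g. a 3-sided die).
alpha = np.array([1.0, 1.0, 1.0])   # flat Dir(1,1,1) prior
counts = np.array([16, 24, 14])     # observed category counts

# Conjugacy: the posterior over the category probabilities is Dirichlet too.
posterior = alpha + counts          # Dir(17, 25, 15)

# Posterior mean point estimate of the multinomial parameters.
p_hat = posterior / posterior.sum()
print(p_hat)
```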
|
11,890
|
<ASSISTANT_TASK:>
Python Code:
# Setup feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.computer_vision.ex2 import *
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
plt.rc('figure', autolayout=True)
plt.rc('axes', labelweight='bold', labelsize='large',
titleweight='bold', titlesize=18, titlepad=10)
plt.rc('image', cmap='magma')
tf.config.run_functions_eagerly(True)
image_path = '../input/computer-vision-resources/car_illus.jpg'
image = tf.io.read_file(image_path)
image = tf.io.decode_jpeg(image, channels=1)
image = tf.image.resize(image, size=[400, 400])
img = tf.squeeze(image).numpy()
plt.figure(figsize=(6, 6))
plt.imshow(img, cmap='gray')
plt.axis('off')
plt.show();
import learntools.computer_vision.visiontools as visiontools
from learntools.computer_vision.visiontools import edge, bottom_sobel, emboss, sharpen
kernels = [edge, bottom_sobel, emboss, sharpen]
names = ["Edge Detect", "Bottom Sobel", "Emboss", "Sharpen"]
plt.figure(figsize=(12, 12))
for i, (kernel, name) in enumerate(zip(kernels, names)):
plt.subplot(1, 4, i+1)
visiontools.show_kernel(kernel)
plt.title(name)
plt.tight_layout()
# YOUR CODE HERE: Define a kernel with 3 rows and 3 columns.
kernel = tf.constant([
#____,
])
# Uncomment to view kernel
# visiontools.show_kernel(kernel)
# Check your answer
q_1.check()
#%%RM_IF(PROD)%%
kernel = np.array([
[-2, -1, 0],
[-1, 1, 1],
[0, 1, 2],
])
q_1.assert_check_failed()
#%%RM_IF(PROD)%%
kernel = tf.constant([
'abc'
])
q_1.assert_check_failed()
#%%RM_IF(PROD)%%
kernel = tf.constant([0, 1, 2])
q_1.assert_check_failed()
#%%RM_IF(PROD)%%
kernel = tf.constant([
[-2, -1, 0],
[-1, 1, 1],
[0, 1, 2],
])
visiontools.show_kernel(kernel)
q_1.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_1.hint()
#_COMMENT_IF(PROD)_
q_1.solution()
# Reformat for batch compatibility.
image = tf.image.convert_image_dtype(image, dtype=tf.float32)
image = tf.expand_dims(image, axis=0)
kernel = tf.reshape(kernel, [*kernel.shape, 1, 1])
kernel = tf.cast(kernel, dtype=tf.float32)
# YOUR CODE HERE: Give the TensorFlow convolution function (without arguments)
conv_fn = ____
# Check your answer
q_2.check()
#%%RM_IF(PROD)%%
conv_fn = 'abc'
q_2.assert_check_failed()
#%%RM_IF(PROD)%%
conv_fn = tf.nn.conv2d(
input=image,
filters=kernel,
strides=1, # or (1, 1)
padding='SAME',
)
q_2.assert_check_failed()
#%%RM_IF(PROD)%%
conv_fn = tf.nn.conv2d
q_2.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_2.hint()
#_COMMENT_IF(PROD)_
q_2.solution()
image_filter = conv_fn(
input=image,
filters=kernel,
strides=1, # or (1, 1)
padding='SAME',
)
plt.imshow(
# Reformat for plotting
tf.squeeze(image_filter)
)
plt.axis('off')
plt.show();
# YOUR CODE HERE: Give the TensorFlow ReLU function (without arguments)
relu_fn = ____
# Check your answer
q_3.check()
#%%RM_IF(PROD)%%
relu_fn = 'abc'
q_3.assert_check_failed()
#%%RM_IF(PROD)%%
relu_fn = tf.nn.relu(image_filter)
q_3.assert_check_failed()
#%%RM_IF(PROD)%%
relu_fn = tf.nn.relu
q_3.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_3.hint()
#_COMMENT_IF(PROD)_
q_3.solution()
image_detect = relu_fn(image_filter)
plt.imshow(
# Reformat for plotting
tf.squeeze(image_detect)
)
plt.axis('off')
plt.show();
# Sympy is a python library for symbolic mathematics. It has a nice
# pretty printer for matrices, which is all we'll use it for.
import sympy
sympy.init_printing()
from IPython.display import display
image = np.array([
[0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0],
[0, 1, 0, 1, 1, 1],
[0, 1, 0, 0, 0, 0],
])
kernel = np.array([
[1, -1],
[1, -1],
])
display(sympy.Matrix(image))
display(sympy.Matrix(kernel))
# Reformat for Tensorflow
image = tf.cast(image, dtype=tf.float32)
image = tf.reshape(image, [1, *image.shape, 1])
kernel = tf.reshape(kernel, [*kernel.shape, 1, 1])
kernel = tf.cast(kernel, dtype=tf.float32)
# View the solution (Run this code cell to receive credit!)
q_4.check()
image_filter = tf.nn.conv2d(
input=image,
filters=kernel,
strides=1,
padding='VALID',
)
image_detect = tf.nn.relu(image_filter)
# The first matrix is the image after convolution, and the second is
# the image after ReLU.
display(sympy.Matrix(tf.squeeze(image_filter).numpy()))
display(sympy.Matrix(tf.squeeze(image_detect).numpy()))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Apply Transformations
Step2: You can run this cell to see some standard kernels used in image processing.
Step3: 1) Define Kernel
Step4: Now we'll do the first step of feature extraction, the filtering step. First run this cell to do some reformatting for TensorFlow.
Step5: 2) Apply Convolution
Step6: Once you've got the correct answer, run this next cell to execute the convolution and see the result!
Step7: Can you see how the kernel you chose relates to the feature map it produced?
Step8: Once you've got the solution, run this cell to detect the feature with ReLU and see the result!
Step9: In the tutorial, our discussion of kernels and feature maps was mainly visual. We saw the effect of Conv2D and ReLU by observing how they transformed some example images.
Step10: 4) Observe Convolution on a Numerical Matrix
Step11: Now let's try it out. Run the next cell to apply convolution and ReLU to the image and display the result.
|
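For intuition about what `tf.nn.conv2d` with `padding='VALID'` followed by `tf.nn.relu` computes on the numerical matrix in the entry above, here is a minimal pure-Python sketch (a toy 3x4 image and made-up helper names, purely for illustration):

```python
def conv2d_valid(image, kernel):
    # Cross-correlation with VALID padding: output shrinks by (kernel size - 1).
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

def relu(matrix):
    # Detect step: clamp negatives to zero, like tf.nn.relu.
    return [[max(0, v) for v in row] for row in matrix]

image = [[0, 1, 0, 0],
         [0, 1, 0, 0],
         [0, 1, 0, 0]]
kernel = [[1, -1],
          [1, -1]]
filtered = conv2d_valid(image, kernel)   # [[-2, 2, 0], [-2, 2, 0]]
detected = relu(filtered)                # [[0, 2, 0], [0, 2, 0]]
```

Note how the vertical edge shows up as a single positive column after ReLU, just like in the exercise's feature map.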
11,891
|
<ASSISTANT_TASK:>
Python Code:
from pyturb.gas_models import ThermoProperties
tp = ThermoProperties()
print(tp.species_list[850:875])
tp.is_available('Air')
from pyturb.gas_models import PerfectIdealGas
from pyturb.gas_models import SemiperfectIdealGas
# Air as perfect gas:
perfect_air = PerfectIdealGas('Air')
# Air as semiperfect gas:
semiperfect_air = SemiperfectIdealGas('Air')
print(perfect_air.thermo_prop)
print(perfect_air.Rg)
print(perfect_air.Mg)
print(perfect_air.cp())
print(perfect_air.cp_molar())
print(perfect_air.cv())
print(perfect_air.cv_molar())
print(perfect_air.gamma())
perfect_air?
T = 288.15 #K
cp_perf = perfect_air.cp()
cp_sp = semiperfect_air.cp(T)
print('At T={0:8.2f}K, cp_perfect={1:8.2f}J/kg/K'.format(T, cp_perf))
print('At T={0:8.2f}K, cp_semipft={1:8.2f}J/kg/K'.format(T, cp_sp))
T = 1500 #K
cp_perf = perfect_air.cp()
cp_sp = semiperfect_air.cp(T)
print('At T={0:8.2f}K, cp_perfect={1:8.2f}J/kg/K'.format(T, cp_perf))
print('At T={0:8.2f}K, cp_semipft={1:8.2f}J/kg/K'.format(T, cp_sp))
import numpy as np
from matplotlib import pyplot as plt
T = np.linspace(200, 2000, 50)
cp = np.zeros_like(T)
cv = np.zeros_like(T)
gamma = np.zeros_like(T)
for ii, temperature in enumerate(T):
cp[ii] = semiperfect_air.cp(temperature)
cv[ii] = semiperfect_air.cv(temperature)
gamma[ii] = semiperfect_air.gamma(temperature)
fig, (ax1, ax2) = plt.subplots(2)
fig.suptitle('Air properties')
ax1.plot(T, cp)
ax1.plot(T, cv)
ax2.plot(T, gamma)
ax1.set(xlabel="Temperature [K]", ylabel="cp, cv [J/kg/K]")
ax2.set(xlabel="Temperature [K]", ylabel="gamma [-]")
ax1.grid()
ax2.grid()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import Perfect and Semiperfect Ideal Gas classes
Step2: To retrieve the thermodynamic properties you can print the thermo_prop from the gas
Step3: You can get the thermodynamic properties directly from the gas object. Note that all units are International System of Units (SI)
Step4: Use the docstrings for more info about the content of a PerfectIdealGas or a SemiperfectIdealGas
Step5: Compare both models
Step6: $c_p$, $c_v$ and $\gamma$ versus temperature
|
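The perfect-gas quantities printed above are tied together by two identities, cv = cp - Rg and gamma = cp / cv; a quick numeric sketch using standard dry-air constants (assumed values, not read back from pyturb):

```python
# Standard dry-air constants (illustrative assumptions, not pyturb output).
Ru = 8.31446        # universal gas constant [J/(mol*K)]
Mg = 28.9647e-3     # molar mass of dry air [kg/mol]
gamma = 1.4         # ratio of specific heats for air at room temperature

Rg = Ru / Mg                      # specific gas constant, ~287 J/(kg*K)
cp = gamma * Rg / (gamma - 1.0)   # ~1004.7 J/(kg*K)
cv = cp - Rg                      # ~717.6 J/(kg*K)
```

For the semiperfect model, cp(T) varies with temperature, but cv(T) = cp(T) - Rg and gamma(T) = cp(T) / cv(T) still hold at each temperature.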
11,892
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
np.set_printoptions(suppress=True, precision=1)
fw = np.array([200,200,50,50,50,50,200,200])
f = np.array([fw,fw,fw,fw])
print(f)
F = np.fft.fft2(f)
print(F)
frestaurado = np.fft.ifft2(F)
print(frestaurado)
Faux = np.zeros_like(F)
Faux[0,0] = F[0,0]
print(Faux)
fr0 = np.fft.ifft2(Faux)
print(fr0.real)
Faux = np.zeros_like(F)
Faux[0,1] = F[0,1]
Faux[0,-1] = F[0,-1]
print(Faux)
fr1 = np.fft.ifft2(Faux)
print(fr1.real)
Faux = np.zeros_like(F)
Faux[0,3] = F[0,3]
Faux[0,-3] = F[0,-3]
print(Faux)
fr3 = np.fft.ifft2(Faux)
print(fr3.real)
fr = fr0 + fr1 + fr3
print(fr)
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(fr3.real[0])
x = np.arange(80)
y = np.cos(2*np.pi*x*3/80)
plt.plot(y)
import matplotlib.image as mpimg
f = mpimg.imread('../data/keyb.tif')
plt.imshow(f[:50,:50],cmap='gray')
F = np.fft.fft2(f)
H,W = F.shape
import sys,os
sys.path.append('/home/lotufo')
import ia898.src as ia
Fview = np.abs(np.log(ia.ptrans(F,(H//2,W//2))+1))
plt.imshow(Fview,cmap='gray')
x = np.arange(4).reshape(4,1)
A = x.dot(x.T)
print(2**A*1.j)
plt.imshow(f,cmap='gray')
F = np.fft.fft2(f)
Fview = np.abs(np.log(ia.ptrans(F,(H//2,W//2))+1))
plt.imshow(Fview,cmap='gray')
U,V = (6,4)
H,W = F.shape
FW = np.zeros_like(F)
FW[0:U,0:V] = 1.
FW[-U:,-V:] = 1.
FW[0:U,-V:] = 1.
FW[-U:,0:V] = 1.
fw = np.fft.ifft2(F * FW)
print((fw.imag).sum()) # check that the result is real
plt.imshow(ia.ptrans(np.log(1+ np.abs(F*FW)),(H//2,W//2)),cmap='gray')
plt.imshow(fw.real,cmap='gray')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lessons learned
Step2: Rotation
Step3: Processing
Step4: Visualization
|
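The corner slicing used to build `FW` above can also be written with `np.fft.fftfreq`, which makes the low-pass cutoff explicit; a sketch assuming only NumPy (the helper name is made up):

```python
import numpy as np

def lowpass_mask(shape, U, V):
    # Keep the four low-frequency corners: integer freq indices -U..U-1 per axis.
    Hn, Wn = shape
    u = (np.fft.fftfreq(Hn) * Hn).astype(int)   # 0, 1, ..., -2, -1
    v = (np.fft.fftfreq(Wn) * Wn).astype(int)
    keep_u = (u >= -U) & (u < U)
    keep_v = (v >= -V) & (v < V)
    return (keep_u[:, None] & keep_v[None, :]).astype(float)

mask = lowpass_mask((8, 8), 2, 2)

# Same mask built by corner slicing, as in the notebook above.
ref = np.zeros((8, 8))
ref[0:2, 0:2] = ref[-2:, -2:] = ref[0:2, -2:] = ref[-2:, 0:2] = 1.0
```

Both constructions keep the same 2U x 2V block of low frequencies in the unshifted FFT layout.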
11,893
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.grid_search import GridSearchCV
from sklearn.svm import SVC
from sklearn.datasets import load_digits
from sklearn.cross_validation import train_test_split
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target)
import numpy as np
param_grid = {'C': 10. ** np.arange(-3, 3),
'gamma' : 10. ** np.arange(-5, 0)}
np.set_printoptions(suppress=True)
print(param_grid)
grid_search = GridSearchCV(SVC(), param_grid, verbose=3, cv=5)
grid_search.fit(X_train, y_train)
grid_search.predict(X_test)
grid_search.score(X_test, y_test)
grid_search.best_params_
# We extract just the scores
%matplotlib notebook
import matplotlib.pyplot as plt
scores = [x[1] for x in grid_search.grid_scores_]
scores = np.array(scores).reshape(6, 5)
plt.matshow(scores)
plt.xlabel('gamma')
plt.ylabel('C')
plt.colorbar()
plt.xticks(np.arange(5), param_grid['gamma'])
plt.yticks(np.arange(6), param_grid['C']);
# %load solutions/grid_search_k_neighbors.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define parameter grid
Step2: A GridSearchCV object behaves just like a normal classifier.
Step3: Exercises
|
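Under the hood, `GridSearchCV` is an exhaustive loop over the Cartesian product of the parameter grid, keeping the combination with the best cross-validated score; a stripped-down sketch with a toy scoring function standing in for cross-validation (illustrative only):

```python
from itertools import product

def grid_search(param_grid, score_fn):
    # Exhaustive search over the Cartesian product of parameter values.
    names = sorted(param_grid)
    best_score, best_params = float("-inf"), None
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = score_fn(**params)   # stands in for the cross-validated score
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# Toy objective peaking at C=1.0, gamma=0.01 (made up for illustration).
def toy_score(C, gamma):
    return -((C - 1.0) ** 2 + (gamma - 0.01) ** 2)

best_params, best_score = grid_search(
    {"C": [0.1, 1.0, 10.0], "gamma": [0.001, 0.01, 0.1]}, toy_score)
```

This is why the cost grows multiplicatively with each added parameter: the 6 x 5 grid above already means 30 fits per cross-validation fold.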
11,894
|
<ASSISTANT_TASK:>
Python Code:
import pixiedust
pixiedust.enableJobMonitor()
# @hidden_cell
# Enter your S3 access key (e.g. 'A....K')
s3_access_key = '...'
# Enter your S3 secret key (e.g. 'S....K')
s3_secret_key = '...'
# Enter your S3 bucket name (e.g. 'my-source-bucket')
s3_bucket = '...'
# Enter your csv file name (e.g. 'my-data/my-file.csv' if _my-file_ is located in folder _my-data_)
s3_file_name = '....csv'
# no changes are required to this cell
from ingest import Connectors
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
S3loadoptions = {
Connectors.AmazonS3.ACCESS_KEY : s3_access_key,
Connectors.AmazonS3.SECRET_KEY : s3_secret_key,
Connectors.AmazonS3.SOURCE_BUCKET : s3_bucket,
Connectors.AmazonS3.SOURCE_FILE_NAME : s3_file_name,
Connectors.AmazonS3.SOURCE_INFER_SCHEMA : '1',
Connectors.AmazonS3.SOURCE_FILE_FORMAT : 'csv'}
S3_data = sqlContext.read.format('com.ibm.spark.discover').options(**S3loadoptions).load()
display(S3_data)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Configure Amazon S3 connectivity
Step2: Load CSV data
Step3: Explore the loaded data using PixieDust
|
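The `SOURCE_INFER_SCHEMA : '1'` option asks the connector to guess column types from the CSV values; the idea can be sketched in plain Python (a toy inference rule, not the connector's actual algorithm):

```python
import csv, io

def infer_and_load(csv_text):
    # Toy schema inference: try int, then float, else keep the string.
    def cast(value):
        for typ in (int, float):
            try:
                return typ(value)
            except ValueError:
                pass
        return value
    reader = csv.DictReader(io.StringIO(csv_text))
    return [{k: cast(v) for k, v in row.items()} for row in reader]

rows = infer_and_load("id,price,label\n1,9.5,ok\n2,3.0,bad\n")
```

Real connectors typically sample many rows before committing to a type, so a single odd value can still flip a column to string.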
11,895
|
<ASSISTANT_TASK:>
Python Code:
import requests
from bs4 import BeautifulSoup
url = "http://www.theguardian.com/discussion/p/4fqc7"
r = requests.get(url)
html = r.text
soup = BeautifulSoup(html, "html.parser")
comments = soup.select(".d-comment__main")
comment_authors = soup.select(".d-comment__author")
print len (comments), " comments found in first page."
print len (comment_authors), " authors found in first page."
comments_dict = []
parsed_comments = []
parsed_authors = []
for comment, author in zip(comments, comment_authors):
c = comment.select(".d-comment__body")[0].text
a = author['title']
comments_dict.append({"text": c, "author": a})
parsed_comments.append(c)
parsed_authors.append(a)
print comments_dict[:6]
from sklearn.feature_extraction.text import TfidfVectorizer
import nltk.stem
english_stemmer = nltk.stem.SnowballStemmer('english')
class StemmedTfidfVectorizer(TfidfVectorizer):
def build_analyzer(self):
analyzer=super(StemmedTfidfVectorizer,self).build_analyzer()
return lambda doc:(english_stemmer.stem(w) for w in analyzer(doc))
stem_vectorizer = StemmedTfidfVectorizer(min_df=1, stop_words='english')
stem_analyze = stem_vectorizer.build_analyzer()
# print [tok for tok in stem_analyze ("When we have a real living wage, there will no longer need to be 'stupid tax credits'. Until then, people need a top up to support themselves, because the companies they work for, don't want to give people their dues.")]
comment_vectors = stem_vectorizer.fit_transform(parsed_comments)
print "%d features found" % (len(stem_vectorizer.get_feature_names()))
print stem_vectorizer.get_feature_names()
formatted = ["Comment #{0}\n{1}".format(i,cv) for i, cv in enumerate(comment_vectors)]
for f in formatted:
print f
from sklearn.cluster import KMeans
km = KMeans(n_clusters=4, init='k-means++',
max_iter=100, n_init=1)
km.fit(comment_vectors)
# Top terms per cluster (out of the 4 clusters)
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = stem_vectorizer.get_feature_names()
for i in range(4):
print "Cluster %d:"%(i)
for ind in order_centroids[i, :10]:
print " %s" % terms[ind]
print ""
!git add -A && git commit -m "Clusters comments in 4 clusters using kmeans. Now I intend to use agglomerative clustering on the data."
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Extract the comments
Step2: Create comment stemmer and TFIDF vectorizer
Step3: Vectorize extracted comments
Step4: These are the vectorized comments
Step5: Apply clustering algorithm to vectorized comments
|
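The `KMeans` fit above alternates two steps, assigning each vector to its nearest center and then recomputing each center as the mean of its cluster; a one-dimensional sketch of that loop (Lloyd's algorithm, toy data):

```python
def kmeans_1d(points, centers, iters=10):
    # Lloyd's algorithm in one dimension: assign, then re-average, repeat.
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

centers = kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.5], centers=[0.0, 10.0])
```

With TF-IDF vectors the distance is computed in a high-dimensional space instead of on a line, but the assign/re-average loop is the same, which is why `init='k-means++'` and `n_init` matter: different starting centers can converge to different clusterings.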
11,896
|
<ASSISTANT_TASK:>
Python Code:
k = 4
for n in range(2 * k):
print abs(n - k),
for n in range(2 * k):
print abs(n - (k - 1)),
for n in range(2 * k):
print abs(n - (k - 1)) + k,
def row_value(k, i):
i %= (2 * k) # wrap the index at the row boundary.
return abs(i - (k - 1)) + k
k = 5
for i in range(2 * k):
print row_value(k, i),
def rank_and_offset(n):
assert n >= 2 # Guard the domain.
n -= 2 # Subtract two,
# one for the initial square,
# and one because we are counting from 1 instead of 0.
k = 1
while True:
m = 8 * k # The number of places total in this rank, 4(2k).
if n < m:
return k, n % (2 * k)
n -= m # Remove this rank's worth.
k += 1
for n in range(2, 51):
print n, rank_and_offset(n)
for n in range(2, 51):
k, i = rank_and_offset(n)
print n, row_value(k, i)
def row_value(k, i):
return abs(i - (k - 1)) + k
def rank_and_offset(n):
n -= 2 # Subtract two,
# one for the initial square,
# and one because we are counting from 1 instead of 0.
k = 1
while True:
m = 8 * k # The number of places total in this rank, 4(2k).
if n < m:
return k, n % (2 * k)
n -= m # Remove this rank's worth.
k += 1
def aoc20173(n):
if n <= 1:
return 0
k, i = rank_and_offset(n)
return row_value(k, i)
aoc20173(23)
aoc20173(23000)
aoc20173(23000000000000)
from sympy import floor, lambdify, solve, symbols
from sympy import init_printing
init_printing()
k = symbols('k')
E = 2 + 8 * k * (k + 1) / 2 # For the reason for adding 2 see above.
E
def rank_of(n):
return floor(max(solve(E - n, k))) + 1
for n in (9, 10, 25, 26, 49, 50):
print n, rank_of(n)
%time rank_of(23000000000000) # Compare runtime with rank_and_offset()!
%time rank_and_offset(23000000000000)
y = symbols('y')
g, f = solve(E - y, k)
g
f
floor(f) + 1
F = lambdify(y, floor(f) + 1)
for n in (9, 10, 25, 26, 49, 50):
print n, int(F(n))
%time int(F(23000000000000)) # The clear winner.
from math import floor as mfloor, sqrt
def mrank_of(n):
return int(mfloor(sqrt(n - 1) / 2 - 0.5) + 1)
%time mrank_of(23000000000000)
def offset_of(n, k):
return (n - 2 + 4 * k * (k - 1)) % (2 * k)
offset_of(23000000000000, 2397916)
def rank_of(n):
return int(mfloor(sqrt(n - 1) / 2 - 0.5) + 1)
def offset_of(n, k):
return (n - 2 + 4 * k * (k - 1)) % (2 * k)
def row_value(k, i):
return abs(i - (k - 1)) + k
def aoc20173(n):
k = rank_of(n)
i = offset_of(n, k)
return row_value(k, i)
aoc20173(23)
aoc20173(23000)
aoc20173(23000000000000)
%time aoc20173(23000000000000000000000000) # Fast for large values.
from notebook_preamble import J, V, define
define('rank_of == -- sqrt 2 / 0.5 - floor ++')
define('offset_of == dup 2 * [dup -- 4 * * 2 + -] dip %')
define('row_value == over -- - abs +')
define('aoc2017.3 == dup rank_of [offset_of] dupdip swap row_value')
J('23 aoc2017.3')
J('23000 aoc2017.3')
V('23000000000000 aoc2017.3')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Subtract $k$ from the index and take the absolute value
Step2: Not quite. Subtract $k - 1$ from the index and take the absolute value
Step3: Great, now add $k$...
Step4: So to write a function that can give us the value of a row at a given index
Step5: (I'm leaving out details of how I figured this all out and just giving the relevent bits. It took a little while to zero in of the aspects of the pattern that were important for the task.)
Step6: Putting it all together
Step7: Sympy to the Rescue
Step8: Since
Step9: We can write a function to solve for $k$ given some $n$...
Step10: First solve() for $E - n = 0$ which has two solutions (because the equation is quadratic so it has two roots) and since we only care about the larger one we use max() to select it. It will generally not be a nice integer (unless $n$ is the number of an end-corner of a rank) so we take the floor() and add 1 to get the integer rank of $n$. (Taking the ceiling() gives off-by-one errors on the rank boundaries. I don't know why. I'm basically like a monkey doing math here.) =-D
Step11: And it runs much faster (at least for large numbers)
Step12: After finding the rank you would still have to find the actual value of the rank's first corner and subtract it (plus 2) from the number and compute the offset as above and then the final output, but this overhead is partially shared by the other method, and overshadowed by the time it (the other iterative method) would take for really big inputs.
Step13: The equation is quadratic so there are two roots, we are interested in the greater one...
Step14: Now we can take the floor(), add 1, and lambdify() the equation to get a Python function that calculates the rank directly.
Step15: It's pretty fast.
Step16: Knowing the equation we could write our own function manually, but the speed is no better.
Step17: Given $n$ and a rank, compute the offset.
Step18: (Note the sneaky way the sign changes from $k(k + 1)$ to $k(k - 1)$. This is because we want to subract the $(k - 1)$th rank's total places (its own and those of lesser rank) from our $n$ of rank $k$. Substituting $k - 1$ for $k$ in $k(k + 1)$ gives $(k - 1)(k - 1 + 1)$, which of course simplifies to $k(k - 1)$.)
Step19: So, we can compute the rank, then the offset, then the row value.
Step20: A Joy Version
Step21: rank_of
Step22: offset_of
Step23: row_value
Step24: aoc2017.3
|
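The closed-form rank/offset/row-value pipeline can be cross-checked against a brute-force walk of the spiral; a self-contained sketch (the known values 12 -> 3, 23 -> 2, 1024 -> 31 come from the Advent of Code 2017 day 3 statement):

```python
from math import floor, sqrt

def spiral_distance_closed(n):
    # Rank (ring), offset along the ring, then Manhattan distance, as derived above.
    if n == 1:
        return 0
    k = int(floor(sqrt(n - 1) / 2 - 0.5) + 1)
    i = (n - 2 + 4 * k * (k - 1)) % (2 * k)
    return abs(i - (k - 1)) + k

def spiral_distance_bruteforce(n):
    # Walk the spiral one square at a time; leg length grows every two turns.
    x = y = 0
    dx, dy = 1, 0
    step, taken, legs = 1, 0, 0
    for _ in range(n - 1):
        x, y = x + dx, y + dy
        taken += 1
        if taken == step:
            taken = 0
            dx, dy = -dy, dx   # turn left (counter-clockwise)
            legs += 1
            if legs == 2:
                legs = 0
                step += 1
    return abs(x) + abs(y)
```

Agreement between the two over a range of small n is good evidence the floor/offset bookkeeping has no off-by-one errors.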
11,897
|
<ASSISTANT_TASK:>
Python Code:
# connect to PostgreSQL using psycopg2
# !pip install psycopg2-binary
import psycopg2
# Connect to an existing database and create the test table
with psycopg2.connect("dbname=yugabyte user=yugabyte host=localhost port=5433") as yb_conn:
cur = yb_conn.cursor()
# use this drop statement if you need to recreate the table
# cur.execute("DROP TABLE data")
cur.execute("CREATE TABLE data as select random()*100 random_value from generate_series(1, 100);")
table_name = "data" # table or temporary view containing the data
value_col = "random_value" # column name on which to compute the histogram
min = -20 # min: minimum value in the histogram
max = 90 # maximum value in the histogram
bins = 11 # number of histogram buckets to compute
step = (max - min) / bins
query = f"""
with hist as (
select
width_bucket({value_col}, {min}, {max}, {bins}) as bucket,
count(*) as cnt
from {table_name}
group by bucket
),
buckets as (
select generate_series as bucket from generate_series(1,{bins})
)
select
bucket, {min} + (bucket - 0.5) * {step} as value,
coalesce(cnt, 0) as count
from hist right outer join buckets using(bucket)
order by bucket
"""
import pandas as pd
# query Oracle using ora_conn and put the result into a pandas Dataframe
with psycopg2.connect("dbname=yugabyte user=yugabyte host=localhost port=5433") as yb_conn:
hist_pandasDF = pd.read_sql(query, con=yb_conn)
# Decription
#
# bucket: the bucket number, range from 1 to bins (included)
# value: midpoint value of the given bucket
# count: number of values in the bucket
hist_pandasDF
# Optionally normalize the event count into a frequency
# dividing by the total number of events
hist_pandasDF["frequency"] = hist_pandasDF["count"] / sum(hist_pandasDF["count"])
hist_pandasDF
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})
f, ax = plt.subplots()
# histogram data
x = hist_pandasDF["value"]
y = hist_pandasDF["count"]
# bar plot
ax.bar(x, y, width = 3.0, color='red')
ax.set_xlabel("Bucket values")
ax.set_ylabel("Event count")
ax.set_title("Distribution of event counts")
# Label for the resonances spectrum peaks
txt_opts = {'horizontalalignment': 'center',
'verticalalignment': 'center',
'transform': ax.transAxes}
plt.show()
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})
f, ax = plt.subplots()
# histogram data
x = hist_pandasDF["value"]
y = hist_pandasDF["frequency"]
# bar plot
ax.bar(x, y, width = 3.0, color='blue')
ax.set_xlabel("Bucket values")
ax.set_ylabel("Event frequency")
ax.set_title("Distribution of event frequencies")
# Label for the resonances spectrum peaks
txt_opts = {'horizontalalignment': 'center',
'verticalalignment': 'center',
'transform': ax.transAxes}
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Define the query to compute the histogram
Step3: Fetch the histogram data into a pandas dataframe
Step4: Histogram plotting
|
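The SQL above leans on PostgreSQL's `width_bucket`, whose semantics (equal-width bins over the range, bucket 0 below it, bins + 1 at or above the top) can be mimicked in Python as a sanity check (a sketch of the documented behavior, not the server's implementation):

```python
def width_bucket(value, low, high, nbins):
    # PostgreSQL semantics: equal-width bins over [low, high),
    # bucket 0 below the range, nbins + 1 at or above `high`.
    if value < low:
        return 0
    if value >= high:
        return nbins + 1
    return int((value - low) / (high - low) * nbins) + 1

def bucket_midpoint(bucket, low, high, nbins):
    # Midpoint value reported by the query above.
    step = (high - low) / nbins
    return low + (bucket - 0.5) * step
```

The right outer join against `generate_series` in the SQL exists precisely because `width_bucket` never emits a row for an empty bucket.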
11,898
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('generator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('discriminator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
# Size of input image to discriminator
input_size = 784
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Smoothing
smooth = 0.1
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Build the model
g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)
# g_model is the generator output
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, n_units=d_hidden_size, alpha=alpha)
# Calculate losses
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.zeros_like(d_logits_real)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake)))
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
batch_size = 100
epochs = 100
samples = []
losses = []
# Only save generator variables
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
np.array(samples[-1]).shape
_ = view_samples(-1, samples)
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
_ = view_samples(0, [gen_samples])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model Inputs
Step2: Generator network
Step3: Discriminator
Step4: Hyperparameters
Step5: Build network
Step6: Discriminator and Generator Losses
Step7: Optimizers
Step8: Training
Step9: Training loss
Step10: Generator samples from training
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise like 1s and 9s.
|
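Two small tricks in the GAN above are easy to isolate: the leaky ReLU written as `tf.maximum(alpha * h1, h1)`, and the one-sided label smoothing `1 - smooth` applied only to the real labels; plain-Python sketches of both:

```python
def leaky_relu(x, alpha=0.01):
    # Same idea as tf.maximum(alpha * h1, h1): small slope for negative inputs,
    # so the discriminator's gradient never goes completely dead.
    return max(alpha * x, x)

def smoothed_real_labels(batch_size, smooth=0.1):
    # One-sided label smoothing: real targets become 1 - smooth instead of 1,
    # which stops the discriminator from becoming overconfident.
    return [1.0 - smooth] * batch_size
```

The fake labels stay at exactly 0, which is why the smoothing is called one-sided.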
11,899
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Eric Larson <larson.eric.d@gmail.com>
# License: BSD (3-clause)
import os.path as op
import numpy as np
from numpy.random import randn
from scipy import stats as stats
import mne
from mne import (io, spatial_tris_connectivity, compute_morph_matrix,
grade_to_tris)
from mne.epochs import equalize_epoch_counts
from mne.stats import (spatio_temporal_cluster_1samp_test,
summarize_clusters_stc)
from mne.minimum_norm import apply_inverse, read_inverse_operator
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
subjects_dir = data_path + '/subjects'
tmin = -0.2
tmax = 0.3 # Use a lower tmax to reduce multiple comparisons
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
raw.info['bads'] += ['MEG 2443']
picks = mne.pick_types(raw.info, meg=True, eog=True, exclude='bads')
event_id = 1 # L auditory
reject = dict(grad=1000e-13, mag=4000e-15, eog=150e-6)
epochs1 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
event_id = 3 # L visual
epochs2 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
# Equalize trial counts to eliminate bias (which would otherwise be
# introduced by the abs() performed below)
equalize_epoch_counts([epochs1, epochs2])
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
inverse_operator = read_inverse_operator(fname_inv)
sample_vertices = [s['vertno'] for s in inverse_operator['src']]
# Let's average and compute inverse, resampling to speed things up
evoked1 = epochs1.average()
evoked1.resample(50, npad='auto')
condition1 = apply_inverse(evoked1, inverse_operator, lambda2, method)
evoked2 = epochs2.average()
evoked2.resample(50, npad='auto')
condition2 = apply_inverse(evoked2, inverse_operator, lambda2, method)
# Let's only deal with t > 0, cropping to reduce multiple comparisons
condition1.crop(0, None)
condition2.crop(0, None)
tmin = condition1.tmin
tstep = condition1.tstep
n_vertices_sample, n_times = condition1.data.shape
n_subjects = 7
print('Simulating data for %d subjects.' % n_subjects)
# Let's make sure our results replicate, so set the seed.
np.random.seed(0)
X = randn(n_vertices_sample, n_times, n_subjects, 2) * 10
X[:, :, :, 0] += condition1.data[:, :, np.newaxis]
X[:, :, :, 1] += condition2.data[:, :, np.newaxis]
fsave_vertices = [np.arange(10242), np.arange(10242)]
morph_mat = compute_morph_matrix('sample', 'fsaverage', sample_vertices,
fsave_vertices, 20, subjects_dir)
n_vertices_fsave = morph_mat.shape[0]
# We have to change the shape for the dot() to work properly
X = X.reshape(n_vertices_sample, n_times * n_subjects * 2)
print('Morphing data.')
X = morph_mat.dot(X) # morph_mat is a sparse matrix
X = X.reshape(n_vertices_fsave, n_times, n_subjects, 2)
X = np.abs(X) # only magnitude
X = X[:, :, :, 0] - X[:, :, :, 1] # make paired contrast
print('Computing connectivity.')
connectivity = spatial_tris_connectivity(grade_to_tris(5))
# Note that X needs to be a multi-dimensional array of shape
# samples (subjects) x time x space, so we permute dimensions
X = np.transpose(X, [2, 1, 0])
# Now let's actually do the clustering. This can take a long time...
# Here we set the threshold quite high to reduce computation.
p_threshold = 0.001
t_threshold = -stats.distributions.t.ppf(p_threshold / 2., n_subjects - 1)
print('Clustering.')
T_obs, clusters, cluster_p_values, H0 = clu = \
spatio_temporal_cluster_1samp_test(X, connectivity=connectivity, n_jobs=1,
threshold=t_threshold)
# Now select the clusters that are sig. at p < 0.05 (note that this value
# is multiple-comparisons corrected).
good_cluster_inds = np.where(cluster_p_values < 0.05)[0]
print('Visualizing clusters.')
# Now let's build a convenient representation of each cluster, where each
# cluster becomes a "time point" in the SourceEstimate
stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,
vertices=fsave_vertices,
subject='fsaverage')
# Let's actually plot the first "time point" in the SourceEstimate, which
# shows all the clusters, weighted by duration
subjects_dir = op.join(data_path, 'subjects')
# blue blobs are for condition A < condition B, red for A > B
brain = stc_all_cluster_vis.plot(
hemi='both', views='lateral', subjects_dir=subjects_dir,
time_label='Duration significant (ms)', size=(800, 800),
smoothing_steps=5)
# brain.save_image('clusters.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Read epochs for all channels, removing a bad one
Step3: Transform to source space
Step4: Transform to common cortical space
Step5: It's a good idea to spatially smooth the data, and for visualization
Step6: Finally, we want to compare the overall activity levels in each condition,
Step7: Compute statistic
Step8: Visualize the clusters
|
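The sign-flip logic behind `spatio_temporal_cluster_1samp_test` can be sketched as an exact permutation test on a single statistic: under the null hypothesis each subject's paired difference is equally likely to have either sign, so every sign assignment is enumerated (a toy that ignores the spatial clustering step):

```python
from itertools import product

def sign_flip_pvalue(diffs):
    # Exact one-sample permutation test: enumerate all 2**n sign flips and
    # count how often the permuted |sum| reaches the observed |sum|.
    observed = abs(sum(diffs))
    hits = total = 0
    for signs in product((1, -1), repeat=len(diffs)):
        stat = abs(sum(s * d for s, d in zip(signs, diffs)))
        hits += stat >= observed
        total += 1
    return hits / total

p_consistent = sign_flip_pvalue([1.0, 1.1, 0.9, 1.2, 1.0])  # all one sign -> small p
p_null = sign_flip_pvalue([1.0, -1.0])                      # cancels out -> p = 1
```

MNE does the same sign flipping per subject, but scores clusters of adjacent vertices/time points instead of a single sum, which is what makes the resulting p-values multiple-comparisons corrected.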