| text_prompt (stringlengths 168 to 30.3k) | code_prompt (stringlengths 67 to 124k) |
|---|---|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read in the data, it is from the UCI Wholesale Customer Dataset at
Step2: Create a feature for total customer size. Note
Step3: Add a function to convert categorical variables to dummy variables and join them to the DataFrame. If the source df is the same as the dest df, remember to delete the original categorical variable.
Step4: Process dummy variables for the 'Channel' and 'Region' features. Drop the original categorical feature, and also drop one of the dummy variables for 'leave one out' encoding.
Step5: Plot a histogram of customer size; it shows a small number of large customers and many smaller customers.
Step6: Scale the data so no category dominates due to numeric scale.
Step7: Set up a plotting function for K Means output.
Step8: Set up some markers and colors.
Step9: Try K Means with k = 3 (Mainly because when I previously studied this using R, 3 produced interesting results). The value for 'k' will be set from a distortion graph later in the notebook.
Step10: Use an inverse transform on the centroids. Needed because the centroids were calculated on the scaled data and we would like the centroids to plot correctly with the original data.
Step11: Commentary
Step12: Plot a distortion or elbow graph to help choose an optimal value for k.
Step13: Distortion plots often have elbow points that are difficult to interpret, but k = 6 looks like it might make for an interesting set of clusters and plots.
Step14: Plot K Means result with centroids.
Step15: The pattern seen in the full data set also shows up amongst the set of customers with less than 75,000 euros in total sales. There are, for example, a number of customers who buy detergents and paper from our client but not frozen goods; the inverse also appears to be true. A business domain expert could be consulted to try to determine whether this is characteristic of the needs of these customers or whether there are missed opportunities for our client's marketing department.
Step16: Print out data for clusters 1, 3 and 5.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from scipy.spatial.distance import cdist
import matplotlib.pyplot as plt
df = pd.read_csv('Wholesale customers data.csv')
df['Total'] = df['Fresh'] + df['Milk'] + df['Grocery'] + df['Frozen'] + df['Detergents_Paper'] + df['Delicassen']
print(df.head())
def get_dummies(source_df, dest_df, col):
    dummies = pd.get_dummies(source_df[col], prefix=col)
    print('Quantities for %s column' % col)
    for col in dummies:
        print('%s: %d' % (col, np.sum(dummies[col])))
    print()
    dest_df = dest_df.join(dummies)
    return dest_df
df = get_dummies(df, df, 'Channel')
df.drop(['Channel', 'Channel_2'], axis=1, inplace=True)
df = get_dummies(df, df, 'Region')
df.drop(['Region', 'Region_3'], axis=1, inplace=True)
df.rename(index=str, columns={'Channel_1': 'Channel_Horeca', 'Region_1': 'Region_Lisbon', 'Region_2': 'Region_Oporto'},
inplace=True)
print(df.head())
plt.hist(df['Total'], bins=32)
plt.xlabel('Total Purchases')
plt.ylabel('Number of Customers')
plt.title('Histogram of Customer Size')
plt.show()
plt.close()
sc = StandardScaler()
sc.fit(df)
X = sc.transform(df)
def plot_kmeans(pred, centroids, x_name, y_name, x_idx, y_idx, k):
for i in range(0, k):
plt.scatter(df[x_name].loc[pred == i], df[y_name].loc[pred == i], s=6,
c=colors[i], marker=markers[i], label='Cluster %d' % (i + 1))
centroids = sc.inverse_transform(kmeans.cluster_centers_)
plt.scatter(centroids[:, x_idx], centroids[:, y_idx],
marker='x', s=180, linewidths=3,
color='k', zorder=10)
plt.xlabel(x_name)
plt.ylabel(y_name)
plt.legend()
plt.show()
plt.close()
markers = ('s', 'o', 'v', '*', 'D', '+', 'p', '<', '>', 'x')
colors = ('C0', 'C1', 'C2', 'C3', 'C4', 'C5', 'C6', 'C7', 'C8', 'C9')
k=3
kmeans = KMeans(n_clusters=k)
kmeans.fit(X)
pred = kmeans.predict(X)
for i in range(0, k):
    x = len(pred[pred == i])
    print('Cluster %d has %d members' % ((i + 1), x))
centroids = sc.inverse_transform(kmeans.cluster_centers_)
plot_kmeans(pred, centroids, 'Frozen', 'Detergents_Paper', 3, 4, k)
df = df.loc[df['Total'] <= 75000]
sc = StandardScaler()
sc.fit(df)
X = sc.transform(df)
K = range(1, 20)
mean_distortions = []
for k in K:
np.random.seed(555)
kmeans = KMeans(n_clusters=k, init='k-means++')
kmeans.fit(X)
mean_distortions.append(sum(np.min(cdist(X, kmeans.cluster_centers_, 'euclidean'), axis = 1))/ X.shape[0])
plt.plot(K, mean_distortions, 'bx-')
plt.xlabel('k')
plt.ylabel('Average distortion')
plt.title('Selecting K w/ Elbow Method')
plt.show()
plt.close()
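# Editor's sketch: scikit-learn's KMeans also exposes the within-cluster sum of
# squared distances as the `inertia_` attribute, which gives an alternative elbow
# curve without the explicit cdist computation. Shown for reference only; the
# rest of the notebook keeps the cdist-based distortion above.
inertias = [KMeans(n_clusters=k, random_state=555).fit(X).inertia_ for k in K]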
np.random.seed(555) # sets seed, makes it repeatable
k = 6
kmeans = KMeans(n_clusters=k) # , init='random')
kmeans.fit(X)
pred = kmeans.predict(X)
for i in range(0, k):
    x = len(pred[pred == i])
    print('Cluster %d has %d members' % ((i + 1), x))
centroids = sc.inverse_transform(kmeans.cluster_centers_)
plot_kmeans(pred, centroids, 'Frozen', 'Detergents_Paper', 3, 4, k)
def print_cluster_data(cluster_number):
    print('\nData for cluster %d' % cluster_number)
    cluster = df.loc[pred == cluster_number - 1, :]
    # print cluster1.head()
    num_in_cluster = float(len(cluster.index))
    num_horeca = float(np.sum(cluster['Channel_Horeca']))
    num_retail = float(num_in_cluster - num_horeca)
    print('Percent Horeca: %.2f, Percent Retail: %.2f' %
          (num_horeca / num_in_cluster * 100.0, num_retail / num_in_cluster * 100.0))
    num_lisbon = float(np.sum(cluster['Region_Lisbon']))
    num_oporto = float(np.sum(cluster['Region_Oporto']))
    num_other = num_in_cluster - num_lisbon - num_oporto
    print('Percent Lisbon: %.2f, Percent Oporto: %.2f, Percent Other: %.2f' %
          (num_lisbon / num_in_cluster * 100.0, num_oporto / num_in_cluster * 100.0, num_other / num_in_cluster * 100.0))
    avg_cust_size = np.sum(cluster['Total']) / num_in_cluster
    print('Average Customer Size is: %.2f for %d Customers' % (avg_cust_size, num_in_cluster))
print_cluster_data(cluster_number=1)
print_cluster_data(cluster_number=3)
print_cluster_data(cluster_number=5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A Tour of TensorFlow Probability
Step2: Overview
Step3: Hardware acceleration
Step4: Automatic differentiation
Step5: TensorFlow Probability
Step6: Simple scalar-variate Distributions
Step7: Distributions and shapes
Step8: Vector-variate Distributions
Step9: Matrix-variate Distributions
Step10: Gaussian processes
Step11: Gaussian process regression
Step12: Bijectors
Step13: Simple Bijectors
Step14: Bijectors that transform Distributions
Step15: Batching Bijectors
Step16: Caching
Step17: Markov chain Monte Carlo (MCMC)
Step18: Define the joint log probability function.
Step19: Build an HMC TransitionKernel and call sample_chain.
Step20: Not great. We would like the acceptance rate to be close to .65.
Step21: Diagnostics
Step22: Sampling noise_scale
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title Import { display-mode: "form" }
from pprint import pprint
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp
sns.reset_defaults()
sns.set_context(context='talk',font_scale=0.7)
plt.rcParams['image.cmap'] = 'viridis'
%matplotlib inline
tfd = tfp.distributions
tfb = tfp.bijectors
#@title Utils { display-mode: "form" }
def print_subclasses_from_module(module, base_class, maxwidth=80):
import functools, inspect, sys
subclasses = [name for name, obj in inspect.getmembers(module)
if inspect.isclass(obj) and issubclass(obj, base_class)]
def red(acc, x):
if not acc or len(acc[-1]) + len(x) + 2 > maxwidth:
acc.append(x)
else:
acc[-1] += ", " + x
return acc
print('\n'.join(functools.reduce(red, subclasses, [])))
mats = tf.random.uniform(shape=[1000, 10, 10])
vecs = tf.random.uniform(shape=[1000, 10, 1])
def for_loop_solve():
return np.array(
[tf.linalg.solve(mats[i, ...], vecs[i, ...]) for i in range(1000)])
def vectorized_solve():
return tf.linalg.solve(mats, vecs)
# Vectorization for the win!
%timeit for_loop_solve()
%timeit vectorized_solve()
# Code can run seamlessly on a GPU, just change Colab runtime type
# in the 'Runtime' menu.
if tf.test.gpu_device_name() == '/device:GPU:0':
print("Using a GPU")
else:
print("Using a CPU")
a = tf.constant(np.pi)
b = tf.constant(np.e)
with tf.GradientTape() as tape:
tape.watch([a, b])
c = .5 * (a**2 + b**2)
grads = tape.gradient(c, [a, b])
print(grads[0])
print(grads[1])
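# Editor's note: since c = .5 * (a**2 + b**2), the gradients are simply a and b,
# so the two printed tensors should hold pi (~3.1415927) and e (~2.7182817).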
print_subclasses_from_module(tfp.distributions, tfp.distributions.Distribution)
# A standard normal
normal = tfd.Normal(loc=0., scale=1.)
print(normal)
# Plot 1000 samples from a standard normal
samples = normal.sample(1000)
sns.distplot(samples)
plt.title("Samples from a standard Normal")
plt.show()
# Compute the log_prob of a point in the event space of `normal`
normal.log_prob(0.)
# Compute the log_prob of a few points
normal.log_prob([-1., 0., 1.])
# Create a batch of 3 normals, and plot 1000 samples from each
normals = tfd.Normal([-2.5, 0., 2.5], 1.)  # The scale parameter broadcasts!
print("Batch shape:", normals.batch_shape)
print("Event shape:", normals.event_shape)
# Samples' shapes go on the left!
samples = normals.sample(1000)
print("Shape of samples:", samples.shape)
# Sample shapes can themselves be more complicated
print("Shape of samples:", normals.sample([10, 10, 10]).shape)
# A batch of normals gives a batch of log_probs.
print(normals.log_prob([-2.5, 0., 2.5]))
# The computation broadcasts, so a batch of normals applied to a scalar
# also gives a batch of log_probs.
print(normals.log_prob(0.))
# Normal numpy-like broadcasting rules apply!
xs = np.linspace(-6, 6, 200)
try:
normals.log_prob(xs)
except Exception as e:
print("TFP error:", e.message)
# That fails for the same reason this does:
try:
np.zeros(200) + np.zeros(3)
except Exception as e:
print("Numpy error:", e)
# But this would work:
a = np.zeros([200, 1]) + np.zeros(3)
print("Broadcast shape:", a.shape)
# And so will this!
xs = np.linspace(-6, 6, 200)[..., np.newaxis]
# => shape = [200, 1]
lps = normals.log_prob(xs)
print("Broadcast log_prob shape:", lps.shape)
# Summarizing visually
for i in range(3):
sns.distplot(samples[:, i], kde=False, norm_hist=True)
plt.plot(np.tile(xs, 3), normals.prob(xs), c='k', alpha=.5)
plt.title("Samples from 3 Normals, and their PDF's")
plt.show()
mvn = tfd.MultivariateNormalDiag(loc=[0., 0.], scale_diag = [1., 1.])
print("Batch shape:", mvn.batch_shape)
print("Event shape:", mvn.event_shape)
samples = mvn.sample(1000)
print("Samples shape:", samples.shape)
g = sns.jointplot(samples[:, 0], samples[:, 1], kind='scatter')
plt.show()
lkj = tfd.LKJ(dimension=10, concentration=[1.5, 3.0])
print("Batch shape: ", lkj.batch_shape)
print("Event shape: ", lkj.event_shape)
samples = lkj.sample()
print("Samples shape: ", samples.shape)
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(6, 3))
sns.heatmap(samples[0, ...], ax=axes[0], cbar=False)
sns.heatmap(samples[1, ...], ax=axes[1], cbar=False)
fig.tight_layout()
plt.show()
kernel = tfp.math.psd_kernels.ExponentiatedQuadratic()
xs = np.linspace(-5., 5., 200).reshape([-1, 1])
gp = tfd.GaussianProcess(kernel, index_points=xs)
print("Batch shape:", gp.batch_shape)
print("Event shape:", gp.event_shape)
upper, lower = gp.mean() + [2 * gp.stddev(), -2 * gp.stddev()]
plt.plot(xs, gp.mean())
plt.fill_between(xs[..., 0], upper, lower, color='k', alpha=.1)
for _ in range(5):
plt.plot(xs, gp.sample(), c='r', alpha=.3)
plt.title(r"GP prior mean, $2\sigma$ intervals, and samples")
plt.show()
# *** Bonus question ***
# Why do so many of these functions lie outside the 95% intervals?
# Suppose we have some observed data
obs_x = [[-3.], [0.], [2.]] # Shape 3x1 (3 1-D vectors)
obs_y = [3., -2., 2.] # Shape 3 (3 scalars)
gprm = tfd.GaussianProcessRegressionModel(kernel, xs, obs_x, obs_y)
upper, lower = gprm.mean() + [2 * gprm.stddev(), -2 * gprm.stddev()]
plt.plot(xs, gprm.mean())
plt.fill_between(xs[..., 0], upper, lower, color='k', alpha=.1)
for _ in range(5):
plt.plot(xs, gprm.sample(), c='r', alpha=.3)
plt.scatter(obs_x, obs_y, c='k', zorder=3)
plt.title(r"GP posterior mean, $2\sigma$ intervals, and samples")
plt.show()
print_subclasses_from_module(tfp.bijectors, tfp.bijectors.Bijector)
normal_cdf = tfp.bijectors.NormalCDF()
xs = np.linspace(-4., 4., 200)
plt.plot(xs, normal_cdf.forward(xs))
plt.show()
plt.plot(xs, normal_cdf.forward_log_det_jacobian(xs, event_ndims=0))
plt.show()
exp_bijector = tfp.bijectors.Exp()
log_normal = exp_bijector(tfd.Normal(0., .5))
samples = log_normal.sample(1000)
xs = np.linspace(1e-10, np.max(samples), 200)
sns.distplot(samples, norm_hist=True, kde=False)
plt.plot(xs, log_normal.prob(xs), c='k', alpha=.75)
plt.show()
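# Editor's note (sketch): calling a bijector on a distribution, as done above, is
# shorthand for building a TransformedDistribution explicitly. The line below is an
# equivalent construction and is not used further in this notebook.
log_normal_explicit = tfd.TransformedDistribution(tfd.Normal(0., .5), tfb.Exp())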
# Create a batch of bijectors of shape [3,]
softplus = tfp.bijectors.Softplus(
hinge_softness=[1., .5, .1])
print("Hinge softness shape:", softplus.hinge_softness.shape)
# For broadcasting, we want this to be shape [200, 1]
xs = np.linspace(-4., 4., 200)[..., np.newaxis]
ys = softplus.forward(xs)
print("Forward shape:", ys.shape)
# Visualization
lines = plt.plot(np.tile(xs, 3), ys)
for line, hs in zip(lines, softplus.hinge_softness):
line.set_label("Softness: %1.1f" % hs)
plt.legend()
plt.show()
# This bijector represents a matrix outer product on the forward pass,
# and a cholesky decomposition on the inverse pass. The latter costs O(N^3)!
bij = tfb.CholeskyOuterProduct()
size = 2500
# Make a big, lower-triangular matrix
big_lower_triangular = tf.eye(size)
# Squaring it gives us a positive-definite matrix
big_positive_definite = bij.forward(big_lower_triangular)
# Caching for the win!
%timeit bij.inverse(big_positive_definite)
%timeit tf.linalg.cholesky(big_positive_definite)
# Generate some data
def f(x, w):
# Pad x with 1's so we can add bias via matmul
x = tf.pad(x, [[1, 0], [0, 0]], constant_values=1)
linop = tf.linalg.LinearOperatorFullMatrix(w[..., np.newaxis])
result = linop.matmul(x, adjoint=True)
return result[..., 0, :]
num_features = 2
num_examples = 50
noise_scale = .5
true_w = np.array([-1., 2., 3.])
xs = np.random.uniform(-1., 1., [num_features, num_examples])
ys = f(xs, true_w) + np.random.normal(0., noise_scale, size=num_examples)
# Visualize the data set
plt.scatter(*xs, c=ys, s=100, linewidths=0)
grid = np.meshgrid(*([np.linspace(-1, 1, 100)] * 2))
xs_grid = np.stack(grid, axis=0)
fs_grid = f(xs_grid.reshape([num_features, -1]), true_w)
fs_grid = np.reshape(fs_grid, [100, 100])
plt.colorbar()
plt.contour(xs_grid[0, ...], xs_grid[1, ...], fs_grid, 20, linewidths=1)
plt.show()
# Define the joint_log_prob function, and our unnormalized posterior.
def joint_log_prob(w, x, y):
# Our model in maths is
# w ~ MVN([0, 0, 0], diag([1, 1, 1]))
# y_i ~ Normal(w @ x_i, noise_scale), i=1..N
rv_w = tfd.MultivariateNormalDiag(
loc=np.zeros(num_features + 1),
scale_diag=np.ones(num_features + 1))
rv_y = tfd.Normal(f(x, w), noise_scale)
return (rv_w.log_prob(w) +
tf.reduce_sum(rv_y.log_prob(y), axis=-1))
# Create our unnormalized target density by currying x and y from the joint.
def unnormalized_posterior(w):
return joint_log_prob(w, xs, ys)
# Create an HMC TransitionKernel
hmc_kernel = tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=unnormalized_posterior,
step_size=np.float64(.1),
num_leapfrog_steps=2)
# We wrap sample_chain in tf.function, telling TF to precompile a reusable
# computation graph, which will dramatically improve performance.
@tf.function
def run_chain(initial_state, num_results=1000, num_burnin_steps=500):
return tfp.mcmc.sample_chain(
num_results=num_results,
num_burnin_steps=num_burnin_steps,
current_state=initial_state,
kernel=hmc_kernel,
trace_fn=lambda current_state, kernel_results: kernel_results)
initial_state = np.zeros(num_features + 1)
samples, kernel_results = run_chain(initial_state)
print("Acceptance rate:", kernel_results.is_accepted.numpy().mean())
# Apply a simple step size adaptation during burnin
@tf.function
def run_chain(initial_state, num_results=1000, num_burnin_steps=500):
adaptive_kernel = tfp.mcmc.SimpleStepSizeAdaptation(
hmc_kernel,
num_adaptation_steps=int(.8 * num_burnin_steps),
target_accept_prob=np.float64(.65))
return tfp.mcmc.sample_chain(
num_results=num_results,
num_burnin_steps=num_burnin_steps,
current_state=initial_state,
kernel=adaptive_kernel,
trace_fn=lambda cs, kr: kr)
samples, kernel_results = run_chain(
initial_state=np.zeros(num_features+1))
print("Acceptance rate:", kernel_results.inner_results.is_accepted.numpy().mean())
# Trace plots
colors = ['b', 'g', 'r']
for i in range(3):
plt.plot(samples[:, i], c=colors[i], alpha=.3)
plt.hlines(true_w[i], 0, 1000, zorder=4, color=colors[i], label="$w_{}$".format(i))
plt.legend(loc='upper right')
plt.show()
# Histogram of samples
for i in range(3):
sns.distplot(samples[:, i], color=colors[i])
ymax = plt.ylim()[1]
for i in range(3):
plt.vlines(true_w[i], 0, ymax, color=colors[i])
plt.ylim(0, ymax)
plt.show()
# Instead of a single set of initial w's, we create a batch of 8.
num_chains = 8
initial_state = np.zeros([num_chains, num_features + 1])
chains, kernel_results = run_chain(initial_state)
r_hat = tfp.mcmc.potential_scale_reduction(chains)
print("Acceptance rate:", kernel_results.inner_results.is_accepted.numpy().mean())
print("R-hat diagnostic (per latent variable):", r_hat.numpy())
# Define the joint_log_prob function, and our unnormalized posterior.
def joint_log_prob(w, sigma, x, y):
# Our model in maths is
# w ~ MVN([0, 0, 0], diag([1, 1, 1]))
# y_i ~ Normal(w @ x_i, noise_scale), i=1..N
rv_w = tfd.MultivariateNormalDiag(
loc=np.zeros(num_features + 1),
scale_diag=np.ones(num_features + 1))
rv_sigma = tfd.LogNormal(np.float64(1.), np.float64(5.))
rv_y = tfd.Normal(f(x, w), sigma[..., np.newaxis])
return (rv_w.log_prob(w) +
rv_sigma.log_prob(sigma) +
tf.reduce_sum(rv_y.log_prob(y), axis=-1))
# Create our unnormalized target density by currying x and y from the joint.
def unnormalized_posterior(w, sigma):
return joint_log_prob(w, sigma, xs, ys)
# Create an HMC TransitionKernel
hmc_kernel = tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=unnormalized_posterior,
step_size=np.float64(.1),
num_leapfrog_steps=4)
# Create a TransformedTransitionKernel
transformed_kernel = tfp.mcmc.TransformedTransitionKernel(
inner_kernel=hmc_kernel,
bijector=[tfb.Identity(), # w
tfb.Invert(tfb.Softplus())]) # sigma
# Apply a simple step size adaptation during burnin
@tf.function
def run_chain(initial_state, num_results=1000, num_burnin_steps=500):
adaptive_kernel = tfp.mcmc.SimpleStepSizeAdaptation(
transformed_kernel,
num_adaptation_steps=int(.8 * num_burnin_steps),
target_accept_prob=np.float64(.75))
return tfp.mcmc.sample_chain(
num_results=num_results,
num_burnin_steps=num_burnin_steps,
current_state=initial_state,
kernel=adaptive_kernel,
seed=(0, 1),
trace_fn=lambda cs, kr: kr)
# Instead of a single set of initial w's, we create a batch of 8.
num_chains = 8
initial_state = [np.zeros([num_chains, num_features + 1]),
.54 * np.ones([num_chains], dtype=np.float64)]
chains, kernel_results = run_chain(initial_state)
r_hat = tfp.mcmc.potential_scale_reduction(chains)
print("Acceptance rate:", kernel_results.inner_results.inner_results.is_accepted.numpy().mean())
print("R-hat diagnostic (per w variable):", r_hat[0].numpy())
print("R-hat diagnostic (sigma):", r_hat[1].numpy())
w_chains, sigma_chains = chains
# Trace plots of w (one of 8 chains)
colors = ['b', 'g', 'r', 'teal']
fig, axes = plt.subplots(4, num_chains, figsize=(4 * num_chains, 8))
for j in range(num_chains):
for i in range(3):
ax = axes[i][j]
ax.plot(w_chains[:, j, i], c=colors[i], alpha=.3)
ax.hlines(true_w[i], 0, 1000, zorder=4, color=colors[i], label="$w_{}$".format(i))
ax.legend(loc='upper right')
ax = axes[3][j]
ax.plot(sigma_chains[:, j], alpha=.3, c=colors[3])
ax.hlines(noise_scale, 0, 1000, zorder=4, color=colors[3], label=r"$\sigma$".format(i))
ax.legend(loc='upper right')
fig.tight_layout()
plt.show()
# Histogram of samples of w
fig, axes = plt.subplots(4, num_chains, figsize=(4 * num_chains, 8))
for j in range(num_chains):
for i in range(3):
ax = axes[i][j]
sns.distplot(w_chains[:, j, i], color=colors[i], norm_hist=True, ax=ax, hist_kws={'alpha': .3})
for i in range(3):
ax = axes[i][j]
ymax = ax.get_ylim()[1]
ax.vlines(true_w[i], 0, ymax, color=colors[i], label="$w_{}$".format(i), linewidth=3)
ax.set_ylim(0, ymax)
ax.legend(loc='upper right')
ax = axes[3][j]
sns.distplot(sigma_chains[:, j], color=colors[3], norm_hist=True, ax=ax, hist_kws={'alpha': .3})
ymax = ax.get_ylim()[1]
ax.vlines(noise_scale, 0, ymax, color=colors[3], label=r"$\sigma$".format(i), linewidth=3)
ax.set_ylim(0, ymax)
ax.legend(loc='upper right')
fig.tight_layout()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: It is defined as the number of pairs of objects that are either in the same group or in different groups in both partitions, divided by the total number of pairs of objects. The Rand index lies between 0 and 1. When two partitions agree perfectly, the Rand index achieves its maximum value of 1. A problem with the Rand index is that its expected value between two random partitions is not constant. This problem is corrected by the adjusted Rand index, which assumes the generalized hypergeometric distribution as the model of randomness. The adjusted Rand index has a maximum value of 1, and its expected value is 0 for random clusterings. A larger adjusted Rand index means higher agreement between two partitions. The adjusted Rand index is recommended for measuring agreement even when the partitions compared have different numbers of clusters.
Step2: Furthermore, adjusted_rand_score is symmetric
Step3: Perfect labeling is scored 1.0
Step4: Bad (e.g. independent labelings) have negative or close to 0.0 scores
Step6: Advantages
Step7: Mutual Information based scores
Step8: One can permute 0 and 1 in the predicted labels, rename 2 to 3 and get the same score
Step9: All, mutual_info_score, adjusted_mutual_info_score and normalized_mutual_info_score are symmetric
Step10: Perfect labeling is scored 1.0
Step11: This is not true for mutual_info_score, which is therefore harder to judge
Step13: Advantages
Step14: Homogeneity, completeness and V-measure
Step15: Their harmonic mean called V-measure is computed by v_measure_score
Step16: The V-measure is actually equivalent to the mutual information (NMI) discussed above normalized by the sum of the label entropies.
Step17: Note v_measure_score is symmetric
Step19: Advantages
Step20: Fowlkes-Mallows scores
Step21: One can permute 0 and 1 in the predicted labels, rename 2 to 3 and get the same score
Step22: Perfect labeling is scored 1.0
Step23: Bad (e.g. independent labelings) have zero scores
|
<ASSISTANT_TASK:>
Python Code:
actual = [1, 2, 3 , 5, 10, 11]
predicted = [1, 10, 11, 3, 2, 5 ]
from sklearn import metrics
labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 1, 1, 1, 2]
metrics.adjusted_rand_score(labels_true, labels_pred)
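# Editor's sketch: the plain (unadjusted) Rand index can be computed directly from
# the pair-counting definition given in the description: agreeing pairs / all pairs.
# The helper below is my own and is not part of scikit-learn; adjusted_rand_score
# additionally corrects this value for chance.
from itertools import combinations
def rand_index(labels_a, labels_b):
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum((labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
                for i, j in pairs)
    return float(agree) / len(pairs)
print(rand_index(labels_true, labels_pred))  # 0.6 for the labelings above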
labels_pred = [1, 1, 0, 0, 3, 3]
metrics.adjusted_rand_score(labels_true, labels_pred)
labels_pred = [1, 10, 1, 0, 10, 0]
metrics.adjusted_rand_score(labels_true, labels_pred)
labels_pred = [1, 1, 2, 0, 2, 0]
metrics.adjusted_rand_score(labels_true, labels_pred)
labels_pred = labels_true[:]
print(id(labels_pred), id(labels_true))
metrics.adjusted_rand_score(labels_true, labels_pred)
print(labels_true, labels_pred)
labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 1, 0, 1, 1]
metrics.adjusted_rand_score(labels_true, labels_pred)
labels_true = [0, 1, 2, 0, 3, 4, 5, 1]
labels_pred = [1, 1, 0, 0, 2, 2, 2, 2]
metrics.adjusted_rand_score(labels_true, labels_pred)
print(__doc__)
# Author: Olivier Grisel <olivier.grisel@ensta.org>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from time import time
from sklearn import metrics
def uniform_labelings_scores(score_func, n_samples, n_clusters_range,
fixed_n_classes=None, n_runs=5, seed=42):
    """Compute score for 2 random uniform cluster labelings.
    Both random labelings have the same number of clusters for each
    possible value in ``n_clusters_range``.
    When fixed_n_classes is not None the first labeling is considered a ground
    truth class assignment with fixed number of classes.
    """
random_labels = np.random.RandomState(seed).randint
scores = np.zeros((len(n_clusters_range), n_runs))
if fixed_n_classes is not None:
labels_a = random_labels(low=0, high=fixed_n_classes, size=n_samples)
for i, k in enumerate(n_clusters_range):
for j in range(n_runs):
if fixed_n_classes is None:
labels_a = random_labels(low=0, high=k, size=n_samples)
labels_b = random_labels(low=0, high=k, size=n_samples)
scores[i, j] = score_func(labels_a, labels_b)
return scores
score_funcs = [
metrics.adjusted_rand_score,
metrics.v_measure_score,
metrics.adjusted_mutual_info_score,
metrics.mutual_info_score,
]
# 2 independent random clusterings with equal cluster number
n_samples = 100
n_clusters_range = np.linspace(2, n_samples, 10).astype(np.int)
plt.figure(1)
plots = []
names = []
for score_func in score_funcs:
print("Computing %s for %d values of n_clusters and n_samples=%d"
% (score_func.__name__, len(n_clusters_range), n_samples))
t0 = time()
scores = uniform_labelings_scores(score_func, n_samples, n_clusters_range)
print("done in %0.3fs" % (time() - t0))
plots.append(plt.errorbar(
n_clusters_range, np.median(scores, axis=1), scores.std(axis=1))[0])
names.append(score_func.__name__)
plt.title("Clustering measures for 2 random uniform labelings\n"
"with equal number of clusters")
plt.xlabel('Number of clusters (Number of samples is fixed to %d)' % n_samples)
plt.ylabel('Score value')
plt.legend(plots, names)
plt.ylim(ymin=-0.05, ymax=1.05)
# Random labeling with varying n_clusters against ground class labels
# with fixed number of clusters
n_samples = 1000
n_clusters_range = np.linspace(2, 100, 10).astype(np.int)
n_classes = 10
plt.figure(2)
plots = []
names = []
for score_func in score_funcs:
print("Computing %s for %d values of n_clusters and n_samples=%d"
% (score_func.__name__, len(n_clusters_range), n_samples))
t0 = time()
scores = uniform_labelings_scores(score_func, n_samples, n_clusters_range,
fixed_n_classes=n_classes)
print("done in %0.3fs" % (time() - t0))
plots.append(plt.errorbar(
n_clusters_range, scores.mean(axis=1), scores.std(axis=1))[0])
names.append(score_func.__name__)
plt.title("Clustering measures for random uniform labeling\n"
"against reference assignment with %d classes" % n_classes)
plt.xlabel('Number of clusters (Number of samples is fixed to %d)' % n_samples)
plt.ylabel('Score value')
plt.ylim(ymin=-0.05, ymax=1.05)
plt.legend(plots, names)
plt.show()
from sklearn import metrics
labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 1, 1, 2, 2]
metrics.adjusted_mutual_info_score(labels_true, labels_pred)
from sklearn import metrics
labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 0, 1, 1, 1]
metrics.adjusted_mutual_info_score(labels_true, labels_pred)
from sklearn import metrics
labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 0, 1, 1, 2]
print(metrics.adjusted_rand_score(labels_true, labels_pred))
print(metrics.adjusted_mutual_info_score(labels_true, labels_pred) )
from sklearn import metrics
labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 0, 1, 5, 1]
print(metrics.adjusted_rand_score(labels_true, labels_pred))
metrics.adjusted_mutual_info_score(labels_true, labels_pred)
labels_pred = [1, 1, 0, 0, 3, 3]
metrics.adjusted_mutual_info_score(labels_true, labels_pred)
metrics.adjusted_mutual_info_score(labels_pred, labels_true)
labels_pred = labels_true[:]
metrics.adjusted_mutual_info_score(labels_true, labels_pred)
metrics.normalized_mutual_info_score(labels_true, labels_pred)
metrics.mutual_info_score(labels_true, labels_pred)
# Bad
labels_true = [0, 0, 0, 1, 1, 5, 1, 1]
labels_pred = [1, 1, 3, 5, 2, 2, 2, 2]
print(metrics.adjusted_mutual_info_score(labels_true, labels_pred) )
print(metrics.mutual_info_score(labels_true, labels_pred))
print(__doc__)
# Author: Olivier Grisel <olivier.grisel@ensta.org>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from time import time
from sklearn import metrics
def uniform_labelings_scores(score_func, n_samples, n_clusters_range,
fixed_n_classes=None, n_runs=5, seed=42):
    """Compute score for 2 random uniform cluster labelings.
    Both random labelings have the same number of clusters for each
    possible value in ``n_clusters_range``.
    When fixed_n_classes is not None the first labeling is considered a ground
    truth class assignment with fixed number of classes.
    """
random_labels = np.random.RandomState(seed).randint
scores = np.zeros((len(n_clusters_range), n_runs))
if fixed_n_classes is not None:
labels_a = random_labels(low=0, high=fixed_n_classes, size=n_samples)
for i, k in enumerate(n_clusters_range):
for j in range(n_runs):
if fixed_n_classes is None:
labels_a = random_labels(low=0, high=k, size=n_samples)
labels_b = random_labels(low=0, high=k, size=n_samples)
scores[i, j] = score_func(labels_a, labels_b)
return scores
score_funcs = [
metrics.adjusted_rand_score,
metrics.v_measure_score,
metrics.adjusted_mutual_info_score,
metrics.mutual_info_score,
]
# 2 independent random clusterings with equal cluster number
n_samples = 100
n_clusters_range = np.linspace(2, n_samples, 10).astype(np.int)
plt.figure(1)
plots = []
names = []
for score_func in score_funcs:
print("Computing %s for %d values of n_clusters and n_samples=%d"
% (score_func.__name__, len(n_clusters_range), n_samples))
t0 = time()
scores = uniform_labelings_scores(score_func, n_samples, n_clusters_range)
print("done in %0.3fs" % (time() - t0))
plots.append(plt.errorbar(
n_clusters_range, np.median(scores, axis=1), scores.std(axis=1))[0])
names.append(score_func.__name__)
plt.title("Clustering measures for 2 random uniform labelings\n"
"with equal number of clusters")
plt.xlabel('Number of clusters (Number of samples is fixed to %d)' % n_samples)
plt.ylabel('Score value')
plt.legend(plots, names)
plt.ylim(ymin=-0.05, ymax=1.05)
# Random labeling with varying n_clusters against ground class labels
# with fixed number of clusters
n_samples = 1000
n_clusters_range = np.linspace(2, 100, 10).astype(np.int)
n_classes = 10
plt.figure(2)
plots = []
names = []
for score_func in score_funcs:
print("Computing %s for %d values of n_clusters and n_samples=%d"
% (score_func.__name__, len(n_clusters_range), n_samples))
t0 = time()
scores = uniform_labelings_scores(score_func, n_samples, n_clusters_range,
fixed_n_classes=n_classes)
print("done in %0.3fs" % (time() - t0))
plots.append(plt.errorbar(
n_clusters_range, scores.mean(axis=1), scores.std(axis=1))[0])
names.append(score_func.__name__)
plt.title("Clustering measures for random uniform labeling\n"
"against reference assignment with %d classes" % n_classes)
plt.xlabel('Number of clusters (Number of samples is fixed to %d)' % n_samples)
plt.ylabel('Score value')
plt.ylim(ymin=-0.05, ymax=1.05)
plt.legend(plots, names)
plt.show()
from sklearn import metrics
labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 1, 1, 2, 2]
print(metrics.homogeneity_score(labels_true, labels_pred) )
print(metrics.completeness_score(labels_true, labels_pred) )
labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 2, 2, 0, 0, 2]
print(metrics.homogeneity_score(labels_true, labels_pred))
print(metrics.completeness_score(labels_true, labels_pred))
labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 1, 1, 0, 0, 1]
print(metrics.homogeneity_score(labels_true, labels_pred))
print(metrics.completeness_score(labels_true, labels_pred))
labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 0, 2, 2, 2]
print(metrics.homogeneity_score(labels_true, labels_pred))
print(metrics.completeness_score(labels_true, labels_pred))
metrics.v_measure_score(labels_true, labels_pred)
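# Editor's sketch: a quick numerical check of the statement above that the V-measure
# equals the mutual information normalized by the label entropies, i.e.
# 2 * MI / (H(true) + H(pred)). The local names lt/lp are my own; illustrative only.
from scipy.stats import entropy
lt, lp = [0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 2, 2]
mi = metrics.mutual_info_score(lt, lp)
h_sum = entropy(np.bincount(lt)) + entropy(np.bincount(lp))
print(metrics.v_measure_score(lt, lp), 2 * mi / h_sum)  # the two values should match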
labels_pred = [0, 0, 0, 1, 2, 2]
metrics.homogeneity_completeness_v_measure(labels_true, labels_pred)
metrics.homogeneity_score(labels_true, labels_pred) == metrics.completeness_score(labels_pred, labels_true)
print(__doc__)
# Author: Olivier Grisel <olivier.grisel@ensta.org>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from time import time
from sklearn import metrics
def uniform_labelings_scores(score_func, n_samples, n_clusters_range,
fixed_n_classes=None, n_runs=5, seed=42):
    """Compute score for 2 random uniform cluster labelings.
    Both random labelings have the same number of clusters for each
    possible value in ``n_clusters_range``.
    When fixed_n_classes is not None the first labeling is considered a ground
    truth class assignment with fixed number of classes.
    """
random_labels = np.random.RandomState(seed).randint
scores = np.zeros((len(n_clusters_range), n_runs))
if fixed_n_classes is not None:
labels_a = random_labels(low=0, high=fixed_n_classes, size=n_samples)
for i, k in enumerate(n_clusters_range):
for j in range(n_runs):
if fixed_n_classes is None:
labels_a = random_labels(low=0, high=k, size=n_samples)
labels_b = random_labels(low=0, high=k, size=n_samples)
scores[i, j] = score_func(labels_a, labels_b)
return scores
score_funcs = [
metrics.adjusted_rand_score,
metrics.v_measure_score,
metrics.adjusted_mutual_info_score,
metrics.mutual_info_score,
]
# 2 independent random clusterings with equal cluster number
n_samples = 100
n_clusters_range = np.linspace(2, n_samples, 10).astype(np.int)
plt.figure(1)
plots = []
names = []
for score_func in score_funcs:
print("Computing %s for %d values of n_clusters and n_samples=%d"
% (score_func.__name__, len(n_clusters_range), n_samples))
t0 = time()
scores = uniform_labelings_scores(score_func, n_samples, n_clusters_range)
print("done in %0.3fs" % (time() - t0))
plots.append(plt.errorbar(
n_clusters_range, np.median(scores, axis=1), scores.std(axis=1))[0])
names.append(score_func.__name__)
plt.title("Clustering measures for 2 random uniform labelings\n"
"with equal number of clusters")
plt.xlabel('Number of clusters (Number of samples is fixed to %d)' % n_samples)
plt.ylabel('Score value')
plt.legend(plots, names)
plt.ylim(ymin=-0.05, ymax=1.05)
# Random labeling with varying n_clusters against ground class labels
# with fixed number of clusters
n_samples = 1000
n_clusters_range = np.linspace(2, 100, 10).astype(np.int)
n_classes = 10
plt.figure(2)
plots = []
names = []
for score_func in score_funcs:
print("Computing %s for %d values of n_clusters and n_samples=%d"
% (score_func.__name__, len(n_clusters_range), n_samples))
t0 = time()
scores = uniform_labelings_scores(score_func, n_samples, n_clusters_range,
fixed_n_classes=n_classes)
print("done in %0.3fs" % (time() - t0))
plots.append(plt.errorbar(
n_clusters_range, scores.mean(axis=1), scores.std(axis=1))[0])
names.append(score_func.__name__)
plt.title("Clustering measures for random uniform labeling\n"
"against reference assignment with %d classes" % n_classes)
plt.xlabel('Number of clusters (Number of samples is fixed to %d)' % n_samples)
plt.ylabel('Score value')
plt.ylim(ymin=-0.05, ymax=1.05)
plt.legend(plots, names)
plt.show()
from sklearn import metrics
labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 1, 1, 2, 2]
metrics.fowlkes_mallows_score(labels_true, labels_pred)
labels_pred = [1, 1, 0, 0, 3, 3]
metrics.fowlkes_mallows_score(labels_true, labels_pred)
labels_pred = labels_true[:]
metrics.fowlkes_mallows_score(labels_true, labels_pred)
labels_true = [0, 1, 2, 0, 3, 4, 5, 1]
labels_pred = [1, 1, 0, 0, 2, 2, 2, 2]
metrics.fowlkes_mallows_score(labels_true, labels_pred)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Even better is to represent the same data in a Python list. To create a list, you need to use square brackets ([, ]) and separate each item with a comma. Every item in the list is a Python string, so each is enclosed in quotation marks.
Step2: At first glance, it doesn't look too different, whether you represent the information in a Python string or list. But as you will see, there are a lot of tasks that you can more easily do with a list. For instance, a list will make it easier to
Step3: Indexing
Step4: Side Note
Step5: As you can see above, when we slice a list, it returns a new, shortened list.
Step6: Adding items
Step7: Lists are not just for strings
Step8: Here, hardcover_sales is a list of integers. Similar to when working with strings, you can still do things like get the length, pull individual entries, and extend the list.
Step9: You can also get the minimum with min() and the maximum with max().
Step10: To add every item in the list, use sum().
Step11: We can also do similar calculations with slices of the list. In the next code cell, we take the sum from the first five days (sum(hardcover_sales[
|
<ASSISTANT_TASK:>
Python Code:
flowers = "pink primrose,hard-leaved pocket orchid,canterbury bells,sweet pea,english marigold,tiger lily,moon orchid,bird of paradise,monkshood,globe thistle"
print(type(flowers))
print(flowers)
flowers_list = ["pink primrose", "hard-leaved pocket orchid", "canterbury bells", "sweet pea", "english marigold", "tiger lily", "moon orchid", "bird of paradise", "monkshood", "globe thistle"]
print(type(flowers_list))
print(flowers_list)
# The list has ten entries
print(len(flowers_list))
print("First entry:", flowers_list[0])
print("Second entry:", flowers_list[1])
# The list has length ten, so we refer to final entry with 9
print("Last entry:", flowers_list[9])
print("First three entries:", flowers_list[:3])
print("Final two entries:", flowers_list[-2:])
flowers_list.remove("globe thistle")
print(flowers_list)
flowers_list.append("snapdragon")
print(flowers_list)
hardcover_sales = [139, 128, 172, 139, 191, 168, 170]
print("Length of the list:", len(hardcover_sales))
print("Entry at index 2:", hardcover_sales[2])
print("Minimum:", min(hardcover_sales))
print("Maximum:", max(hardcover_sales))
print("Total books sold in one week:", sum(hardcover_sales))
print("Average books sold in first five days:", sum(hardcover_sales[:5])/5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
dataset = load_data()
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(dataset.iloc[:, :-1], dataset.iloc[:, -1], test_size=0.4,
random_state=42)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load original Keras ResNet50 model without the top layer.
Step2: Add a Pooling layer at the top to extract the CNN codes (aka bottleneck
Step3: The following preprocessing is not proper for ResNet, as it uses the mean image rather than the mean pixel (I chose the VGG paper values), yet it yields only small numerical differences, so it works properly and is more than enough for this experiment.
Step4: Get the training and validation DirectoryIterators
Step5: Obtain the CNN codes for all images (it takes ~10 minutes on a GTX 1080 GPU)
Step6: Save the CNN codes for further analysis
Step7: Compute the mean code values across the training set (per class)
Step8: Visualize the codes as images. As can be clearly seen, cats activate many different features (plenty of high-value, dark spots) while dogs strongly activate only two neurons (two distinct dark spots).
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import zipfile
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
from skimage import color, io
from scipy.misc import imresize
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D, Activation, GlobalMaxPooling2D
from keras.layers import merge, Input, Lambda
from keras.callbacks import EarlyStopping
from keras.models import Model
import h5py
np.random.seed(31337)
NAME="ResNet50-300x300-MaxPooling"
# Config the matplotlib backend as plotting inline in IPython
%matplotlib inline
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
import numpy as np
resnet_codes_model = ResNet50(input_shape=(300,300,3), include_top=False, weights='imagenet')
#resnet_codes_model.summary()
# Final model
model=Model(input=resnet_codes_model.input, output=GlobalMaxPooling2D()(resnet_codes_model.output))
model.summary()
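# Editor's note: with include_top=False and 300x300 inputs, ResNet50's last feature
# map has 2048 channels (spatially about 10x10 here), so GlobalMaxPooling2D turns each
# image into a single 2048-dimensional "CNN code" -- which is why the mean codes are
# later reshaped to 32x64 (= 2048) for visualization.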
from keras.preprocessing.image import ImageDataGenerator
def img_to_bgr(im):
# the following BGR values should be subtracted: [103.939, 116.779, 123.68]. (VGG)
return (im[:,:,::-1] - np.array([103.939, 116.779, 123.68]))
datagen = ImageDataGenerator(rescale=1., preprocessing_function=img_to_bgr) #(rescale=1./255)
train_batches = datagen.flow_from_directory("train", model.input_shape[1:3], shuffle=False, batch_size=32)
valid_batches = datagen.flow_from_directory("valid", model.input_shape[1:3], shuffle=False, batch_size=32)
test_batches = datagen.flow_from_directory("test", model.input_shape[1:3], shuffle=False, batch_size=32, class_mode=None)
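# Editor's note (sketch): the hand-rolled img_to_bgr above uses VGG-style mean values,
# as the description acknowledges. An alternative would be to reuse Keras' own helper,
# preprocess_input (imported above from keras.applications.resnet50), directly as the
# generator's preprocessing function. Shown for reference only; the rest of the
# notebook keeps the original generators.
datagen_alt = ImageDataGenerator(preprocessing_function=preprocess_input)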
train_codes = model.predict_generator(train_batches, train_batches.nb_sample)
valid_codes = model.predict_generator(valid_batches, valid_batches.nb_sample)
test_codes = model.predict_generator(test_batches, test_batches.nb_sample)
from keras.utils.np_utils import to_categorical
with h5py.File(NAME+"_codes-train.h5") as hf:
hf.create_dataset("X_train", data=train_codes)
hf.create_dataset("X_valid", data=valid_codes)
hf.create_dataset("Y_train", data=to_categorical(train_batches.classes))
hf.create_dataset("Y_valid", data=to_categorical(valid_batches.classes))
with h5py.File(NAME+"_codes-test.h5") as hf:
hf.create_dataset("X_test", data=test_codes)
def get_codes_by_class(X,Y):
l=len(Y)
if (len(X)!=l):
raise Exception("X and Y are of different lengths")
classes=set(Y)
return [[X[i] for i in xrange(l) if Y[i]==c] for c in classes], classes
class_codes, classes=get_codes_by_class(train_codes, train_batches.classes)
cats=np.mean(class_codes[0],0)
dogs=np.mean(class_codes[1],0)
cats=np.abs(cats)
dogs=np.abs(dogs)
# cats=np.log(cats)
# dogs=np.log(dogs)
cats/=cats.max()
dogs/=dogs.max()
fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(12, 6))
ax[0,0].imshow(cats.reshape(32,64),cmap="Greys")
ax[0,0].set_title('Cats')
ax[0,1].imshow(dogs.reshape(32,64),cmap="Greys")
ax[0,1].set_title('Dogs')
freq = np.fft.fft2(cats.reshape(32,64))
freq = np.abs(freq)
ax[1,0].hist(np.log(freq).ravel(), bins=100)
ax[1,0].set_title('hist(log(freq))')
freq = np.fft.fft2(dogs.reshape(32,64))
freq = np.abs(freq)
ax[1,1].hist(np.log(freq).ravel(), bins=100)
ax[1,1].set_title('hist(log(freq))')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We want to make a phenotype phase plane to evaluate uptakes of Glucose and Oxygen.
Step2: If brewer2mpl is installed, other color schemes can be used as well
Step3: The number of points which are plotted in each dimension can also be changed
Step4: The code can also use multiple processes to speed up calculations
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from time import time
import cobra.test
from cobra.flux_analysis import calculate_phenotype_phase_plane
model = cobra.test.create_test_model("textbook")
data = calculate_phenotype_phase_plane(model, "EX_glc__D_e", "EX_o2_e")
data.plot_matplotlib();
data.plot_matplotlib("Pastel1")
data.plot_matplotlib("Dark2");
calculate_phenotype_phase_plane(model, "EX_glc__D_e", "EX_o2_e",
reaction1_npoints=20,
reaction2_npoints=20).plot_matplotlib();
start_time = time()
calculate_phenotype_phase_plane(model, "EX_glc__D_e", "EX_o2_e", n_processes=1,
reaction1_npoints=100, reaction2_npoints=100)
print("took %.2f seconds with 1 process" % (time() - start_time))
start_time = time()
calculate_phenotype_phase_plane(model, "EX_glc__D_e", "EX_o2_e", n_processes=4,
reaction1_npoints=100, reaction2_npoints=100)
print("took %.2f seconds with 4 process" % (time() - start_time))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For the Essay 5 prompt text and passage, refer to the Word document in the data folder
Step2: Let us use this data for features and model building
Step3: 2. Let us Build Features
Step4: 3. Model building
Step5: 4. Model Prediction
Step6: This is a very bad model; I will work on this again tonight and build a better one. The predictions are very poor.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
# Load data
dataset_essay_1 = pd.read_csv("/data/data/automated_scoring_public_dataset.csv")
dataset_essay_1.shape
dataset_essay_1['essay'][0]
print ("Mean word count: ", dataset_essay_1['word_count'].mean())
print ("Max word count: ", dataset_essay_1['word_count'].max())
print ("Min word count: ", dataset_essay_1['word_count'].min())
print ("STD word count: ", dataset_essay_1['word_count'].std())
dataset_essay_1_dropped_NaN_columns = dataset_essay_1.dropna(axis=1, how='all')
dataset_essay_1_dropped_NaN_columns.shape
dataset_essay_1_dropped_NaN_columns.head(2)
# we are interested only in two columns
# data (X) is essay text and truth value being (Y) 'rater1_domain1'
dataset = dataset_essay_1_dropped_NaN_columns[['essay', 'rater1_domain1']]
dataset.rater1_domain1.value_counts() # this is the rater 1 human score distribution
def convert_dataframe_to_arrays(dataset):
essay_array = np.array(dataset['essay'].tolist()) # data
essay_rater1 = np.array(dataset['rater1_domain1'].tolist()) # truth value
return essay_array, essay_rater1
from sklearn.model_selection import train_test_split
def split_train_test_X_Y(dataset):
X, y = convert_dataframe_to_arrays(dataset)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
print("X_train shape: ", X_train.shape)
print("X_test shape: ", X_test.shape)
print("y_train shape: ", y_train.shape)
print("y_test shape: ", y_test.shape)
return X_train, X_test, y_train, y_test
# Split it in to train and test arrays
X_train, X_test, y_train, y_test = split_train_test_X_Y(dataset)
X_train[0]
y_train[0]
X_test[0]
y_test[0]
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(X_train)
X_train_counts.shape
X_train_counts[0]
tf_transformer = TfidfTransformer(use_idf=False).fit(X_train_counts)
X_train_tf = tf_transformer.transform(X_train_counts)
X_train_tf.shape
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
X_train_tfidf.shape
from sklearn.naive_bayes import MultinomialNB
clf = MultinomialNB().fit(X_train_tfidf, y_train)
X_test_counts = count_vect.transform(X_test)
X_test_counts.shape
X_test_tfidf = tfidf_transformer.transform(X_test_counts)
X_test_tfidf.shape
predicted = clf.predict(X_test_tfidf)
predicted
y_test
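# Editor's sketch: quantify how poor the predictions are instead of eyeballing the
# arrays. Accuracy and quadratic weighted kappa (a common metric for essay scoring)
# are both available in scikit-learn; this cell is illustrative only.
from sklearn.metrics import accuracy_score, cohen_kappa_score
print("Accuracy:", accuracy_score(y_test, predicted))
print("Quadratic weighted kappa:", cohen_kappa_score(y_test, predicted, weights='quadratic'))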
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1
Step2: Step 2
Step3: Step 3
Step4: OK. Now that everything is defined, it's time to run the code. mwxml has a map() function that applies the process_dump function to each of the XML dump files in paths -- in parallel -- using Python's multiprocessing library, and collects all of the yielded values in a generator. As the code below demonstrates, it's easy to collect this output and write it to a new output file or print it out to the console (not recommended for large amounts of output).
|
<ASSISTANT_TASK:>
Python Code:
import mwxml
import glob
paths = glob.glob('/public/dumps/public/nlwiki/20151202/nlwiki-20151202-pages-meta-history*.xml*.bz2')
paths
import re
EXTS = ["png", "gif", "jpg", "jpeg"]
# [[(file|image):<file>.<ext>]]
IMAGE_LINK_RE = re.compile(r"\[\[" +
r"(file|image|afbeelding|bestand):" + # Group 1
r"([^\]]+.(" + "|".join(EXTS) + r"))" + # Group 2 & 3
r"(|[^\]]+)?" + # Group 4
r"\]\]")
def extract_image_links(text):
for m in IMAGE_LINK_RE.finditer(text):
yield m.group(2)
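# Editor's sketch: a quick sanity check of the regex on a made-up snippet of wikitext.
# Note the lowercase "file:" -- the pattern above is case-sensitive as written.
sample_text = "Intro [[file:voorbeeld_foto.jpg|thumb|Een onderschrift]] rest of the page."
print(list(extract_image_links(sample_text)))  # -> ['voorbeeld_foto.jpg']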
def process_dump(dump, path):
for page in dump:
last_count = 0
for revision in page:
image_links = list(extract_image_links(revision.text or ""))
delta = len(image_links) - last_count
if delta != 0:
yield revision.id, revision.timestamp, delta
last_count = len(image_links)
count = 0
for rev_id, rev_timestamp, delta in mwxml.map(process_dump, paths):
print("\t".join(str(v) for v in [rev_id, rev_timestamp, delta]))
count += 1
if count > 15:
break
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'hadgem3-gc31-hh', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
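# Completion pattern (sketch; the values below are examples only, not the
# documented HadGEM3-GC31-HH configuration): each property cell above is
# finished by supplying DOC.set_value() with free text, a number, a boolean,
# or one of the listed valid choices, e.g.
# DOC.set_id('cmip6.land.lakes.method.albedo')
# DOC.set_value("prognostic")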
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download & Process Mordor Dataset
Step2: Analytic I
Step3: Analytic II
|
<ASSISTANT_TASK:>
Python Code:
from openhunt.mordorutils import *
spark = get_spark()
mordor_file = "https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/credential_access/host/empire_dcsync_dcerpc_drsuapi_DsGetNCChanges.zip"
registerMordorSQLTable(spark, mordor_file, "mordorTable")
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, SubjectUserName, SubjectLogonId
FROM mordorTable
WHERE LOWER(Channel) = "security"
AND EventID = 4662
AND AccessMask = "0x100"
AND (
Properties LIKE "%1131f6aa_9c07_11d1_f79f_00c04fc2dcd2%"
OR Properties LIKE "%1131f6ad_9c07_11d1_f79f_00c04fc2dcd2%"
OR Properties LIKE "%89e95b76_444d_4c62_991a_0facbeda640c%"
)
AND NOT SubjectUserName LIKE "%$"
'''
)
df.show(10,False)
df = spark.sql(
'''
SELECT o.`@timestamp`, o.Hostname, o.SubjectUserName, o.SubjectLogonId, a.IpAddress
FROM mordorTable o
INNER JOIN (
SELECT Hostname,TargetUserName,TargetLogonId,IpAddress
FROM mordorTable
WHERE LOWER(Channel) = "security"
AND EventID = 4624
AND LogonType = 3
AND NOT TargetUserName LIKE "%$"
) a
ON o.SubjectLogonId = a.TargetLogonId
WHERE LOWER(o.Channel) = "security"
AND o.EventID = 4662
AND o.AccessMask = "0x100"
AND (
o.Properties LIKE "%1131f6aa_9c07_11d1_f79f_00c04fc2dcd2%"
OR o.Properties LIKE "%1131f6ad_9c07_11d1_f79f_00c04fc2dcd2%"
OR o.Properties LIKE "%89e95b76_444d_4c62_991a_0facbeda640c%"
)
AND o.Hostname = a.Hostname
AND NOT o.SubjectUserName LIKE "%$"
'''
)
df.show(10,False)
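# Optional follow-up (illustrative only): summarise which accounts and source
# addresses matched the DCSync pattern above, using standard Spark DataFrame
# calls on the columns returned by Analytic II.
summary = df.groupBy("SubjectUserName", "IpAddress").count()
summary.show(10, False)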
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Preparation
Step2: To keep the model simple, relabel draws as away-team wins rather than modelling them as a third outcome.
Step3: We want to split the data into test and train in a stratified manner, i.e. we don't want to favour a certain season, or a part of the season. So we'll take a portion (25%) of games from each round.
Step4: Create test and training data
Step5: Capture all of the 'diff' columns in the model, too
Step6: Define features
Step7: Set up test and train datasets
Step8: Fill the NaN values
Step9: Modelling
Step10: Model 2
Step11: Training model on all of the data
Step12: We'll then prepare the data for the round we're interested in. We'll do this by
Step13: Get the IDs for the players we'll be using
Step14: Try including Bachar Houli
Step15: Can now use this to make predictions
Step16: Glue these together and sort
Step17: Training model on all data
Step19: Visualisation
Step32: Metadata and functions
|
<ASSISTANT_TASK:>
Python Code:
# match data with aggregated individual data
import pandas as pd
match_path = '/Users/t_raver9/Desktop/projects/aflengine/analysis/machine_learning/src/player_data/data/matches_with_player_agg.csv'
players_path = '/Users/t_raver9/Desktop/projects/aflengine/analysis/machine_learning/src/player_data/data/players_with_player_stat_totals.csv'
matches = pd.read_csv(match_path)
players = pd.read_csv(players_path)
model_data = matches[matches['season'] >= 2010].copy()  # copy so the relabelling below does not hit a SettingWithCopyWarning
for idx, row in model_data.iterrows():
if row['winner'] == 'draw':
model_data.at[idx,'winner'] = 'away'
# How many games do we get per round?
round_counts = {}
curr_round = 1
matches_in_round = 0
for idx,row in model_data.iterrows():
if curr_round != row['round']:
if matches_in_round not in round_counts:
round_counts[matches_in_round] = 1
else:
round_counts[matches_in_round] += 1
curr_round = row['round']
matches_in_round = 1
continue
else:
matches_in_round += 1
round_counts
# Taking a minimum 25% of each round
from math import ceil
test_sample_size = {}
for num_games in round_counts:
test_sample_size[num_games] = ceil(num_games/4)
rounds_in_season = get_season_rounds(model_data)
teams_in_season = get_season_teams(model_data)
# test set
from copy import deepcopy
test_data = pd.DataFrame()
for season, max_round in rounds_in_season.items():
for rnd in range(1, max_round):
round_matches = model_data[(model_data['season']==season) & (model_data['round']==rnd)]
num_test = test_sample_size[len(round_matches)]
round_test_set = round_matches.sample(num_test)
test_data = test_data.append(round_test_set)
# training set
training_data = model_data.drop(test_data.index)
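# Aside (sketch only, not used by the rest of the notebook): a more compact way
# to hold out roughly a quarter of each season/round using pandas alone.
_test_alt = model_data.groupby(['season', 'round'], group_keys=False).apply(
    lambda g: g.sample(frac=0.25, random_state=0))
_train_alt = model_data.drop(_test_alt.index)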
diff_cols = [col for col in model_data.columns if col[0:4] == 'diff']
features = [col
for col
in ['h_career_' + col for col in player_cols_to_agg] + \
['h_season_' + col for col in player_cols_to_agg] + \
['a_career_' + col for col in player_cols_to_agg] + \
['a_season_' + col for col in player_cols_to_agg] + \
['h_' + col for col in ladder_cols] + \
['h_' + col + '_form' for col in ladder_cols] + \
['a_' + col for col in ladder_cols] + \
['a_' + col + '_form' for col in ladder_cols] + \
['h_career_' + col for col in misc_columns] + \
['h_season_' + col for col in misc_columns] + \
['a_career_' + col for col in misc_columns] + \
['a_season_' + col for col in misc_columns] + \
diff_cols
]
# REMOVE PERCENTAGE FOR NOW
features.remove('h_percentage')
features.remove('a_percentage')
features.remove('diff_percentage')
target = 'winner'
X_train = training_data[features]
y_train = training_data[target]
X_test = test_data[features]
y_test = test_data[target]
X_train.fillna(0,inplace=True)
y_train.fillna(0,inplace=True)
X_test.fillna(0,inplace=True)
y_test.fillna(0,inplace=True)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
import numpy as np
log_reg = LogisticRegression()
param_grid = {
'tol': [.0001, .001, .01],
'C': [.1, 1, 10],
'max_iter': [50,100,200]
}
grid_log_reg = GridSearchCV(log_reg, param_grid, cv=5)
grid_log_reg.fit(X_train, y_train)
grid_log_reg.score(X_train,y_train)
grid_log_reg.score(X_test,y_test)
# Confirm that it's not just picking the home team
print(sum(grid_log_reg.predict(X_test)=='away'))
print(sum(grid_log_reg.predict(X_test)=='home'))
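# Sketch: a confusion matrix gives a fuller picture than accuracy alone
# (rows = actual winner, columns = predicted winner). Standard scikit-learn API.
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test, grid_log_reg.predict(X_test), labels=['home', 'away']))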
diff_cols = [col for col in model_data.columns if col[0:4] == 'diff']
features = diff_cols
# REMOVE PERCENTAGE FOR NOW
diff_cols.remove('diff_percentage')
target = 'winner'
X_train_2 = training_data[diff_cols]
y_train_2 = training_data[target]
X_test_2 = test_data[diff_cols]
y_test_2 = test_data[target]
#X_train_2 = X_train_2[features]
#y_train_2 = y_train_2[features]
#X_test_2 = X_test_2[features]
#y_test_2 = y_test_2[features]
X_train_2.fillna(0,inplace=True)
y_train_2.fillna(0,inplace=True)
X_test_2.fillna(0,inplace=True)
y_test_2.fillna(0,inplace=True)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
import numpy as np
log_reg_2 = LogisticRegression()
param_grid = {
'tol': [.0001, .001, .01],
'C': [.1, 1, 10],
'max_iter': [50,100,200]
}
grid_log_reg_2 = GridSearchCV(log_reg_2, param_grid, cv=5)
grid_log_reg_2.fit(X_train_2, y_train_2)
grid_log_reg_2.score(X_train_2,y_train_2)
grid_log_reg_2.score(X_test_2,y_test_2)
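# Illustrative summary only: the two feature sets' held-out accuracy side by side.
print('All features      :', round(grid_log_reg.score(X_test, y_test), 3))
print('Diff-only features:', round(grid_log_reg_2.score(X_test_2, y_test_2), 3))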
training_data[(training_data['round']==1) & (training_data['season']==2018)]
fixture_path = '/Users/t_raver9/Desktop/projects/aflengine/tipengine/fixture2020.csv'
fixture = pd.read_csv(fixture_path)
fixture[fixture['round']==2]
next_round_matches = get_upcoming_matches(matches,fixture,round_num=2)
next_round_matches
import cv2
import pytesseract
custom_config = r'--oem 3 --psm 6'
import pathlib
names_dir = '/Users/t_raver9/Desktop/projects/aflengine/analysis/machine_learning/src/OCR/images'
# Initialise the dictionary
player_names_dict = {}
for team in matches['hteam'].unique():
player_names_dict[team] = []
# Fill out the dictionary
for path in pathlib.Path(names_dir).iterdir():
print(path)
if path.name.split('.')[0] in player_names_dict:
path_str = str(path)
image_obj = cv2.imread(path_str)
image_string = pytesseract.image_to_string(image_obj, config=custom_config)
names = get_player_names(image_string)
player_names_dict[path.name.split('.')[0]].extend(names)
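# Quick sanity check (sketch): how many names were recovered from each team image.
for team_name, names_found in player_names_dict.items():
    print(team_name, len(names_found))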
from copy import deepcopy
players_in_rnd = []
for _, v in player_names_dict.items():
players_in_rnd.extend(v)
player_data = get_player_data(players_in_rnd)
players_in_rnd
aggregate = player_data[player_cols].groupby('team').apply(lambda x: x.mean(skipna=False))
# Factor in any missing players
num_players_per_team = player_data[player_cols].groupby('team').count()['Supercoach']
for team in num_players_per_team.index:
aggregate.loc[team] = aggregate.loc[team] * (22/num_players_per_team[team])
aggs_h = deepcopy(aggregate)
aggs_a = deepcopy(aggregate)
aggs_h.columns = aggregate.columns.map(lambda x: 'h_' + str(x))
aggs_a.columns = aggregate.columns.map(lambda x: 'a_' + str(x))
combined = next_round_matches.merge(aggs_h, left_on='hteam', right_on='team')
combined = combined.merge(aggs_a, left_on='ateam', right_on='team')
combined = get_diff_cols(combined)
pd.set_option('max_columns',500)
X = combined[features]
X['diff_wins_form']
grid_log_reg.decision_function(X)
grid_log_reg.predict_proba(X)
grid_log_reg.predict(X)
Z = combined[diff_cols]
grid_log_reg_2.predict_proba(Z)
grid_log_reg_2.predict(Z)
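# Sketch: label the win probabilities with the fixture so each row reads as a
# match-up. Column order is taken from grid_log_reg.classes_ at run time.
probs = pd.DataFrame(grid_log_reg.predict_proba(X), columns=grid_log_reg.classes_)
pd.concat([combined[['hteam', 'ateam']].reset_index(drop=True), probs], axis=1)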
combined[['ateam','hteam']]
combined[['h_percentage_form','a_percentage_form']]
combined[['h_career_games_played','a_career_games_played']]
combined[['h_wins_form','a_wins_form']]
model_coef = grid_log_reg.best_estimator_.coef_
X['diff_season_Supercoach']
coef = []
for i in model_coef:
for j in i:
coef.append(abs(j))
zipped = list(zip(features,coef))
zipped.sort(key = lambda x: x[1],reverse=True)
zipped
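# Optional visual (sketch): the ten largest absolute coefficients, plotted as a
# quick sanity check of which features drive the model's predictions.
import matplotlib.pyplot as plt
top_names, top_vals = zip(*zipped[:10])
plt.barh(list(top_names)[::-1], list(top_vals)[::-1])
plt.xlabel('|coefficient|')
plt.tight_layout()
plt.show()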
features = [col
for col
in ['h_career_' + col for col in player_cols_to_agg] + \
['h_season_' + col for col in player_cols_to_agg] + \
['a_career_' + col for col in player_cols_to_agg] + \
['a_season_' + col for col in player_cols_to_agg] + \
['h_' + col for col in ladder_cols] + \
['h_' + col + '_form' for col in ladder_cols] + \
['a_' + col for col in ladder_cols] + \
['a_' + col + '_form' for col in ladder_cols] + \
['h_career_' + col for col in misc_columns] + \
['h_season_' + col for col in misc_columns] + \
['a_career_' + col for col in misc_columns] + \
['a_season_' + col for col in misc_columns] + \
diff_cols
]
# REMOVE PERCENTAGE FOR NOW
features.remove('h_percentage')
features.remove('a_percentage')
features.remove('diff_percentage')
target = 'winner'
X = model_data[features]
y = model_data[target]
X.fillna(0,inplace=True)
y.fillna(0,inplace=True)
grid_log_reg_2.predict_proba(Z)
combined[['ateam','hteam']]
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
category_names = ['Home','Away']
results = {
'Collingwood v Richmond': [50.7,49.3],
'Geelong v Hawthorn': [80.4,19.5],
'Brisbane Lions v Fremantle': [57.3,42.7],
'Carlton v Melbourne': [62.4,37.6],
'Gold Coast v West Coast': [9.9,90.1],
'Port Adelaide v Adelaide': [58.0,42.0],
'GWS v North Melbourne': [62.6,37.4],
'Sydney v Essendon': [75.2,24.8],
'St Kilda v Footscray': [61.2,38.8]
}
def survey(results, category_names):
"""
Parameters
----------
results : dict
    A mapping from question labels to a list of answers per category.
    It is assumed all lists contain the same number of entries and that
    it matches the length of *category_names*.
category_names : list of str
    The category labels.
"""
labels = list(results.keys())
data = np.array(list(results.values()))
data_cum = data.cumsum(axis=1)
category_colors = plt.get_cmap('RdYlGn')(
np.linspace(0.15, 0.85, data.shape[1]))
fig, ax = plt.subplots(figsize=(18, 10))
fig.suptitle('Win Probabilities', fontsize=20)
ax.invert_yaxis()
ax.xaxis.set_visible(False)
ax.set_xlim(0, np.sum(data, axis=1).max())
for i, (colname, color) in enumerate(zip(category_names, category_colors)):
widths = data[:, i]
starts = data_cum[:, i] - widths
ax.barh(labels, widths, left=starts, height=0.5,
label=colname, color=color)
xcenters = starts + widths / 2
r, g, b, _ = color
text_color = 'white' if r * g * b < 0.5 else 'darkgrey'
for y, (x, c) in enumerate(zip(xcenters, widths)):
ax.text(x, y, str(int(c)), ha='center', va='center',
color=text_color,fontsize=15)
ax.legend(ncol=len(category_names), bbox_to_anchor=(0, 1),
loc='lower left', fontsize=15)
return fig, ax
survey(results, category_names)
plt.show()
from typing import Dict
import numpy as np
def get_season_rounds(matches: pd.DataFrame) -> Dict:
"""Return a dictionary with seasons as keys and the number of rounds in each season as values."""
seasons = matches['season'].unique()
rounds_in_season = dict.fromkeys(seasons,0)
for season in seasons:
rounds_in_season[season] = max(matches[matches['season']==season]['round'])
return rounds_in_season
# What teams participated in each season?
def get_season_teams(matches: pd.DataFrame) -> Dict:
"""Return a dictionary with seasons as keys and a list of teams who played in that season as values."""
seasons = matches['season'].unique()
teams_in_season = {}
for season in seasons:
teams = list(matches[matches['season']==season]['hteam'].unique())
teams.extend(list(matches[matches['season']==season]['ateam'].unique()))
teams = np.unique(teams)
teams_in_season[season] = list(teams)
return teams_in_season
player_cols_to_agg = [
'AFLfantasy',
'Supercoach',
'behinds',
'bounces',
'brownlow',
'clangers',
'clearances',
'contested_marks',
'contested_poss',
'disposals',
'frees_against',
'frees_for',
'goal_assists',
'goals',
'handballs',
'hitouts',
'inside50',
'kicks',
'marks',
'marks_in_50',
'one_percenters',
'rebound50',
'tackles',
'tog',
'uncontested_poss',
'centre_clearances',
'disposal_efficiency',
'effective_disposals',
'intercepts',
'metres_gained',
'stoppage_clearances',
'score_involvements',
'tackles_in_50',
'turnovers'
]
match_cols = [
'odds',
'line'
]
ladder_columns = [
'wins',
'losses',
'draws',
'prem_points',
'played',
'points_for',
'points_against',
'percentage',
'position'
]
misc_columns = [
'games_played'
]
diff_cols = [
]
def get_upcoming_matches(matches, fixture, round_num=None):
if round_num == None: # Get the latest populated round
round_num = matches['round'].iloc[-1] + 1
fixture['round'] = fixture['round'].astype(str)
next_round = fixture[fixture['round']==str(round_num)]
# Get list of home and away
matches.sort_values(by=['season','round'],ascending=False,inplace=True)
teams = list(next_round['hometeam'])
teams = list(zip(teams,list(next_round['awayteam']))) # (home, away)
# Initialise upcoming round
df = pd.DataFrame()
output = pd.DataFrame(columns = h_ladder_cols + h_ladder_form_cols + a_ladder_cols + a_ladder_form_cols)
# For each team, find the data that is relevant to them
for team in teams:
h_last_match = matches[(matches['hteam'] == team[0]) | (matches['ateam'] == team[0])].iloc[0]
a_last_match = matches[(matches['hteam'] == team[1]) | (matches['ateam'] == team[1])].iloc[0]
# Home team conditions, and use the 'game_cols' to update the ladder and ladder form for that team
if team[0] == h_last_match['hteam']: # Home team was home team last game
h_last_match_rel_cols = h_last_match[h_ladder_cols + h_ladder_form_cols + game_cols]
h_last_match_rel_cols = update_ladder(h_last_match_rel_cols,'home')
elif team[0] == h_last_match['ateam']: # Home team was away team last game
h_last_match_rel_cols = h_last_match[a_ladder_cols + a_ladder_form_cols + game_cols]
h_last_match_rel_cols = update_ladder(h_last_match_rel_cols,'away')
# Away team conditions
if team[1] == a_last_match['hteam']: # Away team was home team last game
a_last_match_rel_cols = a_last_match[h_ladder_cols + h_ladder_form_cols + game_cols]
a_last_match_rel_cols = update_ladder(a_last_match_rel_cols,'home')
elif team[1] == a_last_match['ateam']: # Away team was away team last game
a_last_match_rel_cols = a_last_match[a_ladder_cols + a_ladder_form_cols + game_cols]
a_last_match_rel_cols = update_ladder(a_last_match_rel_cols,'away')
h_last_match_rel_cols['hteam'] = team[0]
a_last_match_rel_cols['ateam'] = team[1]
# Make sure the columns are the right format
h_col_final = []
for col in h_last_match_rel_cols.index:
if col[0] == 'h':
h_col_final.append(col)
else:
col = 'h' + col[1:]
h_col_final.append(col)
a_col_final = []
for col in a_last_match_rel_cols.index:
if col[0] == 'a':
a_col_final.append(col)
else:
col = 'a' + col[1:]
a_col_final.append(col)
h_last_match_rel_cols.index = h_col_final
a_last_match_rel_cols.index = a_col_final
# Add all of these to the output.
joined = pd.concat([h_last_match_rel_cols,a_last_match_rel_cols]).to_frame().T
joined.drop('hscore',axis=1,inplace=True)
joined.drop('ascore',axis=1,inplace=True)
output = output.append(joined)
matches.sort_values(by=['season','round'],ascending=True,inplace=True)
return output
def update_ladder(last_match_rel_cols, last_game_h_a):
if last_game_h_a == 'home':
# Update wins, losses, draws and prem points
if last_match_rel_cols['hscore'] > last_match_rel_cols['ascore']:
last_match_rel_cols['h_wins'] = last_match_rel_cols['h_wins'] + 1
last_match_rel_cols['h_wins_form'] = last_match_rel_cols['h_wins_form'] + 1
last_match_rel_cols['h_prem_points'] = last_match_rel_cols['h_prem_points'] + 4
last_match_rel_cols['h_prem_points_form'] = last_match_rel_cols['h_prem_points_form'] + 4
elif last_match_rel_cols['hscore'] < last_match_rel_cols['ascore']:
last_match_rel_cols['h_losses'] = last_match_rel_cols['h_losses'] + 1
last_match_rel_cols['h_losses_form'] = last_match_rel_cols['h_losses_form'] + 1
else:
last_match_rel_cols['h_draws'] = last_match_rel_cols['h_draws'] + 1
last_match_rel_cols['h_prem_points'] = last_match_rel_cols['h_prem_points'] + 2
last_match_rel_cols['h_prem_points_form'] = last_match_rel_cols['h_prem_points_form'] + 2
# Update points for and against
last_match_rel_cols['h_points_for'] = last_match_rel_cols['h_points_for'] + last_match_rel_cols['hscore']
last_match_rel_cols['h_points_against'] = last_match_rel_cols['h_points_against'] + last_match_rel_cols['ascore']
last_match_rel_cols['h_points_for_form'] = last_match_rel_cols['h_points_for_form'] + last_match_rel_cols['hscore']
last_match_rel_cols['h_points_against_form'] = last_match_rel_cols['h_points_against_form'] + last_match_rel_cols['ascore']
# Update percentage
last_match_rel_cols['h_percentage'] = (last_match_rel_cols['h_points_for'] / last_match_rel_cols['h_points_against']) * 100
last_match_rel_cols['h_percentage_form'] = (last_match_rel_cols['h_points_for_form'] / last_match_rel_cols['h_points_against_form']) * 100
if last_game_h_a == 'away':
# Update wins, losses, draws and prem points
if last_match_rel_cols['hscore'] > last_match_rel_cols['ascore']:
last_match_rel_cols['a_losses'] = last_match_rel_cols['a_losses'] + 1
last_match_rel_cols['a_losses_form'] = last_match_rel_cols['a_losses_form'] + 1
elif last_match_rel_cols['hscore'] < last_match_rel_cols['ascore']:
last_match_rel_cols['a_wins'] = last_match_rel_cols['a_wins'] + 1
last_match_rel_cols['a_wins_form'] = last_match_rel_cols['a_wins_form'] + 1
last_match_rel_cols['a_prem_points'] = last_match_rel_cols['a_prem_points'] + 4
last_match_rel_cols['a_prem_points_form'] = last_match_rel_cols['a_prem_points_form'] + 4
else:
last_match_rel_cols['a_draws'] = last_match_rel_cols['a_draws'] + 1
last_match_rel_cols['a_prem_points'] = last_match_rel_cols['a_prem_points'] + 2
last_match_rel_cols['a_prem_points_form'] = last_match_rel_cols['a_prem_points_form'] + 2
# Update points for and against
last_match_rel_cols['a_points_for'] = last_match_rel_cols['a_points_for'] + last_match_rel_cols['ascore']
last_match_rel_cols['a_points_against'] = last_match_rel_cols['a_points_against'] + last_match_rel_cols['hscore']
last_match_rel_cols['a_points_for_form'] = last_match_rel_cols['a_points_for_form'] + last_match_rel_cols['ascore']
last_match_rel_cols['a_points_against_form'] = last_match_rel_cols['a_points_against_form'] + last_match_rel_cols['hscore']
# Update percentage
last_match_rel_cols['a_percentage'] = (last_match_rel_cols['a_points_for'] / last_match_rel_cols['a_points_against']) * 100
last_match_rel_cols['a_percentage_form'] = (last_match_rel_cols['a_points_for_form'] / last_match_rel_cols['a_points_against_form']) * 100
return last_match_rel_cols
ladder_columns = {
('wins',0),
('losses',0),
('draws',0),
('prem_points',0),
('played',0),
('points_for',0),
('points_against',0),
('percentage',100),
('position',1)
}
ladder_cols = [i for i,j in ladder_columns]
h_ladder_cols = ['h_' + i for i,j in ladder_columns]
a_ladder_cols = ['a_' + i for i,j in ladder_columns]
h_ladder_form_cols = ['h_' + i + '_form' for i,j in ladder_columns]
a_ladder_form_cols = ['a_' + i + '_form' for i,j in ladder_columns]
h_ladder_form_cols_mapping = dict(zip(ladder_cols,h_ladder_form_cols))
a_ladder_form_cols_mapping = dict(zip(ladder_cols,a_ladder_form_cols))
game_cols = [
'hscore',
'ascore'
]
def update_last_game(df):
for idx,row in df.iterrows():
for col in cols_to_update:
single_game_col = col[7:] # This is the non-aggregate column, e.g. 'Supercoach' instead of 'career_Supercoach'
if col[0:7] == 'career_':
df.at[idx,col] = (df.at[idx,single_game_col] + (df.at[idx,col] * (df.at[idx,'career_games_played']))) / df.at[idx,'career_games_played']
elif col[0:7] == 'season_':
df.at[idx,col] = (df.at[idx,single_game_col] + (df.at[idx,col] * (df.at[idx,'season_games_played']))) / df.at[idx,'season_games_played']
else:
raise Exception('Column not found, check what columns you\'re passing')
return df
def get_player_names(image_string):
"""Returns the names of players who are named in a team."""
names = []
name = ''
i = 0
while i <= len(image_string):
if image_string[i] == ']':
name = ''
i += 2 # Skip the first space
else:
i += 1
continue
name = ''
while (image_string[i] != ',') & (image_string[i] != '\n'):
name += image_string[i]
i += 1
if i == len(image_string):
break
name = name.replace(' ','_')
names.append(name)
i += 1
return names
def get_player_data(player_ids):
last_games = pd.DataFrame(columns = players.columns)
for player in player_ids:
last_game_row = players[(players['playerid']==player) & (players['next_matchid'].isna())]
last_games = last_games.append(last_game_row)
return last_games
player_cols = ['AFLfantasy',
'Supercoach',
'behinds',
'bounces',
'brownlow',
'clangers',
'clearances',
'contested_marks',
'contested_poss',
'disposals',
'frees_against',
'frees_for',
'goal_assists',
'goals',
'handballs',
'hitouts',
'inside50',
'kicks',
'marks',
'marks_in_50',
'one_percenters',
'rebound50',
'tackles',
'tog',
'uncontested_poss',
'centre_clearances',
'disposal_efficiency',
'effective_disposals',
'intercepts',
'metres_gained',
'stoppage_clearances',
'score_involvements',
'tackles_in_50',
'turnovers',
'matchid',
'next_matchid',
'team',
'career_AFLfantasy',
'career_Supercoach',
'career_behinds',
'career_bounces',
'career_brownlow',
'career_clangers',
'career_clearances',
'career_contested_marks',
'career_contested_poss',
'career_disposals',
'career_frees_against',
'career_frees_for',
'career_goal_assists',
'career_goals',
'career_handballs',
'career_hitouts',
'career_inside50',
'career_kicks',
'career_marks',
'career_marks_in_50',
'career_one_percenters',
'career_rebound50',
'career_tackles',
'career_tog',
'career_uncontested_poss',
'career_centre_clearances',
'career_disposal_efficiency',
'career_effective_disposals',
'career_intercepts',
'career_metres_gained',
'career_stoppage_clearances',
'career_score_involvements',
'career_tackles_in_50',
'career_turnovers',
'season_AFLfantasy',
'season_Supercoach',
'season_behinds',
'season_bounces',
'season_brownlow',
'season_clangers',
'season_clearances',
'season_contested_marks',
'season_contested_poss',
'season_disposals',
'season_frees_against',
'season_frees_for',
'season_goal_assists',
'season_goals',
'season_handballs',
'season_hitouts',
'season_inside50',
'season_kicks',
'season_marks',
'season_marks_in_50',
'season_one_percenters',
'season_rebound50',
'season_tackles',
'season_tog',
'season_uncontested_poss',
'season_centre_clearances',
'season_disposal_efficiency',
'season_effective_disposals',
'season_intercepts',
'season_metres_gained',
'season_stoppage_clearances',
'season_score_involvements',
'season_tackles_in_50',
'season_turnovers',
'career_games_played',
'season_games_played']
def get_diff_cols(matches: pd.DataFrame) -> pd.DataFrame:
"""
For each metric that exists as a home (h_) and away (a_) column pair, add a
"diff" column giving home minus away; e.g. diff_percentage is the difference
in percentage between the home and away teams.
"""
print('Creating differential columns')
for col in matches.columns:
if col[0:2] == 'h_':
try:
h_col = col
a_col = 'a_' + col[2:]
diff_col = 'diff_' + col[2:]
matches[diff_col] = matches[h_col] - matches[a_col]
except TypeError:
pass
return matches
from typing import Type
import pandas as pd
class TeamLadder:
def __init__(self, team: str):
self.team = team
for column, init_val in ladder_columns:
setattr(self, column, init_val)
def add_prev_round_team_ladder(self, prev_round_team_ladder):
for col,val in prev_round_team_ladder.items():
self.__dict__[col] = val
def update_home_team(self, match):
self.played += 1
if match.hscore > match.ascore:
self.wins += 1
self.prem_points += 4
elif match.hscore == match.ascore:
self.draws += 1
self.prem_points += 2
else:
self.losses += 1
self.points_for += match.hscore
self.points_against += match.ascore
self.percentage = 100 * (self.points_for / self.points_against)
def update_away_team(self, match):
self.played += 1
if match.hscore < match.ascore:
self.wins += 1
self.prem_points += 4
elif match.hscore == match.ascore:
self.draws += 1
self.prem_points += 2
else:
self.losses += 1
self.points_for += match.ascore
self.points_against += match.hscore
self.percentage = 100 * (self.points_for / self.points_against)
def update_ladder(self, match):
"""
Update the ladder for the team based on the outcome of the game. There are
two possibilities: the team can be the home or the away team in the
provided match.
"""
if self.team == match.teams['home']:
self.update_home_team(match)
else:
self.update_away_team(match)
class Ladder:
"""Each round object holds the ladder details for that round for each team."""
def __init__(self, teams_in_season):
self.teams_in_season = teams_in_season
self.team_ladders = {}
def add_team_ladder(self, team_ladder):
self.team_ladders[team_ladder.team.team] = team_ladder
class Team:
"""Holds team-level data for a particular match."""
def __init__(self, generic_team_columns, home_or_away: str):
self.home_or_away = home_or_away
for column in generic_team_columns:
setattr(self, column, None)
def add_data(self, data: pd.DataFrame):
if self.home_or_away == 'home':
for home_col, generic_col in home_cols_mapped.items():
self.__dict__[generic_col] = data[home_col]
if self.home_or_away == 'away':
for away_col, generic_col in away_cols_mapped.items():
self.__dict__[generic_col] = data[away_col]
class Match:
"""Holds data about a match, as well as an object for each team."""
def __init__(self, match_columns):
self.teams = {
'home': None,
'away': None
}
for column in match_columns:
setattr(self, column, None)
def add_data(self, data: pd.DataFrame):
for column in self.__dict__.keys():
try:
self.__dict__[column] = data[column]
except KeyError:
continue
def add_home_team(self, team):
self.teams['home'] = team
def add_away_team(self, team):
self.teams['away'] = team
class Round:
"""Contains match and ladder data for each round."""
def __init__(self, round_num: int):
self.round_num = round_num
self.matches = []
self.bye_teams = []
self.ladder = None
def add_match(self, match):
self.matches.append(match)
def add_ladder(self, ladder):
self.ladder = ladder
class Season:
"""Contains the rounds for a season, and which teams competed."""
def __init__(self, year: int, teams):
self.year = year
self.teams = teams
self.rounds = {}
def add_round(self, round_obj: Type[Round]):
self.rounds[round_obj.round_num] = round_obj
class History:
"""Holds all season objects."""
def __init__(self):
self.seasons = {}
def add_season(self, season):
self.seasons[season.year] = season
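# Illustrative sketch (with made-up team names and round numbers) of how the container
# classes nest: a History holds Seasons, a Season holds Rounds, and each Round collects
# Match objects plus a Ladder for that round.
history = History()
season_2020 = Season(2020, teams=['Carlton', 'Essendon'])
round_1 = Round(1)
season_2020.add_round(round_1)
history.add_season(season_2020)
print(history.seasons[2020].rounds[1].round_num)  # -> 1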
from typing import Dict
def get_season_num_games(matches: pd.DataFrame) -> Dict:
"""Return a dictionary with seasons as keys and the number of games
in the season as values."""
seasons = matches['season'].unique()
rounds_in_season = dict.fromkeys(seasons,0)
for season in seasons:
rounds_in_season[season] = max(matches[matches['season']==season]['h_played']) + 1
return rounds_in_season
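# Hedged usage sketch for get_season_num_games with a tiny hand-made frame; it simply
# takes the maximum 'h_played' value per season and adds one.
toy = pd.DataFrame({'season': [2019, 2019, 2020], 'h_played': [0, 21, 10]})
print(get_season_num_games(toy))  # expected: {2019: 22, 2020: 11}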
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'dwd', 'sandbox-3', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
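# Purely illustrative (hypothetical) example of how the call above could be completed;
# the actual choices depend on the sea ice model being documented.
# DOC.set_value("Sea ice concentration")
# DOC.set_value("Sea ice thickness")
# DOC.set_value("Sea ice temperature")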
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Classes
|
<ASSISTANT_TASK:>
Python Code:
# Python language version
from platform import python_version
print('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())
# Creating a class called Livro
class Livro():
# This method will initialize every object created from this class
# The name of this method is __init__
# (self) is a reference to each attribute of an object created from this class
def __init__(self):
# Attributes of each object created from this class.
# The self indicates that these are attributes of the objects
self.titulo = 'O Monge e o Executivo'
self.isbn = 9988888
print("Construtor chamado para criar um objeto desta classe")
# Methods are functions that receive attributes of the created object as parameters
def imprime(self):
print("Foi criado o livro %s e ISBN %d" %(self.titulo, self.isbn))
# Creating an instance of the Livro class
Livro1 = Livro()
# Type of the Livro1 object
type(Livro1)
# Attribute of the Livro1 object
Livro1.titulo
# Method of the Livro1 object
Livro1.imprime()
# Creating the Livro class with parameters in the constructor method
class Livro():
def __init__(self, titulo, isbn):
self.titulo = titulo
self.isbn = isbn
print("Construtor chamado para criar um objeto desta classe")
def imprime(self, titulo, isbn):
print("Este é o livro %s e ISBN %d" %(titulo, isbn))
# Creating the Livro2 object, which is an instance of the Livro class
Livro2 = Livro("A Menina que Roubava Livros", 77886611)
Livro2.titulo
# Method of the Livro2 object
Livro2.imprime("A Menina que Roubava Livros", 77886611)
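# Optional variation (a sketch, not part of the original class): imprime could read the
# attributes already stored on self instead of receiving them again as parameters:
# def imprime(self):
#     print("Este é o livro %s e ISBN %d" % (self.titulo, self.isbn))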
# Creating the Cachorro (dog) class
class Cachorro():
def __init__(self, raça):
self.raça = raça
print("Construtor chamado para criar um objeto desta classe")
# Creating an object from the Cachorro class
Rex = Cachorro(raça='Labrador')
# Creating another object from the Cachorro class
Golias = Cachorro(raça='Huskie')
# Attribute of the Cachorro class, used by the created object
Rex.raça
# Attribute of the Cachorro class, used by the created object
Golias.raça
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We take a peek at the training data with the head() method below.
Step2: Next, we obtain a list of all of the categorical variables in the training data.
Step3: Define Function to Measure Quality of Each Approach
Step4: Score from Approach 1 (Drop Categorical Variables)
Step5: Score from Approach 2 (Ordinal Encoding)
Step6: In the code cell above, for each column, we randomly assign each unique value to a different integer. This is a common approach that is simpler than providing custom labels; however, we can expect an additional boost in performance if we provide better-informed labels for all ordinal variables.
|
<ASSISTANT_TASK:>
Python Code:
#$HIDE$
import pandas as pd
from sklearn.model_selection import train_test_split
# Read the data
data = pd.read_csv('../input/melbourne-housing-snapshot/melb_data.csv')
# Separate target from predictors
y = data.Price
X = data.drop(['Price'], axis=1)
# Divide data into training and validation subsets
X_train_full, X_valid_full, y_train, y_valid = train_test_split(X, y, train_size=0.8, test_size=0.2,
random_state=0)
# Drop columns with missing values (simplest approach)
cols_with_missing = [col for col in X_train_full.columns if X_train_full[col].isnull().any()]
X_train_full.drop(cols_with_missing, axis=1, inplace=True)
X_valid_full.drop(cols_with_missing, axis=1, inplace=True)
# "Cardinality" means the number of unique values in a column
# Select categorical columns with relatively low cardinality (convenient but arbitrary)
low_cardinality_cols = [cname for cname in X_train_full.columns if X_train_full[cname].nunique() < 10 and
X_train_full[cname].dtype == "object"]
# Select numerical columns
numerical_cols = [cname for cname in X_train_full.columns if X_train_full[cname].dtype in ['int64', 'float64']]
# Keep selected columns only
my_cols = low_cardinality_cols + numerical_cols
X_train = X_train_full[my_cols].copy()
X_valid = X_valid_full[my_cols].copy()
X_train.head()
# Get list of categorical variables
s = (X_train.dtypes == 'object')
object_cols = list(s[s].index)
print("Categorical variables:")
print(object_cols)
#$HIDE$
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
# Function for comparing different approaches
def score_dataset(X_train, X_valid, y_train, y_valid):
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
preds = model.predict(X_valid)
return mean_absolute_error(y_valid, preds)
drop_X_train = X_train.select_dtypes(exclude=['object'])
drop_X_valid = X_valid.select_dtypes(exclude=['object'])
print("MAE from Approach 1 (Drop categorical variables):")
print(score_dataset(drop_X_train, drop_X_valid, y_train, y_valid))
from sklearn.preprocessing import OrdinalEncoder
# Make copy to avoid changing original data
label_X_train = X_train.copy()
label_X_valid = X_valid.copy()
# Apply ordinal encoder to each column with categorical data
ordinal_encoder = OrdinalEncoder()
label_X_train[object_cols] = ordinal_encoder.fit_transform(X_train[object_cols])
label_X_valid[object_cols] = ordinal_encoder.transform(X_valid[object_cols])
print("MAE from Approach 2 (Ordinal Encoding):")
print(score_dataset(label_X_train, label_X_valid, y_train, y_valid))
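# Hedged sketch: instead of arbitrary integers, an explicit category order can be passed
# to OrdinalEncoder. The column name 'Type' and the ordering below are assumptions made
# for illustration only.
custom_encoder = OrdinalEncoder(categories=[['u', 't', 'h']])
# custom_encoder.fit_transform(X_train[['Type']])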
from sklearn.preprocessing import OneHotEncoder
# Apply one-hot encoder to each column with categorical data
OH_encoder = OneHotEncoder(handle_unknown='ignore', sparse=False)
OH_cols_train = pd.DataFrame(OH_encoder.fit_transform(X_train[object_cols]))
OH_cols_valid = pd.DataFrame(OH_encoder.transform(X_valid[object_cols]))
# One-hot encoding removed index; put it back
OH_cols_train.index = X_train.index
OH_cols_valid.index = X_valid.index
# Remove categorical columns (will replace with one-hot encoding)
num_X_train = X_train.drop(object_cols, axis=1)
num_X_valid = X_valid.drop(object_cols, axis=1)
# Add one-hot encoded columns to numerical features
OH_X_train = pd.concat([num_X_train, OH_cols_train], axis=1)
OH_X_valid = pd.concat([num_X_valid, OH_cols_valid], axis=1)
print("MAE from Approach 3 (One-Hot Encoding):")
print(score_dataset(OH_X_train, OH_X_valid, y_train, y_valid))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data
Step2: Extract Features
Step3: Train SVM on features
Step4: Inline question 1
Step5: | Learning Rate| Regularization Rate | Validation Accuracy | Test Accuracy |
|
<ASSISTANT_TASK:>
Python Code:
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-7, 2e-7, 3e-7, 5e-5, 8e-7]
regularization_strengths = [1e4, 2e4, 3e4, 4e4, 5e4, 6e4, 7e4, 8e4, 7e5]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for lr in learning_rates:
for rs in regularization_strengths:
svm = LinearSVM()
svm.train(X_train_feats, y_train, learning_rate = lr, reg = rs, num_iters = 2000)
train_accuracy = np.mean(y_train == svm.predict(X_train_feats))
val_accuracy = np.mean(y_val == svm.predict(X_val_feats))
results[(lr, rs)] = (train_accuracy, val_accuracy)
if val_accuracy > best_val:
best_val = val_accuracy
best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print 'lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy)
print 'best validation accuracy achieved during cross-validation: %f' % best_val
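# Optional sketch: visualize validation accuracy over the (learning rate, regularization)
# grid to see which region of hyperparameter space performs best.
x_scatter = [np.log10(lr) for (lr, reg) in results]
y_scatter = [np.log10(reg) for (lr, reg) in results]
colors = [results[(lr, reg)][1] for (lr, reg) in results]
plt.scatter(x_scatter, y_scatter, c=colors)
plt.colorbar()
plt.xlabel('log10 learning rate')
plt.ylabel('log10 regularization strength')
plt.title('Validation accuracy')
plt.show()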
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print test_accuracy
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
print X_train_feats.shape
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
best_net = None
best_val_acc = 0.0
best_hidden_size = None
best_learning_rate = None
best_regularization_strength = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
learning_rates = np.logspace(-1, 2, 10)
regularization_strengths = np.logspace(-4, -1, 10)
print '| Learning Rate| Regularization Rate | Validation Accuracy | Test Accuracy |'
print '| --- | --- | --- | --- |'
for learning_rate in learning_rates:
for regularization_strength in regularization_strengths:
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
# Train the network
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=5000, batch_size=500,
learning_rate=learning_rate, learning_rate_decay=0.95,
reg=regularization_strength, verbose=False)
# Predict on the validation set
val_acc = (net.predict(X_val_feats) == y_val).mean()
test_acc = (net.predict(X_test_feats) == y_test).mean()
if best_val_acc < val_acc:
best_val_acc = val_acc
best_net = net
best_learning_rate = learning_rate
best_regularization_strength = regularization_strength
print '|', learning_rate, '|', regularization_strength,'|', val_acc,'|',test_acc, '|'
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print test_acc
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Read in the hanford.csv file
Step2: 3. Calculate the basic descriptive statistics on the data
Step3: 4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?
Step4: 5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure
Step5: 6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
Step6: 7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10
Step7: Now using statsmodels
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
import numpy as np
df = pd.read_csv("data/hanford.csv")
df
df.describe()
df.hist()
df.corr()
df.plot(kind='scatter',x='Exposure',y='Mortality')
lm = LinearRegression()
data = np.asarray(df[['Mortality','Exposure']])
x = data[:,1:]
y = data[:,0]
lm.fit(x,y)
lm.score(x,y)
m = lm.coef_[0]
m
b = lm.intercept_
b
df.plot(kind='scatter',x='Exposure',y='Mortality')
plt.plot(df['Exposure'],m*df['Exposure']+b,'-')
lm.predict(10)
import statsmodels.formula.api as smf
lm = smf.ols(formula='Mortality~Exposure',data=df).fit()
lm.params
intercept, slope = lm.params
df.plot(kind='scatter',x='Exposure',y='Mortality')
plt.plot(df['Exposure'],slope*df['Exposure']+intercept,'-')
plt.xkcd()
df.plot(kind='scatter',x='Exposure',y='Mortality')
plt.plot(df['Exposure'],slope*df['Exposure']+intercept,'-')
lm.summary()
lm.mse_model
lm.pvalues
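# Hedged cross-check: the statsmodels fit should give roughly the same prediction at an
# exposure index of 10 as the scikit-learn model above.
slope * 10 + intercept
# or equivalently: lm.predict(pd.DataFrame({'Exposure': [10]}))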
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set up data
Step2: The arrays have dimensions [sample, time step, feature]
Step4: So we get a better train score and a worse validation score. This indicates overfitting.
Step5: Same as with the first RNN above we seem to overfit to the dataset, but maybe not as strongly. Let's now try a more complex model with a longer sequence length.
Step6: So again we are overfitting, but maybe there is something to be learned. Let's first add some regularization and then try a longer training set.
Step7: So with drop out we get slightly better validation results, but we are still starting to overfit. I think there is a lot of parameter tuning that would be possible with the complexity of the network and so forth.
Step8: Maybe a small improvement. Now let's test our sequence model with a longer training period.
Step9: Get additional variables
Step10: Combining with embeddings
Step11: Embedding with only temperature
Step12: Add previous forecasts and observations as features
Step13: Erm ok wow, that is pretty incredible. But wait maybe this is very similar to the embedding information.
Step14: Alright, I need to check whether I am cheating, but for now let's try to build the best model.
Step15: Go further back and test importance
|
<ASSISTANT_TASK:>
Python Code:
# Imports
import sys
sys.path.append('../') # This is where all the python files are!
from importlib import reload
import utils; reload(utils)
from utils import *
import keras_models; reload(keras_models)
from keras_models import *
import losses; reload(losses)
from losses import crps_cost_function, crps_cost_function_seq
import matplotlib.pyplot as plt
%matplotlib inline
import keras
from keras.layers import Input, Dense, merge, Embedding, Flatten, Dropout, \
SimpleRNN, LSTM, TimeDistributed, GRU, Dropout, Masking
from keras.layers.merge import Concatenate
from keras.models import Model, Sequential
import keras.backend as K
from keras.callbacks import EarlyStopping
from keras.optimizers import SGD, Adam
# Use this if you want to limit the GPU RAM usage
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
set_session(tf.Session(config=config))
# Basic setup
DATA_DIR = '/Volumes/STICK/data/ppnn_data/' # Mac
# DATA_DIR = '/project/meteo/w2w/C7/ppnn_data/' # LMU
results_dir = '../results/'
window_size = 25 # Days in rolling window
fclt = 48 # Forecast lead time in hours
seq_len=5
train_dates = ['2015-01-01', '2016-01-01']
test_dates = ['2016-01-01', '2017-01-01']
train_set, test_set = get_train_test_sets(DATA_DIR, train_dates, test_dates,
seq_len=seq_len, fill_value=-999.)
train_set.features.shape, train_set.targets.shape
batch_size = 1024
hidden_nodes = 100 # Number of hidden nodes inside RNN cell
inp = Input(shape=(seq_len, 2, )) # time step, feature
x = GRU(hidden_nodes)(inp)
x = Dense(2, activation='linear')(x)
rnn_model = Model(inputs=inp, outputs=x)
rnn_model.compile(optimizer=Adam(0.01), loss=crps_cost_function)
rnn_model.summary()
rnn_model.fit(train_set.features, train_set.targets[:,-1], epochs=10, batch_size=batch_size,
validation_data=(test_set.features, test_set.targets[:,-1]))
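# For reference, a minimal sketch of what a CRPS loss for a Gaussian predictive
# distribution can look like in Keras backend notation. This is an assumption for
# illustration only, not the crps_cost_function imported from the losses module;
# the two network outputs are interpreted as the mean and the (possibly negative)
# standard deviation of a normal distribution.
def gaussian_crps_sketch(y_true, y_pred):
    mu = y_pred[:, 0]
    sigma = K.abs(y_pred[:, 1])
    z = (K.flatten(y_true) - mu) / sigma
    pdf = K.exp(-K.square(z) / 2.) / np.sqrt(2. * np.pi)
    cdf = 0.5 * (1. + tf.math.erf(z / np.sqrt(2.)))  # tf.erf in older TensorFlow
    crps = sigma * (z * (2. * cdf - 1.) + 2. * pdf - 1. / np.sqrt(np.pi))
    return K.mean(crps)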
inp = Input(shape=(seq_len, 2, )) # time step, feature
x = GRU(hidden_nodes, return_sequences=True)(inp)
x = TimeDistributed(Dense(2, activation='linear'))(x)
seq_rnn_model = Model(inputs=inp, outputs=x)
seq_rnn_model.summary()
seq_rnn_model.compile(optimizer=Adam(0.01), loss=crps_cost_function_seq,
sample_weight_mode="temporal")
def train_and_valid(model, train_set, test_set, epochs, batch_size, verbose=0, emb=False):
"""Write our own function to train and validate,
because the keras fit function cannot handle sample weights for training
and validation at the same time."""
train_inp = [train_set.features, train_set.cont_ids] if emb else train_set.features
test_inp = [test_set.features, test_set.cont_ids] if emb else test_set.features
for i in range(epochs):
print('Epoch:', i+1)
t1 = timeit.default_timer()
h = model.fit(train_inp, train_set.targets, epochs=1, batch_size=batch_size,
sample_weight=train_set.sample_weights, verbose=verbose)
t2 = timeit.default_timer()
print('Train loss: %.4f - Valid loss: %.4f - Time: %.1fs' % (h.history['loss'][0],
model.evaluate(test_inp, test_set.targets, batch_size=10000,
sample_weight=test_set.sample_weights, verbose=verbose),
t2 - t1))
train_and_valid(seq_rnn_model, train_set, test_set, 10, batch_size)
seq_len = 20
train_dates = ['2015-01-01', '2016-01-01']
test_dates = ['2016-01-01', '2017-01-01']
train_set, test_set = get_train_test_sets(DATA_DIR, train_dates, test_dates,
seq_len=seq_len, fill_value=-999.)
hidden_nodes = 200
inp = Input(shape=(seq_len, 2, )) # time step, feature
x = GRU(hidden_nodes, return_sequences=True)(inp)
x = TimeDistributed(Dense(2, activation='linear'))(x)
seq_rnn_model = Model(inputs=inp, outputs=x)
seq_rnn_model.summary()
seq_rnn_model.compile(optimizer=Adam(0.01), loss=crps_cost_function_seq,
sample_weight_mode="temporal")
train_and_valid(seq_rnn_model, train_set, test_set, 10, batch_size)
inp = Input(shape=(seq_len, 2, )) # time step, feature
x = GRU(hidden_nodes, return_sequences=True, recurrent_dropout=0.5)(inp)
x = TimeDistributed(Dense(2, activation='linear'))(x)
seq_rnn_model = Model(inputs=inp, outputs=x)
seq_rnn_model.compile(optimizer=Adam(0.001), loss=crps_cost_function_seq,
sample_weight_mode="temporal")
train_and_valid(seq_rnn_model, train_set, test_set, 10, batch_size)
train_dates_long = ['2008-01-01', '2016-01-01']
train_set, test_set = get_train_test_sets(DATA_DIR, train_dates_long, test_dates)
# Copied from fc_network notebook
def build_fc_model():
inp = Input(shape=(2,))
x = Dense(2, activation='linear')(inp)
return Model(inputs=inp, outputs=x)
fc_model = build_fc_model()
fc_model.compile(optimizer=Adam(0.1), loss=crps_cost_function)
fc_model.fit(train_set.features, train_set.targets, epochs=10, batch_size=1024,
validation_data=[test_set.features, test_set.targets])
seq_len = 20
train_set, test_set = get_train_test_sets(DATA_DIR, train_dates_long, test_dates,
seq_len=seq_len, fill_value=-999.)
def build_seq_rnn(hidden_nodes, n_features, dropout=0, lr=0.01):
inp = Input(shape=(seq_len, n_features, )) # time step, feature
x = GRU(hidden_nodes, return_sequences=True, recurrent_dropout=dropout)(inp)
x = TimeDistributed(Dense(2, activation='linear'))(x)
seq_rnn_model = Model(inputs=inp, outputs=x)
seq_rnn_model.compile(optimizer=Adam(lr), loss=crps_cost_function_seq,
sample_weight_mode="temporal")
return seq_rnn_model
inp = Input(shape=(seq_len, 2, )) # time step, feature
x = GRU(hidden_nodes, return_sequences=True, recurrent_dropout=0.5)(inp)
x = TimeDistributed(Dense(2, activation='linear'))(x)
seq_rnn_model = Model(inputs=inp, outputs=x)
seq_rnn_model.compile(optimizer=Adam(0.001), loss=crps_cost_function_seq,
sample_weight_mode="temporal")
# This takes several minutes on the GPU
# Epoch counter: 7
train_and_valid(seq_rnn_model, train_set, test_set, 2, batch_size)
from collections import OrderedDict
aux_dict = OrderedDict()
aux_dict['data_aux_geo_interpolated.nc'] = ['orog',
'station_alt',
'station_lat',
'station_lon']
aux_dict['data_aux_pl500_interpolated_00UTC.nc'] = ['u_pl500_fc',
'v_pl500_fc',
'gh_pl500_fc']
aux_dict['data_aux_pl850_interpolated_00UTC.nc'] = ['u_pl850_fc',
'v_pl850_fc',
'q_pl850_fc']
aux_dict['data_aux_surface_interpolated_00UTC.nc'] = ['cape_fc',
'sp_fc',
'tcc_fc']
# Start with just one training year
train_set, test_set = get_train_test_sets(DATA_DIR, train_dates, test_dates,
seq_len=seq_len, fill_value=-999., aux_dict=aux_dict)
train_set.cont_ids.shape
n_features = train_set.features.shape[-1]
n_features
seq_rnn_model = build_seq_rnn(hidden_nodes, n_features, dropout=0.5, lr=0.01)
# Epoch counter: 8
train_and_valid(seq_rnn_model, train_set, test_set, 2, batch_size, verbose=1)
seq_rnn_model.summary()
def build_seq_rnn_with_embeddings(seq_len, hidden_nodes, n_features, emb_size, max_id,
recurrent_dropout=0, dropout=0, lr=0.01):
features_inp = Input(shape=(seq_len, n_features, )) # time step, feature
id_in = Input(shape=(seq_len,))
emb = Embedding(max_id + 1, emb_size)(id_in)
x = GRU(hidden_nodes, return_sequences=True, recurrent_dropout=recurrent_dropout)(features_inp)
x = Concatenate()([x, emb])
x = Dropout(dropout)(x)
x = TimeDistributed(Dense(2, activation='linear'))(x)
model = Model(inputs=[features_inp, id_in], outputs=x)
model.compile(optimizer=Adam(lr), loss=crps_cost_function_seq,
sample_weight_mode="temporal")
return model
emb_size = 5
max_id = int(np.max([train_set.cont_ids.max(), test_set.cont_ids.max()]))
hidden_nodes, max_id
emb_rnn = build_seq_rnn_with_embeddings(seq_len, hidden_nodes, n_features, emb_size, max_id, 0.5)
emb_rnn.summary()
train_and_valid(emb_rnn, train_set, test_set, 5, batch_size, emb=True)
train_set, test_set = get_train_test_sets(DATA_DIR, train_dates, test_dates,
seq_len=2, fill_value=-999.)
emb_rnn = build_seq_rnn_with_embeddings(2, 10, 2, emb_size, max_id, recurrent_dropout=0)
emb_rnn.summary()
train_and_valid(emb_rnn, train_set, test_set, 5, batch_size, emb=True)
train_dates_long = ['2008-01-01', '2016-01-01']
train_set_long, test_set = get_train_test_sets(DATA_DIR, train_dates_long, test_dates,
seq_len=seq_len, fill_value=-999.)
emb_rnn = build_seq_rnn_with_embeddings(seq_len, 30, 2, emb_size, max_id, recurrent_dropout=0.3)
train_and_valid(emb_rnn, train_set_long, test_set, 15, batch_size, emb=True)
train_dates = ['2015-01-01', '2016-01-01']
test_dates = ['2016-01-01', '2017-01-01']
train_set, test_set = get_train_test_sets(DATA_DIR, train_dates, test_dates,
add_current_error=True)
train_set.feature_names
train_set.features.shape
test_set.features[10000:11000, :]
fc_model = build_fc_model(5, 2, compile=True)
# Note: I am running this cell several times
fc_model.fit(train_set.features, train_set.targets, epochs=10, batch_size=1024,
validation_data=[test_set.features, test_set.targets])
emb_size = 3
max_id = int(np.max([train_set.cont_ids.max(), test_set.cont_ids.max()]))
max_id
emb_model = build_emb_model(5, 2, [], emb_size, max_id, compile=True,
lr=0.01)
emb_model.fit([train_set.features, train_set.cont_ids], train_set.targets,
epochs=10, batch_size=1024,
validation_data=[[test_set.features, test_set.cont_ids], test_set.targets])
from collections import OrderedDict
more_aux_dict = OrderedDict()
more_aux_dict['data_aux_geo_interpolated.nc'] = ['orog',
'station_alt',
'station_lat',
'station_lon']
more_aux_dict['data_aux_pl500_interpolated_00UTC.nc'] = ['u_pl500_fc',
'v_pl500_fc',
'gh_pl500_fc']
more_aux_dict['data_aux_pl850_interpolated_00UTC.nc'] = ['u_pl850_fc',
'v_pl850_fc',
'q_pl850_fc']
more_aux_dict['data_aux_surface_interpolated_00UTC.nc'] = ['cape_fc',
'sp_fc',
'tcc_fc']
more_aux_dict['data_aux_surface_more_interpolated_part1_00UTC.nc'] = [
'sshf_fc', 'slhf_fc', 'u10_fc','v10_fc'
]
more_aux_dict['data_aux_surface_more_interpolated_part2_00UTC.nc'] = [
'ssr_fc', 'str_fc', 'd2m_fc','sm_fc'
]
train_dates = ['2015-01-01', '2016-01-01']
test_dates = ['2016-01-01', '2017-01-01']
more_train_set, more_test_set = get_train_test_sets(DATA_DIR, train_dates, test_dates,
aux_dict=more_aux_dict, add_current_error=True)
emb_size = 3
max_id = int(np.max([more_train_set.cont_ids.max(), more_test_set.cont_ids.max()]))
max_id
emb_model = build_emb_model(more_train_set.features.shape[1], 2, [50], 3, max_id,
compile=True, lr=0.01)
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(emb_model, show_shapes=True).create(prog='dot', format='svg'))
emb_model.fit([more_train_set.features, more_train_set.cont_ids], more_train_set.targets, epochs=30,
batch_size=4096, validation_split=0.0)
#callbacks=[EarlyStopping(monitor='val_loss',
# min_delta=0,
# patience=3)])
emb_model.evaluate([more_test_set.features, more_test_set.cont_ids], more_test_set.targets, batch_size=10000)
emb_model.summary()
long_train_dates = ['2008-01-01', '2016-01-01']
long_more_train_set, more_test_set = get_train_test_sets(DATA_DIR, long_train_dates, test_dates,
aux_dict=more_aux_dict,
add_current_error=True)
emb_model = build_emb_model(long_more_train_set.features.shape[1], 2, [50], 3, max_id,
compile=True, lr=0.01)
emb_model.fit([long_more_train_set.features, long_more_train_set.cont_ids], long_more_train_set.targets, epochs=50,
batch_size=4096, validation_split=0.2,
callbacks=[EarlyStopping(monitor='val_loss',
min_delta=0,
patience=2)])
emb_model.evaluate([more_test_set.features, more_test_set.cont_ids], test_set.targets, batch_size=10000)
# Test current error
train_dates = ['2015-01-01', '2016-01-01']
test_dates = ['2016-01-01', '2017-01-01']
train_set, test_set = get_train_test_sets(DATA_DIR, train_dates, test_dates,
add_current_error=True,
current_error_len=1)
train_set.feature_names
fc_model = build_fc_model(4, 2, compile=True)
fc_model.fit(train_set.features, train_set.targets, epochs=10, batch_size=1024,
validation_data=[test_set.features, test_set.targets])
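# Hedged sketch of one way to probe feature importance (an assumption about what "test
# importance" could mean here): shuffle a single feature column in the test set and see
# how much the CRPS degrades relative to the unshuffled evaluation above.
def permutation_importance_sketch(model, features, targets, col_idx):
    shuffled = features.copy()
    np.random.shuffle(shuffled[:, col_idx])
    return model.evaluate(shuffled, targets, batch_size=10000, verbose=0)
# Hypothetical usage, e.g. for the column holding the most recent observed error:
# permutation_importance_sketch(fc_model, test_set.features, test_set.targets, 2)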
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic rich display
Step2: Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate.
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import display
from IPython.display import Image
assert True # leave this to grade the import statements
Image(url='http://easyscienceforkids.com/wp-content/uploads/2013/06/ICI.jpg', embed=True, width=600, height=600)
assert True # leave this to grade the image display
%%html
<center>
<table>
<tr>
<th>Name</th>
<th>Symbol</th>
<th>Antiparticle</th>
<th>Charge (e)</th>
<th>Mass $(MeV/c^2)$</th>
</tr>
<tr>
<td>up</td>
<td>$u$</td>
<td>$\bar{u}$</td>
<td>+$\frac{2}{3}$</td>
<td>1.5-3.3</td>
</tr>
<tr>
<td>down</td>
<td>$d$</td>
<td>$\bar{d}$</td>
<td>-$\frac{1}{3}$</td>
<td>3.5-6.0</td>
</tr>
<tr>
<td>charm</td>
<td>d</td>
<td>$\bar{c}$</td>
<td>+$\frac{2}{3}$</td>
<td>1,160-1,340</td>
</tr>
<tr>
<td>strange</td>
<td>s</td>
<td>$\bar{s}$</td>
<td>-$\frac{1}{3}$</td>
<td>70-130</td>
</tr>
<tr>
<td>top</td>
<td>t</td>
<td>$\bar{t}$</td>
<td>+$\frac{2}{3}$</td>
<td>169,100-173,300</td>
</tr>
<tr>
<td>bottom</td>
<td>b</td>
<td>$\bar{b}$</td>
<td>-$\frac{1}{3}$</td>
<td>4,130-4,370</td>
</tr>
</table>
</center>
assert True # leave this here to grade the quark table
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As usual, we import everything we need.
Step2: First, we load and preprocess the data. We use runs 6, 10, and 14 from
Step3: Now we can create 5s epochs around events of interest.
Step4: Here we set suitable values for computing ERDS maps.
Step5: Finally, we perform time/frequency decomposition over all epochs.
Step6: Similar to ~mne.Epochs objects, we can also export data from
Step7: This allows us to use additional plotting functions like
Step8: Having the data as a DataFrame also facilitates subsetting,
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Clemens Brunner <clemens.brunner@gmail.com>
# Felix Klotzsche <klotzsche@cbs.mpg.de>
#
# License: BSD-3-Clause
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import TwoSlopeNorm
import pandas as pd
import seaborn as sns
import mne
from mne.datasets import eegbci
from mne.io import concatenate_raws, read_raw_edf
from mne.time_frequency import tfr_multitaper
from mne.stats import permutation_cluster_1samp_test as pcluster_test
fnames = eegbci.load_data(subject=1, runs=(6, 10, 14))
raw = concatenate_raws([read_raw_edf(f, preload=True) for f in fnames])
raw.rename_channels(lambda x: x.strip('.')) # remove dots from channel names
events, _ = mne.events_from_annotations(raw, event_id=dict(T1=2, T2=3))
tmin, tmax = -1, 4
event_ids = dict(hands=2, feet=3) # map event IDs to tasks
epochs = mne.Epochs(raw, events, event_ids, tmin - 0.5, tmax + 0.5,
picks=('C3', 'Cz', 'C4'), baseline=None, preload=True)
freqs = np.arange(2, 36) # frequencies from 2-35Hz
vmin, vmax = -1, 1.5 # set min and max ERDS values in plot
baseline = [-1, 0] # baseline interval (in s)
cnorm = TwoSlopeNorm(vmin=vmin, vcenter=0, vmax=vmax) # min, center & max ERDS
kwargs = dict(n_permutations=100, step_down_p=0.05, seed=1,
buffer_size=None, out_type='mask') # for cluster test
tfr = tfr_multitaper(epochs, freqs=freqs, n_cycles=freqs, use_fft=True,
return_itc=False, average=False, decim=2)
tfr.crop(tmin, tmax).apply_baseline(baseline, mode="percent")
for event in event_ids:
# select desired epochs for visualization
tfr_ev = tfr[event]
fig, axes = plt.subplots(1, 4, figsize=(12, 4),
gridspec_kw={"width_ratios": [10, 10, 10, 1]})
for ch, ax in enumerate(axes[:-1]): # for each channel
# positive clusters
_, c1, p1, _ = pcluster_test(tfr_ev.data[:, ch], tail=1, **kwargs)
# negative clusters
_, c2, p2, _ = pcluster_test(tfr_ev.data[:, ch], tail=-1, **kwargs)
# note that we keep clusters with p <= 0.05 from the combined clusters
# of two independent tests; in this example, we do not correct for
# these two comparisons
c = np.stack(c1 + c2, axis=2) # combined clusters
p = np.concatenate((p1, p2)) # combined p-values
mask = c[..., p <= 0.05].any(axis=-1)
# plot TFR (ERDS map with masking)
tfr_ev.average().plot([ch], cmap="RdBu", cnorm=cnorm, axes=ax,
colorbar=False, show=False, mask=mask,
mask_style="mask")
ax.set_title(epochs.ch_names[ch], fontsize=10)
ax.axvline(0, linewidth=1, color="black", linestyle=":") # event
if ch != 0:
ax.set_ylabel("")
ax.set_yticklabels("")
fig.colorbar(axes[0].images[-1], cax=axes[-1]).ax.set_yscale("linear")
fig.suptitle(f"ERDS ({event})")
plt.show()
df = tfr.to_data_frame(time_format=None)
df.head()
df = tfr.to_data_frame(time_format=None, long_format=True)
# Map to frequency bands:
freq_bounds = {'_': 0,
'delta': 3,
'theta': 7,
'alpha': 13,
'beta': 35,
'gamma': 140}
df['band'] = pd.cut(df['freq'], list(freq_bounds.values()),
labels=list(freq_bounds)[1:])
# Filter to retain only relevant frequency bands:
freq_bands_of_interest = ['delta', 'theta', 'alpha', 'beta']
df = df[df.band.isin(freq_bands_of_interest)]
df['band'] = df['band'].cat.remove_unused_categories()
# Order channels for plotting:
df['channel'] = df['channel'].cat.reorder_categories(('C3', 'Cz', 'C4'),
ordered=True)
g = sns.FacetGrid(df, row='band', col='channel', margin_titles=True)
g.map(sns.lineplot, 'time', 'value', 'condition', n_boot=10)
axline_kw = dict(color='black', linestyle='dashed', linewidth=0.5, alpha=0.5)
g.map(plt.axhline, y=0, **axline_kw)
g.map(plt.axvline, x=0, **axline_kw)
g.set(ylim=(None, 1.5))
g.set_axis_labels("Time (s)", "ERDS (%)")
g.set_titles(col_template="{col_name}", row_template="{row_name}")
g.add_legend(ncol=2, loc='lower center')
g.fig.subplots_adjust(left=0.1, right=0.9, top=0.9, bottom=0.08)
df_mean = (df.query('time > 1')
.groupby(['condition', 'epoch', 'band', 'channel'])[['value']]
.mean()
.reset_index())
g = sns.FacetGrid(df_mean, col='condition', col_order=['hands', 'feet'],
margin_titles=True)
g = (g.map(sns.violinplot, 'channel', 'value', 'band', n_boot=10,
palette='deep', order=['C3', 'Cz', 'C4'],
hue_order=freq_bands_of_interest,
linewidth=0.5).add_legend(ncol=4, loc='lower center'))
g.map(plt.axhline, **axline_kw)
g.set_axis_labels("", "ERDS (%)")
g.set_titles(col_template="{col_name}", row_template="{row_name}")
g.fig.subplots_adjust(left=0.1, right=0.9, top=0.9, bottom=0.3)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: version 1.0.0
Step5: Let's examine the lines that were just loaded in the two subset (small) files - one from Google and one from Amazon
Step7: Part 1
Step9: (1b) Removing stopwords
Step11: (1c) Tokenizing the small datasets
Step13: (1d) Amazon record with the most tokens
Step15: Part 2
Step16: (2b) Create a corpus
Step18: (2c) Implement an IDFs function
Step19: (2d) Tokens with the smallest IDF
Step20: (2e) IDF Histogram
Step22: (2f) Implement a TF-IDF function
Step26: Part 3
Step28: (3b) Implement a cosineSimilarity function
Step31: (3c) Perform Entity Resolution
Step34: (3d) Perform Entity Resolution with Broadcast Variables
Step36: (3e) Perform a Gold Standard evaluation
Step37: Using the "gold standard" data we can answer the following questions
Step38: Part 4
Step39: (4b) Compute IDFs and TF-IDFs for the full datasets
Step40: (4c) Compute Norms for the weights from the full datasets
Step42: (4d) Create inverted indicies from the full datasets
Step44: (4e) Identify common tokens from the full dataset
Step46: (4f) Identify common tokens from the full dataset
Step47: Part 5
Step48: The next step is to pick a threshold between 0 and 1 for the count of True Positives (true duplicates above the threshold). However, we would like to explore many different thresholds. To do this, we divide the space of thresholds into 100 bins, and take the following actions
Step49: (5b) Precision, Recall, and F-measures
Step50: (5c) Line Plots
|
<ASSISTANT_TASK:>
Python Code:
import re
DATAFILE_PATTERN = '^(.+),"(.+)",(.*),(.*),(.*)'
def removeQuotes(s):
Remove quotation marks from an input string
Args:
s (str): input string that might have the quote "" characters
Returns:
str: a string without the quote characters
return ''.join(i for i in s if i!='"')
def parseDatafileLine(datafileLine):
Parse a line of the data file using the specified regular expression pattern
Args:
datafileLine (str): input string that is a line from the data file
Returns:
str: a string parsed using the given regular expression and without the quote characters
match = re.search(DATAFILE_PATTERN, datafileLine)
if match is None:
print 'Invalid datafile line: %s' % datafileLine
return (datafileLine, -1)
elif match.group(1) == '"id"':
print 'Header datafile line: %s' % datafileLine
return (datafileLine, 0)
else:
product = '%s %s %s' % (match.group(2), match.group(3), match.group(4))
return ((removeQuotes(match.group(1)), product), 1)
import sys
import os
from test_helper import Test
baseDir = os.path.join('data')
inputPath = os.path.join('cs100', 'lab3')
GOOGLE_PATH = 'Google.csv'
GOOGLE_SMALL_PATH = 'Google_small.csv'
AMAZON_PATH = 'Amazon.csv'
AMAZON_SMALL_PATH = 'Amazon_small.csv'
GOLD_STANDARD_PATH = 'Amazon_Google_perfectMapping.csv'
STOPWORDS_PATH = 'stopwords.txt'
def parseData(filename):
Parse a data file
Args:
filename (str): input file name of the data file
Returns:
RDD: a RDD of parsed lines
return (sc
.textFile(filename, 4, 0)
.map(parseDatafileLine)
.cache())
def loadData(path):
Load a data file
Args:
path (str): input file name of the data file
Returns:
RDD: a RDD of parsed valid lines
filename = os.path.join(baseDir, inputPath, path)
raw = parseData(filename).cache()
failed = (raw
.filter(lambda s: s[1] == -1)
.map(lambda s: s[0]))
for line in failed.take(10):
print '%s - Invalid datafile line: %s' % (path, line)
valid = (raw
.filter(lambda s: s[1] == 1)
.map(lambda s: s[0])
.cache())
print '%s - Read %d lines, successfully parsed %d lines, failed to parse %d lines' % (path,
raw.count(),
valid.count(),
failed.count())
assert failed.count() == 0
assert raw.count() == (valid.count() + 1)
return valid
googleSmall = loadData(GOOGLE_SMALL_PATH)
google = loadData(GOOGLE_PATH)
amazonSmall = loadData(AMAZON_SMALL_PATH)
amazon = loadData(AMAZON_PATH)
for line in googleSmall.take(3):
print 'google: %s: %s\n' % (line[0], line[1])
for line in amazonSmall.take(3):
print 'amazon: %s: %s\n' % (line[0], line[1])
quickbrownfox = 'A quick brown fox jumps over the lazy dog.'
split_regex = r'\W+'
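# \W+ matches one or more non-word characters, so splitting on it strips punctuation and whitespace in a single pass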
def simpleTokenize(string):
A simple implementation of input string tokenization
Args:
string (str): input string
Returns:
list: a list of tokens
return filter(lambda token: token != '',
map(lambda token: token.strip().lower(),
re.split(split_regex, string)))
print simpleTokenize(quickbrownfox)
# TEST Tokenize a String (1a)
Test.assertEquals(simpleTokenize(quickbrownfox),
['a','quick','brown','fox','jumps','over','the','lazy','dog'],
'simpleTokenize should handle sample text')
Test.assertEquals(simpleTokenize(' '), [], 'simpleTokenize should handle empty string')
Test.assertEquals(simpleTokenize('!!!!123A/456_B/789C.123A'), ['123a','456_b','789c','123a'],
'simpleTokenize should handle puntuations and lowercase result')
Test.assertEquals(simpleTokenize('fox fox'), ['fox', 'fox'],
'simpleTokenize should not remove duplicates')
stopfile = os.path.join(baseDir, inputPath, STOPWORDS_PATH)
stopwords = set(sc.textFile(stopfile).collect())
print 'These are the stopwords: %s' % stopwords
def tokenize(string):
An implementation of input string tokenization that excludes stopwords
Args:
string (str): input string
Returns:
list: a list of tokens without stopwords
return filter(lambda token: token not in stopwords, simpleTokenize(string))
print tokenize(quickbrownfox)
# TEST Removing stopwords (1b)
Test.assertEquals(tokenize("Why a the?"), [], 'tokenize should remove all stopwords')
Test.assertEquals(tokenize("Being at the_?"), ['the_'], 'tokenize should handle non-stopwords')
Test.assertEquals(tokenize(quickbrownfox), ['quick','brown','fox','jumps','lazy','dog'],
'tokenize should handle sample text')
amazonRecToToken = amazonSmall.map(lambda line: (line[0], tokenize(line[1])))
googleRecToToken = googleSmall.map(lambda line: (line[0], tokenize(line[1])))
def countTokens(vendorRDD):
Count and return the number of tokens
Args:
vendorRDD (RDD of (recordId, tokenizedValue)): Pair tuple of record ID to tokenized output
Returns:
count: count of all tokens
return vendorRDD.map(lambda a: len(a[1])).sum()
totalTokens = countTokens(amazonRecToToken) + countTokens(googleRecToToken)
print 'There are %s tokens in the combined datasets' % totalTokens
# TEST Tokenizing the small datasets (1c)
Test.assertEquals(totalTokens, 22520, 'incorrect totalTokens')
def findBiggestRecord(vendorRDD):
Find and return the record with the largest number of tokens
Args:
vendorRDD (RDD of (recordId, tokens)): input Pair Tuple of record ID and tokens
Returns:
list: a list of 1 Pair Tuple of record ID and tokens
return vendorRDD.map(lambda a: (a[0], a[1], len(a[1]))).takeOrdered(1, lambda s: s[2] * -1)
biggestRecordAmazon = findBiggestRecord(amazonRecToToken)
print 'The Amazon record with ID "%s" has the most tokens (%s)' % (biggestRecordAmazon[0][0],
len(biggestRecordAmazon[0][1]))
# TEST Amazon record with the most tokens (1d)
Test.assertEquals(biggestRecordAmazon[0][0], 'b000o24l3q', 'incorrect biggestRecordAmazon')
Test.assertEquals(len(biggestRecordAmazon[0][1]), 1547, 'incorrect len for biggestRecordAmazon')
def tf(tokens):
Compute TF
Args:
tokens (list of str): input list of tokens from tokenize
Returns:
dictionary: a dictionary of tokens to its TF values
tf_dict = {}
for token in tokens:
if token in tf_dict:
tf_dict[token] += 1
else:
tf_dict[token] = 1
for tf_value in tf_dict:
tf_dict[tf_value] /= float(len(tokens))
return tf_dict
print tf(tokenize(quickbrownfox)) # Should give { 'quick': 0.1666 ... }
# TEST Implement a TF function (2a)
tf_test = tf(tokenize(quickbrownfox))
Test.assertEquals(tf_test, {'brown': 0.16666666666666666, 'lazy': 0.16666666666666666,
'jumps': 0.16666666666666666, 'fox': 0.16666666666666666,
'dog': 0.16666666666666666, 'quick': 0.16666666666666666},
'incorrect result for tf on sample text')
tf_test2 = tf(tokenize('one_ one_ two!'))
Test.assertEquals(tf_test2, {'one_': 0.6666666666666666, 'two': 0.3333333333333333},
'incorrect result for tf test')
corpusRDD = amazonRecToToken.union(googleRecToToken)
# TEST Create a corpus (2b)
Test.assertEquals(corpusRDD.count(), 400, 'incorrect corpusRDD.count()')
def idfs(corpus):
Compute IDF
Args:
corpus (RDD): input corpus
Returns:
RDD: a RDD of (record ID, IDF value)
N = corpus.count()
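# Build, for each token, the set of records containing it; this lab defines IDF(t) = N / n(t), with no log term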
uniqueTokens = (corpus
.map(lambda r: [(token, r[0]) for token in r[1]])
.flatMap(lambda r: r)
.groupByKey()
.map(lambda r: (r[0], set(r[1]))))
tokenCountPairTuple = uniqueTokens.map(lambda r: (r[0], len(r[1])))
tokenSumPairTuple = tokenCountPairTuple.reduceByKey(lambda a,b: a + b)
return tokenSumPairTuple.map(lambda (token, sum): (token, N / float(sum))).cache()
idfsSmall = idfs(amazonRecToToken.union(googleRecToToken))
uniqueTokenCount = idfsSmall.count()
print 'There are %s unique tokens in the small datasets.' % uniqueTokenCount
# TEST Implement an IDFs function (2c)
Test.assertEquals(uniqueTokenCount, 4772, 'incorrect uniqueTokenCount')
tokenSmallestIdf = idfsSmall.takeOrdered(1, lambda s: s[1])[0]
Test.assertEquals(tokenSmallestIdf[0], 'software', 'incorrect smallest IDF token')
Test.assertTrue(abs(tokenSmallestIdf[1] - 4.25531914894) < 0.0000000001,
'incorrect smallest IDF value')
smallIDFTokens = idfsSmall.takeOrdered(11, lambda s: s[1])
print smallIDFTokens
import matplotlib.pyplot as plt
small_idf_values = idfsSmall.map(lambda s: s[1]).collect()
fig = plt.figure(figsize=(8,3))
plt.hist(small_idf_values, 50, log=True)
pass
def tfidf(tokens, idfs):
Compute TF-IDF
Args:
tokens (list of str): input list of tokens from tokenize
idfs (dictionary): record to IDF value
Returns:
dictionary: a dictionary of records to TF-IDF values
tfs = tf(tokens)
tfIdfDict = {}
for token in tfs:
tfIdfDict[token] = idfs[token] * tfs[token]
return tfIdfDict
recb000hkgj8k = amazonRecToToken.filter(lambda x: x[0] == 'b000hkgj8k').collect()[0][1]
idfsSmallWeights = idfsSmall.collectAsMap()
rec_b000hkgj8k_weights = tfidf(recb000hkgj8k, idfsSmallWeights)
print 'Amazon record "b000hkgj8k" has tokens and weights:\n%s' % rec_b000hkgj8k_weights
# TEST Implement a TF-IDF function (2f)
Test.assertEquals(rec_b000hkgj8k_weights,
{'autocad': 33.33333333333333, 'autodesk': 8.333333333333332,
'courseware': 66.66666666666666, 'psg': 33.33333333333333,
'2007': 3.5087719298245617, 'customizing': 16.666666666666664,
'interface': 3.0303030303030303}, 'incorrect rec_b000hkgj8k_weights')
import math
def dotprod(a, b):
Compute dot product
Args:
a (dictionary): first dictionary of record to value
b (dictionary): second dictionary of record to value
Returns:
dotProd: result of the dot product with the two input dictionaries
sum = 0
for k in a:
if k in b:
sum += a[k] * b[k]
return sum
def norm(a):
Compute square root of the dot product
Args:
a (dictionary): a dictionary of record to value
Returns:
norm: a dictionary of tokens to its TF values
return math.sqrt(sum(map(lambda k: a[k] ** 2, a)))
def cossim(a, b):
Compute cosine similarity
Args:
a (dictionary): first dictionary of record to value
b (dictionary): second dictionary of record to value
Returns:
cossim: dot product of two dictionaries divided by the norm of the first dictionary and
then by the norm of the second dictionary
return dotprod(a, b) / (norm(a) * norm(b))
testVec1 = {'foo': 2, 'bar': 3, 'baz': 5 }
testVec2 = {'foo': 1, 'bar': 0, 'baz': 20 }
dp = dotprod(testVec1, testVec2)
nm = norm(testVec1)
print dp, nm
# TEST Implement the components of a cosineSimilarity function (3a)
Test.assertEquals(dp, 102, 'incorrect dp')
Test.assertTrue(abs(nm - 6.16441400297) < 0.0000001, 'incorrrect nm')
def cosineSimilarity(string1, string2, idfsDictionary):
Compute cosine similarity between two strings
Args:
string1 (str): first string
string2 (str): second string
idfsDictionary (dictionary): a dictionary of IDF values
Returns:
cossim: cosine similarity value
w1 = tfidf(tokenize(string1), idfsDictionary)
w2 = tfidf(tokenize(string2), idfsDictionary)
return cossim(w1, w2)
cossimAdobe = cosineSimilarity('Adobe Photoshop',
'Adobe Illustrator',
idfsSmallWeights)
print cossimAdobe
# TEST Implement a cosineSimilarity function (3b)
Test.assertTrue(abs(cossimAdobe - 0.0577243382163) < 0.0000001, 'incorrect cossimAdobe')
crossSmall = (googleSmall
.cartesian(amazonSmall)
.cache())
def computeSimilarity(record):
Compute similarity on a combination record
Args:
record: a pair, (google record, amazon record)
Returns:
pair: a pair, (google URL, amazon ID, cosine similarity value)
googleRec = record[0]
amazonRec = record[1]
googleURL = googleRec[0]
amazonID = amazonRec[0]
googleValue = googleRec[1]
amazonValue = amazonRec[1]
cs = cosineSimilarity(googleValue, amazonValue, idfsSmallWeights)
return (googleURL, amazonID, cs)
similarities = (crossSmall
.map(computeSimilarity)
.cache())
def similar(amazonID, googleURL):
Return similarity value
Args:
amazonID: amazon ID
googleURL: google URL
Returns:
similar: cosine similarity value
return (similarities
.filter(lambda record: (record[0] == googleURL and record[1] == amazonID))
.collect()[0][2])
similarityAmazonGoogle = similar('b000o24l3q', 'http://www.google.com/base/feeds/snippets/17242822440574356561')
print 'Requested similarity is %s.' % similarityAmazonGoogle
# TEST Perform Entity Resolution (3c)
Test.assertTrue(abs(similarityAmazonGoogle - 0.000303171940451) < 0.0000001,
'incorrect similarityAmazonGoogle')
def computeSimilarityBroadcast(record):
Compute similarity on a combination record, using Broadcast variable
Args:
record: a pair, (google record, amazon record)
Returns:
pair: a pair, (google URL, amazon ID, cosine similarity value)
googleRec = record[0]
amazonRec = record[1]
googleURL = googleRec[0]
amazonID = amazonRec[0]
googleValue = googleRec[1]
amazonValue = amazonRec[1]
cs = cosineSimilarity(googleValue, amazonValue, idfsSmallBroadcast.value)
return (googleURL, amazonID, cs)
idfsSmallBroadcast = sc.broadcast(idfsSmallWeights)
similaritiesBroadcast = (crossSmall
                         .map(computeSimilarityBroadcast)
                         .cache())
def similarBroadcast(amazonID, googleURL):
Return similarity value, computed using Broadcast variable
Args:
amazonID: amazon ID
googleURL: google URL
Returns:
similar: cosine similarity value
return (similaritiesBroadcast
.filter(lambda record: (record[0] == googleURL and record[1] == amazonID))
.collect()[0][2])
similarityAmazonGoogleBroadcast = similarBroadcast('b000o24l3q', 'http://www.google.com/base/feeds/snippets/17242822440574356561')
print 'Requested similarity is %s.' % similarityAmazonGoogleBroadcast
# TEST Perform Entity Resolution with Broadcast Variables (3d)
from pyspark import Broadcast
Test.assertTrue(isinstance(idfsSmallBroadcast, Broadcast), 'incorrect idfsSmallBroadcast')
Test.assertEquals(len(idfsSmallBroadcast.value), 4772, 'incorrect idfsSmallBroadcast value')
Test.assertTrue(abs(similarityAmazonGoogleBroadcast - 0.000303171940451) < 0.0000001,
'incorrect similarityAmazonGoogle')
GOLDFILE_PATTERN = '^(.+),(.+)'
# Parse each line of a data file useing the specified regular expression pattern
def parse_goldfile_line(goldfile_line):
Parse a line from the 'golden standard' data file
Args:
goldfile_line: a line of data
Returns:
pair: ((key, 'gold', 1 if successful or else 0))
match = re.search(GOLDFILE_PATTERN, goldfile_line)
if match is None:
print 'Invalid goldfile line: %s' % goldfile_line
return (goldfile_line, -1)
elif match.group(1) == '"idAmazon"':
print 'Header datafile line: %s' % goldfile_line
return (goldfile_line, 0)
else:
key = '%s %s' % (removeQuotes(match.group(1)), removeQuotes(match.group(2)))
return ((key, 'gold'), 1)
goldfile = os.path.join(baseDir, inputPath, GOLD_STANDARD_PATH)
gsRaw = (sc
.textFile(goldfile)
.map(parse_goldfile_line)
.cache())
gsFailed = (gsRaw
.filter(lambda s: s[1] == -1)
.map(lambda s: s[0]))
for line in gsFailed.take(10):
print 'Invalid goldfile line: %s' % line
goldStandard = (gsRaw
.filter(lambda s: s[1] == 1)
.map(lambda s: s[0])
.cache())
print 'Read %d lines, successfully parsed %d lines, failed to parse %d lines' % (gsRaw.count(),
goldStandard.count(),
gsFailed.count())
assert (gsFailed.count() == 0)
assert (gsRaw.count() == (goldStandard.count() + 1))
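# Re-key each similarity record as 'amazonID googleURL' so it lines up with the gold standard keys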
sims = similaritiesBroadcast.map(lambda r: (r[1] + ' ' + r[0], r[2]))
trueDupsRDD = sims.join(goldStandard).cache()
trueDupsCount = trueDupsRDD.count()
avgSimDups = trueDupsRDD.map(lambda r: r[1][0]).fold(0, lambda a, v: a + v) / float(trueDupsCount)
dupIds = trueDupsRDD.map(lambda r: r[0]).collect()
nonDupsRDD = sims.filter(lambda r: r[0] not in dupIds).cache()
nonDupsRDDCount = nonDupsRDD.count()
avgSimNon = nonDupsRDD.map(lambda r: r[1]).fold(0, lambda a, v: a + v) / float(nonDupsRDDCount)
print 'There are %s true duplicates.' % trueDupsCount
print 'The average similarity of true duplicates is %s.' % avgSimDups
print 'And for non duplicates, it is %s.' % avgSimNon
# TEST Perform a Gold Standard evaluation (3e)
Test.assertEquals(trueDupsCount, 146, 'incorrect trueDupsCount')
Test.assertTrue(abs(avgSimDups - 0.264332573435) < 0.0000001, 'incorrect avgSimDups')
Test.assertTrue(abs(avgSimNon - 0.00123476304656) < 0.0000001, 'incorrect avgSimNon')
amazonFullRecToToken = amazon.map(lambda line: (line[0], tokenize(line[1])))
googleFullRecToToken = google.map(lambda line: (line[0], tokenize(line[1])))
print 'Amazon full dataset is %s products, Google full dataset is %s products' % (amazonFullRecToToken.count(),
googleFullRecToToken.count())
# TEST Tokenize the full dataset (4a)
Test.assertEquals(amazonFullRecToToken.count(), 1363, 'incorrect amazonFullRecToToken.count()')
Test.assertEquals(googleFullRecToToken.count(), 3226, 'incorrect googleFullRecToToken.count()')
fullCorpusRDD = amazonFullRecToToken.union(googleFullRecToToken)
idfsFull = idfs(fullCorpusRDD)
idfsFullCount = idfsFull.count()
print 'There are %s unique tokens in the full datasets.' % idfsFullCount
idfsFullWeights = idfsFull.collectAsMap()
idfsFullBroadcast = sc.broadcast(idfsFullWeights)
amazonWeightsRDD = amazonFullRecToToken.map(lambda r: (r[0], tfidf(r[1], idfsFullBroadcast.value)))
googleWeightsRDD = googleFullRecToToken.map(lambda r: (r[0], tfidf(r[1], idfsFullBroadcast.value)))
print 'There are %s Amazon weights and %s Google weights.' % (amazonWeightsRDD.count(),
googleWeightsRDD.count())
# TEST Compute IDFs and TF-IDFs for the full datasets (4b)
Test.assertEquals(idfsFullCount, 17078, 'incorrect idfsFullCount')
Test.assertEquals(amazonWeightsRDD.count(), 1363, 'incorrect amazonWeightsRDD.count()')
Test.assertEquals(googleWeightsRDD.count(), 3226, 'incorrect googleWeightsRDD.count()')
amazonNorms = amazonWeightsRDD.map(lambda r: (r[0], norm(r[1])))
amazonNormsBroadcast = sc.broadcast(amazonNorms.collectAsMap())
googleNorms = googleWeightsRDD.map(lambda r: (r[0], norm(r[1])))
googleNormsBroadcast = sc.broadcast(googleNorms.collectAsMap())
# TEST Compute Norms for the weights from the full datasets (4c)
Test.assertTrue(isinstance(amazonNormsBroadcast, Broadcast), 'incorrect amazonNormsBroadcast')
Test.assertEquals(len(amazonNormsBroadcast.value), 1363, 'incorrect amazonNormsBroadcast.value')
Test.assertTrue(isinstance(googleNormsBroadcast, Broadcast), 'incorrect googleNormsBroadcast')
Test.assertEquals(len(googleNormsBroadcast.value), 3226, 'incorrect googleNormsBroadcast.value')
def invert(record):
Invert (ID, tokens) to a list of (token, ID)
Args:
record: a pair, (ID, token vector)
Returns:
pairs: a list of pairs of token to ID
pairs = [ (token, record[0]) for token in record[1] ]
return pairs
amazonInvPairsRDD = amazonWeightsRDD.flatMap(invert).cache()
googleInvPairsRDD = googleWeightsRDD.flatMap(invert).cache()
print 'There are %s Amazon inverted pairs and %s Google inverted pairs.' % (amazonInvPairsRDD.count(),
googleInvPairsRDD.count())
# TEST Create inverted indicies from the full datasets (4d)
invertedPair = invert((1, {'foo': 2}))
Test.assertEquals(invertedPair[0][1], 1, 'incorrect invert result')
Test.assertEquals(amazonInvPairsRDD.count(), 111387, 'incorrect amazonInvPairsRDD.count()')
Test.assertEquals(googleInvPairsRDD.count(), 77678, 'incorrect googleInvPairsRDD.count()')
def swap(record):
Swap (token, (ID, URL)) to ((ID, URL), token)
Args:
record: a pair, (token, (ID, URL))
Returns:
pair: ((ID, URL), token)
token = record[0]
keys = record[1]
return (keys, token)
def empty(a):
return [a]
def append(a, b):
a.append(b)
return a
def merge(a, b):
return a + b
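# Join the inverted indices on token, then swap and regroup so each (amazon ID, google URL) pair maps to the tokens it shares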
joined = amazonInvPairsRDD.join(googleInvPairsRDD).cache()
commonTokens = joined.map(swap).combineByKey(empty, append, merge).cache()
print 'Found %d common tokens' % commonTokens.count()
# TEST Identify common tokens from the full dataset (4e)
Test.assertEquals(commonTokens.count(), 2441100, 'incorrect commonTokens.count()')
amazonWeightsBroadcast = sc.broadcast(amazonWeightsRDD.collectAsMap())
googleWeightsBroadcast = sc.broadcast(googleWeightsRDD.collectAsMap())
def fastCosineSimilarity(record):
Compute Cosine Similarity using Broadcast variables
Args:
record: ((ID, URL), tokens)
Returns:
pair: ((ID, URL), cosine similarity value)
amazonRec = record[0][0]
googleRec = record[0][1]
amazonTfidf = amazonWeightsBroadcast.value[amazonRec]
googleTfidf = googleWeightsBroadcast.value[googleRec]
tokens = record[1]
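# Only the tokens common to both records contribute to the dot product; every other term is zero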
s = reduce(lambda a,b: a + b, [ amazonTfidf[token] * googleTfidf[token] for token in tokens ], 0)
value = (s / googleNormsBroadcast.value[googleRec]) / amazonNormsBroadcast.value[amazonRec]
key = (amazonRec, googleRec)
return (key, value)
similaritiesFullRDD = commonTokens.map(fastCosineSimilarity).cache()
print similaritiesFullRDD.count()
# TEST Identify common tokens from the full dataset (4f)
similarityTest = similaritiesFullRDD.filter(lambda ((aID, gURL), cs): aID == 'b00005lzly' and gURL == 'http://www.google.com/base/feeds/snippets/13823221823254120257').collect()
Test.assertEquals(len(similarityTest), 1, 'incorrect len(similarityTest)')
Test.assertTrue(abs(similarityTest[0][1] - 4.286548414e-06) < 0.000000000001, 'incorrect similarityTest fastCosineSimilarity')
Test.assertEquals(similaritiesFullRDD.count(), 2441100, 'incorrect similaritiesFullRDD.count()')
# Create an RDD of ((Amazon ID, Google URL), similarity score)
simsFullRDD = similaritiesFullRDD.map(lambda x: ("%s %s" % (x[0][0], x[0][1]), x[1]))
assert (simsFullRDD.count() == 2441100)
# Create an RDD of just the similarity scores
simsFullValuesRDD = (simsFullRDD
.map(lambda x: x[1])
.cache())
assert (simsFullValuesRDD.count() == 2441100)
# Look up all similarity scores for true duplicates
# This helper function will return the similarity score for records that are in the gold standard and the simsFullRDD (True positives), and will return 0 for records that are in the gold standard but not in simsFullRDD (False Negatives).
def gs_value(record):
if (record[1][1] is None):
return 0
else:
return record[1][1]
# Join the gold standard and simsFullRDD, and then extract the similarities scores using the helper function
trueDupSimsRDD = (goldStandard
.leftOuterJoin(simsFullRDD)
.map(gs_value)
.cache())
print 'There are %s true duplicates.' % trueDupSimsRDD.count()
assert(trueDupSimsRDD.count() == 1300)
from pyspark.accumulators import AccumulatorParam
class VectorAccumulatorParam(AccumulatorParam):
# Initialize the VectorAccumulator to 0
def zero(self, value):
return [0] * len(value)
# Add two VectorAccumulator variables
def addInPlace(self, val1, val2):
for i in xrange(len(val1)):
val1[i] += val2[i]
return val1
# Return a list with entry x set to value and all other entries set to 0
def set_bit(x, value, length):
bits = []
for y in xrange(length):
if (x == y):
bits.append(value)
else:
bits.append(0)
return bits
# Pre-bin counts of false positives for different threshold ranges
BINS = 101
nthresholds = 100
def bin(similarity):
return int(similarity * nthresholds)
# fpCounts[i] = number of entries (possible false positives) where bin(similarity) == i
zeros = [0] * BINS
fpCounts = sc.accumulator(zeros, VectorAccumulatorParam())
def add_element(score):
global fpCounts
b = bin(score)
fpCounts += set_bit(b, 1, BINS)
simsFullValuesRDD.foreach(add_element)
# Remove true positives from FP counts
def sub_element(score):
global fpCounts
b = bin(score)
fpCounts += set_bit(b, -1, BINS)
trueDupSimsRDD.foreach(sub_element)
def falsepos(threshold):
fpList = fpCounts.value
return sum([fpList[b] for b in range(0, BINS) if float(b) / nthresholds >= threshold])
def falseneg(threshold):
return trueDupSimsRDD.filter(lambda x: x < threshold).count()
def truepos(threshold):
return trueDupSimsRDD.count() - falsenegDict[threshold]
# Precision = true-positives / (true-positives + false-positives)
# Recall = true-positives / (true-positives + false-negatives)
# F-measure = 2 x Recall x Precision / (Recall + Precision)
def precision(threshold):
tp = trueposDict[threshold]
return float(tp) / (tp + falseposDict[threshold])
def recall(threshold):
tp = trueposDict[threshold]
return float(tp) / (tp + falsenegDict[threshold])
def fmeasure(threshold):
r = recall(threshold)
p = precision(threshold)
return 2 * r * p / (r + p)
thresholds = [float(n) / nthresholds for n in range(0, nthresholds)]
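# Pre-compute false-positive, false-negative, and true-positive counts per threshold so the precision/recall/F-measure helpers reduce to dictionary lookups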
falseposDict = dict([(t, falsepos(t)) for t in thresholds])
falsenegDict = dict([(t, falseneg(t)) for t in thresholds])
trueposDict = dict([(t, truepos(t)) for t in thresholds])
precisions = [precision(t) for t in thresholds]
recalls = [recall(t) for t in thresholds]
fmeasures = [fmeasure(t) for t in thresholds]
print precisions[0], fmeasures[0]
assert (abs(precisions[0] - 0.000532546802671) < 0.0000001)
assert (abs(fmeasures[0] - 0.00106452669505) < 0.0000001)
fig = plt.figure()
plt.plot(thresholds, precisions)
plt.plot(thresholds, recalls)
plt.plot(thresholds, fmeasures)
plt.legend(['Precision', 'Recall', 'F-measure'])
pass
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Raw data with whitening
Step2: Epochs with whitening
Step3: Evoked data with whitening
Step4: Evoked data with scaled whitening
Step5: Topographic plot with whitening
|
<ASSISTANT_TASK:>
Python Code:
import mne
from mne.datasets import sample
data_path = sample.data_path()
raw_fname = data_path / 'MEG' / 'sample' / 'sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
events = mne.find_events(raw, stim_channel='STI 014')
event_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'button': 32}
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id=event_id, reject=reject)
# baseline noise cov, not a lot of samples
noise_cov = mne.compute_covariance(epochs, tmax=0., method='shrunk', rank=None,
verbose='error')
# butterfly mode shows the differences most clearly
raw.plot(events=events, butterfly=True)
raw.plot(noise_cov=noise_cov, events=events, butterfly=True)
epochs.plot()
epochs.plot(noise_cov=noise_cov)
evoked = epochs.average()
evoked.plot(time_unit='s')
evoked.plot(noise_cov=noise_cov, time_unit='s')
evoked.plot_white(noise_cov=noise_cov, time_unit='s')
evoked.comment = 'All trials'
evoked.plot_topo(title='Evoked data')
evoked.plot_topo(noise_cov=noise_cov, title='Whitened evoked data')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Whoa! The linear regression RMSE on 10-fold cross-validation is terrible! Let's try something else
Step2: The best regression method was Lasso with an alpha value of 0.23
|
<ASSISTANT_TASK:>
Python Code:
x = data.as_matrix()
y = target.as_matrix()
x = np.array([np.concatenate((v,[1])) for v in x]) #add column of ones to the end of the data set
print x
linreg = LinearRegression()
linreg.fit(x,y)
p = linreg.predict(x)
p
err = abs(p-y)
err
total_error = np.dot(err,err)
rmse_train = np.sqrt(total_error/len(p))
rmse_train
linreg.coef_ #Regression Coefficients
pl.plot(p, y,'ro')
pl.plot([-25,50],[-25,50], 'g-')
pl.xlabel('predicted')
pl.ylabel('real')
pl.show()
# RMSE with 10-Fold Cross Validation
kf = KFold(len(x), n_folds=10)
xval_err = 0
for train,test in kf:
linreg.fit(x[train],y[train])
p = linreg.predict(x[test])
e = p-y[test]
print e
xval_err += np.dot(e,e)
rmse_10cv = np.sqrt(xval_err/len(x))
print('Method: Linear Regression')
print('RMSE on training: %.4f' %rmse_train)
print('RMSE on 10-fold CV: %.4f' %rmse_10cv)
print('alpha\t\tridge\t\tlasso\t\telastic-net\n')
alpha = np.linspace(0.01,0.5,50)
for a in alpha:
results = []
for name,met in [
#('linear regression', LinearRegression()),
('ridge', Ridge(fit_intercept=True, alpha=a)),
('lasso', Lasso(fit_intercept=True, alpha=a)),
('elastic-net', ElasticNet(fit_intercept=True, alpha=a))
]:
#met.fit(x,y)
#p = met.predict(x)
#e = p-y
#total_error = np.dot(e,e)
#rmse_train = np.sqrt(total_error/len(p))
kf = KFold(len(x), n_folds=10)
err = 0
for train,test in kf:
met.fit(x[train],y[train])
p = met.predict(x[test])
e = p-y[test]
err += np.dot(e,e)
rmse_10cv = np.sqrt(err/len(x))
results.append(rmse_10cv)
print('{:.3f}\t\t{:.4f}\t\t{:.4f}\t\t{:.4f}\n'.format(a,results[0],results[1],results[2]))
print('Lasso Regression w/ alpha=0.23')
ridge = Lasso(fit_intercept=True, alpha=0.23)
# computing RMSE using 10-fold cross validation
kf = KFold(len(x), n_folds=10)
xval_err = 0
for train, test in kf:
ridge.fit(x[train], y[train])
p = ridge.predict(x[test])
err = p - y[test]
xval_err += np.dot(err,err)
pl.plot(p, y[test],'ro')
pl.plot([-25,50],[-25,50], 'g-')
pl.xlabel('predicted')
pl.ylabel('real')
pl.show()
rmse_10cv = np.sqrt(xval_err/len(x))
print('rsme with 10-fold cross validation = {:.4f}'.format(rmse_10cv))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Test set 2
Step2: Test set 3
Step3: Test set 4
Step4: Test set 5
Step5: Test set 6
Step6: 3D Poisson Problem
Step7: Strong Scaling Test
|
<ASSISTANT_TASK:>
Python Code:
omg=numpy.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1])
tPCG = numpy.array([5.72, 4.54, 3.78, 3.14, 2.71, 2.38, 2.06, 1.95, 2.49, 10.15])
tPCGF = numpy.array([2.48, 2.14, 2.03, 2.6, 10.7])
tPBICGSTAB = numpy.array([2.79, 2.58, 2.48, 3, 12.1])
pyplot.plot(omg, tPCG, label="PCG")
pyplot.plot(omg[5:], tPCGF, label="PCGF")
pyplot.plot(omg[5:], tPBICGSTAB, label="PBICGSTAB")
pyplot.xlabel("Relaxation factor")
pyplot.ylabel("Time for solve")
pyplot.legend(loc=0);
cycle = ["V", "W", "F"]
sweep = numpy.array([1, 2, 3])
time24 = {"V": numpy.array([5.152, 3.822, 4.314]),
"W": numpy.array([5.568, 5.89, 6.39]),
"F": numpy.array([5.886, 4.232, 4.53])}
errL24 = {"V": numpy.array([0.052, 0.152, 2.004]),
"W": numpy.array([0.008, 0.03, 0.01]),
"F": numpy.array([2.766, 0.002, 0.23])}
errU24 = {"V": numpy.array([0.018, 0.078, 1.986]),
"W": numpy.array([0.012, 0.04, 0.02]),
"F": numpy.array([3.174, 0.008, 0.89])}
err24 = {cyc: numpy.array(numpy.vstack((errL24[cyc], errU24[cyc]))) for cyc in cycle}
time12 = {"V": numpy.array([6.248, 5.238, 4.15]),
"W": numpy.array([7.382, 5.53, 7.456]),
"F": numpy.array([5.672, 5.58, 4.24])}
errL12 = {"V": numpy.array([0.008, 0.368, 0]),
"W": numpy.array([0.002, 0, 1.656]),
"F": numpy.array([0.992, 1.22, 0])}
errU12 = {"V": numpy.array([0.002, 1.472, 0]),
"W": numpy.array([0.008, 0, 0.424]),
"F": numpy.array([0.658, 1.83, 0])}
err12 = {cyc: numpy.array(numpy.vstack((errL12[cyc], errU12[cyc]))) for cyc in cycle}
pyplot.figure(figsize=(16, 4))
pyplot.subplot(1, 2, 1)
pyplot.title("12 Devices")
pyplot.errorbar(sweep, time12["V"], yerr = err12["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time12["W"], yerr = err12["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time12["F"], yerr = err12["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iteration on coarsest grid")
pyplot.xlim(0, 4)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(2, 10)
pyplot.legend(loc=0)
pyplot.subplot(1, 2, 2)
pyplot.title("24 Devices")
pyplot.errorbar(sweep, time24["V"], yerr = err24["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time24["W"], yerr = err24["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time24["F"], yerr = err24["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iteration on coarsest grid")
pyplot.xlim(0, 4)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(2, 10)
pyplot.legend(loc=0)
cycle = ["V", "W", "F"]
sweep = numpy.array([1, 2, 3])
time24 = {"V": numpy.array([3.102, 2.444, 2.166]),
"W": numpy.array([3.716, 4.376, 5.4]),
"F": numpy.array([2.872, 3.31, 3.78])}
errL24 = {"V": numpy.array([0.032, 0.044, 0.006]),
"W": numpy.array([0.066, 0.316, 0.99]),
"F": numpy.array([0.012, 0.49, 0.88])}
errU24 = {"V": numpy.array([0.058, 0.016, 0.004]),
"W": numpy.array([0.074, 1.214, 0.67]),
"F": numpy.array([0.008, 0.74, 0.23])}
err24 = {cyc: numpy.array(numpy.vstack((errL24[cyc], errU24[cyc]))) for cyc in cycle}
time12 = {"V": numpy.array([6.238, 4.42, 3.7]),
"W": numpy.array([5.272, 5.174, 5.396]),
"F": numpy.array([4.23, 3.974, 4.58])}
errL12 = {"V": numpy.array([0.608, 0, 0]),
"W": numpy.array([0.402, 0.004, 0.156]),
"F": numpy.array([0.05, 0.004, 0.61])}
errU12 = {"V": numpy.array([2.422, 0, 0]),
"W": numpy.array([1.608, 0.006, 0.044]),
"F": numpy.array([0.08, 0.016, 0.92])}
err12 = {cyc: numpy.array(numpy.vstack((errL12[cyc], errU12[cyc]))) for cyc in cycle}
pyplot.figure(figsize=(16, 4))
pyplot.subplot(1, 2, 1)
pyplot.title("12 Devices")
pyplot.errorbar(sweep, time12["V"], yerr = err12["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time12["W"], yerr = err12["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time12["F"], yerr = err12["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iteration on coarsest grid")
pyplot.xlim(0, 4)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(2, 10)
pyplot.legend(loc=0)
pyplot.subplot(1, 2, 2)
pyplot.title("24 Devices")
pyplot.errorbar(sweep, time24["V"], yerr = err24["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time24["W"], yerr = err24["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time24["F"], yerr = err24["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iteration on coarsest grid")
pyplot.xlim(0, 4)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(2, 10)
pyplot.legend(loc=0)
cycle = ["V", "W", "F"]
sweep = numpy.array([1, 2, 3])
time24 = {"V": numpy.array([3.06, 2.422, 2.39]),
"W": numpy.array([3.802, 4.376, 5.406]),
"F": numpy.array([2.878, 3.382, 5.568])}
errL24 = {"V": numpy.array([0.05, 0.022, 0.23]),
"W": numpy.array([0.002, 0.306, 1.006]),
"F": numpy.array([0.008, 0.552, 0.668])}
errU24 = {"V": numpy.array([0.02, 0.038, 0.91]),
"W": numpy.array([0.008, 1.214, 0.674]),
"F": numpy.array([0.012, 0.988, 0.452])}
err24 = {cyc: numpy.array(numpy.vstack((errL24[cyc], errU24[cyc]))) for cyc in cycle}
time12 = {"V": numpy.array([6.23, 4.376, 3.702]),
"W": numpy.array([5.266, 5.174, 5.43]),
"F": numpy.array([4.208, 4.288, 4.572])}
errL12 = {"V": numpy.array([0.65, 0.126, 0.002]),
"W": numpy.array([0.406, 0.004, 0.01]),
"F": numpy.array([0.028, 0.318, 0.602])}
errU12 = {"V": numpy.array([2.42, 0.044, 0.008]),
"W": numpy.array([1.614, 0.006, 0.01]),
"F": numpy.array([0.112, 1.272, 0.908])}
err12 = {cyc: numpy.array(numpy.vstack((errL12[cyc], errU12[cyc]))) for cyc in cycle}
pyplot.figure(figsize=(16, 4))
pyplot.subplot(1, 2, 1)
pyplot.title("12 Devices")
pyplot.errorbar(sweep, time12["V"], yerr = err12["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time12["W"], yerr = err12["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time12["F"], yerr = err12["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iteration on coarsest grid")
pyplot.xlim(0, 4)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(2, 10)
pyplot.legend(loc=0)
pyplot.subplot(1, 2, 2)
pyplot.title("24 Devices")
pyplot.errorbar(sweep, time24["V"], yerr = err24["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time24["W"], yerr = err24["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time24["F"], yerr = err24["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iteration on coarsest grid")
pyplot.xlim(0, 4)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(2, 10)
pyplot.legend(loc=0)
cycle = ["V", "W", "F"]
sweep = numpy.array([1, 2, 3])
time24 = {"V": numpy.array([4.512, 4.508, 5.152]),
"W": numpy.array([5.626, 5.63, 5.622]),
"F": numpy.array([5.268, 5.286, 6.822])}
errL24 = {"V": numpy.array([0.332, 0.338, 0.962]),
"W": numpy.array([0.026, 0.02, 0.022]),
"F": numpy.array([1.088, 1.026, 0.012])}
errU24 = {"V": numpy.array([1.278, 1.292, 0.638]),
"W": numpy.array([0.034, 0.02, 0.028]),
"F": numpy.array([1.562, 1.534, 0.008])}
err24 = {cyc: numpy.array(numpy.vstack((errL24[cyc], errU24[cyc]))) for cyc in cycle}
time12 = {"V": numpy.array([4.81, 5.212, 6.016]),
"W": numpy.array([7.402, 7.406, 7.4]),
"F": numpy.array([5.584, 5.53, 5.554])}
errL12 = {"V": numpy.array([0, 0.402, 1.206]),
"W": numpy.array([0.002, 0.006, 0]),
"F": numpy.array([0.084, 0.03, 0.054])}
errU12 = {"V": numpy.array([0, 1.608, 0.804]),
"W": numpy.array([0.008, 0.014, 0]),
"F": numpy.array([0.056, 0.1, 0.086])}
err12 = {cyc: numpy.array(numpy.vstack((errL12[cyc], errU12[cyc]))) for cyc in cycle}
pyplot.figure(figsize=(16, 4))
pyplot.subplot(1, 2, 1)
pyplot.title("12 Devices")
pyplot.errorbar(sweep, time12["V"], yerr = err12["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time12["W"], yerr = err12["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time12["F"], yerr = err12["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iteration on coarsest grid")
pyplot.xlim(0, 4)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(2, 10)
pyplot.legend(loc=0)
pyplot.subplot(1, 2, 2)
pyplot.title("24 Devices")
pyplot.errorbar(sweep, time24["V"], yerr = err24["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time24["W"], yerr = err24["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time24["F"], yerr = err24["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iteration on coarsest grid")
pyplot.xlim(0, 4)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(2, 10)
pyplot.legend(loc=0)
cycle = ["V", "W", "F"]
sweep = numpy.array([1, 2, 3, 4, 5])
time24 = {"V": numpy.array([2.75, 2.186, 1.958, 1.832, 1.782]),
"W": numpy.array([6.9, 8.3, 8.236, 9.762, 15.764]),
"F": numpy.array([4.204, 5.106, 6.574, 5.782, 6.68])}
errL24 = {"V": numpy.array([0.06, 0.066, 0.058, 0.002, 0.002]),
"W": numpy.array([1.61, 1.95, 0.016, 0.012, 0.064]),
"F": numpy.array([0.774, 1.066, 1.554, 0.272, 0.04])}
errU24 = {"V": numpy.array([0.04, 0.044, 0.042, 0.008, 0.008]),
"W": numpy.array([0.41, 1.27, 0.014, 0.038, 0.076]),
"F": numpy.array([1.046, 0.704, 0.426, 0.078, 0.02])}
err24 = {cyc: numpy.array(numpy.vstack((errL24[cyc], errU24[cyc]))) for cyc in cycle}
time12 = {"V": numpy.array([2.396, 2.18, 8.164, 1.98, 2.174]),
"W": numpy.array([8.316, 13.444, 16.048, 23.508, 18.928]),
"F": numpy.array([9.094, 8.608, 6.818, 8.416, 9.832])}
errL12 = {"V": numpy.array([0.006, 0, 6.134, 0, 0.174]),
"W": numpy.array([0.006, 0.044, 0.658, 3.685, 0.048]),
"F": numpy.array([1.624, 0.018, 0.128, 1.126, 1.832])}
errU12 = {"V": numpy.array([0.004, 0, 24.486, 0, 0.696]),
"W": numpy.array([0.014, 0.056, 2.532, 2.462, 0.062]),
"F": numpy.array([0.416, 0.012, 0.042, 1.674, 1.838])}
err12 = {cyc: numpy.array(numpy.vstack((errL12[cyc], errU12[cyc]))) for cyc in cycle}
pyplot.figure(figsize=(16, 4))
pyplot.subplot(1, 2, 1)
pyplot.title("12 Devices")
pyplot.errorbar(sweep, time12["V"], yerr = err12["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time12["W"], yerr = err12["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time12["F"], yerr = err12["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iteration on coarsest grid")
pyplot.xlim(0, 6)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(1, 30)
pyplot.legend(loc=0)
pyplot.subplot(1, 2, 2)
pyplot.title("24 Devices")
pyplot.errorbar(sweep, time24["V"], yerr = err24["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time24["W"], yerr = err24["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time24["F"], yerr = err24["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iteration on coarsest grid")
pyplot.xlim(0, 6)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(1, 30)
pyplot.legend(loc=0)
N_1GPU = numpy.array([1000, 8000, 64000, 512000, 4096000])
Time_1GPU = numpy.array([0.024, 0.017, 0.10, 0.2, 1.6])
err_1GPU = numpy.array([0., 0., 0., 0., 0.])
N_2GPU = numpy.array([1000, 8000, 64000, 512000, 4096000])
Time_2GPU = numpy.array([0.04, 0.11, 0.09, 0.17, 1.19])
err_2GPU = numpy.array([0., 0., 0., 0., 0.])
N_4GPU = numpy.array([1000, 8000, 64000, 512000, 4096000])
Time_4GPU = numpy.array([0.11, 0.10, 0.09, 0.57, 0.69])
err_4GPU = numpy.array([0., 0., 0., 0., 0.])
N_8GPU = numpy.array([1000, 8000, 64000, 512000, 4096000])
Time_8GPU = numpy.array([0.09, 0.09, 0.53, 0.4, 0.44])
err_8GPU = numpy.array([0., 0., 0., 0., 0.])
nGPU = numpy.array([1, 2, 4, 8])
Time_N1K_GPU = numpy.array([0.024, 0.04, 0.11, 0.09])
Time_N8K_GPU = numpy.array([0.017, 0.11, 0.10, 0.09])
Time_N64K_GPU = numpy.array([0.10, 0.09, 0.09, 0.53])
Time_N512K_GPU = numpy.array([0.2, 0.17, 0.57, 0.4])
Time_N4M_GPU = numpy.array([1.6, 1.19, 0.69, 0.44])
N_1CPU = numpy.array([64000, 512000, 4096000])
Time_1CPU = numpy.array([0.23, 3.06, 45.37])
err_1CPU = numpy.array([0., 0., 0.])
N_2CPU = numpy.array([64000, 512000, 4096000])
Time_2CPU = numpy.array([0.17, 3.12, 39.05])
err_2CPU = numpy.array([0., 0., 0.])
N_4CPU = numpy.array([64000, 512000, 4096000])
Time_4CPU = numpy.array([0.09, 1.65, 21.88])
err_4CPU = numpy.array([0., 0., 0.])
N_8CPU = numpy.array([64000, 512000, 4096000])
Time_8CPU = numpy.array([0.05, 1.22, 18.3])
err_8CPU = numpy.array([0., 0., 0.])
nCPU = numpy.array([1, 2, 4, 8])
Time_N64K_CPU = numpy.array([0.23, 0.17, 0.09, 0.05])
Time_N512K_CPU = numpy.array([3.06, 3.12, 1.65, 1.22])
Time_N4M_CPU = numpy.array([45.37, 39.05, 21.88, 18.3])
#pyplot.figure(figsize=(16,8), dpi=400)
#pyplot.subplot(1, 2, 1)
#pyplot.title("Weak Scaling")
#ax = pyplot.gca()
#ax.set_xscale("log", nonposx='clip')
#ax.set_yscale("log", nonposx='clip')
#pyplot.errorbar(N_1GPU, Time_1GPU, yerr = err_1GPU, fmt='ks-', label="1 GPU")
#pyplot.errorbar(N_2GPU, Time_2GPU, yerr = err_2GPU, fmt='r^-', label="2 GPU")
#pyplot.errorbar(N_4GPU, Time_4GPU, yerr = err_4GPU, fmt='gx-', label="4 GPU")
#pyplot.errorbar(N_8GPU, Time_8GPU, yerr = err_8GPU, fmt='bo-', label="8 GPU")
#
#pyplot.errorbar(N_1CPU, Time_1CPU, yerr = err_1CPU, fmt='ks--', label="1 CPU")
#pyplot.errorbar(N_2CPU, Time_2CPU, yerr = err_2CPU, fmt='r^--', label="2 CPU")
#pyplot.errorbar(N_4CPU, Time_4CPU, yerr = err_4CPU, fmt='gx--', label="4 CPU")
#pyplot.errorbar(N_8CPU, Time_8CPU, yerr = err_8CPU, fmt='bo--', label="8 CPU")
#pyplot.xlabel("Number of total grid points")
#pyplot.ylabel("Wall time for solve (sec)")
#pyplot.legend(loc=0)
pyplot.figure(figsize=(16,8), dpi=400)
pyplot.title("Weak Scaling")
ax = pyplot.gca()
ax.set_xscale("log", nonposx='clip')
ax.set_yscale("log", nonposy='clip')
#pyplot.plot(nGPU, Time_N1K_GPU, 'ks-', label="GPU, 10x10x10")
#pyplot.plot(nGPU, Time_N8K_GPU, 'r^-', label="GPU, 20x20x20")
pyplot.plot(nGPU, Time_N64K_GPU, 'rx-', label="GPU, 40x40x40")
pyplot.plot(nGPU, Time_N512K_GPU, 'go-', label="GPU, 80x80x80")
pyplot.plot(nGPU, Time_N4M_GPU, 'b>-', label="GPU, 160x160x160")
pyplot.plot(nCPU, Time_N64K_CPU, 'rx--', label="CPU, 40x40x40")
pyplot.plot(nCPU, Time_N512K_CPU, 'go--', label="CPU, 80x80x80")
pyplot.plot(nCPU, Time_N4M_CPU, 'b>--', label="CPU, 160x160x160")
pyplot.xlabel("Number of GPUs / CPUs")
pyplot.ylabel("Wall time for solve (sec)")
#pyplot.ylim(0, 4)
pyplot.legend(loc=0)
N_4M_GPU = numpy.array([1, 2, 4, 8, 16, 32])
Time_4M_GPU_Raw = numpy.array([[1.04, 1.11, 0.86, 0.5, 3.7, 3.49],
[1.04, 1.11, 0.86, 0.49, 3.72, 3.47],
[1.04, 1.11, 0.86, 0.49, 3.69, 3.51],
[1.04, 1.11, 0.86, 0.49, 3.69, 3.47],
[1.04, 1.11, 0.86, 0.49, 3.69, 3.48]])
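# Average over the five repeated runs at each device count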
Time_4M_GPU = numpy.average(Time_4M_GPU_Raw, axis=0)
N_8M_GPU = numpy.array([4, 8, 16, 32])
Time_8M_GPU_Raw = numpy.array([[1.37, 0.81, 0.57, 2.1],
[1.44, 0.81, 0.58, 2.09],
[1.37, 0.81, 0.58, 2.09],
[1.37, 0.82, 0.58, 2.09],
[1.37, 0.81, 0.59, 2.09]])
Time_8M_GPU = numpy.average(Time_8M_GPU_Raw, axis=0)
N_4M_CPU = numpy.array([1, 2, 3, 4, 5, 6, 7, 8])
Time_4M_CPU = numpy.array([9.53, 4.72, 3.06, 2.19, 1.74, 1.53, 1.31, 1.13])
N_8M_CPU = numpy.array([1, 2, 3, 4, 5, 6, 7, 8])
Time_8M_CPU = numpy.array([20.65, 10.33, 6.65, 4.92, 3.9, 3.27, 2.82, 2.45])
N_4M_GPU_OPT = numpy.array([1, 2, 4, 8, 16])
Time_4M_GPU_OPT = numpy.array([0.81, 0.67, 0.42, 0.31, 0.26])
pyplot.figure(figsize=(16,8), dpi=400)
pyplot.title("Strong Scaling (GPU)")
ax = pyplot.gca()
ax.set_xscale("log", nonposx='clip')
ax.set_yscale("log", nonposy='clip')
pyplot.plot(N_4M_GPU, Time_4M_GPU, 'ks-', label="GPU, 200x200x100")
pyplot.plot(N_8M_GPU, Time_8M_GPU, 'rx-', label="GPU, 200x200x200")
pyplot.plot(N_4M_CPU, Time_4M_CPU, 'ks--', label="CPU, 200x200x100")
pyplot.plot(N_8M_CPU, Time_8M_CPU, 'rx--', label="CPU, 200x200x200")
pyplot.plot(N_4M_GPU_OPT, Time_4M_GPU_OPT, 'ks-.', label="GPU, 160x160x160")
pyplot.xlabel("Number of GPUs / CPU-Nodes (12 CPUs per node)")
pyplot.ylabel("Wall time for solve (sec)")
pyplot.legend(loc=0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'dwd', 'sandbox-1', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Because we are currently dealing with a homogeneous block model, we don't have to care about the arithmetic and harmonic averaging of density and shear modulus, respectively. In the next step we define the FD updates for particle velocity and stresses and assemble the 2D PSV FD code.
Step2: Update stresses
Step3: Assemble the 2D PSV code
Step4: Let's run the 2D PSV code for a homogeneous block model
|
<ASSISTANT_TASK:>
Python Code:
# load all necessary libraries
import numpy
from matplotlib import pyplot, cm
from mpl_toolkits.mplot3d import Axes3D
from numba import jit
%matplotlib notebook
# spatial discretization
nx = 601
ny = 601
dh = 5.0
x = numpy.linspace(0, dh*(nx-1), nx)
y = numpy.linspace(0, dh*(ny-1), ny)
X, Y = numpy.meshgrid(x, y)
# time discretization
T = 0.55
dt = 0.6e-3
nt = numpy.floor(T/dt)
nt = nt.astype(int)
# snapshot frequency [timesteps]
isnap = 10
# wavefield clip
clip = 2.5e-2
# define model parameters
rho = 7100.0
vp = 2955.0
vs = 2362.0
mu = rho * vs * vs
lam = rho * vp * vp - 2 * mu
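# NOTE (for reference only; not needed for this homogeneous block model): in a heterogeneous
# model one would typically average the material parameters onto the staggered grid before the
# update loops, e.g. an arithmetic mean of rho at the velocity positions,
#   rho_x[j, i] = 0.5 * (rho[j, i] + rho[j, i+1])
# and a harmonic mean of mu at the sxy position,
#   mu_xy[j, i] = 4.0 / (1.0/mu[j, i] + 1.0/mu[j, i+1] + 1.0/mu[j+1, i] + 1.0/mu[j+1, i+1])
# Here rho, mu and lam are scalars, so these averages reduce to the constants defined above.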
@jit(nopython=True) # use JIT for C-performance
def update_v(vx, vy, sxx, syy, sxy, nx, ny, dtdx, rhoi):
for j in range(1, ny-1):
for i in range(1, nx-1):
# calculate spatial derivatives
sxx_x = sxx[j, i+1] - sxx[j, i]
syy_y = syy[j+1, i] - syy[j, i]
sxy_x = sxy[j, i] - sxy[j, i-1]
sxy_y = sxy[j, i] - sxy[j-1, i]
# update particle velocities
vx[j, i] = vx[j, i] + dtdx * rhoi * (sxx_x + sxy_y)
vy[j, i] = vy[j, i] + dtdx * rhoi * (sxy_x + syy_y)
return vx, vy
@jit(nopython=True) # use JIT for C-performance
def update_s(vx, vy, sxx, syy, sxy, nx, ny, dtdx, lam, mu):
for j in range(1, ny-1):
for i in range(1, nx-1):
# calculate spatial derivatives
vxx = vx[j][i] - vx[j][i-1]
vyy = vy[j][i] - vy[j-1][i]
vyx = vy[j][i+1] - vy[j][i]
vxy = vx[j+1][i] - vx[j][i]
# update stresses
sxx[j, i] = sxx[j, i] + dtdx * ( lam * (vxx + vyy) + 2.0 * mu * vxx )
syy[j, i] = syy[j, i] + dtdx * ( lam * (vxx + vyy) + 2.0 * mu * vyy )
sxy[j, i] = sxy[j, i] + dtdx * ( mu * (vyx + vxy) )
return sxx, syy, sxy
def psv_mod(nt, nx, ny, dt, dh, rho, lam, mu, clip, isnap, X, Y):
# initialize wavefields
vx = numpy.zeros((ny, nx))
vy = numpy.zeros((ny, nx))
sxx = numpy.zeros((ny, nx))
syy = numpy.zeros((ny, nx))
sxy = numpy.zeros((ny, nx))
# define some parameters
dtdx = dt / dh
rhoi = 1.0 / rho
# define source wavelet parameters
fc = 17.0
tshift = 0.0
ts = 1.0 / fc
# source position [gridpoints]
jjs = 300
iis = 300
# initalize animation
fig = pyplot.figure(figsize=(11,7))
extent = [numpy.min(X),numpy.max(X),numpy.min(X),numpy.max(Y)]
image = pyplot.imshow(vy, animated=True, cmap=cm.seismic, interpolation='nearest', vmin=-clip, vmax=clip)
pyplot.colorbar()
pyplot.title('Wavefield vy')
pyplot.xlabel('X [m]')
pyplot.ylabel('Y [m]')
pyplot.gca().invert_yaxis()
pyplot.ion()
pyplot.show(block=False)
# loop over timesteps
for n in range(nt):
# define Ricker wavelet
t = n * dt
tau = numpy.pi * (t - 1.5 * ts - tshift) / (1.5 * ts)
amp = (1.0 - 4.0 * tau * tau) * numpy.exp(-2.0 * tau * tau)
# update particle velocities
vx, vy = update_v(vx, vy, sxx, syy, sxy, nx, ny, dtdx, rhoi)
# apply vertical impact source term @ source position
vy[jjs, iis] = vy[jjs, iis] + amp
# update stresses
sxx, syy, sxy = update_s(vx, vy, sxx, syy, sxy, nx, ny, dtdx, lam, mu)
# display vy snapshots
if (n % isnap) == 0:
image.set_data(vy)
fig.canvas.draw()
return vx, vy
# run 2D PSV code
vx, vy = psv_mod(nt, nx, ny, dt, dh, rho, lam, mu, clip, isnap, X, Y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Galaxy Model
Step2: In addition to the "standard" way of defining functions shown above, Python has an additional method that can be used to define simple functions using lambda. These are convenient if you ever want to define functions "on the fly" for any reason, as discussed here. The same function defined in lambda notation is illustrated below.
Step3: Check that these give the same result using a grid of radii. Feel free to use np.linspace, np.arange, or any other tool to generate the set of radii for testing.
Step4: Experiment with different values for $a$ and $r_e$ and different gridding for radius to see how they change the resulting distribution. Also feel free to play around with the above plot until you have something you're happy with. (You can never spend too much time making good plots!)
Step5: The next step is to turn our two 1-D arrays into a 2-D grid in $\mathbf{x}$ and $\mathbf{y}$. More explicitly, we want to compute the intensity of our galaxy at (x_grid[i], y_grid[j]) for all i and j elements in x_grid and y_grid, respectively. This means we need to create a new array of values x and y of length $N_x \times N_y$.
Step6: As with many problems that come up often but are tedious to implement by hand, Python has a function to do this! The solution is shown below.
Step7: We see that our new x and y are the same shape, and contain $N_x \times N_y$ elements. Unfortunately, the number of points on our axes is flipped
Step8: Let's now visually check the results.
Step9: Woah -- what is going on here (assuming your notebook didn't crash making the plot)? Why do we have a bunch of different colored lines?
Step10: The second thing you might notice is that although we're using points specified by '.' in plt, what we get looks like a thick line. This is also plt working as intended
Step11: So with that in hand, let's now see what our grid looks like (thinned by 20). To help out with plotting, we're also going to "flatten" our arrays from 2-D to 1-D before plotting them.
Step12: With our 2-D grid of points in (x, y), we are now ready to compute our 2-D galaxy image. First, we can compute the radius at each grid point using $r = \sqrt{x^2 + y^2}$.
Step13: Rather than trying to struggle with getting images to look good in plt.plot (or, alternately, plt.scatter), we're instead going to use plt.imshow (which is designed for this). Some examples are shown here.
Step14: Notice that there's some weird stuff going on with the default imshow plot
Step15: Let's assume a typical seeing of 0.8 arcsec. Compute and plot the PSF profile below. Take the code we used to compute and plot our galaxy profile earlier as a guide. Again, feel free to play around with the parameters to see how they change the PSF and plot; just make sure that $\beta$ is set back to its default value of $\beta=4.765$ before moving on.
Step16: Observed Galaxy Model
Step17: Compare our "convolved" image with the original galaxy model and the original PSF. This "smeared out" image now represents the galaxy we'd actually observe through the atmosphere from our ground-based telescope!
Step18: A solution is given below.
Step19: Pixelation
Step20: Take a look at some of the additional arguments that can be passed to plt.hist. See if you can use them (for example weights and normed) to reproduce the normalized galaxy profile from earlier.
Step21: Let's bin this using np.histogram to see what a histogram output looks like.
Step22: With all that done, let's now define the bins set by the pixel scale resolution of the Multiple Mirror Telescope (MMT) and Magellan Infrared Spectrograph (MMIRS) instrument. This is a spectrograph operated by a joint venture between the Smithsonian and the University of Arizona that you could one day use if you end up going to either institution!
Step23: Using the bins/bin centers computed above, bin (pixelate) the observed galaxy model onto the MMIRS pixel grid using np.histogram2d.
|
<ASSISTANT_TASK:>
Python Code:
# only necessary if you're running Python 2.7 or lower
from __future__ import print_function
from __builtin__ import range
import numpy as np
# import plotting utility and define our naming alias
from matplotlib import pyplot as plt
# plot figures within the notebook rather than externally
%matplotlib inline
# Galaxy intensity model: Exponential
def prof_expo(r, re):
a = 1.68
return np.exp(-a * r / re)
# defining galaxy intensity using exponential
prof_expo2 = lambda r, re: np.exp(-1.68 * r / re)
re = 1. # effective radius
radius = ... # radii
gal1 = ... # function 1 (def)
gal2 = ... # function 2 (lambda)
# numerical checks
# plot results
# galaxy effective radius
re = 0.5 # in arcsec
# define 2-D grid
Nx, Ny = 1000 + 1, 1050 + 1 # number of grid points in x and y (1 padded for the edge)
x_grid = np.linspace(-5. * re, 5. * re, Nx) # grid in x direction
y_grid = np.linspace(-5. * re, 5. * re, Ny) # grid in y direction
# space for experimenting with computing a 2-D grid from 2 1-D grids
# mesh (x_grid, y_grid) into a new set of 2-D (x, y) arrays
x, y = np.meshgrid(x_grid, y_grid) # x,y for our 2-D grid
print(x, x.shape)
print(y, y.shape)
# *properly* mesh (x_grid, y_grid) into a new set of 2-D (x, y) arrays
x, y = np.meshgrid(x_grid, y_grid, indexing='ij') # x,y for our 2-D grid
# print array and array shapes
print(x, x.shape)
print(y, y.shape)
plt.plot(x, y, '.');
# select 10 columns of the array
x_temp, y_temp = x[:, 15:20], y[:, 15:20] # example of array slicing
plt.figure()
plt.plot(x_temp, y_temp, '.');
# print array and array shape
print(x_temp, x_temp.shape)
print(y_temp, y_temp.shape)
# select 10 columns of the array
x_temp, y_temp = x[::20, 15:20:1], y[::20, 15:20:1] # example of array slicing/thinning
plt.figure()
plt.plot(x_temp, y_temp, '.');
# print array shape
print('x:', x_temp.shape)
print('y:', y_temp.shape)
# thin grid by a factor of 20
x_temp, y_temp = x[::20, ::20].flatten(), y[::20, ::20].flatten() # slicing/thinning/flattening
plt.figure()
plt.plot(x_temp, y_temp, '.', markersize=2)
# print array shape (2-D vs flattened)
print(x[::20, ::20].shape, x_temp.shape)
r = ... # 2-D grid of radii
model_gal = prof_expo(r, re) # 2-D grid of galaxy intensity
# plotting our galaxy profile
plt.figure()
# default plot
plt.imshow(model_gal)
# more detailed plot
#plt.imshow(model_gal.T, # take the transpose to flip x and y in plot
# origin='lower', # specify the origin to be at the bottom not the top
# extent=[x_grid[0], x_grid[-1], y_grid[0], y_grid[-1]], # specify [left, right, bottom, top] positions
# cmap='magma', interpolation='none') # additional options
#plt.xlabel('x [arcsec]')
#plt.ylabel('y [arcsec]')
#plt.title('Intrinsic Galaxy Profile')
#plt.colorbar(label='Intensity') # add a colorbar
# PSF Model: Moffat
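# The function below implements the circularly symmetric Moffat profile
#   PSF(r) = (beta - 1) / (pi * alpha**2) * (1 + (r / alpha)**2)**(-beta)
# with alpha = FWHM / (2 * sqrt(2**(1/beta) - 1)), matching the constants computed inside;
# beta is fixed at 4.765, a commonly used value for atmospheric seeing.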
def prof_moffat(r, fwhm):
# compute constants
beta = 4.765 # beta is fixed
bnorm = np.sqrt(2.**(1. / beta) - 1) # constant, computed from beta
alpha = fwhm / 2. / bnorm # alpha, computed from beta and FWHM
# compute PSF
norm = (beta - 1.) / (np.pi * alpha**2)
psf = norm * (1 + (r / alpha)**2)**-beta
return psf
# define our typical seeing
psf_fwhm = 0.8 # FWHM [arcsec]
# compute our psf
model_psf = ...
# plot our psf
...
# compute convolution of galaxy and PSF
from scipy.signal import fftconvolve  # assumed source of the fftconvolve routine used below
model_obs = fftconvolve()  # fill in: convolve model_gal with model_psf (e.g. mode='same')
model_obs /= np.max(model_obs) # normalize result to 1.
# plot our result
...
# slit dimensions [arcsec]
slit_width = 0.8
slit_height = 7.
# post-slit galaxy model
model_spec =
# plotting the results
...
# one possible solution using boolean algebra (0=False, 1=True)
# slit dimensions [arcsec]
slit_width = 0.8
slit_height = 7.
# slit model
model_slit = (abs(x) <= slit_width / 2.) # set all x's within slit_width / 2. of 0. to 1; otherwise set to 0
model_slit *= (abs(y) <= slit_height / 2.) # *also* set all y's within slit_height / 2. to 1; otherwise set to 0
# post-slit observed galaxy model
model_spec = model_slit * model_obs
# plot results (basic)
plt.figure(figsize=(14, 6)) # create figure object
plt.subplot(1, 2, 1) # split the figure into a grid with 1 row and 2 columns; pick subplot 1
plt.imshow(model_slit.T) # plot slitmask model
plt.colorbar(label='Transmission')
plt.subplot(1, 2, 2) # pick subplot 2
plt.imshow(model_spec.T) # plot combined galaxy+slit model
plt.colorbar(label='Intensity')
plt.tight_layout()
rand = np.random.normal(loc=0., scale=1., size=100000) # normally distributed random numbers
plt.hist(rand); # plot a histogram
plt.plot(radius, gal1) # plot original relation
plt.hist(radius, weights=gal1, normed=True); # plot a histogram, normalized based on our weights
counts, bin_edges = np.histogram(radius, weights=gal1, normed=True)
print(counts)
print(bin_edges)
# the pixel scale of the MMIRS instrument
pix_scale = 0.2012 # [arcsec/pix]
# define our bins
x_bin = np.arange(x_grid[0], x_grid[-1] + pix_scale, pix_scale) # bin edges in x with width=pix_scale
y_bin = np.arange(y_grid[0], y_grid[-1] + pix_scale, pix_scale) # bin edges in y with width=pix_scale
# define the centers of each bin
x_cent, y_cent = 0.5 * (x_bin[1:] + x_bin[:-1]), 0.5 * (y_bin[1:] + y_bin[:-1])
# the total number of bins
Nx_pix, Ny_pix = len(x_cent), len(y_cent)
print(Nx_pix, Ny_pix)
# convert from arcsec to pixels
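# (Xs, Ys, Model below are assumed to be the flattened x, y, and post-slit intensity arrays,
#  e.g. x.flatten(), y.flatten(), model_spec.flatten())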
model_pix, x_edges, y_edges = np.histogram2d(Xs, Ys, weights=Model, bins=[x_bin, y_bin]) # bin over pixel scale
# plot results
plt.imshow(model_pix.T)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This notebook provides an introduction to some of the basic concepts of machine learning.
Step2: What is the simplest story that we could tell about processing speed in these data? Well, we could simply say that the variable is normal with a mean of zero and a standard deviation of 1. Let's see how likely the observed processing speed values are given that set of parameters. First, let's create a function that returns the normal log-likelihood of the data given a set of predicted values.
Step3: We are pretty sure that the mean of our variables is not zero, so let's compute the mean and see if the likelihood of the data is higher.
Step4: What about using the observed variance as well?
Step6: Is there a relation between processing speed and age? Compute the linear regression equation to find out.
Step7: This shows us that linear regression can provide a simple description of a complex dataset - we can describe the entire dataset in 2 numbers. Now let's ask how good this description is for a new dataset generated by the same process
Step8: Now let's do this 100 times and look at how variable the fits are.
Step9: Cross-validation
Step10: It is often more common to use larger test folds, both to speed up performance (since LOO can require lots of model fitting when there are a large number of observations) and because LOO error estimates can have high variance due to the fact that the models are so highly correlated. This is referred to as K-fold cross-validation; generally we want to choose K somewhere around 5-10. It's generally a good idea to shuffle the order of the observations so that the folds are grouped randomly.
Step11: Now let's perform leave-one-out cross-validation on our original dataset, so that we can compare it to the performance on new datasets. We expect that the correlation between LOO estimates and actual data should be very similar to the Mean R2 for new datasets. We can also plot a histogram of the estimates, to see how they vary across folds.
Step12: Now let's look at the effect of outliers on in-sample correlation and out-of-sample prediction.
Step14: Model selection
Step15: Bias-variance tradeoffs
Step16: Now let's fit two different models to the data that we will generate. First, we will fit a standard linear regression model, using ordinary least squares. This is the best linear unbiased estimator for the regression model. We will also fit a model that uses regularization, which places some constraints on the parameter estimates. In this case, we use the Lasso model, which minimizes the sum of squares while also constraining (or penalizing) the sum of the absolute parameter estimates (known as an L1 penalty). The parameter estimates of this model will be biased towards zero, and will be sparse, meaning that most of the estimates will be exactly zero.
Step17: Let's run the simulation 100 times and look at the average parameter estimates.
Step18: The prediction error for the Lasso model is substantially less than the error for the linear regression model. What about the parameters? Let's display the mean parameter estimates and their variabilty across runs.
Step19: Another place where regularization is essential is when your data are wider than they are tall - that is, when you have more variables than observations. This is almost always the case for brain imaging data, when the number of voxels far outweighs the number of subjects or events. In this case, the ordinary least squares solution is ill-posed, meaning that it has an infinite number of possible solutions. The sklearn LinearRegression() estimator will return an estimate even in this case, but the parameter estimates will be highly variable. However, we can use a regularized regression technique to find more robust estimates in this case.
|
<ASSISTANT_TASK:>
Python Code:
import numpy,pandas
%matplotlib inline
import matplotlib.pyplot as plt
import scipy.stats
from sklearn.model_selection import LeaveOneOut,KFold
from sklearn.preprocessing import PolynomialFeatures,scale
from sklearn.linear_model import LinearRegression,LassoCV,Ridge
import seaborn as sns
import statsmodels.formula.api as sm
from statsmodels.tools.tools import add_constant
recreate=True
if recreate:
seed=20698
else:
seed=numpy.ceil(numpy.random.rand()*100000).astype('int')
print(seed)
numpy.random.seed(seed)
def make_continuous_data(mean=[45,100],var=[10,10],cor=-0.6,N=100):
    """generate a synthetic data set with two variables"""
cor=numpy.array([[1.,cor],[cor,1.]])
var=numpy.array([[var[0],0],[0,var[1]]])
cov=var.dot(cor).dot(var)
return numpy.random.multivariate_normal(mean,cov,N)
n=50
d=make_continuous_data(N=n)
y=d[:,1]
plt.scatter(d[:,0],d[:,1])
plt.xlabel('age')
plt.ylabel('processing speed')
print('data R-squared: %f'%numpy.corrcoef(d.T)[0,1]**2)
def loglike(y,yhat,s2=None,verbose=True):
N = len(y)
SSR = numpy.sum((y-yhat)**2)
if s2 is None:
# use observed stdev
s2 = SSR / float(N)
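    # Gaussian log-likelihood: logL = -(N/2)*log(s2) - (N/2)*log(2*pi) - SSR/(2*s2)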
    logLike = -(N/2.)*numpy.log(s2) - (N/2.)*numpy.log(2*numpy.pi) - SSR/(2*s2)
if verbose:
print('SSR:',SSR)
print('s2:',s2)
print('logLike:',logLike)
return logLike
logLike_null=loglike(y,numpy.zeros(len(y)),s2=1)
mean=numpy.mean(y)
print('mean:',mean)
pred=numpy.ones(len(y))*mean
logLike_mean=loglike(y,pred,s2=1)
var=numpy.var(y)
print('variance',var)
pred=numpy.ones(len(y))*mean
logLike_mean_std=loglike(y,pred)
X=d[:,0]
X=add_constant(X)
result = sm.OLS( y, X ).fit()
print(result.summary())
intercept=result.params[0]
slope=result.params[1]
pred=result.predict(X)
logLike_ols=loglike(y,pred)
plt.scatter(y,pred)
print('processing speed = %f + %f*age'%(intercept,slope))
print('p =%f'%result.pvalues[1])
def get_RMSE(y,pred):
return numpy.sqrt(numpy.mean((y - pred)**2))
def get_R2(y,pred):
    """compute r-squared"""
return numpy.corrcoef(y,pred)[0,1]**2
ax=plt.scatter(d[:,0],d[:,1])
plt.xlabel('age')
plt.ylabel('processing speed')
plt.plot(d[:,0], slope * d[:,0] + intercept, color='red')
# plot residual lines
d_predicted=slope*d[:,0] + intercept
for i in range(d.shape[0]):
x=d[i,0]
y=d[i,1]
plt.plot([x,x],[d_predicted[i],y],color='blue')
RMSE=get_RMSE(d[:,1],d_predicted)
rsq=get_R2(d[:,1],d_predicted)
print('rsquared=%f'%rsq)
d_new=make_continuous_data(N=n)
d_new_predicted=intercept + slope*d_new[:,0]
RMSE_new=get_RMSE(d_new[:,1],d_new_predicted)
rsq_new=get_R2(d_new[:,1],d_new_predicted)
print('R2 for new data: %f'%rsq_new)
ax=plt.scatter(d_new[:,0],d_new[:,1])
plt.xlabel('age')
plt.ylabel('processing speed')
plt.plot(d_new[:,0], slope * d_new[:,0] + intercept, color='red')
nruns=100
slopes=numpy.zeros(nruns)
intercepts=numpy.zeros(nruns)
rsquared=numpy.zeros(nruns)
fig = plt.figure()
ax = fig.gca()
for i in range(nruns):
data=make_continuous_data(N=n)
slopes[i],intercepts[i],_,_,_=scipy.stats.linregress(data[:,0],data[:,1])
ax.plot(data[:,0], slopes[i] * data[:,0] + intercepts[i], color='red', alpha=0.05)
pred_orig=intercept + slope*data[:,0]
rsquared[i]=get_R2(data[:,1],pred_orig)
print('Original R2: %f'%rsq)
print('Mean R2 for new datasets on original model: %f'%numpy.mean(rsquared))
# initialize the sklearn leave-one-out operator
loo=LeaveOneOut()
for train,test in loo.split(range(10)):
print('train:',train,'test:',test)
# initialize the sklearn leave-one-out operator
kf=KFold(n_splits=5,shuffle=True)
for train,test in kf.split(range(10)):
print('train:',train,'test:',test)
loo=LeaveOneOut()
slopes_loo=numpy.zeros(n)
intercepts_loo=numpy.zeros(n)
pred=numpy.zeros(n)
ctr=0
for train,test in loo.split(range(n)):
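    # fit the regression on the n-1 training points, then predict the single held-out point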
slopes_loo[ctr],intercepts_loo[ctr],_,_,_=scipy.stats.linregress(d[train,0],d[train,1])
pred[ctr]=intercepts_loo[ctr] + slopes_loo[ctr]*data[test,0]
ctr+=1
print('R2 for leave-one-out prediction: %f'%get_R2(pred,data[:,1]))
plt.figure(figsize=(12,5))
plt.subplot(1,2,1)
_=plt.hist(slopes_loo,20)
plt.xlabel('slope estimate')
plt.ylabel('frequency')
plt.subplot(1,2,2)
_=plt.hist(intercepts_loo,20)
plt.xlabel('intercept estimate')
plt.ylabel('frequency')
# add an outlier
data_null=make_continuous_data(N=n,cor=0.0)
outlier_multiplier=2.0
data=numpy.vstack((data_null,[numpy.max(data_null[:,0])*outlier_multiplier,
numpy.max(data_null[:,1])*outlier_multiplier*-1]))
plt.scatter(data[:,0],data[:,1])
slope,intercept,r,p,se=scipy.stats.linregress(data[:,0],data[:,1])
plt.plot([numpy.min(data[:,0]),intercept + slope*numpy.min(data[:,0])],
[numpy.max(data[:,0]),intercept + slope*numpy.max(data[:,0])])
rsq_outlier=r**2
print('R2 for regression with outlier: %f'%rsq_outlier)
loo=LeaveOneOut()
pred_outlier=numpy.zeros(data.shape[0])
ctr=0
for train,test in loo.split(range(data.shape[0])):
s,i,_,_,_=scipy.stats.linregress(data[train,0],data[train,1])
pred_outlier[ctr]=i + s*data[test,0]
ctr+=1
print('R2 for leave-one-out prediction: %f'%get_R2(pred_outlier,data[:,1]))
# from https://gist.github.com/iizukak/1287876
def gram_schmidt_columns(X):
Q, R = numpy.linalg.qr(X)
return Q
def make_continuous_data_poly(mean=0,var=1,betaval=5,order=1,N=100):
    """generate a synthetic data set with two variables
    allowing polynomial functions up to 5-th order"""
x=numpy.random.randn(N)
x=x-numpy.mean(x)
pf=PolynomialFeatures(5,include_bias=False)
x_poly=gram_schmidt_columns(pf.fit_transform(x[:,numpy.newaxis]))
betas=numpy.zeros(5)
betas[0]=mean
for i in range(order):
betas[i]=betaval
func=x_poly.dot(betas)+numpy.random.randn(N)*var
d=numpy.vstack((x,func)).T
return d,x_poly
n=25
trueorder=2
data,x_poly=make_continuous_data_poly(N=n,order=trueorder)
# fit models of increasing complexity
npolyorders=7
plt.figure()
plt.scatter(data[:,0],data[:,1])
plt.title('fitted data')
xp=numpy.linspace(numpy.min(data[:,0]),numpy.max(data[:,0]),100)
for i in range(npolyorders):
f = numpy.polyfit(data[:,0], data[:,1], i)
p=numpy.poly1d(f)
plt.plot(xp,p(xp))
plt.legend(['%d'%i for i in range(npolyorders)])
# compute in-sample and out-of-sample error using LOO
loo=LeaveOneOut()
pred=numpy.zeros((n,npolyorders))
mean_trainerr=numpy.zeros(npolyorders)
prederr=numpy.zeros(npolyorders)
for i in range(npolyorders):
ctr=0
trainerr=numpy.zeros(n)
for train,test in loo.split(range(data.shape[0])):
f = numpy.polyfit(data[train,0], data[train,1], i)
p=numpy.poly1d(f)
trainerr[ctr]=numpy.sqrt(numpy.mean((data[train,1]-p(data[train,0]))**2))
pred[test,i]=p(data[test,0])
ctr+=1
mean_trainerr[i]=numpy.mean(trainerr)
prederr[i]=numpy.sqrt(numpy.mean((data[:,1]-pred[:,i])**2))
plt.plot(range(npolyorders),mean_trainerr)
plt.plot(range(npolyorders),prederr,color='red')
plt.xlabel('Polynomial order')
plt.ylabel('root mean squared error')
plt.legend(['training error','test error'],loc=9)
plt.plot([numpy.argmin(prederr),numpy.argmin(prederr)],
[numpy.min(mean_trainerr),numpy.max(prederr)],'k--')
plt.text(0.5,numpy.max(mean_trainerr),'underfitting')
plt.text(4.5,numpy.max(mean_trainerr),'overfitting')
print('True order:',trueorder)
print('Order estimated by cross validation:',numpy.argmin(prederr))
def make_larger_dataset(beta,n,sd=1):
X=numpy.random.randn(n,len(beta)) # design matrix
beta=numpy.array(beta)
y=X.dot(beta)+numpy.random.randn(n)*sd
return(y-numpy.mean(y),X)
def compare_lr_lasso(n=100,nvars=20,n_splits=8,sd=1):
beta=numpy.zeros(nvars)
beta[0]=1
beta[1]=-1
    y,X=make_larger_dataset(beta,n,sd=sd)  # use the function arguments rather than hard-coded values
kf=KFold(n_splits=n_splits,shuffle=True)
pred_lr=numpy.zeros(X.shape[0])
coefs_lr=numpy.zeros((n_splits,X.shape[1]))
pred_lasso=numpy.zeros(X.shape[0])
coefs_lasso=numpy.zeros((n_splits,X.shape[1]))
lr=LinearRegression()
lasso=LassoCV()
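    # LassoCV minimizes ||y - X*w||^2 / (2*n_samples) + alpha*||w||_1 (L1 penalty),
    # choosing alpha by internal cross-validation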
ctr=0
for train,test in kf.split(X):
Xtrain=X[train,:]
Ytrain=y[train]
lr.fit(Xtrain,Ytrain)
lasso.fit(Xtrain,Ytrain)
pred_lr[test]=lr.predict(X[test,:])
coefs_lr[ctr,:]=lr.coef_
pred_lasso[test]=lasso.predict(X[test,:])
coefs_lasso[ctr,:]=lasso.coef_
ctr+=1
prederr_lr=numpy.sum((pred_lr-y)**2)
prederr_lasso=numpy.sum((pred_lasso-y)**2)
return [prederr_lr,prederr_lasso],numpy.mean(coefs_lr,0),numpy.mean(coefs_lasso,0),beta
nsims=100
prederr=numpy.zeros((nsims,2))
lrcoef=numpy.zeros((nsims,20))
lassocoef=numpy.zeros((nsims,20))
for i in range(nsims):
prederr[i,:],lrcoef[i,:],lassocoef[i,:],beta=compare_lr_lasso()
print('mean sum of squared error:')
print('linear regression:',numpy.mean(prederr,0)[0])
print('lasso:',numpy.mean(prederr,0)[1])
coefs_df=pandas.DataFrame({'True value':beta,'Regression (mean)':numpy.mean(lrcoef,0),'Lasso (mean)':numpy.mean(lassocoef,0),
'Regression(stdev)':numpy.std(lrcoef,0),'Lasso(stdev)':numpy.std(lassocoef,0)})
coefs_df
nsims=100
prederr=numpy.zeros((nsims,2))
lrcoef=numpy.zeros((nsims,1000))
lassocoef=numpy.zeros((nsims,1000))
for i in range(nsims):
prederr[i,:],lrcoef[i,:],lassocoef[i,:],beta=compare_lr_lasso(nvars=1000)
print('mean sum of squared error:')
print('linear regression:',numpy.mean(prederr,0)[0])
print('lasso:',numpy.mean(prederr,0)[1])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Q1. Let x be a ndarray [10, 10, 3] with all elements set to one. Reshape x so that the size of the second dimension equals 150.
Step2: Q2. Let x be array [[1, 2, 3], [4, 5, 6]]. Convert it to [1 4 2 5 3 6].
Step3: Q3. Let x be array [[1, 2, 3], [4, 5, 6]]. Get the 5th element.
Step4: Q4. Let x be an arbitrary 3-D array of shape (3, 4, 5). Permute the dimensions of x such that the new shape will be (4,3,5).
Step5: Q5. Let x be an arbitrary 2-D array of shape (3, 4). Permute the dimensions of x such that the new shape will be (4,3).
Step6: Q5. Let x be an arbitrary 2-D array of shape (3, 4). Insert a new axis such that the new shape will be (3, 1, 4).
Step7: Q6. Let x be an arbitrary 3-D array of shape (3, 4, 1). Remove single-dimensional entries such that the new shape will be (3, 4).
Step8: Q7. Let x be an array <br/>
Step9: Q8. Let x be an array <br/>
Step10: Q8. Let x be an array [1 2 3] and y be [4 5 6]. Convert it to [[1, 4], [2, 5], [3, 6]].
Step11: Q9. Let x be an array [[1],[2],[3]] and y be [[4], [5], [6]]. Convert x to [[[1, 4]], [[2, 5]], [[3, 6]]].
Step12: Q10. Let x be an array [1, 2, 3, ..., 9]. Split x into 3 arrays, each of which has 4, 2, and 3 elements in the original order.
Step13: Q11. Let x be an array<br/>
Step14: Q12. Let x be an array <br />
Step15: Q13. Let x be an array <br />
Step16: Q14. Let x be an array [0, 1, 2]. Convert it to <br/>
Step17: Q15. Let x be an array [0, 1, 2]. Convert it to <br/>
Step18: Q16. Let x be an array [0, 0, 0, 1, 2, 3, 0, 2, 1, 0].<br/>
Step19: Q17. Let x be an array [2, 2, 1, 5, 4, 5, 1, 2, 3]. Get two arrays of unique elements and their counts.
Step20: Q18. Let x be an array <br/>
Step21: Q19. Let x be an array <br/>
Step22: Q20. Let x be an array <br/>
Step23: Q21. Let x be an array <br/>
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
np.__version__
x = np.ones([10, 10, 3])
out = np.reshape(x, [-1, 150])
print out
assert np.allclose(out, np.ones([10, 10, 3]).reshape([-1, 150]))
x = np.array([[1, 2, 3], [4, 5, 6]])
out1 = np.ravel(x, order='F')
out2 = x.flatten(order="F")
assert np.allclose(out1, out2)
print out1
x = np.array([[1, 2, 3], [4, 5, 6]])
out1 = x.flat[4]
out2 = np.ravel(x)[4]
assert np.allclose(out1, out2)
print out1
x = np.zeros((3, 4, 5))
out1 = np.swapaxes(x, 1, 0)
out2 = x.transpose([1, 0, 2])
assert out1.shape == out2.shape
print out1.shape
x = np.zeros((3, 4))
out1 = np.swapaxes(x, 1, 0)
out2 = x.transpose()
out3 = x.T
assert out1.shape == out2.shape == out3.shape
print out1.shape
x = np.zeros((3, 4))
print np.expand_dims(x, axis=1).shape
x = np.zeros((3, 4, 1))
print np.squeeze(x).shape
x = np.array([[1, 2, 3], [4, 5, 6]])
y = np.array([[7, 8, 9], [10, 11, 12]])
out1 = np.concatenate((x, y), 1)
out2 = np.hstack((x, y))
assert np.allclose(out1, out2)
print out2
x = np.array([[1, 2, 3], [4, 5, 6]])
y = np.array([[7, 8, 9], [10, 11, 12]])
out1 = np.concatenate((x, y), 0)
out2 = np.vstack((x, y))
assert np.allclose(out1, out2)
print out2
x = np.array((1,2,3))
y = np.array((4,5,6))
out1 = np.column_stack((x, y))
out2 = np.squeeze(np.dstack((x, y)))
out3 = np.vstack((x, y)).T
assert np.allclose(out1, out2)
assert np.allclose(out2, out3)
print out1
x = np.array([[1],[2],[3]])
y = np.array([[4],[5],[6]])
out = np.dstack((x, y))
print out
x = np.arange(1, 10)
print np.split(x, [4, 6])
x = np.arange(16).reshape(2, 2, 4)
out1 = np.split(x, [3],axis=2)
out2 = np.dsplit(x, [3])
assert np.allclose(out1[0], out2[0])
assert np.allclose(out1[1], out2[1])
print out1
x = np.arange(16).reshape((4, 4))
out1 = np.hsplit(x, 2)
out2 = np.split(x, 2, 1)
assert np.allclose(out1[0], out2[0])
assert np.allclose(out1[1], out2[1])
print out1
x = np.arange(16).reshape((4, 4))
out1 = np.vsplit(x, 2)
out2 = np.split(x, 2, 0)
assert np.allclose(out1[0], out2[0])
assert np.allclose(out1[1], out2[1])
print out1
x = np.array([0, 1, 2])
out1 = np.tile(x, [2, 2])
out2 = np.resize(x, [2, 6])
assert np.allclose(out1, out2)
print out1
x = np.array([0, 1, 2])
print np.repeat(x, 2)
x = np.array((0, 0, 0, 1, 2, 3, 0, 2, 1, 0))
out = np.trim_zeros(x)
print out
x = np.array([2, 2, 1, 5, 4, 5, 1, 2, 3])
u, indices = np.unique(x, return_counts=True)
print u, indices
x = np.array([[1,2], [3,4]])
out1 = np.fliplr(x)
out2 = x[:, ::-1]
assert np.allclose(out1, out2)
print out1
x = np.array([[1,2], [3,4]])
out1 = np.flipud(x)
out2 = x[::-1, :]
assert np.allclose(out1, out2)
print out1
x = np.array([[1,2], [3,4]])
out = np.rot90(x)
print out
x = np.arange(1, 9).reshape([2, 4])
print np.roll(x, 1, axis=1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1.1 - 1$^{st}$ order in space and time
Step2: 1.2 - 1$^{st}$ order in time and 4$^{th}$ order in space
Step3: 1.3 - 2$^{nd}$ order in time and 4$^{th}$ order in space
Step4: By incrementally changing the accuracy (and thus order) of the schemes, it is apparent that the order of the time-iteration scheme is much more important in driving dispersion and instabilities in the original signal. Increasing the spatial order serves to increase the dissipation of the signal, quickly spreading and muting the disturbance.
Step5: 2 - 2D Tracer Advection
Step6: The above streamfunction and the corresponding velocity fields are now used to advect our tracer through the grid, following the specifications described above.
|
<ASSISTANT_TASK:>
Python Code:
#Import toolboxes
from scipy import sparse #Allows me to create sparse matrices (i.e. not store all of the zeros in the 'A' matrix)
from scipy.sparse import linalg as linal
from numpy import * #To make matrices and do matrix manipulation
import matplotlib.pyplot as plt
import matplotlib.cm as cm #Load colormaps
plt.style.use('fivethirtyeight')
%matplotlib inline
c=1.
t_del = 0.1
x_del = 0.1
n=100;
T=matrix(zeros((n+1,1)))
for t in range(0,20):
T[t,0]=sin((t/19.)*2*pi)
Y=(c*t_del)/x_del
Y=0.8 #Force value to be less than one to illustrate dissipation
for Tint in range(0,100):
T=append(T,matrix(zeros((shape(T)[0],1))),axis=1)
for int in range(0,shape(T)[0]-1):
T[int+1,Tint+1]=T[int+1,Tint]-Y*((T[int+1,Tint]-T[int,Tint]))
plt.figure(figsize=(12,5))
plt.subplot(121)
plt.plot(arange(0,n+1)*x_del,array(T),linewidth=0.5);plt.title('Propagation of a Sine wave')
plt.ylabel('$\eta$');plt.xlabel('x')
plt.subplot(122)
plt.contourf(arange(0,n+1)*x_del,arange(0,101)*t_del,array(T).T,cmap=cm.RdBu_r)
plt.colorbar();plt.title('Signal over Time');plt.xlabel('x');plt.ylabel('Time')
plt.show()
c=1.
t_del = 0.1
x_del = 0.1
n=100;
t_steps=100
T=matrix(zeros((n+1,1)))
for t in range(3,23):
T[t,0]=sin(((t-3)/19.)*2*pi)
Y=(c*t_del)/x_del
Y=0.7 #Force value to be less than one to illustrate dissipation
for Tint in range(0,t_steps):
T=append(T,matrix(zeros((shape(T)[0],1))),axis=1)
for int in range(0,shape(T)[0]-1):
if int==0 or int==shape(T)[0]-2:
T[int+1,Tint+1]=T[int+1,Tint]-Y*((T[int+1,Tint]-T[int,Tint]))
elif int==1 or int==shape(T)[0]-3:
T[int+1,Tint+1]=T[int+1,Tint]-Y*((T[int+1,Tint]-T[int-1,Tint])/2)
else:
T[int+1,Tint+1]=T[int+1,Tint]-Y*((T[int-2,Tint]-8*T[int-1,Tint]+8*T[int+1,Tint]-T[int+2,Tint])/(12))
plt.figure(figsize=(12,5))
plt.subplot(121)
plt.plot(arange(0,n+1)*x_del,array(T),linewidth=0.5);plt.title('Propagation of a Sine wave')
plt.ylabel('$\eta$');plt.xlabel('x')
plt.subplot(122)
plt.contourf(arange(0,n+1)*x_del,arange(0,t_steps+1)*t_del,array(T).T,cmap=cm.RdBu_r)
plt.colorbar();plt.title('Signal over Time');plt.xlabel('x');plt.ylabel('Time')
plt.show()
c=1.
t_del = 0.1
x_del = 0.1
n=100;
t_steps=600
T=matrix(zeros((n+1,1)))
for t in range(3,23):
T[t,0]=sin(((t-3)/19.)*2*pi)
Y=(c*t_del)/x_del
Y=0.01 #Force value to be less than one to illustrate dissipation
for Tint in range(0,t_steps):
T=append(T,matrix(zeros((shape(T)[0],1))),axis=1)
for int in range(0,shape(T)[0]-1):
if Tint==0:
T[int+1,Tint+1]=T[int+1,Tint]-Y*((T[int+1,Tint]-T[int,Tint]))
else:
if int==0 or int==shape(T)[0]-2:
T[int+1,Tint+1]=T[int+1,Tint-1]-2*Y*((T[int+1,Tint]-T[int,Tint]))
elif int==1 or int==shape(T)[0]-3:
T[int+1,Tint+1]=T[int+1,Tint-1]-2*Y*((T[int+1,Tint]-T[int-1,Tint])/2)
else:
T[int+1,Tint+1]=T[int+1,Tint-1]-2*Y*((T[int-2,Tint]-8*T[int-1,Tint]+8*T[int+1,Tint]-T[int+2,Tint])/(12))
plt.figure(figsize=(12,5))
plt.subplot(121)
plt.plot(arange(0,n+1)*x_del,array(T),linewidth=0.05);plt.title('Propagation of a Sine wave')
plt.ylabel('$\eta$');plt.xlabel('x')
plt.subplot(122)
plt.contourf(arange(0,n+1)*x_del,arange(0,t_steps+1)*t_del,array(T).T,cmap=cm.RdBu_r)
plt.colorbar();plt.title('Signal over Time');plt.xlabel('x');plt.ylabel('Time')
plt.show()
dx=1
c = 1.
k=arange(0,pi+0.1,pi/100.)
plt.figure(figsize=(6,6))
plt.plot(k,c*k)
plt.plot(k,(c/dx)*sin(k*dx))
plt.plot(k,(c/(6*dx))*(8*sin(k*dx) - sin(2*k*dx)))
plt.legend(['$\omega = ck$','1$^{st}$ Order Upwind Scheme','4$^{th}$ Order Centered-Difference'])
plt.xticks(arange(0,1.1,1/10.)*pi,['0','0.1$\pi$', '0.2$\pi$', '0.3$\pi$','0.4$\pi$','0.5$\pi$','0.6$\pi$','0.7$\pi$','0.8$\pi$','0.9$\pi$','$\pi$']);
plt.xlim((0,1*pi));plt.ylim((0,1*pi))
plt.ylabel('$\omega$ - Frequency');plt.xlabel('$k$ - Wave Number');plt.title('Numerical Dispersion Relation');
plt.show()
##Attempt to find points where C=C_g for each of the methods; did not look nice.
#k = pi/2.
#dx = arange(0,100+(1/50.),1/50.)+(1/50.)
#plt.figure(figsize=(6,6))
#plt.plot(dx,abs(((c/(dx*k))*(sin(k*dx)))/((c/(k))*(cos(k*dx)))))
#plt.plot(dx,abs(((c/(6*dx*k))*(8*sin(k*dx) - sin(2*k*dx)))/((c/(3*dx*k))*(4*cos(k*dx) - cos(2*k*dx)))))
#plt.legend(['1$^{st}$ Order Upwind','4$^{th}$ Order Centered']);plt.xlim((0,10));plt.ylim((0,10));
#plt.xlabel('$\delta x$');plt.ylabel('C/C$_{g}$')
#plt.show()
#Provide user-defined inputs
dx = 0.2
dy = 0.2
n=100;
m=100;
t_steps = 1200;
x = arange(0,n+1)*dx
y = arange(0,m+1)*dy
#Create non-divergent velocity field
Phi = zeros((n+1,m+1))
for t in range(0,n+1):
for t2 in range(0,m+1):
Phi[t,t2]= 1 * exp(-(x[t] - median(x))**2 / (2*(max(x)/2))) * exp(-(y[t2] - median(y))**2 / (2*(max(y)/2)))
[v,u]=gradient(Phi,dx,dy); u= -u;
#Calculate Dt to avoid violating CFL condition
dt = (0.4*dx)/sqrt(nanmax(u)**2 + nanmax(v)**2)
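# Added sanity check (not in the original): report the Courant number implied by this dt;
# by construction it is ~0.4, safely below the stability limit of 1.
print('Courant number = %.2f' % (dt*sqrt(nanmax(u)**2 + nanmax(v)**2)/dx))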
#Initialize tracer distribution for advection
T=zeros((n+1,m+1))
for t in range(0,n+1):
for t2 in range(0,m+1):
T[t,t2]= 0.95 * exp(-(x[t] - 7)**2 / (2*2)) * exp(-(y[t2] - 7)**2 / (2*5))
#Plot Velocity field
plt.figure(figsize=(8,10));
plt.subplot(211)
plt.contourf(x,y,array(Phi).T,20,cmap=cm.RdBu_r)
plt.colorbar();plt.title('Streamfunction');plt.xlabel('x');plt.ylabel('y')
plt.subplot(223)
plt.contourf(x,y,array(u).T,20,cmap=cm.RdBu_r)
plt.colorbar();plt.title('U');plt.xlabel('x');plt.ylabel('y')
plt.subplot(224)
plt.contourf(x,y,array(v).T,20,cmap=cm.RdBu_r)
plt.colorbar();plt.title('V');plt.xlabel('x');plt.ylabel('y')
plt.show()
#Application of Advection scheme: Finite Volume + Flux form of equations
upos=[]; uneg=[];
vpos=[]; vneg=[];
T_old=zeros(shape(T))
T_new=zeros(shape(T))
T_old[:]=T[:]
#Begin time-stepping
for t in range(0,t_steps):
upos[:] = 0.5*(u[:]+abs(u[:])); upos = array(upos)
vpos[:] = 0.5*(v[:]+abs(v[:])); vpos = array(vpos)
uneg[:] = 0.5*(u[:]-abs(u[:])); uneg = array(uneg)
vneg[:] = 0.5*(v[:]-abs(v[:])); vneg = array(vneg)
Dx=zeros((n+1,m+1))
for i in range(0,n-1):
Dx[i,:]=(T[i+1,:]-T[i,:])
Dx[n,:]=(T[0,:]-T[n,:]) #Provide cyclical boundary conditions
Dy=zeros((n+1,m+1))
for i in range(0,m-1):
Dy[:,i]=(T[:,i+1]-T[:,i])
Dy[:,m]=(T[:,0]-T[:,m]) #Provide cyclical boundary conditions
#Calculate Fluxes (using QUICK)
fx=zeros((n+1,m+1))
for i in range(0,n):
if i==0 or i==n-1:
fx[i,:] = 0.5*(T[i,:]+ T[i+1,:])* u[i,:]
else:
fx[i,:] = u[i,:]*0.5*(T[i+1,:]+T[i,:]) -0.125*upos[i,:]*(Dx[i,:]- Dx[i-1,:]) -0.125*uneg[i,:]*(Dx[i+1,:]-Dx[i,:]); #3rd Order
fy=zeros((n+1,m+1))
for i in range(0,m):
if i==0 or i==m-1:
fy[:,i] = 0.5*(T[:,i]+ T[:,i+1])* v[:,i]
else:
fy[:,i] = v[:,i]*0.5*(T[:,i+1]+T[:,i]) -0.125*vpos[:,i]*(Dy[:,i]- Dy[:,i-1]) -0.125*vneg[:,i]*(Dy[:,i+1]-Dy[:,i]); #3rd Order
#Adjust to be a leapfrog scheme after the first timestep
if t==0:
C=1
else: C=1
for i in range(1,n):
for j in range(1,m):
T_new[i,j] = T[i,j] - C*dt*(-fx[i-1,j]+fx[i,j]-fy[i,j-1]+fy[i,j])
#Boundary
T_new[n,:]=T_new[n-1,:]
T_new[0,:]=T_new[1,:]
T_new[:,m]=T_new[:,m-1]
T_new[:,0]=T_new[:,1]
T_old[:]=T[:]
T[:]=T_new[:]
if t%100 == 0:
plt.figure(figsize=(10,4))
plt.subplot(121);plt.pcolor(x,y,T.T,cmap=cm.RdBu_r)
plt.clim([-1.2,1.2]);plt.plot(x,ones(shape(x))*(50*dy),linewidth=0.5,color='k')
plt.colorbar();plt.title('Tracer Distribution at time '+str(round(t*dt)))
plt.xlabel('x');plt.ylabel('y')
plt.subplot(122);plt.plot(x,T[:,50].T)
plt.title('Tracer Distribution across y = '+str(50*dy))
plt.ylabel('Concentration');plt.xlabel('x')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here we stopped the simulation using the Notebook interrupt button. Calling the start_macro() function again continues the processing and re-runs any task that wasn't completed in the first run, as well as any task that exited with errors.
Step2: Note
Step3: But the information is there
Step4: We can also see all task information by evaluating the result object as a standard Python dictionary
Step5: Saving output to re-process at a later time
Step6: The effect is that the result of an analysis can be saved to a file and the analysis restarted at a later time. The next example illustrates this.
Step7: All this stored data can be reloaded
|
<ASSISTANT_TASK:>
Python Code:
from anypytools import AnyPyProcess
app = AnyPyProcess(num_processes=1)
macro = [
'load "Knee.any"',
'operation Main.MyStudy.InverseDynamics',
'run',
]
macrolist = [macro]*20
app.start_macro(macrolist);
app.start_macro(macrolist);
from anypytools import AnyPyProcess
from anypytools.macro_commands import Load, OperationRun, Dump
app = AnyPyProcess()
macro = [
Load('Knee.any', defs={'N_STEP':10}),
OperationRun('Main.MyStudy.InverseDynamics'),
Dump('Main.MyStudy.Output.MaxMuscleActivity'),
]
result = app.start_macro(macro)[0]
result
result["task_macro"]
dict(result)
from anypytools import AnyPyProcess
app = AnyPyProcess()
macro = [
Load('Knee.any', defs={'N_STEP':10}),
OperationRun('Main.MyStudy.InverseDynamics'),
Dump('Main.MyStudy.Output.MaxMuscleActivity'),
]
output = app.start_macro(macro)
app = AnyPyProcess()
app.start_macro(output)
import os
from scipy.stats import distributions
from anypytools import AnyPyProcess, AnyMacro
from anypytools.macro_commands import Load, SetValue_random, OperationRun, Dump
tibia_knee_srel = distributions.norm([0, 0.18, 0], [0.005, 0.005, 0.005] )
femur_knee_srel = distributions.norm([0, -0.3, 0], [0.005, 0.005, 0.005] )
app = AnyPyProcess(silent=True)
mg = AnyMacro(number_of_macros = 500)
mg.extend([
Load('knee.any', defs = {'N_STEP':20}),
SetValue_random('Main.MyModel.Tibia.Knee.sRel', tibia_knee_srel),
SetValue_random('Main.MyModel.Femur.Knee.sRel', femur_knee_srel),
OperationRun('Main.MyStudy.InverseDynamics'),
Dump('Main.MyStudy.Output.MaxMuscleActivity'),
])
try:
os.remove('data.db')
except OSError:
pass
for macros in mg.create_macros_MonteCarlo(batch_size=50):
app.start_macro(macros)
app.save_results('data.db', append=True)
print('Data saved')
print('Done')
reloaded_results = app.load_results('data.db')
print('Entries in file: {}'.format(len(reloaded_results)))
reloaded_results[456:457]
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(reloaded_results['MaxMuscleAct'].T, 'b', lw=0.2, alpha = 0.3);
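# Added follow-up sketch (hedged, not in the original notebook): overlay the mean curve
# over all Monte Carlo samples on the traces plotted above, assuming the indexed result
# behaves as a NumPy array, as the previous line suggests.
mean_activity = reloaded_results['MaxMuscleAct'].mean(axis=0)
plt.plot(mean_activity, 'r', lw=2);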
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship
Step3: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcomes[i].
Step5: Tip
Step6: Question 1
Step7: Answer
Step9: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction
Step10: Question 2
Step11: Answer
Step13: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction
Step14: Question 3
Step15: Answer
Step18: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Step19: Question 4
|
<ASSISTANT_TASK:>
Python Code:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
def accuracy_score(truth, pred):
"""Returns accuracy score for input truth and predictions."""
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print accuracy_score(outcomes[:5], predictions)
def predictions_0(data):
"""Model with no features. Always predicts a passenger did not survive."""
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
print accuracy_score(outcomes, predictions)
vs.survival_stats(data, outcomes, 'Sex')
def predictions_1(data):
"""Model with one feature:
- Predict a passenger survived if they are female."""
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger['Sex'] == "female":
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
print accuracy_score(outcomes, predictions)
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
def predictions_2(data):
"""Model with two features:
- Predict a passenger survived if they are female.
- Predict a passenger survived if they are male and younger than 10."""
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger['Sex'] == "female" or passenger['Age'] < 10:
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
print accuracy_score(outcomes, predictions)
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'female'","SibSp == 3"])
def predictions_3(data):
"""Model with multiple features. Makes a prediction with an accuracy of at least 80%."""
predictions = []
for _, passenger in data.iterrows():
# Model uses the features Sex, Age, Pclass, and SibSp.
if passenger['Sex'] == "female":
if passenger['Pclass'] == 3 and passenger['Age']>=40 and passenger['Age']<60:
predictions.append(0)
elif passenger['SibSp'] == 3 and passenger['Age'] <= 10:
predictions.append(0)
else:
predictions.append(1)
else:
if passenger['Age'] < 10:
predictions.append(1)
elif passenger['Pclass'] == 1 and passenger['Age']>=20 and passenger['Age'] < 40:
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
print accuracy_score(outcomes, predictions)
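# Added check (hedged, not part of the original solution): break the final model's
# accuracy down by sex, reusing only objects already defined above.
results_by_sex = pd.DataFrame({'Sex': data['Sex'], 'actual': outcomes, 'predicted': predictions})
print results_by_sex.groupby('Sex').apply(lambda g: (g['actual'] == g['predicted']).mean())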
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 12.2 Perceptrons
Step2: 12.2.2 Convergence of Perceptrons
Step3: 12.2.7 Problems With Perceptrons
Step4: 12.2.8 Parallel Implementation of Perceptrons
|
<ASSISTANT_TASK:>
Python Code:
#exercise
show_image('fig12_5.png')
show_image('fig12_10.png')
show_image('fig12_11.png')
show_image('fig12_12.png')
#Exercise
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's generate a cubic network again, but with a different connectivity
Step2: This Network has pores distributed in a cubic lattice, but connected to diagonal neighbors due to the connectivity being set to 8 (the default is 6 which is orthogonal neighbors). The various options are outlined in the Cubic class's documentation which can be viewed with the Object Inspector in Spyder.
Step3: The above statements result in two distinct Geometry objects, each applying to different regions of the domain. geom1 applies to only the pores on the top and bottom surfaces (automatically labeled 'top' and 'bottom' during the network generation step), while geom2 applies to the pores 'not' on the top and bottom surfaces.
Step4: Each of the above lines produced an array of different length, corresponding to the number of pores assigned to each Geometry object. This is accomplished by the calls to geom1.Np and geom2.Np, which return the number of pores on each object.
Step5: The following code illustrates the shortcut approach, which accomplishes the same result as above in a single line
Step6: This shortcut works because the pn dictionary does not contain an array called 'pore.seed', so all associated Geometry objects are then checked for the requested array(s). If it is found, then OpenPNM essentially performs the interleaving of the data as demonstrated by the manual approach and returns all the values together in a single full-size array. If it is not found, then a standard KeyError message is received.
Step7: Pore-scale models tend to be the most complex (i.e. confusing) aspects of OpenPNM, so it's worth dwelling on the important points of the above two commands
Step8: Instead of using statistical distribution functions, the above lines use the neighbor model, which determines each throat value based on the values found in 'pore_prop' from its neighboring pores. In this case, each throat is assigned the minimum pore diameter of its two neighboring pores. Other options for mode include 'max' and 'mean'.
Step9: Create a Phase Object and Assign Thermophysical Property Models
Step10: Note that all Phase objects are automatically assigned standard temperature and pressure conditions when created. This can be adjusted
Step11: A variety of pore-scale models are available for calculating Phase properties, generally taken from correlations in the literature. An empirical correlation specifically for the viscosity of water is available
Step12: Create Physics Objects for Each Geometry
Step13: Next add the Hagen-Poiseuille model to both
Step14: The same function (mod) was passed as the model argument to both Physics objects. This means that both objects will calculate the hydraulic conductance using the same function. A model must be assigned to both objects in order for the 'throat.hydraulic_conductance' property to be defined everywhere in the domain, since each Physics applies to a unique selection of pores and throats.
Step15: Each Physics applies to the same subset of pores and throats as the Geometries, so its values are distributed spatially, but each Physics is also associated with a single Phase object. Consequently, only a Phase object can request all of the values within the domain pertaining to itself.
Step16: Now, let's alter the Geometry objects by assigning new random seeds, and adjust the temperature of water.
Step17: So far we have not run the regenerate command on any of these objects, which means that the above changes have not yet been applied to all the dependent properties. Let's do this and examine what occurs at each step
Step18: These two lines trigger the re-calculation of all the size related models on each Geometry object.
Step19: This line causes the viscosity to be recalculated at the new temperature. Let's confirm that the hydraulic conductance has NOT yet changed since we have not yet regenerated the Physics objects' models
Step20: Finally, if we regenerate phys1 and phys2 we can see that the hydraulic conductance will be updated to reflect the new sizes on the Geometries and the new temperature on the Phase
Step21: Determine Permeability Tensor by Changing Inlet and Outlet Boundary Conditions
Step22: Set boundary conditions for flow in the X-direction
Step23: The resulting pressure field can be seen using Paraview
Step24: To find K, we need to solve Darcy's law
Step25: The dimensions of the network can be determined manually from the shape and spacing specified during its generation
Step26: The pressure drop was specified as 1 atm when setting boundary conditions, so Kxx can be found as
Step27: We can either create 2 new Algorithm objects to perform the simulations in the other two directions, or reuse alg by adjusting the boundary conditions and re-running it.
Step28: The first call to set_boundary_conditions used the overwrite mode, which replaces all existing boundary conditions on the alg object with the specified values. The second call uses the merge mode which adds new boundary conditions to any already present, which is the default behavior.
Step29: The values of Kxx and Kyy should be nearly identical since both these two directions are parallel to the small surface pores. For the Z-direction
Step30: The permeability in the Z-direction is about half that in the other two directions due to the constrictions caused by the small surface pores.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy as sp
import openpnm as op
np.random.seed(10)
ws = op.Workspace()
ws.settings["loglevel"] = 40
pn = op.network.Cubic(shape=[20, 20, 10], spacing=0.0001, connectivity=8)
Ps1 = pn.pores(['top', 'bottom'])
Ts1 = pn.find_neighbor_throats(pores=Ps1, mode='union')
geom1 = op.geometry.GenericGeometry(network=pn, pores=Ps1, throats=Ts1, name='boundaries')
Ps2 = pn.pores(['top', 'bottom'], mode='not')
Ts2 = pn.find_neighbor_throats(pores=Ps2, mode='xnor')
geom2 = op.geometry.GenericGeometry(network=pn, pores=Ps2, throats=Ts2, name='core')
geom1['pore.seed'] = np.random.rand(geom1.Np)*0.5 + 0.2
geom2['pore.seed'] = np.random.rand(geom2.Np)*0.5 + 0.2
seeds = np.zeros_like(pn.Ps, dtype=float)
seeds[pn.pores(geom1.name)] = geom1['pore.seed']
seeds[pn.pores(geom2.name)] = geom2['pore.seed']
print(np.all(seeds > 0)) # Ensure all zeros are overwritten
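# Added check (hedged, not in the original): the automatic interleaving performed by the
# network on the next line should reproduce the manually assembled array above.
print(np.allclose(seeds, pn['pore.seed']))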
seeds = pn['pore.seed']
geom1.add_model(propname='pore.diameter',
model=op.models.geometry.pore_size.normal,
scale=0.00001, loc=0.00005,
seeds='pore.seed')
geom2.add_model(propname='pore.diameter',
model=op.models.geometry.pore_size.weibull,
shape=1.2, scale=0.00001, loc=0.00005,
seeds='pore.seed')
geom1.add_model(propname='throat.diameter',
model=op.models.misc.from_neighbor_pores,
pore_prop='pore.diameter',
mode='min')
geom2.add_model(propname='throat.diameter',
model=op.models.misc.from_neighbor_pores,
mode='min')
pn['pore.diameter'][pn['throat.conns']]
geom1.add_model(propname='throat.endpoints',
model=op.models.geometry.throat_endpoints.spherical_pores)
geom2.add_model(propname='throat.endpoints',
model=op.models.geometry.throat_endpoints.spherical_pores)
geom1.add_model(propname='throat.area',
model=op.models.geometry.throat_area.cylinder)
geom2.add_model(propname='throat.area',
model=op.models.geometry.throat_area.cylinder)
geom1.add_model(propname='pore.area',
model=op.models.geometry.pore_area.sphere)
geom2.add_model(propname='pore.area',
model=op.models.geometry.pore_area.sphere)
geom1.add_model(propname='throat.conduit_lengths',
model=op.models.geometry.throat_length.conduit_lengths)
geom2.add_model(propname='throat.conduit_lengths',
model=op.models.geometry.throat_length.conduit_lengths)
water = op.phases.GenericPhase(network=pn)
air = op.phases.GenericPhase(network=pn)
water['pore.temperature'] = 353 # K
water.add_model(propname='pore.viscosity',
model=op.models.phases.viscosity.water)
phys1 = op.physics.GenericPhysics(network=pn, phase=water, geometry=geom1)
phys2 = op.physics.GenericPhysics(network=pn, phase=water, geometry=geom2)
mod = op.models.physics.hydraulic_conductance.hagen_poiseuille
phys1.add_model(propname='throat.hydraulic_conductance', model=mod)
phys2.add_model(propname='throat.hydraulic_conductance', model=mod)
g = water['throat.hydraulic_conductance']
g1 = phys1['throat.hydraulic_conductance'] # Save this for later
g2 = phys2['throat.hydraulic_conductance'] # Save this for later
geom1['pore.seed'] = np.random.rand(geom1.Np)
geom2['pore.seed'] = np.random.rand(geom2.Np)
water['pore.temperature'] = 370 # K
geom1.regenerate_models()
geom2.regenerate_models()
water.regenerate_models()
print(np.all(phys1['throat.hydraulic_conductance'] == g1)) # g1 was saved above
print(np.all(phys2['throat.hydraulic_conductance'] == g2) ) # g2 was saved above
phys1.regenerate_models()
phys2.regenerate_models()
print(np.all(phys1['throat.hydraulic_conductance'] != g1))
print(np.all(phys2['throat.hydraulic_conductance'] != g2))
alg = op.algorithms.StokesFlow(network=pn, phase=water)
alg.set_value_BC(values=202650, pores=pn.pores('right'))
alg.set_value_BC(values=101325, pores=pn.pores('left'))
alg.run()
Q = alg.rate(pores=pn.pores('right'))
mu = np.mean(water['pore.viscosity'])
L = 20 * 0.0001
A = 20 * 10 * (0.0001**2)
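# Added note (hedged): Darcy's law Q = K*A*dP/(mu*L) rearranged for the permeability
# gives K = Q*mu*L/(A*dP); the applied pressure drop dP is 1 atm = 101325 Pa.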
Kxx = Q * mu * L / (A * 101325)
alg.set_value_BC(values=202650, pores=pn.pores('front'))
alg.set_value_BC(values=101325, pores=pn.pores('back'))
alg.run()
Q = alg.rate(pores=pn.pores('front'))
Kyy = Q * mu * L / (A * 101325)
alg.set_value_BC(values=202650, pores=pn.pores('top'))
alg.set_value_BC(values=101325, pores=pn.pores('bottom'))
alg.run()
Q = alg.rate(pores=pn.pores('top'))
L = 10 * 0.0001
A = 20 * 20 * (0.0001**2)
Kzz = Q * mu * L / (A * 101325)
print(Kxx, Kyy, Kzz)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Description
Step2: The excitation branch impedances are given referred to the high-voltage side of the transformer.
Step3: Therefore, the primary impedances referred to the low voltage (secondary) side are
Step4: and the excitation branch elements referred to the secondary side are
Step5: The resulting equivalent circuit is
Step6: So the rated current $I_\text{base}$ in the secondary side is
Step7: Therefore, the base impedance on the secondary side is
Step8: Since $Z_{pu} = Z_\text{actual} / Z_\text{base}$
Step9: Figure: equivalent circuit for this part of the problem (figs/Problem_2-01c.jpg).
Step10: The secondary current $I_S$ in this transformer is
Step11: Therefore, the primary voltage on this transformer (referred to the secondary side) is
Step12: The voltage regulation $VR$ of the transformer under these conditions is
Step13: (d)
Step14: (e)
|
<ASSISTANT_TASK:>
Python Code:
%pylab notebook
RP = 5.0 #[Ohm]
RS = 0.005 #[Ohm]
XP = 6.0j #[Ohm]
XS = 0.006j #[Ohm]
RC = 50e3 #[Ohm]
XM = 10e3j #[Ohm]
V_high = 8000 #[V]
V_low = 277 #[V]
S = 100e3 #[VA]
a = V_high/V_low
print('a = {:.2f}'.format(a))
R_P = RP / a**2
X_P = XP / a**2
print('R_P = {:.3f} Ω'.format(R_P))
print('X_P = {:.4f} Ω'.format(abs(X_P)))
R_C = RC / a**2
X_M = XM / a**2
print('R_C = {:.0f} Ω'.format(R_C))
print('X_M = {:.0f} Ω'.format(abs(X_M)))
S_base = 100e3 #[VA]
V_base = 277.0 #[V]
I_base = S_base / V_base
print('I_base = {:.0f} A'.format(I_base))
Z_base = V_base / I_base
print('Z_base = {:.3f} Ω'.format(Z_base))
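# Added sketch (hedged, not in the original solution): the total series impedance
# referred to the secondary, expressed in per-unit via Z_pu = Z_actual / Z_base.
Zeq_pu = (RP/a**2 + RS + XP/a**2 + XS) / Z_base
print('|Z_eq| = {:.4f} pu'.format(abs(Zeq_pu)))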
Req = RP/a**2 + RS #[Ohm]
Xeq = XP/a**2 + XS #[Ohm]
print('Req = {:.3f} Ω'.format(Req))
print('Xeq = {:.4f} Ω'.format(Xeq))
VS = 277 # [V]
PF = 0.85
Is = S / abs(VS) # absolute value of IS [A]
IS_angle = -arccos(PF) # angle of IS [rad]
IS = Is*cos(IS_angle) + Is*sin(IS_angle)*1j # value of IS [A]
print('IS = {:.0f} A ∠{:.1f}°'.format(*(abs(IS), degrees(IS_angle))))
V_P = VS + (Req + Xeq)*IS
print('V_P = {:.0f} V ∠{:.1f}°'.format(*(abs(V_P), angle(V_P, deg=True))))
VR = (abs(V_P)-abs(VS))/abs(VS) * 100
print('VR = {:.2f} %'.format(VR))
P_out = S * PF
P_cu = abs(IS)**2 * Req
P_core = abs(V_P)**2 / R_C
print('P_OUT = {:>6.1f} kW'.format(P_out/1000))
print('P_CU = {:>6.1f} W'.format(P_cu))
print('P_core = {:>6.1f} W'.format(P_core))
eta = P_out/ (P_out + P_cu + P_core) * 100
print('η = {:.1f} %'.format(eta))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The data are read as a single line of text before being restructured into the proper sparse-matrix format, namely a list of triplets containing the row index, the column index and the rating, for the filled-in values only.
Step2: 3. Optimisation du rang sur l'échantillon 10k
Step3: 3.2 Optimisation du rang de la NMF
Step4: 3.3 Résultats et test
Step5: Prévision finale de l'échantillon test.
Step6: 3 Analyse du fichier complet
Step7: 3.2 Echantillonnage
Step8: 3.3 Estimation du modèle
Step9: 3.4 Prévision de l'échantillon test et erreur
|
<ASSISTANT_TASK:>
Python Code:
sc
# Download the files if it has not been done already
# Set here the folder where you want to store the downloaded file.
DATA_PATH=""
import urllib.request
# reduced file
f = urllib.request.urlretrieve("http://www.math.univ-toulouse.fr/~besse/Wikistat/data/ml-ratings100k.csv",DATA_PATH+"ml-ratings100k.csv")
# Import the data as text into an RDD
small_ratings_raw_data = sc.textFile(DATA_PATH+"ml-ratings100k.csv")
# Identify and display the first line
small_ratings_raw_data_header = small_ratings_raw_data.take(1)[0]
print(small_ratings_raw_data_header)
# Create RDD without header
all_lines = small_ratings_raw_data.filter(lambda l : l!=small_ratings_raw_data_header)
# Split the fields (user, item, rating) into a new RDD
from pyspark.sql import Row
split_lines = all_lines.map(lambda l : l.split(","))
ratingsRDD = split_lines.map(lambda p: Row(user=int(p[0]), item=int(p[1]),
rating=float(p[2]), timestamp=int(p[3])))
# .cache(): the RDD is kept in memory once processed
ratingsRDD.cache()
# Display the two first rows
ratingsRDD.take(2)
# Convert RDD to DataFrame
ratingsDF = spark.createDataFrame(ratingsRDD)
ratingsDF.take(2)
tauxTrain=0.6
tauxVal=0.2
tauxTes=0.2
# If the total is less than 1, the data are subsampled.
(trainDF, validDF, testDF) = ratingsDF.randomSplit([tauxTrain, tauxVal, tauxTes])
# validation and test sets to predict, without the ratings
validDF_P = validDF.select("user", "item")
testDF_P = testDF.select("user", "item")
trainDF.take(2), validDF_P.take(2), testDF_P.take(2)
from pyspark.ml.recommendation import ALS
import math
import collections
# Initialize the random generator
seed = 5
# Maximum number of iterations (ALS)
maxIter = 10
# L1 regularization; to be optimized as well
regularization_parameter = 0.1
# Grid of rank values to optimize over
ranks = [4, 8, 12]
# Initialize variables
# dictionary to store the error for each tested rank
errors = collections.defaultdict(float)
tolerance = 0.02
min_error = float('inf')
best_rank = -1
best_iteration = -1
from pyspark.ml.evaluation import RegressionEvaluator
for rank in ranks:
als = ALS( rank=rank, seed=seed, maxIter=maxIter,
regParam=regularization_parameter)
model = als.fit(trainDF)
# Predictions for the validation sample
predDF = model.transform(validDF).select("prediction","rating")
# Remove unpredicted rows caused by users absent from the training set
pred_without_naDF = predDF.na.drop()
# Compute the RMSE
evaluator = RegressionEvaluator(metricName="rmse", labelCol="rating",
predictionCol="prediction")
rmse = evaluator.evaluate(pred_without_naDF)
print("Root-mean-square error for rank %d = "%rank + str(rmse))
errors[rank] = rmse
if rmse < min_error:
min_error = rmse
best_rank = rank
# Best solution
print('Optimal rank: %s' % best_rank)
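# Added summary (hedged, not in the original notebook): recap the validation RMSE
# obtained for each tested rank.
print("\n".join("rank %d -> validation RMSE %.4f" % (r, errors[r]) for r in ranks))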
# A few predictions
pred_without_naDF.take(3)
# Concatenate the train and validation DataFrames
trainValidDF = trainDF.union(validDF)
# Build a model on the enlarged training DataFrame with the rank fixed to its optimal value
als = ALS( rank=best_rank, seed=seed, maxIter=maxIter,
regParam=regularization_parameter)
model = als.fit(trainValidDF)
# Predictions for the test DataFrame
predDF = model.transform(testDF).select("prediction","rating")
# Remove unpredicted rows caused by users absent from the training set
pred_without_naDF = predDF.na.drop()
# Compute the RMSE
evaluator = RegressionEvaluator(metricName="rmse", labelCol="rating",
predictionCol="prediction")
rmse = evaluator.evaluate(pred_without_naDF)
print("Root-mean-square error for rank %d = "%best_rank + str(rmse))
# Download the files if it has not been done already
import urllib.request
# full file, but compressed
f = urllib.request.urlretrieve("http://www.math.univ-toulouse.fr/~besse/Wikistat/data/ml-ratings20M.zip",DATA_PATH+"ml-ratings20M.zip")
#Unzip downloaded file
import zipfile
zip_ref = zipfile.ZipFile(DATA_PATH+"ml-ratings20M.zip", 'r')
zip_ref.extractall(DATA_PATH)
zip_ref.close()
# Import the data as text into an RDD
ratings_raw_data = sc.textFile(DATA_PATH+"ratings20M.csv")
# Identify and display the first line
ratings_raw_data_header = ratings_raw_data.take(1)[0]
ratings_raw_data_header
# Create RDD without header
all_lines = ratings_raw_data.filter(lambda l : l!=ratings_raw_data_header)
# Split the fields (user, item, rating) into a new RDD
split_lines = all_lines.map(lambda l : l.split(","))
ratingsRDD = split_lines.map(lambda p: Row(user=int(p[0]), item=int(p[1]),
rating=float(p[2]), timestamp=int(p[3])))
# Display the two first rows
ratingsRDD.take(2)
# Convert RDD to DataFrame
ratingsDF = spark.createDataFrame(ratingsRDD)
ratingsDF.take(2)
tauxTest=0.1
# If the total is less than 1, the data are subsampled.
(trainTotDF, testDF) = ratingsDF.randomSplit([1-tauxTest, tauxTest])
# Subsample the training set so that increasing
# training-set sizes can be tested
tauxEch=0.2
(trainDF, DropData) = trainTotDF.randomSplit([tauxEch, 1-tauxEch])
testDF.take(2), trainDF.take(2)
import time
time_start=time.time()
# Initialize the random generator
seed = 5
# Maximum number of iterations (ALS)
maxIter = 10
# L1 regularization (default value)
regularization_parameter = 0.1
best_rank = 8
# Estimate the model for the selected rank value
als = ALS(rank=best_rank, seed=seed, maxIter=maxIter,
regParam=regularization_parameter)
model = als.fit(trainDF)
time_end=time.time()
time_als=(time_end - time_start)
print("ALS prend %d s" %(time_als))
# Predictions for the test sample
predDF = model.transform(testDF).select("prediction","rating")
# Remove unpredicted rows caused by users absent from the training set
pred_without_naDF = predDF.na.drop()
# Compute the RMSE
evaluator = RegressionEvaluator(metricName="rmse", labelCol="rating",
predictionCol="prediction")
rmse = evaluator.evaluate(pred_without_naDF)
print("Root-mean-square error for rank %d = "%best_rank + str(rmse))
trainDF.count()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then we define the scenario we are considering. In full generality we define it with the three parameters $(N,m,d)$, corresponding to the case of $N$ parties, $m$ measurement choices with $d$ outcomes each. For instance
Step2: We generated the symbolic variables defining the measurements. They are treated as commuting variables, since to test for locality, we use the NPA hierarchy with commuting measurements. Given that we will always work in the correlator space, we define the substitution rule $(\mathcal{M}_{x_i}^{(i)})^2 = \mathbb{1}$ for any $i = 1,...,N$ and $x_i = 0,...,m-1$.
Step3: Feasibility problem
Step4: Then we generate the moments, i.e. the values of the correlators
Step5: We construct the hierarchy at level two with corresponding substitutions and moments assignments
Step6: Finally, we solve the SDP feasibility problem
Step7: $GHZ$ state example
Step8: For the $GHZ$ state, we need to add the values of the two full-body correlators $\langle M_0^{(1)}M_0^{(2)}...M_0^{(N)} \rangle$ and $\langle M_1^{(1)}M_0^{(2)}...M_0^{(N)} \rangle$.
Step9: We construct the hierarchy at the hybrid level $\lbrace \mathcal{O}_2,M_0^{(1)}M_0^{(2)}...M_0^{(N)},M_1^{(1)}M_0^{(2)}...M_0^{(N)} \rbrace$ with corresponding substitutions and moments assignments
Step10: We solve the SDP feasibility problem
Step11: Noise robustness
Step12: $W$ state example
Step13: Then the relaxation
Step14: and finally we solve the SDP
Step15: $GHZ$ state example
Step16: The relaxation is generated as
Step17: Solving it, we get
Step18: Dual inequality
Step19: We only assign the values of up to the two-body correlators. This will ensure that we get the same expression for the Bell inequality as the one presented in the manuscript.
Step20: Then we assign to the second SDP the same dual variables as for the one solved before.
Step21: We extract the value of the dual variable for each monomial appearing in the moment matrix. For the function "extract_dual_value" to work properly we needed to generate a moment matrix in which the values of the correlators were not yet substituted.
Step22: Lastly, we normalize the inequality so as to be in the form presented before (i.e. with the classical bound equal to $1$).
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
from math import sqrt
from qutip import sigmax, sigmaz
from ncpol2sdpa import flatten, SdpRelaxation, generate_variables
from time import time
from sympy import S
from local_tools import generate_commuting_measurements, get_W_reduced, \
get_GHZ_reduced, get_moment_constraints, get_fullmeasurement
N, m, d = 7, 2, 2
configuration = [d for _ in range(m)]
measurements = [generate_commuting_measurements(configuration, chr(65+i))
for i in range(N)]
substitutions = {M**2: S.One for M in flatten(measurements)}
W_state = get_W_reduced(N)
W_operators = [[sigmax(), sigmaz()] for _ in range(N)]
time0 = time()
moments = get_moment_constraints(N, W_state, measurements, W_operators)
print("Constraints were generated in " + str(time()-time0) + " seconds.")
time0 = time()
sdp = SdpRelaxation(flatten(measurements))
sdp.get_relaxation(2, substitutions=substitutions, momentsubstitutions=moments)
print("SDP relaxation was generated in " + str(time()-time0) + " seconds.")
sdp.solve(solver="mosek")
print("SDP was solved in " + str(sdp.solution_time) + " seconds.")
print("SDP status is " + sdp.status)
GHZ_state = get_GHZ_reduced(4)
GHZ_operators = [[sigmax(), 1/sqrt(2)*(sigmaz()+sigmax())] for _ in range(N)]
time0 = time()
moments = get_moment_constraints(N, GHZ_state, measurements, GHZ_operators)
extra = get_fullmeasurement([0 for _ in range(N)], measurements)
extramonomials = [extra]
moments[extra] = 1
extra = get_fullmeasurement(flatten([1,[0 for _ in range(N-1)]]), measurements)
extramonomials.append(extra)
moments[extra] = (1/sqrt(2))
print("Constraints were generated in " + str(time()-time0) + " seconds.")
time0 = time()
sdp = SdpRelaxation(flatten(measurements), verbose=1, parallel=True)
sdp.get_relaxation(2, substitutions=substitutions,
momentsubstitutions=moments, extramonomials=extramonomials)
print("SDP relaxation was generated in " + str(time()-time0) + " seconds.")
sdp.solve(solver="mosek")
print("SDP was solved in " + str(sdp.solution_time) + " seconds.")
print("SDP status is " + sdp.status)
lambda_ = generate_variables("\lambda")[0]
time0 = time()
moments = get_moment_constraints(N, W_state, measurements, W_operators, lambda_)
print("Constraints were generated in " + str(time()-time0) + " seconds.")
sdp = SdpRelaxation(flatten(measurements), parameters=[lambda_],
verbose=1, parallel=True)
sdp.get_relaxation(2, objective=lambda_, substitutions=substitutions,
momentsubstitutions=moments)
print("SDP relaxation was generated in " + str(time()-time0) + " seconds.")
sdp.solve(solver="mosek")
print("SDP was solved in " + str(sdp.solution_time) + " seconds.")
print("lambda_min is " + str(sdp.primal))
time0 = time()
moments = get_moment_constraints(N, GHZ_state, measurements, GHZ_operators, lambda_)
extra = get_fullmeasurement([0 for _ in range(N)], measurements)
extramonomials = [extra]
moments[extra] = (1 - lambda_)
extra = get_fullmeasurement(flatten([1, [0 for _ in range(N-1)]]), measurements)
extramonomials.append(extra)
moments[extra] = (1/sqrt(2))*(1 - lambda_)
print("Constraints were generated in " + str(time()-time0) + " seconds.")
sdp = SdpRelaxation(flatten(measurements), parameters=[lambda_],verbose=1, parallel=True)
sdp.get_relaxation(2, objective=lambda_, substitutions=substitutions,
momentsubstitutions=moments, extramonomials=extramonomials)
print("SDP relaxation was generated in " + str(time()-time0) + " seconds.")
sdp.solve(solver="mosek")
print("SDP was solved in " + str(sdp.solution_time) + " seconds.")
print("lambda_min is " + str(sdp.primal))
time0 = time()
moments = get_moment_constraints(N, GHZ_state,measurements, GHZ_operators,
lambda_, 2)
extra = get_fullmeasurement([0 for _ in range(N)], measurements)
extramonomials = [extra]
moments[extra] = (1 - lambda_)
extra = get_fullmeasurement(flatten([1, [0 for _ in range(N-1)]]), measurements)
extramonomials.append(extra)
moments[extra] = (1/sqrt(2))*(1 - lambda_)
print("Constraints were generated in " + str(time()-time0) + " seconds.")
sdp = SdpRelaxation(flatten(measurements), parameters=[lambda_], verbose=1, parallel=True)
sdp.get_relaxation(2, objective=lambda_, substitutions=substitutions,
momentsubstitutions=moments, extramonomials=extramonomials)
print("SDP relaxation was generated in " + str(time()-time0) + " seconds.")
sdp.solve(solver="mosek")
print("SDP was solved in " + str(sdp.solution_time) + " seconds.")
print("lambda_min is " + str(sdp.primal))
time0 = time()
sdp2 = SdpRelaxation(flatten(measurements), verbose=1, parallel=True)
sdp2.get_relaxation(2, substitutions=substitutions, extramonomials=extramonomials)
print("Second SDP relaxation was generated in " + str(time()-time0) + " seconds.")
sdp2.status = sdp.status
sdp2.y_mat = [sdp.y_mat[1]]
time0 = time()
bound = sdp2.extract_dual_value(1)
ineq = 0
for monomial, index in sdp2.monomial_index.items():
ineq += round(2*sdp2.extract_dual_value(monomial),4)*monomial
print("Dual was generated in " + str(time()-time0) + " seconds")
print(ineq/bound)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's create a network that represents a rolling window in time (Aaron's "delay network"). The process determines what sort of pattern the network will be optimized for -- here we just go with white noise of a maximum of 3Hz. theta determines how big the rolling window is -- here we use 0.5 seconds.
Step2: Now we need to create the training data for decoding out of the rolling window. Our patterns are larger than the rolling window, so to create our training data we will take our patterns, shift them, and cut them down to the right size. In order to then give that to nengo, we also need to project from the window's space to the internal representation space (using the inv_basis).
Step3: Now we can create a connection optimized to do this decoding
Step4: Let's try feeding in those two patterns and see what the response is
Step5: It successfully detects the two frequencies, outputting 1 for the 1Hz pattern (pattern1) and -1 for the 0.5Hz (pattern2)!
|
<ASSISTANT_TASK:>
Python Code:
s_pattern = 2000 # number of data points in the pattern
t = np.arange(s_pattern)*0.001 # time points for the elements in the patter
pattern1 = np.sin(t*np.pi*2)
pattern2 = np.sin(0.5*t*np.pi*2)
plt.plot(t, pattern1, label='pattern1')
plt.plot(t, pattern2, label='pattern2')
plt.legend(loc='best')
plt.show()
net = nengo.Network()
with net:
process = nengo.processes.WhiteSignal(period=100., high=3., y0=0)
rw = nengolib.networks.RollingWindow(theta=0.5, n_neurons=3000, process=process, neuron_type=nengo.LIFRate())
s_window = 500
t_window = np.linspace(0, 1, s_window)
inv_basis = rw.inverse_basis(t_window)
eval_points = []
target = []
for i in range(s_pattern):
eval_points.append(np.dot(inv_basis, np.roll(pattern1, i)[:s_window]))
target.append([1])
eval_points.append(np.dot(inv_basis, np.roll(pattern2, i)[:s_window]))
target.append([-1])
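# Added sanity check (hedged, not in the original): one evaluation point and one target
# per shift of each pattern, i.e. 2*s_pattern rows in the window's state space.
print(np.shape(eval_points), np.shape(target))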
with net:
result = nengo.Node(None, size_in=1)
nengo.Connection(rw.state, result,
eval_points=eval_points, scale_eval_points=False,
function=target, synapse=0.1)
model = nengo.Network()
model.networks.append(net)
with model:
freqs = [1, 0.5]
def stim_func(t):
freq = freqs[int(t/5) % len(freqs)]
return np.sin(t*2*np.pi*freq)
stim = nengo.Node(stim_func)
nengo.Connection(stim, rw.input, synapse=None)
p_result = nengo.Probe(result)
p_stim = nengo.Probe(stim)
sim = nengo.Simulator(model)
sim.run(10)
plt.plot(sim.trange(), sim.data[p_stim], label='input')
plt.plot(sim.trange(), sim.data[p_result], label='output')
plt.legend(loc='best')
model = nengo.Network()
model.networks.append(net)
with model:
freqs = [1, 0.5, 0.75, 0.875, 0.625]
def stim_func(t):
freq = freqs[int(t/5) % len(freqs)]
return np.sin(t*2*np.pi*freq)
stim = nengo.Node(stim_func)
nengo.Connection(stim, rw.input, synapse=None)
p_result = nengo.Probe(result)
p_stim = nengo.Probe(stim)
sim = nengo.Simulator(model)
sim.run(25)
plt.plot(sim.trange(), sim.data[p_stim], label='input')
plt.plot(sim.trange(), sim.data[p_result], label='output')
plt.legend(loc='best')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bracket Indexing and Selection
Step2: Broadcasting
Step3: Now note the changes also occur in our original array!
Step4: Data is not copied, it's a view of the original array! This avoids memory problems!
Step5: Indexing a 2D array (matrices)
Step6: More Indexing Help
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
#Creating sample array
arr = np.arange(0, 11)
#Show
arr
#Get a value at an index
arr[8]
#Get values in a range
arr[1:5]
#Get values in a range
arr[0:5]
#Setting a value with index range (Broadcasting)
arr[0:5] = 100
#Show
arr
# Reset array, we'll see why I had to reset in a moment
arr = np.arange(0, 11)
#Show
arr
#Important notes on Slices
slice_of_arr = arr[0:6]
#Show slice
slice_of_arr
#Change Slice
slice_of_arr[:] = 99
#Show Slice again
slice_of_arr
arr
#To get a copy, need to be explicit
arr_copy = arr.copy()
arr_copy
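# Added demonstration (not in the original notebook): modifying the explicit copy
# does NOT affect the original array, unlike the slice view above.
arr_copy[:] = 0
print(arr_copy) # all zeros
print(arr) # original array is unchanged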
arr_2d = np.array(([5, 10, 15], [20, 25, 30], [35, 40, 45]))
#Show
arr_2d
#Indexing row
arr_2d[1]
# Format is arr_2d[row][col] or arr_2d[row,col]
# Getting individual element value
arr_2d[1][0]
# Getting individual element value
arr_2d[1,0]
# 2D array slicing
#Shape (2,2) from top right corner
arr_2d[:2,1:]
#Shape bottom row
arr_2d[2]
#Shape bottom row
arr_2d[2,:]
arr = np.arange(1, 11)
arr
arr > 4
bool_arr = arr > 4
bool_arr
arr[bool_arr]
arr[arr > 2]
x = 2
arr[arr > x]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will train the classifier on all left visual vs auditory trials
Step2: Score on the epochs where the stimulus was presented to the right.
Step3: Plot
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Jean-Remi King <jeanremi.king@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
import mne
from mne.datasets import sample
from mne.decoding import GeneralizingEstimator
print(__doc__)
# Preprocess data
data_path = sample.data_path()
# Load and filter data, set up epochs
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
events_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
picks = mne.pick_types(raw.info, meg=True, exclude='bads') # Pick MEG channels
raw.filter(1., 30., fir_design='firwin') # Band pass filtering signals
events = mne.read_events(events_fname)
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2,
'Visual/Left': 3, 'Visual/Right': 4}
tmin = -0.050
tmax = 0.400
decim = 2 # decimate to make the example faster to run
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=tmin, tmax=tmax,
proj=True, picks=picks, baseline=None, preload=True,
reject=dict(mag=5e-12), decim=decim)
clf = make_pipeline(StandardScaler(), LogisticRegression(solver='lbfgs'))
time_gen = GeneralizingEstimator(clf, scoring='roc_auc', n_jobs=1,
verbose=True)
# Fit classifiers on the epochs where the stimulus was presented to the left.
# Note that the experimental condition y indicates auditory or visual
time_gen.fit(X=epochs['Left'].get_data(),
y=epochs['Left'].events[:, 2] > 2)
scores = time_gen.score(X=epochs['Right'].get_data(),
y=epochs['Right'].events[:, 2] > 2)
fig, ax = plt.subplots(1)
im = ax.matshow(scores, vmin=0, vmax=1., cmap='RdBu_r', origin='lower',
extent=epochs.times[[0, -1, 0, -1]])
ax.axhline(0., color='k')
ax.axvline(0., color='k')
ax.xaxis.set_ticks_position('bottom')
ax.set_xlabel('Testing Time (s)')
ax.set_ylabel('Training Time (s)')
ax.set_title('Generalization across time and condition')
plt.colorbar(im, ax=ax)
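# Added illustration (hedged, not part of the original example): plot the diagonal of the
# generalization matrix, i.e. decoding performance when training and testing times match.
# numpy is imported here only for this added figure.
import numpy as np
fig_diag, ax_diag = plt.subplots(1)
ax_diag.plot(epochs.times, np.diag(scores), label='train time == test time')
ax_diag.axhline(0.5, color='k', linestyle='--', label='chance (AUC = 0.5)')
ax_diag.set_xlabel('Time (s)')
ax_diag.set_ylabel('AUC')
ax_diag.legend()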
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's create a sample JSON file and save it to a variable called input.
Step2: As you can see here, our JSON document is nothing more than a list of two dictionaries with 3 keys each (and a value for each key). To parse it as a usual Python object (a list in this case), the loads() function from the json package is used.
Step3: Reading JSON from a file
Step4: using with open()
Step5: Writing JSON files
Step6: Yet, as you may have already noticed, the saved JSON file is not human-readable. To make it more user-friendly, we may sort the keys and provide an indentation of 4 spaces.
Step7: Converting JSON to CSV
|
<ASSISTANT_TASK:>
Python Code:
import json
input = '''[
{ "id" : "01",
"status" : "Instructor",
"name" : "Hrant"
} ,
{ "id" : "02",
"status" : "Student",
"name" : "Jimmy"
}
]'''
# parse/load string
data = json.loads(input)
# data is a usual list
type(data)
print(data)
from pprint import pprint
pprint(data)
print 'User count:', len(data), "\n"
data[0]['name']
for element in data:
print 'Name: ', element['name']
print 'Id: ', element['id']
print 'Status: ', element['status'], "\n"
import pandas as pd
address = "C:\Data_scraping\JSON\sample_data.json"
my_json_data = pd.read_json(address)
my_json_data.head()
import json
with open(address,"r") as file:
local_json = json.load(file)
print(local_json)
type(local_json)
pprint(local_json)
with open('our_json_w.json', 'w') as output:
json.dump(local_json, output)
with open('our_json_w.json', 'w') as output:
json.dump(local_json, output, sort_keys = True, indent = 4)
import csv, json
address = "C:\Data_scraping\JSON\sample_data.json"
with open(address,"r") as file:
local_json = json.load(file)
with open("from_json.csv", "w") as f:
writer = csv.writer(f)
writer.writerow(["ID","Name","Status"])
for item in local_json:
writer.writerow([item['id'],item['name'],item['status']])
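# Added check (hedged, not in the original notebook): read the freshly written CSV back
# with pandas (imported earlier) to confirm the conversion worked.
check = pd.read_csv("from_json.csv")
print check.head()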
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Setting up the Problem
Step4: Note that these functions have a bunch of arguments that are optional. If you do not provide them, they will be fixed to the pre-specified default.
Step5: Now, compute, plot, and label the luminosity distance $d_L(z; \Omega_{m,0}, \Omega_\Lambda)$ for different combinations of $(\Omega_{m,0}, \Omega_\Lambda)$
Step7: As discussed above, what we measure is not actually the luminosity distance directly, but rather a flux $F$. In general, most astronomical measurements are not even reported in flux, but rather in magnitudes, a logarithmic unit of brightness. Assuming we know the intrinsic brightness of a source, we can convert from distance $d_L$ to magnitude $m$ using
Step8: Part 2
Step10: Part 3
Step11: Part 4
Step12: Using scipy.optimize.minimize, find the best-fit combination of $(\Omega_{m,0}, \Omega_{\Lambda,0})$ using the loss function defined above starting from $(0.5, 0.5)$ and subject to the constraints $0 < \Omega_{m,0} < 1$ and $0 < \Omega_{\Lambda,0} < 1$.
Step13: Plot a 2-D histogram of 10000 draws from the resulting Multivariate Normal distribution using numpy.random.multivariate_normal. Limit the x and y axes to be from 0 to 1.
Step14: Brute Force
Step15: Just like with our galaxy example, we can convert each of our $\chi^2$ values to a corresponding weight that we assign each combination of parameters. This is computed via
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
from six.moves import range
%matplotlib inline
# function to compute the luminosity distance.
def d_L(zs, Omega_m=0.3, Omega_L=0.7,
Omega_r=0.0, H0=100., N=1000, zgrid=None):
Compute luminosity distance. See `cosmocalc`
for information on the other parameters.
Parameters
----------
zs : array
Redshift grid.
zgrid : array, optional
The redshift grid used to compute the
baseline relation (which is subsequently interpolated).
Default is `arange(0, 2.5+1e-5, 0.02)`.
Returns
-------
DL_Mpc : array
Luminosity distance in Mpc.
if zgrid is None: # define our default grid if none is provided
zgrid = np.arange(0, 2.5+1e-5, 0.02)
d_l = np.array([cosmocalc(z, H0=H0, N=N,
Omega_m=Omega_m,
Omega_L=Omega_L,
Omega_r=Omega_r)[3] # compute d_L
for z in zgrid])
return np.interp(zs, zgrid, d_l) # interpolate results along `zgrid`
# Define a very simple cosmology calculator.
def cosmocalc(z, Omega_m=0.3, Omega_L=0.7,
Omega_r=0.0, H0=70., N=1000):
    """Compute cosmological quantities.
Parameters
----------
z : array
Redshift.
Omega_m : float, optional
Matter density today (in units of the critical density).
Default `0.3`.
Omega_L : float, optional
Dark energy density today (in units of the critical density).
Default `0.7`.
Omega_K : float, optional
Curvature energy density today (in units of the
critical density). Default `0.0`.
Omega_r : float, optional
Radiation energy density today (in units of the
critical density). Default `0.0`.
H0 : float, optional
Hubble constant today (`z=0`). Default is `70.`.
N : int, optional
The number of points used in the integral.
Default is `10000`.
Returns
-------
zage_Gyr : float
Age at z.
DCMR_Mpc : float
Comoving distance in Mpc.
DA_Mpc : float
Angular diameter distance in Mpc.
DL_Mpc : float
Luminosity distance in Mpc.
V_Gpc : float
        Comoving volume in Gpc^3.
    """
Omega_K = 1. - Omega_m - Omega_r - Omega_L
# Initialize constants.
c = 299792.458 # velocity of light in km/sec
Tyr = 977.8 # conversion from 1/H to Gyr
h = H0/100. # normalized H0
az = 1.0 / (1. + z) # scale factor at redshift z
# Compute age.
age = 0.
for i in range(N):
a = az * (i + 0.5) / N
adot = np.sqrt(Omega_K + (Omega_m / a) + (Omega_r / a**2)
+ (Omega_L * a**2))
age = age + 1. / adot
zage = az * age / N
zage_Gyr = (Tyr/H0) * zage
# Compute comoving radial distance.
DTT = 0.0
DCMR = 0.0
for i in range(N):
a = az + (1 - az) * (i + 0.5) / N
adot = np.sqrt(Omega_K + (Omega_m / a) + (Omega_r / a**2)
+ (Omega_L * a**2))
DTT = DTT + 1. / adot
DCMR = DCMR + 1. / (a * adot)
DTT = (1. - az) * DTT / N
DCMR = (1. - az) * DCMR / N
# Compute/convert quantities.
age = DTT + zage # age [1/H]
age_Gyr = (Tyr / H0) * age # age [Gyr]
DTT_Gyr = (Tyr / H0) * DTT # travel time [Gyr]
DCMR_Gyr = (Tyr / H0) * DCMR # comoving R [Glyr]
DCMR_Mpc = (c / H0) * DCMR # comoving R [Mpc]
# Compute tangential quantities.
ratio = 1.00
x = np.sqrt(abs(Omega_K)) * DCMR
if x > 0.1:
if Omega_K > 0:
ratio = 0.5 * (np.exp(x) - np.exp(-x)) / x
else:
ratio = np.sin(x) / x
else:
y = x**2
if Omega_K < 0:
y = -y
ratio = 1. + y / 6. + y**2 / 120.
# Compute/convert quantities.
DCMT = ratio * DCMR # comoving T [1/H]
DA = az * DCMT # angular diameter distance [1/H]
DA_Mpc = (c / H0) * DA # DA [Mpc]
kpc_DA = DA_Mpc / 206.264806 # DA [Mpc / per arcsec]
DA_Gyr = (Tyr / H0) * DA # DA [Glyr]
DL = DA / (az * az) # luminosity distance [1/H]
DL_Mpc = (c / H0) * DL # DL [Mpc]
DL_Gyr = (Tyr / H0) * DL # DL [Glyr]
# Compute comoving volume.
ratio = 1.00
x = np.sqrt(abs(Omega_K)) * DCMR
if x > 0.1:
if Omega_K > 0:
ratio = (0.125 * (np.exp(2. * x) - np.exp(-2. * x))
- x / 2.) / (x**3 / 3.)
else:
ratio = (x / 2. - np.sin(2. * x) / 4.) / (x**3 / 3.)
else:
y = x**2
if Omega_K < 0:
y = -y
ratio = 1. + y / 5. + (2. / 105.) * y**2
VCM = ratio * DCMR**3 / 3 # comoving volume [1/H]
V_Gpc = 4. * np.pi * (1e-3 * c / H0)**3 * VCM # convert to Gpc
return zage_Gyr, DCMR_Mpc, DA_Mpc, DL_Mpc, V_Gpc
# define redshift grid
zgrid = ...
# compute luminosity distances
dists = d_L(...)
# plot results
# remember to label your axes!
plt.plot(...)
# compute dists for each combination (1->5)
...
# plot results
# remember to label your axes!
...
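# A sketch of one possible way to fill in the cell above. The five
# (Omega_m, Omega_L) pairs are illustrative assumptions, not necessarily the
# combinations the exercise intends; `np` and `plt` are assumed to be imported
# earlier in the notebook.
zgrid_demo = np.arange(0., 2.5, 0.05)
for Om, OL in [(1.0, 0.0), (0.3, 0.0), (0.3, 0.7), (0.0, 1.0), (0.0, 0.0)]:
    plt.plot(zgrid_demo, d_L(zgrid_demo, Omega_m=Om, Omega_L=OL),
             label=r'$\Omega_m={0}$, $\Omega_\Lambda={1}$'.format(Om, OL))
plt.xlabel('Redshift $z$')
plt.ylabel('Luminosity distance $d_L$ [Mpc]')
plt.legend()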
def mag(d, M=-19.5):
    """Convert distance(s) `d` into magnitude(s) given
    absolute magnitude `M`. Assumes `d` is in units of Mpc.
    """
return ...
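# A minimal sketch of one possible implementation, assuming the standard
# distance modulus with d in Mpc. It is given a different name so the exercise
# skeleton above stays untouched; `np` is assumed to be imported.
def mag_example(d, M=-19.5):
    return M + 5. * np.log10(d) + 25.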
# convert from dists to mags
...
# plot results
# remember to label your axes!
# read in data
z_spec, mag_spec, magerr_spec = ...
# plot histogram of redshifts
...
# plot magnitudes vs redshifts
...
# Define our chi2 function.
def chi2(pred, obs, err):
    """Compute the chi2 between the predictions `pred`
    and the observations `obs` given the observed errors `err`.
    """
return ...
# compute chi2 for each of the combinations
# should follow something like the example shown below
for Om, OL in stuff:
dpred = d_L(z_spec, ...)
mpred = mag(...)
chisquare = chi2(...)
print('Omega_m, Omega_L = {0}, {1}'.format(Om, OL),
'; chi2 = ', chisquare)
# Define our modified function to match the format for `minimize`.
def calc_chi2(theta):
Om, OL = theta
dpred = d_L(z_spec, Omega_m=Om, Omega_L=OL)
mpred = mag(dpred)
chisquare = chi2(mpred, mag_spec, magerr_spec)
return chisquare
# Find the best fit.
x0 = ... # initial guess
bounds = ... # bounds for guesses (see documentation)
results = minimize(calc_chi2, x0, bounds=bounds) # minimize!
theta_best = results['x'] # get best fit
theta_cov = results['hess_inv'].todense() # get covariance matrix
# print results
print(theta_best)
print(theta_cov)
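# For reference, the placeholders above could be filled following the description
# (start the search at (0.5, 0.5) and constrain both parameters to (0, 1)).
# This is only a sketch, shown with separate names so the skeleton stays untouched:
x0_example = np.array([0.5, 0.5])
bounds_example = [(0., 1.), (0., 1.)]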
# plot 2-D histogram
# remember to label your axes!
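# A sketch of the 2-D histogram described above. It assumes `theta_best` and
# `theta_cov` were successfully computed by the fit; `np` and `plt` assumed imported.
draws = np.random.multivariate_normal(theta_best, theta_cov, size=10000)
plt.hist2d(draws[:, 0], draws[:, 1], bins=50)
plt.xlim(0, 1)
plt.ylim(0, 1)
plt.xlabel(r'$\Omega_{m,0}$')
plt.ylabel(r'$\Omega_{\Lambda,0}$')
plt.colorbar(label='Number of draws')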
# define our grid
Omega_m = np.arange(0, 1.01, 0.04)  # grid in Omega_m
Omega_L = np.arange(0, 1.01, 0.04)  # grid in Omega_L
# do our brute-force grid search
chi2_arr = np.zeros((len(Omega_m), len(Omega_L)))
for ...: # loop over Omega_m
for ...: # loop over Omega_L
# compute chi2
chisquare = ...
# fill in array
chi2_arr[i, j] = chisquare
# Print progress.
sys.stderr.write('\r Omega_m = {0}, Omega_L = {1} '
.format(Om, OL))
# convert chi2 to weights
weights = ...
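# A common choice here (an assumption about what the exercise intends) is the
# relative likelihood exp(-chi2 / 2), with the minimum subtracted for numerical
# stability. Shown under a separate name so the placeholder above stays untouched:
weights_example = np.exp(-0.5 * (chi2_arr - chi2_arr.min()))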
# plot results
# remember to label your axes!
plt.imshow(weights.T, origin=..., extent=...)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the preprocessed dataset containing features extracted by GoogLeNet
Step2: Count words occurring at least 5 times and construct mapping int <-> word
Step3: Define symbolic variables for the various inputs
|
<ASSISTANT_TASK:>
Python Code:
import pickle
import random
import numpy as np
import theano
import theano.tensor as T
import lasagne
from collections import Counter
from lasagne.utils import floatX
dataset = pickle.load(open('coco_with_cnn_features.pkl', 'rb'))
allwords = Counter()
for item in dataset:
for sentence in item['sentences']:
allwords.update(sentence['tokens'])
vocab = [k for k, v in allwords.items() if v >= 5]
vocab.insert(0, '#START#')
vocab.append('#END#')
word_to_index = {w: i for i, w in enumerate(vocab)}
index_to_word = {i: w for i, w in enumerate(vocab)}
len(vocab)
SEQUENCE_LENGTH = 32
MAX_SENTENCE_LENGTH = SEQUENCE_LENGTH - 3 # 1 for image, 1 for start token, 1 for end token
BATCH_SIZE = 100
CNN_FEATURE_SIZE = 1000
EMBEDDING_SIZE = 512
# Returns a list of tuples (cnn features, list of words, image ID)
def get_data_batch(dataset, size, split='train'):
items = []
while len(items) < size:
item = random.choice(dataset)
if item['split'] != split:
continue
sentence = random.choice(item['sentences'])['tokens']
if len(sentence) > MAX_SENTENCE_LENGTH:
continue
items.append((item['cnn features'], sentence, item['cocoid']))
return items
# Convert a list of tuples into arrays that can be fed into the network
def prep_batch_for_network(batch):
x_cnn = floatX(np.zeros((len(batch), 1000)))
x_sentence = np.zeros((len(batch), SEQUENCE_LENGTH - 1), dtype='int32')
y_sentence = np.zeros((len(batch), SEQUENCE_LENGTH), dtype='int32')
mask = np.zeros((len(batch), SEQUENCE_LENGTH), dtype='bool')
for j, (cnn_features, sentence, _) in enumerate(batch):
x_cnn[j] = cnn_features
i = 0
for word in ['#START#'] + sentence + ['#END#']:
if word in word_to_index:
mask[j, i] = True
y_sentence[j, i] = word_to_index[word]
x_sentence[j, i] = word_to_index[word]
i += 1
mask[j, 0] = False
return x_cnn, x_sentence, y_sentence, mask
# sentence embedding maps integer sequence with dim (BATCH_SIZE, SEQUENCE_LENGTH - 1) to
# (BATCH_SIZE, SEQUENCE_LENGTH-1, EMBEDDING_SIZE)
l_input_sentence = lasagne.layers.InputLayer((BATCH_SIZE, SEQUENCE_LENGTH - 1))
l_sentence_embedding = lasagne.layers.EmbeddingLayer(l_input_sentence,
input_size=len(vocab),
output_size=EMBEDDING_SIZE,
)
# cnn embedding changes the dimensionality of the representation from 1000 to EMBEDDING_SIZE,
# and reshapes to add the time dimension - final dim (BATCH_SIZE, 1, EMBEDDING_SIZE)
l_input_cnn = lasagne.layers.InputLayer((BATCH_SIZE, CNN_FEATURE_SIZE))
l_cnn_embedding = lasagne.layers.DenseLayer(l_input_cnn, num_units=EMBEDDING_SIZE,
nonlinearity=lasagne.nonlinearities.identity)
l_cnn_embedding = lasagne.layers.ReshapeLayer(l_cnn_embedding, ([0], 1, [1]))
# the two are concatenated to form the RNN input with dim (BATCH_SIZE, SEQUENCE_LENGTH, EMBEDDING_SIZE)
l_rnn_input = lasagne.layers.ConcatLayer([l_cnn_embedding, l_sentence_embedding])
l_dropout_input = lasagne.layers.DropoutLayer(l_rnn_input, p=0.5)
l_lstm = lasagne.layers.LSTMLayer(l_dropout_input,
num_units=EMBEDDING_SIZE,
unroll_scan=True,
grad_clipping=5.)
l_dropout_output = lasagne.layers.DropoutLayer(l_lstm, p=0.5)
# the RNN output is reshaped to combine the batch and time dimensions
# dim (BATCH_SIZE * SEQUENCE_LENGTH, EMBEDDING_SIZE)
l_shp = lasagne.layers.ReshapeLayer(l_dropout_output, (-1, EMBEDDING_SIZE))
# decoder is a fully connected layer with one output unit for each word in the vocabulary
l_decoder = lasagne.layers.DenseLayer(l_shp, num_units=len(vocab), nonlinearity=lasagne.nonlinearities.softmax)
# finally, the separation between batch and time dimension is restored
l_out = lasagne.layers.ReshapeLayer(l_decoder, (BATCH_SIZE, SEQUENCE_LENGTH, len(vocab)))
# cnn feature vector
x_cnn_sym = T.matrix()
# sentence encoded as sequence of integer word tokens
x_sentence_sym = T.imatrix()
# mask defines which elements of the sequence should be predicted
mask_sym = T.imatrix()
# ground truth for the RNN output
y_sentence_sym = T.imatrix()
output = lasagne.layers.get_output(l_out, {
l_input_sentence: x_sentence_sym,
l_input_cnn: x_cnn_sym
})
def calc_cross_ent(net_output, mask, targets):
# Helper function to calculate the cross entropy error
preds = T.reshape(net_output, (-1, len(vocab)))
targets = T.flatten(targets)
cost = T.nnet.categorical_crossentropy(preds, targets)[T.flatten(mask).nonzero()]
return cost
loss = T.mean(calc_cross_ent(output, mask_sym, y_sentence_sym))
MAX_GRAD_NORM = 15
all_params = lasagne.layers.get_all_params(l_out, trainable=True)
all_grads = T.grad(loss, all_params)
all_grads = [T.clip(g, -5, 5) for g in all_grads]
all_grads, norm = lasagne.updates.total_norm_constraint(
all_grads, MAX_GRAD_NORM, return_norm=True)
updates = lasagne.updates.adam(all_grads, all_params, learning_rate=0.001)
f_train = theano.function([x_cnn_sym, x_sentence_sym, mask_sym, y_sentence_sym],
[loss, norm],
updates=updates
)
f_val = theano.function([x_cnn_sym, x_sentence_sym, mask_sym, y_sentence_sym], loss)
for iteration in range(20000):
x_cnn, x_sentence, y_sentence, mask = prep_batch_for_network(get_data_batch(dataset, BATCH_SIZE))
loss_train, norm = f_train(x_cnn, x_sentence, mask, y_sentence)
if not iteration % 250:
print('Iteration {}, loss_train: {}, norm: {}'.format(iteration, loss_train, norm))
try:
batch = get_data_batch(dataset, BATCH_SIZE, split='val')
x_cnn, x_sentence, y_sentence, mask = prep_batch_for_network(batch)
loss_val = f_val(x_cnn, x_sentence, mask, y_sentence)
print('Val loss: {}'.format(loss_val))
except IndexError:
continue
param_values = lasagne.layers.get_all_param_values(l_out)
d = {'param values': param_values,
'vocab': vocab,
'word_to_index': word_to_index,
'index_to_word': index_to_word,
}
pickle.dump(d, open('lstm_coco_trained.pkl', 'wb'), protocol=pickle.HIGHEST_PROTOCOL)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Strings: you can use the %s placeholder to format strings into your print statements
|
<ASSISTANT_TASK:>
Python Code:
# this is a comment and will not run in the code
'''this is just a multi line comment'''
pwd
#addition
2+1
# substraction
2-1
1-2
2*2
3/2
3.0/2
float(3)/2
3/float(2)
from __future__ import division
3/2
1/2
2/3
root(2)
sqrt(2)
4^2
4^.5
4**.5
a=5
a=6
a+a
a
0.1+0.2-0.3
'hello'
'this entire thing can be a string'
"this is using double quotes"
print 'hello'
print("hello")
s='hello'
s
len(s)
print(s)
s[3]
s[10]
s[5]
s[2:4]
z*10
letter='z'
letter*10
letter.upper()
letter.center('z')
print 'this is a string'
s = 'STRING'
print 'place another string with a mod and s: %s' %(s)
from __future__ import print_function
print('hello')
print('one: {x}'.format(x='INSERT'))
my_list = [1,2,3,'o','29a jeoilapif a']
len(my_list)
my_list[1:]
my_list[1]+1
my_list[1] = 5
my_list
my_list + ['just a simple test'
]
my_list * 2
l = [1,2,3]
l.append('append me')
l
l.pop(2)
l
l.pop()
l
l[100]
new_list = ['a','b','x','f','c']
new_list
new_list.reverse()
new_list.reverse()
new_list
lst_1 = [1,2,3]
lst_2 = [4,5,6]
lst_3 = [7,8,9]
matrix = [lst_1, lst_2, lst_3]
matrix
matrix[2][1]
first_col = [row[0] for row in matrix]
first_col
my_dict = {'key1':'this is cool','key2':2, 'key3':'duh','key4':['a','b',3]}
my_dict['key4']
my_dict.pop('key3')
my_dict
my_dict['key4'][1].upper()
my_dict['key2'] -= 5
my_dict
d = {}
d['animal'] = 'dog'
d
d['answer']=421
d
d.pop('anwer')
d
d = {'key 1':{'nestkey':{'subnestkey':'value'}}}
d
d['key 1']['nestkey']['subnestkey']
d = {'key1':1,'key2':2,'key3':3}
d.keys()
d.values()
d.items()
t = (1,2,3)
len(t)
t = ('one',2,'cool dude')
t
t[1
]
t[-1]
t.index('one')
t.index('one')
t.count('one')
t[0] = 'change'
t.append('nope')
pwd
f = open('text_file.txt')
f.read()
f.seek(0)
f.read()
f.read()
f.readlines()
f.seek()
f.seek(0)
f.readlines()
%%writefile new.txt
first line
second line
for line in open('new.txt'):
    print(line)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: STEP 2
Step2: STEP 2-1
Step3: STEP 3-1
Step4: STEP 3-2
Step5: $\left|finalstate\right\rangle$ is essentially a pure state (it should be, if the whole process was perfect), so now we just isolate that main component, which is our $\left|x\right\rangle$ (after a transformation if our original A was not Hermitian) in the basis that diagonalizes $A$.
|
<ASSISTANT_TASK:>
Python Code:
# Create the matrix that diagonalizes A
diagonalizer = qt.Qobj(np.array([eigenvecs[i].full().T.flatten()
for i in range(len(eigenvals))]))
b = diagonalizer * b
A = diagonalizer.dag() * A * diagonalizer
T = prec
t0 = κ / ϵ # It should be O(κ/ϵ), whatever that means
ψ0 = qt.Qobj([[math.sqrt(2 / T) * math.sin(π * (τ + 0.5) / T)]
for τ in range(T)])
# Order is b, τ, and then ancilla
evo = qt.tensor(qt.identity(A.shape[0]), qt.ket2dm(qt.basis(T, 0)))
for τ in range(1, T):
evo += qt.tensor((1j * A * τ * t0 / T).expm(), qt.ket2dm(qt.basis(T, τ)))
ψev = evo * qt.tensor(b, ψ0)
ftrans = qt.tensor(qt.identity(b.shape[0]), qft(T))
ψfourier = ftrans * ψev # This is Eq. 3 in HHL
# w = (ψfourier[:T] / b[0]).argmax()
# prj = qt.ket2dm(qt.basis(T, w))
# ψfourier = qt.tensor(qt.identity(b.shape[0]), prj) * ψfourier
total_state = qt.tensor(ψfourier, qt.basis(2, 0)) # Add ancilla for swapping
C = 1 / κ # Constant, should be O(1/κ)
# Do conditional rotation only on τ and ancilla
rotation = qt.tensor(qt.ket2dm(qt.basis(T, 0)), qt.identity(2))
for τ in range(1, T):
rotation += qt.tensor(qt.ket2dm(qt.basis(T, τ)), rot(τ, t0, C))
final_state = qt.tensor(qt.identity(b.shape[0]), rotation) * total_state
projector = qt.tensor(qt.identity(b.shape[0]), qt.identity(T), qt.ket2dm(qt.basis(2, 1)))
postsel = projector * final_state
prob1 = qt.expect(projector, final_state)
# Trace out ancilla and τ registers, leaving only the b register
finalstate = qt.ket2dm(postsel).ptrace([0]) / prob1
finalstate.eigenenergies()
fsevls, fsevcs = finalstate.eigenstates()
x = math.sqrt(fsevls.max()) * fsevcs[fsevls.argmax()]
x
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Get zip code from wikipedia
Step6: 2. Convert zip code to coordinates
Step7: 3. Sanity check
Step8: 4. Get business type and # of establishments per year from US census
Step9: 3. Collect property values per zip code over time
Step12: Neighborhood boundaries in SF
|
<ASSISTANT_TASK:>
Python Code:
# GET SF ZIP CODES from http://www.city-data.com/zipmaps/San-Francisco-California.html
import itertools
import requests
import pandas as pd
from pandas import DataFrame
import matplotlib.dates
import matplotlib.pyplot as plt
from datetime import datetime
sf_zip_codes = [94102, 94103, 94104, 94105, 94107, 94108, 94109, 94110, 94111, 94112, 94114, 94115, 94116, 94117, 94118, 94121, 94122, 94123, 94124, 94127, 94129, 94131, 94132, 94133, 94134, 94158]
# Geopy has zip code converter!
from geopy.geocoders import Nominatim
geolocator = Nominatim()
location = geolocator.geocode("78704")
print 'EXAMPLE:'
print(location.address)
print((location.latitude, location.longitude))
# But something is wrong.
location = geolocator.geocode(sf_zip_codes[0])
print 'EXAMPLE:'
print(location.address)
print((location.latitude, location.longitude))
# So we're using Google Geocode API.
GOOGLE_KEY = ''
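# Note: a valid Google Geocoding API key must be supplied above for the requests below to succeed.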
query_url = 'https://maps.googleapis.com/maps/api/geocode/json?address=94102&key=%s' % (GOOGLE_KEY)
r = requests.get(query_url)
r.json()
# Get coordinates.
temp = r.json()
temp_ = temp['results'][0]['geometry']['location']
temp_
lats = []
lngs = []
for sf_zip_code in sf_zip_codes:
query_url = 'https://maps.googleapis.com/maps/api/geocode/json?address=%s&key=%s' % (str(sf_zip_code),GOOGLE_KEY)
r = requests.get(query_url)
temp = r.json()
lat = temp['results'][0]['geometry']['location']['lat']
lng = temp['results'][0]['geometry']['location']['lng']
lats.append(lat)
lngs.append(lng)
import folium
m = folium.Map(location=[37.7786871, -122.4212424],zoom_start=13)
m.circle_marker(location=[37.7786871, -122.4212424],radius=100)
for i in range(len(sf_zip_codes)):
m.circle_marker(location=[lats[i], lngs[i]], radius=500, #100 seems good enough for now
popup=str(sf_zip_codes[i]), line_color = "#980043",
fill_color="#980043", fill_opacity=.2)
m.create_map(path='sf_zip_code_map.html')
# business type
df = pd.read_csv('zbp13detail.txt')
df.head()
sf_zip_codes = [94102, 94103, 94104, 94105, 94107, 94108, 94109, 94110, 94111, 94112, 94114, 94115, 94116, 94117, 94118, 94121, 94122, 94123, 94124, 94127, 94129, 94131, 94132, 94133, 94134, 94158]
oak_zip_codes = [94601, 94602, 94603, 94605, 94606, 94607, 94610, 94611, 94612, 94613, 94621]
bay_zip_codes = sf_zip_codes + oak_zip_codes
# save zipcode file
import csv
myfile = open('bay_zip_codes.csv', 'wb')
wr = csv.writer(myfile)
wr.writerow(bay_zip_codes)
# load zipcode file
with open('bay_zip_codes.csv', 'rb') as f:
reader = csv.reader(f)
bay_zip_codes = list(reader)[0]
# convert str list to int list
bay_zip_codes = map(int, bay_zip_codes)
df_sf_oak = df.loc[df['zip'].isin(bay_zip_codes)]
# save as a file
df_sf_oak.to_csv('ZCBT_sf_oak_2013.csv',encoding='utf-8',index=False)
# sf1.sort(columns='est',ascending=False)
df_sf_oak.tail()
# let's compare to EPA
epa = b.loc[b['zip'] == 94303]
epa.sort(columns='est',ascending=False)
import trulia.stats as trustat
import trulia.location as truloc
zip_code_stats = trulia.stats.TruliaStats(TRULIA_KEY).get_zip_code_stats(zip_code='90025', start_date='2014-01-01', end_date='2014-01-31')
temp = zip_code_stats['listingStats']['listingStat']
df = DataFrame(temp)
df.head()
def func(x,key):
k = x['subcategory'][0][key] # here I read key values
return pd.Series(k)
df['numProperties']=df['listingPrice'].apply((lambda x: func(x,'numberOfProperties')))
df['medPrice']=df['listingPrice'].apply((lambda x: func(x,'medianListingPrice')))
df['avrPrice']=df['listingPrice'].apply((lambda x: func(x,'averageListingPrice')))
df = df.drop('listingPrice',1)
df.head()
# Get neighborhoods
neighborhoods = trulia.location.LocationInfo(TRULIA_KEY).get_neighborhoods_in_city('San Francisco', 'CA')
neighborhoods
# Trulia does not provide coordinates.
Alamo_Square = neighborhoods[0]
Alamo_Square
neighborhood_stats = trustat.TruliaStats(TRULIA_KEY).get_neighborhood_stats(neighborhood_id=7183, start_date='2012-01-01', end_date='2012-06-30')
neighborhood_stats.keys()
neighborhood_stats['listingStats'].keys()
a = neighborhood_stats['listingStats']['listingStat']
b = DataFrame(a)
b.head()
# Let's focus on All properties
x = b['listingPrice'][0]
x['subcategory'][0]
x['subcategory'][0]['type']
b['numProperties']=b['listingPrice'].apply((lambda x: func(x,'numberOfProperties')))
b['medPrice']=b['listingPrice'].apply((lambda x: func(x,'medianListingPrice')))
b['avrPrice']=b['listingPrice'].apply((lambda x: func(x,'averageListingPrice')))
b.drop('listingPrice',1)
matplotlib.dates.date2num(a)
date_list=[]
for date in b['weekEndingDate']:
date_list.append(datetime.strptime(date,'%Y-%m-%d'))
#a = datetime.strptime(b['weekEndingDate'],'%Y-%m-%d')
# plot time vs. value
dates = matplotlib.dates.date2num(date_list)
fig, ax = plt.subplots()
ax.plot_date(dates, b.medPrice,'-')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's add on an activation signal to both voxels
Step2: How can we address this problem? A general solution is to first run a general linear model to remove the task effect and then compute the correlation on the residuals.
Step3: What happens if we get the hemodynamic model wrong? Let's use the temporal derivative model to generate an HRF that is lagged compared to the canonical.
Step4: Let's see if using a more flexible basis set, like an FIR model, will allow us to get rid of the task-induced correlation.
|
<ASSISTANT_TASK:>
Python Code:
import os,sys
import numpy
%matplotlib inline
import matplotlib.pyplot as plt
sys.path.insert(0,'../utils')
from mkdesign import create_design_singlecondition
from nipy.modalities.fmri.hemodynamic_models import spm_hrf,compute_regressor
from make_data import make_continuous_data
data=make_continuous_data(N=200)
print 'correlation without activation:',numpy.corrcoef(data.T)[0,1]
plt.plot(range(data.shape[0]),data[:,0],color='blue')
plt.plot(range(data.shape[0]),data[:,1],color='red')
design_ts,design=create_design_singlecondition(blockiness=1.0,offset=30,blocklength=20,deslength=data.shape[0])
regressor,_=compute_regressor(design,'spm',numpy.arange(0,len(design_ts)))
regressor*=50.
data_act=data+numpy.hstack((regressor,regressor))
plt.plot(range(data.shape[0]),data_act[:,0],color='blue')
plt.plot(range(data.shape[0]),data_act[:,1],color='red')
print 'correlation with activation:',numpy.corrcoef(data_act.T)[0,1]
X=numpy.vstack((regressor.T,numpy.ones(data.shape[0]))).T
beta_hat=numpy.linalg.inv(X.T.dot(X)).dot(X.T).dot(data_act)
y_est=X.dot(beta_hat)
resid=data_act - y_est
print 'correlation of residuals:',numpy.corrcoef(resid.T)[0,1]
regressor_td,_=compute_regressor(design,'spm_time',numpy.arange(0,len(design_ts)))
regressor_lagged=regressor_td.dot(numpy.array([1,0.5]))*50
plt.plot(regressor_lagged)
plt.plot(regressor)
data_lagged=data+numpy.vstack((regressor_lagged,regressor_lagged)).T
beta_hat_lag=numpy.linalg.inv(X.T.dot(X)).dot(X.T).dot(data_lagged)
plt.subplot(211)
y_est_lag=X.dot(beta_hat_lag)
plt.plot(y_est)
plt.plot(data_lagged)
resid=data_lagged - y_est_lag
print 'correlation of residuals:',numpy.corrcoef(resid.T)[0,1]
plt.subplot(212)
plt.plot(resid)
regressor_fir,_=compute_regressor(design,'fir',numpy.arange(0,len(design_ts)),fir_delays=range(28))
regressor_fir.shape
X_fir=numpy.vstack((regressor_fir.T,numpy.ones(data.shape[0]))).T
beta_hat_fir=numpy.linalg.inv(X_fir.T.dot(X_fir)).dot(X_fir.T).dot(data_lagged)
plt.subplot(211)
y_est_fir=X_fir.dot(beta_hat_fir)
plt.plot(y_est)
plt.plot(data_lagged)
resid=data_lagged - y_est_fir
print 'correlation of residuals:',numpy.corrcoef(resid.T)[0,1]
plt.subplot(212)
plt.plot(resid)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Scipy 2016 Poster
Step2: Details of the physics at
Step3: Paraview view
Step4: Types of data
Step5: Run the inversions on a cluster
|
<ASSISTANT_TASK:>
Python Code:
import SimPEG as simpeg
from SimPEG import NSEM
import MT_poster_utils
from IPython.display import HTML
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
HTML("Figures/Magnetotelluric_Movie_ThibautAstic.html")
# Load the geological discretized model
mesh, modelDict = simpeg.Mesh.TensorMesh.readVTK('./datafiles/nsmesh_model.vtr')
sigma = modelDict['S/m']
# Print model information
print mesh.nC," cells"
print mesh
# Define the area of interest in UTM (meters)
bw, be = 557100, 557580
bs, bn = 7133340, 7133960
bb, bt = 0,480
# View the model
slice=20
fig,ax=plt.subplots(figsize=(10,8))
modelplot = mesh.plotSlice(np.log10(sigma),normal='Y',ind=slice,ax=ax,grid=True, pcolorOpts={"cmap":"viridis"})
ax.set_xlim([bw,be])
ax.set_ylim([0,bt])
ax.set_aspect('equal')
plt.colorbar(modelplot[0])
ax.set_title("Discretized Model",fontsize=24)
# Load stations locations and frequency range
locs = np.load('./datafiles/MTlocations.npy')
freqList = np.load('./datafiles/MTfrequencies.npy')
# View a scatter plot of the locations at the surface
plt.scatter(locs[:,0],locs[:,1])
# List of the frequencies used for the problem
print freqList
# Load the data - stored as numpy.recArray
mtData = np.load('./datafiles/MTdata.npy')
# Plot data
fig, axes, csList = MT_poster_utils.pseudoSect_OffDiagTip_RealImag(mtData,{'y':7133627.5},colBarMode='each')
# Make the plot
fig, axes, csList = MT_poster_utils.pseudoSect_FullImpTip_RealImag(mtData,{'y':7133627.5},colBarMode='each')
#Load Model Off-diagonal
mesh,inv= simpeg.Mesh.TensorMesh.readVTK('./datafiles/recoveredMod_off_it18.vtr')
siginvoff=inv['S/m']
#Load Model Tipper
mesh,invtip= simpeg.Mesh.TensorMesh.readVTK('./datafiles/recoveredMod_tip_it36.vtr')
siginvtip=invtip['S/m']
MT_poster_utils.CompareInversion(mesh,sigma,siginvoff,siginvtip,slice_ver=20,slice_hor=40)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Train
Step2: Predict
Step4: Analyze
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division
import re
import numpy as np
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
%matplotlib inline
#%qtconsole
!rm train.vw.cache
!rm mnist_train_nn.model
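# Rough meaning of the vw flags used below: -d training data, --cache_file cache reused across passes,
# -f output model file, --nn 200 a single hidden layer with 200 units, -b 19 bits for the feature hash table,
# --oaa 10 one-against-all over 10 classes, --passes 55 passes over the data, -l 0.4 learning rate,
# --early_terminate 3 stop after 3 passes without holdout improvement, --power_t 0.6 learning-rate decay exponent.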
!vw -d data/mnist_train_pca.vw --cache_file train.vw.cache -f mnist_train_nn.model --nn 200 -b 19 --oaa 10 --passes 55 -l 0.4 --early_terminate 3 --power_t 0.6
!rm predict.txt
!vw -t data/mnist_test_pca.vw -i mnist_train_nn.model -p predict.txt
y_true=[]
with open("data/mnist_test_pca.vw", 'rb') as f:
for line in f:
m = re.search('^\d+', line)
if m:
found = m.group()
y_true.append(int(found))
y_pred = []
with open("predict.txt", 'rb') as f:
for line in f:
m = re.search('^\d+', line)
if m:
found = m.group()
y_pred.append(int(found))
target_names = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "10"] # NOTE: plus one
def plot_confusion_matrix(cm,
target_names,
title='Proportional Confusion matrix: VW on PCA',
cmap=plt.cm.Paired):
    """Given a confusion matrix (cm), make a nice plot.
    See the scikit-learn documentation for the original done for the iris dataset.
    """
plt.figure(figsize=(8, 6))
plt.imshow((cm/cm.sum(axis=1)), interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
cm = confusion_matrix(y_true, y_pred)
print(cm)
model_accuracy = sum(cm.diagonal())/len(y_pred)
model_misclass = 1 - model_accuracy
print("\nModel accuracy: {0}, model misclass rate: {1}".format(model_accuracy, model_misclass))
plot_confusion_matrix(cm, target_names)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code::
import tensorflow as tf
model = tf.keras.models.load_model('filename')
pred = model.predict(X_val)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Open feature file of mirflickr08
Step2: Load image ids of mirflickr08
Step3: Perform tag relevance learning on the test set
Step4: Evaluation
Step5: Compute mAP (mean average precision) based on image ranking results
Step6: Compute miAP (mean image average precision) based on tag ranking results
|
<ASSISTANT_TASK:>
Python Code:
from instance_based.tagvote import TagVoteTagger
trainCollection = 'train10k'
annotationName = 'concepts130.txt'
feature = 'vgg-verydeep-16-fc7relu'
tagger = TagVoteTagger(collection=trainCollection, annotationName=annotationName, feature=feature, distance='cosine')
from basic.constant import ROOT_PATH
from util.simpleknn.bigfile import BigFile
import os
rootpath = ROOT_PATH
testCollection = 'mirflickr08'
feat_dir = os.path.join(rootpath, testCollection, 'FeatureData', feature)
feat_file = BigFile(feat_dir)
# load image ids of mirflickr08
from basic.util import readImageSet
testimset = readImageSet(testCollection)
# load a subset of 200 images for test
import random
testimset = random.sample(testimset, 200)
renamed, vectors = feat_file.read(testimset)
import time
s_time = time.time()
results = [tagger.predict(vec) for vec in vectors]
timespan = time.time() - s_time
print ('processing %d images took %g seconds' % (len(renamed), timespan))
from basic.annotationtable import readConcepts, readAnnotationsFrom
testAnnotationName = 'conceptsmir14.txt'
concepts = readConcepts(testCollection, testAnnotationName)
nr_of_concepts = len(concepts)
label2imset = {}
im2labelset = {}
for i,concept in enumerate(concepts):
names,labels = readAnnotationsFrom(testCollection, testAnnotationName, concept)
pos_set = [x[0] for x in zip(names,labels) if x[1]>0]
print ('%s has %d positives' % (concept, len(pos_set)))
for im in pos_set:
label2imset.setdefault(concept, set()).add(im)
im2labelset.setdefault(im, set()).add(concept)
# sort images to compute AP scores per concept
ranklists = {}
for _id, res in zip(renamed,results):
for tag,score in res:
ranklists.setdefault(tag, []).append((_id, score))
from basic.metric import getScorer
scorer = getScorer('AP')
mean_ap = 0.0
for i,concept in enumerate(concepts):
pos_set = label2imset[concept]
ranklist = ranklists[concept]
ranklist.sort(key=lambda v:(v[1], v[0]), reverse=True) # sort images by scores in descending order
sorted_labels = [2*int(x[0] in pos_set)-1 for x in ranklist]
perf = scorer.score(sorted_labels)
print ('%s %.3f' % (concept, perf))
mean_ap += perf
mean_ap /= len(concepts)
print ('meanAP %.3f' % mean_ap)
# compute iAP per image
miap = 0.0
for _id, res in zip(renamed,results):
pos_set = im2labelset.get(_id, set()) # some images might be negatives to all the 14 concepts
ranklist = [x for x in res if x[0] in label2imset] # evaluate only concepts with ground truth
sorted_labels = [2*int(x[0] in pos_set)-1 for x in ranklist]
perf = scorer.score(sorted_labels)
miap += perf
miap /= len(renamed)
print ('miap %.3f' % miap)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part 1
Step2: Part 2
Step3: Questions
|
<ASSISTANT_TASK:>
Python Code:
# Import standard Python modules
import numpy as np
import pandas
import matplotlib.pyplot as plt
# Import the FrostNumber PyMT model
import pymt.models
frost_number = pymt.models.FrostNumber()
config_file, config_folder = frost_number.setup(T_air_min=-13., T_air_max=19.5)
frost_number.initialize(config_file, config_folder)
frost_number.update()
frost_number.output_var_names
frost_number.get_value('frostnumber__air')
args = frost_number.setup(T_air_min=-40.9, T_air_max=19.5)
frost_number.initialize(*args)
frost_number.update()
frost_number.get_value('frostnumber__air')
data = pandas.read_csv("https://raw.githubusercontent.com/mcflugen/pymt_frost_number/master/data/t_air_min_max.csv")
data
frost_number = pymt.models.FrostNumber()
config_file, run_folder = frost_number.setup()
frost_number.initialize(config_file, run_folder)
t_air_min = data["atmosphere_bottom_air__time_min_of_temperature"]
t_air_max = data["atmosphere_bottom_air__time_max_of_temperature"]
fn = np.empty(6)
for i in range(6):
frost_number.set_value("atmosphere_bottom_air__time_min_of_temperature", t_air_min.values[i])
frost_number.set_value("atmosphere_bottom_air__time_max_of_temperature", t_air_max.values[i])
frost_number.update()
fn[i] = frost_number.get_value('frostnumber__air')
years = range(2000, 2006)
plt.subplot(211)
plt.plot(years, t_air_min, years, t_air_max)
plt.subplot(212)
plt.plot(years, fn)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Data
Step2: Delete A Table
Step3: View Table
|
<ASSISTANT_TASK:>
Python Code:
# Ignore
%load_ext sql
%sql sqlite://
%config SqlMagic.feedback = False
%%sql
-- Create a table of criminals
CREATE TABLE criminals (pid, name, age, sex, city, minor);
INSERT INTO criminals VALUES (412, 'James Smith', 15, 'M', 'Santa Rosa', 1);
INSERT INTO criminals VALUES (234, 'Bill James', 22, 'M', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (632, 'Stacy Miller', 23, 'F', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (621, 'Betty Bob', NULL, 'F', 'Petaluma', 1);
INSERT INTO criminals VALUES (162, 'Jaden Ado', 49, 'M', NULL, 0);
INSERT INTO criminals VALUES (901, 'Gordon Ado', 32, 'F', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (512, 'Bill Byson', 21, 'M', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (411, 'Bob Iton', NULL, 'M', 'San Francisco', 0);
%%sql
-- Delete the table called 'criminals'
DROP TABLE criminals
%%sql
-- Select everything
SELECT *
-- From the table 'criminals'
FROM criminals
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: B
Step2: C
|
<ASSISTANT_TASK:>
Python Code:
def cumulative_product(start_list):
out_list = []
### BEGIN SOLUTION
### END SOLUTION
return out_list
inlist = [89, 22, 3, 24, 8, 59, 43, 97, 30, 88]
outlist = [89, 1958, 5874, 140976, 1127808, 66540672, 2861248896, 277541142912, 8326234287360, 732708617287680]
assert set(cumulative_product(inlist)) == set(outlist)
inlist = [56, 22, 81, 65, 40, 44, 95, 48, 45, 26]
outlist = [56, 1232, 99792, 6486480, 259459200, 11416204800, 1084539456000, 52057893888000, 2342605224960000, 60907735848960000]
assert set(cumulative_product(inlist)) == set(outlist)
def average(numbers):
avg_val = 0.0
### BEGIN SOLUTION
### END SOLUTION
return avg_val
import numpy as np
inlist = np.random.randint(10, 100, 10).tolist()
np.testing.assert_allclose(average(inlist), np.mean(inlist))
inlist = np.random.randint(10, 1000, 10).tolist()
np.testing.assert_allclose(average(inlist), np.mean(inlist))
def return_ordinals(numbers):
out_list = []
### BEGIN SOLUTION
### END SOLUTION
return out_list
inlist = [5, 6, 1, 9, 5, 5, 3, 3, 9, 4]
outlist = ["5th", "6th", "1st", "9th", "5th", "5th", "3rd", "3rd", "9th", "4th"]
for y_true, y_pred in zip(outlist, return_ordinals(inlist)):
assert y_true == y_pred.lower()
inlist = [7, 5, 6, 6, 3, 5, 1, 0, 5, 2]
outlist = ["7th", "5th", "6th", "6th", "3rd", "5th", "1st", "0th", "5th", "2nd"]
for y_true, y_pred in zip(outlist, return_ordinals(inlist)):
assert y_true == y_pred.lower()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will continue to refer to this client object for accessing the remote server.
Step2: NOTE
|
<ASSISTANT_TASK:>
Python Code:
import ga4gh_client.client as client
c = client.HttpClient("http://1kgenomes.ga4gh.org")
dataset = c.search_datasets().next()
print dataset
data_set_id = dataset.id
dataset_via_get = c.get_dataset(dataset_id=data_set_id)
print dataset_via_get
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As well as our function to read the hdf5 reflectance files and associated metadata
Step2: Define the location where you are holding the data for the data institute. The h5_filename will be the flightline which contains the tarps, and the tarp_48_filename and tarp_03_filename contain the field validated spectra for the white and black tarp respectively, organized by wavelength and reflectance.
Step3: We want to pull the spectra from the airborne data from the center of the tarp to minimize any errors introduced by infiltrating light in adjacent pixels, or through errors in ortho-rectification (source 2). We have pre-determined the coordinates for the center of each tarp, which are as follows
Step4: Now we'll use our function designed for NEON AOP's HDF5 files to access the hyperspectral data
Step5: Within the reflectance curves there are areas with noisy data due to atmospheric windows in the water absorption bands. For this exercise we do not want to plot these areas as they obscure details in the plots due to their anomalous values. The metadata associated with these band locations is contained in the metadata gathered by our function. We will pull out these areas as 'bad band windows' and determine which indexes in the reflectance curves contain the bad bands
Step6: Now join the list of indexes together into a single variable
Step7: The reflectance data is saved in files which are 'tab delimited.' We will use a numpy function (genfromtxt) to quickly import the tarp reflectance curves observed with the ASD, using the '\t' delimiter to indicate that tabs are used.
Step8: Now we'll set all the data inside of those windows to NaNs (not a number) so they will not be included in the plots
Step9: The next step is to determine which pixel in the reflectance data belongs to the center of each tarp. To do this, we will subtract the tarp center pixel location from the upper left corner pixels specified in the map info of the H5 file. This information is saved in the metadata dictionary output from our function that reads NEON AOP HDF5 files. The difference between these coordinates gives us the x and y index of the reflectance curve.
Step10: Next, we will plot both the curve from the airborne data taken at the center of the tarps as well as the curves obtained from the ASD data to provide a visualisation of their consistency for both tarps. Once generated, we will also save the figure to a pre-determined location.
Step11: This produces plots showing the results of the ASD and airborne measurements over the 48% tarp. Visually, the comparison between the two appears to be fairly good. However, over the 3% tarp we appear to be over-estimating the reflectance. Large absolute differences could be associated with ATCOR input parameters (source 4). For example, the user must input the local visibility, which is related to aerosol optical thickness (AOT). We don't measure this at every site, so we input a standard parameter for all sites.
Step12: From this we are able to see that the 48% tarp actually has larger absolute differences than the 3% tarp. The 48% tarp performs poorly at the shortest and longest wavelengths, as well as near the edges of the 'bad band windows.' This is related to difficulty in calibrating the sensor in these sensitive areas (source 1).
|
<ASSISTANT_TASK:>
Python Code:
import h5py
import csv
import numpy as np
import os
import gdal
import matplotlib.pyplot as plt
import sys
from math import floor
import time
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
def h5refl2array(h5_filename):
hdf5_file = h5py.File(h5_filename,'r')
#Get the site name
file_attrs_string = str(list(hdf5_file.items()))
file_attrs_string_split = file_attrs_string.split("'")
sitename = file_attrs_string_split[1]
refl = hdf5_file[sitename]['Reflectance']
reflArray = refl['Reflectance_Data']
refl_shape = reflArray.shape
wavelengths = refl['Metadata']['Spectral_Data']['Wavelength']
#Create dictionary containing relevant metadata information
metadata = {}
metadata['shape'] = reflArray.shape
metadata['mapInfo'] = refl['Metadata']['Coordinate_System']['Map_Info']
#Extract no data value & set no data value to NaN\n",
metadata['scaleFactor'] = float(reflArray.attrs['Scale_Factor'])
metadata['noDataVal'] = float(reflArray.attrs['Data_Ignore_Value'])
metadata['bad_band_window1'] = (refl.attrs['Band_Window_1_Nanometers'])
metadata['bad_band_window2'] = (refl.attrs['Band_Window_2_Nanometers'])
metadata['projection'] = refl['Metadata']['Coordinate_System']['Proj4'].value
metadata['EPSG'] = int(refl['Metadata']['Coordinate_System']['EPSG Code'].value)
mapInfo = refl['Metadata']['Coordinate_System']['Map_Info'].value
mapInfo_string = str(mapInfo); #print('Map Info:',mapInfo_string)\n",
mapInfo_split = mapInfo_string.split(",")
#Extract the resolution & convert to floating decimal number
metadata['res'] = {}
metadata['res']['pixelWidth'] = mapInfo_split[5]
metadata['res']['pixelHeight'] = mapInfo_split[6]
#Extract the upper left-hand corner coordinates from mapInfo\n",
xMin = float(mapInfo_split[3]) #convert from string to floating point number\n",
yMax = float(mapInfo_split[4])
#Calculate the xMax and yMin values from the dimensions\n",
xMax = xMin + (refl_shape[1]*float(metadata['res']['pixelWidth'])) #xMax = left edge + (# of columns * resolution)\n",
yMin = yMax - (refl_shape[0]*float(metadata['res']['pixelHeight'])) #yMin = top edge - (# of rows * resolution)\n",
    metadata['extent'] = (xMin,xMax,yMin,yMax)
metadata['ext_dict'] = {}
metadata['ext_dict']['xMin'] = xMin
metadata['ext_dict']['xMax'] = xMax
metadata['ext_dict']['yMin'] = yMin
metadata['ext_dict']['yMax'] = yMax
hdf5_file.close
return reflArray, metadata, wavelengths
print('Start CHEQ tarp uncertainty script')
## You will need to change these filepaths according to your own machine
## As you can see here, I saved the files downloaded above into my ~/Git/data/ directory
h5_filename = '/Users/olearyd/Git/data/NEON_D05_CHEQ_DP1_20160912_160540_reflectance.h5'
tarp_48_filename = '/Users/olearyd/Git/data/CHEQ_Tarp_48_01_refl_bavg.txt'
tarp_03_filename = '/Users/olearyd/Git/data/CHEQ_Tarp_03_02_refl_bavg.txt'
tarp_48_center = np.array([727487,5078970])
tarp_03_center = np.array([727497,5078970])
[reflArray,metadata,wavelengths] = h5refl2array(h5_filename)
bad_band_window1 = (metadata['bad_band_window1'])
bad_band_window2 = (metadata['bad_band_window2'])
index_bad_window1 = [i for i, x in enumerate(wavelengths) if x > bad_band_window1[0] and x < bad_band_window1[1]]
index_bad_window2 = [i for i, x in enumerate(wavelengths) if x > bad_band_window2[0] and x < bad_band_window2[1]]
index_bad_windows = index_bad_window1+index_bad_window2
tarp_48_data = np.genfromtxt(tarp_48_filename, delimiter = '\t')
tarp_03_data = np.genfromtxt(tarp_03_filename, delimiter = '\t')
tarp_48_data[index_bad_windows] = np.nan
tarp_03_data[index_bad_windows] = np.nan
x_tarp_48_index = int((tarp_48_center[0] - metadata['ext_dict']['xMin'])/float(metadata['res']['pixelWidth']))
y_tarp_48_index = int((metadata['ext_dict']['yMax'] - tarp_48_center[1])/float(metadata['res']['pixelHeight']))
x_tarp_03_index = int((tarp_03_center[0] - metadata['ext_dict']['xMin'])/float(metadata['res']['pixelWidth']))
y_tarp_03_index = int((metadata['ext_dict']['yMax'] - tarp_03_center[1])/float(metadata['res']['pixelHeight']))
plt.figure(1)
tarp_48_reflectance = np.asarray(reflArray[y_tarp_48_index,x_tarp_48_index,:], dtype=np.float32)/metadata['scaleFactor']
tarp_48_reflectance[index_bad_windows] = np.nan
plt.plot(wavelengths,tarp_48_reflectance,label = 'Airborne Reflectance')
plt.plot(wavelengths,tarp_48_data[:,1], label = 'ASD Reflectance')
plt.title('CHEQ 20160912 48% tarp')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Reflectance (%)')
plt.legend()
#plt.savefig('CHEQ_20160912_48_tarp.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
plt.figure(2)
tarp_03_reflectance = np.asarray(reflArray[y_tarp_03_index,x_tarp_03_index,:], dtype=np.float32)/ metadata['scaleFactor']
tarp_03_reflectance[index_bad_windows] = np.nan
plt.plot(wavelengths,tarp_03_reflectance,label = 'Airborne Reflectance')
plt.plot(wavelengths,tarp_03_data[:,1],label = 'ASD Reflectance')
plt.title('CHEQ 20160912 3% tarp')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Reflectance (%)')
plt.legend()
#plt.savefig('CHEQ_20160912_3_tarp.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
plt.figure(3)
plt.plot(wavelengths,tarp_48_reflectance-tarp_48_data[:,1])
plt.title('CHEQ 20160912 48% tarp absolute difference')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Absolute Reflectance Difference (%)')
#plt.savefig('CHEQ_20160912_48_tarp_absolute_diff.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
plt.figure(4)
plt.plot(wavelengths,tarp_03_reflectance-tarp_03_data[:,1])
plt.title('CHEQ 20160912 3% tarp absolute difference')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Absolute Reflectance Difference (%)')
#plt.savefig('CHEQ_20160912_3_tarp_absolute_diff.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
plt.figure(5)
plt.plot(wavelengths,100*np.divide(tarp_48_reflectance-tarp_48_data[:,1],tarp_48_data[:,1]))
plt.title('CHEQ 20160912 48% tarp percent difference')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Percent Reflectance Difference')
plt.ylim((-100,100))
#plt.savefig('CHEQ_20160912_48_tarp_relative_diff.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
plt.figure(6)
plt.plot(wavelengths,100*np.divide(tarp_03_reflectance-tarp_03_data[:,1],tarp_03_data[:,1]))
plt.title('CHEQ 20160912 3% tarp percent difference')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Percent Reflectance Difference')
plt.ylim((-100,150))
#plt.savefig('CHEQ_20160912_3_tarp_relative_diff.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The input image is processed in the first convolutional layer using the filter-weights. This results in 16 new images, one for each filter in the convolutional layer. The images are also down-sampled so the image resolution is decreased from 28x28 to 14x14.
Step2: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version
Step3: Load Data
Step4: The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
Step5: The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test-set, so we calculate it now.
Step6: Data Dimensions
Step7: Helper-function for plotting images
Step8: Plot a few images to see if data is correct
Step9: TensorFlow Graph
Step10: The convolutional layers expect x to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is
Step11: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
Step12: We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
Step13: TensorFlow Implementation
Step14: The following helper-function creates a new convolutional network. The input and output are 4-dimensional tensors (aka. 4-rank tensors). Note the low-level details of the TensorFlow API, such as the shape of the weights-variable. It is easy to make a mistake somewhere which may result in strange error-messages that are difficult to debug.
Step15: The following helper-function flattens a 4-dim tensor to 2-dim so we can add fully-connected layers after the convolutional layers.
Step16: The following helper-function creates a fully-connected layer.
Step17: Graph Construction
Step18: PrettyTensor Implementation
Step19: Now that we have wrapped the input image in a PrettyTensor object, we can add the convolutional and fully-connected layers in just a few lines of source-code.
Step20: That's it! We have now created the exact same Convolutional Neural Network in a few simple lines of code that required many complex lines of code in the direct TensorFlow implementation.
Step21: Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like `contents = session.run(weights_conv1)`.
Step22: Optimization Method
Step23: Performance Measures
Step24: Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.
Step25: The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
Step26: TensorFlow Run
Step27: Initialize variables
Step28: Helper-function to perform optimization iterations
Step29: Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.
Step30: Helper-function to plot example errors
Step31: Helper-function to plot confusion matrix
Step32: Helper-function for showing the performance
Step33: Performance before any optimization
Step34: Performance after 1 optimization iteration
Step35: Performance after 100 optimization iterations
Step36: Performance after 1000 optimization iterations
Step37: Performance after 10,000 optimization iterations
Step38: Visualization of Weights and Layers
Step39: Convolution Layer 1
Step40: Convolution Layer 2
Step41: There are 16 input channels to the second convolutional layer, so we can make another 15 plots of filter-weights like this. We just make one more with the filter-weights for the second channel.
Step42: Close TensorFlow Session
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image('images/02_network_flowchart.png')
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
# We also need PrettyTensor.
import prettytensor as pt
tf.__version__
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
data.test.cls = np.argmax(data.test.labels, axis=1)
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
y_true = tf.placeholder(tf.float32, shape=[None, 10], name='y_true')
y_true_cls = tf.argmax(y_true, dimension=1)
def new_weights(shape):
return tf.Variable(tf.truncated_normal(shape, stddev=0.05))
def new_biases(length):
return tf.Variable(tf.constant(0.05, shape=[length]))
def new_conv_layer(input, # The previous layer.
num_input_channels, # Num. channels in prev. layer.
filter_size, # Width and height of filters.
num_filters, # Number of filters.
use_pooling=True): # Use 2x2 max-pooling.
# Shape of the filter-weights for the convolution.
# This format is determined by the TensorFlow API.
shape = [filter_size, filter_size, num_input_channels, num_filters]
# Create new weights aka. filters with the given shape.
weights = new_weights(shape=shape)
# Create new biases, one for each filter.
biases = new_biases(length=num_filters)
# Create the TensorFlow operation for convolution.
# Note the strides are set to 1 in all dimensions.
# The first and last stride must always be 1,
# because the first is for the image-number and
# the last is for the input-channel.
# But e.g. strides=[1, 2, 2, 1] would mean that the filter
# is moved 2 pixels across the x- and y-axis of the image.
# The padding is set to 'SAME' which means the input image
# is padded with zeroes so the size of the output is the same.
layer = tf.nn.conv2d(input=input,
filter=weights,
strides=[1, 1, 1, 1],
padding='SAME')
# Add the biases to the results of the convolution.
# A bias-value is added to each filter-channel.
layer += biases
# Use pooling to down-sample the image resolution?
if use_pooling:
# This is 2x2 max-pooling, which means that we
# consider 2x2 windows and select the largest value
# in each window. Then we move 2 pixels to the next window.
layer = tf.nn.max_pool(value=layer,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
# Rectified Linear Unit (ReLU).
# It calculates max(x, 0) for each input pixel x.
# This adds some non-linearity to the formula and allows us
# to learn more complicated functions.
layer = tf.nn.relu(layer)
# Note that ReLU is normally executed before the pooling,
# but since relu(max_pool(x)) == max_pool(relu(x)) we can
# save 75% of the relu-operations by max-pooling first.
# We return both the resulting layer and the filter-weights
# because we will plot the weights later.
return layer, weights
def flatten_layer(layer):
# Get the shape of the input layer.
layer_shape = layer.get_shape()
# The shape of the input layer is assumed to be:
# layer_shape == [num_images, img_height, img_width, num_channels]
# The number of features is: img_height * img_width * num_channels
# We can use a function from TensorFlow to calculate this.
num_features = layer_shape[1:4].num_elements()
# Reshape the layer to [num_images, num_features].
# Note that we just set the size of the second dimension
# to num_features and the size of the first dimension to -1
# which means the size in that dimension is calculated
# so the total size of the tensor is unchanged from the reshaping.
layer_flat = tf.reshape(layer, [-1, num_features])
# The shape of the flattened layer is now:
# [num_images, img_height * img_width * num_channels]
# Return both the flattened layer and the number of features.
return layer_flat, num_features
def new_fc_layer(input, # The previous layer.
num_inputs, # Num. inputs from prev. layer.
num_outputs, # Num. outputs.
use_relu=True): # Use Rectified Linear Unit (ReLU)?
# Create new weights and biases.
weights = new_weights(shape=[num_inputs, num_outputs])
biases = new_biases(length=num_outputs)
# Calculate the layer as the matrix multiplication of
# the input and weights, and then add the bias-values.
layer = tf.matmul(input, weights) + biases
# Use ReLU?
if use_relu:
layer = tf.nn.relu(layer)
return layer
if False: # Don't execute this! Just show it for easy comparison.
# First convolutional layer.
layer_conv1, weights_conv1 = \
new_conv_layer(input=x_image,
num_input_channels=num_channels,
filter_size=5,
num_filters=16,
use_pooling=True)
# Second convolutional layer.
layer_conv2, weights_conv2 = \
new_conv_layer(input=layer_conv1,
num_input_channels=16,
filter_size=5,
num_filters=36,
use_pooling=True)
# Flatten layer.
layer_flat, num_features = flatten_layer(layer_conv2)
# First fully-connected layer.
layer_fc1 = new_fc_layer(input=layer_flat,
num_inputs=num_features,
num_outputs=128,
use_relu=True)
# Second fully-connected layer.
layer_fc2 = new_fc_layer(input=layer_fc1,
num_inputs=128,
num_outputs=num_classes,
use_relu=False)
# Predicted class-label.
y_pred = tf.nn.softmax(layer_fc2)
# Cross-entropy for the classification of each image.
cross_entropy = \
tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc2,
labels=y_true)
# Loss aka. cost-measure.
# This is the scalar value that must be minimized.
loss = tf.reduce_mean(cross_entropy)
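# The network is actually built with PrettyTensor's chained API below; the
# manually constructed version above sits inside `if False:` and is shown only
# for comparison, never executed.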
x_pretty = pt.wrap(x_image)
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
conv2d(kernel=5, depth=16, name='layer_conv1').\
max_pool(kernel=2, stride=2).\
conv2d(kernel=5, depth=36, name='layer_conv2').\
max_pool(kernel=2, stride=2).\
flatten().\
fully_connected(size=128, name='layer_fc1').\
softmax_classifier(class_count=10, labels=y_true)
def get_weights_variable(layer_name):
# Retrieve an existing variable named 'weights' in the scope
# with the given layer_name.
# This is awkward because the TensorFlow function was
# really intended for another purpose.
with tf.variable_scope(layer_name, reuse=True):
variable = tf.get_variable('weights')
return variable
weights_conv1 = get_weights_variable(layer_name='layer_conv1')
weights_conv2 = get_weights_variable(layer_name='layer_conv2')
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
y_pred_cls = tf.argmax(y_pred, dimension=1)
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
session = tf.Session()
session.run(tf.initialize_all_variables())
train_batch_size = 64
# Counter for total number of iterations performed so far.
total_iterations = 0
def optimize(num_iterations):
# Ensure we update the global variable rather than a local copy.
global total_iterations
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(total_iterations,
total_iterations + num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(train_batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status every 100 iterations.
if i % 100 == 0:
# Calculate the accuracy on the training-set.
acc = session.run(accuracy, feed_dict=feed_dict_train)
# Message for printing.
msg = "Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}"
# Print it.
print(msg.format(i + 1, acc))
# Update the total number of iterations performed.
total_iterations += num_iterations
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.matshow(cm)
# Make various adjustments to the plot.
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
# Split the test-set into smaller batches of this size.
test_batch_size = 256
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# Number of images in the test-set.
num_test = len(data.test.images)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_test, dtype=np.int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_test:
# The ending index for the next batch is denoted j.
j = min(i + test_batch_size, num_test)
# Get the images from the test-set between index i and j.
images = data.test.images[i:j, :]
# Get the associated labels.
labels = data.test.labels[i:j, :]
# Create a feed-dict with these images and labels.
feed_dict = {x: images,
y_true: labels}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Convenience variable for the true class-numbers of the test-set.
cls_true = data.test.cls
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# Classification accuracy is the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / num_test
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, correct_sum, num_test))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
print_test_accuracy()
optimize(num_iterations=1)
print_test_accuracy()
optimize(num_iterations=99) # We already performed 1 iteration above.
print_test_accuracy(show_example_errors=True)
optimize(num_iterations=900) # We performed 100 iterations above.
print_test_accuracy(show_example_errors=True)
optimize(num_iterations=9000) # We performed 1000 iterations above.
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
def plot_conv_weights(weights, input_channel=0):
# Assume weights are TensorFlow ops for 4-dim variables
# e.g. weights_conv1 or weights_conv2.
# Retrieve the values of the weight-variables from TensorFlow.
# A feed-dict is not necessary because nothing is calculated.
w = session.run(weights)
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Number of filters used in the conv. layer.
num_filters = w.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i<num_filters:
# Get the weights for the i'th filter of the input channel.
# See new_conv_layer() for details on the format
# of this 4-dim tensor.
img = w[:, :, input_channel, i]
# Plot image.
ax.imshow(img, vmin=w_min, vmax=w_max,
interpolation='nearest', cmap='seismic')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
plot_conv_weights(weights=weights_conv1)
plot_conv_weights(weights=weights_conv2, input_channel=0)
plot_conv_weights(weights=weights_conv2, input_channel=1)
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Housing Price Dataset
Step2: Statistical summary of the data
Step3: Visualize the data
Step4: Training a Univariate Linear Regression Model
Step5: In the following step, train a Linear Regression model on the housing price dataset
Step6: Use your trained LinearRegression model to predict on the same dataset (i.e. "Area" features stored as numpy array X). Also calculate Mean Squared Error using the mean_squared_error() method from sklearn library.
Step7: Question
Step8: Now, we will visualize the predicted prices along with the actual data.
Step9: Training a Multivariate Linear Regression Model
Step10: Homework
Step11: Supplementary Material
Step12: Regression Models on Energy Efficiency Dataset
Step13: Training Set and Testing Set
Step14: LinearRegression
Step15: Support Vector Regression (SVR)
Step16: Random Forest Regressor
|
<ASSISTANT_TASK:>
Python Code:
# Write code to import required libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# For visualizing plots in this notebook
%matplotlib inline
# We start by importing the data using pandas
# Hint: use the "read_csv" method; note that comma (",") is the field separator and there is no header row
housing = pd.read_csv('housing_price.txt', sep=",", header=None)
# We name the columns based on above features
housing.columns = ["Area", "Bedrooms", "Price"]
# We take a sneak peek at the data
# Hint: use dataframe "head" method with "n" parameter
housing.head(n=5)
# Write code to get a summary of the data
# Hint: use "DataFrame.describe()" on our dataframe housing
housing.describe()
# Write code to create Scatter Plot between "Area" and "Price"
# Hint: use "DataFrame.plot.scatter()" on our dataframe housing,
# mention the "x" and "y" axis features
housing.plot.scatter(x="Area", y="Price")
# Write code to convert desired dataframe columns into numpy arrays
# Hint: the "columns" argument of DataFrame.as_matrix() accepts only a list.
# Even if you wish to select only one column, you have to pass it in a list.
X = housing.as_matrix(columns=["Area"])
y = housing.as_matrix(columns=["Price"])
# Write code to learn a linear regression model on housing price dataset
from sklearn.linear_model import LinearRegression # TO DELETE
lin_reg = LinearRegression()
lin_reg.fit(X, y)
# Write code to predict prices using the trained LinearRegression
y_predicted = lin_reg.predict(X)
# Importing modules to calculate MSE
from sklearn.metrics import mean_squared_error
# Write code to calculate and print the MSE on the predicted values.
# Hint 1: use "mean_squared_error()" method
# Hint 2: you have to pass both original y and predicted y to compute the MSE.
mse = mean_squared_error(y, y_predicted)
print("MSE = ", mse)
# Write code to get coefficient of determination using "score()"
# Hint: you have to pass both X and original y to score()
R2 = lin_reg.score(X, y)
print("R2 = ", R2)
# Write code to create a scatter plot with the data as above.
# Then add the best line to that.
# Hint 1: store the returned "axes" object and then use "axes.plot()"
# to plot the best_line
# Hint 2: "axes.plot()" takes the X and the y_predicted arrays
ax = housing.plot.scatter(x="Area", y="Price")
ax.plot(X, y_predicted, "r")
# Write code to convert desired dataframe columns into numpy arrays
X = housing.as_matrix(columns=["Area","Bedrooms"])
y = housing.as_matrix(columns=["Price"])
# Write code to train a LinearRegression model
lin_reg = LinearRegression()
lin_reg.fit(X, y)
# Write code to calculate and print the MSE
R2 = lin_reg.score(X, y)
print("R2 = ", R2)
# Write code to create a 3D scatter plot for "Area", "Bedroom" and actual "Price"
# Then add visualization of "Area" and "Bedroom" against the predicted price.
from mpl_toolkits.mplot3d import Axes3D
y_pred = lin_reg.predict(X)
fig_scatter = plt.figure()
ax = fig_scatter.add_subplot(111, projection='3d')
ax.scatter(housing["Area"], housing["Bedrooms"], housing["Price"])
ax.scatter(housing["Area"], housing["Bedrooms"], y_pred)
ax.set_xlabel("Area")
ax.set_ylabel("Bedrooms")
ax.set_zlabel("Price")
# Write code to import the data using pandas
# Hint: note that comma (",") is the field separator, and we have no "header"
energy = pd.read_csv("energy_efficiency.csv", sep=",", header=None)
# We name the columns based on above features
energy.columns = ["Compactness","Surface","Wall", "Roof", "Heiht",
"Orientation","Glazing","GlazingDist", "Heating"]
# We sneak peek into the data
# Hint: use dataframe "head" method with "n" parameter
energy.head(n=5)
# Write code to convert desired dataframe columns into numpy arrays
X = energy.as_matrix(columns=["Compactness","Surface","Wall", "Roof", "Heiht",
"Orientation","Glazing","GlazingDist"])
y = energy.as_matrix(columns=["Heating"])
# Importing the module
from sklearn.model_selection import train_test_split
# Write code for splitting the data into train and test sets.
# Hint: use "train_test_split" on X and y, and test size should be 0.2 (20%)
X_train,X_test,y_train,y_test = train_test_split(X, y, test_size=0.2)
# Write code to train a Linear Regression model and to test its performance on the test set
lin_reg = LinearRegression()
lin_reg.fit(X_train, y_train)
lin_reg_R2 = lin_reg.score(X_test, y_test)
print("Linear Regression R2 = ", lin_reg_R2)
# Write code to import necessary module for SVR
from sklearn.svm import SVR
# Write code to train a SVR model and to test its performance on the test set
svr = SVR() # TO DELETE
svr.fit(X_train, y_train)
svr_R2 = svr.score(X_test, y_test)
print("Support Vector Regression R2 = ", svr_R2)
# Write code to import necessary module for RandomForestRegressor
from sklearn.ensemble import RandomForestRegressor
# Write code to train a RandomForestRegressor model and to test its performance on the test set
rfr = RandomForestRegressor()
rfr.fit(X_train, y_train)
rfr_R2 = rfr.score(X_test, y_test)
print("Random Forest Regressor R2 = ", rfr_R2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Graphical excellence and integrity
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
# Add your filename and uncomment the following line:
Image(filename='alcohol-consumption-by-country-pure-alcohol-consumption-per-drinker-2010_chartbuilder-1.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: State 3 of the following automaton cannot reach a final state.
Step2: Calling coaccessible returns a copy of the automaton without the non-coaccessible states (those that cannot reach a final state)
|
<ASSISTANT_TASK:>
Python Code:
import vcsn
%%automaton a
context = "lal_char(abc), b"
$ -> 0
0 -> 1 a
1 -> $
2 -> 0 a
1 -> 3 a
a.is_coaccessible()
a.coaccessible()
a.coaccessible().is_coaccessible()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data exploration
Step3: Data processing
Step5: Feature engineering
Step6: Modelling
Step7: Variables selection
Step8: Tuning
Step9: Training
Step10: Evaluate
Step12: Note: there is an oddity here; with self.XX the code does not work, and self.assertEqual was tried, so the unit test below compares the dataframes with pandas' assert_frame_equal instead.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import warnings
# source_path was not defined in this cell; reuse the same data folder assigned to local_path below
source_path = "/Users/sandrapietrowska/Documents/Trainings/luigi/data_source/"
raw_dataset = pd.read_csv(source_path + "Speed_Dating_Data.csv",encoding = "ISO-8859-1")
raw_dataset.head(2)
raw_dataset_copy = raw_dataset
columns_by_types = raw_dataset.columns.to_series().groupby(raw_dataset.dtypes).groups
raw_dataset.dtypes.value_counts()
raw_dataset.isnull().sum().head(3)
summary = raw_dataset.describe() #.transpose()
#print (summary.head())
#raw_dataset.groupby("gender").agg({"iid": pd.Series.nunique})
raw_dataset.groupby('gender').iid.nunique()
raw_dataset.groupby('career').iid.nunique().sort_values(ascending=False).head(5)
raw_dataset.groupby(["gender","match"]).iid.nunique()
local_path = "/Users/sandrapietrowska/Documents/Trainings/luigi/data_source/"
local_filename = "Speed_Dating_Data.csv"
my_variables_selection = ["iid", "pid", "match","gender","date","go_out","sports","tvsports","exercise","dining",
"museums","art","hiking","gaming","clubbing","reading","tv","theater","movies",
"concerts","music","shopping","yoga"]
class RawSetProcessing():
    """This class aims to load and clean the dataset."""
def __init__(self,source_path,filename,features):
self.source_path = source_path
self.filename = filename
self.features = features
# Load data
def load_data(self):
raw_dataset_df = pd.read_csv(self.source_path + self.filename,encoding = "ISO-8859-1")
return raw_dataset_df
# Select variables to process and include in the model
def subset_features(self, df):
sel_vars_df = df[self.features]
return sel_vars_df
@staticmethod
# Remove ids with missing values
def remove_ids_with_missing_values(df):
sel_vars_filled_df = df.dropna()
return sel_vars_filled_df
@staticmethod
def drop_duplicated_values(df):
df = df.drop_duplicates()
return df
# Combine processing stages
def combiner_pipeline(self):
raw_dataset = self.load_data()
subset_df = self.subset_features(raw_dataset)
subset_no_dup_df = self.drop_duplicated_values(subset_df)
subset_filled_df = self.remove_ids_with_missing_values(subset_no_dup_df)
return subset_filled_df
raw_set = RawSetProcessing(local_path, local_filename, my_variables_selection)
dataset_df = raw_set.combiner_pipeline()
dataset_df.head(2)
# Number of unique participants
dataset_df.iid.nunique()
dataset_df.shape
suffix_me = "_me"
suffix_partner = "_partner"
my_label = "match_perc"
class FeatureEngineering():
    """This class builds the engineered features used by the model."""
def __init__(self,suffix_1, suffix_2, label):
self.suffix_1 = suffix_1
self.suffix_2 = suffix_2
self.label = label
def get_partner_features(self, df, ignore_vars=True):
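        # Self-join on the partner id: each date row (iid, pid) is merged with the
        # partner's own feature row (pid == partner's iid), so both participants'
        # features end up side by side with the _me / _partner suffixes.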
df_partner = df.copy()
if ignore_vars is True:
df_partner = df_partner.drop(['pid','match'], 1).drop_duplicates()
else:
df_partner = df_partner.copy()
merged_datasets = df.merge(df_partner, how = "inner",left_on="pid", right_on="iid",suffixes=(self.suffix_1,self.suffix_2))
return merged_datasets
def add_success_failure_match(self, df):
df['total_match'] = df['match'].groupby(df['iid']).transform('sum')
df['total_dates'] = df['match'].groupby(df['iid']).transform('count')
df['total_nomatch'] = df['total_dates'] - df['total_match']
df['match_perc'] = df['total_match'] / df['total_dates']
return df
def label_to_categories(self, df):
df['match_success'] = pd.cut(df[self.label], bins=(0,0.2,1), include_lowest=True)
return df
@staticmethod
def aggregate_data(df):
model_set = dataset_df.drop(["pid","match"],1)
model_set = model_set.drop_duplicates()
return model_set
# Combine engineering stages
def combiner_pipeline(self, df):
add_match_feat_df = self.add_success_failure_match(df)
labels_df = self.label_to_categories(df)
model_set = self.aggregate_data(labels_df)
return model_set
feat_eng = FeatureEngineering(suffix_me, suffix_partner, my_label)
#feat_engineered_model_1_df = feat_eng.combiner_pipeline(dataset_df)
#feat_engineered_model_1_df.head(2)
feat_engineered_df = feat_eng.get_partner_features(dataset_df)
feat_engineered_df.head(2)
feat_engineered_df.groupby("match").iid_me.count()
import sklearn
print (sklearn.__version__)
from sklearn import tree
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
import subprocess
#features = list(["gender","age_o","race_o","goal","samerace","imprace","imprelig","date","go_out","career_c"])
features = list(['iid',"gender","date","go_out","sports","tvsports","exercise","dining","museums","art",
"hiking","gaming","clubbing","reading","tv","theater","movies","concerts","music",
"shopping","yoga"])
label = "match"
#add suffix to each element of list
def process_features_names(features, suffix_1, suffix_2):
features_me = [feat + suffix_1 for feat in features]
features_partner = [feat + suffix_2 for feat in features]
features_all = features_me + features_partner
return features_all
features_model = process_features_names(features, suffix_me, suffix_partner)
explanatory = feat_engineered_df[features_model]
explained = feat_engineered_df[label]
explanatory[explanatory["iid_me"] == 1].head(5)
from sklearn import ensemble
warnings.filterwarnings("ignore")
# Parameters for Random Forest
parameters = [
{'max_depth': [8,10,12,14,16,18],
'min_samples_split': [10,15,20,25,30],
'min_samples_leaf': [10,15,20,25,30]
}
]
scores = ['precision', 'recall']
RFModel = ensemble.RandomForestClassifier(n_estimators=5, oob_score=False)
class TuneParameters():
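    """Wraps GridSearchCV: splits the data, tunes the estimator over the given
    parameter grid for each scoring metric, and returns the best parameters."""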
def __init__(self, explanatory_vars, explained_var, estimator, parameters, scores):
self.explanatory_vars = explanatory_vars
self.explained_var = explained_var
self.estimator = estimator
self.parameters = parameters
self.scores = scores
def create_train_test_splits(self):
X_train, X_test, y_train, y_test = train_test_split(self.explanatory_vars, self.explained_var,
test_size = 0.5, random_state = 0,
stratify = self.explained_var)
return X_train, X_test, y_train, y_test
def tuning_parameters(self, trainset_x, testset_x, trainset_y, testset_y):
for score in self.scores:
print("# Tuning hyper-parameters for %s" % score)
print("")
grid_rfc = GridSearchCV(self.estimator, self.parameters,n_jobs=100, cv=10, refit=True,
scoring='%s_macro' % score)
grid_rfc.fit(trainset_x, trainset_y)
print("Best parameters set found on development set:")
print("")
print(grid_rfc.best_params_)
print("")
y_true, y_pred = testset_y, grid_rfc.predict(testset_x)
print(classification_report(y_true, y_pred))
print("")
best_parameters = grid_rfc.best_estimator_.get_params()
return best_parameters
def combiner_pipeline(self):
X_train, X_test, y_train, y_test = self.create_train_test_splits()
best_params = self.tuning_parameters(X_train, X_test, y_train, y_test)
return best_params
tune = TuneParameters(explanatory, explained, RFModel, parameters, scores)
best_parameters = tune.combiner_pipeline()
X_train, X_test, y_train, y_test = tune.create_train_test_splits()
estimator_RFC = ensemble.RandomForestClassifier()
best_parameters
class Trainer():
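    """Fits a RandomForestClassifier with the tuned parameters and reports its
    accuracy on both the training and the test split."""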
    def __init__(self, X_train, y_train, X_test, y_test, best_params):
self.X_train = X_train
self.y_train = y_train
self.X_test = X_test
self.y_test = y_test
self.estimator = None
self.best_params = best_params
def build_best_estimator(self):
params = self.best_params
model = ensemble.RandomForestClassifier(**params)
self.estimator = model.fit(self.X_train,self.y_train)
return self.estimator
def score_estimator_train(self):
return self.estimator.score(self.X_train,self.y_train)
def score_estimator_test(self):
return self.estimator.score(self.X_test,self.y_test)
def combiner_pipeline(self):
self.estimator = self.build_best_estimator()
score_train = self.score_estimator_train()
score_test = self.score_estimator_test()
return self.estimator, score_train, score_test
train = Trainer(X_train, y_train, X_test, y_test, best_parameters)
estimator, score_train, score_test = train.combiner_pipeline()
print (estimator, score_train, score_test)
import unittest
from pandas.util.testing import assert_frame_equal
def get_partner_features(df, suffix_1, suffix_2, ignore_vars=True):
df_partner = df.copy()
if ignore_vars is True:
df_partner = df_partner.drop(['pid','match'], 1).drop_duplicates()
else:
df_partner = df_partner.copy()
merged_datasets = df.merge(df_partner, how = "inner",left_on="pid", right_on="iid",suffixes=(suffix_1,suffix_2))
return merged_datasets
class FeatureEngineeringTest(unittest.TestCase):
def test_get_partner_features(self):
        """Check that get_partner_features joins each participant with their partner's features."""
# Given
raw_data_a = {
'iid': ['1', '2', '3', '4', '5','6'],
'first_name': ['Sue', 'Maria', 'Sandra', 'Bill', 'Brian','Bruce'],
'sport':['foot','run','volley','basket','swim','tv'],
'pid': ['4', '5', '6', '1', '2','3'],}
df_a = pd.DataFrame(raw_data_a, columns = ['iid', 'first_name', 'sport','pid'])
expected_data = {
'iid_me': ['1', '2', '3', '4', '5','6'],
'first_name_me': ['Sue', 'Maria', 'Sandra', 'Bill', 'Brian','Bruce'],
'sport_me': ['foot','run','volley','basket','swim','tv'],
'pid_me': ['4', '5', '6', '1', '2','3'],
'iid_partner': ['4', '5', '6', '1', '2','3'],
'first_name_partner': ['Bill', 'Brian','Bruce','Sue', 'Maria', 'Sandra'],
'sport_partner': ['basket','swim','tv','foot','run','volley'],
'pid_partner':['1', '2', '3', '4', '5','6']}
expected_output_values = pd.DataFrame(expected_data,
columns = ['iid_me','first_name_me','sport_me','pid_me',
'iid_partner','first_name_partner','sport_partner',
'pid_partner'])
# When
output_values = get_partner_features(df_a, "_me","_partner",ignore_vars=False)
# Then
assert_frame_equal(output_values, expected_output_values)
suite = unittest.TestLoader().loadTestsFromTestCase(FeatureEngineeringTest)
unittest.TextTestRunner(verbosity=2).run(suite)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define a magnetic dipole
Step2: Define the Earth's magnetic field $B_0$
Step3: Define the observations
Step4: Calculate data for plotting
Step5: 3D plot of field lines and data
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # needed for the 3D projection used below
def MagneticMonopoleField(obsloc,poleloc=(0.,0.,0.),Q=1):
# relative obs. loc. to pole, assuming pole at origin
dx, dy, dz = obsloc[0]-poleloc[0], obsloc[1]-poleloc[1], obsloc[2]-poleloc[2]
r = np.sqrt(dx**2+dy**2+dz**2)
Bx = Q * 1e-7 / r**2 * dx
By = Q * 1e-7 / r**2 * dy
Bz = Q * 1e-7 / r**2 * dz
return Bx, By, Bz
def VerticalMagneticLongDipoleLine(radius,L,stepsize=0.1,nstepmax=1000,dist_tol=0.5):
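    # Trace one field line in the y-z plane by repeatedly stepping along the
    # local field of the two monopoles (+Q at z=+L/2, -Q at z=-L/2), starting
    # from (y=radius, z=0); the traced half is then mirrored about z=0 to give
    # the full line.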
yloc, zloc = [radius], [0.]
dist2pole = np.sqrt( yloc[0]**2 + (zloc[0]-L/2)**2 )
# loop to get the lower half
count = 1
while (dist2pole > dist_tol) & (count<nstepmax):
_, By1, Bz1 = MagneticMonopoleField((0.,yloc[-1],zloc[-1]),(0.,0.,L/2),Q=1)
_, By2, Bz2 = MagneticMonopoleField((0.,yloc[-1],zloc[-1]),(0.,0.,-L/2),Q=-1)
By, Bz = By1+By2, Bz1+Bz2
B = np.sqrt(By**2 + Bz**2)
By, Bz = By/B*stepsize, Bz/B*stepsize
yloc = np.append(yloc, yloc[-1]+By)
zloc = np.append(zloc, zloc[-1]+Bz)
dist2pole = np.sqrt( yloc[-1]**2 + (zloc[-1]-L/2)**2 )
count += 1
# mirror to get the upper half
yloc = np.append(yloc[-1:0:-1],yloc)
zloc = np.append(-zloc[-1:0:-1],zloc)
return yloc, zloc
def MagneticLongDipoleLine(dipoleloc,dipoledec,dipoleinc,dipoleL,radii,Nazi=10):
x0, y0, z0 = dipoleloc[0], dipoleloc[1], dipoleloc[2]
# rotation matrix
theta, alpha = -np.pi*(dipoleinc+90.)/180., -np.pi*dipoledec/180.
Rx = np.array([[1.,0.,0.],[0.,np.cos(theta),-np.sin(theta)],[0.,np.sin(theta),np.cos(theta)]])
Rz = np.array([[np.cos(alpha),-np.sin(alpha),0.],[np.sin(alpha),np.cos(alpha),0.],[0.,0.,1.]])
R = np.dot(Rz,Rx) # Rz @ Rx
azimuth = np.linspace(0.,2*np.pi,num=Nazi,endpoint=False)
xloc, yloc, zloc = [], [], []
for r in radii:
hloc, vloc = VerticalMagneticLongDipoleLine(r,dipoleL,stepsize=0.5)
for a in azimuth:
x, y, z = np.sin(a)*hloc, np.cos(a)*hloc, vloc
xyz = R @ np.vstack((x,y,z))
xloc.append(xyz[0]+x0)
yloc.append(xyz[1]+y0)
zloc.append(xyz[2]+z0)
return xloc, yloc, zloc
def MagneticLongDipoleField(dipoleloc,dipoledec,dipoleinc,dipoleL,obsloc,dipolemoment=1.):
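    # Model the finite-length ("long") dipole as a pair of magnetic monopoles
    # with charges +Q and -Q placed at the two ends of the dipole, oriented by
    # the declination/inclination angles, and sum their fields at the
    # observation point.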
dec, inc, L = np.radians(dipoledec), np.radians(dipoleinc), dipoleL
x1 = L/2 * np.cos(inc) * np.sin(dec)
y1 = L/2 * np.cos(inc) * np.cos(dec)
z1 = L/2 * -np.sin(inc)
x2, y2, z2 = -x1, -y1, -z1
Q = dipolemoment * 4e-7 * np.pi / L
Bx1, By1, Bz1 = MagneticMonopoleField(obsloc,(x1+dipoleloc[0],y1+dipoleloc[1],z1+dipoleloc[2]),Q=Q)
Bx2, By2, Bz2 = MagneticMonopoleField(obsloc,(x2+dipoleloc[0],y2+dipoleloc[1],z2+dipoleloc[2]),Q=-Q)
return Bx1+Bx2, By1+By2, Bz1+Bz2
# define a dipole
dipoleloc = (0.,0.,-1.)
dipoleL = 2.
dipoledec, dipoleinc = 0., 90.
dipolemoment = 1e13
# geomagnetic field
B0, Binc, Bdec = 53600e-9, 90., 0. # in Tesla, degree, degree
B0x = B0*np.cos(np.radians(Binc))*np.sin(np.radians(Bdec))
B0y = B0*np.cos(np.radians(Binc))*np.cos(np.radians(Bdec))
B0z = -B0*np.sin(np.radians(Binc))
# set observation grid
xmin, xmax, ymin, ymax, z = -5., 5., -5., 5., 1. # x, y bounds and elevation
profile_x = 0. # x-coordinate of y-profile
profile_y = 0. # y-coordinate of x-profile
h = 0.2 # grid interval
radii = (2., 5.) # how many layers of field lines for plotting
Nazi = 10 # number of azimuth
# get field lines
linex, liney, linez = MagneticLongDipoleLine(dipoleloc,dipoledec,dipoleinc,dipoleL,radii,Nazi)
# get map
xi, yi = np.meshgrid(np.r_[xmin:xmax+h:h], np.r_[ymin:ymax+h:h])
x1, y1 = xi.flatten(), yi.flatten()
z1 = np.full(x1.shape,z)
Bx, By, Bz = np.zeros(len(x1)), np.zeros(len(x1)), np.zeros(len(x1))
for i in np.arange(len(x1)):
Bx[i], By[i], Bz[i] = MagneticLongDipoleField(dipoleloc,dipoledec,dipoleinc,dipoleL,(x1[i],y1[i],z1[i]),dipolemoment)
Ba1 = np.dot(np.r_[B0x,B0y,B0z], np.vstack((Bx,By,Bz)) )
# get x-profile
x2 = np.r_[xmin:xmax+h:h]
y2, z2 = np.full(x2.shape,profile_y), np.full(x2.shape,z)
Bx, By, Bz = np.zeros(len(x2)), np.zeros(len(x2)), np.zeros(len(x2))
for i in np.arange(len(x2)):
Bx[i], By[i], Bz[i] = MagneticLongDipoleField(dipoleloc,dipoledec,dipoleinc,dipoleL,(x2[i],y2[i],z2[i]),dipolemoment)
Ba2 = np.dot( np.r_[B0x,B0y,B0z], np.vstack((Bx,By,Bz)) )
# get y-profile
y3 = np.r_[ymin:ymax+h:h]
x3, z3 = np.full(y3.shape,profile_x), np.full(y3.shape,z)
Bx, By, Bz = np.zeros(len(x3)), np.zeros(len(x3)), np.zeros(len(x3))
for i in np.arange(len(x3)):
Bx[i], By[i], Bz[i] = MagneticLongDipoleField(dipoleloc,dipoledec,dipoleinc,dipoleL,(x3[i],y3[i],z3[i]),dipolemoment)
Ba3 = np.dot( np.r_[B0x,B0y,B0z] , np.vstack((Bx,By,Bz)) )
fig = plt.figure()
ax = fig.gca(projection='3d')
# plot field lines
for lx,ly,lz in zip(linex,liney,linez):
ax.plot(lx,ly,lz,'-',markersize=1)
# plot map
ax.scatter(x1,y1,z1,s=2,alpha=0.3)
Bt = Ba1.reshape(xi.shape)*1e9 # contour and color scale in nT
c = ax.contourf(xi,yi,Bt,alpha=1,zdir='z',offset=z-max(radii)*2,cmap='jet',
levels=np.linspace(Bt.min(),Bt.max(),50,endpoint=True))
fig.colorbar(c)
# auto-scaling for profile plot
ptpmax = np.max((Ba2.ptp(),Ba3.ptp())) # dynamic range
autoscaling = np.max(radii) / ptpmax
# plot x-profile
ax.scatter(x2,y2,z2,s=2,c='black',alpha=0.3)
ax.plot(x2,Ba2*autoscaling,zs=ymax,c='black',zdir='y')
# plot y-profile
ax.scatter(x3,y3,z3,s=2,c='black',alpha=0.3)
ax.plot(y3,Ba3*autoscaling,zs=xmin,c='black',zdir='x')
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
ax.set_xlim(xmin, xmax)
ax.set_ylim(ymin, ymax)
ax.set_zlim(z-max(radii)*2, max(radii)*1.5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Helper functions
Step2: First Convolution Layer
Step3: Second Convolution Layer
Step4: Densely Connected Layer
Step5: Dropout Layer
Step6: Readout Layer
Step7: Training
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/data/MNIST/",one_hot=True)
sess = tf.InteractiveSession()
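# Helper functions for the CNN: weights start from a truncated normal with a
# small standard deviation, biases start at a small positive constant, and the
# convolution / pooling wrappers use stride-1 'SAME' convolutions and 2x2 max-pooling.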
def weight_variable(shape):
initial = tf.truncated_normal(shape,stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
def conv2d(x,W):
return tf.nn.conv2d(x,W,strides=[1,1,1,1],padding='SAME')
def max_pool_2x2(x):
return tf.nn.max_pool(x,ksize=[1,2,2,1],strides=[1,2,2,1], padding='SAME')
# Setup our Input placeholder
x = tf.placeholder(tf.float32, [None, 784])
# Define loss and optimizer
y_ = tf.placeholder(tf.float32,[None,10])
W_conv1 = weight_variable([5,5,1,32])
b_conv1 = bias_variable([32])
x_image = tf.reshape(x,[-1,28,28,1])
h_conv1 = tf.nn.relu(conv2d(x_image,W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
W_conv2 = weight_variable([5,5,32,64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1,W_conv2)+b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
W_fc1 = weight_variable([7*7*64,1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1,7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat,W_fc1)+b_fc1)
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1,keep_prob)
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
sess.run(tf.global_variables_initializer())
for i in range(10000):
batch = mnist.train.next_batch(50)
if i%100 == 0:
train_accuracy = accuracy.eval(feed_dict={
x:batch[0], y_: batch[1], keep_prob: 1.0})
print("step %d, training accuracy %g"%(i, train_accuracy))
train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
print("test accuracy %g"%accuracy.eval(feed_dict={
x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example 1
Step2: It is clear from the figure below that the higher the order of the polynomial (that is, the more terms in the summation), the more precise the approximation becomes, even at points far from the one the series was expanded around.
Step3: Example 2
Step4: The conclusion is the same as in the case shown above, namely that the higher the order of the polynomial (that is, the more terms in the summation), the more precise the approximation becomes, even at points far from the one the series was expanded around.
|
<ASSISTANT_TASK:>
Python Code:
import functools
import numpy as np
import pandas as pd
import sympy as sp
import matplotlib.pyplot as plt
def taylor(f, x, var, max_terms=6, x0=0):
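    # Build the Taylor polynomial of f around x0:
    #   sum over k of f^(k)(x0) / k! * (x - x0)**k,  for k = 0 .. max_terms-1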
def taylor_terms():
for k in range(max_terms):
term = (sp.diff(f, var, k).subs(var, x0).evalf()/np.math.factorial(k)) * (x - x0)**k
yield term
serie = 0
for term in taylor_terms():
serie += term
return serie
x = sp.symbols("x")
f = sp.cos(x)
f_series = functools.partial(taylor, var=x)
df1 = pd.DataFrame({"x": np.linspace(-8, 8, num=100)})
max_terms = [3, 10, 15, 20]
for max_term in max_terms:
df1["y" + str(max_term)] = np.array([f_series(f, x, max_terms=max_term) for x in df1["x"]])
df1["y_actual"] = np.cos(df1["x"])
df1.head()
fig, ax = plt.subplots(figsize=(13, 8))
ax.set_ylim(-1.5, 1.5)
colors = ["red", "purple", "blue", "green"]
for max_term, color in zip(max_terms, colors):
plt.plot(df1["x"], df1["y"+str(max_term)], color=color)
plt.plot(df1["x"], df1["y_actual"], color='black')
plt.legend(loc='lower left', fancybox=True, framealpha=1, shadow=True, borderpad=1, frameon=True)
ax.set(title=r"Taylor polynomials for $\cos(x)$", xlabel="x", ylabel="y");
x = sp.symbols("x")
f = sp.exp(x)
f_series = functools.partial(taylor, var=x)
df2 = pd.DataFrame({"x": np.linspace(0, 10, num=100)})
max_terms = [4, 7, 10, 12]
for max_term in max_terms:
df2["y" + str(max_term)] = np.array([f_series(f, x, max_terms=max_term) for x in df2["x"]])
df2["y_actual"] = np.exp(df2["x"])
df2.head()
fig, ax = plt.subplots(figsize=(13, 8))
ax.set_ylim(0, 1000)
colors = ["red", "purple", "blue", "green"]
for max_term, color in zip(max_terms, colors):
plt.plot(df2["x"], df2["y"+str(max_term)], color=color)
plt.plot(df2["x"], df2["y_actual"], color='black')
plt.legend(loc='lower left', fancybox=True, framealpha=1, shadow=True, borderpad=1, frameon=True)
ax.set(title=r"Taylor polynomials for $\exp(x)$", xlabel="x", ylabel="y");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bengali article classification with TF-Hub
Step2: Dataset
Step3: Export the pre-trained word vectors to a TF-Hub module
Step4: Next, run the export script on the embedding file. The fastText embeddings have a header line and are fairly large (about 3.3 GB for Bengali after conversion to a module), so we ignore the header line and export only the first 100,000 tokens to the text embedding module.
Step5: The text embedding module takes a batch of sentences as a 1-D tensor of strings and outputs embedding vectors of shape (batch_size, embedding_dim) corresponding to the sentences. It preprocesses the input by splitting on spaces, and the word embeddings are combined into sentence embeddings with the sqrtn combiner (see here). As a demonstration, we pass a list of Bengali words as input and obtain the corresponding embedding vectors.
Step6: Convert to a TensorFlow Dataset
Step7: After shuffling, we can check the distribution of the labels in the training and validation examples.
Step8: To build the Datasets, slice file_paths and the label array into training and validation portions (an 80/20 split set by train_size), create a tf.data.Dataset from the file path and label pairs with tf.data.Dataset.from_tensor_slices, and map a load_file function over it so that each training example is a tuple of a tf.string item and its label; the training set is shuffled and both sets are batched and prefetched.
Step9: Model training and evaluation
Step10: Training
Step11: Evaluation
Step12: Prediction
Step13: Compare performance
|
<ASSISTANT_TASK:>
Python Code:
# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
%%bash
# https://github.com/pypa/setuptools/issues/1694#issuecomment-466010982
pip install gdown --no-use-pep517
%%bash
sudo apt-get install -y unzip
import os
import tensorflow as tf
import tensorflow_hub as hub
import gdown
import numpy as np
from sklearn.metrics import classification_report
import matplotlib.pyplot as plt
import seaborn as sns
gdown.download(
url='https://drive.google.com/uc?id=1Ag0jd21oRwJhVFIBohmX_ogeojVtapLy',
output='bard.zip',
quiet=True
)
%%bash
unzip -qo bard.zip
%%bash
curl -O https://dl.fbaipublicfiles.com/fasttext/vectors-crawl/cc.bn.300.vec.gz
curl -O https://raw.githubusercontent.com/tensorflow/hub/master/examples/text_embeddings_v2/export_v2.py
gunzip -qf cc.bn.300.vec.gz --k
%%bash
python export_v2.py --embedding_file=cc.bn.300.vec --export_path=text_module --num_lines_to_ignore=1 --num_lines_to_use=100000
module_path = "text_module"
embedding_layer = hub.KerasLayer(module_path, trainable=False)
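# The exported module maps a batch of space-separated sentences (a 1-D string
# tensor) to sentence embeddings of shape (batch_size, embedding_dim), combining
# the word vectors with the sqrtn combiner; the call below demonstrates it on a
# few Bengali words.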
embedding_layer(['বাস', 'বসবাস', 'ট্রেন', 'যাত্রী', 'ট্রাক'])
dir_names = ['economy', 'sports', 'entertainment', 'state', 'international']
file_paths = []
labels = []
for i, dir in enumerate(dir_names):
file_names = ["/".join([dir, name]) for name in os.listdir(dir)]
file_paths += file_names
labels += [i] * len(os.listdir(dir))
np.random.seed(42)
permutation = np.random.permutation(len(file_paths))
file_paths = np.array(file_paths)[permutation]
labels = np.array(labels)[permutation]
train_frac = 0.8
train_size = int(len(file_paths) * train_frac)
# plot training vs validation distribution
plt.subplot(1, 2, 1)
plt.hist(labels[0:train_size])
plt.title("Train labels")
plt.subplot(1, 2, 2)
plt.hist(labels[train_size:])
plt.title("Validation labels")
plt.tight_layout()
def load_file(path, label):
return tf.io.read_file(path), label
def make_datasets(train_size):
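  # Build the tf.data pipelines: the first `train_size` files form the training
  # set and the rest the validation set; files are read lazily via load_file,
  # the training set is shuffled, and both sets are batched and prefetched.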
batch_size = 256
train_files = file_paths[:train_size]
train_labels = labels[:train_size]
train_ds = tf.data.Dataset.from_tensor_slices((train_files, train_labels))
train_ds = train_ds.map(load_file).shuffle(5000)
train_ds = train_ds.batch(batch_size).prefetch(tf.data.AUTOTUNE)
test_files = file_paths[train_size:]
test_labels = labels[train_size:]
test_ds = tf.data.Dataset.from_tensor_slices((test_files, test_labels))
test_ds = test_ds.map(load_file)
test_ds = test_ds.batch(batch_size).prefetch(tf.data.AUTOTUNE)
return train_ds, test_ds
train_data, validation_data = make_datasets(train_size)
def create_model():
model = tf.keras.Sequential([
tf.keras.layers.Input(shape=[], dtype=tf.string),
embedding_layer,
tf.keras.layers.Dense(64, activation="relu"),
tf.keras.layers.Dense(16, activation="relu"),
tf.keras.layers.Dense(5),
])
model.compile(loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer="adam", metrics=['accuracy'])
return model
model = create_model()
# Create earlystopping callback
early_stopping_callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=3)
history = model.fit(train_data,
validation_data=validation_data,
epochs=5,
callbacks=[early_stopping_callback])
# Plot training & validation accuracy values
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# Plot training & validation loss values
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
y_pred = model.predict(validation_data)
y_pred = np.argmax(y_pred, axis=1)
samples = file_paths[0:3]
for i, sample in enumerate(samples):
f = open(sample)
text = f.read()
print(text[0:100])
print("True Class: ", sample.split("/")[0])
print("Predicted Class: ", dir_names[y_pred[i]])
f.close()
y_true = np.array(labels[train_size:])
print(classification_report(y_true, y_pred, target_names=dir_names))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Data download
Step2: Index extraction
Step3: The indexes obtained this way will receive a manual cleanup in Excel.
Step4: Select the rows corresponding to energy sales volume in MW/h
Step5: Extract data for all cities
Step6: Review of the extracted data
Step7: Review of the columns across all datasets
Step8: We can see that the column extraction was not uniform. Taking Mexico City as the baseline, the particular cases will be reviewed
Step9: The following states contain columns that are not standard
Step10: The following states will be reviewed case by case.
Step11: CVE_EDO 05
Step12: CVE_EDO 10
Step13: CVE_EDO 12
Step14: CVE_EDO 16
Step15: CVE_EDO 18
Step16: CVE_EDO 26
Step17: CVE_EDO 29
Step18: Dataframe consolidation
Step19: Like everything sad in this life, the dataset has no municipal geostatistical keys assigned, so it will have to be labeled manually in Excel.
Step20: The following code cells show the dataset after the cleanup done in Excel.
Step21: The clean dataset was saved as '..\PCCS\01_Dmine\Datasets\AGEO\2017\VentasElectricidad.xlsx'
Step22: Select the rows corresponding to the total length of the road network (kilometers)
Step23: The index for CDMX is missing, so it has to be added separately
Step24: Extract data for all cities
Step25: The extracted data are very irregular, so it is faster to clean them in Excel.
|
<ASSISTANT_TASK:>
Python Code:
descripciones = {
'P0610': 'Ventas de electricidad',
'P0701': 'Longitud total de la red de carreteras del municipio (excluyendo las autopistas)'
}
# Librerias utilizadas
import pandas as pd
import sys
import urllib
import os
import csv
import zipfile
# Configuracion del sistema
print('Python {} on {}'.format(sys.version, sys.platform))
print('Pandas version: {}'.format(pd.__version__))
import platform; print('Running on {} {}'.format(platform.system(), platform.release()))
raiz = 'http://internet.contenidos.inegi.org.mx/contenidos/Productos/prod_serv/contenidos/espanol/bvinegi/productos/nueva_estruc/anuarios_2017/'
# El diccionario tiene como llave la CVE_EDO y dirige hacia la liga de descarga del archivo zip con las tablas del
# Anuario Geoestadístico de cada estado
links = {
'01': raiz + '702825092078.zip',
'02': raiz + '702825094874.zip',
'03': raiz + '702825094881.zip',
'04': raiz + '702825095109.zip',
'05': raiz + '702825095406.zip',
'06': raiz + '702825092061.zip',
'07': raiz + '702825094836.zip',
'08': raiz + '702825092139.zip',
'09': raiz + '702825094683.zip',
'10': raiz + '702825092115.zip',
'11': raiz + '702825092146.zip',
'12': raiz + '702825094690.zip',
'13': raiz + '702825095093.zip',
'14': raiz + '702825092085.zip',
'15': raiz + '702825094706.zip',
'16': raiz + '702825092092.zip',
'17': raiz + '702825094713.zip',
'18': raiz + '702825092054.zip',
'19': raiz + '702825094911.zip',
'20': raiz + '702825094843.zip',
'21': raiz + '702825094973.zip',
'22': raiz + '702825092108.zip',
'23': raiz + '702825095130.zip',
'24': raiz + '702825092122.zip',
'25': raiz + '702825094898.zip',
'26': raiz + '702825094904.zip',
'27': raiz + '702825095123.zip',
'28': raiz + '702825094928.zip',
'29': raiz + '702825096212.zip',
'30': raiz + '702825094980.zip',
'31': raiz + '702825095116.zip',
'32': raiz + '702825092047.zip'
}
for value in links.values():
print(value)
# Descarga de archivos a carpeta local
destino = r'D:\PCCS\00_RawData\01_CSV\AGEO\2017'
archivos = {} # Diccionario para guardar memoria de descarga
for k,v in links.items():
archivo_local = destino + r'\{}.zip'.format(k)
if os.path.isfile(archivo_local):
print('Ya existe el archivo: {}'.format(archivo_local))
archivos[k] = archivo_local
else:
print('Descargando {} ... ... ... ... ... '.format(archivo_local))
urllib.request.urlretrieve(v, archivo_local) #
archivos[k] = archivo_local
print('se descargó {}'.format(archivo_local))
# Descompresión de archivos de estado
unzipped = {}
for estado, comprimido in archivos.items():
target = destino + '\\' + estado
if os.path.isdir(target):
print('Ya existe el directorio: {}'.format(target))
unzipped[estado] = target
else:
print('Descomprimiendo {} ... ... ... ... ... '.format(target))
descomprimir = zipfile.ZipFile(comprimido, 'r')
descomprimir.extractall(target)
descomprimir.close
unzipped[estado] = target
unzipped
# Extraer indices
indices = {}
for estado, ruta in unzipped.items():
for file in os.listdir(ruta):
if file.endswith('.xls'):
path = ruta + '\\' + file
indice = pd.read_excel(path, sheetname='Índice', skiprows = 1) # Primera lectura al indice para sacar columnas
dtypes = list(indice)
tempdic = {}
for i in dtypes:
tempdic[i] = 'str'
indice = pd.read_excel(path,
sheetname='Índice',
skiprows = 1,
dtype = tempdic).dropna(how = 'all') # Segunda lectura al indice ya con dtypes
name = list(indice)[0] # Guarda el nombre del indice
cols = []
for i in range(len(list(indice))):
cols.append('col{}'.format(i)) # Crea nombres estandar de columna
indice.columns = cols # Asigna nombres de columna
indice['indice'] = name
indice['file'] = file
if estado not in indices.keys(): # Crea un diccionario para cada estado, si no existe
indices[estado] = {}
indices[estado][name] = indice
print('Procesado {} |||NOMBRE:||| {}; [{}]'.format(file, name, len(cols))) # Imprime los resultados del proceso
# Reordenar los dataframes por tipo
indices_2 = {}
for estado in indices.keys():
for indice in indices[estado].keys():
if indice not in indices_2.keys():
indices_2[indice] = {}
indices_2[indice][estado] = indices[estado][indice]
# Convertir indices en archivos unicos.
finalindexes = {}
for i in indices_2.keys():
print(i)
frameslist = []
for estado in indices_2[i].keys():
frame = indices_2[i][estado]
frame['estado'] = estado
frameslist.append(frame)
fullindex = pd.concat(frameslist)
finalindexes[i] = fullindex
print('Hecho: {}\n'.format(i))
# Escribir archivos xlsx
path = r'D:\PCCS\01_Dmine\Datasets\AGEO\2017\indices'
for indice in finalindexes.keys():
file = path+'\\'+indice+'.xlsx'
writer = pd.ExcelWriter(file)
finalindexes[indice].to_excel(writer, sheet_name = 'Indice')
writer.save()
print('[{}] lineas - archivo {}'.format(len(finalindexes[indice]), file))
# Importar dataset de índices
f_indice = r'D:\PCCS\01_Dmine\Datasets\AGEO\2017\indices\Limpios\Electricidad.xlsx'
ds_indices = pd.read_excel(f_indice, dtype={'Numeral':'str', 'estado':'str'}).set_index('estado')
ds_indices.head()
# Dataframe con índice de hojas sobre el tema "Ventas de electricidad"
ventaselec = ds_indices[ds_indices['Units'] == '(Megawatts-hora)']
ventaselec.head()
len(ventaselec)
# Crear columna con rutas
path = r'D:\PCCS\00_RawData\01_CSV\AGEO\2017'
ventaselec['path'] = path+'\\'+ventaselec.index+'\\'+ventaselec['file']
# Definir función para traer datos a python
unnameds = set(['Unnamed: '+str(i) for i in range(0, 50)]) # Lista 'Unnamed: x' de 0 a 50
def get_ventas(path, sheet, estado):
temp = pd.ExcelFile(path)
temp = temp.parse(sheet, header = 6).dropna(axis = 0, how='all').dropna(axis = 1, how='all')
# Elimina las columnas unnamed
dropplets = set(temp.columns).intersection(unnameds)
temp = temp.drop(dropplets, axis = 1)
temp = temp.dropna(axis = 0, how='all')
temp = temp.reset_index().drop('index', axis = 1)
# Identifica los últimos renglones, que no contienen datos
col0 = temp.columns[0] # Nombre de la columna 0, para usarlo en un chingo de lugares. Bueno 3
try: tempnotas = temp[col0][temp[col0] == 'Nota:'].index[0] # Para las hojas que terminan en 'Notas'
except: tempnotas = temp[col0][temp[col0] == 'a/'].index[0] # Para las hojas que terminan en 'a/'
print(tempnotas)
# Aparta los renglones después de "a/", para conocer la información que dejé fuera.
trashes = temp.iloc[tempnotas:-1]
# Elimina los renglones después de "a/"
temp = temp.iloc[0:tempnotas]
# Crear columna de estado y renombrar la primera columna para poder concatenar datframes más tarde.
temp['CVE_EDO'] = estado
temp = temp.rename(columns={col0:'NOM_MUN'})
print(type(temp))
return temp, trashes
# Funcion para extraer datos
def getdata(serie, estado):
path = serie['path']
sheet = serie['Numeral']
print('{}\n{}'.format('-'*30, path)) # Imprime la ruta hacia el archivo
print('Hoja: {}'.format(sheet)) # Imprime el nombre de la hoja que se va a extraer
trashes, temp = get_ventas(path, sheet, estado)
print(temp.iloc[[0, -1]][temp.columns[0]])
print(list(temp))
print(('len = {}'.format(len(temp))))
return trashes, temp
ventasdic = {}
trashesdic = {}
for estado in ventaselec.index:
ventasdic[estado], trashesdic[estado] = getdata(ventaselec.loc[estado], estado)
# Ejemplo de uno de los dataframes extraidos
ventasdic['09']
ventaselec['path']
a = '-'*30
for CVE_EDO in trashesdic.keys():
print('{}\n{}\n{}'.format(a, trashesdic[CVE_EDO], a))
for CVE_EDO in ventasdic.keys():
variables = list(ventasdic[CVE_EDO])
longitud = len(variables) # Cuantas variables existen?
longset = len(set(variables)) # Cuantas variables son distintas?
print('{}{} [{} - {}]\n{}\n{}'.format(a, CVE_EDO, longitud, longset, variables, a))
varCDMX = set(list(ventasdic['09']))
varCDMX
nostandar = [] # Lista de estados cuyas columnas no son estandar
for CVE_EDO in ventasdic.keys():
varsedo = set(list(ventasdic[CVE_EDO]))
diffs = varCDMX.symmetric_difference(varsedo)
if len(diffs) != 0:
print('{}{}\n{}\n{}'.format(a, CVE_EDO, diffs, a))
nostandar.append(CVE_EDO)
nostandar
REV_EDO = '05'
ventasdic[REV_EDO].head(6)
list(ventasdic[REV_EDO])
drops = ['a/', '\nb/', '\nc/', 'd/', '\ne/'] # Nombres de las columnas que se van a eliminar
ventasdic['05'] = ventasdic['05'].drop(drops, axis = 1)
old_colnames = list(ventasdic[REV_EDO])
new_colnames = list(ventasdic['09'])
colnames = {i:j for i,j in zip(old_colnames,new_colnames)}
ventasdic[REV_EDO] = ventasdic[REV_EDO].rename(columns = colnames) # Normalizacion
ventasdic['05'].head()
REV_EDO = '10'
ventasdic[REV_EDO].head(6)
list(ventasdic[REV_EDO])
drops = ['d/'] # Nombres de las columnas que se van a eliminar
ventasdic[REV_EDO] = ventasdic[REV_EDO].drop(drops, axis = 1)
REV_EDO = '12'
ventasdic[REV_EDO].head(6)
list(ventasdic[REV_EDO])
drops = ['a/'] # Nombres de las columnas que se van a eliminar
ventasdic[REV_EDO] = ventasdic[REV_EDO].drop(drops, axis = 1)
REV_EDO = '16'
ventasdic[REV_EDO].head(6)
list(ventasdic[REV_EDO])
drops = ['d/'] # Nombres de las columnas que se van a eliminar
ventasdic[REV_EDO] = ventasdic[REV_EDO].drop(drops, axis = 1)
REV_EDO = '18'
ventasdic[REV_EDO].head(6)
list(ventasdic[REV_EDO])
drops = ['d/'] # Nombres de las columnas que se van a eliminar
ventasdic[REV_EDO] = ventasdic[REV_EDO].drop(drops, axis = 1)
REV_EDO = '26'
ventasdic[REV_EDO].head(6)
list(ventasdic[REV_EDO])
drops = ['d/'] # Nombres de las columnas que se van a eliminar
ventasdic[REV_EDO] = ventasdic[REV_EDO].drop(drops, axis = 1)
REV_EDO = '29'
ventasdic[REV_EDO].head(6)
list(ventasdic[REV_EDO])
drops = ['a/', '\nb/', '\nc/', 'd/', '\ne/'] # Nombres de las columnas que se van a eliminar
ventasdic[REV_EDO] = ventasdic[REV_EDO].drop(drops, axis = 1)
old_colnames = list(ventasdic[REV_EDO])
new_colnames = list(ventasdic['09'])
colnames = {i:j for i,j in zip(old_colnames,new_colnames)}
ventasdic[REV_EDO] = ventasdic[REV_EDO].rename(columns = colnames) # Normalizacion
# Unificacion de datos a un solo dataset
ventasDS = pd.DataFrame()
for CVE_EDO, dataframe in ventasdic.items():
estado = ventasdic[CVE_EDO]['NOM_MUN'][0]
print('Adjuntando {} - {} -----'.format(CVE_EDO, estado))
ventasDS = ventasDS.append(dataframe)
# Nombre final para columnas, para evitar duplicidades con otros datasets
# y para eliminar acentos que podrían causar problemas de encoding
colnames = {
'NOM_MUN':'NOM_MUN',
'Total':'Total ventas elec',
'Doméstico':'VE Domestico',
'Alumbrado\npúblico':'VE alumbrado publico',
'Bombeo de aguas \npotables y negras':'VE Bombeo agua potable y negra',
'Agrícola':'VE Agricola',
'Industrial y \nde servicios':'VE Industrial y servicios',
}
ventasDS = ventasDS.rename(columns=colnames)
ventasDS.head()
# Metadatos
metadatos = {
'Nombre del Dataset': 'Anuario Geoestadistico 2017 por estado, Datos de electricidad',
'Descripcion del dataset': None,
'Disponibilidad Temporal': '2016',
'Periodo de actualizacion': 'Anual',
'Nivel de Desagregacion': 'Municipal',
'Notas': 's/n',
'Fuente': 'INEGI - Anuarios Geoestadisticos',
'URL_Fuente': 'http://www.beta.inegi.org.mx/proyectos/ccpv/2010/?#section',
'Dataset base': None
}
metadatos = pd.DataFrame.from_dict(metadatos, orient='index', dtype='str')
metadatos.columns = ['Descripcion']
metadatos= metadatos.rename_axis('Metadato')
metadatos
Variables = {
'NOM_MUN':'Nombre del Municipio',
'Total ventas elec':'Total de ventas de energía electrica',
'VE Domestico':'Ventas de energia en el sector domestico (Megawatts-Hora)',
'VE alumbrado publico':'Ventas de energia en alumbrado publico (Megawatts-Hora)',
'VE Bombeo agua potable y negra':'Ventas de energia en bombeo de agua potable y negra (Megawatts-Hora)',
'VE Agricola':'Ventas de energia en el sector Agricola (Megawatts-Hora)',
'VE Industrial y servicios':'Ventas de energia en el sector industrial y de Servicios (Megawatts-Hora)',
'CVE_EDO':'Clave Geoestadistica Estatal de 2 Digitos',
}
Variables = pd.DataFrame.from_dict(Variables, orient='index', dtype='str')
Variables.columns = ['Descripcion']
Variables= Variables.rename_axis('Mnemonico')
Variables
#Exportar dataset
file = r'D:\PCCS\01_Dmine\Datasets\AGEO\2017'+'\\'+'electricidad.xlsx'
writer = pd.ExcelWriter(file)
ventasDS.to_excel(writer, sheet_name = 'DATOS')
metadatos.to_excel(writer, sheet_name ='METADATOS')
Variables.to_excel(writer, sheet_name ='VARIABLES')
writer.save()
# Sample of the dataset with assigned geostatistical keys
archivo = r'D:\PCCS\01_Dmine\Datasets\AGEO\2017\VentasElectricidad.xlsx'
elec = pd.read_excel(archivo, sheetname='DATOS', dtype={'CVE_MUN':'str'})
elec = elec.set_index('CVE_MUN')
elec.head()
# Import the index dataset
f_indice = r'D:\PCCS\01_Dmine\Datasets\AGEO\2017\indices\Limpios\IndicesMovilidad.xlsx'
ds_indices = pd.read_excel(f_indice, dtype={'Numeral':'str', 'estado':'str'}).set_index('estado')
ds_indices.head()
# Dataframe with the index of sheets on the topic "Longitud total de la red de carreteras" (total length of the road network)
dataframe = ds_indices[ds_indices['ID'] == '22.1']
dataframe.head()
len(dataframe)
dataframe.index
ds_indices[ds_indices['ID'] == '20.16']
dataframe = dataframe.append(ds_indices[ds_indices['ID'] == '20.16'])
# Create a column with file paths
path = r'D:\PCCS\00_RawData\01_CSV\AGEO\2017'
dataframe['path'] = path+'\\'+dataframe.index+'\\'+dataframe['file']
dataframe.head(2)
# Define a function to bring the data into Python
unnameds = set(['Unnamed: '+str(i) for i in range(0, 50)]) # List of 'Unnamed: x' labels from 0 to 50
# This list is used to drop columns named 'Unnamed', which generally contain no data.
def get_datos(path, sheet, estado):
temp = pd.ExcelFile(path)
temp = temp.parse(sheet, header = 6).dropna(axis = 0, how='all').dropna(axis = 1, how='all')
return temp
    # Identify the trailing rows, which contain no data
    # col0 = temp.columns[0] # Name of column 0, used in several places below
    # try: tempnotas = temp[col0][temp[col0] == 'Nota:'].index[0] # For sheets that end with 'Notas'
    # except: tempnotas = temp[col0][temp[col0] == 'a/'].index[0] # For sheets that end with 'a/'
    # print(tempnotas)
    #
    # # Set aside the rows after "a/", to keep track of the information left out.
    # trashes = temp.iloc[tempnotas:-1]
    #
    # # Drop the rows after "a/"
    # temp = temp.iloc[0:tempnotas]
    #
    # # Create a state column and rename the first column so the dataframes can be concatenated later.
    # temp['CVE_EDO'] = estado
    # temp = temp.rename(columns={col0:'NOM_MUN'})
    # print(type(temp))
    #
    # return temp, trashes
# Function to extract the data
def getdata(serie, estado):
path = serie['path']
sheet = serie['ID']
    print('{}\n{}'.format('-'*30, path)) # Print the path to the file
    print('Sheet: {}'.format(sheet)) # Print the name of the sheet to be extracted
temp = get_datos(path, sheet, estado)
print(temp.iloc[[0, -1]][temp.columns[0]])
print(list(temp))
print(('len = {}'.format(len(temp))))
return temp
datadic = {}
for estado in dataframe.index:
datadic[estado] = getdata(dataframe.loc[estado], estado)
datadic.keys()
# Export dataset
file = r'D:\PCCS\01_Dmine\Datasets\AGEO\2017'+'\\'+'Long_Carreteras_raw1.xlsx'
writer = pd.ExcelWriter(file)
for estado, dataset in datadic.items():
dataset.to_excel(writer, sheet_name = estado)
    print('Saved dataset for {}'.format(estado))
writer.save()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Data cleaning
Step2: 1.2. LA County Top Earners
Step3: Idea
Step4: 1.3. LA City Active Businesses
Step5: So, the location column contains a mix of nulls alongside strings of coordinate tuples. Yikes. There are different ways to parse these coordinates out. Here's a relatively efficient option. First, some explanation
Step6: 2. Exploration
Step7: 2.1. Understanding the data's distribution
Step8: That's better... but it's still hard to pick out patterns and trends by just staring at a table full of numbers. Let's visualize it.
Step9: Ideally, your xlabel would state what year the USD are in (e.g., "2017 inflation-adjusted USD") but the data source doesn't say clearly. My guess is that they are nominal dollars from the reported year.
Step10: Histograms visualize the distribution of some variable by binning it then counting observations per bin. KDE plots are similar, but continuous and smooth.
Step11: You can compare multiple histograms to see how different groups overlap or differ by some measure.
Step12: Looks like a pretty big difference! But is it statistically significant?
Step13: Social service workers in LA county make, on average, $56k less than LASD employees and this difference is statistically significant (p<0.001).
Step14: 2.2. Pairwise relationships
Step15: Do you see patterns in these scatter plots? Correlation tells us to what extent two variables are linearly related to one another. Pearson correlation coefficients range from -1 to 1, with 0 indicating no linear relationship, -1 indicating a perfect negative linear relationship, and 1 indicating a perfect positive linear relationship. If you are hypothesis-testing a correlation, make sure to report and interpret the p-value.
Step16: 2.3. Bar plots and count plots
Step17: NAICS sector 54 is "professional, scientific, and technical services" and sector 53 is "real estate and rental and leasing."
Step18: 2.4. Line plots
|
<ASSISTANT_TASK:>
Python Code:
import ast
import numpy as np
import pandas as pd
import seaborn as sns
from scipy import stats
# load the data
df = pd.read_csv('../../data/LA_County_Covid19_CSA_case_death_table.csv')
df.shape
# what do you see in the raw data?
df
# check the data types: do we need to change/convert any?
df.dtypes
# drop the duplicate IDs and rename the place column to something meaningful
df = df.drop(columns=['Unnamed: 0']).rename(columns={'geo_merge':'place_name'})
df
# clean up place names
df['place_name'] = df['place_name'].str.replace('City of ', '').str.replace('Unincorporated - ', '').str.replace('Los Angeles - ', '')
df.sort_values('place_name')
df_covid = df
# now it's your turn
# create a new column representing the proportion of cases that were fatal
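# A minimal sketch of one possible answer (uses the cases_final / deaths_final columns shown above;
# rows with zero reported cases will yield inf/NaN and may need special handling):
df_covid['fatality_fraction'] = df_covid['deaths_final'] / df_covid['cases_final']
df_covid['fatality_fraction'].describe()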
# load the data
df = pd.read_csv('../../data/Top_County_Earners.csv')
df.shape
# what do you see in the raw data?
df
# check the data types: do we need to change/convert any?
df.dtypes
# why does the total earnings column name above look weird?
df.columns
# rename the total earnings column to something that won't trip you up
df = df.rename(columns={' Total Earnings':'Total Earnings'})
# convert the float columns to ints: a couple ways you could do it (either works)...
# OPTION 1: use IndexSlice from last week's lecture
slicer = pd.IndexSlice[:, 'Base Earnings':'Total Compensation']
df.loc[slicer] = df.loc[slicer].astype(int)
# OPTION 2: select columns where type is float64
float_cols = df.columns[df.dtypes=='float64']
df[float_cols] = df[float_cols].astype(int)
# move year to end and employee name to beginning
cols = [df.columns[-1]] + df.columns[1:-1].to_list() + [df.columns[0]]
df = df.reindex(columns=cols)
df
# convert from USD to 1000s of USD
df['Total Compensation 1000s'] = df['Total Compensation'] / 1000
# improve the capitalization (note, only Series can do vectorized str methods)
slicer = pd.IndexSlice[:, 'Employee Name':'Department']
df.loc[slicer] = df.loc[slicer].apply(lambda col: col.str.title(), axis='rows')
df
df_earnings = df
# now it's your turn
# convert all the earnings/compensation columns from USD to Euros, using today's exchange rate
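# One possible sketch -- the exchange rate below is a placeholder assumption, not today's actual rate,
# and the column list only covers the earnings columns named earlier; extend it as needed:
usd_to_eur = 0.92  # look up the current USD -> EUR rate before relying on this
for col in ['Base Earnings', 'Total Earnings', 'Total Compensation']:
    df_earnings[col + ' (EUR)'] = df_earnings[col] * usd_to_eur
df_earnings.head()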
# load the data
df = pd.read_csv('../../data/Listing_of_Active_Businesses.csv')
df.shape
# what do you see in the raw data?
df
# check the data types: do we need to change/convert any?
df.dtypes
# you have to make a decision: NAICS should be int, but it contains nulls
# you could drop nulls then convert to int, or just leave it as float
pd.isnull(df['NAICS']).sum()
# make sure end dates are all null, then drop that column
assert pd.isnull(df['LOCATION END DATE']).all()
df = df.drop(columns=['LOCATION END DATE'])
# make the column names lower case and without spaces or hash signs
cols = df.columns.str.lower().str.replace(' ', '_').str.strip('_#')
df.columns = cols
# make sure account numbers are unique, then set as index and sort index
assert df['location_account'].is_unique
df = df.set_index('location_account').sort_index()
df
# convert the start date from strings to datetimes
df['location_start_date'] = pd.to_datetime(df['location_start_date'])
# improve the capitalization
slicer = pd.IndexSlice[:, 'business_name':'mailing_city']
df.loc[slicer] = df.loc[slicer].apply(lambda col: col.str.title(), axis='rows')
df
# what's going on with those location coordinates?
df['location'].iloc[0]
mask = pd.notnull(df['location'])
latlng = df.loc[mask, 'location'].map(ast.literal_eval)
df.loc[mask, ['lat', 'lng']] = pd.DataFrame(latlng.to_list(),
index=latlng.index,
columns=['lat', 'lng'])
df = df.drop(columns=['location'])
df
df_business = df
# now it's your turn
# create a new column containing only the 5-digit zip
# which zip codes appear the most in the data set?
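# A possible sketch -- assumes the cleaned dataframe has a 'zip_code' column; check df_business.columns
# first, adjust the name if needed, then uncomment:
# df_business['zip5'] = df_business['zip_code'].astype(str).str.slice(0, 5)
# df_business['zip5'].value_counts().head()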
# configure seaborn's style for subsequent use
sns.set_style('whitegrid') #visual styles
sns.set_context('paper') #presets for scaling figure element sizes
# our cleaned data sets from earlier
print(df_business.shape)
print(df_covid.shape)
print(df_earnings.shape)
# quick descriptive stats for some variable
# but... looking across the whole population obscures between-group heterogeneity
df_earnings['Total Compensation 1000s'].describe()
# which departments have the most employees in the data set?
dept_counts = df_earnings['Department'].value_counts().head()
dept_counts
# recall grouping and summarizing from last week
# look at compensation distribution across the 5 largest departments
mask = df_earnings['Department'].isin(dept_counts.index)
df_earnings.loc[mask].groupby('Department')['Total Compensation 1000s'].describe().astype(int)
# visualize compensation distribution across the 5 largest departments
x = df_earnings.loc[mask, 'Total Compensation 1000s']
y = df_earnings.loc[mask, 'Department']
# fliersize changes the size of the outlier dots
# boxprops lets you set more configs with a dict, such as alpha (which means opacity)
ax = sns.boxplot(x=x, y=y, fliersize=0.3, boxprops={'alpha':0.7})
# set the x-axis limit, the figure title, and x/y axis labels
ax.set_xlim(left=0)
ax.set_title('Total compensation by department')
ax.set_xlabel('Total compensation (USD, 1000s)')
ax.set_ylabel('')
# save figure to disk at 300 dpi and with a tight bounding box
ax.get_figure().savefig('boxplot-earnings.png', dpi=300, bbox_inches='tight')
# what is this "ax" variable we created?
type(ax)
# every matplotlib axes is associated with a "figure" which is like a container
fig = ax.get_figure()
type(fig)
# manually change the plot's size/dimension by adjusting its figure's size
fig = ax.get_figure()
fig.set_size_inches(16, 4) #width, height in inches
fig
# histplot visualizes the variable's distribution as a histogram and optionally a KDE
ax = sns.histplot(df_earnings['Total Compensation 1000s'].dropna(), kde=False, bins=30)
_ = ax.set_xlim(left=0)
# typical LASD employee earns more than the typical regional planner :(
df_earnings.groupby('Department')['Total Compensation 1000s'].median().sort_values(ascending=False).head(10)
# visually compare sheriff and social services dept subsets
mask = df_earnings['Department'].isin(['Public Social Services Dept', 'Sheriff'])
ax = sns.histplot(data=df_earnings.loc[mask],
x='Total Compensation 1000s',
hue='Department',
bins=50,
kde=False)
ax.set_xlim(0, 400)
ax.set_xlabel('Total compensation (USD, 1000s)')
ax.set_title('Employee Compensation: LASD vs Social Services')
ax.get_figure().savefig('boxplot-hists.png', dpi=300, bbox_inches='tight')
# difference-in-means: compute difference, t-statistic, and p-value
group1 = df_earnings[df_earnings['Department']=='Public Social Services Dept']['Total Compensation 1000s']
group2 = df_earnings[df_earnings['Department']=='Sheriff']['Total Compensation 1000s']
t, p = stats.ttest_ind(group1, group2, equal_var=False, nan_policy='omit')
print(group1.mean() - group2.mean(), t, p)
# the big reveal... who (individually) had the highest earnings?
cols = ['Employee Name', 'Position Title', 'Department', 'Total Compensation 1000s']
df_earnings[cols].sort_values('Total Compensation 1000s', ascending=False).head(10)
# now it's your turn
# choose 3 departments and visualize their overtime earnings distributions with histograms
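# A sketch of one approach -- the department names and the 'Overtime Earnings' column name are
# assumptions; check df_earnings['Department'].unique() and df_earnings.columns, then uncomment:
# depts = ['Sheriff', 'Fire Department', 'Probation Department']
# mask = df_earnings['Department'].isin(depts)
# sns.histplot(data=df_earnings.loc[mask], x='Overtime Earnings', hue='Department', bins=50, kde=False)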
df_covid
# use seaborn to scatter-plot two variables
ax = sns.scatterplot(x=df_covid['cases_final'],
y=df_covid['deaths_final'])
ax.set_xlim(left=0)
ax.set_ylim(bottom=0)
ax.get_figure().set_size_inches(5, 5) #make it square
# show a pair plot of these SF tracts across these 4 variables
cols = ['cases_final', 'deaths_final', 'population']
ax = sns.pairplot(df_covid[cols].dropna())
# calculate correlation (and significance) between two variables
r, p = stats.pearsonr(x=df_covid['population'], y=df_covid['cases_final'])
print(round(r, 3), round(p, 3))
# a correlation matrix
correlations = df_covid[cols].corr()
correlations.round(2)
# visual correlation matrix via seaborn heatmap
# use vmin, vmax, center to set colorbar scale properly
ax = sns.heatmap(correlations, vmin=-1, vmax=1, center=0,
cmap='coolwarm', square=True, linewidths=1)
# now it's your turn
# visualize a correlation matrix of the various compensation columns in the earnings dataframe
# from the visual, pick two variables, calculate their correlation coefficient and p-value
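# A possible sketch (the column list is an assumption -- adjust it to the compensation columns present):
# comp_cols = ['Base Earnings', 'Total Earnings', 'Total Compensation']
# sns.heatmap(df_earnings[comp_cols].corr(), vmin=-1, vmax=1, center=0, cmap='coolwarm', square=True, linewidths=1)
# r, p = stats.pearsonr(df_earnings['Base Earnings'], df_earnings['Total Compensation'])
# print(round(r, 3), round(p, 3))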
# regress one variable on another: a change in x is associated with what change in y?
m, b, r, p, se = stats.linregress(x=df_covid['population'], y=df_covid['cases_final'])
print(m, b, r, p, se)
# a linear (regression) trend line + confidence interval
ax = sns.regplot(x=df_covid['population'], y=df_covid['cases_final'])
ax.get_figure().set_size_inches(5, 5)
# now it's your turn
# does logarithmic transformation improve the heteroskedasticity and linear fit?
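# A quick sketch to check: log-transform both variables (log1p avoids log(0)) and re-fit the line.
log_pop = np.log1p(df_covid['population'])
log_cases = np.log1p(df_covid['cases_final'])
ax = sns.regplot(x=log_pop, y=log_cases)
ax.get_figure().set_size_inches(5, 5)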
# extract the two-digit sector code from each NAICS classification
sectors = df_business['naics'].dropna().astype(int).astype(str).str.slice(0, 2)
sectors
# count plot: like a histogram counting observations across categorical instead of continuous data
order = sectors.value_counts().index
ax = sns.countplot(x=sectors, order=order, alpha=0.9, palette='plasma')
ax.set_xlabel('NAICS Sector')
ax.set_ylabel('Number of businesses')
ax.get_figure().savefig('countplot-naics.png', dpi=300, bbox_inches='tight')
# bar plot: estimate mean total compensation per dept + 95% confidence interval
order = df_earnings.groupby('Department')['Total Compensation 1000s'].mean().sort_values(ascending=False).index
ax = sns.barplot(x=df_earnings['Total Compensation 1000s'],
y=df_earnings['Department'],
estimator=np.mean,
ci=95,
order=order,
alpha=0.9)
ax.set_xlabel('Mean Total Compensation (USD, 1000s)')
ax.set_ylabel('')
ax.get_figure().set_size_inches(4, 12)
# now it's your turn
# use the businesses dataframe to visualize a bar plot of mean start year
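# One possible reading of the exercise (a sketch): mean start year per NAICS sector, reusing the
# `sectors` series computed above (rows with a non-null NAICS code). Uncomment to try it:
# start_year = df_business['location_start_date'].dt.year
# sns.barplot(x=sectors, y=start_year.loc[sectors.index], estimator=np.mean, ci=95)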
# extract years from each start date then count their appearances
years = df_business['location_start_date'].dropna().dt.year.value_counts().sort_index()
years
# reindex so we're not missing any years
labels = range(years.index.min(), years.index.max() + 1)
years = years.reindex(labels).fillna(0).astype(int)
years
# line plot showing counts per start year over past 40 years
ax = sns.lineplot(data=years.loc[1980:2020])
# rotate the tick labels
ax.tick_params(axis='x', labelrotation=45)
ax.set_xlim(1980, 2020)
ax.set_ylim(bottom=0)
ax.set_xlabel('Year')
ax.set_ylabel('Count')
ax.set_title('Business Location Starts by Year')
ax.get_figure().savefig('lineplot-businesses.png', dpi=300, bbox_inches='tight')
# now it's your turn
# extract month + year from the original date column
# re-create the line plot to visualize location starts by month + year
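# A sketch of one approach, counting starts per calendar month with a monthly period index:
monthly = df_business['location_start_date'].dropna().dt.to_period('M').value_counts().sort_index()
ax = monthly.loc['1980':'2020'].plot()
ax.set_xlabel('Month')
ax.set_ylabel('Count')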
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This was developed using Python 3.6.4 (Anaconda) and TensorFlow.
Step2: The MNIST data set has now been loaded and it consists of 70,000 images and associated labels.
Step3: One-Hot Encoding
Step4: We also need the classes as single numbers for various comparisons and performance measures, so we convert the One-Hot encoded vectors to a single number by taking the index of the highest element. Note that the word 'class' is a keyword used in Python so we need to use the name 'cls' instead.
Step5: We can now see the class for the first five images in the test-set. Compare these to One-Hot encoded vector above. For example, the class for the first image is 7, which corresponds to a One-Hot encoded vector where all elements are zero except for the element with index 7.
Step6: Data Dimensions
Step7: Helper-function for plotting images
Step8: Plot a few images to see if data is correct
Step9: TensorFlow Graph
Step10: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
Step11: Finally we have the placeholder variable for the true classes of each image in the placeholder variable x. These are integers and the dimensionality of this placeholder variable is set to [None], which means the placeholder variable is a one-dimensional vector of arbitrary length.
Step12: Model
Step13: Now logits is a matrix with num_images rows and num_classes columns, where the element of the ith row and jth column is an estimate of how likely the ith input image is to be of jth class.
Step14: The predicted class can be calculated from the y_pred matrix by taking the index of the largest element in each row.
Step15: Cost-function to be optimized
Step16: We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.
Step17: Optimization method
Step18: Performance measures
Step19: This calculates the classification accuracy by first type-casting the vector of booleans to floats, so that False becomes 0, and True becomes 1, and then calculating the average of these numbers.
Step20: TensorFlow Run
Step21: Initialize variables
Step22: Helper-function to perform optimization iterations
Step23: Function for performing a number of optimization iterations so as to gradually improve the weights and biases of the model. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples.
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import confusion_matrix
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets("data/MNIST/", one_hot=True)
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validatin-set:\t{}".format(len(data.validation.labels)))
data.test.labels[0:5,:]
data.test.cls = np.array([label.argmax() for label in data.test.labels])
data.test.cls[0:5]
# MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of classes, one class for each of 10 digits.
num_classes = 10
def plot_images(images, cls_true, cls_pred=None):
    assert len(images) == len(cls_true) == 9
#Create figure with 3*3 sub-plots.
fig, axes = plt.subplots(3,3)
fig.subplots_adjust(hspace = 0.3, wspace = 0.3)
for i, ax in enumerate(axes.flat):
# Plot image
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred:{1}".format(cls_true[i],cls_pred[i])
ax.set_xlabel(xlabel)
#Remove ticks from the plot
ax.set_xticks([])
ax.set_yticks([])
#Get the first images from the test-set.
images = data.test.images[0:9]
#Get the true classes for those images
cls_true = data.test.cls[0:9]
#Plot the images and labels using our helper-function above
plot_images(images = images, cls_true=cls_true)
x = tf.placeholder(tf.float32, [None, img_size_flat])
y_true = tf.placeholder(tf.float32, [None, num_classes])
y_true_cls = tf.placeholder(tf.int64, [None])
# Model variables to be optimized: weights and biases of the linear model
# (needed by the matmul below; initialized to zeros here).
weights = tf.Variable(tf.zeros([img_size_flat, num_classes]))
biases = tf.Variable(tf.zeros([num_classes]))
logits = tf.matmul(x, weights) + biases
y_pred = tf.nn.softmax(logits)
y_pred_cls = tf.argmax(y_pred, dimension = 1)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits,
                                                        labels=y_true)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.5).minimize(cost)
correct_prediction = tf.equal(y_pred_cls,y_true_cls)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
session = tf.Session()
session.run(tf.global_variables_initializer())
batch_size = 100
def optimize(num_iterations):
for i in range(num_iterations):
x_batch, y_true_batch = data.train.next_batch(batch_size)
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
session.run(optimizer, feed_dict = feed_dict_train)
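# A possible usage sketch (not part of the original notebook): run the optimizer for a number of
# iterations and report the resulting accuracy on the test set.
feed_dict_test = {x: data.test.images,
                  y_true: data.test.labels,
                  y_true_cls: data.test.cls}
optimize(num_iterations=1000)
print("Accuracy on test-set:", session.run(accuracy, feed_dict=feed_dict_test))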
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import data
Step2: Define parameters
Step3: Create TF Graph
Step4: Launch TF Graph
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist_data = input_data.read_data_sets("MNIST_data/", one_hot=True)
# Hyper parameters
training_epochs = 100
learning_rate = 0.01
batch_size = 256
print_loss_for_each_epoch = 10
test_validation_size = 512 # validation images to use during training - solely for printing purposes
# Network parameters
n_input = 784 # MNIST length of 28 by 28 image when stored as a column vector
n_hidden_layer_1 = 1024 # features in the 1st hidden layer
n_hidden_layer_2 = 1024 # features in the 2nd hidden layer
n_classes = 10 # total label classes (0-9 digits)
dropout_keep_rate = 0.75 # only 25% of the hidden outputs are passed on
# Graph input placeholders
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])
keep_prob = tf.placeholder(tf.float32)
# Define weights and biases
weights = {'hl_1': tf.Variable(tf.truncated_normal([n_input,n_hidden_layer_1])),
'hl_2': tf.Variable(tf.truncated_normal([n_hidden_layer_1,n_hidden_layer_2])),
'output': tf.Variable(tf.truncated_normal([n_hidden_layer_2,n_classes]))}
biases = {'hl_1': tf.Variable(0.01 * tf.truncated_normal([n_hidden_layer_1])),
'hl_2': tf.Variable(0.01 * tf.truncated_normal([n_hidden_layer_2])),
'output': tf.Variable(0.01 * tf.truncated_normal([n_classes]))}
def multilayer_perceptron_network(x, weights, biases):
# Hidden layer 1 with ReLu
hidden_layer_1 = tf.add(tf.matmul(x, weights['hl_1']), biases['hl_1'])
hidden_layer_1 = tf.nn.relu(hidden_layer_1)
hidden_layer_1 = tf.nn.dropout(hidden_layer_1, keep_prob=keep_prob)
# Hidden layer 2 with ReLu
hidden_layer_2 = tf.add(tf.matmul(hidden_layer_1, weights['hl_2']), biases['hl_2'])
hidden_layer_2 = tf.nn.relu(hidden_layer_2)
hidden_layer_2 = tf.nn.dropout(hidden_layer_2, keep_prob=keep_prob)
# Output layer with linear activation
output_layer = tf.add(tf.matmul(hidden_layer_2, weights['output']), biases['output'])
return output_layer
# Construct model
logits = multilayer_perceptron_network(x, weights, biases)
# Define cost and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss=cost)
# Define accuracy
correct_prediction = tf.equal(tf.arg_max(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Init variables
init = tf.global_variables_initializer()
# Run Graph
with tf.Session() as sess:
sess.run(init)
# Training cycle
for epoch in range(training_epochs):
for batch in range(mnist_data.train.num_examples//batch_size):
# Get x and y values for the given batch
batch_x, batch_y = mnist_data.train.next_batch(batch_size)
# Compute graph with respect to 'optimizer' and 'cost'
_, loss, training_accuracy = sess.run([optimizer, cost, accuracy], feed_dict={x: batch_x,
y: batch_y,
keep_prob: dropout_keep_rate})
# Compute graph with respect to validation data
validation_accuracy = sess.run(accuracy, feed_dict={x: mnist_data.validation.images[:test_validation_size],
y: mnist_data.validation.labels[:test_validation_size],
keep_prob: 1.})
# Display logs per epoch step
if epoch % print_loss_for_each_epoch == 0:
print('Epoch {:>2}, Batches {:>3}, Loss: {:>10.4f}, Train Accuracy: {:.4f}, Val Accuracy: {:.4f}'.format(
epoch + 1, # epoch starts at 0
batch + 1, # batch starts at 0
loss,
training_accuracy,
validation_accuracy))
print('Optimization Finished!')
# Testing cycle
test_accuracy = sess.run(accuracy, feed_dict={x: mnist_data.test.images,
y: mnist_data.test.labels,
keep_prob: 1.})
print('Test accuracy: {:3f}'.format(test_accuracy))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 3.6.1.2 Example 2
Step2: 3.6.2 Executing sequence alignment processes
Step3: When the process is done, the AlnConf objects will be stored in pj.used_methods, which is a dictionary using the method names as keys
Step4: if we print one of these AlnConf objects as a string, we will get complete details about the process, including programme versions and references
Step5: 3.6.3 Accessing sequence alignments
Step6: 3.6.3.2 Accessing a MultipleSeqAlignment object
Step7: 3.6.3.3 Writing sequence alignment files
Step8: The files will always be written to the current working directory (where this notebook file is), and can immediately be moved programmatically to avoid clutter
Step9: 3.6.3.4 Viewing alignments
Step10: As a result of this command, a new browser tab will open, showing the alignment.
Step11: 3.6.4 Quick reference
|
<ASSISTANT_TASK:>
Python Code:
mafft_linsi = AlnConf(pj, # The Project
method_name='mafftLinsi', # Any unique method name,
# 'mafftDefault' by default
CDSAlign=True, # Use this method to align
# protein sequences, and then
# pal2nal to align the CDSs
# This is the default setting
# and it is ignored with non-CDS
# loci.
codontable=4, # The genetic code that
# applies to this data,
# codontable=1 by default
program_name='mafft', # mafft or muscle.
# 'mafft' by default
cmd='mafft', # The command on your machine
# that invokes the program.
# 'mafft' by default
loci=['MT-CO1'], # A list of loci names to align.
# loci='all' by default, which will
# align all the loci in the project.
# If loci=='all', and CDSAlign==True
# CDS loci will be aligned as proteins
# (and then at the DNA level with pal2nal)
# but other DNA loci (e.g. rRNA) will be
# alighed directly at the DNA level.
cline_args={'localpair': True, # Program specific keywords and arguments.
'maxiterate': 1000} # cine_args=={} by default, which will
) # execute the program with default settings
muscle_defaults = AlnConf(pj,
method_name="muscleDefault",
program_name="muscle",
loci=['18s','28s'])
pj.align([mafft_linsi, muscle_defaults])
pj.used_methods
print pj.used_methods['mafftLinsi']
pj.alignments
print pj.fa('18s@muscleDefault')[:4,410:420].format('phylip-relaxed')
# record_id and source_organism are feature qualifiers in the SeqRecord object
# See section 3.4
files = pj.write_alns(id=['record_id','source_organism'],
format='fasta')
files
# make a new directory for your alignment files:
if not os.path.exists('alignment_files'):
os.mkdir('alignment_files')
# move the files there
for f in files:
os.rename(f, "./alignment_files/%s"%f)
pj.show_aln('MT-CO1@mafftLinsi',id=['source_organism'])
# source_organism is a feature qualifier in the SeqRecord object
# See section 3.4
from IPython.display import Image
Image('./images/show_aln_example.png', width=700)
pickle_pj(pj, 'outputs/my_project.pkpj')
# Make a AlnConf object
alnconf = AlnConf(pj, **kwargs)
# Execute alignment process
pj.align([alnconf])
# Show AlnConf description
print pj.used_methods['method_name']
# Fetch a MultipleSeqAlignment object
aln_obj = pj.fa('locus_name@method_name')
# Write alignment text files
pj.write_alns(id=['some_feature_qualifier'], format='fasta')
# the default feature qualifier is 'feature_id'
# 'fasta' is the default format
# View alignment in browser
pj.show_aln('locus_name@method_name',id=['some_feature_qualifier'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load LendingClub Dataset
Step2: reassign the labels to have +1 for a safe loan, and -1 for a risky (bad) loan.
Step3: use 4 categorical features
Step4: Subsample dataset to make sure classes are balanced
Step5: Transform categorical data into binary features
Step6: The feature columns now look like this
Step7: Train-Validation split
Step8: Early stopping methods for decision trees
Step9: Early stopping condition 3
Step10: Binary decision tree helper functions
Step11: Incorporating new early stopping conditions in binary decision tree implementation
Step12: Build a tree.
Step13: Making predictions
Step14: Evaluating the model
Step15: Exploring the effect of max_depth
Step16: Evaluating the models
Step17: Measuring the complexity of the tree
Step18: Exploring the effect of min_error
Step19: Exploring the effect of min_node_size
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
loans = graphlab.SFrame('lending-club-data.gl/')
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.remove_column('bad_loans')
features = ['grade', # grade of the loan
'term', # the term of the loan
'home_ownership', # home_ownership status: own, mortgage or rent
'emp_length', # number of years of employment
]
target = 'safe_loans'
loans = loans[features + [target]]
safe_loans_raw = loans[loans[target] == 1]
risky_loans_raw = loans[loans[target] == -1]
percentage = len(risky_loans_raw)/float(len(safe_loans_raw))
safe_loans = safe_loans_raw.sample(percentage, seed = 1)
risky_loans = risky_loans_raw
loans_data = risky_loans.append(safe_loans)
print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data))
print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data))
print "Total number of loans in our new dataset :", len(loans_data)
loans_data = risky_loans.append(safe_loans)
for feature in features:
loans_data_one_hot_encoded = loans_data[feature].apply(lambda x: {x: 1})
loans_data_unpacked = loans_data_one_hot_encoded.unpack(column_name_prefix=feature)
# Change None's to 0's
for column in loans_data_unpacked.column_names():
loans_data_unpacked[column] = loans_data_unpacked[column].fillna(0)
loans_data.remove_column(feature)
loans_data.add_columns(loans_data_unpacked)
features = loans_data.column_names()
features.remove('safe_loans') # Remove the response variable
features
train_data, validation_set = loans_data.random_split(.8, seed=1)
def reached_minimum_node_size(data, min_node_size):
if len(data)<=min_node_size:
return True
else:
return False
def error_reduction(error_before_split, error_after_split):
return error_before_split-error_after_split
def intermediate_node_num_mistakes(labels_in_node):
if len(labels_in_node) == 0:
return 0
no_safe_loans = (labels_in_node == 1).sum()
no_risky_loans = (labels_in_node == -1).sum()
if no_safe_loans > no_risky_loans :
return no_risky_loans
else:
return no_safe_loans
def best_splitting_feature(data, features, target):
best_feature = None
best_error = 10
num_data_points = float(len(data))
for feature in features:
left_split = data[data[feature] == 0]
right_split = data[data[feature] == 1]
left_mistakes = intermediate_node_num_mistakes(left_split[target])
right_mistakes = intermediate_node_num_mistakes(right_split[target])
error = (left_mistakes+right_mistakes)/num_data_points
if error < best_error:
best_error = error
best_feature = feature
return best_feature
def create_leaf(target_values):
leaf = {'splitting_feature' : None,
'left' : None,
'right' : None,
'is_leaf': None,
'prediction' : None}
num_ones = len(target_values[target_values == +1])
num_minus_ones = len(target_values[target_values == -1])
if num_ones > num_minus_ones:
leaf['prediction'] = 1
else:
leaf['prediction'] = -1
leaf['is_leaf'] = True
return leaf
def decision_tree_create(data, features, target, current_depth = 0,
max_depth = 10, min_node_size=1,
min_error_reduction=0.0):
remaining_features = features[:]
target_values = data[target]
print "--------------------------------------------------------------------"
print "Subtree, depth = %s (%s data points)." % (current_depth, len(target_values))
# Stopping condition 1: All nodes are of the same type.
if intermediate_node_num_mistakes(target_values) == 0:
print "Stopping condition 1 reached. All data points have the same target value."
return create_leaf(target_values)
# Stopping condition 2: No more features to split on.
if remaining_features == []:
print "Stopping condition 2 reached. No remaining features."
return create_leaf(target_values)
# Early stopping condition 1: Reached max depth limit.
if current_depth >= max_depth:
print "Early stopping condition 1 reached. Reached maximum depth."
return create_leaf(target_values)
# Early stopping condition 2: Reached the minimum node size.
if reached_minimum_node_size(data,min_node_size):
print "Early stopping condition 2 reached. Reached minimum node size."
return create_leaf(target_values)
splitting_feature = best_splitting_feature(data, features, target)
left_split = data[data[splitting_feature] == 0]
right_split = data[data[splitting_feature] == 1]
error_before_split = intermediate_node_num_mistakes(target_values) / float(len(data))
left_mistakes = intermediate_node_num_mistakes(left_split[target])
right_mistakes = intermediate_node_num_mistakes(right_split[target])
error_after_split = (left_mistakes + right_mistakes) / float(len(data))
if error_reduction(error_before_split,error_after_split)<=min_error_reduction:
print "Early stopping condition 3 reached. Minimum error reduction."
return create_leaf(target_values)
remaining_features.remove(splitting_feature)
print "Split on feature %s. (%s, %s)" % (\
splitting_feature, len(left_split), len(right_split))
left_tree = decision_tree_create(left_split, remaining_features, target,
current_depth + 1, max_depth, min_node_size, min_error_reduction)
right_tree = decision_tree_create(right_split, remaining_features, target,
current_depth + 1, max_depth, min_node_size, min_error_reduction)
return {'is_leaf' : False,
'prediction' : None,
'splitting_feature': splitting_feature,
'left' : left_tree,
'right' : right_tree}
def count_nodes(tree):
if tree['is_leaf']:
return 1
return 1 + count_nodes(tree['left']) + count_nodes(tree['right'])
my_decision_tree_new = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 100, min_error_reduction=0.0)
my_decision_tree_old = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=-1)
def classify(tree, x, annotate = False):
if tree['is_leaf']:
if annotate:
print "At leaf, predicting %s" % tree['prediction']
return tree['prediction']
else:
split_feature_value = x[tree['splitting_feature']]
if annotate:
print "Split on %s = %s" % (tree['splitting_feature'], split_feature_value)
if split_feature_value == 0:
return classify(tree['left'], x, annotate)
else:
return classify(tree['right'], x, annotate)
validation_set[0]
print 'Predicted class: %s ' % classify(my_decision_tree_new, validation_set[0])
classify(my_decision_tree_new, validation_set[0], annotate = True)
classify(my_decision_tree_old, validation_set[0], annotate = True)
def evaluate_classification_error(tree, data):
prediction = data.apply(lambda x: classify(tree, x))
mistakes = (prediction!=data['safe_loans']).sum()
return mistakes/float(len(data))
evaluate_classification_error(my_decision_tree_new, validation_set)
evaluate_classification_error(my_decision_tree_old, validation_set)
model_1 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 2,
min_node_size = 0, min_error_reduction=-1)
model_2 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=-1)
model_3 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 14,
min_node_size = 0, min_error_reduction=-1)
print "Training data, classification error (model 1):", evaluate_classification_error(model_1, train_data)
print "Training data, classification error (model 2):", evaluate_classification_error(model_2, train_data)
print "Training data, classification error (model 3):", evaluate_classification_error(model_3, train_data)
print "Training data, classification error (model 1):", evaluate_classification_error(model_1, validation_set)
print "Training data, classification error (model 2):", evaluate_classification_error(model_2, validation_set)
print "Training data, classification error (model 3):", evaluate_classification_error(model_3, validation_set)
def count_leaves(tree):
if tree['is_leaf']:
return 1
return count_leaves(tree['left']) + count_leaves(tree['right'])
print count_leaves(model_1),count_leaves(model_2),count_leaves(model_3)
model_4 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=-1)
model_5 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=0)
model_6 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=5)
print "Validation data, classification error (model 4):", evaluate_classification_error(model_4, validation_set)
print "Validation data, classification error (model 5):", evaluate_classification_error(model_5, validation_set)
print "Validation data, classification error (model 6):", evaluate_classification_error(model_6, validation_set)
print count_leaves(model_4),count_leaves(model_5),count_leaves(model_6)
model_7 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 0, min_error_reduction=-1)
model_8 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 2000, min_error_reduction=-1)
model_9 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
min_node_size = 50000, min_error_reduction=-1)
print "Validation data, classification error (model 7):", evaluate_classification_error(model_7, validation_set)
print "Validation data, classification error (model 8):", evaluate_classification_error(model_8, validation_set)
print "Validation data, classification error (model 9):", evaluate_classification_error(model_9, validation_set)
print count_leaves(model_7),count_leaves(model_8),count_leaves(model_9)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 2
Step2: Exercise 3
Step3: Exercise 4
Step4: Exercise 5
Step5: Exercise 6
Step6: Exercise 7
Step7: Exercise 8
Step8: Exercise 9
|
<ASSISTANT_TASK:>
Python Code:
print("Hello World!")
print("Hello Again")
print("I like typing this.")
print("This is fun.")
print('Yay! Printing.')
print("I'd much rather you 'not'.")
print('I "said" do not touch this.')
'''
Notes:
octothorpe, mesh, or pound  #
'''
# A comment, this is so you can read your program later.
# Anything after the # is ignored by python.
print("I could have code like this.") # and the comment after is ignored
# You can also use a comment to "disable" or comment out code:
# print("This won't run.")
print("This will run.")
# BODMAS
print("I will now count my chickens:")
print("Hens", 25 + 30 / 6)
print("Roosters", 100 - 25 * 3 % 4)
print("Now I will count the eggs:")
print(3 + 2 + 1 - 5 + 4 % 2 - 1 / 4 + 6)
print("Is it true that 3 + 2 < 5 - 7?")
print(3 + 2 < 5 - 7)
print("What is 3 + 2?", 3 + 2)
print("What is 5 - 7?", 5 - 7)
print("Oh, that's why it's False.")
print("How about some more.")
print("Is it greater?", 5 > -2)
print("Is it greater or equal?", 5 >= -2)
print("Is it less or equal?", 5 <= -2)
cars = 100
space_in_a_car = 4.0
drivers = 30
passengers = 90
cars_not_driven = cars - drivers
cars_driven = drivers
carpool_capacity = cars_driven * space_in_a_car
average_passengers_per_car = passengers / cars_driven
print("There are", cars, "cars available.")
print("There are only", drivers, "drivers available.")
print("There will be", cars_not_driven, "empty cars today.")
print("We can transport", carpool_capacity, "people today.")
print("We have", passengers, "to carpool today.")
print("We need to put about", average_passengers_per_car,
"in each car.")
# assigning variables in a single line
a = b = c = 0
# this seems easier but when using basic objects like arrays or dictionaries it gets wierder
l1 = l2 = []
l1.append(1)
print(l1, l2)
l2.append(2)
print(l1, l2)
# Here list objects l1 and l2 are names assigned to the same memory location. It works different from following
# code
l1 = []
l2 = []
l1.append(1)
print(l1, l2)
l2.append(2)
print(l1, l2)
my_name = 'Zed A. Shaw'
my_age = 35 # not a lie
my_height = 74 # inches
my_weight = 180 # lbs
my_eyes = 'Blue'
my_teeth = 'White'
my_hair = 'Brown'
print(f"Let's talk about {my_name}.")
print(f"He's {my_height} inches tall.")
print(f"He's {my_weight} pounds heavy.")
print("Actually that's not too heavy.")
print(f"He's got {my_eyes} eyes and {my_hair} hair.")
print(f"His teeth are usually {my_teeth} depending on the coffee.")
# this line is tricky, try to get it exactly right
total = my_age + my_height + my_weight
print(f"If I add {my_age}, {my_height}, and {my_weight} I get {total}.")
# f'' format string
# converting inches to pounds
def inches_to_centi_meters(inches):
centi_meters = inches * 2.54
return centi_meters
def pounds_to_kilo_grams(pounds):
kilo_grams = pounds * 0.453592
return kilo_grams
inches = 1.0
pounds = 1.0
print(inches, inches_to_centi_meters(inches))
print(pounds, pounds_to_kilo_grams(pounds))
types_of_people = 10
x = f"There are {types_of_people} types of people."
binary = "binary"
do_not = "don't"
y = f"Those who know {binary} and those who {do_not}."
print(x)
print(y)
print(f"I said: {x}")
print(f"I also said: '{y}'")
hilarious = False
joke_evaluation = "Isn't that joke so funny?! {}"
print(joke_evaluation.format(hilarious))
w = "This is the left side of..."
e = "a string with a right side."
print(w + e)
print("Mary had a little lamb.")
print("Its fleece was white as {}.".format('snow'))
print("And everywhere that Mary went.")
print("." * 10) # what'd that do?
end1 = "C"
end2 = "h"
end3 = "e"
end4 = "e"
end5 = "s"
end6 = "e"
end7 = "B"
end8 = "u"
end9 = "r"
end10 = "g"
end11 = "e"
end12 = "r"
# watch end = ' ' at the end. try removing it to see what happens
print(end1 + end2 + end3 + end4 + end5 + end6, end=' ')
print(end7 + end8 + end9 + end10 + end11 + end12)
formatter = "{} {} {} {}"
print(formatter.format(1, 2, 3, 4))
print(formatter.format("one", "two", "three", "four"))
print(formatter.format(True, False, False, True))
print(formatter.format(formatter, formatter, formatter, formatter))
print(formatter.format(
"Try your",
"Own text here",
"Maybe a poem",
"Or a song about fear"
))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: How many years have been "Batman years", with more Batman characters than Superman characters?
Step2: Plot the number of actor roles each year and the number of actress roles each year over the history of film.
Step3: Plot the number of actor roles each year and the number of actress roles each year, but this time as a kind='area' plot.
Step4: Plot the difference between the number of actor roles each year and the number of actress roles each year over the history of film.
Step5: Plot the fraction of roles that have been 'actor' roles each year in the history of film.
Step6: Plot the fraction of supporting (n=2) roles that have been 'actor' roles each year in the history of film.
Step7: Build a plot with a line for each rank n=1 through n=3, where the line shows what fraction of that rank's roles were 'actor' roles for each year in the history of film.
|
<ASSISTANT_TASK:>
Python Code:
both = cast[(cast.character=='Superman') | (cast.character == 'Batman')].groupby(['year','character']).size().unstack().fillna(0)
diff = both.Superman - both.Batman
print("Superman: " + str(len(diff[diff>0])))
both = cast[(cast.character=='Superman') | (cast.character == 'Batman')].groupby(['year','character']).size().unstack().fillna(0)
diff = both.Batman - both.Superman
print("Batman: " + str(len(diff[diff>0])))
cast.groupby(['year','type']).size().unstack().plot()
cast.groupby(['year','type']).size().unstack().plot(kind='area')
foo = cast.groupby(['year','type']).size().unstack().fillna(0)
foo['diff'] = foo['actor']-foo['actress']
foo['diff'].plot()
foo['totalRoles'] = foo['actor']+foo['actress']
foo['manFrac'] = foo['actor']/foo['totalRoles']
foo['manFrac'].plot()
support = cast[cast.n==2]
bar = support.groupby(['year','type']).size().unstack().fillna(0)
bar['totalRoles'] = bar['actor']+bar['actress']
bar['manFrac'] = bar['actor']/bar['totalRoles']
bar['manFrac'].plot()
thirdWheel = cast[cast.n==3]
baz = thirdWheel.groupby(['year','type']).size().unstack().fillna(0)
baz['totalRoles'] = baz['actor']+baz['actress']
baz['manFrac'] = baz['actor']/baz['totalRoles']
# Draw all three fractions on the same axes (note: `foo` above was computed over all ranks;
# for a strict n=1 line, build it from cast[cast.n==1] in the same way as `bar` and `baz`).
ax = foo['manFrac'].plot()
bar['manFrac'].plot(ax=ax)
baz['manFrac'].plot(ax=ax)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's use TcpServerConnection to make a HTTP web server request.
Step2: Let's use TcpServerConnection to make a HTTPS web server request.
|
<ASSISTANT_TASK:>
Python Code:
from proxy.core.connection import TcpServerConnection
from proxy.common.utils import build_http_request
from proxy.http.methods import httpMethods
from proxy.http.parser import HttpParser, httpParserTypes
request = build_http_request(
method=httpMethods.GET,
url=b'/',
headers={
b'Host': b'jaxl.com',
}
)
http_client = TcpServerConnection('jaxl.com', 80)
http_client.connect()
http_client.queue(memoryview(request))
http_client.flush()
http_response = HttpParser(httpParserTypes.RESPONSE_PARSER)
while not http_response.is_complete:
http_response.parse(http_client.recv())
http_client.close()
print(http_response.build_response())
assert http_response.is_complete
assert http_response.code == b'301'
assert http_response.reason == b'Moved Permanently'
assert http_response.has_header(b'location')
assert http_response.header(b'location') == b'https://jaxl.com/'
https_client = TcpServerConnection('jaxl.com', 443)
https_client.connect()
https_client.wrap(hostname='jaxl.com')
https_client.queue(memoryview(request))
https_client.flush()
https_response = HttpParser(httpParserTypes.RESPONSE_PARSER)
while not https_response.is_complete:
https_response.parse(https_client.recv())
https_client.close()
print(https_response.build_response())
assert https_response.is_complete
assert https_response.code == b'200'
assert https_response.reason == b'OK'
assert https_response.has_header(b'content-type')
assert https_response.header(b'content-type') == b'text/html'
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read Data
Step2: Create Linear Regression Network
Step3: SVRGModule with SVRG Optimization
Step4: Module with SGD Optimization
Step5: Training Loss over 100 Epochs Using lr_scheduler
Step6: Training Loss Comparison with SGD with fixed learning rates
|
<ASSISTANT_TASK:>
Python Code:
import os
import json
import sys
import tempfile
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import mxnet as mx
from mxnet.contrib.svrg_optimization.svrg_module import SVRGModule
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.datasets import load_svmlight_file
sys.path.insert(0, "../linear_regression")
from data_reader import get_year_prediction_data
%matplotlib inline
feature_dim, train_features, train_labels = get_year_prediction_data()
train_features = train_features[-5000:]
train_labels = train_labels[-5000:]
def create_lin_reg_network(batch_size=100):
train_iter = mx.io.NDArrayIter(train_features, train_labels, batch_size=batch_size, shuffle=True,
data_name='data', label_name='label')
data = mx.sym.Variable("data")
label = mx.sym.Variable("label")
weight = mx.sym.Variable("fc_weight", shape=(1, 90))
net = mx.sym.dot(data, weight.transpose())
bias = mx.sym.Variable("fc_bias", shape=(1,), wd_mult=0.0, lr_mult=10.0)
net = mx.sym.broadcast_plus(net, bias)
net = mx.sym.LinearRegressionOutput(data=net, label=label)
return train_iter, net
def train_svrg_lin_reg(num_epoch=100, batch_size=100, update_freq=2, output='svrg_lr.json',
optimizer_params=None):
di, net = create_lin_reg_network(batch_size=batch_size)
#Create a SVRGModule
mod = SVRGModule(symbol=net, context=mx.cpu(0), data_names=['data'], label_names=['label'], update_freq=update_freq)
mod.bind(data_shapes=di.provide_data, label_shapes=di.provide_label)
mod.init_params(initializer=mx.init.Zero(), allow_missing=False, force_init=False, allow_extra=False)
mod.init_optimizer(kvstore='local', optimizer='sgd', optimizer_params=optimizer_params)
metrics = mx.metric.create("mse")
results = {}
for e in range(num_epoch):
results[e] = {}
metrics.reset()
if e % mod.update_freq == 0:
mod.update_full_grads(di)
di.reset()
for batch in di:
mod.forward_backward(data_batch=batch)
mod.update()
mod.update_metric(metrics, batch.label)
results[e]["mse"] = metrics.get()[1]
f = open(output, 'w+')
f.write(json.dumps(results, indent=4, sort_keys=True))
f.close()
def train_sgd_lin_reg(num_epoch=100, batch_size=100, update_freq=2, output='sgd_lr.json',
optimizer_params=None):
di, net = create_lin_reg_network(batch_size=batch_size)
#Create a standard module
mod = mx.mod.Module(symbol=net, context=mx.cpu(0), data_names=['data'], label_names=['label'])
mod.bind(data_shapes=di.provide_data, label_shapes=di.provide_label)
mod.init_params(initializer=mx.init.Zero(), allow_missing=False, force_init=False, allow_extra=False)
mod.init_optimizer(kvstore='local', optimizer='sgd', optimizer_params=optimizer_params)
metrics = mx.metric.create("mse")
results = {}
for e in range(num_epoch):
results[e] = {}
metrics.reset()
di.reset()
for batch in di:
mod.forward_backward(data_batch=batch)
mod.update()
mod.update_metric(metrics, batch.label)
results[e]["mse"] = metrics.get()[1]
f = open(output, 'w+')
f.write(json.dumps(results, indent=4, sort_keys=True))
f.close()
train_svrg_lin_reg(optimizer_params={'lr_scheduler': mx.lr_scheduler.FactorScheduler(step=10, factor=0.99)})
train_sgd_lin_reg(optimizer_params={'lr_scheduler': mx.lr_scheduler.FactorScheduler(step=10, factor=0.99)})
# plot graph
#Plot training loss over Epochs:
color = sns.color_palette()
# Collect per-epoch training loss (MSE) for the SVRG and SGD runs
dataplot3 = {"svrg_mse": [], "sgd_mse": []}
with open('sgd_lr.json') as sgd_data, open('svrg_lr.json') as svrg_data:
sgd = json.load(sgd_data)
svrg = json.load(svrg_data)
for epoch in range(100):
dataplot3["svrg_mse"].append(svrg[str(epoch)]["mse"])
dataplot3["sgd_mse"].append(sgd[str(epoch)]["mse"])
x3 = list(range(100))
plt.figure(figsize=(20, 12))
plt.title("Training Loss Over Epochs")
sns.pointplot(x3, dataplot3['svrg_mse'], color=color[9])
sns.pointplot(x3, dataplot3['sgd_mse'], color=color[8])
color_patch1 = mpatches.Patch(color=color[9], label="svrg_mse")
color_patch2 = mpatches.Patch(color=color[8], label="sgd_mse")
plt.legend(handles=[color_patch1, color_patch2])
plt.ylabel('Training Loss', fontsize=12)
plt.xlabel('Epochs', fontsize=12)
train_svrg_lin_reg(output="svrg_0.025.json", optimizer_params=(('learning_rate', 0.025),))
train_sgd_lin_reg(output="sgd_0.001.json", optimizer_params=(("learning_rate", 0.001),))
train_sgd_lin_reg(output="sgd_0.0025.json", optimizer_params=(("learning_rate", 0.0025),))
train_sgd_lin_reg(output="sgd_0.005.json", optimizer_params=(("learning_rate", 0.005),))
#Plot training loss over Epochs:
color = sns.color_palette()
# Collect per-epoch training loss (MSE) for SVRG and the fixed-learning-rate SGD runs
dataplot3 = {"svrg_mse": [], "sgd_mse_lr_0.001": [], "sgd_mse_lr_0.0025": [], "sgd_mse_lr_0.005":[]}
with open('sgd_0.001.json') as sgd_data, open('svrg_0.025.json') as svrg_data, open('sgd_0.0025.json') as sgd_data_2, open('sgd_0.005.json') as sgd_data_3:
sgd = json.load(sgd_data)
svrg = json.load(svrg_data)
sgd_lr = json.load(sgd_data_2)
sgd_lr_2 = json.load(sgd_data_3)
for epoch in range(100):
dataplot3["svrg_mse"].append(svrg[str(epoch)]["mse"])
dataplot3["sgd_mse_lr_0.001"].append(sgd[str(epoch)]["mse"])
dataplot3["sgd_mse_lr_0.0025"].append(sgd_lr[str(epoch)]["mse"])
dataplot3["sgd_mse_lr_0.005"].append(sgd_lr_2[str(epoch)]["mse"])
x3 = list(range(100))
plt.figure(figsize=(20, 12))
plt.title("Training Loss Over Epochs")
sns.pointplot(x3, dataplot3['svrg_mse'], color=color[9])
sns.pointplot(x3, dataplot3['sgd_mse_lr_0.001'], color=color[8])
sns.pointplot(x3, dataplot3['sgd_mse_lr_0.0025'], color=color[3])
sns.pointplot(x3, dataplot3['sgd_mse_lr_0.005'], color=color[7])
color_patch1 = mpatches.Patch(color=color[9], label="svrg_mse_lr_0.025")
color_patch2 = mpatches.Patch(color=color[8], label="sgd_mse_lr_0.001")
color_patch3 = mpatches.Patch(color=color[3], label="sgd_mse_lr_0.0025")
color_patch4 = mpatches.Patch(color=color[7], label="sgd_mse_lr_0.005")
plt.legend(handles=[color_patch1, color_patch2, color_patch3, color_patch4])
plt.ylabel('Training Loss', fontsize=12)
plt.xlabel('Epochs', fontsize=12)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Iris Data from Seaborn
Step2: Visualisation
Step3: Key Points
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(color_codes=True)
%matplotlib inline
df = pd.read_csv('iris.data')
df.head()
pd.read_csv?
df = pd.read_csv('iris.data', header=-1)
df.head()
col_name = ['sepal length', 'sepal width', 'petal length', 'petal width', 'class']
df.columns = col_name
df.head()
iris = sns.load_dataset('iris')
iris.head()
df.describe()
iris.describe()
print(iris.info())
print(iris.groupby('species').size())
sns.pairplot(iris, hue='species', size=3, aspect=1.0)
iris.hist(edgecolor='black', linewidth=1.2, figsize=(12, 8))
plt.show()
iris.hist?
plt.figure(figsize=(12, 8))
plt.subplot(2, 2, 1)
sns.violinplot(x='species', y='sepal_length', data=iris)
plt.subplot(2, 2, 2)
sns.violinplot(x='species', y='sepal_width', data=iris)
plt.subplot(2, 2, 3)
sns.violinplot(x='species', y='petal_length', data=iris)
plt.subplot(2, 2, 4)
sns.violinplot(x='species', y='petal_width', data=iris)
iris.boxplot(by='species', figsize=(12, 8))
plt.show()
pd.scatter_matrix(iris, figsize=(12, 8))
plt.show()
iris.head()
x = 10 * np.random.rand(100)
y = 3 * x + np.random.rand(100)
plt.scatter(x, y)
from sklearn.linear_model import LinearRegression
model = LinearRegression(fit_intercept=True)
model
X = x.reshape(-1, 1)
X.shape
model.fit(X, y)
model.coef_
model.intercept_
x_fit = np.linspace(-1, 11)
X_fit = x_fit.reshape(-1, 1)
y_fit = model.predict(X_fit)
plt.scatter(x, y)
plt.plot(x_fit, y_fit)
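# A small follow-up sketch: quantify the fit with the coefficient of determination (R^2);
# it should be very close to 1 for this nearly noise-free synthetic data.
model.score(X, y)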
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step5: Tutorial
Step6: 2. Harmonic Oscillator Example
Step7: Let's see how the trajectory and the spike trains look like.
Step8: Thus, we have generated 50-dimensional spike train data, derived from 2-dimensional latent dynamics, i.e., two cycles of circular rotation.
Step9: Then we call the fit() method of the class, with the generated spike train data as input.
Step10: Then we transform the spike trains from the remaining half of the trials into tranjectories in the latent variable space, using the transform() method.
Step11: Let's see how the extracted trajectories look like.
Step12: GPFA successfuly exatracted, as the trial averaged trajectory, the two cycles of rotation in 2-dimensional latent space from the 50-dimensional spike train data.
Step13: We obtain almost the same latent dynamics, but single trial trajectories are slightly modified owing to an increased amount of the data used for fitting.
Step14: Let's plot the obtained trajectory and the spike trains.
Step15: The 3-dimensional latent trajectory exhibits the characteristic structure of the Lorenz attractor
Step16: Let's see how well the method worked in this case.
Step17: Again, the characteristic structure of the original latent dynamics was successfully extracted by GPFA.
Step18: None of the extracted dimensions corresponds solely to a single dimension of the original latent dynamics. In addition, the amplitude of Dim 3 is much smaller than that of the other two, reflecting the fact that the dimensionality of the original latent dynamics is close to 2, as is evident from the very similar time series of $x$ and $y$ in the original latent dynamics.
Step19: Let's plot the obtained log-likelihood as a function of the dimensionality.
Step20: The red cross denotes the maximum log-likelihood, which is attained at a dimensionality of 2.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy.integrate import odeint
import quantities as pq
import neo
from elephant.spike_train_generation import inhomogeneous_poisson_process
def integrated_oscillator(dt, num_steps, x0=0, y0=1, angular_frequency=2*np.pi*1e-3):
Parameters
----------
dt : float
Integration time step in ms.
num_steps : int
Number of integration steps -> max_time = dt*(num_steps-1).
x0, y0 : float
Initial values in three dimensional space.
angular_frequency : float
Angular frequency in 1/ms.
Returns
-------
t : (num_steps) np.ndarray
Array of timepoints
(2, num_steps) np.ndarray
Integrated two-dimensional trajectory (x, y) of the harmonic oscillator
assert isinstance(num_steps, int), "num_steps has to be integer"
t = dt*np.arange(num_steps)
x = x0*np.cos(angular_frequency*t) + y0*np.sin(angular_frequency*t)
y = -x0*np.sin(angular_frequency*t) + y0*np.cos(angular_frequency*t)
return t, np.array((x, y))
def integrated_lorenz(dt, num_steps, x0=0, y0=1, z0=1.05,
sigma=10, rho=28, beta=2.667, tau=1e3):
Parameters
----------
dt :
Integration time step in ms.
num_steps : int
Number of integration steps -> max_time = dt*(num_steps-1).
x0, y0, z0 : float
Initial values in three dimensional space
sigma, rho, beta : float
Parameters defining the lorenz attractor
tau : characteristic timescale in ms
Returns
-------
t : (num_steps) np.ndarray
Array of timepoints
(3, num_steps) np.ndarray
Integrated three-dimensional trajectory (x, y, z) of the Lorenz attractor
def _lorenz_ode(point_of_interest, timepoint, sigma, rho, beta, tau):
Parameters
----------
point_of_interest : tuple
Tupel containing coordinates (x,y,z) in three dimensional space.
timepoint : a point of interest in time
dt :
Integration time step in ms.
num_steps : int
Number of integration steps -> max_time = dt*(num_steps-1).
sigma, rho, beta : float
Parameters defining the lorenz attractor
tau : characteristic timescale in ms
Returns
-------
x_dot, y_dot, z_dot : float
Values of the lorenz attractor's partial derivatives
at the point x, y, z.
x, y, z = point_of_interest
x_dot = (sigma*(y - x)) / tau
y_dot = (rho*x - y - x*z) / tau
z_dot = (x*y - beta*z) / tau
return x_dot, y_dot, z_dot
assert isinstance(num_steps, int), "num_steps has to be integer"
t = dt*np.arange(num_steps)
poi = (x0, y0, z0)
return t, odeint(_lorenz_ode, poi, t, args=(sigma, rho, beta, tau)).T
def random_projection(data, embedding_dimension, loc=0, scale=None):
Parameters
----------
data : np.ndarray
Data to embed, shape=(M, N)
embedding_dimension : int
Embedding dimension, dimensionality of the space to project to.
loc : float or array_like of floats
Mean (“centre”) of the distribution.
scale : float or array_like of floats
Standard deviation (spread or “width”) of the distribution.
Returns
-------
np.ndarray
Random (normal) projection of input data, shape=(dim, N)
See Also
--------
np.random.normal()
if scale is None:
scale = 1 / np.sqrt(data.shape[0])
projection_matrix = np.random.normal(loc, scale, (embedding_dimension, data.shape[0]))
return np.dot(projection_matrix, data)
def generate_spiketrains(instantaneous_rates, num_trials, timestep):
Parameters
----------
instantaneous_rates : np.ndarray
Array containing time series.
timestep :
Sample period.
num_steps : int
Number of timesteps -> max_time = timestep*(num_steps-1).
Returns
-------
spiketrains : list of neo.SpikeTrains
List containing spiketrains of inhomogeneous Poisson
processes based on given instantaneous rates.
spiketrains = []
for _ in range(num_trials):
spiketrains_per_trial = []
for inst_rate in instantaneous_rates:
anasig_inst_rate = neo.AnalogSignal(inst_rate, sampling_rate=1/timestep, units=pq.Hz)
spiketrains_per_trial.append(inhomogeneous_poisson_process(anasig_inst_rate))
spiketrains.append(spiketrains_per_trial)
return spiketrains
# set parameters for the integration of the harmonic oscillator
timestep = 1 * pq.ms
trial_duration = 2 * pq.s
num_steps = int((trial_duration.rescale('ms')/timestep).magnitude)
# set parameters for spike train generation
max_rate = 70 * pq.Hz
np.random.seed(42) # for visualization purposes, we want to get identical spike trains at any run
# specify data size
num_trials = 20
num_spiketrains = 50
# generate a low-dimensional trajectory
times_oscillator, oscillator_trajectory_2dim = integrated_oscillator(
timestep.magnitude, num_steps=num_steps, x0=0, y0=1)
times_oscillator = (times_oscillator*timestep.units).rescale('s')
# random projection to high-dimensional space
oscillator_trajectory_Ndim = random_projection(
oscillator_trajectory_2dim, embedding_dimension=num_spiketrains)
# convert to instantaneous rate for Poisson process
normed_traj = oscillator_trajectory_Ndim / oscillator_trajectory_Ndim.max()
instantaneous_rates_oscillator = np.power(max_rate.magnitude, normed_traj)
# generate spike trains
spiketrains_oscillator = generate_spiketrains(
instantaneous_rates_oscillator, num_trials, timestep)
import matplotlib.pyplot as plt
f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(15, 10))
ax1.set_title('2-dim Harmonic Oscillator')
ax1.set_xlabel('time [s]')
for i, y in enumerate(oscillator_trajectory_2dim):
ax1.plot(times_oscillator, y, label=f'dimension {i}')
ax1.legend()
ax2.set_title('Trajectory in 2-dim space')
ax2.set_xlabel('Dim 1')
ax2.set_ylabel('Dim 2')
ax2.set_aspect(1)
ax2.plot(oscillator_trajectory_2dim[0], oscillator_trajectory_2dim[1])
ax3.set_title(f'Projection to {num_spiketrains}-dim space')
ax3.set_xlabel('time [s]')
y_offset = oscillator_trajectory_Ndim.std() * 3
for i, y in enumerate(oscillator_trajectory_Ndim):
ax3.plot(times_oscillator, y + i*y_offset)
trial_to_plot = 0
ax4.set_title(f'Raster plot of trial {trial_to_plot}')
ax4.set_xlabel('Time (s)')
ax4.set_ylabel('Spike train index')
for i, spiketrain in enumerate(spiketrains_oscillator[trial_to_plot]):
ax4.plot(spiketrain, np.ones_like(spiketrain) * i, ls='', marker='|')
plt.tight_layout()
plt.show()
from elephant.gpfa import GPFA
# specify fitting parameters
bin_size = 20 * pq.ms
latent_dimensionality = 2
gpfa_2dim = GPFA(bin_size=bin_size, x_dim=latent_dimensionality)
gpfa_2dim.fit(spiketrains_oscillator[:num_trials//2])
print(gpfa_2dim.params_estimated.keys())
trajectories = gpfa_2dim.transform(spiketrains_oscillator[num_trials//2:])
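# Quick shape check (not in the original tutorial): transform() returns one latent
# trajectory per trial, each with shape (latent_dimensionality, n_time_bins).
print(len(trajectories), trajectories[0].shape)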
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))
linewidth_single_trial = 0.5
color_single_trial = 'C0'
alpha_single_trial = 0.5
linewidth_trial_average = 2
color_trial_average = 'C1'
ax1.set_title('Original latent dynamics')
ax1.set_xlabel('Dim 1')
ax1.set_ylabel('Dim 2')
ax1.set_aspect(1)
ax1.plot(oscillator_trajectory_2dim[0], oscillator_trajectory_2dim[1])
ax2.set_title('Latent dynamics extracted by GPFA')
ax2.set_xlabel('Dim 1')
ax2.set_ylabel('Dim 2')
ax2.set_aspect(1)
# single trial trajectories
for single_trial_trajectory in trajectories:
ax2.plot(single_trial_trajectory[0], single_trial_trajectory[1], '-', lw=linewidth_single_trial, c=color_single_trial, alpha=alpha_single_trial)
# trial averaged trajectory
average_trajectory = np.mean(trajectories, axis=0)
ax2.plot(average_trajectory[0], average_trajectory[1], '-', lw=linewidth_trial_average, c=color_trial_average, label='Trial averaged trajectory')
ax2.legend()
plt.tight_layout()
plt.show()
# here we just reuse the existing instance of the GPFA() class as we use the same fitting parameters as before
trajectories_all = gpfa_2dim.fit_transform(spiketrains_oscillator)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))
ax1.set_title('Latent dynamics extracted by GPFA')
ax1.set_xlabel('Dim 1')
ax1.set_ylabel('Dim 2')
ax1.set_aspect(1)
for single_trial_trajectory in trajectories_all:
ax1.plot(single_trial_trajectory[0], single_trial_trajectory[1], '-', lw=linewidth_single_trial, c=color_single_trial, alpha=alpha_single_trial)
average_trajectory = np.mean(trajectories_all, axis=0)
ax1.plot(average_trajectory[0], average_trajectory[1], '-', lw=linewidth_trial_average, c=color_trial_average, label='Trial averaged trajectory')
ax1.legend()
trial_to_plot = 0
ax2.set_title(f'Trajectory for trial {trial_to_plot}')
ax2.set_xlabel('Time [s]')
times_trajectory = np.arange(len(trajectories_all[trial_to_plot][0])) * bin_size.rescale('s')
ax2.plot(times_trajectory, trajectories_all[0][0], c='C0', label="Dim 1, fitting with all trials")
ax2.plot(times_trajectory, trajectories[0][0], c='C0', alpha=0.2, label="Dim 1, fitting with a half of trials")
ax2.plot(times_trajectory, trajectories_all[0][1], c='C1', label="Dim 2, fitting with all trials")
ax2.plot(times_trajectory, trajectories[0][1], c='C1', alpha=0.2, label="Dim 2, fitting with a half of trials")
ax2.legend()
plt.tight_layout()
plt.show()
# set parameters for the integration of the Lorentz attractor
timestep = 1 * pq.ms
transient_duration = 10 * pq.s
trial_duration = 30 * pq.s
num_steps_transient = int((transient_duration.rescale('ms')/timestep).magnitude)
num_steps = int((trial_duration.rescale('ms')/timestep).magnitude)
# set parameters for spike train generation
max_rate = 70 * pq.Hz
np.random.seed(42) # for visualization purposes, we want to get identical spike trains at any run
# specify data
num_trials = 20
num_spiketrains = 50
# calculate the oscillator
times, lorentz_trajectory_3dim = integrated_lorenz(
timestep, num_steps=num_steps_transient+num_steps, x0=0, y0=1, z0=1.25)
times = (times - transient_duration).rescale('s').magnitude
times_trial = times[num_steps_transient:]
# random projection
lorentz_trajectory_Ndim = random_projection(
lorentz_trajectory_3dim[:, num_steps_transient:], embedding_dimension=num_spiketrains)
# calculate instantaneous rate
normed_traj = lorentz_trajectory_Ndim / lorentz_trajectory_Ndim.max()
instantaneous_rates_lorentz = np.power(max_rate.magnitude, normed_traj)
# generate spiketrains
spiketrains_lorentz = generate_spiketrains(
instantaneous_rates_lorentz, num_trials, timestep)
from mpl_toolkits.mplot3d import Axes3D
f = plt.figure(figsize=(15, 10))
ax1 = f.add_subplot(2, 2, 1)
ax2 = f.add_subplot(2, 2, 2, projection='3d')
ax3 = f.add_subplot(2, 2, 3)
ax4 = f.add_subplot(2, 2, 4)
ax1.set_title('Lorentz system')
ax1.set_xlabel('Time [s]')
labels = ['x', 'y', 'z']
for i, x in enumerate(lorentz_trajectory_3dim):
ax1.plot(times, x, label=labels[i])
ax1.axvspan(-transient_duration.rescale('s').magnitude, 0, color='gray', alpha=0.1)
ax1.text(-5, -20, 'Initial transient', ha='center')
ax1.legend()
ax2.set_title(f'Trajectory in 3-dim space')
ax2.set_xlabel('x')
ax2.set_ylabel('y')
ax2.set_zlabel('z')
ax2.plot(lorentz_trajectory_3dim[0, :num_steps_transient],
lorentz_trajectory_3dim[1, :num_steps_transient],
lorentz_trajectory_3dim[2, :num_steps_transient], c='C0', alpha=0.3)
ax2.plot(lorentz_trajectory_3dim[0, num_steps_transient:],
lorentz_trajectory_3dim[1, num_steps_transient:],
lorentz_trajectory_3dim[2, num_steps_transient:], c='C0')
ax3.set_title(f'Projection to {num_spiketrains}-dim space')
ax3.set_xlabel('Time [s]')
y_offset = lorentz_trajectory_Ndim.std() * 3
for i, y in enumerate(lorentz_trajectory_Ndim):
ax3.plot(times_trial, y + i*y_offset)
trial_to_plot = 0
ax4.set_title(f'Raster plot of trial {trial_to_plot}')
ax4.set_xlabel('Time (s)')
ax4.set_ylabel('Neuron id')
for i, spiketrain in enumerate(spiketrains_lorentz[trial_to_plot]):
ax4.plot(spiketrain, np.ones(len(spiketrain)) * i, ls='', marker='|')
plt.tight_layout()
plt.show()
# specify fitting parameters
bin_size = 20 * pq.ms
latent_dimensionality = 3
gpfa_3dim = GPFA(bin_size=bin_size, x_dim=latent_dimensionality)
trajectories = gpfa_3dim.fit_transform(spiketrains_lorentz)
f = plt.figure(figsize=(15, 5))
ax1 = f.add_subplot(1, 2, 1, projection='3d')
ax2 = f.add_subplot(1, 2, 2, projection='3d')
linewidth_single_trial = 0.5
color_single_trial = 'C0'
alpha_single_trial = 0.5
linewidth_trial_average = 2
color_trial_average = 'C1'
ax1.set_title('Original latent dynamics')
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax1.set_zlabel('z')
ax1.plot(lorentz_trajectory_3dim[0, num_steps_transient:],
lorentz_trajectory_3dim[1, num_steps_transient:],
lorentz_trajectory_3dim[2, num_steps_transient:])
ax2.set_title('Latent dynamics extracted by GPFA')
ax2.set_xlabel('Dim 1')
ax2.set_ylabel('Dim 2')
ax2.set_zlabel('Dim 3')
# single trial trajectories
for single_trial_trajectory in trajectories:
ax2.plot(single_trial_trajectory[0], single_trial_trajectory[1], single_trial_trajectory[2],
lw=linewidth_single_trial, c=color_single_trial, alpha=alpha_single_trial)
# trial averaged trajectory
average_trajectory = np.mean(trajectories, axis=0)
ax2.plot(average_trajectory[0], average_trajectory[1], average_trajectory[2], lw=linewidth_trial_average, c=color_trial_average, label='Trial averaged trajectory')
ax2.legend()
ax2.view_init(azim=-5, elev=60) # an optimal viewing angle for the trajectory extracted from our fixed spike trains
plt.tight_layout()
plt.show()
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))
ax1.set_title('Original latent dynamics')
ax1.set_xlabel('Time [s]')
labels = ['x', 'y', 'z']
for i, x in enumerate(lorentz_trajectory_3dim[:, num_steps_transient:]):
ax1.plot(times_trial, x, label=labels[i])
ax1.legend()
ax2.set_title('Latent dynamics extracted by GPFA')
ax2.set_xlabel('Time [s]')
for i, x in enumerate(average_trajectory):
ax2.plot(np.arange(len(x))*0.02, x, label=f'Dim {i+1}')
ax2.legend()
plt.tight_layout()
plt.show()
from sklearn.model_selection import cross_val_score
x_dims = [1, 2, 3, 4, 5]
log_likelihoods = []
for x_dim in x_dims:
gpfa_cv = GPFA(x_dim=x_dim)
# estimate the log-likelihood for the given dimensionality as the mean of the log-likelihoods from 3 cross-vailidation folds
cv_log_likelihoods = cross_val_score(gpfa_cv, spiketrains_lorentz, cv=3, n_jobs=3, verbose=True)
log_likelihoods.append(np.mean(cv_log_likelihoods))
f = plt.figure(figsize=(7, 5))
plt.xlabel('Dimensionality of latent variables')
plt.ylabel('Log-likelihood')
plt.plot(x_dims, log_likelihoods, '.-')
plt.plot(x_dims[np.argmax(log_likelihoods)], np.max(log_likelihoods), 'x', markersize=10, color='r')
plt.tight_layout()
plt.show()
import scipy  # 'scipy' itself was never imported above (only scipy.integrate.odeint), so import it here
print(scipy.__version__)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Establish a secure connection with HydroShare by instantiating the hydroshare class that is defined within hs_utils. In addition to connecting with HydroShare, this command also sets and prints environment variables for several parameters that will be useful for saving work back to HydroShare.
Step4: If you are curious about where the data is being downloaded, click on the Jupyter Notebook dashboard icon to return to the File System view. The homedir directory location printed above is where you can find the data and contents you will download to a HydroShare JupyterHub server. At the end of this work session, you can migrate this data to the HydroShare iRods server as a Generic Resource.
Step5: Summarize the file availability from each watershed mapping file
Step6: 3. Compare Hydrometeorology
Step7: NetCDF retrieval and clipping to a spatial extent
Step8: Convert collection of NetCDF files into a collection of ASCII files
Step9: Create a dictionary of climate variables for the long-term mean (ltm).
Step10: Compute the total monthly and yearly precipitation, as well as the mean values across time and across stations
Step11: Visualize the "average monthly total precipitations"
Step12: generate a raster
Step13: Higher resolution children of gridded cells
Step14: Visualize the "average monthly total precipitation"
|
<ASSISTANT_TASK:>
Python Code:
#!conda install -c conda-forge ogh libgdal gdal pygraphviz ncurses matplotlib=2.2.3 --yes
# silencing warning
import warnings
warnings.filterwarnings("ignore")
# data processing
import os
import pandas as pd, numpy as np, dask
# data migration library
import ogh
import ogh_xarray_landlab as oxl
from utilities import hydroshare
from ecohydrology_model_functions import run_ecohydrology_model, plot_results
from landlab import imshow_grid, CLOSED_BOUNDARY
# modeling input params
InputFile = os.path.join(os.getcwd(),'ecohyd_inputs.yaml')
# plotting and shape libraries
import matplotlib.pyplot as plt
%matplotlib inline
InputFile
# initialize ogh_meta
meta_file = dict(ogh.ogh_meta())
sorted(meta_file.keys())
notebookdir = os.getcwd()
hs=hydroshare.hydroshare()
homedir = hs.getContentPath(os.environ["HS_RES_ID"])
os.chdir(homedir)
# 1/16-degree Gridded cell centroids
# List of available data
hs.getResourceFromHydroShare('ef2d82bf960144b4bfb1bae6242bcc7f')
NAmer = hs.content['NAmer_dem_list.shp']
# Sauk
# Watershed extent
hs.getResourceFromHydroShare('c532e0578e974201a0bc40a37ef2d284')
sauk = hs.content['wbdhub12_17110006_WGS84_Basin.shp']
# reproject the shapefile into WGS84
ogh.reprojShapefile(sourcepath=sauk)
%%time
# map the mappingfiles from usecase1
mappingfile1=ogh.treatgeoself(shapefile=sauk, NAmer=NAmer, buffer_distance=0.06,
mappingfile=os.path.join(homedir,'Sauk_mappingfile.csv'))
help(ogh.getDailyMET_livneh2013)
help(oxl.get_x_dailymet_Livneh2013_raw)
maptable, nstations = ogh.mappingfileToDF(mappingfile1)
spatialbounds = {'minx':maptable.LONG_.min(), 'maxx':maptable.LONG_.max(),
'miny':maptable.LAT.min(), 'maxy':maptable.LAT.max()}
outputfiles = oxl.get_x_dailymet_Livneh2013_raw(homedir=homedir,
subdir='livneh2013/Daily_MET_1970_1970/raw_netcdf',
spatialbounds=spatialbounds,
nworkers=6,
start_date='1970-01-01', end_date='1970-12-31')
%%time
# convert the netCDF files into daily ascii time-series files for each gridded location
outfilelist = oxl.netcdf_to_ascii(homedir=homedir,
subdir='livneh2013/Daily_MET_1970_1970/raw_ascii',
source_directory=os.path.join(homedir, 'livneh2013/Daily_MET_1970_1970/raw_netcdf'),
mappingfile=mappingfile1,
temporal_resolution='D',
meta_file=meta_file,
catalog_label='sp_dailymet_livneh_1970_1970')
t1 = ogh.mappingfileSummary(listofmappingfiles = [mappingfile1],
listofwatershednames = ['Sauk-Suiattle river'],
meta_file=meta_file)
t1
# Save the metadata
ogh.saveDictOfDf(dictionaryObject=meta_file, outfilepath='test.json')
meta_file['sp_dailymet_livneh_1970_1970']['variable_list']
%%time
ltm = ogh.gridclim_dict(mappingfile=mappingfile1,
metadata=meta_file,
dataset='sp_dailymet_livneh_1970_1970',
variable_list=['Prec','Tmax','Tmin'])
sorted(ltm.keys())
# extract metadata
dr = meta_file['sp_dailymet_livneh_1970_1970']
# compute sums and mean monthly an yearly sums
ltm = ogh.aggregate_space_time_sum(df_dict=ltm,
suffix='Prec_sp_dailymet_livneh_1970_1970',
start_date=dr['start_date'],
end_date=dr['end_date'])
# print the name of the analytical dataframes and values within ltm
sorted(ltm.keys())
# initialize list of outputs
files=[]
# create the destination path for the dictionary of dataframes
ltm_sauk=os.path.join(homedir, 'ltm_1970_1970_sauk.json')
ogh.saveDictOfDf(dictionaryObject=ltm, outfilepath=ltm_sauk)
files.append(ltm_sauk)
# append the mapping file for Sauk-Suiattle gridded cell centroids
files.append(mappingfile1)
# # two lowest elevation locations
lowE_ref = ogh.findCentroidCode(mappingfile=mappingfile1, colvar='ELEV', colvalue=164)
# one highest elevation location
highE_ref = ogh.findCentroidCode(mappingfile=mappingfile1, colvar='ELEV', colvalue=2216)
# combine references together
reference_lines = highE_ref + lowE_ref
reference_lines
ogh.renderValueInBoxplot(ltm['meanbymonthsum_Prec_sp_dailymet_livneh_1970_1970'],
outfilepath='totalMonthlyRainfall.png',
plottitle='Total monthly rainfall',
time_steps='month',
wateryear=True,
reference_lines=reference_lines,
ref_legend=True,
value_name='Total daily precipitation (mm)',
cmap='seismic_r',
figsize=(6,6))
ogh.renderValuesInPoints(ltm['meanbymonthsum_Prec_sp_dailymet_livneh_1970_1970'],
vardf_dateindex=12,
shapefile=sauk,
cmap='seismic_r',
outfilepath='test.png',
plottitle='December total rainfall',
colorbar_label='Total monthly rainfall (mm)',
figsize=(1.5,1.5))
minx2, miny2, maxx2, maxy2 = oxl.calculateUTMbounds(mappingfile=mappingfile1,
mappingfile_crs={'init':'epsg:4326'},
spatial_resolution=0.06250)
print(minx2, miny2, maxx2, maxy2)
help(oxl.rasterDimensions)
# generate a raster
raster, row_list, col_list = oxl.rasterDimensions(minx=minx2, miny=miny2, maxx=maxx2, maxy=maxy2, dx=1000, dy=1000)
raster.shape
help(oxl.mappingfileToRaster)
%%time
# landlab raster node crossmap to gridded cell id
nodeXmap, raster, m = oxl.mappingfileToRaster(mappingfile=mappingfile1, spatial_resolution=0.06250,
minx=minx2, miny=miny2, maxx=maxx2, maxy=maxy2, dx=1000, dy=1000)
# print the raster dimensions
raster.shape
%%time
nodeXmap.plot(column='ELEV', figsize=(10,10), legend=True)
# generate vector array of December monthly precipitation
prec_vector = ogh.rasterVector(vardf=ltm['meanbymonthsum_Prec_sp_dailymet_livneh_1970_1970'],
vardf_dateindex=12,
crossmap=nodeXmap,
nodata=-9999)
# close-off areas without data
raster.status_at_node[prec_vector==-9999] = CLOSED_BOUNDARY
fig =plt.figure(figsize=(10,10))
imshow_grid(raster,
prec_vector,
var_name='Monthly precipitation',
var_units=meta_file['sp_dailymet_livneh_1970_1970']['variable_info']['Prec'].attrs['units'],
color_for_closed='black',
cmap='seismic_r')
tmax_vector = ogh.rasterVector(vardf=ltm['meanbymonth_Tmax_sp_dailymet_livneh_1970_1970'],
vardf_dateindex=12,
crossmap=nodeXmap,
nodata=-9999)
fig = plt.figure(figsize=(10,10))
imshow_grid(raster,
tmax_vector,
var_name='Daily maximum temperature',
var_units=meta_file['sp_dailymet_livneh_1970_1970']['variable_info']['Tmax'].attrs['units'],
color_for_closed='black', symmetric_cbar=False, cmap='magma')
tmin_vector = ogh.rasterVector(vardf=ltm['meanbymonth_Tmin_sp_dailymet_livneh_1970_1970'],
vardf_dateindex=12,
crossmap=nodeXmap,
nodata=-9999)
fig = plt.figure(figsize=(10,10))
imshow_grid(raster,
tmin_vector,
var_name='Daily minimum temperature',
var_units=meta_file['sp_dailymet_livneh_1970_1970']['variable_info']['Tmin'].attrs['units'],
color_for_closed='black', symmetric_cbar=False, cmap='magma')
# convert a raster vector back to geospatial presentation
t4, t5 = oxl.rasterVectorToWGS(prec_vector, nodeXmap=nodeXmap, UTM_transformer=m)
t4.plot(column='value', figsize=(10,10), legend=True)
# this is one decade
inputvectors = {'precip_met': np.tile(ltm['meandaily_Prec_sp_dailymet_livneh_1970_1970'], 15000),
'Tmax_met': np.tile(ltm['meandaily_Tmax_sp_dailymet_livneh_1970_1970'], 15000),
'Tmin_met': np.tile(ltm['meandaily_Tmin_sp_dailymet_livneh_1970_1970'], 15000)}
%%time
(VegType_low, yrs_low, debug_low) = run_ecohydrology_model(raster,
input_data=inputvectors,
input_file=InputFile,
synthetic_storms=False,
number_of_storms=100000,
pet_method='PriestleyTaylor')
plot_results(raster, VegType_low, yrs_low, yr_step=yrs_low-1)
plt.show()
plt.savefig('grid_low.png')
len(files)
# for each file downloaded onto the server folder, move to a new HydroShare Generic Resource
title = 'Computed spatial-temporal summaries of two gridded data product data sets for Sauk-Suiattle'
abstract = 'This resource contains the computed summaries for the Meteorology data from Livneh et al. 2013 and the WRF data from Salathe et al. 2014.'
keywords = ['Sauk-Suiattle', 'Livneh 2013', 'Salathe 2014','climate','hydromet','watershed', 'visualizations and summaries']
rtype = 'genericresource'
# create the new resource
resource_id = hs.createHydroShareResource(abstract,
title,
keywords=keywords,
resource_type=rtype,
content_files=files,
public=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. data loading
Step2: 2. data exploration
Step3: 3. data preprocessing for ML
Step4: Tests with features x and y
|
<ASSISTANT_TASK:>
Python Code:
# Serialization
import pickle
# Numbers
import numpy as np
import pandas as pd
# Plotting
import seaborn as sns
sns.set(color_codes=True)
from matplotlib import pyplot as plt
%matplotlib inline
# Machine learning
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import learning_curve
from sklearn.model_selection import TimeSeriesSplit
flocal = open('local.pkl','rb')
local_data = pickle.load(flocal)
flocal.close()
fremote = open('remote.pkl','rb')
remote_data = pickle.load(fremote)
fremote.close()
fother_local = open('other_local.pkl','rb')
other_data = pickle.load(fother_local)
fother_local.close()
df_local = pd.DataFrame(local_data['x'][:,:])
df_local['origin'] = 'USER0_LOCAL'
df_remote = pd.DataFrame(remote_data['x'][:,:])
df_remote['origin'] = 'USER0_REMOTE'
df_remote_user1 = pd.DataFrame(other_data['x'][:,:])
df_remote_user1 = df_remote_user1[df_remote_user1[2] == 'MOUSE_MOVE']
df_remote_user1['origin'] = 'USER1_LOCAL'
df = pd.concat([df_local, df_remote, df_remote_user1])
df.columns = ['dt', 'device', 'event_type', 'x', 'y', 'origin']
df['dt'] = pd.to_numeric(df['dt'])
df['x'] = pd.to_numeric(df['x'])
df['y'] = pd.to_numeric(df['y'])
df.head()
def print_stats(df, colname, origin):
col = df[colname][df.origin == origin]
print('{:3} => {:12}: mean:{:10.3f} variance:{:10.3f} std:{:10.3f} min:{:10.3f} max:{:10.3f}'.format(
colname, origin, np.mean(col), np.var(col), np.std(col), np.min(col), np.max(col)))
print_stats(df, 'dt', 'USER0_LOCAL')
print_stats(df, 'dt', 'USER1_LOCAL')
print_stats(df, 'dt', 'USER0_REMOTE')
print()
print_stats(df, 'x', 'USER0_LOCAL')
print_stats(df, 'x', 'USER1_LOCAL')
print_stats(df, 'x', 'USER0_REMOTE')
print()
print_stats(df, 'y', 'USER0_LOCAL')
print_stats(df, 'y', 'USER1_LOCAL')
print_stats(df, 'y', 'USER0_REMOTE')
sns.set_context("talk")
plt.figure(figsize=(10, 6))
sns.distplot(df.query('dt < 0.025 and origin == \'USER0_REMOTE\'')['dt'])
sns.distplot(df.query('dt < 0.025 and origin == \'USER0_LOCAL\'')['dt'])
sns.distplot(df.query('dt < 0.025 and origin == \'USER1_LOCAL\'')['dt'])
sns.set_context("paper")
sns.jointplot(x="x", y="y", data=df.query('origin == \'USER0_LOCAL\'')[0:50]);
sns.jointplot(x="x", y="y", data=df.query('origin == \'USER0_REMOTE\'')[0:50]);
sns.jointplot(x="x", y="y", data=df.query('origin == \'USER1_LOCAL\'')[0:50]);
# Drop unused columns
df_dts = df.drop(['device', 'event_type', 'x', 'y'], 1)
# Numerical encoding for labels
label_encoder = LabelEncoder()
df_dts['origin'] = label_encoder.fit_transform(df_dts['origin'])
labels = label_encoder.inverse_transform([0, 1, 2])
print('Label encoding: 0 -> {} , 1 -> {}, 2 -> {}'.format(labels[0], labels[1], labels[2]))
# Remove outliers
df_dts = df_dts[df.dt < df.dt.quantile(.95)]
# Give 0 mean and unit variance to data
standard_scaler = StandardScaler()
df_dts['dt'] = standard_scaler.fit_transform(df_dts['dt'].values.reshape(-1, 1))
# Create labeled subsequences
# (All categories must have the same number of examples)
min_len = np.min([len(df_dts[df_dts.origin == 0]), len(df_dts[df_dts.origin == 1]), len(df_dts[df_dts.origin == 2])])
seq_size = 50
num_examples = min_len // seq_size
class0 = df_dts[df_dts.origin == 0][:num_examples * seq_size]['dt']
class1 = df_dts[df_dts.origin == 1][:num_examples * seq_size]['dt']
class2 = df_dts[df_dts.origin == 2][:num_examples * seq_size]['dt']
class0_x = np.hsplit(class0, num_examples)
class1_x = np.hsplit(class1, num_examples)
class2_x = np.hsplit(class2, num_examples)
class0_y = np.full(num_examples, 0, dtype=np.float32)
class1_y = np.full(num_examples, 1, dtype=np.float32)
class2_y = np.full(num_examples, 2, dtype=np.float32)
x = np.vstack([class0_x, class1_x])
y = np.append(class0_y, class1_y)
# Random permutation of all the examples
permutation = np.random.permutation(len(y))
y = y[permutation]
x = x[permutation]
clf = LogisticRegression()
scores = cross_val_score(clf, x, y, cv=20)
np.mean(scores)
clf = DecisionTreeClassifier(max_depth=2)
scores = cross_val_score(clf, x, y, cv=20)
np.mean(scores)
clf = AdaBoostClassifier(n_estimators=300)
scores = cross_val_score(clf, x, y, cv=20)
np.mean(scores)
clf = AdaBoostClassifier(
base_estimator=DecisionTreeClassifier(max_depth=2),
n_estimators=300)
_, train_scores, test_scores = learning_curve(clf, x, y,
cv = 5,
train_sizes=np.linspace(0.1, 1.0, 10))
plt.plot(test_scores)
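# Readability tweak (not in the original notebook): also plot the fold-averaged
# score and label the axes so the learning curve is easier to read.
plt.plot(test_scores.mean(axis=1), 'k--', label='mean CV score')
plt.xlabel('training-set size step')
plt.ylabel('cross-validation accuracy')
plt.legend(loc='best')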
df_pos = df.copy()
# Numerical encoding for labels
label_encoder = LabelEncoder()
df_pos['origin'] = label_encoder.fit_transform(df_pos['origin'])
labels = label_encoder.inverse_transform([0, 1, 2])
print('Label encoding: 0 -> {} , 1 -> {}, 2 -> {}'.format(labels[0], labels[1], labels[2]))
# Remove outliers (no outliers in position data)
df_pos = df_pos[df_pos.dt < df_pos.dt.quantile(.95)]
# Give 0 mean and unit variance to data
standard_scaler = StandardScaler()
df_pos['x'] = standard_scaler.fit_transform(df_pos['x'].values.reshape(-1, 1))
df_pos['y'] = standard_scaler.fit_transform(df_pos['y'].values.reshape(-1, 1))
df_pos['dt'] = standard_scaler.fit_transform(df_pos['dt'].values.reshape(-1, 1))
# Create labeled subsequences
# (All categories must have the same number of examples)
min_len = np.min([len(df_pos[df_pos.origin == 0]), len(df_pos[df_pos.origin == 1]), len(df_pos[df_pos.origin == 2])])
seq_size = 70
num_examples = min_len // seq_size
class0 = df_pos[df_pos.origin == 0][:num_examples * seq_size][['x', 'y', 'dt']].values
class1 = df_pos[df_pos.origin == 1][:num_examples * seq_size][['x', 'y', 'dt']].values
class2 = df_pos[df_pos.origin == 2][:num_examples * seq_size][['x', 'y', 'dt']].values
class0_x = np.vsplit(class0, num_examples)
class1_x = np.vsplit(class1, num_examples)
class2_x = np.vsplit(class2, num_examples)
class0_x = [ arr.reshape(seq_size*3) for arr in class0_x ]
class1_x = [ arr.reshape(seq_size*3) for arr in class1_x ]
class2_x = [ arr.reshape(seq_size*3) for arr in class2_x ]
class0_y = np.full(num_examples, 0, dtype=np.float32)
class1_y = np.full(num_examples, 1, dtype=np.float32)
class2_y = np.full(num_examples, 2, dtype=np.float32)
x = np.concatenate([class0_x, class2_x], axis=0)
# x = np.concatenate(x, axis=2)
y = np.append(class0_y, class2_y)
# Random permutation of all the examples
permutation = np.random.permutation(len(y))
y = y[permutation]
x = x[permutation]
x.shape
clf = AdaBoostClassifier(n_estimators=300)
scores = cross_val_score(clf, x, y, cv=20)
np.mean(scores)
clf = AdaBoostClassifier(
base_estimator=DecisionTreeClassifier(max_depth=2),
n_estimators=100)
_, train_scores, test_scores = learning_curve(clf, x, y,
cv = 3,
train_sizes=np.linspace(0.1, 1.0, 10))
plt.plot(test_scores)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TensorFlow's default graph cannot be accessed directly; instead we use the get_default_graph method.
Step2: Initially the default graph is empty and contains no operations.
Step3: Create a constant input_value with the floating-point value 1.0. Giving it a name with the name option makes it easier to identify in the TensorBoard graph.
Step4: Even a single constant is treated as a node in the TensorFlow graph, so the list returned by get_operations is no longer empty.
Step5: The list returned by get_operations contains one element, and that element is an instance of the Operation class.
Step6: Inspecting the definition of the first node in ops (here, the constant node) shows that operation nodes are represented in protocol buffer format.
Step7: input_value is a kind of operation node for a constant tensor and does not itself contain a value.
Step8: After creating a TensorFlow session, running input_value returns the resulting value.
Step9: To build an operation that multiplies two numbers, create a weight variable. A constant node is an object of tensorflow.python.framework.ops.Tensor, while a variable node is an object of tensorflow.python.ops.variables.Variable.
Step10: The number of operation nodes now grows to five, because creating a variable tensor adds extra nodes to the graph for its initial value, assignment, and read.
Step11: Multiply the weight variable by the input_value constant to create the output_value tensor produced by a multiplication node.
Step12: Querying the graph's nodes again confirms that a mul node has been added.
Step13: To initialize all variables in the graph, create an init node and execute it with the run method.
Step14: The output of 1 * 0.8 returns 0.8, as expected.
Step15: Use a SummaryWriter to record the session graph built in sess to the log_simple_graph directory.
Step16: Run TensorBoard from the shell with the command tensorboard --logdir=log_simple_graph and open it in a browser at http
Step17: Define the true target value (y_) as 0 and define the squared difference from the prediction as the error function (loss function).
Step18: Set the learning rate to 0.025 and choose gradient descent optimization.
Step19: Computing the gradient of the error function by hand gives 1.6, as shown below. Since the weight is 0.8, y = 0.8 * 1.0 = 0.8, and as assumed above y_ = 0. The derivative of the error function is 2(y - y_), so the result is 2 * 0.8 = 1.6.
Step20: Applying the computed gradient to the weight with a learning rate of 0.025 gives 0.025 * 1.6 = 0.04, so w decreases by 0.04.
Step21: Write a loop to repeat this process and record the resulting y values with summary_writer; you can then watch the values change as a graph in TensorBoard.
|
<ASSISTANT_TASK:>
Python Code:
%load_ext watermark
%watermark -vm -p tensorflow,numpy,scikit-learn
import tensorflow as tf
graph = tf.get_default_graph()
graph.get_operations()
input_value = tf.constant(1.0, name='input_value')
graph.get_operations()
ops = graph.get_operations()
len(ops), ops[0].__class__
op = ops[0]
op.node_def
input_value.__class__, input_value
sess = tf.Session()
sess.run(input_value)
weight = tf.Variable(0.8, name='weight')
weight
ops = graph.get_operations()
for op in ops:
print(op.name)
output_value = weight * input_value
output_value
ops = graph.get_operations()
for op in ops:
print(op.name)
init = tf.initialize_all_variables()
sess.run(init)
sess.run(output_value)
summary_writer = tf.train.SummaryWriter('log_simple_graph', sess.graph)
x = tf.constant(1.0, name='x')
w = tf.Variable(0.8, name='w')
y = tf.mul(w, x, name='y')
init = tf.initialize_all_variables()
sess.run(init)
y_ = tf.constant(0.0)
loss = (y - y_)**2
optim = tf.train.GradientDescentOptimizer(learning_rate=0.025)
grads_and_vars = optim.compute_gradients(loss)
grads_and_vars
sess.run(grads_and_vars[1][0])
sess.run(optim.apply_gradients(grads_and_vars))
sess.run(w)
train_step = tf.train.GradientDescentOptimizer(0.025).minimize(loss)
summary_y = tf.scalar_summary('output', y)
for i in range(100):
summary_str = sess.run(summary_y)
summary_writer.add_summary(summary_str, i)
sess.run(train_step)
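# Sanity check (not part of the original notebook): after 100 gradient-descent steps
# the weight should have decayed towards 0, so y = w * x is close to 0 as well.
print(sess.run([w, y]))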
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Count all of the ROV dives whose Measurements are near MARS
Step2: Near-surface ROV location data is notoriously noisy (because of fundamental inaccuracies of USBL navigation systems). Let's remove near-surface Measurement values from our selection. Count all of the dives near MARS whose Measurements are deeper than 800 m.
Step3: Let's plot the measurement points of dives on a map of Monterey Bay to confirm that the selection is in the right spot.
Step4: (The major cluster is around the MARS site, but there are a few spurious navigation points even for the deep dive data.)
|
<ASSISTANT_TASK:>
Python Code:
db = 'stoqs_rovctd_mb'
from django.contrib.gis.geos import fromstr
from django.contrib.gis.measure import D
mars = fromstr('POINT(-122.18681000 36.71137000)')
near_mars = Measurement.objects.using(db).filter(geom__distance_lt=(mars, D(km=.1)))
mars_dives = Activity.objects.using(db).filter(instantpoint__measurement__in=near_mars
).distinct()
print(mars_dives.count())
deep_mars_dives = Activity.objects.using(db
).filter(instantpoint__measurement__in=near_mars,
instantpoint__measurement__depth__gt=800
).distinct()
print(deep_mars_dives.count())
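# Hypothetical extra summary (not in the original notebook): the depth range spanned
# by the deep Measurements near MARS, using Django's aggregation API.
from django.db.models import Min, Max
print(near_mars.filter(depth__gt=800).aggregate(Min('depth'), Max('depth')))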
%%time
%matplotlib inline
import pylab as plt
from mpl_toolkits.basemap import Basemap
m = Basemap(projection='cyl', resolution='l',
llcrnrlon=-122.7, llcrnrlat=36.5,
urcrnrlon=-121.7, urcrnrlat=37.0)
m.arcgisimage(server='http://services.arcgisonline.com/ArcGIS', service='Ocean_Basemap')
for dive in deep_mars_dives:
points = Measurement.objects.using(db).filter(instantpoint__activity=dive,
instantpoint__measurement__depth__gt=800
).values_list('geom', flat=True)
m.scatter(
[geom.x for geom in points],
[geom.y for geom in points])
%%time
# A Python dictionary comprehension for all the Parameters and axis labels we want to plot
parms = {p.name: '{} ({})'.format(p.long_name, p.units) for
p in Parameter.objects.using(db).filter(name__in=
('t', 's', 'o2', 'sigmat', 'spice', 'light'))}
plt.rcParams['figure.figsize'] = (18.0, 8.0)
fig, ax = plt.subplots(1, len(parms), sharey=True)
ax[0].invert_yaxis()
ax[0].set_ylabel('Depth (m)')
dive_names = []
for dive in deep_mars_dives.order_by('startdate'):
dive_names.append(dive.name)
# Use select_related() to improve query performance for the depth lookup
# Need to also order by time
mps = MeasuredParameter.objects.using(db
).filter(measurement__instantpoint__activity=dive
).select_related('measurement'
).order_by('measurement__instantpoint__timevalue')
depth = [mp.measurement.depth for mp in mps.filter(parameter__name='t')]
for i, (p, label) in enumerate(parms.items()):
ax[i].set_xlabel(label)
try:
ax[i].plot(mps.filter(parameter__name=p).values_list(
'datavalue', flat=True), depth)
except ValueError:
pass
from IPython.display import display, HTML
display(HTML('<p>All dives at MARS site: ' + ' '.join(dive_names) + '<p>'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: numpy.testing
Step2: engarde decorators
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import os
import numpy as np
import pandas as pd
PROJ_ROOT = os.path.abspath(os.path.join(os.pardir, os.pardir))
data = np.random.normal(0.0, 1.0, 1000000)
assert np.mean(data) == 0.0
np.testing.assert_almost_equal(np.mean(data), 0.0, decimal=2)
a = np.random.normal(0, 0.0001, 10000)
b = np.random.normal(0, 0.0001, 10000)
np.testing.assert_array_equal(a, b)
np.testing.assert_array_almost_equal(a, b, decimal=3)
import engarde.decorators as ed
test_data = pd.DataFrame({'a': np.random.normal(0, 1, 100),
'b': np.random.normal(0, 1, 100)})
@ed.none_missing()
def process(dataframe):
dataframe.loc[10, 'a'] = 1.0
return dataframe
process(test_data).head()
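# Hypothetical counter-example (not in the original notebook): the same decorator
# raises an AssertionError as soon as a function introduces a missing value.
@ed.none_missing()
def corrupt(dataframe):
    dataframe.loc[0, 'a'] = np.nan
    return dataframe
# corrupt(test_data)  # uncomment to see engarde halt the pipeline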
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can also modify our Path in the same ways as any other PHIDL object
Step2: We can also check the length of the curve with the length() method
Step3: Defining the cross-section
Step4: Option 2
Step5: Option 3
Step6: Assembling Paths quickly
Step7: Example 2
Step8: Example 3
Step9: Note you can also use the Path() constructor to immediately construct your Path
Step10: Waypoint-based smooth paths
Step11: We can use smooth() to generate a smoothed Path from these waypoints. You can specify the corner-rounding function with the corner_fun argument (typically using either euler() or arc()), and provide any additional parameters to the corner-rounding function. Note here we provide the additional argument use_eff = False (which gets passed to pp.euler so that the minimum radius of curvature is 2).
Step12: Sharp/angular paths
Step13: Example 2
Step14: As usual, these paths can be used with a CrossSection to create nice extrusions
Step16: Custom curves
Step17: You can create Paths from any array of points! If we examine our path P, we can see that all we've done is simply create a long list of points
Step18: Simplifying / reducing point usage
Step19: Let's say we need fewer points. We can increase the simplify tolerance by specifying simplify = 1e-1. This drops the number of points to ~400; the remaining points form a line that is identical to within 1e-1 distance from the original
Step20: Taken to absurdity, what happens if we set simplify = 0.3? Once again, the ~200 remaining points form a line that is within 0.3 units from the original -- but that line looks pretty bad.
Step21: Curvature calculation
Step22: Transitioning between cross-sections
Step23: Now let's create the transitional CrossSection by calling transition() with these two CrossSections as input. If we want the width to vary as a smooth sinusoid between the sections, we can set width_type to 'sine' (alternatively we could also use 'linear').
Step24: Now that we have all of our components, let's connect() everything and see what it looks like
Step25: Note that since transition() outputs a CrossSection, we can make the transition follow an arbitrary path
Step26: Variable width / offset
Step27: We can do the same thing with the offset argument
Step28: Offsetting a Path
Step29: Modifying a CrossSection
Step30: In case we want to change any of the CrossSection elements, we simply access the Python dictionary that specifies that element and modify the values
|
<ASSISTANT_TASK:>
Python Code:
from phidl import Path, CrossSection, Device
import phidl.path as pp
import numpy as np
P = Path()
P.append( pp.arc(radius = 10, angle = 90) ) # Circular arc
P.append( pp.straight(length = 10) ) # Straight section
P.append( pp.euler(radius = 3, angle = -90) ) # Euler bend (aka "racetrack" curve)
P.append( pp.straight(length = 40) )
P.append( pp.arc(radius = 8, angle = -45) )
P.append( pp.straight(length = 10) )
P.append( pp.arc(radius = 8, angle = 45) )
P.append( pp.straight(length = 10) )
from phidl import quickplot as qp
qp(P)
P.movey(10)
P.xmin = 20
qp(P)
P.length()
waveguide_device = P.extrude(1.5, layer = 3)
qp(waveguide_device)
waveguide_device = P.extrude([0.7,3.7], layer = 4)
qp(waveguide_device)
# Create a blank CrossSection
X = CrossSection()
# Add a a few "sections" to the cross-section
X.add(width = 1, offset = 0, layer = 0, ports = ('in','out'))
X.add(width = 3, offset = 2, layer = 2)
X.add(width = 3, offset = -2, layer = 2)
# Combine the Path and the CrossSection
waveguide_device = P.extrude(X)
# Quickplot the resulting Device
qp(waveguide_device)
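# Optional inspection (not from the original tutorial): the ports named on the
# CrossSection ('in'/'out') become Device ports after extrusion.
print(waveguide_device.ports)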
P = Path()
# Create the basic Path components
left_turn = pp.euler(radius = 4, angle = 90)
right_turn = pp.euler(radius = 4, angle = -90)
straight = pp.straight(length = 10)
# Assemble a complex path by making list of Paths and passing it to `append()`
P.append([
straight,
left_turn,
straight,
right_turn,
straight,
straight,
right_turn,
left_turn,
straight,
])
qp(P)
P = Path()
# Create an "S-turn" just by making a list
s_turn = [left_turn, right_turn]
P.append(s_turn)
qp(P)
P = Path()
# Create an "S-turn" using a list
s_turn = [left_turn, right_turn]
# Repeat the S-turn 3 times by nesting our S-turn list 3x times in another list
triple_s_turn = [s_turn, s_turn, s_turn]
P.append(triple_s_turn)
qp(P)
P = Path([straight, left_turn, straight, right_turn, straight])
qp(P)
points = np.array([(20,10), (40,10), (20,40), (50,40), (50,20), (70,20)])
plt.plot(points[:,0], points[:,1], '.-')
plt.axis('equal');
points = np.array([(20,10), (40,10), (20,40), (50,40), (50,20), (70,20)])
P = pp.smooth(
points = points,
radius = 2,
corner_fun = pp.euler, # Alternatively, use pp.arc
use_eff = False,
)
qp(P)
P = Path([(20,10), (30,10), (40,30), (50,30), (50,20), (70,20)])
qp(P)
P = Path()
P.append( pp.straight(length = 10, num_pts = 2) )
P.end_angle += 90 # "Turn" 90 deg (left)
P.append( pp.straight(length = 10, num_pts = 2) ) # "Walk" length of 10
P.end_angle += -135 # "Turn" -135 degrees (right)
P.append( pp.straight(length = 15, num_pts = 2) ) # "Walk" length of 10
P.end_angle = 0 # Force the direction to be 0 degrees
P.append( pp.straight(length = 10, num_pts = 2) ) # "Walk" length of 10
qp(P)
X = CrossSection()
X.add(width = 1, offset = 0, layer = 3)
X.add(width = 1.5, offset = 2.5, layer = 4)
X.add(width = 1.5, offset = -2.5, layer = 7)
wiring_device = P.extrude(X)
qp(wiring_device)
def looploop(num_pts = 1000):
Simple limacon looping curve
t = np.linspace(-np.pi,0,num_pts)
r = 20 + 25*np.sin(t)
x = r*np.cos(t)
y = r*np.sin(t)
points = np.array((x,y)).T
return points
# Create the path points
P = Path()
P.append( pp.arc(radius = 10, angle = 90) )
P.append( pp.straight())
P.append( pp.arc(radius = 5, angle = -90) )
P.append( looploop(num_pts = 1000) )
P.rotate(-45)
# Create the crosssection
X = CrossSection()
X.add(width = 0.5, offset = 2, layer = 0, ports = [None,None])
X.add(width = 0.5, offset = 4, layer = 1, ports = [None,'out2'])
X.add(width = 1.5, offset = 0, layer = 2, ports = ['in','out'])
X.add(width = 1, offset = 0, layer = 3)
D = P.extrude(X)
qp(D) # quickplot the resulting Device
import numpy as np
path_points = P.points # Curve points are stored as a numpy array in P.points
print(np.shape(path_points)) # The shape of the array is Nx2
print(len(P)) # Equivalently, use len(P) to see how many points are inside
# The remaining points form a identical line to within `1e-3` from the original
D = P.extrude(X, simplify = 1e-3)
qp(D) # quickplot the resulting Device
D = P.extrude(X, simplify = 1e-1)
qp(D) # quickplot the resulting Device
D = P.extrude(X, simplify = 0.3)
qp(D) # quickplot the resulting Device
P = Path()
P.append([
pp.straight(length = 10, num_pts = 100), # Curvature of 0
# Euler straight-to-bend transition with min. bend radius of 3 (max curvature of 1/3)
pp.euler(radius = 3, angle = 90, p = 0.5, use_eff = False),
pp.straight(length = 10, num_pts = 100), # Curvature of 0
pp.arc(radius = 10, angle = 90), # Curvature of 1/10
pp.arc(radius = 5, angle = -90), # Curvature of -1/5
pp.straight(length = 20, num_pts = 100), # Curvature of 0
])
s,K = P.curvature()
plt.plot(s,K,'.-')
plt.xlabel('Position along curve (arc length)')
plt.ylabel('Curvature');
# Create our first CrossSection
X1 = CrossSection()
X1.add(width = 1.2, offset = 0, layer = 2, name = 'wg', ports = ('in1', 'out1'))
X1.add(width = 2.2, offset = 0, layer = 3, name = 'etch')
X1.add(width = 1.1, offset = 3, layer = 1, name = 'wg2')
# Create the second CrossSection that we want to transition to
X2 = CrossSection()
X2.add(width = 1, offset = 0, layer = 2, name = 'wg', ports = ('in2', 'out2'))
X2.add(width = 3.5, offset = 0, layer = 3, name = 'etch')
X2.add(width = 3, offset = 5, layer = 1, name = 'wg2')
# To show the cross-sections, let's create two Paths and
# create Devices by extruding them
P1 = pp.straight(length = 5)
P2 = pp.straight(length = 5)
WG1 = P1.extrude(X1)
WG2 = P2.extrude(X2)
# Place both cross-section Devices and quickplot them
D = Device()
wg1 = D << WG1
wg2 = D << WG2
wg2.movex(7.5)
qp(D)
# Create the transitional CrossSection
Xtrans = pp.transition(cross_section1 = X1,
cross_section2 = X2,
width_type = 'sine')
# Create a Path for the transitional CrossSection to follow
P3 = pp.straight(length = 15)
# Use the transitional CrossSection to create a Device
WG_trans = P3.extrude(Xtrans)
qp(WG_trans)
D = Device()
wg1 = D << WG1 # First cross-section Device
wg2 = D << WG2
wgt = D << WG_trans
wgt.connect('in2', wg1.ports['out1'])
wg2.connect('in2', wgt.ports['out1'])
qp(D)
# Transition along a curving Path
P4 = pp.euler(radius = 25, angle = 45, p = 0.5, use_eff = False)
WG_trans = P4.extrude(Xtrans)
D = Device()
wg1 = D << WG1 # First cross-section Device
wg2 = D << WG2
wgt = D << WG_trans
wgt.connect('in2', wg1.ports['out1'])
wg2.connect('in2', wgt.ports['out1'])
qp(D)
def my_custom_width_fun(t):
# Note: Custom width/offset functions MUST be vectorizable--you must be able
# to call them with an array input like my_custom_width_fun([0, 0.1, 0.2, 0.3, 0.4])
num_periods = 5
w = 3 + np.cos(2*np.pi*t * num_periods)
return w
# Create the Path
P = pp.straight(length = 40)
# Create two cross-sections: one fixed width, one modulated by my_custom_offset_fun
X = CrossSection()
X.add(width = 3, offset = -6, layer = 0)
X.add(width = my_custom_width_fun, offset = 0, layer = 0)
# Extrude the Path to create the Device
D = P.extrude(X)
qp(D)
def my_custom_offset_fun(t):
# Note: Custom width/offset functions MUST be vectorizable--you must be able
# to call them with an array input like my_custom_offset_fun([0, 0.1, 0.2, 0.3, 0.4])
num_periods = 3
w = 3 + np.cos(2*np.pi*t * num_periods)
return w
# Create the Path
P = pp.straight(length = 40)
# Create two cross-sections: one fixed offset, one modulated by my_custom_offset_fun
X = CrossSection()
X.add(width = 1, offset = my_custom_offset_fun, layer = 0)
X.add(width = 1, offset = 0, layer = 0)
# Extrude the Path to create the Device
D = P.extrude(X)
qp(D)
def my_custom_offset_fun(t):
# Note: Custom width/offset functions MUST be vectorizable--you must be able
# to call them with an array input like my_custom_offset_fun([0, 0.1, 0.2, 0.3, 0.4])
num_periods = 1
w = 2 + np.cos(2*np.pi*t * num_periods)
return w
P1 = pp.straight(length = 40)
P2 = P1.copy() # Make a copy of the Path
P1.offset(offset = my_custom_offset_fun)
P2.offset(offset = my_custom_offset_fun)
P2.mirror((1,0)) # reflect across X-axis
qp([P1, P2])
# Create the Path
P = pp.arc(radius = 10, angle = 45)
# Create two cross-sections: one fixed width, one modulated by my_custom_offset_fun
X = CrossSection()
X.add(width = 1, offset = 0, layer = 0, ports = (1,2), name = 'myelement1')
X.add(width = 1, offset = 3, layer = 0, ports = (3,4), name = 'myelement2')
# Extrude the Path to create the Device
D = P.extrude(X)
qp(D)
# Copy our original CrossSection
Xcopy = X.copy()
# Modify
Xcopy['myelement2']['width'] = 2 # X['myelement2'] is a dictionary
Xcopy['myelement2']['layer'] = 1 # X['myelement2'] is a dictionary
# Extrude the Path to create the Device
D = P.extrude(Xcopy)
qp(D)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Number of bridges with respect to interventions identified by NDOT flow chart
Step2: Number of bridges with respect to 'Yes' or 'No' intervention identified by random forest
|
<ASSISTANT_TASK:>
Python Code:
category = Counter(df['category']).keys()
values = Counter(df['category']).values()
plt.bar(category, values)
plt.xticks(rotation='vertical')
plt.show()
category = Counter(df['intervention']).keys()
values = Counter(df['intervention']).values()
plt.bar(category, values)
plt.xticks(rotation='vertical')
plt.show()
category = Counter(df['rfIntervention']).keys()
values = Counter(df['rfIntervention']).values()
plt.bar(category, values)
plt.xticks(rotation='vertical')
plt.show()
Counter(df[df['rfIntervention'] == 'yes']['category'])
df[df['rfIntervention'] == 'yes']
#
Counter(df[df['rfIntervention'] == 'no']['category'])
df[df['rfIntervention'] == 'no'].head(10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Even though we can place the Axes wherever we want inside the figure,
Step2: Another useful way to create grids of plots is to create a figure and then add subplots to it with the $add_subplot$ function. With add_subplot, you specify the grid structure you have in mind, and it will return an Axes with those dimensions. This function takes 3 parameters
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
X = [0,1,2,3,4]
Fx = [x**2 for x in X]
fig = plt.figure()
ax = fig.add_axes([0., 0., 1., 1., ]) # define a rectangle
ax.plot(X,Fx) # plots happen inside Axes objects
plt.show(fig)
fig,axes = plt.subplots(2,2)
F0 = [x**0 for x in X]
F1 = [x**1 for x in X]
F2 = [x**2 for x in X]
F3 = [x**3 for x in X]
axes[0,0].plot(X,F0)
axes[0,1].plot(X,F1)
axes[1,0].plot(X,F2)
axes[1,1].plot(X,F3)
plt.show(fig)
fig = plt.figure()
ax12 = fig.add_subplot(2,1,1) # This one fills
# the top half of the picture
ax3 = fig.add_subplot(2,2,3) # These ones fill half of the
ax4 = fig.add_subplot(2,2,4) # space left each.
ax12.plot(X,Fx)
ax3.plot(X,F0)
ax4.plot(X,F1)
plt.show(fig)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, initialize the values needed in this program.
Step2: Calculate the Thevenin voltage and impedance from Equations 7-41a
Step3: Now calculate the torque-speed characteristic for many slips between -1 and 1.
Step4: Calculate torque for original rotor resistance using
Step5: Calculate torque for triple rotor resistance
Step6: Plot the torque-speed curve
|
<ASSISTANT_TASK:>
Python Code:
%pylab notebook
r1 = 0.641 # Stator resistance
x1 = 1.106 # Stator reactance
r2 = 0.332 # Rotor resistance
x2 = 0.464 # Rotor reactance
xm = 26.3 # Magnetization branch reactance
v_phase = 460 / sqrt(3) # Phase voltage
n_sync = 1800 # Synchronous speed (r/min)
w_sync = n_sync * 2*pi/60 # Synchronous speed (rad/s)
v_th = v_phase * ( xm / sqrt(r1**2 + (x1 + xm)**2) )
z_th = ((1j*xm) * (r1 + 1j*x1)) / (r1 + 1j*(x1 + xm))
r_th = real(z_th)
x_th = imag(z_th)
s = linspace(-1, 1, 100) # slip
nm = (1 - s) * n_sync # mechanical speed
t_ind1 = ((3 * v_th**2 * r2/s) /
(w_sync * ((r_th + r2/s)**2 + (x_th + x2)**2)))
t_ind2 = ((3 * v_th**2 * 3*r2/s) /
(w_sync * ((r_th + 3*r2/s)**2 + (x_th + x2)**2)))
rc('text', usetex=True) # enable LaTeX commands for plot
plot(nm, t_ind1,'b',
nm, t_ind2,'k--',
lw=2)
xlabel(r'$\mathbf{n_{m}}\ [rpm]$')
ylabel(r'$\mathbf{\tau_{ind}}\ [Nm]$')
title ('Induction machine torque-speed characteristic')
legend (('Original $R_{2}$', '$3 \cdot R_{2}$',), loc = 1);
grid()
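# Optional extension (not part of the original example): the pullout (maximum)
# induced torque follows directly from the Thevenin quantities computed above.
t_max = (3 * v_th**2) / (2 * w_sync * (r_th + sqrt(r_th**2 + (x_th + x2)**2)))
print('Pullout torque with the original rotor resistance: %.1f N-m' % t_max)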
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To check correctness we are going to solve a simple differential equation
Step2: The next methods we are going to use come from the Runge-Kutta family. In fact, the Euler method is a special case of the Runge-Kutta methods.
Step3: Let's solve a slightly different equation
Step4: Now let's move on to systems of differential equations.
Step5: E.g., we have the system of differential equations
Step6: Predator-prey equation
Step7: Equilibrium
|
<ASSISTANT_TASK:>
Python Code:
def euler(f, x, y0):
h = x[1] - x[0]
y = np.empty_like(x)
y[0] = y0
for i in range(1, len(x)):
y[i] = y[i - 1] + h * f(x[i - 1], y[i - 1])
return y
dy = lambda x, y: x*x + y*y
x = np.linspace(0, 0.5, 100)
y0 = 0
y = euler(dy, x, y0)
y_ans = np.tan(x) - x
plt.figure(figsize=(15, 10))
plt.plot(x, y, x, y_ans)
plt.legend(['euler', 'answer'], loc='best')
plt.xlabel('x')
plt.title('Euler method (Runge-Kutta 1-st order method)')
plt.show()
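# Optional error check (not in the original notebook): compare the Euler solution
# against the analytic answer to see the size of the global error.
print('max abs error of Euler method:', np.max(np.abs(y - y_ans)))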
def runge_kutta3(f, x, y0):
h = x[1] - x[0]
y = np.empty_like(x)
y[0] = y0
for i in range(1, len(x)):
k1 = h * f(x[i - 1], y[i - 1])
k2 = h * f(x[i - 1] + h/3, y[i - 1] + k1/3)
k3 = h * f(x[i - 1] + 2*h/3, y[i - 1] + 2*k2/3)
y[i] = y[i - 1] + (k1 + 3*k3) / 4
return y
def runge_kutta4(f, x, y0):
h = x[1] - x[0]
y = np.empty_like(x)
y[0] = y0
for i in range(1, len(x)):
k1 = h * f(x[i - 1], y[i - 1])
k2 = h * f(x[i - 1] + h/2, y[i - 1] + k1/2)
k3 = h * f(x[i - 1] + h/2, y[i - 1] + k2/2)
k4 = h * f(x[i - 1] + h, y[i - 1] + k3)
y[i] = y[i - 1] + (k1 + 2*k2 + 2*k3 + k4) / 6
return y
dy = lambda x, y: np.sin(x) / y
x = np.linspace(0, 5, 4)
y0 = 1
y3 = runge_kutta3(dy, x, y0)
y4 = runge_kutta4(dy, x, y0)
y_ans = np.sqrt(3 - 2*np.cos(x))
plt.figure(figsize=(15, 10))
plt.plot(x, y3, x, y4, x, y_ans)
plt.legend(['rk3', 'rk4', 'ans'], loc='best')
plt.xlabel('x')
plt.title('Runge-Kutta 3-rd and 4-th order methods')
plt.show()
def fmap(fs, x):
return np.array([f(*x) for f in fs])
def runge_kutta4_system(fs, x, y0):
h = x[1] - x[0]
y = np.empty((len(x), len(y0)))
y[0] = y0
for i in range(1, len(x)):
k1 = h * fmap(fs, [x[i - 1], *y[i - 1]])
k2 = h * fmap(fs, [x[i - 1] + h/2, *(y[i - 1] + k1/2)])
k3 = h * fmap(fs, [x[i - 1] + h/2, *(y[i - 1] + k2/2)])
k4 = h * fmap(fs, [x[i - 1] + h, *(y[i - 1] + k3)])
y[i] = y[i - 1] + (k1 + 2*k2 + 2*k3 + k4) / 6
return y
dy = lambda x, y, z: z
dz = lambda x, y, z: 2*x*z / (x*x + 1)
fs = [dy, dz]
x = np.linspace(0, 1, 10)
y0 = np.array([1, 3])
y = runge_kutta4_system(fs, x, y0)
plt.figure(figsize=(15, 10))
plt.plot(x, y[:, 0], x, y[:, 1])
plt.legend(['y(x)', 'z(x)'], loc='best')
plt.xlabel('x')
plt.title('Runge-Kutta 4-th order method for system of differential equations')
plt.show()
dx = lambda t, x, y: 2/3*x - 4/3*x*y
dy = lambda t, x, y: x*y - y
fs = [dx, dy]
t = np.linspace(0, 20, 500)
y0 = np.array([1, 2])
z = runge_kutta4_system(fs, t, y0)
plt.figure(figsize=(15, 10))
plt.plot(t, z[:, 0], t, z[:, 1])
plt.legend(['prey', 'predator'], loc='best')
plt.xlabel('time (sec)')
plt.ylabel('population')
plt.title('Lotka-Volterra equation')
plt.show()
plt.figure(figsize=(15, 10))
plt.plot(z[:, 0], z[:, 1])
plt.xlabel('prey')
plt.ylabel('predator')
plt.title('Parametric graph')
plt.show()
dx = lambda t, x, y: 2/3*x - 4/3*x*y
dy = lambda t, x, y: x*y - y
fs = [dx, dy]
t = np.linspace(0, 20, 500)
y0 = np.array([1, 101/200])
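# For dx/dt = (2/3)x - (4/3)x*y and dy/dt = x*y - y the nontrivial equilibrium is (x*, y*) = (1, 1/2),
# so this initial condition (1, 101/200) starts just above it and the populations stay nearly constant.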
z = runge_kutta4_system(fs, t, y0)
plt.figure(figsize=(15, 10))
plt.plot(t, z[:, 0], t, z[:, 1])
plt.legend(['prey', 'predator'], loc='best')
plt.xlabel('time (sec)')
plt.ylabel('population')
plt.title('Lotka-Volterra equilibrium')
plt.show()
plt.figure(figsize=(15, 10))
plt.plot(z[:, 0], z[:, 1])
plt.xlabel('prey')
plt.ylabel('predator')
plt.title('Parametric graph of equilibrium')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We'll start with just looking at analysis in Euclidean space, then think about weighting by synaptic density later. Since we hypothesize that our data will show that tissue varies as we move down the y-axis (z-axis in brain) through cortical layers, an interesting thing to do would be to compare properties of the graphs on each layer (i.e., how graph connectivity varies as we move through layers).
Step2: Now that our data is in the right format, we'll create 52 delaunay graphs. Then we'll perform analyses on these graphs. A simple but useful metric would be to analyze edge length distributions in each layer.
Step3: We're going to need a method to get edge lengths from 2D centroid pairs
Step4: Realizing after all this that location alone is useless. We know the voxels are evenly spaced, which means our edge length data will be all the same. See that the "centroids" are no different
Step5: There is no distance between the two. Therefore it is perhaps more useful to consider a graph that considers node weights. Voronoi is dual to Delaunay, so that's not much of an option. We want something that considers both spatial location and density similarity.
Step6: This is using the spring layout, so we're losing positional information. We can improve the plot by adding position information.
Step7: Interesting. There are no self loops. Why would this be? Let's come back to this. In the meantime, I want to give some thought to what it means to have a self loop, whether it should be theoretically possible given our data, and whether our graphs are formed properly.
Step8: Compare that to the test data
Step9: X-Layers
Step10: We can see here the number of edges is low in that area that does not have many synapses. It, as expected, mirrors the distribution of synapses. It appears to be approximately uniform at the top, with buffers of very few synapses on the sides. Remember from here
|
<ASSISTANT_TASK:>
Python Code:
import csv
from scipy.stats import kurtosis
from scipy.stats import skew
from scipy.spatial import Delaunay
import numpy as np
import math
import skimage
import matplotlib.pyplot as plt
import seaborn as sns
from skimage import future
import networkx as nx
from ragGen import *
%matplotlib inline
sns.set_color_codes("pastel")
from scipy.signal import argrelextrema
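from sympy import Triangle  # assumed import: `Triangle` is used further below but was not imported in this snippet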
# Read in the data
data = open('../../data/data.csv', 'r').readlines()
fieldnames = ['x', 'y', 'z', 'unmasked', 'synapses']
reader = csv.reader(data)
reader.next()
rows = [[int(col) for col in row] for row in reader]
# These will come in handy later
sorted_x = sorted(list(set([r[0] for r in rows])))
sorted_y = sorted(list(set([r[1] for r in rows])))
sorted_z = sorted(list(set([r[2] for r in rows])))
a = np.array(rows)
b = np.delete(a, np.s_[3::],1)
# Separate layers - have to do some wonky stuff to get this to work
b = sorted(b, key=lambda e: e[1])
b = np.array([v.tolist() for v in b])
b = np.split(b, np.where(np.diff(b[:,1]))[0]+1)
graphs = []
centroid_list = []
for layer in b:
centroids = np.array(layer)
# get rid of the y value - not relevant anymore
centroids = np.delete(centroids, 1, 1)
centroid_list.append(centroids)
graph = Delaunay(centroids)
graphs.append(graph)
def get_d_edge_length(edge):
(x1, y1), (x2, y2) = edge
return math.sqrt((x2-x1)**2 + (y2-y1)**2)
edge_length_list = [[]]
tri_area_list = [[]]
for del_graph in graphs:
tri_areas = []
edge_lengths = []
triangles = []
for t in centroids[del_graph.simplices]:
triangles.append(t)
a, b, c = [tuple(map(int,list(v))) for v in t]
edge_lengths.append(get_d_edge_length((a,b)))
edge_lengths.append(get_d_edge_length((a,c)))
edge_lengths.append(get_d_edge_length((b,c)))
try:
tri_areas.append(float(Triangle(a,b,c).area))
except:
continue
edge_length_list.append(edge_lengths)
tri_area_list.append(tri_areas)
np.subtract(centroid_list[0], centroid_list[1])
real_volume = np.zeros((len(sorted_x), len(sorted_y), len(sorted_z)))
for r in rows:
real_volume[sorted_x.index(r[0]), sorted_y.index(r[1]), sorted_z.index(r[2])] = r[-1]
nx_graphs = []
for layer in b:
G = nx.Graph(graph)
nx_graphs.append(G)
for graph in graphs:
plt.figure()
nx.draw(graph, node_size=100)
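# `y_rags` was never defined in this snippet; a minimal reconstruction (an assumption) is one
# region adjacency graph (RAG) per slice along the y-axis of the synapse volume:
y_rags = [skimage.future.graph.RAG(layer) for layer in np.swapaxes(real_volume, 0, 1)]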
num_self_loops = []
for rag in y_rags:
num_self_loops.append(rag.number_of_selfloops())
num_self_loops
# y_rags[0].adjacency_list()
# Test Data
test = np.array([[1,2],[3,4]])
test_rag = skimage.future.graph.RAG(test)
test_rag.adjacency_list()
real_volume_x = np.zeros((len(sorted_x), len(sorted_y), len(sorted_z)))
for r in rows:
real_volume_x[ sorted_x.index(r[0]), sorted_y.index(r[1]), sorted_z.index(r[2])] = r[-1]
x_rags = []
count = 0;
for layer in real_volume_x:
count = count + 1
x_rags.append(skimage.future.graph.RAG(layer))
num_edges_x = []
for rag in x_rags:
num_edges_x.append(rag.number_of_edges())
sns.barplot(x=range(len(num_edges_x)), y=num_edges_x)
sns.plt.show()
plt.imshow(np.amax(real_volume, axis=2), interpolation='nearest')
plt.show()
# edge_length_list[3]
# tri_area_list[3]
# triangles
# Note for future
# del_features['d_edge_length_mean'] = np.mean(edge_lengths)
# del_features['d_edge_length_std'] = np.std(edge_lengths)
# del_features['d_edge_length_skew'] = scipy.stats.skew(edge_lengths)
# del_features['d_edge_length_kurtosis'] = scipy.stats.kurtosis(edge_lengths)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We then instantiate a dPCA model where the two parameter axes are labeled by 's' (stimulus) and 't' (time) respectively. We set regularizer to 'auto' to optimize the regularization parameter when we fit the data.
Step2: Now fit the data (R) using the model we just instantiated. Note that we only need trial-to-trial data when we want to optimize over the regularization parameter.
Step3: The 1st mixing component looks merely like noise. But to be sure, we can run a significance analysis
Step4: We can highlight the significant parts of the demixed components with a black bar underneath. Note that there is no significance analysis for the time component, since there are no classes to compute the significance over.
|
<ASSISTANT_TASK:>
Python Code:
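# Assumed imports for this snippet: the original notebook used a pylab-style namespace and the dPCA package.
from numpy import *
from numpy.random import rand, randn
from matplotlib.pyplot import *
from dPCA import dPCA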
# number of neurons, time-points and stimuli
N,T,S = 100,250,6
# noise-level and number of trials in each condition
noise, n_samples = 0.2, 10
# build two latent factors
zt = (arange(T)/float(T))
zs = (arange(S)/float(S))
# build trial-by trial data
trialR = noise*randn(n_samples,N,S,T)
trialR += randn(N)[None,:,None,None]*zt[None,None,None,:]
trialR += randn(N)[None,:,None,None]*zs[None,None,:,None]
# trial-average data
R = mean(trialR,0)
# center data
R -= mean(R.reshape((N,-1)),1)[:,None,None]
dpca = dPCA.dPCA(labels='st',regularizer='auto')
dpca.protect = ['t']
Z = dpca.fit_transform(R,trialR)
time = arange(T)
figure(figsize=(16,7))
subplot(131)
for s in range(S):
plot(time,Z['t'][0,s])
title('1st time component')
subplot(132)
for s in range(S):
plot(time,Z['s'][0,s])
title('1st stimulus component')
subplot(133)
for s in range(S):
plot(time,Z['st'][0,s])
title('1st mixing component')
show()
significance_masks = dpca.significance_analysis(R, trialR, n_shuffles=10, n_splits=10, n_consecutive=10)
time = arange(T)
figure(figsize=(16,7))
subplot(131)
for s in range(S):
plot(time,Z['t'][0,s])
title('1st time component')
subplot(132)
for s in range(S):
plot(time,Z['s'][0,s])
imshow(significance_masks['s'][0][None,:],extent=[0,250,amin(Z['s'])-1,amin(Z['s'])-0.5],aspect='auto',cmap='gray_r',vmin=0,vmax=1)
ylim([amin(Z['s'])-1,amax(Z['s'])+1])
title('1st stimulus component')
subplot(133)
for s in range(S):
plot(time,Z['st'][0,s])
dZ = amax(Z['st'])-amin(Z['st'])
imshow(significance_masks['st'][0][None,:],extent=[0,250,amin(Z['st'])-dZ/10.,amin(Z['st'])-dZ/5.],aspect='auto',cmap='gray_r',vmin=0,vmax=1)
ylim([amin(Z['st'])-dZ/10.,amax(Z['st'])+dZ/10.])
title('1st mixing component')
show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: PhantomJS
Step2: Visiting a page
Step3: Finding a single element
Step4: Here we locate the same element in three different ways: the first uses the element id, the second a CSS selector, and the third an XPath selector; all three give the same result.
Step5: Finding multiple elements
Step6: Of course, the lookups above can also be done by importing the By module with from selenium.webdriver.common.by import By
Step7: When this runs you can see the program automatically open Chrome, go to Taobao, type "ipad", clear it, type "MacBook pro" instead, and click search
Step8: Getting element attributes
Step9: Getting the text value
Step10: Getting the ID, location, and tag name
Step11: An example
|
<ASSISTANT_TASK:>
Python Code:
from selenium import webdriver
help(webdriver)
#browser = webdriver.Firefox() # open the Firefox browser
browser = webdriver.Chrome() # open the Chrome browser
from selenium import webdriver
browser = webdriver.Chrome()
browser.get("http://www.baidu.com")
print(browser.page_source)
#browser.close()
from selenium import webdriver
browser = webdriver.Chrome()
browser.get("http://www.taobao.com")
input_first = browser.find_element_by_id("q")
input_second = browser.find_element_by_css_selector("#q")
input_third = browser.find_element_by_xpath('//*[@id="q"]')
print(input_first)
print(input_second)
print(input_third)
browser.close()
# The approach below is the more general one: remember to import the By module for it
from selenium.webdriver.common.by import By
browser = webdriver.Chrome()
browser.get("http://www.taobao.com")
input_first = browser.find_element(By.ID,"q")
print(input_first)
browser.close()
browser = webdriver.Chrome()
browser.get("http://www.taobao.com")
lis = browser.find_elements_by_css_selector('.service-bd li')
print(lis)
browser.close()
from selenium import webdriver
import time
browser = webdriver.Chrome()
browser.get("http://www.taobao.com")
#browser.get("http://www.taobao.com")
input_str = browser.find_element_by_id('q')
input_str.send_keys("ipad")
time.sleep(3)
input_str.clear()
input_str.send_keys("MacBook pro")
button = browser.find_element_by_class_name('btn-search')
button.click()
from selenium import webdriver
browser = webdriver.Chrome()
browser.get("https://www.zhihu.com/explore")
browser.execute_script('window.scrollTo(0, document.body.scrollHeight)')
browser.execute_script('alert("To Bottom")')
browser.get("http://www.zhihu.com/explore")
logo = browser.find_element_by_id('zh-top-link-logo')
print(logo)
print(logo.get_attribute('class'))
browser.get("http://www.zhihu.com/explore")
input = browser.find_element_by_class_name('zu-top-add-question')
print(input.text)
browser.get("http://www.zhihu.com/explore")
input = browser.find_element_by_class_name('zu-top-add-question')
print(input.id)
print(input.location)
print(input.tag_name)
print(input.size)
from selenium import webdriver
# import selenium.webdriver.support.ui as ui
browser = webdriver.Chrome()
browser.get("https://www.privco.com/home/login") #需要翻墙打开网址
username = 'wangchj04@126.com'
password = 'CityUniversityHK'
browser.find_element_by_id("username").clear()
browser.find_element_by_id("username").send_keys(username)
browser.find_element_by_id("password").clear()
browser.find_element_by_id("password").send_keys(password)
browser.find_element_by_css_selector("#login-form > div:nth-child(5) > div > button").click()
# url = "https://www.privco.com/private-company/329463"
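from bs4 import BeautifulSoup  # assumed import: BeautifulSoup is used inside download_excel but was not imported in this snippet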
def download_excel(url):
browser.get(url)
name = url.split('/')[-1]
title = browser.title
source = browser.page_source
with open(name+'.html', 'w') as f:
f.write(source)
try:
soup = BeautifulSoup(source, 'html.parser')
url_new = soup.find('span', {'class', 'profile-name'}).a['href']
url_excel = url_new + '/export'
browser.get(url_excel)
except Exception as e:
print(url, 'no excel')
pass
urls = [ 'https://www.privco.com/private-company/1135789',
'https://www.privco.com/private-company/542756',
'https://www.privco.com/private-company/137908',
'https://www.privco.com/private-company/137138']
for k, url in enumerate(urls):
print(k)
try:
download_excel(url)
except Exception as e:
print(url, e)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note the semicolon
Step2: Booleans create checkbox
Step3: Using decorators
Step4: From Portilla's notes
Step5: Function Annotations
Step6: multiple instances remain in sync!
Step7: There are client-server nuances!
Step8: Source
Step9: This is not working!
|
<ASSISTANT_TASK:>
Python Code:
# Start with some imports!
from __future__ import print_function
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
# Very basic function
def f(x):
return x
help(interact)
# Generate a slider to interact with
interact(f, x=10);
interact(f, x=10,);
# Booleans generate check-boxes
interact(f, x=True);
# Strings generate text areas
interact(f, x='Hi there!');
# Using a decorator!
@interact(x=True, y=1.0)
def g(x, y):
return (x, y)
# Again, a simple function
def h(p, q):
return (p, q)
interact(h, p=5, q=fixed(20));
interact(f, x=widgets.IntSlider(min=-10., max=30, value=10));
# Min,Max slider with Tuples
interact(f, x=(0,4));
# (min, max, step)
interact(f, x=(0,8,2));
interact(f, x=(0.0,10.0));
interact(f, x=(0.0,10.0,0.01));
@interact(x=(0.0,20.0,0.5))
def h(x=5.5):
return x
interact(f, x=('apples','oranges'));
interact(f, x={'one': 10, 'two': 20});
def f(x:True): # python 3 only
return x
from IPython.utils.py3compat import annotate
@annotate(x=True)
def f(x):
return x
interact(f);
def f(a, b):
return a+b
w = interactive(f, a=10, b=20)
type(w)
w.children
from IPython.display import display
display(w)
w.kwargs
w.result
from ipywidgets import *
IntSlider()
from IPython.display import display
w = IntSlider()
display(w)
display(w)
w.close()
w = IntSlider()
display(w)
w.value
w.value = 100
w.keys
Text(value='Hello World!')
Text(value='Hello World!', disabled=True)
from traitlets import link
a = FloatText()
b = FloatSlider()
display(a,b)
mylink = link((a, 'value'), (b, 'value'))
mylink.unlink()
print(widgets.Button.on_click.__doc__)
from IPython.display import display
button = widgets.Button(description="Click Me!")
display(button)
def on_button_clicked(b):
print("Button clicked.")
button.on_click(on_button_clicked)
text = widgets.Text()
display(text)
def handle_submit(sender):
print(text.value)
text.on_submit(handle_submit)
print(widgets.Widget.on_trait_change.__doc__)
print(widgets.Widget.observe.__doc__)  # `observe` is the newer replacement for on_trait_change
int_range = widgets.IntSlider()
display(int_range)
def on_value_change(name, value):
print(value)
int_range.on_trait_change(on_value_change, 'value')
import traitlets
# Create Caption
caption = widgets.Label(value = 'The values of slider1 and slider2 are synchronized')
# Create IntSlider
slider1 = widgets.IntSlider(description='Slider 1')
slider2 = widgets.IntSlider(description='Slider 2')
# Use trailets to link
l = traitlets.link((slider1, 'value'), (slider2, 'value'))
# Display!
display(caption, slider1, slider2)
# Create Caption
caption = widgets.Latex(value = 'Changes in source values are reflected in target1')
# Create Sliders
source = widgets.IntSlider(description='Source')
target1 = widgets.IntSlider(description='Target 1')
# Use dlink
dl = traitlets.dlink((source, 'value'), (target1, 'value'))
display(caption, source, target1)
# May get an error depending on order of cells being run!
l.unlink()
dl.unlink()
# NO LAG VERSION
caption = widgets.Latex(value = 'The values of range1 and range2 are synchronized')
range1 = widgets.IntSlider(description='Range 1')
range2 = widgets.IntSlider(description='Range 2')
l = widgets.jslink((range1, 'value'), (range2, 'value'))
display(caption, range1, range2)
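# jslink keeps the two values in sync in the browser (client side), with no kernel round-trip;
# traitlets.link used earlier syncs through the kernel, which is why it can lag.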
# NO LAG VERSION
caption = widgets.Latex(value = 'Changes in source_range values are reflected in target_range1')
source_range = widgets.IntSlider(description='Source range')
target_range1 = widgets.IntSlider(description='Target range ')
dl = widgets.jsdlink((source_range, 'value'), (target_range1, 'value'))
display(caption, source_range, target_range1)
l.unlink()
dl.unlink()
import ipywidgets as widgets
# Show all available widgets!
widgets.Widget.widget_types.values()
widgets.FloatSlider(
value=7.5,
min=5.0,
max=10.0,
step=0.1,
description='Test:',
)
widgets.FloatSlider(
value=7.5,
min=5.0,
max=10.0,
step=0.1,
description='Test',
orientation='vertical',
)
widgets.FloatProgress(
value=7.5,
min=5.0,
max=10.0,
step=0.1,
description='Loading:',
)
widgets.BoundedFloatText(
value=7.5,
min=5.0,
max=10.0,
description='Text:',
)
widgets.FloatText(
value=7.5,
description='Any:',
)
widgets.ToggleButton(
description='Click me',
value=False,
)
widgets.Checkbox(
description='Check me',
value=True,
)
widgets.Valid(
value=True,
)
from IPython.display import display
w = widgets.Dropdown(
options=['1', '2', '3'],
value='2',
description='Number:',
)
display(w)
# Show value
w.value
w = widgets.Dropdown(
options={'One': 1, 'Two': 2, 'Three': 3},
value=2,
description='Number:')
display(w)
w.value
widgets.RadioButtons(
description='Pizza topping:',
options=['pepperoni', 'pineapple', 'anchovies'],
)
widgets.Select(
description='OS:',
options=['Linux', 'Windows', 'OSX'],
)
widgets.ToggleButtons(
description='Speed:',
options=['Slow', 'Regular', 'Fast'],
)
w = widgets.SelectMultiple(
description="Fruits",
options=['Apples', 'Oranges', 'Pears'])
display(w)
w.value
widgets.Text(
description='String:',
value='Hello World',
)
widgets.Textarea(
description='String:',
value='Hello World',
)
widgets.Latex(
value="$$\\frac{n!}{k!(n-k)!}$$",
)
widgets.HTML(
value="Hello <b>World</b>"
)
widgets.Button(description='Click me')
%%html
<style>
.example-container { background: #999999; padding: 2px; min-height: 100px; }
.example-container.sm { min-height: 50px; }
.example-box { background: #9999FF; width: 50px; height: 50px; text-align: center; vertical-align: middle; color: white; font-weight: bold; margin: 2px;}
.example-box.med { width: 65px; height: 65px; }
.example-box.lrg { width: 80px; height: 80px; }
</style>
import ipywidgets as widgets
from IPython.display import display
button = widgets.Button(
description='Hello World!',
width=100, # Integers are interpreted as pixel measurements.
height='2em', # em is valid HTML unit of measurement.
color='lime', # Colors can be set by name,
background_color='#0022FF', # and also by color code.
border_color='cyan')
display(button)
from IPython.display import display
float_range = widgets.FloatSlider()
string = widgets.Text(value='hi')
container = widgets.Box(children=[float_range, string])
container.border_color = 'red'
container.border_style = 'dotted'
container.border_width = 3
display(container) # Displays the `container` and all of it's children.
container = widgets.Box()
container.border_color = 'red'
container.border_style = 'dotted'
container.border_width = 3
display(container)
int_range = widgets.IntSlider()
container.children=[int_range]
name1 = widgets.Text(description='Location:')
zip1 = widgets.BoundedIntText(description='Zip:', min=0, max=99999)
page1 = widgets.Box(children=[name1, zip1])
name2 = widgets.Text(description='Location:')
zip2 = widgets.BoundedIntText(description='Zip:', min=0, max=99999)
page2 = widgets.Box(children=[name2, zip2])
accord = widgets.Accordion(children=[page1, page2], width=400)
display(accord)
accord.set_title(0, 'From')
accord.set_title(1, 'To')
name = widgets.Text(description='Name:', padding=4)
color = widgets.Dropdown(description='Color:', padding=4, options=['red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet'])
page1 = widgets.Box(children=[name, color], padding=4)
age = widgets.IntSlider(description='Age:', padding=4, min=0, max=120, value=50)
gender = widgets.RadioButtons(description='Gender:', padding=4, options=['male', 'female'])
page2 = widgets.Box(children=[age, gender], padding=4)
tabs = widgets.Tab(children=[page1, page2])
display(tabs)
tabs.set_title(0, 'Name')
tabs.set_title(1, 'Details')
display(widgets.Text(description="a:"))
display(widgets.Text(description="aa:"))
display(widgets.Text(description="aaa:"))
display(widgets.Text(description="a:"))
display(widgets.Text(description="aa:"))
display(widgets.Text(description="aaa:"))
display(widgets.Text(description="aaaaaaaaaaaaaaaaaa:"))
display(widgets.Text(description="a:"))
display(widgets.Text(description="aa:"))
display(widgets.Text(description="aaa:"))
display(widgets.Text())
buttons = [widgets.Button(description=str(i)) for i in range(3)]
display(*buttons)
container = widgets.HBox(children=buttons)
display(container)
container = widgets.VBox(children=buttons)
display(container)
container = widgets.FlexBox(children=buttons)
display(container)
w1 = widgets.Latex(value="First line")
w2 = widgets.Latex(value="Second line")
w3 = widgets.Latex(value="Third line")
display(w1, w2, w3)
w2.visible=None
w2.visible=False
w2.visible=True
form = widgets.VBox()
first = widgets.Text(description="First:")
last = widgets.Text(description="Last:")
student = widgets.Checkbox(description="Student:", value=False)
school_info = widgets.VBox(visible=False, children=[
widgets.Text(description="School:"),
widgets.IntText(description="Grade:", min=0, max=12)
])
pet = widgets.Text(description="Pet:")
form.children = [first, last, student, school_info, pet]
display(form)
def on_student_toggle(name, value):
if value:
school_info.visible = True
else:
school_info.visible = False
student.on_trait_change(on_student_toggle, 'value')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <i class="fa fa-diamond"></i> First, pimp your notebook!
Step2: <i class="fa fa-book"></i> First, the libraries
Step3: <i class="fa fa-database"></i> Let's create some toy data
Step4: <i class="fa fa-tree"></i> Now let's create a tree model
Step5: <i class="fa fa-question-circle"></i> What parameters and functions does the classifier have?
Step6: let's fit our model with fit and get its score with score
Step7: <i class="fa fa-question-circle"></i>
Step8: what are the sizes of these new data sets?
Step9: <i class="fa fa-question-circle"></i>
Step10: Cross-validation and
Step11: <i class="fa fa-question-circle"></i>
Step12: let's try it!
Step13: <i class="fa fa-pagelines"></i> The Iris dataset
Step14: Activity
|
<ASSISTANT_TASK:>
Python Code:
from IPython.core.display import HTML
import os
def css_styling():
    """Load default custom.css file from ipython profile."""
base = os.getcwd()
styles = "<style>\n%s\n</style>" % (open(os.path.join(base,'files/custom.css'),'r').read())
return HTML(styles)
css_styling()
import numpy as np
import sklearn as sk
import matplotlib.pyplot as plt
import sklearn.datasets as datasets
import seaborn as sns
%matplotlib inline
X,Y = datasets.make_blobs()
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
help(clf)
clf.fit(X, Y)
clf.score(X,Y)
from sklearn.cross_validation import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X,Y, test_size=0.33)
clf.fit(X_train, Y_train)
clf.score(X_test,Y_test)
clf.feature_importances_
from sklearn.cross_validation import cross_val_score
resultados =cross_val_score(clf, X, Y, cv=10)
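# resultados holds one accuracy score per fold; resultados.mean() summarizes the 10-fold estimate.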
from sklearn.ensemble import RandomForestClassifier
ks=np.arange(2,40)
scores=[]
for k in ks:
clf = RandomForestClassifier(n_estimators=k)
scores.append(cross_val_score(clf, X, Y, cv=10).mean())
plt.plot(ks,scores)
g = sns.PairGrid(iris, hue="species")
g = g.map(plt.scatter)
g = g.add_legend()
iris = datasets.load_iris()
X = iris.data
Y = iris.target
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction
Step2: We can understand these parameters by looking at their relationship to the weights and activations of the layer. Let's do that now.
Step3: You could think about the activation function as scoring pixel values according to some measure of importance. The ReLU activation says that negative values are not important and so sets them to 0. ("Everything unimportant is equally unimportant.")
Step4: For the filtering step, we'll define a kernel and then apply it with the convolution. The kernel in this case is an "edge detection" kernel. You can define it with tf.constant just like you'd define an array in Numpy with np.array. This creates a tensor of the sort TensorFlow uses.
Step5: TensorFlow includes many common operations performed by neural networks in its tf.nn module. The two that we'll use are conv2d and relu. These are simply function versions of Keras layers.
Step6: Now let's apply our kernel and see what happens.
Step7: Next is the detection step with the ReLU function. This function is much simpler than the convolution, as it doesn't have any parameters to set.
|
<ASSISTANT_TASK:>
Python Code:
#$HIDE$
import numpy as np
from itertools import product
def show_kernel(kernel, label=True, digits=None, text_size=28):
# Format kernel
kernel = np.array(kernel)
if digits is not None:
kernel = kernel.round(digits)
# Plot kernel
cmap = plt.get_cmap('Blues_r')
plt.imshow(kernel, cmap=cmap)
rows, cols = kernel.shape
thresh = (kernel.max()+kernel.min())/2
# Optionally, add value labels
if label:
for i, j in product(range(rows), range(cols)):
val = kernel[i, j]
color = cmap(0) if val > thresh else cmap(255)
plt.text(j, i, val,
color=color, size=text_size,
horizontalalignment='center', verticalalignment='center')
plt.xticks([])
plt.yticks([])
from tensorflow import keras
from tensorflow.keras import layers
model = keras.Sequential([
layers.Conv2D(filters=64, kernel_size=3), # activation is None
# More layers follow
])
model = keras.Sequential([
layers.Conv2D(filters=64, kernel_size=3, activation='relu')
# More layers follow
])
#$HIDE_INPUT$
import tensorflow as tf
import matplotlib.pyplot as plt
plt.rc('figure', autolayout=True)
plt.rc('axes', labelweight='bold', labelsize='large',
titleweight='bold', titlesize=18, titlepad=10)
plt.rc('image', cmap='magma')
image_path = '../input/computer-vision-resources/car_feature.jpg'
image = tf.io.read_file(image_path)
image = tf.io.decode_jpeg(image)
plt.figure(figsize=(6, 6))
plt.imshow(tf.squeeze(image), cmap='gray')
plt.axis('off')
plt.show();
import tensorflow as tf
kernel = tf.constant([
[-1, -1, -1],
[-1, 8, -1],
[-1, -1, -1],
])
plt.figure(figsize=(3, 3))
show_kernel(kernel)
#$HIDE$
# Reformat for batch compatibility.
image = tf.image.convert_image_dtype(image, dtype=tf.float32)
image = tf.expand_dims(image, axis=0)
kernel = tf.reshape(kernel, [*kernel.shape, 1, 1])
kernel = tf.cast(kernel, dtype=tf.float32)
image_filter = tf.nn.conv2d(
input=image,
filters=kernel,
# we'll talk about these two in lesson 4!
strides=1,
padding='SAME',
)
plt.figure(figsize=(6, 6))
plt.imshow(tf.squeeze(image_filter))
plt.axis('off')
plt.show();
image_detect = tf.nn.relu(image_filter)
plt.figure(figsize=(6, 6))
plt.imshow(tf.squeeze(image_detect))
plt.axis('off')
plt.show();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading in data.
Step2: Let's now try K2SC!
Step3: Now we run with default values!
Step4: Now we plot! See how the k2sc lightcurve has such better quality than the uncorrected data.
Step5: Now we save the data.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib
import matplotlib as mpl
import lightkurve as lk
import k2sc
from k2sc.standalone import k2sc_lc
from astropy.io import fits
%pylab inline --no-import-all
matplotlib.rcParams['image.origin'] = 'lower'
matplotlib.rcParams['figure.figsize']=(10.0,10.0) #(6.0,4.0)
matplotlib.rcParams['font.size']=16 #10
matplotlib.rcParams['savefig.dpi']= 300 #72
colours = mpl.rcParams['axes.prop_cycle'].by_key()['color']
import warnings
warnings.filterwarnings('ignore')
print(lk.__version__)
print(k2sc.__version__)
lc = lk.search_lightcurve('EPIC 212300977')[1].download()
lc = lc.remove_nans()
lc = lc[lc.quality==0]
lc.__class__ = k2sc_lc
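# Re-casting the LightCurve instance gives it the k2sc() detrending method from k2sc.standalone.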
lc.k2sc()
fig = plt.figure(figsize=(12.0,8.0))
plt.plot(lc.time.value,lc.flux.value,'.',label="Uncorrected")
detrended = lc.corr_flux-lc.tr_time + np.nanmedian(lc.tr_time)
plt.plot(lc.time.value,detrended.value,'.',label="K2SC")
plt.legend()
plt.xlabel('BJD')
plt.ylabel('Flux')
plt.title('WASP-55',y=1.01)
extras = {'CORR_FLUX':lc.corr_flux.value,
'TR_TIME':lc.tr_time.value,
'TR_POSITION':lc.tr_position.value}
out = lc.to_fits(extra_data=extras,path='test.fits',overwrite=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import training data
Step2: Separate tweets into two sets
Step3: Split the data into the training set and test set for crossvalidation
Step4: Create a pipeline for each classifier algorithm
Step5: Parameter tuning
Step6: Set models and do the search of the parameter
Step7: Set the classifiers to have the best combination of parameters
Step8: Train algorithms
Step9: Predict on the test set and check metrics
Step10: Perform k-fold
Step12: Plot Confusion Matrix
Step13: Print the metrics of the performance of the algorithms
Step14: ROC Curves
Step15: Predict on unclassified data
Step16: Predict sentiment
Step18: Categorize data by political party
Step19: Put data into a dataframe
Step20: Get some information about the predictions
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import itertools
import math
import pandas as pd
import csv
import time
from sklearn.cross_validation import train_test_split, KFold
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.model_selection import learning_curve
from sklearn import svm
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.metrics import make_scorer, accuracy_score, precision_score, recall_score, f1_score, roc_curve, auc, confusion_matrix
from sklearn.grid_search import GridSearchCV
from sklearn.utils import shuffle
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="whitegrid", color_codes=True)
with open("/resources/data/classified_tweets.txt", "r",encoding="utf8") as myfile:
data = myfile.readlines()
X=[]
y=[]
for x in data:
X.append(x[1:])
y.append(x[0])
X_train, X_test, y_train, y_test = train_test_split(X,y, random_state=42, test_size = 0.3)
#Logistic Regression
Log_clf=Pipeline([('vect', CountVectorizer(analyzer='word')), ('tfidf', TfidfTransformer()), ('clf', LogisticRegression())])
#Multinomial Naive Bayes
MNB_clf=Pipeline([('vect', CountVectorizer(analyzer='word')), ('tfidf', TfidfTransformer()), ('clf', MultinomialNB())])
parameters_log = {'vect__ngram_range': [(1, 1), (1, 2)],'tfidf__use_idf': (True, False), 'clf__penalty': ['l1','l2'], 'clf__solver':['liblinear']}
parameters_mnb = {'vect__ngram_range': [(1, 1), (1, 2)],'tfidf__use_idf': (True, False),'clf__alpha': (1,1e-2, 1e-3)}
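# Grid sizes: 2*2*2 = 8 parameter combinations for logistic regression and 2*2*3 = 12 for naive Bayes,
# each refit with cross-validation by GridSearchCV below.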
#Set models
acc_scorer = make_scorer(accuracy_score)
gs_log_clf = GridSearchCV(Log_clf, parameters_log, n_jobs=-1, scoring=acc_scorer)
gs_mnb_clf = GridSearchCV(MNB_clf, parameters_mnb, n_jobs=-1, scoring=acc_scorer)
# Grid search of best parameters
print ("-----Tunning of parameters-----")
start = time.time()
gs_log_clf = gs_log_clf.fit(X_train, y_train)
end = time.time()
print ("Logistic Regression -"," Running Time:", end - start,"s")
start = time.time()
gs_mnb_clf= gs_mnb_clf.fit(X_train, y_train)
end = time.time()
print ("Multinomial Naive Bayes -"," Running Time:", end - start,"s")
Log_clf= gs_log_clf .best_estimator_
MNB_clf= gs_mnb_clf .best_estimator_
start = time.time()
Log_clf = Log_clf.fit(X_train, y_train)
end = time.time()
print ("Logistic Regression -"," Running Time:", end - start,"s")
start = time.time()
MNB_clf = MNB_clf.fit(X_train, y_train)
end = time.time()
print ("Multinomial Naive Bayes -"," Running Time:", end - start,"s")
predicted_Log = Log_clf.predict(X_test)
predicted_MNB =MNB_clf.predict(X_test)
dec_Log = Log_clf.decision_function(X_test)
dec_MNB =MNB_clf.predict_proba(X_test)
def run_kfold(clf):
#run KFold with 10 folds instead of the default 3
#on the 200000 records in the training_data
kf = KFold(200000, n_folds=10,shuffle=True)
X_new=np.array(X)
y_new=np.array(y)
outcomes = []
fold = 0
for train_index, test_index in kf:
fold += 1
X1_train, X1_test = X_new[train_index], X_new[test_index]
y1_train, y1_test = y_new[train_index], y_new[test_index]
clf.fit(X1_train, y1_train)
predictions = clf.predict(X1_test)
accuracy = accuracy_score(y1_test, predictions)
outcomes.append(accuracy)
print("Fold {0} accuracy: {1}".format(fold, accuracy))
mean_outcome = np.mean(outcomes)
print("Mean Accuracy: {0}".format(mean_outcome))
run_kfold(Log_clf)
run_kfold(MNB_clf)
Log_matrix=confusion_matrix(y_test, predicted_Log)
Log_matrix=Log_matrix[::-1, ::-1]
MNB_matrix=confusion_matrix(y_test, predicted_MNB)
MNB_matrix=MNB_matrix[::-1, ::-1]
def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i,round(cm[i, j],2), horizontalalignment="center", color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plot_confusion_matrix(Log_matrix, classes=["Positive","Negative"], normalize=True, title='Normalized confusion matrix')
plot_confusion_matrix(MNB_matrix, classes=["Positive","Negative"], normalize=True, title='Normalized confusion matrix')
X_test_new_Log=[]
X_test_new_MNB=[]
y_new=[]
Log_list=predicted_Log.tolist()
MNB_list=predicted_MNB.tolist()
for x in Log_list:
X_test_new_Log.append(int(x))
for x in MNB_list:
X_test_new_MNB.append(int(x))
for x in y_test:
y_new.append(int(x))
Log_list_prediction=[1 if x==4 else x for x in X_test_new_Log]
MNB_list_prediction=[1 if x==4 else x for x in X_test_new_MNB]
target_new=[1 if x==4 else x for x in y_new]
print('Metrics Logistic Regression')
print('-------------------------------------')
print("Accuracy:",accuracy_score(target_new, Log_list_prediction))
print("Recall:",recall_score(target_new,Log_list_prediction))
print("Precision:",precision_score(target_new, Log_list_prediction))
print("F1 Score:",f1_score(target_new, Log_list_prediction))
print('' '')
print('Metrics Multinomial Naive Bayes')
print('-------------------------------------')
print("Accuracy:",accuracy_score(target_new, MNB_list_prediction))
print("Recall:",recall_score(target_new,MNB_list_prediction))
print("Precision:",precision_score(target_new, MNB_list_prediction))
print("F1 Score:",f1_score(target_new, MNB_list_prediction))
predicted_Log_new=[]
y_actual=[]
for x in dec_Log:
predicted_Log_new.append(int(x))
for x in y_test:
y_actual.append(int(x))
Log_list_prediction=[1 if x==4 else x for x in predicted_Log_new]
target_new=[1 if x==4 else x for x in y_actual]
fpr, tpr, thresholds=roc_curve(target_new, Log_list_prediction, pos_label=1)
roc_auc= auc(fpr, tpr)
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange', lw=lw, label='ROC curve (area = %0.3f)' % roc_auc )
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Logistic Regression ROC Curve')
plt.legend(loc="lower right")
plt.show()
plt.savefig('Log_roc.jpg')
predicted_MNB_new=[]
y_actual=[]
for x in range(0,60000):
predicted_MNB_new.append(dec_MNB[x][1])
#for x in dec_MNB:
# predicted_MNB_new.append(int(x))
for x in y_test:
y_actual.append(int(x))
#MNB_list_prediction=[1 if x==4 else x for x in predicted_MNB_new]
target_new=[1 if x==4 else x for x in y_actual]
fpr, tpr, thresholds=roc_curve(target_new, predicted_MNB_new)
roc_auc= auc(fpr, tpr)
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange', lw=lw, label='ROC curve (area = %0.3f)' % roc_auc )
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Multinomial Naive Bayes ROC Curve')
plt.legend(loc="lower right")
plt.show()
plt.savefig('MNB_roc.jpg')
with open("/resources/data/unclassified_tweets.txt", "r",encoding="utf8") as myfile:
unclass_data = myfile.readlines()
MNB_clf = MNB_clf.fit(X_train, y_train)
predicted_MNB_unclass =MNB_clf.predict(unclass_data)
def party(tw):
    """
    For NDP we use the candidate's name in various forms and the party's campaign slogan.
    For the Liberals we use the candidate's name in various forms and the party's campaign slogan.
    For the Conservatives we use the candidate's name, associations related with the party (tcot, ccot),
    a nickname used by them (tory), and the bill introduced by the Conservative government (c51).
    """
tw_clean=tw.split()
hashtags=[]
NDP_list=['tommulcair','mulcair','ndp','tm4pm', 'ready4change','thomasmulcair']
Lib_list=['justin', 'trudeau\'s','lpc','trudeau','realchange','liberal','liberals','justintrudeau','teamtrudeau']
Cons_list=['c51','harper','cpc','conservative', 'tory','tcot','stephenharper','ccot','harper','conservatives']
for x in range(0,len(tw_clean)):
if tw_clean[x].find('#')!= -1:
hashtags.append(tw_clean[x].replace('#',''))
result=''
if hashtags:
for x in hashtags:
if x in NDP_list:
result= 'NDP'
return result
elif x in Lib_list:
result= 'Liberal'
return result
elif x in Cons_list:
result= 'Conservative'
return result
if result=='':
result='Other'
return result
party_af=[]
for x in range(0,len(unclass_data)):
party_af.append(party(unclass_data[x]))
predictions=[]
for x in range(0,len(unclass_data)):
predictions.append((unclass_data[x],party_af[x],predicted_MNB_unclass[x]))
tweets=pd.DataFrame(predictions, columns=["Tweet","Party","Classification - MNB"])
tweets.head()
def get_sent(tweet):
classf = tweet
return 'Positive' if classf=='4' else 'Negative'
tweets_clean= tweets[tweets.Party !="Other"]
tweets_clean['Sentiment'] = tweets_clean['Classification - MNB'].apply(get_sent)
tweets_clean.drop(labels=['Classification - MNB'], axis=1, inplace=True)
tweets_clean.head()
sns.countplot(x='Sentiment', hue="Party", data=tweets_clean)
print ("Number of tweets of classfied for each party")
tweets_clean.Party.value_counts().head()
print ("Number of tweets of classfied for each party")
tweets_clean[tweets_clean.Sentiment=='Positive'].Party.value_counts().head()
print ("Number of tweets of classfied for each party")
tweets_clean[tweets_clean.Sentiment=='Negative'].Party.value_counts().head()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Complete graph Laplacian
Step3: The Laplacian matrix is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$, where $D$ is the degree matrix and $A$ is the adjacency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.
Step5: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
Step6: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import networkx as nx
K_5=nx.complete_graph(5)
nx.draw(K_5)
def complete_deg(n):
    """Return the integer valued degree matrix D for the complete graph K_n."""
z=np.zeros((n,n), dtype=int)
np.fill_diagonal(z,(n-1))
return z
D = complete_deg(5)
assert D.shape==(5,5)
assert D.dtype==np.dtype(int)
assert np.all(D.diagonal()==4*np.ones(5))
assert np.all(D-np.diag(D.diagonal())==np.zeros((5,5),dtype=int))
def complete_adj(n):
    """Return the integer valued adjacency matrix A for the complete graph K_n."""
u = np.zeros((n,n), dtype=int)
u = u + 1
np.fill_diagonal(u,0)
return u
A = complete_adj(5)
assert A.shape==(5,5)
assert A.dtype==np.dtype(int)
assert np.all(A+np.eye(5,dtype=int)==np.ones((5,5),dtype=int))
def Laplacian(n):
return complete_deg(n) - complete_adj(n)
for n in range(1,10):
print(np.linalg.eigvals(Laplacian(n)))
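# Expected pattern (a standard result): the Laplacian spectrum of K_n is the eigenvalue 0 (once)
# together with the eigenvalue n repeated n-1 times.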
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This notebook calculates and plots the theoretical tilt angles. It will also plot the alpha and p0 factors vs temperature that are given in the cell below this.
Step2: Data
Step4: Langevin-Debye Model
Step5: Second, Calculate the Tilt Angle $\psi$
Step6: $$ denominator(\theta,\phi) = \left(\cos^2\theta - \sin^2\theta\,\cos^2\phi\right)\, e^{\frac{p_0 E}{k T}\sin\theta\cos\phi\,(1+\alpha E\cos\phi)}\,\sin\theta $$
Step9: $$ \tan(2\psi) = \frac{\int_{\theta_{min}}^{\theta_{max}} \int_0^{2\pi} \sin(2\theta)\cos\phi\; e^{\frac{p_0 E}{k T}\sin\theta\cos\phi\,(1+\alpha E\cos\phi)}\,\sin\theta \, d\phi\, d\theta}{\int_{\theta_{min}}^{\theta_{max}} \int_0^{2\pi} \left(\cos^2\theta - \sin^2\theta\,\cos^2\phi\right) e^{\frac{p_0 E}{k T}\sin\theta\cos\phi\,(1+\alpha E\cos\phi)}\,\sin\theta \, d\phi\, d\theta} $$
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy.integrate import quad, dblquad
%matplotlib inline
import matplotlib.pyplot as plt
import scipy.optimize as opt
thetamin = 25.6*np.pi/180
thetamax = 33.7*np.pi/180
t = 1*10**-6 #Cell Thickness
tempsC = np.array([26, 27, 29, 31, 33, 35, 37])
voltages = np.array([2,3,6,7,9,11,12.5,14,16,18,20,22,23.5,26,27.5,29,31,32.5,34,36])
alpha_micro = np.array([.2575,.2475,.2275,.209,.189,.176,.15])
p0Debye = np.array([650,475,300,225,160,125,100]) #Temperature Increases to the right
#This Block just converts units
fields = np.array([entry/t for entry in voltages])
debye = 3.33564e-30
p0_array = np.array([entry*debye for entry in p0Debye]) #debye units to SI units
k = 1.3806488e-23
p0k_array = np.array([entry/k for entry in p0_array]) #p0k is used because it helps with the integration
KC = 273.15
tempsK = np.array([entry+KC for entry in tempsC]) #Celsius to Kelvin
alpha_array = np.array([entry*1e-6 for entry in alpha_micro])
PSIdata = np.array([11.4056,20.4615,25.4056,27.9021,29.028,29.6154,30.2517,30.8392,31.1329,31.5245,31.8671,32.014,32.3077,32.5034,32.7972,32.9929,33.1399,33.3357,33.4336,33.6783])
Edata = fields
T = tempsK[0]
def Boltz(theta,phi,T,p0k,alpha,E):
    """
    Compute the integrand for the Boltzmann factor.

    Returns
    -------
    A function of theta,phi,T,p0k,alpha,E to be used within dblquad
    """
return np.exp((1/T)*p0k*E*np.sin(theta)*np.cos(phi)*(1+alpha*E*np.cos(phi)))*np.sin(theta)
def numerator(theta,phi,T,p0k,alpha,E):
boltz = Boltz(theta,phi,T,p0k,alpha,E)
return np.sin(2*theta)*np.cos(phi)*boltz
def denominator(theta,phi,T,p0k,alpha,E):
boltz = Boltz(theta,phi,T,p0k,alpha,E)
return ((np.cos(theta)**2) - ((np.sin(theta)**2) * (np.cos(phi)**2)))*boltz
def compute_psi(E,p0k,alpha):
def Boltz(theta,phi,T,p0k,alpha,E):
        """
        Compute the integrand for the Boltzmann factor.

        Returns
        -------
        A function of theta,phi,T,p0k,alpha,E to be used within dblquad
        """
return np.exp((1/T)*p0k*E*np.sin(theta)*np.cos(phi)*(1+alpha*E*np.cos(phi)))*np.sin(theta)
def numerator(theta,phi,T,p0k,alpha,E):
boltz = Boltz(theta,phi,T,p0k,alpha,E)
return np.sin(2*theta)*np.cos(phi)*boltz
def denominator(theta,phi,T,p0k,alpha,E):
boltz = Boltz(theta,phi,T,p0k,alpha,E)
return ((np.cos(theta)**2) - ((np.sin(theta)**2) * (np.cos(phi)**2)))*boltz
    """
    Computes the tilt angle (psi) by use of our tan(2psi) equation.

    Returns
    -------
    Float:
        The statistical tilt angle with conditions T,p0k,alpha,E
    """
avg_numerator, avg_numerator_error = dblquad(numerator, 0, 2*np.pi, lambda theta: thetamin, lambda theta: thetamax,args=(T,p0k,alpha,E))
avg_denominator, avg_denominator_error = dblquad(denominator, 0, 2*np.pi, lambda theta: thetamin, lambda theta: thetamax,args=(T,p0k,alpha,E))
psi = np.arctan(avg_numerator / (avg_denominator)) * (180 /(2*np.pi)) #Converting to degrees from radians and divide by two
return psi
compute_psi(Edata[0],p0k_array[0]*1e7,alpha_array[0]*1e10)
PSIdata[0]
guess = [p0k_array[0]*1e7,alpha_array[0]*1e10]
guess
theta_best, theta_cov = opt.curve_fit(compute_psi, Edata, PSIdata,guess,absolute_sigma=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Regression
Step2: The Auto MPG dataset
Step3: Import the data using pandas.
Step4: Cleaning the data
Step5: To keep this first tutorial simple, we just drop those rows.
Step6: The "Origin" column is categorical, not numeric, so we one-hot encode it.
Step7: Split the data into a training set and a test set
Step8: Inspect the data
Step9: Let's also look at the overall statistics.
Step10: Separate the labels from the features
Step11: Normalize the data
Step12: We will use this normalized data to train the model.
Step13: Validate the model
Step14: Now let's try out the model. Take a batch of 10 samples from the training data and call model.predict on it.
Step15: It seems to be working: the output has the expected type and shape.
Step16: Visualize the model's training progress using the values stored in the history object.
Step17: Looking at this graph, the validation error stops improving after about 100 epochs and even starts to get worse. Let's change the model.fit call to stop training automatically when the validation score stops improving, using an EarlyStopping callback that checks the training state every epoch; if no improvement is seen for a set number of epochs, training stops automatically.
Step18: Looking at the graph for the validation set, the mean error is around +/- 2 MPG. Is that good? We'll leave that judgment to you.
Step19: Predictions with the model
Step20: The predictions look reasonably good. Let's look at the error distribution.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
# Use seaborn for the pairplot
!pip install seaborn
import pathlib
import pandas as pd
import seaborn as sns
import tensorflow.compat.v1 as tf
from tensorflow import keras
from tensorflow.keras import layers
print(tf.__version__)
dataset_path = keras.utils.get_file("auto-mpg.data", "http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data")
dataset_path
column_names = ['MPG','Cylinders','Displacement','Horsepower','Weight',
'Acceleration', 'Model Year', 'Origin']
raw_dataset = pd.read_csv(dataset_path, names=column_names,
na_values = "?", comment='\t',
sep=" ", skipinitialspace=True)
dataset = raw_dataset.copy()
dataset.tail()
dataset.isna().sum()
dataset = dataset.dropna()
origin = dataset.pop('Origin')
dataset['USA'] = (origin == 1)*1.0
dataset['Europe'] = (origin == 2)*1.0
dataset['Japan'] = (origin == 3)*1.0
dataset.tail()
train_dataset = dataset.sample(frac=0.8,random_state=0)
test_dataset = dataset.drop(train_dataset.index)
sns.pairplot(train_dataset[["MPG", "Cylinders", "Displacement", "Weight"]], diag_kind="kde")
train_stats = train_dataset.describe()
train_stats.pop("MPG")
train_stats = train_stats.transpose()
train_stats
train_labels = train_dataset.pop('MPG')
test_labels = test_dataset.pop('MPG')
def norm(x):
return (x - train_stats['mean']) / train_stats['std']
normed_train_data = norm(train_dataset)
normed_test_data = norm(test_dataset)
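# The test set is normalized with statistics computed on the training set only, which avoids information leakage.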
def build_model():
model = keras.Sequential([
layers.Dense(64, activation=tf.nn.relu, input_shape=[len(train_dataset.keys())]),
layers.Dense(64, activation=tf.nn.relu),
layers.Dense(1)
])
optimizer = tf.keras.optimizers.RMSprop(0.001)
model.compile(loss='mean_squared_error',
optimizer=optimizer,
metrics=['mean_absolute_error', 'mean_squared_error'])
return model
model = build_model()
model.summary()
example_batch = normed_train_data[:10]
example_result = model.predict(example_batch)
example_result
# Print one dot per epoch to show training progress
class PrintDot(keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs):
if epoch % 100 == 0: print('')
print('.', end='')
EPOCHS = 1000
history = model.fit(
normed_train_data, train_labels,
epochs=EPOCHS, validation_split = 0.2, verbose=0,
callbacks=[PrintDot()])
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
hist.tail()
import matplotlib.pyplot as plt
def plot_history(history):
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
plt.figure()
plt.xlabel('Epoch')
plt.ylabel('Mean Abs Error [MPG]')
plt.plot(hist['epoch'], hist['mean_absolute_error'],
label='Train Error')
plt.plot(hist['epoch'], hist['val_mean_absolute_error'],
label = 'Val Error')
plt.legend()
plt.ylim([0,5])
plt.figure()
plt.xlabel('Epoch')
plt.ylabel('Mean Square Error [$MPG^2$]')
plt.plot(hist['epoch'], hist['mean_squared_error'],
label='Train Error')
plt.plot(hist['epoch'], hist['val_mean_squared_error'],
label = 'Val Error')
plt.legend()
plt.ylim([0,20])
plot_history(history)
model = build_model()
# The patience parameter is the amount of epochs to check for improvement
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)
history = model.fit(normed_train_data, train_labels, epochs=EPOCHS,
validation_split = 0.2, verbose=0, callbacks=[early_stop, PrintDot()])
plot_history(history)
loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=2)
print("Testing set Mean Abs Error: {:5.2f} MPG".format(mae))
test_predictions = model.predict(normed_test_data).flatten()
plt.scatter(test_labels, test_predictions)
plt.xlabel('True Values [MPG]')
plt.ylabel('Predictions [MPG]')
plt.axis('equal')
plt.axis('square')
plt.xlim([0,plt.xlim()[1]])
plt.ylim([0,plt.ylim()[1]])
_ = plt.plot([-100, 100], [-100, 100])
error = test_predictions - test_labels
plt.hist(error, bins = 25)
plt.xlabel("Prediction Error [MPG]")
_ = plt.ylabel("Count")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1:
Step2:
Step3:
|
<ASSISTANT_TASK:>
Python Code:
import cobra
from utils import findBiomarkers
import pandas as pd
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
M = cobra.io.load_json_model('models/recon_2_2_simple_medium.json')
model = M.copy() # this way we can edit model but leave M unaltered
exchanges = [ rxn for rxn in model.reactions if rxn.products == [] and 'EX_' in rxn.id ]
# Shlomi et al suggested using the following medium: everything in (-1) with a few exceptions
# everything out (unlimited)
# This was actually first proposed in (Sahoo et al, 2012)
for rxn in exchanges:
rxn.lower_bound = -1
rxn.upper_bound = 999999
# specifics
M.reactions.EX_o2_e.lower_bound = -40
for rxn in ['EX_h2o_e','EX_h_e','EX_co2_e','EX_nh4_e','EX_pi_e','EX_hco3_e','EX_so4_e']:
M.reactions.get_by_id(rxn).lower_bound = - 100
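# Note: the loop above changed bounds on the copied `model`, while these specific bounds are set on the
# original `M`; if that is unintended, apply both sets of bounds to the same object.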
# to reduce computation time we check all amino acids + a couple neurotransmitters
biomarkers_to_check = ['EX_his_L_e','EX_ile_L_e','EX_leu_L_e','EX_lys_L_e','EX_met_L_e',
'EX_phe_L_e','EX_thr_L_e','EX_trp_L_e','EX_val_L_e','EX_cys_L_e',
'EX_glu_L_e','EX_tyr_L_e','EX_ala_L_e','EX_asp_L_e','EX_gly_e',
'EX_arg_L_e','EX_gln_L_e','EX_pro_L_e','EX_ser_L_e','EX_asn_L_e',
'EX_dopa_e','EX_adrnl_e','EX_srtn_e']
# UNCOMMENT & FIX THE LINE BELOW
findBiomarkers(model,fvaRxns=biomarkers_to_check,mods=['HGNC:8582'],synchronous=True)
len(exchanges)
findBiomarkers(model,fvaRxns=exchanges,mods=['HGNC:8582'],synchronous=True)
model.reactions.DHPR.gene_reaction_rule, model.reactions.DHPR.reaction
model.reactions.DHPR2.gene_reaction_rule,model.reactions.DHPR2.reaction
model.reactions.r0398.gene_reaction_rule,model.reactions.r0398.reaction
# UNCOMMENT & FIX THE LINE BELOW
# findBiomarkers(model,fvaRxns=biomarkers_to_check,mods=[ADD THE RELEVANT GENES HERE],synchronous=True)
# Copy what you did above but give reactions as input
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship
Step3: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcome[i].
Step5: Tip
Step6: Question 1
Step7: Answer
Step9: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction
Step10: Question 2
Step11: Answer
Step13: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction
Step14: Question 3
Step15: Answer
Step17: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Step18: Question 4
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
# RMS Titanic data visualization code
from titanic_visualizations import survival_stats
from IPython.display import display
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
def accuracy_score(truth, pred):
    """Returns accuracy score for input truth and predictions."""
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print accuracy_score(outcomes[:5], predictions)
def predictions_0(data):
    """Model with no features. Always predicts a passenger did not survive."""
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
print accuracy_score(outcomes, predictions)
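# For the standard Kaggle training split (891 passengers, 342 survivors) this always-predict-0 baseline
# scores roughly 61.6% (an assumption about the dataset version being used).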
survival_stats(data, outcomes, 'Sex')
def predictions_1(data):
    """
    Model with one feature:
    - Predict a passenger survived if they are female.
    """
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger['Sex'] == "female":
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
print accuracy_score(outcomes, predictions)
survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
def predictions_2(data):
    """
    Model with two features:
    - Predict a passenger survived if they are female.
    - Predict a passenger survived if they are male and younger than 10.
    """
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger['Sex'] == "female":
predictions.append(1)
elif passenger['Sex'] == "male" and passenger['Age'] < 10:
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
print accuracy_score(outcomes, predictions)
survival_stats(data, outcomes, 'Age', ["Sex == 'male'", "SibSp == 1"])
def predictions_3(data):
    """Model with multiple features. Makes a prediction with an accuracy of at least 80%."""
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger['Sex'] == "female":
if passenger['Sex'] == "female" and passenger['Age'] > 40 and passenger['Age'] < 50 and passenger['Pclass'] == 3:
predictions.append(0)
elif passenger['Sex'] == "female" and passenger['SibSp'] > 2 and passenger['Pclass'] == 3:
predictions.append(0)
else:
predictions.append(1)
elif passenger['Sex'] == "male" and passenger['Age'] < 10:
predictions.append(1)
elif passenger['Sex'] == "male" and passenger['Age'] > 20 and passenger['Age'] < 40 and passenger['Pclass'] == 1:
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
print accuracy_score(outcomes, predictions)
<END_TASK>
|