| text_prompt (stringlengths 168 to 30.3k) | code_prompt (stringlengths 67 to 124k) |
|---|---|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Why do we get the answer above rather than what we would expect?
Step2: Note that Python will make the output a float in this case. What happens for the following though?
Step3: It is good practice to add a decimal point after any number you really want to treat as a float.
Step4: Note that to use "named" functions such as sqrt or sin we need to import a module so that we have access to those functions. When we import a module (or package) in Python we are asking Python to go find the named code and make it active in our workspace (also called a namespace in more general parlance). Here is an example where we use Python's built-in math module
Step5: Note that in order to access these functions we need to prepend math. to the function names and to the constant $\pi$. We can forgo this and import everything math holds if we do the following
Step6: Note that many of these functions always return a float number regardless of their input.
Step7: Note that this expression does not give the result we expected based on the discussion above. What would we have to change to get it to work?
Step8: Note here we have left the defined variables as integers as it makes sense that they remain that way (fractional students aside).
Step9: Lists
Step10: Lists are defined with square brackets and delimited by commas. Note that there is another, related data type called a tuple, denoted by ( ), which is immutable (cannot be changed) once created. Let's try some list manipulations with our list of grades above.
Step11: Note that Python is 0-indexed, i.e. the first value in the list is accessed with index 0.
Step12: Add values to a list
Step13: Slicing is another important operation
Step14: Note that the range of values does not include the last index! This is important to remember for more than just lists, but we will get to that later.
Step15: Another property of lists is that you can put different types in them at the same time. This can be important to remember if you may have both int and float types.
Step16: Finally, one of the more useful list creation functions is range which creates a list with the bounds requested
Step17: Control Flow
Step18: for
Step19: Related to the for statement are the control statements break and continue. Ideally we can create a loop whose logic avoids these, but sometimes code can be more readable with judicious use of these statements.
Step20: The pass statement might appear fairly useless since it simply does nothing, but it can provide a stub reminding us to come back and implement something later.
Step21: Defining Functions
Step23: Let's try writing a slightly more complex (and useful) function. The Fibonacci sequence is formed by adding the previous two numbers of the sequence to get the next value (starting with [0, 1]).
|
<ASSISTANT_TASK:>
Python Code:
2 + 2
32 - (4 + 2)**2
1 / 2
1.0 / 2
4.0 + 4**(3/2)
4.0 + 4.0**(3.0 / 2.0)
3+5j
import math
math.sqrt(4)
math.sin(math.pi / 2.0)
math.exp(-math.pi / 4.0)
from math import *
sin(pi / 2.0)
num_students = 80
room_capacity = 85
(room_capacity - num_students) / room_capacity * 100.0
float(room_capacity - num_students) / float(room_capacity) * 100.0
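# An aside (an assumption that this notebook targets Python 2, where / between
# two ints truncates): an alternative to the explicit float() calls above is to
# switch on true division once at the top of the notebook:
# from __future__ import division
# after which (room_capacity - num_students) / room_capacity * 100.0 returns
# the expected fractional percentage.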
a = 10
b = a + 2
print b
grades = [90.0, 67.0, 85.0, 76.0, 98.0, 70.0]
grades[3]
len(grades)
grades = grades + [62.0, 82.0, 59.0]
print grades
grades[2:5]
grades[0:4]
grades[:4]
grades[4:]
grades[4:11]
remember = ["2", 2, 2.0]
remember[0] / 1
remember[1] / 1
remember[2] / 1
count = range(3,7)
print count
x = 4
if x > 5:
print "x is greater than 5"
elif x < 5:
print "x is less than 5"
else:
print "x is equal to 5"
for i in range(5):
print i
for i in range(3,7):
print i
for animal in ['cat', 'dog', 'chinchilla']:
print animal
for n in range(2, 10):
is_prime = True
for x in range(2, n):
if n % x == 0:
print n, 'equals', x, '*', n / x
is_prime = False
break
if is_prime:
print "%s is a prime number" % (n)
def my_func(x):
# Remember to implement this later!
pass
def my_print_function(x):
print x
my_print_function(3)
def my_add_function(a, b):
return a + b
my_add_function(3.0, 5.0)
def my_crazy_function(a, b, c=1.0):
d = a + b**c
return d
my_crazy_function(2.0, 3.0), my_crazy_function(2.0, 3.0, 2.0), my_crazy_function(2.0, 3.0, c=2.0)
def my_other_function(a, b, c=1.0):
return a + b, a + b**c, a + b**(3.0 / 7.0)
my_other_function(2.0, 3.0, c=2.0)
def fibonacci(n):
Return a list of the Fibonacci sequence up to n
values = [0, 1]
while values[-1] <= n:
values.append(values[-1] + values[-2])
print values
return values
fibonacci(100)
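# Aside (an assumption about the intent of "up to n"): the loop above checks
# values[-1] <= n before appending, so the returned list includes one value
# past n (e.g. 144 for n=100). A minimal sketch of a variant that stops
# strictly at n:
def fibonacci_up_to(n):
    values = [0, 1]
    while values[-1] + values[-2] <= n:
        values.append(values[-1] + values[-2])
    return values
fibonacci_up_to(100)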
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A single automaton can be read using spot.automaton(), with the same convention.
Step2: Error handling
Step3: Reading an empty file with spot.automaton() is an error.
|
<ASSISTANT_TASK:>
Python Code:
for a in spot.automata('ltl2tgba -s "a U b"; ltl2tgba --lbtt "b"|', 'ltl2tgba -H "GFa" "a & GFb"|'):
display(a)
spot.automaton('ltl2tgba -s6 "a U b"|')
spot.automaton('non-existing-command|')
for a in spot.automata("ltl2tgba 'a U b'|", 'ltl2tgba "syntax U U error"|'):
display(a)
spot.automaton('true|')
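# Both failure modes above (a command that does not exist, and empty input)
# raise Python exceptions. A rough sketch of handling them -- the exact
# exception class spot raises is an assumption here, so we catch broadly:
try:
    spot.automaton('non-existing-command|')
except Exception as e:
    print("could not read automaton:", e)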
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Naive Bayes Using Scikit-Learn
Step2: Pre-Processing The Data
|
<ASSISTANT_TASK:>
Python Code:
import sklearn as sk
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.datasets import fetch_olivetti_faces
faces = fetch_olivetti_faces()
faces.DESCR
faces.keys()
faces.images.shape
faces.data.shape
faces.target.shape
np.max(faces.data)
np.min(faces.data)
np.median(faces.data)
def print_faces(images , target , top_n):
fig = plt.figure(figsize=(20,20))
for i in range(top_n):
p = fig.add_subplot(20,20,i+1,xticks=[],yticks=[])
p.imshow(images[i],cmap=plt.cm.bone)
p.text(0,14,str(target[i]))
p.text(0,59,str(i))
print_faces(faces.images,faces.target,20)
plt.show()
from sklearn.svm import SVC
from sklearn.cross_validation import cross_val_score,KFold
from scipy.stats import sem
svc_1 = SVC(kernel='linear')
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(faces.data, faces.target, test_size=0.25, random_state=0)
def evaluate_cross_validation(clf, X, y, K):
cv = KFold(len(y) , K, shuffle =True, random_state = 0)
scores = cross_val_score(clf,X,y,cv=cv)
print scores
evaluate_cross_validation(svc_1,X_train,y_train,5)
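# scipy.stats.sem is imported above but not used inside
# evaluate_cross_validation; a common extension (an assumption, not part of
# the original cell) is to also summarise the K fold scores, e.g. by adding
#     print ("Mean score: {0:.3f} (+/-{1:.3f})".format(np.mean(scores), sem(scores)))
# as the last line of the function.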
from sklearn import metrics
def train_and_test(clf, X_train, X_test, y_train, y_test):
clf.fit(X_train, y_train)
print "Accuracy on training Set"
print clf.score(X_train, y_train)
print "Accuracy on testing Set"
print clf.score(X_test, y_test)
y_pred = clf.predict(X_test)
print "Classification Report"
print metrics.classification_report(y_test, y_pred)
print "Confudion Matrix"
print metrics.confusion_matrix(y_test, y_pred)
train_and_test(svc_1, X_train, X_test, y_train, y_test)
glasses = [
(10, 19), (30, 32), (37, 38), (50, 59), (63, 64),
(69, 69), (120, 121), (124, 129), (130, 139), (160, 161),
(164, 169), (180, 182), (185, 185), (189, 189), (190, 192),
(194, 194), (196, 199), (260, 269), (270, 279), (300, 309),
(330, 339), (358, 359), (360, 369)]
def create_target(segments):
y = np.zeros(faces.target.shape[0])
for (start, end) in segments:
y[start:end+1] = 1
return y
target_glasses = create_target(glasses)
X_train, X_test, y_train, y_test = train_test_split(faces.data, target_glasses, test_size=0.25, random_state=0)
svc_2 = SVC(kernel='linear')
evaluate_cross_validation(svc_2, X_train, y_train, 5)
train_and_test(svc_2, X_train, X_test, y_train, y_test)
X_test = faces.data[30:40]
y_test = target_glasses[30:40]
y_test.shape
select = np.ones(target_glasses.shape[0])
select[30:40] = 0
X_train = faces.data[select == 1]
y_train = target_glasses[select == 1]
y_train.shape
svc_3 = SVC(kernel='linear')
train_and_test(svc_3, X_train, X_test, y_train, y_test)
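# A small follow-up (not part of the original notebook): inspect the raw
# predictions for the ten held-out faces next to the true glasses labels.
y_pred = svc_3.predict(X_test)
print y_pred
print y_test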
from sklearn.datasets import fetch_20newsgroups
news = fetch_20newsgroups(subset='all')
print type(news.data), type(news.target), type(news.target_names)
print news.target_names
len(news.data)
len(news.target)
news.data[0] #Content of the data at 0th index
news.target[0], news.target_names[news.target[0]] # Target_Name
SPLIT_PERC = .75
split_size = int(len(news.data)*SPLIT_PERC)
X_train = news.data[:split_size]
X_test = news.data[split_size:]
y_train = news.target[:split_size]
y_test = news.target[split_size:]
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer, HashingVectorizer, CountVectorizer
clf_1 = Pipeline([('vect', CountVectorizer()), ('clf', MultinomialNB())])
clf_2 = Pipeline([('vect', HashingVectorizer(non_negative=True)), ('clf', MultinomialNB())])
clf_3 = Pipeline([('vect', TfidfVectorizer()), ('clf', MultinomialNB())])
from sklearn.cross_validation import cross_val_score, KFold
from scipy.stats import sem
clfs = [clf_1, clf_2, clf_3]
for clf in clfs:
print clf
evaluate_cross_validation(clf, news.data, news.target, 5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Twiss parameters with and without coupler kick
Step2: Trajectories with Coupler Kick
Step3: Horizontal and vertical emittances
Step4: Tracking of the particles through the lattice with coupler kicks
Steps
Step5: Eigenemittance
|
<ASSISTANT_TASK:>
Python Code:
# the output of plotting commands is displayed inline within frontends,
# directly below the code cell that produced it
%matplotlib inline
from time import time
# this python library provides generic shallow (copy)
# and deep copy (deepcopy) operations
from copy import deepcopy
# import from Ocelot main modules and functions
from ocelot import *
# extra function to track the Particle through a lattice
from ocelot.cpbd.track import lattice_track
# import from Ocelot graphical modules
from ocelot.gui.accelerator import *
# import lattice
from xfel_l1 import *
tws0 = Twiss()
tws0.E = 0.005
tws0.beta_x = 7.03383607232
tws0.beta_y = 4.83025657816
tws0.alpha_x = 0.981680481977
tws0.alpha_y = -0.524776086698
tws0.E = 0.1300000928
lat = MagneticLattice(cell_l1, start=bpmf_103_i1, stop=qd_210_b1)
# twiss parameters without coupler kick
tws1 = twiss(lat, tws0)
# adding coupler coefficients in [1/m]
for elem in lat.sequence:
if elem.__class__ == Cavity:
if not(".AH1." in elem.id):
# 1.3 GHz cavities
elem.vx_up = (-56.813 + 10.751j) * 1e-6
elem.vy_up = (-41.091 + 0.5739j) * 1e-6
elem.vxx_up = (0.99943 - 0.81401j) * 1e-3
elem.vxy_up = (3.4065 - 0.4146j) * 1e-3
elem.vx_down = (-24.014 + 12.492j) * 1e-6
elem.vy_down = (36.481 + 7.9888j) * 1e-6
elem.vxx_down = (-4.057 - 0.1369j) * 1e-3
elem.vxy_down = (2.9243 - 0.012891j) * 1e-3
else:
# AH1 cavity (3.9 GHz) module names are 'C3.AH1.1.1.I1', 'C3.AH1.1.2.I1', ...
# Modules with odd and even number X 'C3.AH1.1.X.I1' have different coefficients
module_number = float(elem.id.split(".")[-2])
if module_number % 2 == 1:
elem.vx_up = -0.00057076 - 1.3166e-05j
elem.vy_up = -3.5079e-05 + 0.00012636j
elem.vxx_up = -0.026045 - 0.042918j
elem.vxy_up = 0.0055553 - 0.023455j
elem.vx_down = -8.8766e-05 - 0.00024852j
elem.vy_down = 2.9889e-05 + 0.00014486j
elem.vxx_down = -0.0050593 - 0.013491j
elem.vxy_down = 0.0051488 + 0.024771j
else:
elem.vx_up = 0.00057076 + 1.3166e-05j
elem.vy_up = 3.5079e-05 - 0.00012636j
elem.vxx_up = -0.026045 - 0.042918j
elem.vxy_up = 0.0055553 - 0.023455j
elem.vx_down = 8.8766e-05 + 0.00024852j
elem.vy_down = -2.9889e-05 - 0.00014486j
elem.vxx_down = -0.0050593 - 0.013491j
elem.vxy_down = 0.0051488 + 0.024771j
# update transfer maps
lat.update_transfer_maps()
tws = twiss(lat, tws0)
bx0 = [tw.beta_x for tw in tws1]
by0 = [tw.beta_y for tw in tws1]
s0 = [tw.s for tw in tws1]
bx = [tw.beta_x for tw in tws]
by = [tw.beta_y for tw in tws]
s = [tw.s for tw in tws]
fig, ax = plot_API(lat, legend=False)
ax.plot(s0, bx0, "b", lw=1, label=r"$\beta_x$")
ax.plot(s, bx, "b--", lw=1, label=r"$\beta_x$, CK")
ax.plot(s0, by0, "r", lw=1, label=r"$\beta_y$")
ax.plot(s, by, "r--", lw=1, label=r"$\beta_y$, CK")
ax.set_ylabel(r"$\beta_{x,y}$, m")
ax.legend()
plt.show()
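# A rough way to quantify the effect of the coupler kicks using only the
# Twiss lists computed above (both were computed over the same lattice, so
# the element-by-element arrays are assumed to line up): the maximum
# relative beta-beating.
beta_beat_x = np.max(np.abs(np.array(bx) - np.array(bx0)) / np.array(bx0))
beta_beat_y = np.max(np.abs(np.array(by) - np.array(by0)) / np.array(by0))
print("max beta-beating: x = {:.2%}, y = {:.2%}".format(beta_beat_x, beta_beat_y))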
def plot_trajectories(lat):
f, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
for a in np.arange(-0.6, 0.6, 0.1):
cix_118_i1.angle = a*0.001
lat.update_transfer_maps()
p = Particle(px=0, E=0.130)
plist = lattice_track(lat, p)
s = [p.s for p in plist]
x = [p.x for p in plist]
y = [p.y for p in plist]
px = [p.px for p in plist]
py = [p.py for p in plist]
ax1.plot(s, x)
ax2.plot(s, y)
plt.xlabel("z [m]")
plt.show()
plot_trajectories(lat)
for elem in lat.sequence:
if elem.__class__ == Cavity:
if not(".AH1." in elem.id):
# 1.3 GHz cavities
elem.vx_up = 0.
elem.vy_up = 0.
elem.vxx_up = (0.99943 - 0.81401j) * 1e-3
elem.vxy_up = (3.4065 - 0.4146j) * 1e-3
elem.vx_down = 0.
elem.vy_down = 0.
elem.vxx_down = (-4.057 - 0.1369j) * 1e-3
elem.vxy_down = (2.9243 - 0.012891j) * 1e-3
# update transfer maps
lat.update_transfer_maps()
# plot the trajectories
plot_trajectories(lat)
# create ParticleArray with "one clice"
parray = generate_parray(sigma_tau=0., sigma_p=0.0, chirp=0.0)
print(parray)
# track the beam though the lattice
navi = Navigator(lat)
tws_track, _ = track(lat, parray, navi)
# plot emittances
emit_x = np.array([tw.emit_x for tw in tws_track])
emit_y = np.array([tw.emit_y for tw in tws_track])
gamma = np.array([tw.E for tw in tws_track])/m_e_GeV
s = [tw.s for tw in tws_track]
fig, ax = plot_API(lat, legend=False)
ax.plot(s, emit_x * gamma * 1e6, "b", lw=1, label=r"$\varepsilon_x$ [mm $\cdot$ mrad]")
ax.plot(s, emit_y * gamma * 1e6, "r", lw=1, label=r"$\varepsilon_y$ [mm $\cdot$ mrad]")
ax.set_ylabel(r"$\varepsilon_{x,y}$ [mm $\cdot$ mrad]")
ax.legend()
plt.show()
# plot emittances
emit_x = np.array([tw.eigemit_1 for tw in tws_track])
emit_y = np.array([tw.eigemit_2 for tw in tws_track])
gamma = np.array([tw.E for tw in tws_track])/m_e_GeV
s = [tw.s for tw in tws_track]
fig, ax = plot_API(lat, legend=False)
ax.plot(s, emit_x * gamma * 1e6, "b", lw=1, label=r"$\varepsilon_x$ [mm $\cdot$ mrad]")
ax.plot(s, emit_y * gamma * 1e6, "r", lw=1, label=r"$\varepsilon_y$ [mm $\cdot$ mrad]")
ax.set_ylabel(r"$\varepsilon_{x,y}$ [mm $\cdot$ mrad]")
ax.legend()
plt.show()
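# A one-line summary reusing the arrays computed above: the normalised
# eigen-emittances at the end of the lattice.
print("final normalised eigen-emittances [mm mrad]:",
      emit_x[-1] * gamma[-1] * 1e6, emit_y[-1] * gamma[-1] * 1e6)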
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Gradient Boosting Machine (GBM)
Step4: Clearly, Gradient Boosting has some similarities to Random Forests and Extra Trees
Step6: But the way the ensembles are constructed differs substantially between the models. In Random Forests and Extra Trees, all trees are created independently and each tree contributes equally to the final model. The trees in Gradient Boosting, however, depend on past trees and contribute unequally to the final model. Despite these differences, Random Forests, Extra Trees and Gradient Boosting all offer competitive predictive performance (Gradient Boosting often wins when carefully tuned). As for computation time, Gradient Boosting is often slower than Random Forests and Extra Trees, since the latter two build their trees independently and can therefore easily be trained in parallel.
Step10: Next, we wish to compute the derivative of this function with respect to the input $o_i$ so we can use it later when computing the derivative of the loss function. To be explicit we wish to find
Step14: Understanding Model Complexity
Step15: Recall that in a single regression tree, we can use the max_depth parameter to control how deep to grow the tree; the deeper the tree, the more variance it can explain.
Step16: The plot above shows that the decision boundaries made by decision trees are always perpendicular to the $x$ and $y$ axes (due to the fact that they consist of nested if-else statements). Let's see what happens when we use gradient boosting without tuning the parameters (only specifying a fixed max_depth).
Step17: Hopefully, it should be clear that compared with decision trees, a gradient boosting machine is far more susceptible to overfitting the training data; hence it is common to tune parameters including max_depth, max_features, min_samples_leaf and subsample (explained below) to keep overfitting in check.
|
<ASSISTANT_TASK:>
Python Code:
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(css_style = 'custom2.css', plot_style = False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import mean_squared_error
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
%watermark -d -t -v -p numpy,pandas,matplotlib,sklearn
# read in the data and shuffle the row order for model stability
np.random.seed(4321)
wine_path = os.path.join('..', 'winequality-white.csv')
wine = pd.read_csv(wine_path, sep = ';')
wine = wine.sample(frac = 1)
# train/test split the features and response column
y = wine['quality'].values
X = wine.drop('quality', axis = 1).values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 1234)
print('dimension of the dataset: ', wine.shape)
wine.head()
class GBMReg:
Regression gradient boosting machine using scikit learn's
decision tree as the base tree
Parameters
----------
n_estimators: int
number of trees to train
learning_rate: float
learning rate, some calls it shrinkage,
shrinks the contribution of each tree
to prevent overfitting
max_depth: int
controls how deep to grow the tree;
this is more of a decision tree parameter,
it is tune here to make later comparison fair
all the other parameters for a decision tree like
max_features or min_sample_split also applies to GBM,
it is just not used here as that is more
related to a single decision tree
def __init__(self, n_estimators, learning_rate, max_depth):
self.max_depth = max_depth
self.n_estimators = n_estimators
self.learning_rate = learning_rate
def fit(self, X, y):
self.estimators = []
# simply use the response as the original residuals
# and covert it to float type to prevent error warning
# that it's converting from int to float
residual = y.astype(np.float)
for i in range(self.n_estimators):
tree = DecisionTreeRegressor(max_depth = self.max_depth)
tree.fit(X, residual)
y_pred = tree.predict(X)
self.estimators.append(tree)
residual -= self.learning_rate * y_pred
return self
def predict(self, X):
y_pred = np.zeros(X.shape[0])
for tree in self.estimators:
y_pred += self.learning_rate * tree.predict(X)
return y_pred
# compare the results between a single decision tree,
# gradient boosting, the lower the mean square
# error, the better
tree = DecisionTreeRegressor(max_depth = 6)
tree.fit(X_train, y_train)
tree_y_pred = tree.predict(X_test)
print('tree: ', mean_squared_error(y_test, tree_y_pred))
# library to confirm result
gbm_reg = GBMReg(n_estimators = 100, learning_rate = 0.1, max_depth = 6)
gbm_reg.fit(X_train, y_train)
gbm_reg_y_pred = gbm_reg.predict(X_test)
print('gbm: ', mean_squared_error(y_test, gbm_reg_y_pred))
# gradient boosting for 100 trees and learning rate of 0.1
gbm = GradientBoostingRegressor(n_estimators = 100, learning_rate = 0.1, max_depth = 6)
gbm.fit(X_train, y_train)
gbm_y_pred = gbm.predict(X_test)
print('gbm library: ', mean_squared_error(y_test, gbm_y_pred))
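# GradientBoostingRegressor also exposes staged_predict, which yields the
# prediction after each additional tree; a short sketch (reusing the fitted
# library model above) of how the test error evolves as trees are added:
staged_mse = [mean_squared_error(y_test, y_pred) for y_pred in gbm.staged_predict(X_test)]
print('mse after 10 / 50 / 100 trees: ', staged_mse[9], staged_mse[49], staged_mse[99])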
def viz_importance(model, feature_names, n_features):
Visualize the relative importance of predictors
# sort the importance in decreasing order
importances = model.feature_importances_
idx = np.argsort(importances)[-n_features:]
names = feature_names[idx]
scores = importances[idx]
y_pos = np.arange(1, n_features + 1)
plt.barh(y_pos, scores, color = 'lightskyblue', align = 'center')
plt.yticks(y_pos, names)
plt.xlabel('Importance')
plt.title('Feature Importance Plot')
# change default figure and font size
plt.rcParams['figure.figsize'] = 8, 6
plt.rcParams['font.size'] = 12
viz_importance(gbm, wine.columns[:-1], X.shape[1])
def compute_softmax(x):
compute the softmax of vector
exp_x = np.exp(x)
softmax = exp_x / np.sum(exp_x)
return softmax
# this can be interpreted as the probability
# of belonging to the three classes
compute_softmax([1, 2, 3])
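# The implementation above can overflow for large inputs (np.exp of a big
# number); a standard numerically stable variant (an aside, the same trick
# used by _compute_softmax further below) subtracts the max first:
def compute_softmax_stable(x):
    x = np.asarray(x, dtype=float)
    exp_x = np.exp(x - np.max(x))
    return exp_x / np.sum(exp_x)
compute_softmax_stable([1, 2, 3])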
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier
class GBMClass:
Classification gradient boosting machine using scikit learn's
decision tree as the base tree
Parameters
----------
n_estimators: int
number of trees to train
learning_rate: float
learning rate, some calls it shrinkage,
shrinks the contribution of each tree
to prevent overfitting
max_depth: int
controls how deep to grow the tree;
this is more of a decision tree parameter,
it is tune here to make later comparison fair
all the other parameters for a decision tree like
max_features or min_sample_split also applies to GBM,
it is just not used here as that is more
related to a single decision tree
def __init__(self, n_estimators, learning_rate, max_depth):
self.max_depth = max_depth
self.n_estimators = n_estimators
self.learning_rate = learning_rate
def fit(self, X, y):
# encode labels with value between 0 and n_classes - 1,
# so we can easily one-hot encode them
self.le = LabelEncoder()
labels = self.le.fit_transform(y)
Y = self._to_categorical(labels)
del labels
# the predicted probability starts out with
# a value that's uniform over all classes;
# then we compute the residuals (negative gradient),
# which is the difference between the predicted
# probability and the class label
y_proba = np.full(Y.shape, 1 / Y.shape[1])
residuals = Y - y_proba
# train a base decision tree on the residuals
# for every single class, hence we end up with
# n_estimators * n_classes base tree models
self.estimators = []
for i in range(self.n_estimators):
for j in range(self.n_classes):
tree = DecisionTreeRegressor(max_depth = self.max_depth)
tree.fit(X, residuals[:, j])
y_pred = tree.predict(X)
self.estimators.append(tree)
residuals[:, j] -= self.learning_rate * y_pred
return self
def _to_categorical(self, y):
one hot encode class vector y
self.n_classes = np.amax(y) + 1
Y = np.zeros((y.shape[0], self.n_classes))
for i in range(y.shape[0]):
Y[i, y[i]] = 1.0
return Y
def predict(self, X):
# after predicting the class remember to
# transform it back to the actual class label
y_prob = self.predict_proba(X)
y_pred = np.argmax(y_prob, axis = 1)
y_pred = self.le.inverse_transform(y_pred)
return y_pred
def predict_proba(self, X):
# add up raw score for every class and convert
# it to probability using softmax
y_raw = np.zeros((X.shape[0], self.n_classes))
# obtain the tree for each class and add up the prediction
for c in range(self.n_classes):
class_tree = self.estimators[c::self.n_classes]
for tree in class_tree:
y_raw[:, c] += self.learning_rate * tree.predict(X)
y_proba = self._compute_softmax(y_raw)
return y_proba
def _compute_softmax(self, z):
compute the softmax of matrix z in a numerically stable way,
by substracting each row with the max of each row. For more
information refer to the following link:
https://nolanbconaway.github.io/blog/2017/softmax-numpy
shift_z = z - np.amax(z, axis = 1, keepdims = 1)
exp_z = np.exp(shift_z)
softmax = exp_z / np.sum(exp_z, axis = 1, keepdims = 1)
return softmax
# compare the results between a single decision tree,
# gradient boosting, the higher the accuracy, the better
tree = DecisionTreeClassifier(max_depth = 6)
tree.fit(X_train, y_train)
tree_y_pred = tree.predict(X_test)
print('tree: ', accuracy_score(y_test, tree_y_pred))
# gradient boosting for 150 trees and learning rate of 0.2
# unlike random forest, gradient boosting's base tree can be shallower
# meaning that their depth can be smaller
gbm_class = GBMClass(n_estimators = 150, learning_rate = 0.2, max_depth = 3)
gbm_class.fit(X_train, y_train)
gbm_class_y_pred = gbm_class.predict(X_test)
print('gbm: ', accuracy_score(y_test, gbm_class_y_pred))
# library to confirm results are comparable
gbm = GradientBoostingClassifier(n_estimators = 150, learning_rate = 0.2, max_depth = 3)
gbm.fit(X_train, y_train)
gbm_y_pred = gbm.predict(X_test)
print('gbm library: ', accuracy_score(y_test, gbm_y_pred))
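# A quick sanity check on the custom implementation: predict_proba should
# return one probability per class, and every row should sum to 1.
proba = gbm_class.predict_proba(X_test[:5])
print('class probability row sums: ', proba.sum(axis=1))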
def ground_truth(x):
Ground truth -- function to approximate
return x * np.sin(x) + np.sin(2 * x)
def gen_data(low, high, n_samples):
generate training and testing data from the ground truth function
np.random.seed(15)
X = np.random.uniform(low, high, size = n_samples)
# generate the response from the ground truth function and add
# some random noise to it
y = ground_truth(X) + np.random.normal(scale = 2, size = n_samples)
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size = 0.2, random_state = 3)
return X_train, X_test, y_train, y_test
def plot_data(x_plot, X_train, X_test, y_train, y_test):
plot training and testing data
s = 20
alpha = 0.4
plt.plot(x_plot, ground_truth(x_plot), alpha = alpha, label = 'ground truth')
plt.scatter(X_train, y_train, s = s, alpha = alpha)
plt.scatter(X_test, y_test, s = s, alpha = alpha, color = 'red')
plt.xlim(( 0, 10 ))
plt.ylabel('y')
plt.xlabel('x')
plt.legend(loc = 'upper left')
plt.show()
low = 0
high = 10
x_plot = np.linspace(low, high, 500)
X_train, X_test, y_train, y_test = gen_data(low = low, high = high, n_samples = 100)
plot_data(x_plot, X_train, X_test, y_train, y_test)
# when using scikit-learn, the training data has to be
# a 2d-array even if it only has 1 features
tree1 = DecisionTreeRegressor(max_depth = 1)
tree1.fit(X_train[:, np.newaxis], y_train)
tree2 = DecisionTreeRegressor(max_depth = 3)
tree2.fit(X_train[:, np.newaxis], y_train)
plt.plot(x_plot, tree1.predict(x_plot[:, np.newaxis]),
label = 'RT max_depth=1', color = 'g', alpha = 0.9, linewidth = 2)
plt.plot(x_plot, tree2.predict(x_plot[:, np.newaxis]),
label = 'RT max_depth=3', color = 'g', alpha = 0.7, linewidth = 1)
plot_data(x_plot, X_train, X_test, y_train, y_test)
gbm = GradientBoostingRegressor(n_estimators = 300, max_depth = 6, learning_rate = 0.1)
gbm.fit(X_train[:, np.newaxis], y_train)
plt.plot(x_plot, gbm.predict(x_plot[:, np.newaxis]),
label = 'GBM max_depth=6', color = 'r', alpha = 0.9, linewidth = 2)
plot_data(x_plot, X_train, X_test, y_train, y_test)
param_grid = {
'max_depth': [4, 6],
'min_samples_leaf': [3, 5, 8],
'subsample': [0.9, 1]
# 'max_features': [1.0, 0.3, 0.1] # not possible in this example (there's only 1)
}
gs_gbm = GridSearchCV(gbm, param_grid, scoring = 'neg_mean_squared_error', n_jobs = 4)
gs_gbm.fit(X_train[:, np.newaxis], y_train)
print('Best hyperparameters: %r' % gs_gbm.best_params_)
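# A check of how the tuned model generalises to the held-out points from
# gen_data (GridSearchCV.predict delegates to the refitted best model):
print('tuned GBM test mse: %.3f' % mean_squared_error(y_test, gs_gbm.predict(X_test[:, np.newaxis])))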
plt.plot(x_plot, gs_gbm.predict(x_plot[:, np.newaxis]),
label = 'GBM tuned', color = 'r', alpha = 0.9, linewidth = 2)
plot_data(x_plot, X_train, X_test, y_train, y_test)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Accumulate
Step2: We can use any other arithmetic operator we want, such as *. In fact, we can use any function that takes two operands and returns a new value. For summing, the two "input" numbers of the function are the previous accumulated value and the next value in the sequence. The result of that function is the new accumulated value. + and * are the most common operators.
Step3: Map
Step4: In other words
Step5: Notice that I have included the initialization of the empty list as part of this code snippet. The reason is that we really want to mentally associate the initialization with this code pattern.
Step6: Exercise
Step7: Exercise
Step8: Combine
Step9: When traversing more than a single list, we typically need to use an indexed loop rather than a for-each loop
Step10: List comprehension for combine operation
Step11: Tricks for formatting
Step12: Note that you might be tempted to use a double for-each loop in a list comprehension, but you get the cross product of each value in quantity times each value in your price. That's not what we want, as you can see here
Step13: The zip function
Step14: Or, with a list comprehension
Step15: Split
Step16: We can even traverse a list to get a matrix (list of lists)
Step17: Slice
Step18: Warning
Step19: List comprehension for filter operation
Step20: We can also filter on one column but keep the data within each row together. Here is an example, using Excel, that filters Oscar winners from the list of nominees (the condition is winner equals 1)
Step21: The code for the looping construct looks like (jumping straight to the list comprehension form)
Step22: The output is a list of lists, a filtered table of rows one per movie.
Step23: Exercise
Step24: Exercise
Step25: Search
Step26: The break statement breaks out of the immediately-enclosing loop, regardless of the type of loop.
Step27: To determine the index of the end of the string, programmers tend to use the length of the string. The length works out to be an index whose value is one past the end of the string, which is what we want for a slice using an exclusive right index.
Step28: Because this is not a one-dimensional data structure, we can't use a simple "for each element in the matrix" loop to examine each element. The most common pattern for iterating through all elements of an nrows x ncols matrix looks like this
Step29: Notice how the column j value varies more quickly than the row i value. We can reverse this order of traversal by changing the loop order
Step30: With the j loop on the outside, it will vary less quickly than the inner i loop.
Step31: Now, let's use an accumulator to sum all of the elements of a 3x3 matrix, we let nrows=3 and ncols=3 and use an addition operation
Step32: Processing images
Step33: Warning
Step34: Exercise
Step35: Exercise
|
<ASSISTANT_TASK:>
Python Code:
UnitPrice = [38.94, 208.16, 8.69, 195.99]
Shipping = [35, 68.02, 2.99, 3.99, 5.94, 4.95, 7.72, 6.22]
names=['Xue', 'Mary', 'Bob']
Oscars = [
[1984, "A Soldier's Story", 0],
[1984, 'Places in the Heart', 0],
[1984, 'The Killing Fields', 0],
[1984, 'A Passage to India', 0],
[1984, 'Amadeus', 1],
[1985, "Prizzi's Honor", 0],
[1985, 'Kiss of the Spider Woman', 0],
[1985, 'Witness', 0],
[1985, 'The Color Purple', 0],
[1985, 'Out of Africa', 1]
]
Quantity = [6, 49, 27, 30, 19, 21, 12, 22, 21]
A = [
[1, 3, 10],
[4, -9, 0],
[2, 5, 8]
]
B = [
[7, 4, 0],
[4, 3, 1],
[1, 6, 8]
]
first=['Xue', 'Mary', 'Robert']
last=['Li', 'Smith', 'Dixon']
sum = 0
for q in Quantity:
sum += q # same as: sum = sum + q
print(sum)
even = 0
odd = 0
for q in Quantity:
if q % 2 == 0: even += 1 # % is mod operator
else: odd += 1
print(even, odd)
UnitPrice = [38.94, 208.16, 8.69, 195.99]
Discounted = [] # empty list
Discounted = [] # empty list
for price in UnitPrice:
Discounted.append(price * 0.95)
print(Discounted)
Discounted = [price * 0.95 for price in UnitPrice] # a list comprehension
print(Discounted)
Discounted = [price/2 for price in UnitPrice] # a list comprehension
print(Discounted)
names = ['Xue', 'Mary', 'Robert']
namelens = [len(name) for name in names]
print(namelens)
Quantity = [6, 49, 27, 30, 19, 21, 12, 22, 21]
UnitPrice = [38.94, 208.16, 8.69, 195.99, 21.78, 6.64, 7.3, 42.76, 138.14]
Cost = []
for i in range(len(Quantity)): # from 0 to length of Quantity-1, inclusively
Cost.append( Quantity[i] * UnitPrice[i] )
print(Cost)
Cost = [Quantity[i] * UnitPrice[i] for i in range(len(Quantity))]
print(Cost)
f"{3.14159:.2f}"
[f"{c:.2f}" for c in Cost]
[round(c,2) for c in Cost]
print( [q*p for q in Quantity for p in UnitPrice] ) # WRONG!
Cost = []
for q,u in zip(Quantity,UnitPrice):
Cost.append( q * u )
print(Cost)
Cost = [q * u for q,u in zip(Quantity,UnitPrice)]
print(Cost)
'1 2 3'.split(' ')
'1 2 3'.split(' ')
values = '1 2 3'.split(' ')
[int(v) for v in values]
rows = [
'1 2 3',
'3 4 5'
]
matrix = [[int(v) for v in row.split(' ')] for row in rows]
matrix
from lolviz import *
objviz(matrix)
names=['Xue', 'Mary', 'Bob']
print(f"Length {len(names)}")
print(names[0:1])
print(names[0:2])
print(names[0:3])
print(names[2:3])
Shipping = [35, 68.02, 2.99, 3.99, 5.94, 4.95, 7.72, 6.22]
Shipping2 = []
for x in Shipping:
if x < 10:
Shipping2.append(x)
print(Shipping2)
Shipping2 = [x for x in Shipping if x < 10]
print(Shipping2)
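# For comparison (an aside, not part of the original walkthrough): Python's
# built-in filter() does the same job, though the comprehension above is
# usually considered more readable.
Shipping2 = list(filter(lambda x: x < 10, Shipping))
print(Shipping2)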
Oscars = [
[1984, "A Soldier's Story", 0],
[1984, 'Places in the Heart', 0],
[1984, 'The Killing Fields', 0],
[1984, 'A Passage to India', 0],
[1984, 'Amadeus', 1],
[1985, "Prizzi's Honor", 0],
[1985, 'Kiss of the Spider Woman', 0],
[1985, 'Witness', 0],
[1985, 'The Color Purple', 0],
[1985, 'Out of Africa', 1]
]
objviz(Oscars)
Oscars2 = [movie for movie in Oscars if movie[2]==1]
print(Oscars2)
objviz(Oscars2)
Oscars2 = [movie for movie in Oscars if len(movie[1].split(' '))==3]
print(Oscars2)
Shipping2 = [x*2 for x in Shipping if x < 10]
print(Shipping2)
names=['Xue', 'Mary', 'Bob']
names2 = [name for name in names if name.startswith('X')]
print(names2)
first=['Xue', 'Mary', 'Robert'] # our given input
target = 'Mary' # searching for Mary
index = -1
for i in range(len(first)): # i is in range [0..n-1] or [0..n)
if first[i]==target:
index = i
break
print(index)
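# An aside: for simple equality searches, Python lists already provide this
# linear scan via the index() method (which raises ValueError when the
# target is missing):
print(first.index('Mary'))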
name = 'Xue Li'
# SEARCH
index = -1
for i in range(len(name)):
if name[i]==' ':
index = i
break
print(f"Index of space is {index}")
# SLICE
print(f"First: {name[0:index]}")
print(f"Last: {name[index+1:]}") # or name[index+1:len(name)]
A = [
[1, 3, 10],
[4, -9, 0],
[2, 5, 8]
]
nrows = 3 # or len(A)
ncols = 3 # or len(A[0]) length of first row
for i in range(nrows):
for j in range(ncols):
print( i, j )
for j in range(ncols):
for i in range(nrows):
print(i, j)
coords = [f"{i},{j}" for i in range(nrows) for j in range(ncols)]
print(coords)
sum = 0
for i in range(nrows):
for j in range(ncols):
sum = sum + A[i][j]
print(sum)
A = [
[1, 3, 10],
[4, -9, 0],
[2, 5, 8]
]
B = [
[7, 4, 0],
[4, 3, 1],
[1, 6, 8]
]
# Use list comprehension to init list of lists
C = [[0]*ncols for i in range(nrows)]
for i in range(nrows):
for j in range(ncols):
C[i][j] = A[i][j] + B[i][j]
print(C)
import numpy as np
np.array(A) + np.array(B)
D=[[0]*ncols]*nrows
objviz(D)
D[0][1] = 3
D
# compare to this:
objviz([[0]*ncols for i in range(nrows)])
first=['Xue', 'Mary', 'Robert']
last=['Li', 'Smith', 'Dixon']
for f in first:
for l in last:
print(f+' '+l)
print([f+' '+l for f in first for l in last])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1:
Step2:
Step3:
|
<ASSISTANT_TASK:>
Python Code:
from itertools import repeat
from sympy import *
#from type_system import *
%run ../../src/commons.py
%run ./type-system.py
init_printing()
x,y,m,n,t,z = symbols('x y m n t z', commutative=True)
alpha, beta, gamma, eta = symbols(r'\alpha \beta \gamma \eta', commutative=True)
f,g = Function('f'), Function('g')
bin_tree_gfs = bin_tree(tyvar(alpha)[z]).gf()
bin_tree_gfs
bin_tree_gf = bin_tree_gfs[0]
bin_tree_gf.series(z, n=10, kernel='ordinary')
bin_tree_gf.series(z, n=10, kernel='catalan')
occupancy(bin_tree_gf, syms=[alpha], objects='unlike', boxes='unlike').series(z)
bin_tree_of_boolean_gfs = bin_tree(maybe(tyvar(alpha))[z]).gf()
bin_tree_of_boolean_gfs
bin_tree_of_boolean_gf = bin_tree_of_boolean_gfs[0]
occupancy(bin_tree_of_boolean_gf, syms=[alpha], objects='unlike', boxes='unlike').series(z,n=6, kernel='ordinary')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Autoencoders
Step2: Each image in the dataset is 28 x 28 pixels. Let's flatten each to a 1-dimensional vector of length 784.
Step3: Autoencoder Structure
Step4: Encoder
Step5: Decoder
Step6: Training
Step7: We will train for 50 epochs, using EarlyStopping to stop training early if validation loss improves by less than 0.0001 for 10 consecutive epochs. Using a batch size of 2048, this should take 1-2 minutes to train.
Step8: Visualize Predictions
Step9: This shows 10 original images with their corresponding reconstructed images directly below. Clearly, our autoencoder captured the basic digit structure of each image, though the reconstructed images are less sharp.
Step10: Now we can visualize the latent representation of each image that the autoencoder learned. Since the encoder reduces the 784-dimensional original image to a 36-dimensional representation, it essentially performs image compression.
Step11: For comparison, here are what 5 images look like before we add noise
Step12: After we add noise, the images look like this
Step13: As you can see, the images are quite noisy and difficult to denoise even with the human eye. Luckily, autoencoders are much better at this task. We'll follow a similar architecture as before, but this time we'll train the model using the noisy images as input and the original, un-noisy images as output.
Step14: Decoder
Step15: Training
Step16: We will only train for 10 epochs this time since the model is more complex and takes longer to train. This should take around a minute.
Step17: Visualize Denoised Images
Step18: As we can see, the autoencoder is mostly successful in recovering the original image, though a few denoised images are still blurry or unclear. More training or a different model architecture may help.
Step19: We'll use the smaller videos (80x60) in this exercise in order to fit within Colab's memory limits and in order to get our model to run faster.
|
<ASSISTANT_TASK:>
Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Set random seeds for reproducible results.
import numpy as np
import tensorflow as tf
np.random.seed(42)
tf.random.set_seed(42)
# Load dataset using keras data loader.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
image_size = x_train.shape[1]
original_dim = image_size * image_size
# Flatten each image into a 1-d vector.
x_train = np.reshape(x_train, [-1, original_dim])
x_test = np.reshape(x_test, [-1, original_dim])
# Rescale pixel values to a 0-1 range.
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
print('x_train:', x_train.shape)
print('x_test:', x_test.shape)
from tensorflow.keras import Input
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model
latent_dim = 36
# input layer (needed for the Model API).
input_layer = Input(shape=(original_dim,), name='encoder_input')
# Notice that with all layers except for the first,
# we need to specify which layer is used as input.
latent_layer = Dense(latent_dim, activation='relu',
name='latent_layer')(input_layer)
encoder = Model(input_layer, latent_layer, name='encoder')
encoder.summary()
latent_inputs = Input(shape=(latent_dim,), name='decoder_input')
output_layer = Dense(original_dim, name='decoder_output')(latent_inputs)
decoder = Model(latent_inputs, output_layer, name='decoder')
decoder.summary()
autoencoder = Model(
input_layer,
decoder(encoder(input_layer)),
name="autoencoder"
)
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.summary()
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_loss',
# minimum change in loss that qualifies as "improvement"
# higher values of min_delta lead to earlier stopping
min_delta=0.0001,
# threshold for number of epochs with no improvement
patience=10,
verbose=1
)
autoencoder.fit(
# input
x_train,
# output
x_train,
epochs=50,
batch_size=2048,
validation_data=(x_test, x_test),
callbacks=[early_stopping]
)
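# A one-number summary of reconstruction quality on the test set, using the
# same mean-squared-error loss the model was trained with:
print('test reconstruction mse:', autoencoder.evaluate(x_test, x_test, verbose=0))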
decoded_imgs = autoencoder.predict(x_test)
import matplotlib.pyplot as plt
def visualize_imgs(nrows, axis_names, images, sizes, n=10):
'''
Plots images in a grid layout.
nrows: number of rows of images to display
axis_names: list of names for each row
images: list of arrays of images
sizes: list of image size to display for each row
n: number of images to display per row (default 10)
nrows = len(axis_names) = len(images)
'''
fig, axes = plt.subplots(figsize=(20,4), nrows=nrows, ncols=1, sharey=False)
for i in range(nrows):
axes[i].set_title(axis_names[i], fontsize=16)
axes[i].axis('off')
for col in range(n):
for i in range(nrows):
ax = fig.add_subplot(nrows, n, col + 1 + i * n)
plt.imshow(images[i][col].reshape(sizes[i], sizes[i]))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout()
plt.show()
visualize_imgs(
2,
['Original Images', 'Reconstructions'],
[x_test, decoded_imgs],
[image_size, image_size]
)
# Compress original images.
encoded_imgs = encoder.predict(x_test)
# Reconstruct original images.
decoded_imgs = decoder.predict(encoded_imgs)
visualize_imgs(
3,
['Original Images', '36-dimensional Latent Representation', 'Reconstructions'],
[x_test, encoded_imgs, decoded_imgs],
[image_size, 6, image_size]
)
from imgaug import augmenters
# Reshape images to 3-dimensional for augmenter. Since the images were
# originally 2-dimensional, the third dimension is just 1.
x_train = x_train.reshape(-1, image_size, image_size, 1)
x_test = x_test.reshape(-1, image_size, image_size, 1)
# p is the probability of changing a pixel to noise.
# higher values of p mean noisier images.
noise = augmenters.SaltAndPepper(p=0.6)
# We could chain multiple augmenters using Sequential.
seq = augmenters.Sequential([noise])
# Rescale pixel values to 0-255 (instead of 0-1) for augmenter,
# add noise to images, then rescale pixel values back to 0-1.
x_train_noise = seq.augment_images(x_train * 255) / 255
x_test_noise = seq.augment_images(x_test * 255) / 255
f, ax = plt.subplots(figsize=(20,2), nrows=1, ncols=5)
for i in range(5, 10):
ax[i-5].imshow(x_train[i].reshape(image_size, image_size))
plt.show()
f, ax = plt.subplots(figsize=(20,2), nrows=1, ncols=5)
for i in range(5, 10):
ax[i-5].imshow(x_train_noise[i].reshape(image_size, image_size))
plt.show()
from tensorflow.keras.layers import Conv2D, MaxPool2D, UpSampling2D
filter_1 = 64
filter_2 = 32
filter_3 = 16
kernel_size = (3, 3)
pool_size = (2, 2)
latent_dim = 4
input_layer = Input(shape=(image_size, image_size, 1))
# First convolutional layer
encoder_conv1 = Conv2D(filter_1, kernel_size,
activation='relu', padding='same')(input_layer)
encoder_pool1 = MaxPool2D(pool_size, padding='same')(encoder_conv1)
# Second convolutional layer
encoder_conv2 = Conv2D(filter_2, kernel_size, activation='relu',
padding='same')(encoder_pool1)
encoder_pool2 = MaxPool2D(pool_size, padding='same')(encoder_conv2)
# Third convolutional layer
encoder_conv3 = Conv2D(filter_3, kernel_size,
activation='relu', padding='same')(encoder_pool2)
latent_layer = MaxPool2D(pool_size, padding='same')(encoder_conv3)
encoder_denoise = Model(input_layer, latent_layer, name='encoder')
encoder_denoise.summary()
latent_inputs = Input(shape=(latent_dim, latent_dim, filter_3))
# First convolutional layer
decoder_conv1 = Conv2D(filter_3, kernel_size,
activation='relu', padding='same')(latent_inputs)
decoder_up1 = UpSampling2D(pool_size)(decoder_conv1)
# Second convolutional layer
decoder_conv2 = Conv2D(filter_2, kernel_size,
activation='relu', padding='same')(decoder_up1)
decoder_up2 = UpSampling2D(pool_size)(decoder_conv2)
# Third convolutional layer
decoder_conv3 = Conv2D(filter_1, kernel_size,
activation='relu')(decoder_up2)
decoder_up3 = UpSampling2D(pool_size)(decoder_conv3)
# Output layer, which outputs images of size (28 x 28 x 1)
output_layer = Conv2D(1, kernel_size, padding='same')(decoder_up3)
decoder_denoise = Model(latent_inputs, output_layer, name='decoder')
decoder_denoise.summary()
denoise_autoencoder = Model(
input_layer,
decoder_denoise(encoder_denoise(input_layer))
)
denoise_autoencoder.compile(optimizer='adam', loss='mse')
denoise_autoencoder.summary()
denoise_autoencoder.fit(
# Input
x_train_noise,
# Output
x_train,
epochs=10,
batch_size=2048,
validation_data=(x_test_noise, x_test),
callbacks=[early_stopping]
)
denoised_imgs = denoise_autoencoder.predict(x_test_noise[:10])
visualize_imgs(
3,
['Noisy Images', 'Denoised Images', 'Original Images'],
[x_test_noise, denoised_imgs, x_test],
[image_size, image_size, image_size]
)
! chmod 600 kaggle.json && (ls ~/.kaggle 2>/dev/null || mkdir ~/.kaggle) && cp kaggle.json ~/.kaggle/ && echo 'Done'
! kaggle datasets download joshmcadams/mighty-mouse-wolf-wolf
! unzip mighty-mouse-wolf-wolf.zip
! ls
# Your answer goes here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading continuous data
Step2: <div class="alert alert-info"><h4>Note</h4><p>This size can also be obtained by examining `raw._data.shape`.
Step3: Selecting subsets of channels and samples
Step4: Notice the different scalings of these types
Step5: You can restrict the data to a specific time range
Step6: And drop channels by name
Step7: Concatenating
|
<ASSISTANT_TASK:>
Python Code:
import mne
import os.path as op
from matplotlib import pyplot as plt
data_path = op.join(mne.datasets.sample.data_path(), 'MEG',
'sample', 'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(data_path, preload=True)
raw.set_eeg_reference('average', projection=True) # set EEG average reference
# Give the sample rate
print('sample rate:', raw.info['sfreq'], 'Hz')
# Give the size of the data matrix
print('%s channels x %s samples' % (len(raw), len(raw.times)))
# Extract data from the first 5 channels, from 1 s to 3 s.
sfreq = raw.info['sfreq']
data, times = raw[:5, int(sfreq * 1):int(sfreq * 3)]
_ = plt.plot(times, data.T)
_ = plt.title('Sample channels')
# Pull all MEG gradiometer channels:
# Make sure to use .copy() or it will overwrite the data
meg_only = raw.copy().pick_types(meg=True)
eeg_only = raw.copy().pick_types(meg=False, eeg=True)
# The MEG flag in particular lets you specify a string for more specificity
grad_only = raw.copy().pick_types(meg='grad')
# Or you can use custom channel names
pick_chans = ['MEG 0112', 'MEG 0111', 'MEG 0122', 'MEG 0123']
specific_chans = raw.copy().pick_channels(pick_chans)
print(meg_only)
print(eeg_only)
print(grad_only)
print(specific_chans)
f, (a1, a2) = plt.subplots(2, 1)
eeg, times = eeg_only[0, :int(sfreq * 2)]
meg, times = meg_only[0, :int(sfreq * 2)]
a1.plot(times, meg[0])
a2.plot(times, eeg[0])
del eeg, meg, meg_only, grad_only, eeg_only, data, specific_chans
raw = raw.crop(0, 50) # in seconds
print('New time range from', raw.times.min(), 's to', raw.times.max(), 's')
nchan = raw.info['nchan']
raw = raw.drop_channels(['MEG 0241', 'EEG 001'])
print('Number of channels reduced from', nchan, 'to', raw.info['nchan'])
# Create multiple :class:`Raw <mne.io.RawFIF>` objects
raw1 = raw.copy().crop(0, 10)
raw2 = raw.copy().crop(10, 20)
raw3 = raw.copy().crop(20, 40)
# Concatenate in time (also works without preloading)
raw1.append([raw2, raw3])
print('Time extends from', raw1.times.min(), 's to', raw1.times.max(), 's')
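# A quick sanity check on the concatenated object: its duration should
# roughly equal the three crops added together.
print('duration after append: %.1f s' % (raw1.times.max() - raw1.times.min()))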
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: the glob function finds a file using a wildcard to save typing (google
Step2: Read the radiance data from MODIS_SWATH_Type_L1B/Data Fields/EV_1KM_Emissive
Step3: the data is stored as unsigned, 2 byte integers which can hold values from 0 to $2^{16}$ - 1 = 65,535
Step4: we need to apply a
Step5: and here is the offset for 16 channels
Step7: now convert this to brightness temperature
Step8: histogram the calibrated radiances and show that they lie between
Step9: Read MYD03 Geolocation Fields
Step10: now regrid the radiances and brightness temperatures on a 0.1 x 0.1 degree regular lat/lon grid
Step11: Plot this gridded data without a map projections
Step12: Now replot using an lcc (Lambert conformal conic) projection from basemap at http
Step13: repeat for brightness temperature
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import os,site
import glob
import h5py
from IPython.display import Image
import numpy as np
from matplotlib import pyplot as plt
#
# add the lib folder to the path assuming it is on the same
# level as the notebooks folder
#
libdir=os.path.abspath('../lib')
site.addsitedir(libdir)
import h5dump
h5_filename=glob.glob('../data/MYD02*.h5')
print("found {}".format(h5_filename))
h5_file=h5py.File(h5_filename[0])
index31=10
chan31=h5_file['MODIS_SWATH_Type_L1B']['Data Fields']['EV_1KM_Emissive'][index31,:,:]
print(chan31.shape,chan31.dtype)
chan31[:3,:3]
scale=h5_file['MODIS_SWATH_Type_L1B']['Data Fields']['EV_1KM_Emissive'].attrs['radiance_scales'][...]
print(scale)
offset=h5_file['MODIS_SWATH_Type_L1B']['Data Fields']['EV_1KM_Emissive'].attrs['radiance_offsets'][...]
print(offset)
chan31=(chan31 - offset[index31])*scale[index31]
def planckInvert(wavel,Llambda):
input wavelength in microns and Llambda in W/m^2/micron/sr, output
output brightness temperature in K (note that we've remove the factor
of pi because we are working with radiances, not fluxes)
c=2.99792458e+08 #m/s -- speed of light in vacumn
h=6.62606876e-34 #J s -- Planck's constant
kb=1.3806503e-23 # J/K -- Boltzman's constant
c1=2.*h*c**2.
c2=h*c/kb
Llambda=Llambda*1.e6 #convert to W/m^2/m/sr
wavel=wavel*1.e-6 #convert wavelength to m
Tbright=c2/(wavel*np.log(c1/(wavel**5.*Llambda) + 1.))
return Tbright
chan31_Tbright=planckInvert(11.02, chan31)
%matplotlib inline
import matplotlib.pyplot as plt
out=plt.hist(chan31_Tbright.flat)
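# A sanity check on the calibration: 11 micron brightness temperatures for
# cloud tops and surface should fall roughly between 200 K and 320 K.
print('Tbright range: {:.1f} K to {:.1f} K'.format(chan31_Tbright.min(), chan31_Tbright.max()))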
geom_filename=glob.glob('../data/MYD03*.h5')
print("found {}".format(h5_filename))
geom_h5=h5py.File(geom_filename[0])
h5dump.dumph5(geom_h5)
the_long=geom_h5['MODIS_Swath_Type_GEO']['Geolocation Fields']['Longitude'][...]
the_lat=geom_h5['MODIS_Swath_Type_GEO']['Geolocation Fields']['Latitude'][...]
print(the_long.shape,the_lat.shape)
print('===================================================')
print('Size of Longitude: {}'.format(the_long.shape))
print('Longitude Range: {} ~ {}'.format(np.min(the_long), np.max(the_long)))
print('===================================================')
print('Size of Latitude: {}'.format(the_lat.shape))
print('Latitude Range: {} ~ {}'.format(np.min(the_lat), np.max(the_lat)))
def reproj_L1B(raw_data, raw_x, raw_y, xlim, ylim, res):
'''
=========================================================================================
Reproject MODIS L1B file to a regular grid
-----------------------------------------------------------------------------------------
d_array, x_array, y_array, bin_count = reproj_L1B(raw_data, raw_x, raw_y, xlim, ylim, res)
-----------------------------------------------------------------------------------------
Input:
raw_data: L1B data, N*M 2-D array.
raw_x: longitude info. N*M 2-D array.
raw_y: latitude info. N*M 2-D array.
xlim: range of longitude, a list.
ylim: range of latitude, a list.
res: resolution, single value.
Output:
d_array: L1B reprojected data.
x_array: reprojected longitude.
y_array: reprojected latitude.
bin_count: how many raw data point included in a reprojected grid.
Note:
function do not performs well if "res" is larger than the resolution of input data.
size of "raw_data", "raw_x", "raw_y" must agree.
=========================================================================================
'''
x_bins=np.arange(xlim[0], xlim[1], res)
y_bins=np.arange(ylim[0], ylim[1], res)
# x_indices=np.digitize(raw_x.flat, x_bins)
# y_indices=np.digitize(raw_y.flat, y_bins)
x_indices=np.searchsorted(x_bins, raw_x.flat, 'right')
y_indices=np.searchsorted(y_bins, raw_y.flat, 'right')
y_array=np.zeros([len(y_bins), len(x_bins)], dtype=np.float)
x_array=np.zeros([len(y_bins), len(x_bins)], dtype=np.float)
d_array=np.zeros([len(y_bins), len(x_bins)], dtype=np.float)
bin_count=np.zeros([len(y_bins), len(x_bins)], dtype=np.int)
for n in range(len(y_indices)): #indices
bin_row=y_indices[n]-1 # '-1' is because we call 'right' in np.searchsorted.
bin_col=x_indices[n]-1
bin_count[bin_row, bin_col] += 1
x_array[bin_row, bin_col] += raw_x.flat[n]
y_array[bin_row, bin_col] += raw_y.flat[n]
d_array[bin_row, bin_col] += raw_data.flat[n]
for i in range(x_array.shape[0]):
for j in range(x_array.shape[1]):
if bin_count[i, j] > 0:
x_array[i, j]=x_array[i, j]/bin_count[i, j]
y_array[i, j]=y_array[i, j]/bin_count[i, j]
d_array[i, j]=d_array[i, j]/bin_count[i, j]
else:
d_array[i, j]=np.nan
x_array[i, j]=np.nan
y_array[i,j]=np.nan
return d_array, x_array, y_array, bin_count
xlim=[np.min(the_long), np.max(the_long)]
ylim=[np.min(the_lat), np.max(the_lat)]
chan31_grid, longitude, latitude, bin_count = reproj_L1B(chan31, the_long, the_lat, xlim, ylim, 0.1)
tbright_grid,longitude,latitude,bin_count=reproj_L1B(chan31_Tbright, the_long, the_lat, xlim, ylim, 0.1)
chan31_grid=np.ma.masked_where(np.isnan(chan31_grid), chan31_grid)
bin_count=np.ma.masked_where(np.isnan(bin_count), bin_count)
longitude=np.ma.masked_where(np.isnan(longitude), longitude)
latitude=np.ma.masked_where(np.isnan(latitude), latitude)
longitude.shape
fig=plt.figure(figsize=(10.5, 9.5))
ax=fig.add_subplot(111)
ax.set_xlim(xlim[0], xlim[1])
ax.set_ylim(ylim[0], ylim[1])
image=ax.pcolormesh(longitude, latitude, chan31_grid)
from mpl_toolkits.basemap import Basemap
lcc_values=dict(resolution='l',projection='lcc',
lat_1=20,lat_2=40,lat_0=30,lon_0=135,
llcrnrlon=120,llcrnrlat=20,
urcrnrlon=150,urcrnrlat=42)
proj=Basemap(**lcc_values)
# create figure, add axes
fig=plt.figure(figsize=(12, 12))
ax=fig.add_subplot(111)
## define parallels and meridians to draw.
parallels=np.arange(-90, 90, 5)
meridians=np.arange(0, 360, 5)
proj.drawparallels(parallels, labels=[1, 0, 0, 0],\
fontsize=10, latmax=90)
proj.drawmeridians(meridians, labels=[0, 0, 0, 1],\
fontsize=10, latmax=90)
# draw coast & fill continents
#map.fillcontinents(color=[0.25, 0.25, 0.25], lake_color=None) # coral
out=proj.drawcoastlines(linewidth=1.5, linestyle='solid', color='k')
x, y=proj(longitude, latitude)
# pcolor the gridded channel 31 radiance
CS=proj.pcolor(x, y, chan31_grid, cmap=plt.cm.hot)
# colorbar
CBar=proj.colorbar(CS, 'right', size='5%', pad='5%')
CBar.set_label('Channel 31 radiance ($W\,m^{-2}\,\mu m\,sr^{-1})$', fontsize=10)
CBar.ax.tick_params(axis='y', length=0)
# create figure, add axes
fig=plt.figure(figsize=(12, 12))
ax=fig.add_subplot(111)
## define parallels and meridians to draw.
parallels=np.arange(-90, 90, 5)
meridians=np.arange(0, 360, 5)
proj.drawparallels(parallels, labels=[1, 0, 0, 0],\
fontsize=10, latmax=90)
proj.drawmeridians(meridians, labels=[0, 0, 0, 1],\
fontsize=10, latmax=90)
# draw coast & fill continents
#map.fillcontinents(color=[0.25, 0.25, 0.25], lake_color=None) # coral
out=proj.drawcoastlines(linewidth=1.5, linestyle='solid', color='k')
x, y=proj(longitude, latitude)
# pcolor the gridded brightness temperature
CS=proj.pcolor(x, y, tbright_grid, cmap=plt.cm.hot)
# colorbar
CBar=proj.colorbar(CS, 'right', size='5%', pad='5%')
CBar.set_label('Channel 31 Brightness temperature (K)', fontsize=10)
CBar.ax.tick_params(axis='y', length=0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now, we'll tackle the "function in time" part of this model by learning how to make and use arrays to represent time.
Step2: We can assign time intervals to variables
Step3: Your Turn - Use np.linspace() to make a time interval for our model
Step4: We can set and change parameter values to see how the model behaves
Step5: PREDICT BEFORE YOU PLOT
Step6: Explore the model by visualizing it
Step7: Explore the model by changing parameter values
Step8: // Note each change you made and what you saw
Step9: You Can View a Pandas DataFrame by Executing It
Step10: You can view the first/last few rows of data with .head() and .tail() functions
Step11: Use data["column_name"] to View or Refer to a Column of Data
Step12: Pandas DataFrame Columns Behave Like Numpy Arrays
Step13: Plotting the Experimental Data
Step14: Fitting Our Model To Experimental Data
Step15: THINK about Tuning the Model Parameters to Fit the Model to Data
Step16: Please Give Feedback on this assignment
|
<ASSISTANT_TASK:>
Python Code:
# Make plots inline
%matplotlib inline
# Make inline plots vector graphics instead of raster graphics
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('pdf', 'svg')
# import modules for plotting and data analysis
import matplotlib.pyplot as plt
import numpy as np
import pandas
import numpy as np
np.linspace(
0, # where the interval starts
1, # where the interval ends
11 # How many steps (elements) we want in the final array
)
import numpy as np
# Make our array and assign it to `time`
trying_to_make_a_time_interval = np.linspace(
0, # where the interval starts
1, # where the interval ends
11 # How many steps (elements) we want in the final array
)
# Check that there are 11 elements in our array (which we asked for)
trying_to_make_a_time_interval.size
# Create your time array here
time = np.linspace(0, 10, 101)
# Make B equal to zero and set some non-zero values for the other parameters
A = 3
B = 0
alpha = 2
beta = 5
# Write and evaluate viral_load = ...
viral_load = (A * np.exp(-alpha * time)) + (B * np.exp(-beta * time))
# Verify that both arrays are the same length.
# You can use .size, as in time.size or viral_load.size
# Then, try plotting viral_load vs. time
# Make plots inline
%matplotlib inline
# Make inline plots vector graphics instead of raster graphics
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('pdf', 'svg')
import matplotlib.pyplot as plt
# Put your plot code here
plt.plot(time, viral_load)
plt.yscale('log')
# Change the values and make a new plot
A = 10
B = -0.5
alpha = 1
beta = 1
viral_load = (A * np.exp(-alpha * time)) + (B * np.exp(-beta * time))
plt.plot(time, viral_load)
# Loading the data
hiv_data = pandas.read_csv(
"https://raw.githubusercontent.com/ComputationalModeling/IPML-Data/master/01HIVseries/HIVseries.csv",
header = None,
names = ["time_in_days", "viral_load"]
)
# the data type of hiv_data is "dataframe"
type(hiv_data)
# Execute this cell (Shift + Enter) to see the data
hiv_data
# If you have a pandas dataframe, you can call `head()` on it like this:
hiv_data.head()
# To see the last few rows, call `tail()` on it
hiv_data.tail()
# How to view an individual column
hiv_data["time_in_days"] # or
hiv_data["viral_load"]
# Here's the viral load column again
hiv_data["viral_load"]
# And we can calulate its mean, max, size, and other properties
# Just like we would on a numpy array
hiv_data["viral_load"].mean()
hiv_data["viral_load"].max()
hiv_data["viral_load"].size
# Plot viral load vs. time
plt.plot(hiv_data["time_in_days"],hiv_data["viral_load"],'ro')
plt.yscale('log')
# note that there's some very interesting double-exponential behavior here!
# Plot the data and model together
A = 95000
B = 150000
alpha = 0.3
beta = 2.0
viral_load = (A * np.exp(-alpha * time)) + (B * np.exp(-beta * time))
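# With this form, A + B is the modeled viral load at t = 0, while alpha and beta
# set the slow and fast exponential decay rates of the two phases.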
plt.plot(
time, viral_load, "r",
hiv_data['time_in_days'], hiv_data['viral_load'], "b."
)
#plt.plot(
# hiv_data['time_in_days'], hiv_data['viral_load'], "b."
#)
#plt.yscale('log') # much less clear what's going on here
# Do whatever work you need here to determine
# the parameter values you think work best.
# REMEMBER: You can assign each new model to a new variable,
# like `model_01`, `model_02`, ...
# fiddle with parameters
A = 10
B = 160000 - A
alpha = .002
beta = 10
time1 = np.linspace(0, 2, 100)
slope1 = -np.log((106100 - 66435) / 2)
y_int1 = np.log(106100)
line1 = (slope1 * time1) + y_int1
slope2 = -np.log((75906 / 5))
y_int2 = np.log(4785.2)
line2 = slope2 * (time - 7) + y_int2
log_model = np.log((A * np.exp(-alpha * time)) + (B * np.exp(-beta * time)))
log_hiv_data = np.log(hiv_data["viral_load"])
plt.plot(
hiv_data['time_in_days'], np.log(hiv_data['viral_load']), 'bo',
# ,time, line2, 'g'
time1, line1, 'r'
)
line1[0]
hiv_data['viral_load'].where(hiv_data['viral_load'] < 60000)
from IPython.display import IFrame
IFrame('http://goo.gl/forms/v8oZUSLDaa', width=800, height=1200)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step9: Tokenize Punctuation
Step11: Preprocess all the data and save it
Step13: Check Point
Step15: Build the Neural Network
Step18: Input
Step21: Build RNN Cell and Initialize
Step24: Word Embedding
Step27: Build RNN
Step30: Build the Neural Network
Step33: Batches
Step35: Neural Network Training
Step37: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Implement Generate Functions
Step49: Choose Word
Step51: Generate TV Script
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
print(text[:10])
vocab_to_int = {word:count for count, word in enumerate(set(text))}
#print(vocab_to_int[text[0]])
int_to_vocab = {count:word for count, word in enumerate(set(text))}
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
tokens = {'.':'||Period||', ',':'||Comma||', '"':'||Quotation-Mark||', ';':'||Semicolon||',
'!':'||Exclamation-mark||', '?':'||Question-mark||', '(':'||Left-Parentheses||', ')':'||Right-Parentheses||',
'--':'||Dash||', '\n':'||Return||'}
return tokens
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, [None, None], name="input")
targets = tf.placeholder(tf.int32, [None, None], name="targets")
learning_rate = tf.placeholder(tf.float32, name="learningRate")
print(inputs.name)
return inputs, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
cell = tf.contrib.rnn.MultiRNNCell([lstm])
initalState = cell.zero_state(batch_size, tf.float32)
initalState = tf.identity(initalState, name='initial_state')
return cell, initalState
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
#print(input_data.shape)
#print(vocab_size)
#print(embed_dim)
embed = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1 ))
embedding = tf.nn.embedding_lookup(embed, input_data)
return embedding
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
#print("inputs shape "+str(inputs.get_shape()))
output, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
# TODO: Implement Function
final_state = tf.identity(final_state, name="final_state")
#print(output.get_shape())
#print(final_state.name)
return output, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
#print('input_data shape = ' + str(input_data.shape))
#print('vocab_size = ' + str(vocab_size))
#print('input_data + vocab_size shape = ' + str(input_data.get_shape().as_list() + [vocab_size]))
input_data = get_embed(input_data, vocab_size, embed_dim)
#cell, initalState = get_init_cell(input_data.shape, rnn_size)
output, final_state = build_rnn(cell, input_data)
logits = tf.contrib.layers.fully_connected(output, vocab_size, activation_fn=None)
#print('input_data + vocab_size shape = ' + str(input_data.get_shape().as_list() + [vocab_size]))
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
print(batch_size)
print(seq_length)
n_batches = len(int_text) // (batch_size * seq_length)
int_text1= np.array(int_text[:n_batches*batch_size*seq_length])
int_text2= np.array(int_text[1:n_batches*batch_size*seq_length +1])
int_text2[-1] = int_text1[0] # last one is first input
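# Reshape inputs and targets to (batch_size, n_batches * seq_length), then split
# along the time axis into n_batches matching (input, target) pairs.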
x = np.split(int_text1.reshape(batch_size, -1), n_batches, 1)
y = np.split(int_text2.reshape(batch_size, -1), n_batches, 1)
result = list(zip(x, y))
return np.array(result)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
# Number of Epochs
num_epochs = 300
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 256
# Sequence Length
seq_length = 20
# Learning Rate
learning_rate = 0.009
# Show stats for every n number of batches
show_every_n_batches = 50
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
input_tensor = loaded_graph.get_tensor_by_name('input:0')
initial_state_tensor = loaded_graph.get_tensor_by_name('initial_state:0')
final_state_tensor = loaded_graph.get_tensor_by_name('final_state:0')
probs_tensor = loaded_graph.get_tensor_by_name('probs:0')
return input_tensor, initial_state_tensor, final_state_tensor, probs_tensor
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
return int_to_vocab[np.argmax(probabilities)]
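# Note: argmax always returns the single most likely word, which makes the generated
# text deterministic; sampling, e.g. np.random.choice(len(probabilities), p=probabilities),
# is a common alternative for more varied scripts.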
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
gen_length = 2000
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using PCA to extract features
Step2: Let's visualize these faces to see what we're working with
Step3: We'll do a typical train-test split on the images before performing unsupervised learning
Step4: Feature Reduction Using Principal Component Analysis
Step5: One interesting part of PCA is that it computes the "mean" face, which can be examined on its own
Step6: The principal components measure deviations about this mean along orthogonal axes.
Step7: The components ("eigenfaces") are ordered by their importance from top-left to bottom-right.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn import datasets
lfw_people = datasets.fetch_lfw_people(min_faces_per_person=70, resize=0.4,
data_home='datasets')
lfw_people.data.shape
fig = plt.figure(figsize=(8, 6))
# plot several images
for i in range(15):
ax = fig.add_subplot(3, 5, i + 1, xticks=[], yticks=[])
ax.imshow(lfw_people.images[i], cmap=plt.cm.bone)
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(lfw_people.data, lfw_people.target, random_state=0)
print(X_train.shape, X_test.shape)
from sklearn import decomposition
pca = decomposition.RandomizedPCA(n_components=150, whiten=True)
pca.fit(X_train)
plt.imshow(pca.mean_.reshape((50, 37)), cmap=plt.cm.bone)
print(pca.components_.shape)
fig = plt.figure(figsize=(16, 6))
for i in range(30):
ax = fig.add_subplot(3, 10, i + 1, xticks=[], yticks=[])
ax.imshow(pca.components_[i].reshape((50, 37)), cmap=plt.cm.bone)
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)
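# Each face is now described by 150 principal-component coefficients instead of
# the original 50 * 37 = 1850 pixel values.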
print(X_train_pca.shape)
print(X_test_pca.shape)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Classification
Step2: Output
Step4: Example of how to expand
|
<ASSISTANT_TASK:>
Python Code:
file = 'sample_report.txt' #Sample Radiology report from MIMIC-III dataset
report = ''
with open(file,'r') as f:
report += f.read()
print(report[:1000])
# Here is a pipeline provided in the fcFinder module
# A custom function can be made by the user
# This section will walk through this process
def fc_pipeline(report, preprocess=lambda x:x.lower(),
splitter=helpers.my_sentence_splitter):
report = preprocess(report)
sentences = splitter(report)
markups = create_list_of_markups(sentences,spans=True) #Create markups and set markup_conditions
markups = [m for m in markups if m.markupClass] #Exclude any markups that aren't assigned a class
return markups
#preprocess
# Any preprocessing done for report. By default is all lowercase.
preprocess = lambda x:x.lower()
print(preprocess(report)[:300])
# A function that will split sentences.
# Default is to split on '.'
# This example uses a custom-made function found in ../helpers.py
# This function returns named tuples that contain the text and span of each sentence
# You can also just return a list of sentences without keeping track of the sentence spans in the document.
splitter = helpers.my_sentence_splitter
sentences = splitter(report)
for named_tup in sentences[:5]:
print(sentences.index(named_tup))
print(named_tup)
print()
# We'll create a markup using this SentenceSpanPair
s = sentences[8]
print(s)
span = s.span
markup = ConTextMarkup()
markup.setRawText(s.text)
if not span: # optional for if you do not use a SentenceSpanPair
span = (0,len(s))
markup.docSpan = span
#The following functionality is explained in pyConText's notebook
markup.cleanText()
markup.markItems(modifiers, mode="modifier")
markup.markItems(targets, mode="target")
markup.pruneMarks()
markup.dropMarks('Exclusion')
markup.applyModifiers()
markup.pruneSelfModifyingRelationships()
markup.dropInactiveModifiers()
# To keep track of relevant data found by pyConText, we'll use a class called markup_conditions
markup.conditions = markup_conditions(markup) #add this as an attribute to the markup object
markup.target = markup.conditions.target
markup.modifiers = markup.conditions.modifiers #markup_classifier(markup)
print(markup.conditions.indication) # Check if it is modified by indication, negated_existence or anatomy
print(markup.conditions.negated)
print(markup.conditions.anatomy)
print(markup.conditions) # Print the object
#Now that we have specified what information is relevant, we can classify the markup using this function.
def markup_classifier(markup):
conditions = markup.conditions
markup_class = None
if not conditions.target:
pass
#positive
elif (conditions.anatomy and not conditions.negated and not conditions.indication)\
or (conditions.anatomy and conditions.definitive):
markup_class = "Fluid collection-positive"
#negated
elif conditions.negated and not conditions.definitive:
markup_class = "Fluid collection-negated"
#indication
elif conditions.indication and not (conditions.negated or conditions.definitive
or conditions.historical or conditions.probable):
markup_class = "fluid collection-indication"
#check for pseudoanatomy
if conditions.pseudoanatomy and not conditions.anatomy:
markup_class = None
return markup_class
#The markup has the target with a category 'Fluid Collection' and is modified by 'anatomy'.
#According to our function, that means it should be classified as 'Fluid collection-positive'
markup.markupClass = markup_classifier(markup)
print(markup.markupClass)
#Note that if it were not modified by anatomy, it would not be assigned a markupClass
markup2 = markup
markup2.conditions = markup_conditions(markup2)
markup2.conditions.anatomy = False
markup2.markupClass = markup_classifier(markup2)
print(markup2.markupClass)
#We've now classified our markup. The pipeline above applies this process to an entire report and returns a list of markups.
markups = fc.fc_pipeline(report,splitter=helpers.my_sentence_splitter,spans=True)
for m in markups[:5]:
print(m.getRawText())
print(m.markupClass)
print()
def classify_document(markups):
if any([m.markupClass for m in markups if m.markupClass == 'Fluid collection-positive']):
doc_class = 'Fluid collection-present'
else:
doc_class = 'Fluid collection-not present'
return doc_class
classify_document(markups)
import input_output as io
import pandas as pd
annotations = [io.createAnnotation(m, file) for m in markups]
XMLstring = io.write_knowtator(annotations, file)
print(XMLstring[:1000])
with open('sample_reference_standard.knowtator.xml') as f: #manual-annotated results
XMLreference = f.read()
print(XMLreference[:1000])
columns=['file_name','original_text','reference_standard','fcFinder_results']
data = [os.path.splitext(file)[0],report,XMLreference,XMLstring]
df = pd.DataFrame(columns=columns)
df = df.append(pd.DataFrame([data],columns=columns),ignore_index=True)
df
#Similar functionality with JSON and .csv will be added at a future date
#Array with markupClass counts
io.fc_vectorizer(annotations,["Fluid collection-positive",'fluid collection-indication','Fluid collection-negated'])
import numpy as np
def markups_vectorizer(markups,classes=["Fluid collection-positive",
'fluid collection-indication','Fluid collection-negated']):
arr = np.ravel(np.zeros((len(classes),1)))
for i in range(len(classes)):
for a in markups:
if a.markupClass == classes[i]:
arr[i] += 1
return arr
print('positive, indication, negated')
print(markups_vectorizer(markups))
#You can use pandas to export to .xlsx, .csv, sqlite or .json
import pandas as pd
def markups_to_pandas(markups,file_name):
columns=['file_name','text','span','class']
df = pd.DataFrame(columns=columns)
for m in markups:
data = [file_name, m.getRawText(), m.docSpan, m.markupClass]
row = pd.Series(data,columns)
df = df.append(pd.DataFrame([row],columns=columns),ignore_index=True)
return df
df = markups_to_pandas(markups,file)
df
#df.to_csv('example.csv')
#df.to_json('example.json')
#df.to_excel('example.xlsx')
import pyConTextNLP.itemData as itemData
import pyConTextNLP.pyConTextGraph as pyConText
new_modifiers = itemData.instantiateFromCSVtoitemData(os.path.join(os.path.abspath('.'),'sample_modifiers.tsv'))
new_targets = itemData.instantiateFromCSVtoitemData(os.path.join(os.path.abspath('.'),'sample_targets.tsv'))
print(new_modifiers)
print(new_targets)
new_report = Pneumonia can be ruled out. I cannot see any fluid collection in the abdomen.
The patient is at risk of developing a urinary tract infection.
def new_markup_classifier(m):
markup_class = None
if not m.conditions.target:
return None
markup_target = m.conditions.target.getCategory()[0]
if m.conditions.negated:
lex = 'negated existence'
elif m.conditions.future:
lex = 'future existence'
else:
lex = 'definite existence'
markup_class = '{target}-{lex}'.format(target=markup_target, lex=lex)
return markup_class
def new_pipeline(report):
report = report.lower()
spans = False
sentences = helpers.my_sentence_splitter(report)
markups = fc.create_list_of_markups(sentences,modifiers=new_modifiers,targets=new_targets,spans=True)
for m in markups:
m.conditions = fc.markup_conditions(markup=m)
for t in new_targets: #add new targets
m.conditions.add_target(t.getCategory())
m.target = m.conditions.target
if m.isModifiedByCategory(m.conditions.target,'future_existence'): #Add new classes
m.conditions.future = True
else:
m.conditions.future = False
m.markupClass = new_markup_classifier(m)
return [m for m in markups if m.markupClass]
markups = new_pipeline(new_report)
for m in markups:
print(m.getRawText())
print(m.markupClass)
print()
annotations = [io.createAnnotation(m, 'new_report') for m in markups]
XMLstring = io.write_knowtator(annotations, 'new_report')
print(XMLstring)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: NaN are handled correctly by universal function
Step2: dot method on DataFrame implements matrix multiplication
Step3: dot method on Series implements dot product
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(10, 4), columns=['A', 'B', 'C', 'D'])
df2 = pd.DataFrame(np.random.randn(7, 3), columns=['A', 'B', 'C'])
sum_df = df + df2
sum_df
np.transpose(sum_df.values)
A_df = pd.DataFrame(np.arange(15).reshape((3,5)))
B_df = pd.DataFrame(np.arange(10).reshape((5,2)))
A_df.dot(B_df)
C_Series = pd.Series(np.arange(5,10))
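# Assumed completion of the truncated example: Series.dot aligns the two Series
# on their index labels and returns the scalar inner product.
D_Series = pd.Series(np.arange(5))
C_Series.dot(D_Series)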
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First reload the data we generated in 1_notmnist.ipynb.
Step2: Reformat into a shape that's more adapted to the models we're going to train
Step3: We're first going to train a multinomial logistic regression using simple gradient descent.
Step4: Let's run this computation and iterate
Step5: Let's now switch to stochastic gradient descent training instead, which is much faster.
Step6: Let's run it
Step7: Problem
|
<ASSISTANT_TASK:>
Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000
graph = tf.Graph()
with graph.as_default():
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
tf_train_labels = tf.constant(train_labels[:train_subset])
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
# These are the parameters that we are going to be training. The weight
# matrix will be initialized using random values following a (truncated)
# normal distribution. The biases get initialized to zero.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
# the softmax and cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
# Defining accuracy function to find accuracy of predictions against actuals
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
num_steps = 801
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tf.global_variables_initializer().run()
print('Initialized')
for step in range(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
_, l, predictions = session.run([optimizer, loss, train_prediction])
if (step % 100 == 0):
print('Loss at step %d: %f' % (step, l))
print('Training accuracy: %.1f%%' % accuracy(
predictions, train_labels[:train_subset, :]))
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print('Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
batch_size = 128
num_hidden_nodes = 1024
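# One hidden layer of num_hidden_nodes ReLU units is inserted between the
# 784-pixel inputs and the 10-way softmax output; the SGD loop stays the same.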
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = {
'hidden': tf.Variable(tf.truncated_normal([image_size * image_size, num_hidden_nodes])),
'output': tf.Variable(tf.truncated_normal([num_hidden_nodes, num_labels]))
}
biases = {
'hidden': tf.Variable(tf.zeros([num_hidden_nodes])),
'output': tf.Variable(tf.zeros([num_labels]))
}
# Training computation.
hidden_train = tf.nn.relu(tf.matmul(tf_train_dataset, weights['hidden']) + biases['hidden'])
logits = tf.matmul(hidden_train, weights['output']) + biases['output']
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
hidden_valid = tf.nn.relu(tf.matmul(tf_valid_dataset, weights['hidden']) + biases['hidden'])
valid_prediction = tf.nn.softmax(tf.matmul(hidden_valid, weights['output']) + biases['output'])
hidden_test = tf.nn.relu(tf.matmul(tf_test_dataset, weights['hidden']) + biases['hidden'])
test_prediction = tf.nn.softmax(tf.matmul(hidden_test, weights['output']) + biases['output'])
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Features used for short horizon
Step2: Features used for long horizon
Step3: Multiple Goals
Step4: Custom Grid
Step5: Updating Goals dynamically
Step6: Updating Rewards dynamically
Step7: Adding custom obstacles
|
<ASSISTANT_TASK:>
Python Code:
# Other Imports.
from simple_rl.tasks import NavigationWorldMDP
from simple_rl.agents import QLearningAgent
from simple_rl.planning import ValueIteration
from simple_rl.tasks.grid_world.GridWorldStateClass import GridWorldState
from simple_rl.tasks.navigation.NavigationStateClass import NavigationWorldState
from simple_rl.tasks.navigation.NavigationWorldMDP import *
# Python Imports.
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
np.random.seed(0)
nvmdp = NavigationWorldMDP(width=30, height=30,
nav_cell_types=['lightgray', 'yellow', 'red', 'lime', 'magenta'],
nav_cell_rewards=[0, 0, -10, -10, -10],
nav_cell_p_or_locs=[0.68, 0.17, 0.05, 0.05, 0.05],
goal_cell_locs=[[(21,21)]],
goal_cell_rewards=[1.],
goal_cell_types=["blue"],
slip_prob=0.00, step_cost=0.0, gamma=.99)
traj_states_list, traj_action_list = nvmdp.sample_trajectories(n_traj=8, horizon=100,
init_states=[NavigationWorldState(2,2)],
init_cell_types=["lightgray"], init_unique=True,
rand_init_to_match_n_traj=True)
nvmdp.visualize_grid(trajectories=traj_states_list, show_colorbar=True, show_rewards_colorbar=True, goal_marker="*c")
print("Cells: {}\nGoals: {}".format(nvmdp.combined_cell_types, nvmdp.goal_cell_types))
print("Feature format: \n<one-hot:{}>, <distance: {}>, <distance:{}>\n".format(
", ".join(nvmdp.combined_cell_types),
", ".join(nvmdp.combined_cell_types),
", ".join(nvmdp.goal_cell_types)))
sample_states = [(1,1), (1,2), (2,1), (1,8), (21,21)]
list(map(lambda s: nvmdp.feature_at_state(NavigationWorldState(s[0],s[1]),
feature_type="indicator",
incl_cell_distances=True,
incl_goal_indicator=False,
incl_goal_distances=True,
normalize_distance=False, dtype=np.float).tolist(), sample_states))
sample_states = [(1,1), (1,2), (2,1), (1,8), (21,21)]
list(map(lambda s: nvmdp.feature_at_state(NavigationWorldState(s[0],s[1]),
feature_type="indicator",
incl_cell_distances=False,
incl_goal_indicator=False,
incl_goal_distances=True,
normalize_distance=False, dtype=np.float).tolist(), sample_states))
np.random.seed(0)
nvmdp = NavigationWorldMDP(width=30, height=30,
nav_cell_types=['lightgray', 'yellow', 'red', 'lime', 'magenta'],
nav_cell_rewards=[0, 0, -10, -10, -10],
nav_cell_p_or_locs=[0.68, 0.17, 0.05, 0.05, 0.05],
goal_cell_locs=[[(21,21)], [(11,11)]],
goal_cell_rewards=[1., 1.2],
goal_cell_types=["orange","blue"],
slip_prob=0.00, step_cost=0.0, gamma=.95)
traj_states_list, traj_action_list = nvmdp.sample_trajectories(n_traj=16, horizon=100,
init_states=[NavigationWorldState(2,2)],
init_cell_types=["lightgray"], init_unique=True,
rand_init_to_match_n_traj=True)
nvmdp.visualize_grid(trajectories=traj_states_list, show_colorbar=True, show_rewards_colorbar=True, goal_marker="*c")
## Features: <Cell Type Ind, Goal Distances>
print("Sample State Trajectory")
[nvmdp.feature_at_state(s,
feature_type="indicator",
incl_cell_distances=False,
incl_goal_indicator=False,
incl_goal_distances=True,
normalize_distance=False, dtype=np.float).tolist() for s in traj_states_list[0]]
nvmdp = NavigationWorldMDP(width=7, height=7,
nav_cell_types=['lightgray', 'yellow', 'red'],
nav_cell_rewards=[0, 0, -10],
nav_cell_p_or_locs=[0.5, 0.5, [(2,i) for i in range(1,7)]],
goal_cell_locs=[[(7,1)],[(1,1)]],
goal_cell_types=["blue", "orange"],
goal_cell_rewards=[1.,10.],
slip_prob=0.00, step_cost=0.0, gamma=.50)
traj_states_list, traj_action_list = nvmdp.sample_trajectories(n_traj=41, horizon=100,
init_states=[NavigationWorldState(1,2)],
init_cell_types=["lightgray", "yellow"], init_unique=True,
rand_init_to_match_n_traj=True)
nvmdp.visualize_grid(trajectories=traj_states_list, show_colorbar=True, show_rewards_colorbar=True, goal_marker="*c")
nvmdp._reset_goals([[(7,1)],[(1,1)]],[10.,1.],["blue", "orange"])
traj_states_list, traj_action_list = nvmdp.sample_trajectories(n_traj=41, horizon=100,
init_states=None,
init_cell_types=["lightgray", "yellow"], init_unique=True,
rand_init_to_match_n_traj=False)
nvmdp.visualize_grid(trajectories=traj_states_list, show_colorbar=True, show_rewards_colorbar=True, goal_marker="*c")
nvmdp._reset_goals([[(7,1)],[(1,1)],[(7,7)]],[10.,1.,10.],["blue", "orange","purple"])
traj_states_list, traj_action_list = nvmdp.sample_trajectories(n_traj=41, horizon=100,
init_states=None,
init_cell_types=["lightgray", "yellow"], init_unique=True,
rand_init_to_match_n_traj=True)
nvmdp.visualize_grid(trajectories=traj_states_list, show_colorbar=True, show_rewards_colorbar=True, goal_marker="*c")
nvmdp._reset_goals([[(7,1)],[(1,1)],[(7,7)],[(4,4)]], [10.,1.,10.,5.],list(range(4)))
traj_states_list, traj_action_list = nvmdp.sample_trajectories(n_traj=41, horizon=100,
init_states=None,
init_cell_types=["lightgray", "yellow"], init_unique=True,
rand_init_to_match_n_traj=True)
fig = plt.figure(figsize=(14,8))
nvmdp.visualize_grid(trajectories=traj_states_list,
show_colorbar=True, show_rewards_colorbar=True,
goal_marker="*c", fig=fig, subplot_str="121")
nvmdp.visualize_grid(nvmdp.get_value_grid(), trajectories=traj_states_list,
show_colorbar=True, show_rewards_colorbar=False,
goal_marker="*c", fig=fig, subplot_str="122",
state_space_cmap=False, title="Value")
nvmdp.nav_cell_rewards, nvmdp.goal_cell_rewards
nvmdp._reset_rewards([0, 0, -0.01], [], [10., 1.0, 10., 5.])
traj_states_list, traj_action_list = nvmdp.sample_trajectories(n_traj=41, horizon=100,
init_states=None,
init_cell_types=["lightgray", "yellow"], init_unique=True,
rand_init_to_match_n_traj=True)
fig = plt.figure(figsize=(14,8))
nvmdp.visualize_grid(trajectories=traj_states_list, show_colorbar=True,
show_rewards_colorbar=True, goal_marker="*c", fig=fig, subplot_str="121")
nvmdp.visualize_grid(nvmdp.get_value_grid(), trajectories=traj_states_list,
show_colorbar=True, show_rewards_colorbar=False,
goal_marker="*c", fig=fig, subplot_str="122",
state_space_cmap=False, title="Value")
import itertools
W, H = 30, 30
n_obstacles = 30
goal_tile = RectangularTile(21,21,1,1)
obstacle_cells = list(itertools.chain(*[o() for o in generate_obstacles(
W, H, n_obstacles,
obstacle_w_mu=3, obstacle_w_std=3,
obstacle_h_mu=3, obstacle_h_std=3,
buffer_w=2, buffer_h=1, max_search_tries=100, exclude_tiles=[goal_tile])]))
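# Expand each sampled rectangular tile into its member (x, y) cells; these become
# the 'red' obstacle locations below, and the goal tile is excluded from placement.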
reward_living = -0.01
reward_obtacle = -1.0
nvmdp = NavigationWorldMDP(width=W, height=H,
nav_cell_types=['lightgray', 'red'],
nav_cell_rewards=[0, -10],
nav_cell_p_or_locs=[1., obstacle_cells],
goal_cell_locs=[[(goal_tile.x, goal_tile.y)]],
goal_cell_rewards=[1.],
goal_cell_types=["blue"],
slip_prob=0.00, step_cost=0.0, gamma=.99)
traj_states_list, traj_action_list = nvmdp.sample_trajectories(n_traj=8, horizon=100,
init_states=[NavigationWorldState(2,2)],
init_cell_types=["lightgray"], init_unique=True,
rand_init_to_match_n_traj=True)
nvmdp.visualize_grid(trajectories=traj_states_list, show_colorbar=False,
show_rewards_colorbar=True, goal_marker="*c")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Running FBA
Step2: The Model.optimize() function will return a Solution object, which will also be stored at model.solution. A solution object has several attributes
Step3: Changing the Objectives
Step4: The objective function can be changed by assigning Model.objective, which can be a reaction object (or just it's name), or a dict of {Reaction
Step5: The objective function can also be changed by setting Reaction.objective_coefficient directly.
Step6: Running FVA
Step7: Setting parameter fraction_of_optimium=0.90 would give the flux ranges for reactions at 90% optimality.
|
<ASSISTANT_TASK:>
Python Code:
import pandas
pandas.options.display.max_rows = 100
import cobra.test
model = cobra.test.create_test_model("textbook")
model.optimize()
model.solution.status
model.solution.f
model.objective
# change the objective to ATPM
# the upper bound should be 1000 so we get the actual optimal value
model.reactions.get_by_id("ATPM").upper_bound = 1000.
model.objective = "ATPM"
model.objective
model.optimize()
model.reactions.get_by_id("ATPM").objective_coefficient = 0.
model.reactions.get_by_id("Biomass_Ecoli_core").objective_coefficient = 1.
model.objective
fva_result = cobra.flux_analysis.flux_variability_analysis(model, model.reactions[:20])
pandas.DataFrame.from_dict(fva_result).T
fva_result = cobra.flux_analysis.flux_variability_analysis(model, model.reactions[:20], fraction_of_optimum=0.9)
pandas.DataFrame.from_dict(fva_result).T
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Server Performance - Before and After
Step2: Error rate (percentage response times > 20ms)
Step3: Percentiles
Step4: Server Pauses
Step5: The difference between the old and the new is effectively night and day. What better way to see the improvement than that classic comparison tool, the QQ plot
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas
import glob
FILES = glob.glob('/x/home/kurose/*_srv.csv')
Expected format is comma-separated, floating-point pairs ("timestamp, latency").
1459452334.73, 0.00141596794128
1459452334.83, 0.00196099281311
1459452334.93, 0.00141191482544
...
import os.path
import datetime
def file2pts(path):
f = open(path)
points = []
for i, line in enumerate(f):
clock, latency = line.split(',')
points.append((datetime.datetime.fromtimestamp(float(clock)), float(latency)))
return points
def pts2series(pts):
return (pandas.Series([e[1] for e in pts], index=[e[0] for e in pts]) * 1000)
def all_files2series(files):
file_series = {}
df = pandas.DataFrame()
all_pts = []
for path in sorted(files, reverse=True):
try:
pts = file2pts(path)
all_pts.extend(pts)
name = os.path.split(path)[1]
file_series[name] = pts2series(pts)
df[name] = pandas.Series([e[1] for e in pts]) * 1000
except Exception:
import traceback
traceback.print_exc()
return file_series, df, all_pts
FILE_SERIES, DATAFRAME, ALL_PTS = all_files2series(FILES)
NEW = FILE_SERIES['new_srv.csv']
OLD = FILE_SERIES['old_srv.csv']
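# Latencies were converted to milliseconds when the series were built, so the
# comparison below counts responses slower than 20 ms.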
print "old {0:0.02f}%".format(sum(OLD > 20) * 1.0 / len(OLD) * 100)
print "new {0:0.02f}%".format(sum(NEW > 20) * 1.0 / len(NEW) * 100)
DATAFRAME.quantile([0.9, 0.95, 0.99, 0.995, 0.999, 0.9995, 0.9999])
"note the log scale on the y axis"
DATAFRAME.quantile([0.9, 0.95, 0.99, 0.995, 0.999, 0.9995, 0.9999]).plot(kind='bar', logy=True)
OLD.plot(legend=True, label="old")
NEW.plot(legend=True, label="new")
"Again, note the log scale on the y axis."
"Even with log scale, the new server was clearly made much less variable."
OLD.plot(legend=True, label="old", logy=True)
NEW.plot(legend=True, label="new", logy=True)
from statsmodels.graphics.gofplots import qqplot_2samples
import numpy
def qq_plot(x_data, y_data):
x_pts, y_pts = x_data.values, y_data.values
size = min(len(x_pts), len(y_pts))
x_pts, y_pts = numpy.array(x_pts[:size], dtype='f'), numpy.array(y_pts[:size], dtype='f')
qqplot_2samples(x_pts, y_pts, line='45')
qq_plot(OLD, NEW)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The significance of convergence theorems (B. T. Polyak, Introduction to Optimization, Ch. 1, $\S$ 6)
Step2: Golden section method
Step3: Comparison of one-dimensional minimization methods
Step4: An example where the methods behave differently
Step5: Comparison of the methods' convergence rate and running time
Step6: Golden section method
Step7: Convergence
Step8: Running time
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
USE_COLAB = False
if not USE_COLAB:
plt.rc("text", usetex=True)
import numpy as np
C = 10
alpha = -0.5
q = 0.9
num_iter = 7
sublinear = np.array([C * k**alpha for k in range(1, num_iter + 1)])
linear = np.array([C * q**k for k in range(1, num_iter + 1)])
superlinear = np.array([C * q**(k**2) for k in range(1, num_iter + 1)])
quadratic = np.array([C * q**(2**k) for k in range(1, num_iter + 1)])
plt.figure(figsize=(12,8))
plt.semilogy(np.arange(1, num_iter+1), sublinear,
label=r"Sublinear, $\alpha = -0.5$")
plt.semilogy(np.arange(1, num_iter+1), superlinear,
label=r"Superlinear, $q = 0.5, p=2$")
plt.semilogy(np.arange(1, num_iter+1), linear,
label=r"Linear, $q = 0.5$")
plt.semilogy(np.arange(1, num_iter+1), quadratic,
label=r"Quadratic, $q = 0.5$")
plt.xlabel("Number of iterations, $k$", fontsize=28)
plt.ylabel("Error rate upper bound", fontsize=28)
plt.legend(loc="best", fontsize=26)
plt.xticks(fontsize = 28)
_ = plt.yticks(fontsize = 28)
def binary_search(f, a, b, epsilon, callback=None):
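# Dichotomy for a unimodal f: c stays at the midpoint of [a, b]; comparing f at the
# quarter points y and z lets each pass discard half of the bracket.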
c = (a + b) / 2.0
while abs(b - a) > epsilon:
# Check left subsegment
y = (a + c) / 2.0
if f(y) <= f(c):
b = c
c = y
else:
# Check right subsegment
z = (b + c) / 2.0
if f(c) <= f(z):
a = y
b = z
else:
a = c
c = z
if callback is not None:
callback(a, b)
return c
def my_callback(a, b, left_bound, right_bound, approximation):
left_bound.append(a)
right_bound.append(b)
approximation.append((a + b) / 2.0)
import numpy as np
left_boud_bs = []
right_bound_bs = []
approximation_bs = []
callback_bs = lambda a, b: my_callback(a, b,
left_boud_bs, right_bound_bs, approximation_bs)
# Target unimodal function on given segment
f = lambda x: (x - 2) * x * (x + 2)**2 # np.power(x+2, 2)
# f = lambda x: -np.sin(x)
x_true = -2
# x_true = np.pi / 2.0
a = -3
b = -1.5
epsilon = 1e-8
x_opt = binary_search(f, a, b, epsilon, callback_bs)
print(np.abs(x_opt - x_true))
plt.figure(figsize=(10,6))
plt.plot(np.linspace(a,b), f(np.linspace(a,b)))
plt.title("Objective function", fontsize=28)
plt.xticks(fontsize = 28)
_ = plt.yticks(fontsize = 28)
def golden_search(f, a, b, tol=1e-5, callback=None):
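# tau is the golden ratio (1 + sqrt(5)) / 2; y and z split [a, b] so that every
# iteration shrinks the bracket by a factor of 1/tau (about 0.618).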
tau = (np.sqrt(5) + 1) / 2.0
y = a + (b - a) / tau**2
z = a + (b - a) / tau
while b - a > tol:
if f(y) <= f(z):
b = z
z = y
y = a + (b - a) / tau**2
else:
a = y
y = z
z = a + (b - a) / tau
if callback is not None:
callback(a, b)
return (a + b) / 2.0
left_boud_gs = []
right_bound_gs = []
approximation_gs = []
cb_gs = lambda a, b: my_callback(a, b, left_boud_gs, right_bound_gs, approximation_gs)
x_gs = golden_search(f, a, b, epsilon, cb_gs)
print(f(x_opt))
print(f(x_gs))
print(np.abs(x_opt - x_true))
plt.figure(figsize=(10,6))
plt.semilogy(np.arange(1, len(approximation_bs) + 1), np.abs(x_true - np.array(approximation_bs, dtype=np.float64)), label="Binary search")
plt.semilogy(np.arange(1, len(approximation_gs) + 1), np.abs(x_true - np.array(approximation_gs, dtype=np.float64)), label="Golden search")
plt.xlabel(r"Number of iterations, $k$", fontsize=26)
plt.ylabel("Error rate upper bound", fontsize=26)
plt.legend(loc="best", fontsize=26)
plt.xticks(fontsize = 26)
_ = plt.yticks(fontsize = 26)
%timeit binary_search(f, a, b, epsilon)
%timeit golden_search(f, a, b, epsilon)
f = lambda x: np.sin(np.sin(np.sin(np.sqrt(x))))
x_true = (3 * np.pi / 2)**2
a = 2
b = 60
epsilon = 1e-8
plt.plot(np.linspace(a,b), f(np.linspace(a,b)))
plt.xticks(fontsize = 28)
_ = plt.yticks(fontsize = 28)
left_boud_bs = []
right_bound_bs = []
approximation_bs = []
callback_bs = lambda a, b: my_callback(a, b,
left_boud_bs, right_bound_bs, approximation_bs)
x_opt = binary_search(f, a, b, epsilon, callback_bs)
print(np.abs(x_opt - x_true))
left_boud_gs = []
right_bound_gs = []
approximation_gs = []
cb_gs = lambda a, b: my_callback(a, b, left_boud_gs, right_bound_gs, approximation_gs)
x_gs = golden_search(f, a, b, epsilon, cb_gs)
print(np.abs(x_gs - x_true))
plt.figure(figsize=(8,6))
plt.semilogy(np.abs(x_true - np.array(approximation_bs, dtype=np.float64)), label="Binary")
plt.semilogy(np.abs(x_true - np.array(approximation_gs, dtype=np.float64)), label="Golden")
plt.legend(fontsize=28)
plt.xticks(fontsize=28)
_ = plt.yticks(fontsize=28)
plt.xlabel(r"Number of iterations, $k$", fontsize=26)
plt.ylabel("Error rate upper bound", fontsize=26)
%timeit binary_search(f, a, b, epsilon)
%timeit golden_search(f, a, b, epsilon)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Note
Step3: Restart the kernel
Step4: Set your project ID
Step5: Region
Step6: Timestamp
Step7: Create a Cloud Storage bucket
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Step11: Initialize Vertex SDK for Python
Step12: Set hardware accelerators
Step13: Set pre-built containers
Step14: Set machine type
Step15: Tutorial
Step16: Task.py contents
Step17: Store training script on your Cloud Storage bucket
Step18: Create and run custom training job
Step19: Prepare your command-line arguments
Step20: Run the custom training job
Step21: Load the saved model
Step22: Evaluate the model
Step23: Perform the model evaluation
Step24: Get the serving function signature
Step25: Explanation Specification
Step26: Explanation Metadata
Step27: Upload the model
Step28: Deploy the model
Step29: Get test item
Step30: Make the prediction with explanation
Step31: Understanding the explanations response
Step32: Examine feature attributions
Step33: Check your explanations and baselines
Step34: Sanity check
Step35: Undeploy the model
Step36: Cleaning up
|
<ASSISTANT_TASK:>
Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
REGION = "us-central1" # @param {type: "string"}
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
BUCKET_NAME = "gs://[your-project-id]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
! gsutil mb -l $REGION $BUCKET_NAME
! gsutil ls -al $BUCKET_NAME
import google.cloud.aiplatform as aip
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
import os
import sys
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (None, None)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: Boston Housing tabular regression\n\nVersion: 0.0.0\n\nSummary: Demostration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: aferlitsch@google.com\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for Boston Housing
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import numpy as np
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.001, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=20, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=100, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
parser.add_argument('--param-file', dest='param_file',
default='/tmp/param.txt', type=str,
help='Output file for parameters')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
def make_dataset():
# Scaling Boston Housing data features
def scale(feature):
max = np.max(feature)
feature = (feature / max).astype(np.float)
return feature, max
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data(
path="boston_housing.npz", test_split=0.2, seed=113
)
params = []
for _ in range(13):
x_train[_], max = scale(x_train[_])
x_test[_], _ = scale(x_test[_])
params.append(max)
# store the normalization (max) value for each feature
with tf.io.gfile.GFile(args.param_file, 'w') as f:
f.write(str(params))
return (x_train, y_train), (x_test, y_test)
# Build the Keras model
def build_and_compile_dnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(128, activation='relu', input_shape=(13,)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(1, activation='linear')
])
model.compile(
loss='mse',
optimizer=tf.keras.optimizers.RMSprop(learning_rate=args.lr))
return model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
BATCH_SIZE = 16
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_dnn_model()
# Train the model
(x_train, y_train), (x_test, y_test) = make_dataset()
model.fit(x_train, y_train, epochs=args.epochs, batch_size=GLOBAL_BATCH_SIZE)
model.save(args.model_dir)
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_boston.tar.gz
# Define your custom training job
job = aip.CustomTrainingJob(  # TODO 1: Your code goes here
display_name="boston_" + TIMESTAMP,
script_path="custom/trainer/task.py",
container_uri=TRAIN_IMAGE,
requirements=["gcsfs==0.7.1", "tensorflow-datasets==4.4"],
)
print(job)
MODEL_DIR = "{}/{}".format(BUCKET_NAME, TIMESTAMP)
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
if TRAIN_GPU:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
sync=True,
)
else:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
base_output_dir=MODEL_DIR,
sync=True,
)
model_path_to_deploy = MODEL_DIR
import tensorflow as tf
local_model = tf.keras.models.load_model(MODEL_DIR)
import numpy as np
from tensorflow.keras.datasets import boston_housing
(_, _), (x_test, y_test) = boston_housing.load_data(
path="boston_housing.npz", test_split=0.2, seed=113
)
def scale(feature):
max = np.max(feature)
feature = (feature / max).astype(np.float32)
return feature
# Let's save one data item that has not been scaled
x_test_notscaled = x_test[0:1].copy()
for _ in range(13):
x_test[_] = scale(x_test[_])
x_test = x_test.astype(np.float32)
print(x_test.shape, x_test.dtype, y_test.shape)
print("scaled", x_test[0])
print("unscaled", x_test_notscaled)
# TODO 2: Your code goes here
local_model.evaluate(x_test, y_test)
loaded = tf.saved_model.load(model_path_to_deploy)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
serving_output = list(loaded.signatures["serving_default"].structured_outputs.keys())[0]
print("Serving function output:", serving_output)
input_name = local_model.input.name
print("Model input name:", input_name)
output_name = local_model.output.name
print("Model output name:", output_name)
XAI = "ig" # [ shapley, ig, xrai ]
if XAI == "shapley":
PARAMETERS = {"sampled_shapley_attribution": {"path_count": 10}}
elif XAI == "ig":
PARAMETERS = {"integrated_gradients_attribution": {"step_count": 50}}
elif XAI == "xrai":
PARAMETERS = {"xrai_attribution": {"step_count": 50}}
parameters = aip.explain.ExplanationParameters(PARAMETERS)
INPUT_METADATA = {
"input_tensor_name": serving_input,
"encoding": "BAG_OF_FEATURES",
"modality": "numeric",
"index_feature_mapping": [
"crim",
"zn",
"indus",
"chas",
"nox",
"rm",
"age",
"dis",
"rad",
"tax",
"ptratio",
"b",
"lstat",
],
}
OUTPUT_METADATA = {"output_tensor_name": serving_output}
input_metadata = aip.explain.ExplanationMetadata.InputMetadata(INPUT_METADATA)
output_metadata = aip.explain.ExplanationMetadata.OutputMetadata(OUTPUT_METADATA)
metadata = aip.explain.ExplanationMetadata(
inputs={"features": input_metadata}, outputs={"medv": output_metadata}
)
model = aip.Model.upload(  # TODO 3: Your code goes here
display_name="boston_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
explanation_parameters=parameters,
explanation_metadata=metadata,
sync=False,
)
model.wait()
DEPLOYED_NAME = "boston-" + TIMESTAMP
TRAFFIC_SPLIT = {"0": 100}
MIN_NODES = 1
MAX_NODES = 1
if DEPLOY_GPU:
endpoint = model.deploy(  # TODO 4a: Your code goes here
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU.name,
accelerator_count=DEPLOY_NGPU,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
else:
endpoint = model.deploy(  # TODO 4b: Your code goes here
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=0,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
test_item = x_test[0]
test_label = y_test[0]
print(test_item.shape)
instances_list = [test_item.tolist()]
# TODO 5: Your code goes here
prediction = endpoint.explain(instances_list)
print(prediction)
value = prediction[0][0][0]
print("Predicted Value:", value)
!python3 -m pip install tabulate
from tabulate import tabulate
feature_names = [
"crim",
"zn",
"indus",
"chas",
"nox",
"rm",
"age",
"dis",
"rad",
"tax",
"ptratio",
"b",
"lstat",
]
attributions = prediction.explanations[0].attributions[0].feature_attributions
rows = []
for i, val in enumerate(feature_names):
rows.append([val, test_item[i], attributions[val]])
print(tabulate(rows, headers=["Feature name", "Feature value", "Attribution value"]))
# Prepare 10 test examples to your model for prediction
instances = []
for i in range(10):
instances.append(x_test[i].tolist())
response = endpoint.explain(instances)
import numpy as np
def sanity_check_explanations(
explanation, prediction, mean_tgt_value=None, variance_tgt_value=None
):
passed_test = 0
total_test = 1
# `attributions` is a dict where keys are the feature names
# and values are the feature attributions for each feature
baseline_score = explanation.attributions[0].baseline_output_value
print("baseline:", baseline_score)
# Sanity check 1
# The prediction at the input is equal to that at the baseline.
# Please use a different baseline. Some suggestions are: random input, training
# set mean.
if abs(prediction - baseline_score) <= 0.05:
print("Warning: example score and baseline score are too close.")
print("You might not get attributions.")
else:
passed_test += 1
print("Sanity Check 1: Passed")
print(passed_test, " out of ", total_test, " sanity checks passed.")
i = 0
for explanation in response.explanations:
try:
prediction = np.max(response.predictions[i]["scores"])
except TypeError:
prediction = np.max(response.predictions[i])
sanity_check_explanations(explanation, prediction)
i += 1
endpoint.undeploy_all()
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
# Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
# Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Forward and Backward mode gradients in TFF
Step2: This notebook demonstrates the difference between forward and backward gradient computation
Step3: Consider a simple vector-function in two variables $x$ and $y$
Step4: Backward mode
Step5: The user has access to [$u_1$, $u_2$, $u_3$] as well. Setting the values
Step6: Forward mode
Step7: Remember, TensorFlow is the tool commonly used in Machine Learning, where the aim is to minimize a scalar loss function summed over all training examples. Because the output is a single scalar while the inputs (the model parameters) are many, its full gradient can be computed in one backward pass, which is why backward mode is the default in Machine Learning.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title Upgrade to TensorFlow nightly
!pip install --upgrade tf-nightly
#@title Install TF Quant Finance
!pip install tff-nightly
#@title Imports { display-mode: "form" }
import tensorflow as tf
import functools
import tf_quant_finance as tff
def func(x):
func = tf.stack([x[0]**2, x[1]**2, x[0] * x[1]])
return func
start = tf.constant([1,2], dtype=tf.float64)
# Note that the output is u d(u.f(x, y))dx and d(u.f(x, y))dy
tff.math.gradients(func, start)
tff.math.gradients(func, start,
output_gradients=tf.constant([0, 0, 1], dtype=tf.float64))
tff.math.fwd_gradient(func, start)
tff.math.fwd_gradient(func, start,
input_gradients=tf.constant([1.0, 0.0], dtype=tf.float64))
tff.math.fwd_gradient(func, start,
input_gradients=tf.constant([0.0, 0.1], dtype=tf.float64))
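# An aside (a sketch with plain TensorFlow, assuming TF 2.x): the same comparison
# without the tff wrappers — backward mode via tf.GradientTape versus forward mode
# via tf.autodiff.ForwardAccumulator.
x = tf.constant([1.0, 2.0], dtype=tf.float64)
with tf.GradientTape() as tape:
    tape.watch(x)
    y = tf.reduce_sum(func(x))  # scalar, so one backward pass yields d(sum f)/dx
print(tape.gradient(y, x))
with tf.autodiff.ForwardAccumulator(
        primals=x,
        tangents=tf.constant([1.0, 0.0], dtype=tf.float64)) as acc:
    y = func(x)
print(acc.jvp(y))  # one forward pass: directional derivative df/dx only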
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Use pwd (print working directory) to see what directory we're currently in.
Step2: For this tutorial to run we need to be in the hyperspectral directory. If you're not there, you can use cd (change directory) to get there.
Step3: Next we'll import some hyperspectral functions specific to the AOP data, from the neon_aop_hyperspectral.py module. We encourage you to look through the code in the module on your own time to gain a better understanding of how to read in the NEON hyperspectral hdf5 format. We'll start by reading in an h5 reflectance file using the aop_h5refl2array function, which pulls out the hyperspectral data cube (m x n x bands) and corresponding metadata that contains relevant information such as the spatial referencing.
Step4: To see all the functions and modules we've imported use %whos
Step5: As with standard python functions, we've included docstrings in these functions so that you can use help or ? to display details about how to use the function.
Step6: Now that we have an idea of how this function works, let's try it out. First, set the variable serc_h5_tile to be reflectance data, specifying the relative path, or alterantively using r'Path\To\File.h5' or os.path.join to create the full path to the data file. Note that if you want to run this notebook later on a different reflectance tile, you just have to change this variable.
Step7: Now that we've specified our reflectance tile, we can call aop_h5refl2array to read in the reflectance tile as a Python Numpy array called refl , and the associated metadata into a dictionary metadata.
Step8: We can use the shape method to see the dimensions of the array we read in. NEON tiles are (1000 x 1000 x # of bands), the number of bands may vary depending on the hyperspectral sensor used, but should be in the vicinity of 426.
Step9: Plot a single band of reflectance data
Step10: Now that we have loaded this function, let's extract a single band from the SERC reflectance array and plot it
Step11: RGB Plots - Band Stacking
Step12: First we will look at red, green, and blue bands, whos indices are defined below. To confirm that these band indices correspond to wavelengths in the expected portion of the spectrum, we can print out the wavelength values stored in metadata['wavelength']
Step13: We selected these bands so that they fall within the visible range of the electromagnetic spectrum (400-700 nm)
Step14: Plot an RGB band combination
Step15: You'll notice that this image is very dark; it is possible to make out some of the features (roads, buildings), but it is not ideal. Since colorlimits don't apply to 3-band images, we have to use some other image processing tools to enhance the visibility of this image.
Step16: False Color Image - Color Infrared (CIR)
Step17: Demo
Step18: Demo
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import h5py, os, osr, copy
%matplotlib inline
pwd
cd hyperspectral
from neon_aop_hyperspectral import *
%whos
help(aop_h5refl2array)
#h5_tile = ('../../../data/NEON_D16_MCRA_DP3_566300_4901000_reflectance.h5')
h5_tile = ('../../../data/2019_JORN/Refl/NEON_D14_JORN_DP3_324000_3620000_reflectance.h5')
refl,metadata = aop_h5refl2array(h5_tile)
refl.shape
list(metadata.items())
plot_aop_refl?
b56 = refl[:,:,55]
plot_aop_refl(b56,
metadata['spatial extent'],
colorlimit=(0,0.3),
title='Band 56 Reflectance',
cmap_title='Reflectance',
colormap='Greys_r')
stack_rgb?
rgb_bands = (58,34,19)
print('Band 58 Center Wavelength = %.2f' %(metadata['wavelength'][57]),'nm')
print('Band 34 Center Wavelength = %.2f' %(metadata['wavelength'][33]),'nm')
print('Band 19 Center Wavelength = %.2f' %(metadata['wavelength'][18]),'nm')
rgb = stack_rgb(refl,rgb_bands)
rgb.shape
plot_aop_refl(rgb,
metadata['spatial extent'],
title='RGB Image',
cbar='off')
plot_aop_rgb?
plot_aop_rgb(rgb,
metadata['spatial extent'],
ls_pct=1,
plot_title = 'RGB')
CIRbands = (90,34,19)
print('Band 90 Center Wavelength = %.2f' %(metadata['wavelength'][89]),'nm')
print('Band 34 Center Wavelength = %.2f' %(metadata['wavelength'][33]),'nm')
print('Band 19 Center Wavelength = %.2f' %(metadata['wavelength'][18]),'nm')
cir = stack_rgb(refl,CIRbands)
plot_aop_rgb(cir,
metadata['spatial extent'],
ls_pct=2,
plot_title='Color Infrared Image')
from ipywidgets import *
from skimage import exposure
array = refl.copy()
metadata = metadata.copy()
def RGBplot_widget(R,G,B):
#Pre-allocate array size
rgbArray = np.zeros((array.shape[0],array.shape[1],3), 'uint8')
Rband = array[:,:,R-1].astype(np.float)
#Rband_clean = clean_band(Rband,Refl_md)
Gband = array[:,:,G-1].astype(np.float)
#Gband_clean = clean_band(Gband,Refl_md)
Bband = array[:,:,B-1].astype(np.float)
#Bband_clean = clean_band(Bband,Refl_md)
rgbArray[..., 0] = Rband*256
rgbArray[..., 1] = Gband*256
rgbArray[..., 2] = Bband*256
# Apply Adaptive Histogram Equalization to Improve Contrast:
img_nonan = np.ma.masked_invalid(rgbArray) #first mask the image
img_adapteq = exposure.equalize_adapthist(img_nonan, clip_limit=0.10)
plot = plt.imshow(img_adapteq,extent=metadata['spatial extent']);
plt.title('Bands: \nR:' + str(R) + ' (' + str(round(metadata['wavelength'][R-1])) +'nm)'
+ '\n G:' + str(G) + ' (' + str(round(metadata['wavelength'][G-1])) + 'nm)'
+ '\n B:' + str(B) + ' (' + str(round(metadata['wavelength'][B-1])) + 'nm)');
ax = plt.gca(); ax.ticklabel_format(useOffset=False, style='plain')
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90)
interact(RGBplot_widget, R=(1,426,1), G=(1,426,1), B=(1,426,1));
rgbArray = rgb.copy()
def linearStretch(percent):
pLow, pHigh = np.percentile(rgbArray[~np.isnan(rgbArray)], (percent,100-percent))
img_rescale = exposure.rescale_intensity(rgbArray, in_range=(pLow,pHigh))
plt.imshow(img_rescale,extent=metadata['spatial extent'])
plt.title('SERC RGB \n Linear ' + str(percent) + '% Contrast Stretch');
ax = plt.gca()
ax.ticklabel_format(useOffset=False, style='plain')
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90)
interact(linearStretch,percent=(0,20,1));
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The coefficients for the $i+1$ temperature (predicted) are
Step2: The coefficients for the $i$ temperature (current) are
|
<ASSISTANT_TASK:>
Python Code:
from sympy import *
init_printing()
tnew_x = Symbol('T^{i+1}_x')
tnew_xprev = Symbol('T^{i+1}_{x-1}')
tnew_xafter = Symbol('T^{i+1}_{x+1}')
told_x = Symbol('T^{i}_x')
told_xprev = Symbol('T^{i}_{x-1}')
told_xafter = Symbol('T^{i}_{x+1}')
u_x = Symbol('\kappa_x')
u_xprev = Symbol('\kappa_{x-1}')
u_xafter = Symbol('\kappa_{x+1}')
delta_t = Symbol('\Delta t')
delta_x = Symbol('\Delta x')
told_x, u_xprev, tnew_xafter, delta_x
lhs = (tnew_x - told_x)/delta_t
lhs # The time derivative
rhs_new = 0.5*(u_x*(tnew_xprev - 2*tnew_x + tnew_xafter)/delta_x**2 +
((tnew_x - tnew_xprev)/(delta_x))*((u_x - u_xprev)/(delta_x)))
rhs_old = 0.5*(u_x*(told_xprev - 2*told_x + told_xafter)/delta_x**2 +
((told_x - told_xprev)/(delta_x))*((u_x - u_xprev)/(delta_x)))
rhs_new, rhs_old # The two parts of the crank-nicholson RHS.
expr = lhs - rhs_new - rhs_old
expr
poly_form = Poly(expr, tnew_x, tnew_xafter, tnew_xprev, told_x, told_xafter, told_xprev)
poly_form
(poly_form.coeff_monomial(tnew_xprev)*delta_t).simplify(), (poly_form.coeff_monomial(tnew_x)*delta_t).simplify(), (poly_form.coeff_monomial(tnew_xafter)*delta_t).simplify()
-(poly_form.coeff_monomial(told_xprev)*delta_t).simplify(), (poly_form.coeff_monomial(told_x)*-delta_t).simplify(), -(poly_form.coeff_monomial(told_xafter)*delta_t).simplify()
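# An aside (a sketch, assuming constant kappa, a uniform grid, and ignoring
# boundary conditions): the derived coefficients assemble into the tridiagonal
# Crank-Nicolson system A T^{i+1} = B T^{i}, solved once per time step.
import numpy as np
nx, dx_val, dt_val, kappa = 5, 1.0, 0.1, 1.0
r = kappa * dt_val / (2 * dx_val**2)  # with constant kappa the gradient term vanishes
A = (np.diag((1 + 2*r) * np.ones(nx))
     + np.diag(-r * np.ones(nx - 1), 1)
     + np.diag(-r * np.ones(nx - 1), -1))
B = (np.diag((1 - 2*r) * np.ones(nx))
     + np.diag(r * np.ones(nx - 1), 1)
     + np.diag(r * np.ones(nx - 1), -1))
T_old = np.linspace(0.0, 1.0, nx)  # some initial temperature profile
T_new = np.linalg.solve(A, B @ T_old)
print(T_new)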
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1
Step2: Step 1
Step3: Step 2
Step4: Step 2
Step5: Step 3
Step6: Step 3
Step7: Step 4
Step8: Step 5
Step9: Step 5
Step10: Pause
Step11: Step 7
Step12: Step 8
|
<ASSISTANT_TASK:>
Python Code:
import unittest
def run_tests():
suite = unittest.TestLoader().loadTestsFromTestCase(TestFibonacci)
unittest.TextTestRunner().run(suite)
class TestFibonacci(unittest.TestCase):
def test_fibonacci(self):
self.assertEqual(0, fibonacci(0), "fibonacci(0) should equal 0")
run_tests()
def fibonacci(n):
return 0
run_tests()
class TestFibonacci(unittest.TestCase):
def test_fibonacci(self):
self.assertEqual(0, fibonacci(0), "fibonacci(0) should equal 0")
self.assertEqual(1, fibonacci(1), "fibonacci(1) should equal 1")
run_tests()
def fibonacci(n):
if n == 0: return 0
return 1
run_tests()
class TestFibonacci(unittest.TestCase):
def test_fibonacci(self):
self.assertEqual(0, fibonacci(0), "fibonacci(0) should equal 0")
self.assertEqual(1, fibonacci(1), "fibonacci(1) should equal 1")
self.assertEqual(1, fibonacci(2), "fibonacci(2) should equal 1")
run_tests()
class TestFibonacci(unittest.TestCase):
def test_fibonacci(self):
self.assertEqual(0, fibonacci(0), "fibonacci(0) should equal 0")
self.assertEqual(1, fibonacci(1), "fibonacci(1) should equal 1")
self.assertEqual(1, fibonacci(2), "fibonacci(2) should equal 1")
self.assertEqual(2, fibonacci(3), "fibonacci(3) should equal 2")
run_tests()
def fibonacci(n):
if n == 0: return 0
if n <= 2: return 1
return 2
run_tests()
class TestFibonacci(unittest.TestCase):
def test_fibonacci(self):
self.assertEqual(0, fibonacci(0), "fibonacci(0) should equal 0")
self.assertEqual(1, fibonacci(1), "fibonacci(1) should equal 1")
self.assertEqual(1, fibonacci(2), "fibonacci(2) should equal 1")
self.assertEqual(2, fibonacci(3), "fibonacci(3) should equal 2")
self.assertEqual(3, fibonacci(4), "fibonacci(4) should equal 3")
run_tests()
def fibonacci(n):
if n == 0: return 0
if n <= 2: return 1
if n == 3: return 2
return 3
run_tests()
def fibonacci(n):
if n == 0: return 0
if n <= 2: return 1
if n == 3: return 2
return 2 + 1
run_tests()
def fibonacci(n):
if n == 0: return 0
if n <= 2: return 1
return fibonacci(n - 1) + fibonacci(n - 2)
run_tests()
def fibonacci(n):
if n == 0: return 0
if n == 1: return 1
return fibonacci(n - 1) + fibonacci(n - 2)
run_tests()
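# An aside (a sketch, not part of the original kata): a natural refactor step is
# to memoize the recursion, turning exponential time into linear while keeping
# every test green.
import functools

@functools.lru_cache(maxsize=None)
def fibonacci(n):
    if n == 0: return 0
    if n == 1: return 1
    return fibonacci(n - 1) + fibonacci(n - 2)

run_tests()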
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now there’s a lot of volatility in oil right now. It’s been rough for producers to say the least. So let’s check out some stocks that are involved in that market.
Step2: WTI, which is W&T Offshore Inc. They drill in the Gulf of Mexico.
Step3: Now we can get these one by one in a for loop...
Step4: or we can just get them all by passing in the list....
Step5: But we get something different back that we haven’t encountered yet. This is a panel. Now panels are advanced and explaining their use case is outside of this video. However, I’ll give you the basics.
Step6: We can see they've got an items axis, a major axis and a minor axis. Panels are a core part of pandas, however they are much less used and therefore a bit neglected as of now. That's not just me trying to avoid the topic - which would be my suspicion if I heard that. But that's stated almost verbatim in the docs.
Step7: However let’s touch on the basis because you may come across them.
Step8: We’ve got these three axes so we want to query data in them. We've got to do that a bit differently.
Step9: The major and minor axes are accessed differently, with the major_xs and minor_xs commands.
Step10: Some summary statistics are available to us like mean on the panel.
Step11: We can perform different kinds of selections and transposition using the major and minor axes, however as I said above I'm not going to cover this material. However I'm going to convert this panel to a data frame to show you how to do that, and in the process we're going to cover a new topic!
Step12: But when we use the head method to see the first 5 we see things are a little different and that’s because we now have multiple indices or a hierarchical or multi index. Now hierarchical indexes are extremely powerful but they’re beyond the scope of this current video - I'll touch on them a bit later in this section.
Step13: Alright, we have our dataset in a hierarchical index, but that's not what we want to work with right now. Something that will likely come up when you're analyzing data is wanting to completely reset your index. Well, have no fear, we can do that with the reset_index command.
Step14: Now we've reset our index. Now I don't want to work with this data set as I'm not an expert on financial data however we'll be working with a really cool data set in our next video. We'll be working with an airplane data set that has flights across the country. This is going to give us the opportunity to work on a ton of cool problems.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import pandas.io.data
?pandas.io.data # <tab>
import datetime
print(datetime.datetime(2010,1,1))
start = pd.to_datetime('2010-1-1')
end = datetime.datetime(2015,1,1)
ticker_symbols = ['WTI','CHK','TSLA','CBAK']
wti = pd.io.data.get_data_yahoo(ticker_symbols[0],start=start,end=end)
wti.head()
for symbol in ticker_symbols:
print(symbol)
df = pd.io.data.get_data_yahoo(symbol,start=start,end=end)
panl = pd.io.data.get_data_yahoo(ticker_symbols,start=start,end=end)
panl
type(panl)
panl.shape
panl.Open.head()
panl.major_xs('2013-5-1')
panl.minor_xs('CHK').head()
panl.mean()
df = panl.to_frame()
df.head()
df.index.levels
print(len(df.index.levels))
df.reset_index()
df.reset_index(inplace=True)
df
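# An aside (a sketch): set_index is the inverse of reset_index and rebuilds the
# hierarchical index from the flat frame. The level names 'Date' and 'minor'
# come from the panel's axes — check df.columns if yours differ.
df.set_index(['Date', 'minor'], inplace=True)
df.head()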
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Replace the variable values in the cell below
Step2: Create a Dataset from BigQuery
Step3: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/some-article, we want to be left with nytimes as the source.
Step6: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
Step7: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.
Step8: AutoML for text classification requires that
Step9: Let's make sure we have roughly the same number of labels for each of our three labels
Step10: Finally we will save our data, which is currently in-memory, to disk.
Step11: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).
Step12: Let's write the sample datatset to disk.
Step13: Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located
Step14: Loading the dataset
Step15: Let's look again at the number of examples per label to make sure we have a well-balanced dataset
Step16: Preparing the labels
Step17: Preparing the train/test splits
Step18: To be on the safe side, we verify that the train and test splits
Step19: Now let's create the features and labels we will feed our models with
Step20: NNLM Model
Step21: Note that this TF-Hub embedding produces a single 50-dimensional vector when passed a sentence
Step22: Swivel Model
Step23: Similarly as the previous pre-trained embedding, it outputs a single vector when passed a sentence
Step24: Building the models
Step25: Let's also wrap the training code into a train_and_evaluate function that
Step26: Training NNLM
Step27: Training Swivel
Step28: Deploying the model
Step29: Then we can deploy the model using the gcloud CLI as before
Step30: Before we try our deployed model, let's inspect its signature to know what to send to the deployed API
Step31: Let's go ahead and hit our model
|
<ASSISTANT_TASK:>
Python Code:
import os
from google.cloud import bigquery
import pandas as pd
%load_ext google.cloud.bigquery
PROJECT = "your-project-id"  # Replace with your PROJECT
BUCKET = PROJECT
REGION = "us-east1"
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
%%bigquery --project $PROJECT
SELECT
url, title, score
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
LENGTH(title) > 10
AND score > 10
AND LENGTH(url) > 0
LIMIT 10
%%bigquery --project $PROJECT
SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
COUNT(title) AS num_articles
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
AND LENGTH(title) > 10
GROUP BY
source
ORDER BY num_articles DESC
LIMIT 100
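# An aside (a sketch): the same parsing logic sanity-checked locally in Python
# before baking it into the query below. The URL is a made-up example.
import re
url = 'http://mobile.nytimes.com/some-article'
host = re.search(r'.*://(.[^/]+)/', url).group(1)  # 'mobile.nytimes.com'
print(host.split('.')[-2])  # 'nytimes' — the second label from the end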
regex = '.*://(.[^/]+)/'
sub_query =
SELECT
title,
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '{0}'), '.'))[OFFSET(1)] AS source
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '{0}'), '.com$')
AND LENGTH(title) > 10
""".format(regex)
query = """
SELECT
LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title,
source
FROM
({sub_query})
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
""".format(sub_query=sub_query)
print(query)
bq = bigquery.Client(project=PROJECT)
title_dataset = bq.query(query).to_dataframe()
title_dataset.head()
print("The full dataset contains {n} titles".format(n=len(title_dataset)))
title_dataset.source.value_counts()
DATADIR = './data/'
if not os.path.exists(DATADIR):
os.makedirs(DATADIR)
FULL_DATASET_NAME = 'titles_full.csv'
FULL_DATASET_PATH = os.path.join(DATADIR, FULL_DATASET_NAME)
# Let's shuffle the data before writing it to disk.
title_dataset = title_dataset.sample(n=len(title_dataset))
title_dataset.to_csv(
FULL_DATASET_PATH, header=False, index=False, encoding='utf-8')
sample_title_dataset = title_dataset.sample(n=1000)
sample_title_dataset.source.value_counts()
SAMPLE_DATASET_NAME = 'titles_sample.csv'
SAMPLE_DATASET_PATH = os.path.join(DATADIR, SAMPLE_DATASET_NAME)
sample_title_dataset.to_csv(
SAMPLE_DATASET_PATH, header=False, index=False, encoding='utf-8')
sample_title_dataset.head()
import datetime
import os
import shutil
import pandas as pd
import tensorflow as tf
from tensorflow.keras.callbacks import TensorBoard, EarlyStopping
from tensorflow_hub import KerasLayer
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.utils import to_categorical
print(tf.__version__)
%matplotlib inline
MODEL_DIR = "./text_models"
DATA_DIR = "./data"
ls ./data/
DATASET_NAME = "titles_full.csv"
TITLE_SAMPLE_PATH = os.path.join(DATA_DIR, DATASET_NAME)
COLUMNS = ['title', 'source']
titles_df = pd.read_csv(TITLE_SAMPLE_PATH, header=None, names=COLUMNS)
titles_df.head()
titles_df.source.value_counts()
CLASSES = {
'github': 0,
'nytimes': 1,
'techcrunch': 2
}
N_CLASSES = len(CLASSES)
def encode_labels(sources):
classes = [CLASSES[source] for source in sources]
one_hots = to_categorical(classes, num_classes=N_CLASSES)
return one_hots
encode_labels(titles_df.source[:4])
N_TRAIN = int(len(titles_df) * 0.95)
titles_train, sources_train = (
titles_df.title[:N_TRAIN], titles_df.source[:N_TRAIN])
titles_valid, sources_valid = (
titles_df.title[N_TRAIN:], titles_df.source[N_TRAIN:])
sources_train.value_counts()
sources_valid.value_counts()
X_train, Y_train = titles_train.values, encode_labels(sources_train)
X_valid, Y_valid = titles_valid.values, encode_labels(sources_valid)
X_train[:3]
Y_train[:3]
# TODO 1
NNLM = "https://tfhub.dev/google/nnlm-en-dim50/2"
nnlm_module = KerasLayer(
NNLM, output_shape=[50], input_shape=[], dtype=tf.string, trainable=True)
# TODO 1
nnlm_module(tf.constant(["The dog is happy to see people in the street."]))
# TODO 1
SWIVEL = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim-with-oov/1"
swivel_module = KerasLayer(
SWIVEL, output_shape=[20], input_shape=[], dtype=tf.string, trainable=True)
# TODO 1
swivel_module(tf.constant(["The dog is happy to see people in the street."]))
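# An aside (a sketch): pre-trained sentence embeddings place related sentences
# near each other; cosine similarity of two Swivel embeddings makes that
# concrete. The example sentences are made up.
a = swivel_module(tf.constant(["google launches a new machine learning product"]))
b = swivel_module(tf.constant(["startup releases an AI developer tool"]))
cosine = tf.reduce_sum(a * b) / (tf.norm(a) * tf.norm(b))
print(cosine.numpy())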
def build_model(hub_module, name):
model = Sequential([
hub_module, # TODO 2
Dense(16, activation='relu'),
Dense(N_CLASSES, activation='softmax')
], name=name)
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
return model
def train_and_evaluate(train_data, val_data, model, batch_size=5000):
X_train, Y_train = train_data
tf.random.set_seed(33)
model_dir = os.path.join(MODEL_DIR, model.name)
if tf.io.gfile.exists(model_dir):
tf.io.gfile.rmtree(model_dir)
history = model.fit(
X_train, Y_train,
epochs=100,
batch_size=batch_size,
validation_data=val_data,
callbacks=[EarlyStopping(), TensorBoard(model_dir)],
)
return history
data = (X_train, Y_train)
val_data = (X_valid, Y_valid)
nnlm_model = build_model(nnlm_module, 'nnlm')
nnlm_history = train_and_evaluate(data, val_data, nnlm_model)
history = nnlm_history
pd.DataFrame(history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(history.history)[['accuracy', 'val_accuracy']].plot()
swivel_model = build_model(swivel_module, name='swivel')
swivel_history = train_and_evaluate(data, val_data, swivel_model)
history = swivel_history
pd.DataFrame(history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(history.history)[['accuracy', 'val_accuracy']].plot()
OUTPUT_DIR = "./savedmodels"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(OUTPUT_DIR, 'swivel')
os.environ['EXPORT_PATH'] = EXPORT_PATH
shutil.rmtree(EXPORT_PATH, ignore_errors=True)
tf.saved_model.save(swivel_model, EXPORT_PATH)
%%bash
# TODO 5
MODEL_NAME=title_model
VERSION_NAME=swivel
if [[ $(gcloud ai-platform models list --format='value(name)' | grep ^$MODEL_NAME$) ]]; then
echo "$MODEL_NAME already exists"
else
echo "Creating $MODEL_NAME"
gcloud ai-platform models create --region=$REGION $MODEL_NAME
fi
if [[ $(gcloud ai-platform versions list --model $MODEL_NAME --format='value(name)' | grep ^$VERSION_NAME$) ]]; then
echo "Deleting already existing $MODEL_NAME:$VERSION_NAME ... "
echo yes | gcloud ai-platform versions delete --model=$MODEL_NAME $VERSION_NAME
echo "Please run this cell again if you don't see a Creating message ... "
sleep 2
fi
echo "Creating $MODEL_NAME:$VERSION_NAME"
gcloud ai-platform versions create \
--model=$MODEL_NAME $VERSION_NAME \
--framework=tensorflow \
--python-version=3.7 \
--runtime-version=2.1 \
--origin=$EXPORT_PATH \
--staging-bucket=gs://$BUCKET \
--machine-type n1-standard-4 \
--region=$REGION
!saved_model_cli show \
--tag_set serve \
--signature_def serving_default \
--dir {EXPORT_PATH}
!find {EXPORT_PATH}
%%writefile input.json
{"keras_layer_1_input": "hello"}
!gcloud ai-platform predict \
--model title_model \
--json-instances input.json \
--version swivel \
--region=$REGION
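# An aside (a sketch): before (or instead of) hitting the deployed endpoint, the
# in-memory Keras model can classify titles directly. The example titles are
# made up.
import numpy as np
titles = np.array(["yosemite camping reservations open for the summer",
                   "pull requests now support draft mode"])
probs = swivel_model.predict(titles)
inv_classes = {v: k for k, v in CLASSES.items()}
for title, p in zip(titles, probs):
    print(title, '->', inv_classes[int(np.argmax(p))])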
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Review
Step2: Check your work by executing the next cell.
Step3: 2. Text
Step4: print prints all of its arguments together, separated by single spaces.
Step5: 2.1. String Methods
Step6: Once a name is bound to a string value, methods can be invoked on that name as well. The name doesn't change in this case, so a new name is needed to capture the result.
Step7: Remember that you could call functions on the results of other functions. For example,
Step8: Here's a picture of how Python evaluates a "chained" method call like that
Step9: Other string methods do not take any arguments at all, because the original string is all that's needed to compute the result. In this case, parentheses are still needed, but there's nothing in between the parentheses. Here are some methods that work that way
Step10: Methods can also take arguments that aren't strings. For example, strings have a method called zfill that "pads" them with the character 0 so that they reach a certain length. This can be useful for displaying numbers in a uniform format
Step11: All these string methods are useful, but most programmers don't memorize their names or how to use them. In the "real world," people usually just search the internet for documentation and examples. (Stack Overflow has a huge database of answered questions.)
Step12: However, there are built-in functions to convert numbers to strings and strings to numbers.
Step13: Suppose you're writing a program that looks for dates in a text, and you want your program to find the amount of time that elapsed between two years it has identified. It doesn't make sense to subtract two texts, but you can first convert the text containing the years into numbers.
Step14: 2.3. Strings as function arguments
Step15: 3. Importing code
Step16: pi is defined inside math, and the way that we access names that are inside modules is by writing the module's name, then a dot, then the name of the thing we want
Step17: 3.1. Importing functions
Step18: For your reference, here are some more examples of functions from the math module
Step19: A function that displays a picture
Step20: 4. Arrays
Step21: Each thing in an array (in the above case, the numbers 0.125, 4.75, and -1.3) is called an element of that array.
Step22: Question 4.1.2. Make an array containing the numbers 0, 1, -1, $\pi$, and $e$, in that order. Name it interesting_numbers. Hint
Step23: Question 4.1.3. Make an array containing the five strings "Hello", ",", " ", "world", and "!". (The third one is a single space inside quotes.) Name it hello_world_components.
Step24: 4.1.1. np.arange
Step25: Temperature readings
Step26: 4.2. Working with single elements of arrays ("indexing")
Step27: Here's how we get the first element of population, which is the world population in the first year in the dataset, 1950.
Step28: The value of that expression is the number 2557628654 (around 2.5 billion), because that's the first thing in the array population.
Step29: Question 4.2.1. Set population_1973 to the world population in 1973, by getting the appropriate element from population using item.
Step30: Run the next cell to visualize the elements of population and their indices. You'll learn next week how to make tables like this!
Step31: Question 4.2.2. What's the index of the 31st item in population? Try to answer the question without looking at the table or the data!
Step32: 4.3. Doing something to every element of an array
Step33: But this is tedious and doesn't really take advantage of the fact that we are using a computer.
Step34: <img src="array_logarithm.jpg">
Step35: You can do the same with addition, subtraction, multiplication, and exponentiation (**). For example, you can calculate a tip on several restaurant bills at once (in this case just 3)
Step36: <img src="array_multiplication.jpg">
Step37: Question 4.3.3. more_restaurant_bills.csv contains 100,000 bills! Compute the total charge for each one. How is your code different?
Step38: The function sum takes a single array of numbers as its argument. It returns the sum of all the numbers in that array (so it returns a single number, not an array).
Step39: Question 4.3.5. The powers of 2 ($2^0 = 1$, $2^1 = 2$, $2^2 = 4$, etc) arise frequently in computer science. (For example, you may have noticed that storage on cell phones comes in powers of 2, like 16 GB, 32 GB, or 64 GB.) Use np.arange and the exponentiation operator ** to compute the first 30 powers of 2, starting from 2^0.
|
<ASSISTANT_TASK:>
Python Code:
from datascience import *
from client.api.assignment import load_assignment
tests = load_assignment('lab02.ok')
new_year = ...
new_year
_ = tests.grade('q11')
print("I <3", 'Data Science')
woman_asking = ...
woman_quote = '"Can it be that you have come from outer space?"'
gagarin_reply = 'Gagarin replied:'
gagarin_quote = ...
print(woman_asking, woman_quote)
print(gagarin_reply, gagarin_quote)
_ = tests.grade('q21')
# Replace one letter
'Hello'.replace('o', 'a')
# Replace a sequence of letters, which appears twice
'hitchhiker'.replace('hi', 'ma')
sharp = 'edged'
hot = sharp.replace('ed', 'ma')
print('sharp =', sharp)
print('hot =', hot)
# Calling replace on the output of another call to
# replace
'train'.replace('t', 'ing').replace('in', 'de')
you = ...
this = ...
a = 'beeper'
the = a.replace('p', you)
the.replace('bee', this)
_ = tests.grade('q211')
'unIverSITy of caliFORnia'.title()
print("5.12".zfill(6))
print("10.50".zfill(6))
8 + "8"
8 + int("8")
# Some text data:
one_year = "1618"
another_year = "1648"
# Complete the next line. Note that we can't just write:
# another_year - one_year
# If you don't see why, try seeing what happens when you
# write that here.
difference = ...
difference
_ = tests.grade('q221')
a_very_long_sentence = "The representatives of the French people, organized as a National Assembly, believing that the ignorance, neglect, or contempt of the rights of man are the sole cause of public calamities and of the corruption of governments, have determined to set forth in a solemn declaration the natural, unalienable, and sacred rights of man, in order that this declaration, being constantly before all the members of the Social body, shall remind them continually of their rights and duties; in order that the acts of the legislative power, as well as those of the executive power, may be compared at any moment with the objects and purposes of all political institutions and may thus be more respected, and, lastly, in order that the grievances of the citizens, based hereafter upon simple and incontestable principles, shall tend to the maintenance of the constitution and redound to the happiness of all."
sentence_length = ...
sentence_length
_ = tests.grade('q231')
import math
radius = 5
area_of_circle = radius**2 * math.pi
area_of_circle
near_twenty = ...
near_twenty
_ = tests.grade('q31')
sine_of_pi_over_four = ...
sine_of_pi_over_four
_ = tests.grade('q311')
# Calculating factorials.
math.factorial(5)
# Calculating logarithms (the logarithm of 8 in base 2).
# The result is 3 because 2 to the power of 3 is 8.
math.log(8, 2)
# Calculating square roots.
math.sqrt(5)
# Calculating cosines.
math.cos(math.pi)
# Import the module IPython.display.
...
# Replace the ... with a call to the Image function
# in the IPython.display module, which should produce
# a picture.
art = ...
art
_ = tests.grade('q312')
make_array(0.125, 4.75, -1.3)
small_numbers = ...
small_numbers
_ = tests.grade('q411')
interesting_numbers = ...
interesting_numbers
_ = tests.grade('q412')
hello_world_components = ...
hello_world_components
_ = tests.grade('q413')
...
multiples_of_99 = ...
multiples_of_99
_ = tests.grade('q4111')
collection_times = ...
collection_times
_ = tests.grade('q4112')
# Don't worry too much about what goes on in this cell.
from datascience import Table
population = Table.read_table("world_population.csv").column("Population")
population
population.item(0)
# The third element in the array is the population
# in 1952.
population_1952 = population.item(2)
population_1952
# The thirteenth element in the array is the population
# in 1962 (which is 1950 + 12).
population_1962 = population.item(12)
population_1962
# The 66th element is the population in 2015.
population_2015 = population.item(65)
population_2015
# The array has only 66 elements, so this doesn't work.
# (There's no element with 66 other elements before it.)
population_2016 = population.item(66)
population_2016
# Since make_array returns an array, we can call .item(3)
# on its output to get its 4th element, just like we
# "chained" together calls to the method "replace" earlier.
make_array(-1, -3, 4, -2).item(3)
population_1973 = ...
population_1973
_ = tests.grade('q421')
Table.read_table("world_population.csv").show()
index_of_31st_item = ...
_ = tests.grade("q422")
population_1950_magnitude = math.log10(population.item(0))
population_1951_magnitude = math.log10(population.item(1))
population_1952_magnitude = math.log10(population.item(2))
population_1953_magnitude = math.log10(population.item(3))
...
population_magnitudes = ...
population_magnitudes
_ = tests.grade('q431')
population_in_billions = population / 1000000000
population_in_billions
restaurant_bills = Table.read_table("restaurant_bills.csv").column("Bill")
print("Restaurant bills:\t", restaurant_bills)
tips = .2 * restaurant_bills
print("Tips:\t\t\t", tips)
total_charges = ...
total_charges
_ = tests.grade('q432')
more_restaurant_bills = Table.read_table("more_restaurant_bills.csv").column("Bill")
more_total_charges = ...
more_total_charges
_ = tests.grade('q433')
sum_of_bills = ...
sum_of_bills
_ = tests.grade('q434')
powers_of_2 = ...
powers_of_2
_ = tests.grade('q435')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, have a look at the distribution of gestational ages, as this might determine which buckets we use.
Step2: Investigate the pappa distributions and what range is "reasonable"
Step3: Now look at some basic plots of hormone readings, to see if there are simple looking relationships before we express as multiples of the mean
Step4: I think the right way to interpret this is that each vertical slice (which is probably a single day) is a distribution, and so (assuming enough data in that slice), we determine the mean of that, then express every other reading in that slice as a multiple of that mean.
Step5: Pappa distributions by GA
Step6: The main observation here is that the mean is not representative of where most of the data lies, because of the skewed nature of the distribution.
Step7: This is showing that if we look at the log of pappa instead, we get something that looks a lot more Gaussian, and in fact the mean of this distribution translates back into something that visually looks a lot closer to the middle of the original distribution.
|
<ASSISTANT_TASK:>
Python Code:
from os.path import join
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
data_fname = r"../data_staging/all_by_baby_enriched_v3.csv"
df = pd.read_csv(data_fname)
all = pd.concat([df.t1_ga_weeks, df.t2_ga_weeks, df.t3_ga_weeks])
all.dropna(inplace=True)
print all.describe()
print "Less than 0: {}".format(len(all[all < 0]))
all = all[all > 0]
all.hist().plot()
plt.show()
# Look at the trimester scans one by one
t1 = df[df.t1_ga_weeks > 0]
t2 = df[(df.t2_ga_weeks > 0) & (df.t2_ga_weeks < 30)]
t3 = df[(df.t3_ga_weeks >= 30)]
print "T1\n", t1.t1_ga_weeks.describe()
print "T2\n", t2.t2_ga_weeks.describe()
print "T3\n", t3.t3_ga_weeks.describe()
t1.t1_ga_weeks.hist(color="blue", label="scan1", bins=20).plot()
t2.t2_ga_weeks.hist(color="green", label="scan2", bins=20).plot()
t3.t3_ga_weeks.hist(color="purple", label="scan3", bins=20).plot()
plt.title("Trimester scan count by week")
plt.legend()
plt.show()
t2 = df[(df.t2_ga_weeks > 0) & (df.t2_ga_weeks < 30)]
print t2.t2_ga_weeks.describe()
t2.t2_ga_weeks.hist().plot()
plt.show()
# Check whether the records we're dropping are actually ok and that we haven't made a mistake somewhere
# print df[df.t2_ga_weeks > 30].iloc[0]
t3 = df[(df.t3_ga_weeks >= 30)]
print t3.t3_ga_weeks.describe()
t3_dist = np.histogram(t3.t3_ga_weeks)
print t3_dist
t3.t3_ga_weeks.hist().plot()
plt.show()
t1.loc[:, "t1_pappa_log"] = np.log(t1.t1_pappa)
print t1.t1_pappa.describe()
print t1.t1_pappa_log.describe()
t1_pappa_threshold = 99.95
pappatv = np.percentile(t1.t1_pappa.dropna().sort_values(), t1_pappa_threshold)
t1.t1_pappa_log.hist(bins=50).plot()
plt.title("Histogram of log(Trimester1 pappa)")
plt.show()
plt.scatter(t1.t1_ga_weeks, t1.t1_pappa_log, alpha=0.5, label="log(t1_pappa)")
plt.plot([t1.t1_ga_weeks.min(), t1.t1_ga_weeks.max()], np.log([pappatv, pappatv]),
linestyle="--", label="{}% < {}".format(t1_pappa_threshold, pappatv))
plt.title("log(Trimester1 pappa) by GA")
plt.legend(bbox_to_anchor=(1.6, 1.))
plt.show()
hfields = ["pappa", "b_hcg"]
print len(t1)
t11 = t1[(t1.t1_pappa.map(np.isnan) == False) & (t1.t1_pappa > 0) & (t1.t1_pappa <= 10)]
t12 = t1[t1.t1_pappa > 10]
t12_hist = np.histogram(t12.t1_pappa)
print "Hist of pappa>10"
print t12_hist
print pd.DataFrame(t12_hist[0], index=t12_hist[1][0:-1])
print len(t11)
print t11.t1_pappa.describe()
t11.t1_pappa.hist(bins=50, color="blue").plot()
plt.title("Trimester 1 pappa [0, 10]")
plt.show()
plt.scatter(t11.t1_ga_weeks, t11.t1_pappa, alpha=0.5)
plt.title("Trimester 1 GA weeks by Pappa [0, 10]")
plt.show()
data_rows = []
# Bucket the gestational ages
ga_buckets = np.histogram(t11.t1_ga_weeks, bins=50)[1]
# Use the same buckets for all ga bins, so bucket the total pappa
h_buckets = np.histogram(t11.t1_pappa, bins=50)[1]
# For each ga bucket, get the associated pappa readings
for l, r in zip(ga_buckets[0:-1], ga_buckets[1:]):
ga_bucket = t11[(t11.t1_ga_weeks >= l) & (t11.t1_ga_weeks < r)]
if len(ga_bucket) == 0:
#TODO Want to interpolate ideally, but just use the last row for now
data_rows.append(data_rows[-1])
else:
data_rows.append(ga_bucket.t1_pappa)
data_rows.append(t11[(t11.t1_ga_weeks >= r)].t1_pappa)
# Now calculate histograms of the pappa by ga bucket, so that we have a density for each slice
rows = []
for drow in data_rows:
rows.append(np.histogram(drow, bins=h_buckets)[0])
# Calc extents for the axes
ga_extents = [t11.t1_ga_weeks.min(), t11.t1_ga_weeks.max()]
pappa_extents = [t11.t1_pappa.min(), t11.t1_pappa.max()]
# Plot the whole lot, sort of a 3d heatmap?
# Transpose the data so we have rows as pappa levels and columns as GA
rows = np.transpose(rows)
fig, ax = plt.subplots(figsize=(5, 6))
im = ax.imshow(rows, extent=ga_extents + pappa_extents, cmap="hot", origin="lower", interpolation="bilinear")
plt.colorbar(im, orientation='vertical')
plt.title("Trimester 1 GA by Pappa [0, 10] by density")
plt.show()
week = 12
d1 = 4
s = t11[(t11.t1_ga_weeks >= (week + d1 / 7.)) & (t11.t1_ga_weeks < (week + (d1 + 1) / 7.))]
print s.t1_pappa.describe()
s.t1_pappa.hist(bins=50).plot()
plt.title("Week {}, day {} Pappa [0, 10] distribution".format(week, d1))
plt.show()
log_s = np.log(s.t1_pappa)
print log_s.describe()
print np.exp(log_s.mean())
# log_hist = np.histogram(log_s, bins=50)
log_s.hist(bins=50).plot()
plt.title("Week {}, day {} log(Pappa [0, 10]) distribution".format(week, d1))
plt.show()
plt.scatter(s.dem_mat_age, log_s, alpha=0.5)
plt.title("Maternal age by log(pappa)")
plt.show()
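# An aside (a sketch of the multiples-of-the-mean idea described above): per
# GA-day slice, express each reading as a multiple of exp(mean(log(pappa))) for
# that slice, using the log-normal observation from the histograms.
t11 = t11.copy()
t11['ga_day'] = (t11.t1_ga_weeks * 7).round().astype(int)  # bucket to whole days
slice_centre = t11.groupby('ga_day').t1_pappa.transform(
    lambda s: np.exp(np.log(s).mean()))  # geometric mean per slice
t11['t1_pappa_mom'] = t11.t1_pappa / slice_centre
print(t11.t1_pappa_mom.describe())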
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set some parameters for modelling
Step2: Oblate ellipsoid versus sphere
Step3: Sphere
Step4: Total-field anomalies
Step5: Field components
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.colors import BoundaryNorm
from matplotlib.ticker import MaxNLocator
from fatiando import gridder, utils
from fatiando.gravmag import sphere
from fatiando.mesher import Sphere
import oblate_ellipsoid
from mesher import OblateEllipsoid
# Set some plot parameters
from matplotlib import rcParams
rcParams['figure.dpi'] = 300.
rcParams['font.size'] = 6
rcParams['xtick.labelsize'] = 'medium'
rcParams['ytick.labelsize'] = 'medium'
rcParams['axes.labelsize'] = 'large'
rcParams['legend.fontsize'] = 'medium'
rcParams['savefig.dpi'] = 300.
# The local-geomagnetic field
F, inc, dec = 60000, 50, 20
# Create a regular grid at z = 0 m
shape = (50, 50)
area = [-5000, 5000, -4000, 6000]
xp, yp, zp = gridder.regular(area, shape, z=0)
ellipsoid = OblateEllipsoid(0, 0, 1000, 499.9, 500.1, 40, -60, 180,
{'principal susceptibilities': [0.01, 0.01, 0.01],
'susceptibility angles': [-40, 90, 7],
'remanent magnetization': [0.7, -7, 10]})
magnetization = oblate_ellipsoid.magnetization(ellipsoid, F, inc, dec, demag=True)
magnetization
spherical_body = Sphere(ellipsoid.x, ellipsoid.y, ellipsoid.z,
0.5*(ellipsoid.large_axis + ellipsoid.small_axis),
{'magnetization': magnetization})
spherical_body.props['magnetization']
# total-field anomaly produced by the ellipsoid (in nT)
tf_t = oblate_ellipsoid.tf(xp, yp, zp, [ellipsoid],
F, inc, dec)
# total-field anomaly produced by the sphere (in nT)
tf_s = sphere.tf(xp, yp, zp, [spherical_body], inc, dec)
# residuals
tf_r = tf_t - tf_s
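# An aside (a sketch): a quick quantitative check of how large the sphere
# approximation error is relative to the ellipsoid signal.
rms_residual = np.sqrt(np.mean(tf_r**2))
rms_signal = np.sqrt(np.mean(tf_t**2))
print('RMS residual = %.4f nT (%.2f%% of signal RMS)'
      % (rms_residual, 100 * rms_residual / rms_signal))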
plt.figure(figsize=(3.15, 7))
plt.axis('scaled')
ranges = np.max(np.abs([np.min(tf_t), np.max(tf_t),
np.min(tf_s), np.max(tf_s)]))
levels = MaxNLocator(nbins=20).tick_values(-ranges, ranges)
cmap = plt.get_cmap('RdBu_r')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
plt.subplot(3,1,1)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
tf_t.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
cbar = plt.colorbar()
plt.annotate(s='(a)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
plt.subplot(3,1,2)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
tf_s.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
plt.colorbar()
plt.annotate(s='(b)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
ranges = np.max(np.abs([np.min(tf_r), np.max(tf_r)]))
levels = MaxNLocator(nbins=20).tick_values(-ranges, ranges)
cmap = plt.get_cmap('RdBu_r')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
plt.subplot(3,1,3)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
tf_r.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlabel('y (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
plt.colorbar()
plt.annotate(s='(c)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
plt.tight_layout()
plt.show()
# field components produced by the ellipsoid (in nT)
bx_t = oblate_ellipsoid.bx(xp, yp, zp, [ellipsoid],
F, inc, dec)
by_t = oblate_ellipsoid.by(xp, yp, zp, [ellipsoid],
F, inc, dec)
bz_t = oblate_ellipsoid.bz(xp, yp, zp, [ellipsoid],
F, inc, dec)
bt = [bx_t, by_t, bz_t]
# field components produced by the sphere (in nT)
bx_s = sphere.bx(xp, yp, zp, [spherical_body])
by_s = sphere.by(xp, yp, zp, [spherical_body])
bz_s = sphere.bz(xp, yp, zp, [spherical_body])
bs = [bx_s, by_s, bz_s]
# residuals
bx_r = bx_t - bx_s
by_r = by_t - by_s
bz_r = bz_t - bz_s
br = [bx_r, by_r, bz_r]
plt.figure(figsize=(3.15, 7))
plt.axis('scaled')
ranges = np.max(np.abs([np.min(bx_t), np.max(bx_t),
np.min(bx_s), np.max(bx_s)]))
levels = MaxNLocator(nbins=20).tick_values(-ranges, ranges)
cmap = plt.get_cmap('RdBu_r')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
plt.subplot(3,1,1)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
bx_t.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
cbar = plt.colorbar()
plt.annotate(s='(a)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
plt.subplot(3,1,2)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
bx_s.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
plt.colorbar()
plt.annotate(s='(b)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
ranges = np.max(np.abs([np.min(bx_r), np.max(bx_r)]))
levels = MaxNLocator(nbins=20).tick_values(-ranges, ranges)
cmap = plt.get_cmap('RdBu_r')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
plt.subplot(3,1,3)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
bx_r.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlabel('y (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
plt.colorbar()
plt.annotate(s='(c)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
plt.tight_layout()
plt.show()
plt.figure(figsize=(3.15, 7))
plt.axis('scaled')
ranges = np.max(np.abs([np.min(by_t), np.max(by_t),
np.min(by_s), np.max(by_s)]))
levels = MaxNLocator(nbins=20).tick_values(-ranges, ranges)
cmap = plt.get_cmap('RdBu_r')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
plt.subplot(3,1,1)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
by_t.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
cbar = plt.colorbar()
plt.annotate(s='(a)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
plt.subplot(3,1,2)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
by_s.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
plt.colorbar()
plt.annotate(s='(b)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
ranges = np.max(np.abs([np.min(by_r), np.max(by_r)]))
levels = MaxNLocator(nbins=20).tick_values(-ranges, ranges)
cmap = plt.get_cmap('RdBu_r')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
plt.subplot(3,1,3)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
by_r.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlabel('y (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
plt.colorbar()
plt.annotate(s='(c)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
plt.tight_layout()
plt.show()
plt.figure(figsize=(3.15, 7))
plt.axis('scaled')
ranges = np.max(np.abs([np.min(bz_t), np.max(bz_t),
np.min(bz_s), np.max(bz_s)]))
levels = MaxNLocator(nbins=20).tick_values(-ranges, ranges)
cmap = plt.get_cmap('RdBu_r')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
plt.subplot(3,1,1)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
bz_t.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
cbar = plt.colorbar()
plt.annotate(s='(a)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
plt.subplot(3,1,2)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
bz_s.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
plt.colorbar()
plt.annotate(s='(b)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
ranges = np.max(np.abs([np.min(bz_r), np.max(bz_r)]))
levels = MaxNLocator(nbins=20).tick_values(-ranges, ranges)
cmap = plt.get_cmap('RdBu_r')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
plt.subplot(3,1,3)
plt.contourf(0.001*yp.reshape(shape), 0.001*xp.reshape(shape),
bz_r.reshape(shape), levels=levels,
cmap = cmap, norm=norm)
plt.ylabel('x (km)')
plt.xlabel('y (km)')
plt.xlim(0.001*np.min(yp), 0.001*np.max(yp))
plt.ylim(0.001*np.min(xp), 0.001*np.max(xp))
plt.colorbar()
plt.annotate(s='(c)', xy=(0.88,0.92),
xycoords = 'axes fraction', color='k',
fontsize = 10)
plt.tight_layout()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Calculate power and intertrial coherence
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Denis A. Engemann <denis.engemann@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import mne
from mne import io
from mne.time_frequency import tfr_stockwell
from mne.datasets import somato
print(__doc__)
data_path = somato.data_path()
raw_fname = data_path + '/MEG/somato/sef_raw_sss.fif'
event_id, tmin, tmax = 1, -1., 3.
# Setup for reading the raw data
raw = io.Raw(raw_fname)
baseline = (None, 0)
events = mne.find_events(raw, stim_channel='STI 014')
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True, stim=False)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=baseline, reject=dict(grad=4000e-13, eog=350e-6),
preload=True)
epochs = epochs.pick_channels([epochs.ch_names[82]]) # reduce computation
power, itc = tfr_stockwell(epochs, fmin=6., fmax=30., decim=4, n_jobs=1,
width=.3, return_itc=True)
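# Illustrative peek at the outputs (not in the original example): tfr_stockwell
# returns AverageTFR objects whose data arrays are shaped (n_channels, n_freqs, n_times).
print(power.data.shape)
print(power.freqs.min(), power.freqs.max())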
power.plot([0], baseline=(-0.5, 0), mode=None, title='S-transform (power)')
itc.plot([0], baseline=None, mode=None, title='S-transform (ITC)')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: MPA and MPS basics
Step2: The MPA is an instance of the MPArray class
Step3: Number of sites
Step4: Number of physical legs at each site (=number of array indices at each site)
Step5: Because the MPA has one physical leg per site, we have created a matrix product state (i.e. a tensor train). In the graphical notation, this MPS looks like this
Step6: Note that the number and dimension of the physical legs at each site can differ (although this is rarely used in practice).
Step7: In physics, the representation ranks are usually called the bond dimensions of the representation.
Step8: Convert to a dense array; this should be used with care because the memory used increases exponentially with the number of sites
Step9: The resulting full array has one index for each physical leg.
Step10: We have obtained an MPA with length 1. This is not what we expected. The reason is that by default, all legs are placed on a single site (also notice the difference between mpa2.shape here and mpa.shape from above)
Step11: We obtain the desired result by specifying the number of legs per site we want
Step12: Finally, we can compute the norm distance between the two MPAs. (Again, the Frobenius norm is used.)
Step13: Since this is an often-used operation and allows for additional optimization (not implemented currently), it is advisable to use the specific mp.normdist for this
Step14: Sums, differences and scalar multiplication of MPAs are done with the normal operators
Step15: Multiplication with a scalar leaves the bond dimension unchanged
Step16: The bond dimensions of a sum (or difference) are given by the sums of the bond dimensions
Step17: MPO basics
Step18: In graphical notation, mpo looks like this
Step19: Each site has two physical legs, one with dimension 3 and one with dimension 2. This corresponds to a non-square full array.
Step20: Now convert the mpo to a full array
Step21: We refer to this arrangement of axes as local form, since indices which correspond to the same site are neighboring. This is a natural form for the MPO representation. However, for some operations it is necessary to have row and column indices grouped together -- we refer to this as global form
Step22: This gives the expected result. Note that it is crucial to specify the correct number of sites, otherwise we do not get what we want
Step23: As an alternative, there is the following shorthand
Step24: An array in global form can be converted into matrix-product form with the following API
Step25: MPO-MPS product and arbitrary MPA-MPA products
Step26: The result is a new MPS, with local dimension changed by mpo and looks like this
Step27: Note that in any case, the ranks of the output of mp.dot are the products of the original ranks
Step28: Now we compute the same product using the full arrays arr and mpo_arr
Step29: As you can see, we need to reshape the result prod3_vec before we can convert it back to an MPA
Step30: Now we can compare the two results
Step31: We can also compare by converting prod to a full array
Step32: Converting full operators to MPOs
Step33: This operator is the so-called controlled Z gate
Step34: Now we can create an MPO, being careful to specify the correct number of legs per site
Step35: To test it, we apply the operator to the state which has both qubits in state e2
Step36: Reshape and convert to an MPS
Step37: Now we can compute the matrix-vector product
Step38: The output is as expected
Step39: However, the result is not what we want
Step40: The reason is easy to see
Step41: Keep in mind that we have to use to_array_global before the reshape. Using to_array would not provide us the matrix which we have applied to the state with mp.dot. Instead, it will exactly return the input
Step42: Again, from_array_global is just the shorthand for the following
Step43: As you can see, in the explicit version you must submit both the correct number of sites and the correct number of physical legs per site.
Step44: Then we can use from_kron to directly create an MPS representation of the Kronecker product
Step45: This works well for large numbers of sites because the needed memory scales linearly with the number of sites
Step46: An even more pythonic solution is the use of iterators in this example
Step47: Do not call .to_array() on this state!
Step48: We can also create a single-site MPS
Step49: After that, we can use mp.chain to create Kronecker products of the MPS directly
Step50: It returns the same result as before
Step51: We can also use mp.chain on the three-site MPS
Step52: Note that mp.chain interprets the factors in the tensor product as distinct sites. Hence, the factors do not need to be of the same length or even have the same number of indices. In contrast, there is also mp.localouter, which computes the tensor product of MPArrays with the same number of sites
Step53: Compression
Step54: As it is a tensor product operator, it has rank 1
Step55: However, addition increases the rank
Step56: Matrix multiplication multiplies the individual ranks
Step57: (NB
Step58: Calling compress on an MPA replaces the MPA in place with a version with smaller bond dimension. Overlap gives the absolute value of the (Hilbert-Schmidt) inner product between the original state and the output
Step59: Instead of in-place compression, we can also obtain a compressed copy
Step60: SVD compression can also be told to meet a certain truncation error (see the documentation of mp.MPArray.compress for details).
Step61: We can also use variational compression instead of SVD compression
Step62: As a reminder, it is always advisable to check whether the overlap between the input state and the compression is large enough. In an involved algorithm, it can be useful to store the compression error at each invocation of compression.
Step63: First, we have to convert the local term h to an MPO
Step64: h_mpo has rank 4 even though h is a tensor product. This is far from optimal. We improve things as follows
Step65: The most simple way is to implement the formula from above with MPOs
Step66: Next, we compute the sum of all the local terms and check the bond dimension of the result
Step67: The ranks are explained by the ranks of the local terms
Step68: We just have to add the ranks at each position.
Step69: Without additional arguments, mp.local_sum() just adds the local terms with the first term starting on site 0, the second on site 1 and so on. In addition, the length of the chain is chosen such that the last site of the chain coincides with the last site of the last local term. Other constructions can be obtained by providing additional arguments.
Step70: Of course, this means that we could just compress H
Step71: We can also check the minimal bond dimension which can be achieved with SVD compression with small error
Step72: MPS, MPOs and PMPS
Step73: As expected, the rank of mps_mpo is the square of the rank of mps.
Step74: Again, the rank is squared, as expected. We can verify that the first physical leg of each site of pmps is indeed the system site by checking the shape of pmps_mpo
Step75: Local reduced states
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import numpy.linalg as la
import mpnum as mp
rng = np.random.RandomState(seed=42)
mpa = mp.random_mpa(sites=4, ldim=2, rank=3, randstate=rng, normalized=True)
mpa
len(mpa)
mpa.ndims
mpa.shape
mpa.ranks
mp.norm(mpa)
arr = mpa.to_array()
arr.shape
mpa2 = mp.MPArray.from_array(arr)
len(mpa2)
mpa2.shape
mpa.shape
mpa2 = mp.MPArray.from_array(arr, ndims=1)
len(mpa2)
mp.norm(mpa - mpa2)
mp.normdist(mpa, mpa2)
mp.norm(3 * mpa)
mp.norm(mpa + 0.5 * mpa)
mp.norm(mpa - 1.5 * mpa)
mpa.ranks
(3 * mpa).ranks
mpa2 = mp.random_mpa(sites=4, ldim=2, rank=2, randstate=rng)
mpa2.ranks
(mpa + mpa2).ranks
mpo = mp.random_mpa(sites=4, ldim=(3, 2), rank=3, randstate=rng, normalized=True)
[len(mpo), mpo.ndims, mpo.ranks]
mpo.shape
mpo_arr = mpo.to_array()
mpo_arr.shape
from mpnum.utils.array_transforms import local_to_global
mpo_arr = mpo.to_array()
mpo_arr = local_to_global(mpo_arr, sites=len(mpo))
mpo_arr.shape
mpo_arr = mpo.to_array()
mpo_arr = local_to_global(mpo_arr, sites=2)
mpo_arr.shape
mpo_arr = mpo.to_array_global()
mpo_arr.shape
mpo2 = mp.MPArray.from_array_global(mpo_arr, ndims=2)
mp.normdist(mpo, mpo2)
mpa.shape
mpo.shape
prod = mp.dot(mpo, mpa, axes=(-1, 0))
prod.shape
prod2 = mp.dot(mpa, mpo, axes=(0, 1))
mp.normdist(prod, prod2)
mpo.ranks, mpa.ranks, prod.ranks
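# Explicit check of the statement above (illustrative): the ranks of mp.dot are
# the elementwise products of the factors' ranks.
np.array(prod.ranks) == np.array(mpo.ranks) * np.array(mpa.ranks)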
arr_vec = arr.ravel()
mpo_arr = mpo.to_array_global()
mpo_arr_matrix = mpo_arr.reshape((81, 16))
prod3_vec = np.dot(mpo_arr_matrix, arr_vec)
prod3_vec.shape
prod3_arr = prod3_vec.reshape((3, 3, 3, 3))
prod3 = mp.MPArray.from_array(prod3_arr, ndims=1)
prod3.shape
mp.normdist(prod, prod3)
prod_arr = prod.to_array()
la.norm((prod3_arr - prod_arr).reshape(81))
CZ = np.array([[ 1., 0., 0., 0.],
[ 0., 1., 0., 0.],
[ 0., 0., 1., 0.],
[ 0., 0., 0., -1.]])
CZ_arr = CZ.reshape((2, 2, 2, 2))
CZ_mpo = mp.MPArray.from_array_global(CZ_arr, ndims=2)
vec = np.kron([0, 1], [0, 1])
vec
vec_arr = vec.reshape([2, 2])
mps = mp.MPArray.from_array(vec_arr, ndims=1)
out = mp.dot(CZ_mpo, mps)
out.to_array().ravel()
CZ_mpo2 = mp.MPArray.from_array(CZ_arr, ndims=2)
out2 = mp.dot(CZ_mpo2, mps)
out2.to_array().ravel()
CZ_mpo2.to_array_global().reshape(4, 4)
CZ_mpo2.to_array().reshape(4, 4)
from mpnum.utils.array_transforms import global_to_local
CZ_mpo3 = mp.MPArray.from_array(global_to_local(CZ_arr, sites=2), ndims=2)
mp.normdist(CZ_mpo, CZ_mpo3)
e1 = np.array([1, 0])
e1
mps = mp.MPArray.from_kron([e1, e1, e1])
mps.to_array().ravel()
mps = mp.MPArray.from_kron([e1] * 2000)
len(mps)
from itertools import repeat
mps = mp.MPArray.from_kron(repeat(e1, 2000))
len(mps)
np.array(mps.ranks) # Convert to an array for nicer display
mps1 = mp.MPArray.from_array(e1, ndims=1)
len(mps1)
mps = mp.chain([mps1, mps1, mps1])
len(mps)
mps.to_array().ravel()
mps = mp.chain([mps] * 100)
len(mps)
mps = mp.chain([mps1] * 4)
len(mps), mps.shape,
rho = mp.localouter(mps.conj(), mps)
len(rho), rho.shape
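# Illustrative check: rho represents |psi><psi| for the normalized product state
# mps, so its trace should be 1 (up to numerical precision).
mp.trace(rho)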
op = mp.eye(sites=6, ldim=3)
op.shape
op.ranks
op2 = op + op + op
op2.ranks
op3 = mp.dot(op2, op2)
op3.ranks
op3 /= mp.norm(op3.copy()) # normalize to make overlap meaningful
copy = op3.copy()
overlap = copy.compress(method='svd', rank=1)
copy.ranks
overlap
compr, overlap = op3.compression(method='svd', rank=2)
overlap, compr.ranks, op3.ranks
compr, overlap = op3.compression(method='svd', relerr=1e-6)
overlap, compr.ranks, op3.ranks
compr, overlap = op3.compression(method='var', rank=2, num_sweeps=10, var_sites=2)
# Convert overlap from numpy array with shape () to float for nicer display:
overlap = overlap.flat[0]
complex(overlap), compr.ranks, op3.ranks
zeros = np.zeros((2, 2))
zeros
idm = np.eye(2)
idm
# Create a float array instead of an int array to avoid problems later
Z = np.diag([1., -1])
Z
h = np.kron(Z, Z)
h
h_arr = h.reshape((2, 2, 2, 2))
h_mpo = mp.MPArray.from_array_global(h_arr, ndims=2)
h_mpo.ranks
h_mpo = mp.MPArray.from_kron([Z, Z])
h_mpo.ranks
width = 2
sites = 6
local_terms = []
for startpos in range(sites - width + 1):
left = [mp.MPArray.from_kron([idm] * startpos)] if startpos > 0 else []
right = [mp.MPArray.from_kron([idm] * (sites - width - startpos))] \
if sites - width - startpos > 0 else []
h_at_startpos = mp.chain(left + [h_mpo] + right)
local_terms.append(h_at_startpos)
local_terms
H = local_terms[0]
for local_term in local_terms[1:]:
H += local_term
H.ranks
[local_term.ranks for local_term in local_terms]
H2 = mp.local_sum([h_mpo] * (sites - width + 1))
H2.ranks
mp.normdist(H, H2)
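# Extra sanity check (a sketch, assuming H is the ZZ-type chain Hamiltonian built above):
# the all-|0> product state is an eigenstate with eigenvalue sites - width + 1,
# because Z|0> = +|0> on every site, so the norm below should be ~0.
up = mp.MPArray.from_kron([np.array([1., 0.])] * sites)
mp.norm(mp.dot(H, up) - (sites - width + 1) * up)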
H_comp, overlap = H.compression(method='svd', rank=3)
overlap / mp.norm(H)**2
H_comp.ranks
H_comp, overlap = H.compression(method='svd', relerr=1e-6)
overlap / mp.norm(H)**2
H_comp.ranks
mps = mp.random_mpa(sites=5, ldim=2, rank=3, normalized=True)
mps_mpo = mp.mps_to_mpo(mps)
mps_mpo.ranks
pmps = mp.random_mpa(sites=5, ldim=(2, 3), rank=3, normalized=True)
pmps.shape
pmps_mpo = mp.pmps_to_mpo(pmps)
pmps_mpo.ranks
pmps_mpo.shape
width = 3
startsites = range(len(pmps) - width + 1)
for startsite, red in zip(startsites, mp.reductions_pmps(pmps, width, startsites)):
print('Reduction starting on site', startsite)
print('bdims:', red.ranks)
red_mpo = mp.pmps_to_mpo(red)
print('trace:', mp.trace(red_mpo))
print()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Source estimation methods such as MNE require a noise estimation from the
Step2: The definition of noise depends on the paradigm. In MEG it is quite common
Step3: Now that you have the covariance matrix in an MNE-Python object you can
Step4: Note that this method also attenuates any activity in your
Step5: Plot the covariance matrices
Step6: How should I regularize the covariance matrix?
Step7: This procedure evaluates the noise covariance quantitatively by how well it
Step8: This plot displays both the whitened evoked signals for each channel and
Step9: This will plot the whitened evoked for the optimal estimator and display the
|
<ASSISTANT_TASK:>
Python Code:
import os.path as op
import mne
from mne.datasets import sample
data_path = sample.data_path()
raw_empty_room_fname = op.join(
data_path, 'MEG', 'sample', 'ernoise_raw.fif')
raw_empty_room = mne.io.read_raw_fif(raw_empty_room_fname)
raw_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(raw_fname)
raw.set_eeg_reference('average', projection=True)
raw.info['bads'] += ['EEG 053'] # bads + 1 more
raw_empty_room.info['bads'] = [
bb for bb in raw.info['bads'] if 'EEG' not in bb]
raw_empty_room.add_proj(
[pp.copy() for pp in raw.info['projs'] if 'EEG' not in pp['desc']])
noise_cov = mne.compute_raw_covariance(
raw_empty_room, tmin=0, tmax=None)
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id=1, tmin=-0.2, tmax=0.5,
baseline=(-0.2, 0.0), decim=3, # we'll decimate for speed
verbose='error') # and ignore the warning about aliasing
noise_cov_baseline = mne.compute_covariance(epochs, tmax=0)
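# Quick look at the estimate (illustrative): an mne.Covariance stores an
# n_channels x n_channels matrix for the channels present in the epochs.
print(noise_cov_baseline.data.shape)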
noise_cov.plot(raw_empty_room.info, proj=True)
noise_cov_baseline.plot(epochs.info, proj=True)
noise_cov_reg = mne.compute_covariance(epochs, tmax=0., method='auto')
evoked = epochs.average()
evoked.plot_white(noise_cov_reg, time_unit='s')
noise_covs = mne.compute_covariance(
epochs, tmax=0., method=('empirical', 'shrunk'), return_estimators=True)
evoked.plot_white(noise_covs, time_unit='s')
evoked_meg = evoked.copy().pick_types(meg=True, eeg=False)
noise_cov_meg = mne.pick_channels_cov(noise_cov_baseline, evoked_meg.ch_names)
noise_cov['method'] = 'empty_room'
noise_cov_meg['method'] = 'baseline'
evoked_meg.plot_white([noise_cov_meg, noise_cov], time_unit='s')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Data prep
Step2: 1.2. California tract-level census variables
Step3: 1.3. Merge the data
Step4: 2. Initial exploration
Step5: Looks like we have some missing values. We'll spatially interpolate them later.
Step6: Every 1 person/km^2 increase in density is associated with a change of m in median household income, where m is the estimated regression slope.
Step7: 3. Spatial weights matrix
Step8: 3.2. Contiguity-based weights
Step9: How many neighbors does the average tract have?
Step10: Plot a census tract of interest, along with its neighbors
Step11: 3.3. Distance-based weights
Step12: 3.4. Distance-based weights
Step13: 3.5. Standardizing weights
Step14: PySAL supports the following transformations
Step15: 4. Spatial interpolation
Step16: 4. Spatial lag
Step17: 5. Spatial autocorrelation
Step18: 5.1. Moran's I
Step19: If we generated a large number of maps with the same values but randomly allocated over space, and calculated Moran's I for each of these maps, only 1/1000 of them would display a larger absolute value than the one we computed from the real-world data set. Thus there is a 1/1000 chance of getting the observed value of Moran's I if the spatial distribution of our variable is random. We can conclude that the variable's distribution is statistically significantly positively spatially autocorrelated.
Step20: 5.2. Moran plots
Step21: Notice the 95% confidence interval shading and the positive slope. Given the p-value of Moran's I that we calculated earlier, we can conclude that the slope of the line is statistically-significantly different from zero.
Step22: 5.3. LISAs
Step23: Now map the tracts, colored according to their LISA quadrants, to identify clusters
|
<ASSISTANT_TASK:>
Python Code:
import geopandas as gpd
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pysal as ps
import seaborn as sns
from scipy.stats import stats
np.random.seed(0)
# load CA tracts, display shape
tracts_ca = gpd.read_file('../../data/tl_2017_06_tract/')
tracts_ca = tracts_ca.set_index('GEOID')
tracts_ca.shape
# what variables are present?
tracts_ca.columns
# inspect the first 5 rows
tracts_ca.head()
# retain LA county only (and drop channel island tracts)
tracts_ca = tracts_ca[tracts_ca['COUNTYFP']=='037'].drop(index=['06037599100', '06037599000'])
tracts_ca.shape
# project spatial geometries to a meter-based projection for SoCal
crs = '+proj=utm +zone=11 +ellps=WGS84 +datum=WGS84 +units=m +no_defs'
tracts_ca = tracts_ca.to_crs(crs)
# load CA tract-level census variables
df_census = pd.read_csv('../../data/census_tracts_data_ca.csv', dtype={'GEOID10':str}).set_index('GEOID10')
df_census.shape
df_census.columns
df_census.head()
# merge tract geometries with census variables
tracts = tracts_ca.merge(df_census, left_index=True, right_index=True, how='left')
tracts.shape
# calculate pop density in persons per sq km
# turn any infinities into nulls
tracts['pop_density'] = tracts['total_pop'] / (tracts['ALAND'] / 1e6)
tracts = tracts.replace([np.inf, -np.inf], np.nan)
tracts.columns
# descriptive stats
tracts['med_household_income'].describe()
# descriptive stats
tracts['pop_density'].describe()
# inspect these variables' statistical distributions
fig, axes = plt.subplots(1, 2, figsize=(12, 2))
ax1 = sns.boxplot(ax=axes[0], x=tracts['med_household_income'])
ax2 = sns.boxplot(ax=axes[1], x=tracts['pop_density'])
# map a couple variables to inspect their spatial distributions
cols = ['pop_density', 'med_household_income']
for col in cols:
ax = tracts.dropna(subset=[col]).plot(column=col,
scheme='NaturalBreaks',
cmap='plasma',
figsize=(4, 4),
legend=True,
legend_kwds={'bbox_to_anchor': (1.7, 1)})
ax.set_title(col)
_ = ax.axis('off')
# calculate correlation coefficient and p-value
subset = tracts.dropna(subset=['pop_density', 'med_household_income'])
r, p = stats.pearsonr(x=subset['pop_density'],
y=subset['med_household_income'])
print('r={:.4f}, p={:.4f}'.format(r, p))
# quick and dirty scatter plot with matplotlib
fig, ax = plt.subplots()
sc = ax.scatter(x=subset['pop_density'],
y=subset['med_household_income'],
s=1)
# estimate a simple linear regression model with scipy
# what if you log transform your variables first?
m, b, r, p, se = stats.linregress(x=subset['pop_density'],
y=subset['med_household_income'])
print(f'm={m:.4f}, b={b:.4f}, r^2={r**2:.4f}, p={p:.4f}')
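# One possible answer to the log-transform question above (a sketch; it first drops
# tracts with zero density or income so the logs are finite):
pos = subset[(subset['pop_density'] > 0) & (subset['med_household_income'] > 0)]
m_log, b_log, r_log, p_log, se_log = stats.linregress(x=np.log(pos['pop_density']),
                                                      y=np.log(pos['med_household_income']))
print(f'log-log: m={m_log:.4f}, b={b_log:.4f}, r^2={r_log**2:.4f}, p={p_log:.4f}')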
# now it's your turn
# look through the list of columns, pick two new variables, and map them
# do they look like they are correlated? would you expect them to be?
# get the tract labels (GEOIDs) and pick one (arbitrarily) to work with throughout
labels = tracts.index.tolist()
label = labels[603]
label
%%time
# calculate rook spatial weights
w_rook = ps.lib.weights.Rook.from_dataframe(tracts, ids=labels, id_order=labels)
# find the neighbors of some tract
# this is a raw contiguity matrix, so weights are binary 1s and 0s meaning neighbor/not
w_rook[label]
%%time
# calculate queen spatial weights
w_queen = ps.lib.weights.Queen.from_dataframe(tracts, ids=labels, id_order=labels)
# find the neighbors of some tract
# this is a raw contiguity matrix, so weights are binary 1s and 0s meaning neighbor/not
w_queen[label]
# how many neighbors does this tract have?
w_queen.cardinalities[label]
# convert cardinalities to series and describe data
pd.Series(w_queen.cardinalities).describe()
# min number of neighbors
w_queen.min_neighbors
# max number of neighbors
w_queen.max_neighbors
# islands are observations with no neighbors, disconnected in space (can cause modeling problems)
w_queen.islands
fig, ax = plt.subplots(figsize=(6, 6))
tracts.plot(ax=ax, facecolor='#666666', edgecolor='w', linewidth=0.5)
# plot some tract of interest in red
tract = tracts.loc[[label]]
tract.plot(ax=ax, facecolor='#ff0000', edgecolor='w', linewidth=2)
# plot the neighbors in blue
neighbors = tracts.loc[w_queen[label]]
neighbors.plot(ax=ax, facecolor='#0033cc', edgecolor='w', linewidth=2)
# zoom to area of interest
xmin, ymin, xmax, ymax = neighbors.unary_union.bounds
ax.axis('equal')
ax.set_xlim(xmin-100, xmax+100) # +/- 100 meters
ax.set_ylim(ymin, ymax)
ax.set_title('Neighbors of tract {}'.format(label))
_ = ax.axis('off')
%%time
# draw a queen-contiguity graph of the tracts
fig, ax = plt.subplots(figsize=(12, 12), facecolor='#111111')
tracts.plot(ax=ax, facecolor='#333333', edgecolor='k', linewidth=0.3)
# extract centroids of tract and its neighbors, then draw lines between them
for tract, neighbors in w_queen:
tract_centroid = tracts.loc[tract, 'geometry'].centroid
for neighbor_centroid in tracts.loc[neighbors, 'geometry'].centroid:
Xs = [tract_centroid.x, neighbor_centroid.x]
Ys = [tract_centroid.y, neighbor_centroid.y]
ax.plot(Xs, Ys, color='r', linewidth=0.3)
_ = ax.axis('off')
%%time
# k-nearest neighbors finds the closest k tract centroids to each tract centroid
w_knn = ps.lib.weights.KNN.from_dataframe(tracts, k=6)
# they all have exactly k neighbors
w_knn.neighbors[label]
# calculate maximum nearest neighbor distance so each unit is assured of >=1 neighbor
x = tracts.centroid.x
y = tracts.centroid.y
coords = np.array([x, y]).T
threshold = ps.lib.weights.min_threshold_distance(coords)
threshold
%%time
# calculate linear decay continuous weights
w_dist = ps.lib.weights.distance.DistanceBand.from_dataframe(tracts,
threshold=threshold,
binary=False,
alpha=-1)
# how many distance-band neighbors does our tract have?
len(w_dist.neighbors[label])
# map the neighbors, colored by weight from nearest to furthest
fig, ax = plt.subplots(figsize=(6, 6))
tracts.plot(ax=ax, facecolor='#333333', edgecolor='gray', linewidth=0.1)
# get the tract of interest and its neighbors/weights
tract = tracts.loc[[label]]
weights = pd.Series(w_dist[label])
neighbors = tracts.loc[weights.index, ['geometry']]
neighbors['weights_scaled'] = weights
# plot the tract's neighbors in blues by weight
neighbors.plot(ax=ax,
column='weights_scaled',
cmap='Blues_r',
edgecolor='gray',
linewidth=0.3,
scheme='NaturalBreaks')
# plot the tract of interest in red
tract.plot(ax=ax, facecolor='r', edgecolor='r', linewidth=0.1)
# zoom to area of interest
xmin, ymin, xmax, ymax = neighbors.unary_union.bounds
ax.set_xlim(xmin, xmax)
ax.set_ylim(ymin, ymax)
ax.set_title('Neighbors of tract {}'.format(label))
_ = ax.axis('off')
# now it's your turn
# recompute the distance-based spatial weights with a gravity decay
# how and why does this impact the number of neighbors and the map above? why?
# inspect the neighbors and weights of our tract
w_queen[label]
# check the current transformation of the weights matrix (O = original)
w_queen.get_transform()
# transform the queen weights
w_queen.set_transform('R')
w_queen[label]
# transform the linear-decay distance-based weights
w_dist.set_transform('R')
#w_dist[label]
# save your matrix to disk
f = ps.lib.io.open('tracts_queen.gal', 'w')
f.write(w_queen)
f.close()
# read a matrix from disk (notice its transformation)
w_queen = ps.lib.io.open('tracts_queen.gal', 'r').read()
w_queen[label]
# how many tracts are missing values for this variable?
col = 'med_household_income'
nulls = tracts[pd.isnull(tracts[col])].index
len(nulls)
# for example, this tract is missing a value
tract = nulls[0]
tract
# local averaging: equal-weighted queen-adjacent tracts
neighbors = w_queen[tract]
tracts.loc[neighbors, col].mean()
# or, calculate inverse distance weighted mean
neighbors = w_dist[tract]
inv_dist_wt = pd.Series(neighbors)
(tracts.loc[neighbors, col] * inv_dist_wt).sum()
# or, interpolate all the missing values across this variable
estimates = {}
for tract in nulls:
neighbors = w_dist[tract]
inv_dist_wt = pd.Series(w_dist[tract])
estimates[tract] = (tracts.loc[neighbors, col] * inv_dist_wt).sum()
pd.Series(estimates).head()
# now it's your turn
# spatially interpolate missing values for median home value
# pick a variable to investigate and drop null rows
col = 'med_household_income'
tracts_not_null = tracts[[col, 'geometry']].dropna()
y = tracts_not_null[col]
# recompute spatial weights for just these observations then row-standardize
w_queen = ps.lib.weights.Queen.from_dataframe(tracts_not_null)
w_queen.set_transform('R')
# compute spatial lag
y_lag = ps.lib.weights.lag_spatial(w_queen, y)
# is a tract's med income similar to those of its neighbors?
col_lag = f'{col}_lag'
data_lag = pd.DataFrame(data={col:y, col_lag:y_lag}).astype(int)
data_lag
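# A first illustrative look before formalizing things with Moran's I: the plain
# correlation between each tract's value and its spatial lag.
stats.pearsonr(data_lag[col], data_lag[col_lag])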
# does household income exhibit spatial autocorrelation?
# let's find out
data_lag.sample(5)
# calculate the statistic
mi = ps.explore.esda.Moran(data_lag[col], w_queen)
# show the I value
mi.I
# statistical inference: show the p value
mi.p_sim
# now it's your turn
# calculate the moran's I of median home values
# is it statistically significant? what does it tell you?
fig, ax = plt.subplots(figsize=(6, 6))
sns.regplot(x=col, y=col_lag, data=data_lag, scatter_kws={'s':1, 'color':'gray'})
plt.show()
# standardize the variable's values (i.e., calculate z-scores)
y_std = (y - y.mean()) / y.std()
y_std.head()
# compute spatial lag of standardized values and save as series with same index
y_std_lag = pd.Series(ps.lib.weights.lag_spatial(w_queen, y_std),
index=y_std.index,
name=col_lag)
y_std_lag
# estimate a simple linear regression model
m, b, r, p, se = stats.linregress(x=y_std, y=y_std_lag)
print('m={:.4f}, b={:.4f}, r^2={:.4f}, p={:.4f}'.format(m, b, r ** 2, p))
# the slope is the same as moran's I, calculated earlier
mi.I
# standardized moran's plot
fig, ax = plt.subplots(figsize=(4, 4))
ax.scatter(x=y_std, y=y_std_lag, s=1, color='gray')
# draw quadrants and ignore outliers beyond 3 std devs (99.7% of distribution)
plt.axvline(0, c='k', alpha=0.5)
plt.axhline(0, c='k', alpha=0.5)
ax.set_xlim(-3, 3)
ax.set_ylim(-3, 3)
# draw a line with moran's I as the slope
Xs = pd.Series([-3, 3])
Ys = Xs * mi.I
line = ax.plot(Xs, Ys, lw=2)
# now it's your turn
# visualize a standardized moran's plot of median home values
# standardized moran's plot again, from above, but labeled this time
fig, ax = plt.subplots(figsize=(6, 6))
ax.scatter(x=y_std, y=y_std_lag, s=1, color='gray')
# draw quadrants and ignore outliers beyond 3 std devs
plt.axvline(0, c='k', alpha=0.5)
plt.axhline(0, c='k', alpha=0.5)
ax.set_xlim(-3, 3)
ax.set_ylim(-3, 3)
# label the quadrants
ax.text(1.25, 1.25, 'HH', fontsize=30)
ax.text(1.25, -1.75, 'HL', fontsize=30)
ax.text(-1.75, 1.25, 'LH', fontsize=30)
ax.text(-1.75, -1.75, 'LL', fontsize=30)
# draw a line with moran's I as the slope
Xs = pd.Series([-3, 3])
Ys = Xs * mi.I
line = ax.plot(Xs, Ys, lw=2)
# calculate LISA values using the queen spatial weights
lisa = ps.explore.esda.Moran_Local(data_lag[col], w_queen)
# set the statistical significance threshold (alpha)
alpha = 0.05
# identify whether each observation is significant or not
# p-value interpretation same as earlier with moran's I
data_lag['significant'] = lisa.p_sim < alpha
data_lag['significant'].value_counts()
# identify the quadrant each observation belongs to
data_lag['quadrant'] = lisa.q
data_lag['quadrant'] = data_lag['quadrant'].replace({1:'HH', 2:'LH', 3:'LL', 4:'HL'})
data_lag['quadrant'].sort_values().value_counts()
# what have we got in the end?
data_lag
fig, ax = plt.subplots(figsize=(9, 9))
# merge original tracts and LISA quadrants data together, plot tracts basemap
tracts_lisa = tracts.merge(data_lag, how='left', left_index=True, right_index=True)
tracts_lisa.plot(ax=ax, facecolor='#999999', edgecolor='k', linewidth=0.1)
# plot each quandrant's tracts (if significant LISA) in a different color
quadrant_colors = {'HH':'r', 'LL':'b', 'LH':'skyblue', 'HL':'pink'}
for q, c in quadrant_colors.items():
mask = tracts_lisa['significant'] & (tracts_lisa['quadrant']==q)
rows = tracts_lisa.loc[mask]
rows.plot(ax=ax, color=c, edgecolor='k', linewidth=0.1)
ax.axis('off')
fig.savefig('clusters.png', dpi=600, bbox_inches='tight')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step8: Preprocess All the Data and Save It
Step10: Checkpoint
Step12: Check the TensorFlow Version and Access to a GPU
Step15: Build the Neural Network
Step18: Process Decoding Input
Step21: Encoding
Step24: Decoding - Training
Step27: Decoding - Inference
Step30: Build the Decoding Layer
Step33: Build the Neural Network
Step34: Train the Neural Network
Step36: Build the Graph
Step39: Training
Step41: Save Parameters
Step43: Checkpoint
Step46: Sentence to Sequence
Step48: Translate
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
sentences = source_text.split('\n')
source_id_text = [[source_vocab_to_int[word] for word in sentence.split()] for sentence in sentences]
sentences = target_text.split('\n')
target_id_text = [[target_vocab_to_int[word] for word in (sentence+' <EOS>').split()] for sentence in sentences]
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
# source_int_text is a list of sentences of word ids, e.g. [[2, 54, 23, ...], [43, 3, 55, ...], ...]; target_int_text has the same structure
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
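# Quick sanity check on the preprocessed data (illustrative): each entry is a list
# of word ids and every target sentence ends with the <EOS> id.
print(source_int_text[0])
print(target_int_text[0][-1] == target_vocab_to_int['<EOS>'])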
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
# TODO: Implement Function
input = tf.placeholder(tf.int32, shape=(None, None), name='input')
output = tf.placeholder(tf.int32, shape=(None, None), name='output')
learning_rate = tf.placeholder(tf.float32, shape=None, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, shape=None, name='keep_prob')
return input, output, learning_rate, keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
# During training, the decoder is fed the correct words (taken from the target) as its input sequence, rather than the words predicted by the model
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
go_array = tf.fill((batch_size,1), target_vocab_to_int['<GO>'])
# print(go_array)
target_data = tf.concat([go_array, target_data], 1)
# print(target_data)
# target_data = tf.slice(target_data, [0,0], [batch_size, -1])
target_data = target_data[:,0:-1]
# print(target_data)
return target_data
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
# LSTM stack for the encoder part of the sequence-to-sequence model
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# TODO: Implement Function
cell = tf.contrib.rnn.BasicLSTMCell(num_units=rnn_size)
cells = tf.contrib.rnn.MultiRNNCell(num_layers * [cell])
# initial_state = cells.zero_state(batch_size, tf.float32)
outputs, state = tf.nn.dynamic_rnn(cells, rnn_inputs, dtype=tf.float32)
# final_state = tf.identity(final_state, name='final_state')
return state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
# Decoder output at training time
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# TODO: Implement Function
dynamic_fn_train = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
outputs_train, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, dynamic_fn_train, inputs=dec_embed_input, sequence_length=sequence_length, scope=decoding_scope)
# print('outputs_train.get_shape()', outputs_train.get_shape())
outputs_train = tf.nn.dropout(outputs_train, keep_prob)
outputs_train = output_fn(outputs_train)
# print('outputs_train.get_shape()', outputs_train.get_shape())
return outputs_train
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
# Decoder output at prediction (inference) time. keep_prob should not be needed at inference.
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
dynamic_fn_infer = tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size)
outputs_infer, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, dynamic_fn_infer, scope=decoding_scope)
# print('outputs_infer.get_shape()', outputs_infer.get_shape())
# outputs_infer = tf.nn.dropout(outputs_infer, keep_prob)
# print('outputs_infer.get_shape()', outputs_infer.get_shape())
# outputs_infer = output_fn(outputs_infer)
# print('outputs_infer.get_shape()', outputs_infer.get_shape())
return outputs_infer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
# Decoding layer of the model: returns both the training-time and the inference-time decoder outputs
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input, already mapped through the embedding matrix, shape=(batch_size, sequence_length, embedding_size)
:param dec_embeddings: Decoder embeddings, i.e. the embedding matrix itself, shape=(vocab_size, embedding_size)
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
with tf.variable_scope("decoding") as decoding_scope:
# LSTM stack for the decoder part of the model
cell = tf.contrib.rnn.BasicLSTMCell(num_units=rnn_size)
cells = tf.contrib.rnn.MultiRNNCell(num_layers * [cell])
# outputs, state = tf.nn.dynamic_rnn(cells, dec_embed_output, dtype=tf.float32)
# Fully connected layer applied to the LSTM output (maps rnn_size vectors to vocab_size vectors)
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)
# Training-time decoder output sequence (uses the target as input)
outputs_train = decoding_layer_train(encoder_state, cells, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob)
#
decoding_scope.reuse_variables()
# Inference-time decoder output sequence (feeds back the previous time step's output as input)
outputs_infer = decoding_layer_infer(encoder_state, cells, dec_embeddings,
target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'],
sequence_length, vocab_size, decoding_scope, output_fn, keep_prob)
return outputs_train, outputs_infer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
# Sequence-to-sequence model
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder, shape=(batch_size, sequence_length)
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
# Embedding matrices for the encoder and the decoder
embedding_enc = tf.Variable(tf.truncated_normal(shape=[source_vocab_size, enc_embedding_size], mean=0, stddev=1)) # create embedding weight matrix here
embedding_dec = tf.Variable(tf.truncated_normal(shape=[target_vocab_size, dec_embedding_size], mean=0, stddev=1)) # create embedding weight matrix here
# Embed the input_data word ids
input_data_embed = tf.nn.embedding_lookup(embedding_enc, input_data)
# Encoder part of the model (processes the input sequence)
encoder_state = encoding_layer(input_data_embed, rnn_size, num_layers, keep_prob)
# Decoder part: first convert target_data into the decoder input
dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)
# Embed the decoder input word ids
dec_embed_input = tf.nn.embedding_lookup(embedding_dec, dec_input)
# Decoder part of the model (produces the output sequence)
outputs_train, outputs_infer = decoding_layer(dec_embed_input, embedding_dec, encoder_state,
target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)
return outputs_train, outputs_infer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 1024
# RNN Size
rnn_size = 64
# Number of Layers
num_layers = 3
# Embedding Size
encoding_embedding_size = 200
decoding_embedding_size = 200
# Learning Rate
learning_rate = 0.02
# Dropout Keep Probability
keep_probability = 0.6
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
import time
num_batches_to_print = 20
num_batches_to_list = 1
train_acc_list = list()
valid_acc_list = list()
train_loss_list = list()
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
# Record loss and accuracy
if batch_i % num_batches_to_list == 0:
train_acc_list.append(train_acc)
valid_acc_list.append(valid_acc)
train_loss_list.append(loss)
if batch_i % num_batches_to_print == 0:
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
# Plot the loss and accuracy curves
import matplotlib.pyplot as plt
batch_number = list(range(len(train_acc_list)))
plt.plot(batch_number, train_loss_list, label='train_loss')
plt.legend()
plt.show()
plt.plot(batch_number, train_acc_list, label='train_acc')
plt.plot(batch_number, valid_acc_list, label='valid_acc')
plt.legend()
plt.show()
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
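# Small extra step (illustrative): join the predicted tokens into a plain sentence,
# dropping the <EOS> marker.
prediction = [target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]
print(' '.join(word for word in prediction if word != '<EOS>'))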
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
np.random.seed(0)
r_old = np.random.randint(3, size=(100, 2000)) - 1
np.random.seed(0)
r_new = np.random.randint(3, size=(100, 2000)) - 1
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The match percentage distribution peaks near ~75%, but differences are hard to see.
Step2: Additionally, there is a decreasing number of users at a given age that are presented in the search results.
Step3: To show this effect visually, I show the 2D age and match percentage distribution in the left panel.
Step4: You can clearly see three peaks which I suggest correspond to three schemas
Step5: Clearly, two digit numbers in usernames show interesting structure which can be used to infer the age.
Step6: Here I have normalized again in the right panel to remove the underlying population distribution. I do not find any significant differences. There are few sources in the 40%-80% region which limits what we can say about that population.
Step7: The two age dependent populations can be seen here. There is a second population that can be seen here at 22 years old. Going back to the previous 2 digit plots, the trend can be seen as well.
Step8: While 43% correct seems pretty bad, from the earlier visualizations the signal is only coming from two main populations which contain 23% + 33% of the entire 2 digit sample. There is signal coming from the 08-15 range, however this is well mixed with the year of creation, limiting its utility. This means this classifier is actually correct ~77% of the time when there is enough data to estimate the age.
Step9: The random forest classifier is underestimating many of the ages as 24 year olds (bottom row of pixels).
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import time
import pylab
import numpy as np
import pandas as pd
import seaborn as sns
sns.set_style('white')
from pysurvey.plot import setup_sns as setup
from pysurvey.plot import density, icolorbar, text, legend, outline
people = pd.read_csv('/Users/ajmendez/data/okcupid/random_v4.csv')
people = people[(people['age'].notnull()) & (people['match'].notnull()) & (people['match'] > 20)]
print('Found {:,d} profiles'.format(len(people)))
_ = people.hist('match', by='age', bins=np.arange(20,101,4),
lw=0, alpha=0.7, figsize=(18,6), layout=(2,5))
people.hist('age', lw=0, alpha=0.7, grid=False)
def setup_plot(**kwargs):
setup(xlabel='Age [Years]', ylabel='Match percentage', tickmarks=False, grid=False, **kwargs)
pylab.plot(26, 101, 'v', color='r', markeredgecolor='k', clip_on=False)
pylab.text(26.2, 100, 'Age when Profile Written', ha='left', va='bottom', clip_on=False)
setup_plot(figsize=(12,6), subplt=(1,2,1))
den = density(people['age'], people['match'],
bins=[np.arange(24,34), np.arange(20,101,1)],
label='number of profiles / bin', vmin=10, vmax=2000,
cmap=pylab.cm.Spectral_r, colorbar=False, logvrange=False)
icolorbar(den, loc=1, borderpad=-4)
setup_plot(subplt=(1,2,2), yticks=False)
den = density(people['age'], people['match'],
bins=[np.arange(24,34), np.arange(20,101,4)],
label='Relative Density',
vmin=2e-6, vmax=4e-6,
cmap=pylab.cm.Spectral_r, colorbar=False,
ynorm=True, xnorm=True, logvrange=False)
pylab.axhline(80, lw=2, color='k')
# convert binning size and probability distribution values to a simple number
icolorbar(den, loc=1, borderpad=-4, tickfmt='{:0.1f}', tickfcn=lambda x: x*0.5e6)
pylab.tight_layout()
num = people[(people['username_nlength'] == 2)]
print('Found {:,d} users with a 2 digit number'.format(len(num)))
def add_range(xr, label):
yr = pylab.ylim()
n = len(np.where((num['username_number'] > xr[0]) & (num['username_number'] < xr[1]))[0])
print('Found {:,d} {:0.2f}%'.format(n, n*100.0/len(num)))
pylab.axvspan(xr[0],xr[1], lw=0, alpha=0.2, zorder=-1, color='k')
text(np.mean(xr), yr[-1]*0.9, label+'\n({:d}-{:d})'.format(*xr), ha='center', va='top', rotation=90, fontsize=12,
outline_prop=dict(color='0.8'))
ax=setup(figsize=(12,6), xlabel='Number in Username', tickmarks=False, grid=True, ylabel='Number/bin')
num.hist('username_number', bins=np.arange(0,101), lw=0, ax=ax)
pylab.title('')
add_range([8, 15], 'Year of profile creation')
add_range([21,34], 'Age at profile creation')
add_range([82,91], 'Year of birth')
def setup_plot(**kwargs):
setup(xlabel='number in username', ylabel='age [years]',
ytickv=age+0.5, yticknames=age, **kwargs)
def add_line(x,y, label, **kwargs):
params = dict(label=label, lw=3, alpha=0.5, color='k')
params.update(kwargs)
outline(pylab.plot(x,y, **params))
text(x[-1]+5, y[-1], label, rotation=90, ha='center', va='top',
fontsize=12, fontweight='bold', color=params['color'])
# def add_vline(x, label, **kwargs):
# outline(pylab.axvline(x, lw=3, alpha=0.5, color='k', label=label))
# text(x+5, 32, label, rotation=90, alpha)
age = np.arange(24,34)
setup_plot(figsize=(12,6), subplt=(1,2,1))
den = density(num['username_number'], num['age'],
bins=[np.arange(0,101), np.arange(24,34), ],
label='Number of Profiles', vmin=0, vmax=400,
ynorm=False, cmap=pylab.cm.Spectral_r, colorbar=False)
icolorbar(den, loc=1, borderpad=-4)
setup_plot(subplt=(1,2,2), xr=[0,100], yticks=False)
den = density(num['username_number'], num['age'], alpha=0.3,
bins=[np.arange(0,101), np.arange(24,34), ],
label='Number of Profiles', vmin=0, vmax=400,
cmap=pylab.cm.Spectral_r, colorbar=False)
add_line((2015-age)-1900, age, 'Birth Year')
add_line(age, age, 'Age')
add_line(33-age, age, '18 Year Olds')
add_line(14*np.ones(age.shape), age, '~2014')
pylab.tight_layout()
zip(age, (2015-age)%1900, ((2015-age)%1900+18)%100)
def setup_plot(**kwargs):
setup(xlabel='number in username', ylabel='match percentage', **kwargs)
setup_plot(figsize=(12,6), subplt=(1,2,1))
den = density(num['username_number'], num['match'],
bins=[np.arange(20,101,2), np.arange(0,101,2)],
label='number of profiles / bin',
cmap=pylab.cm.Spectral_r, colorbar=False, logvrange=False)
icolorbar(den, loc=1, borderpad=-4)
setup_plot(subplt=(1,2,2), yticks=False)
den = density(num['username_number'], num['match'],
bins=[np.arange(20,101,4), np.arange(0,101,4)],
label='Log Relative Density', vmin=-5.3, vmax=-4,
xnorm=True, ynorm=True, logvrange=True,
cmap=pylab.cm.Spectral_r, colorbar=False)
pylab.axhline(80, lw=2, color='k')
icolorbar(den, loc=1, borderpad=-4)
pylab.tight_layout()
fournum = people[(people['username_nlength'] == 4)]
print('Found {:,d} profiles with 4 digits'.format(len(fournum)))
def setup_plot(**kwargs):
setup(xlabel='number in username', ylabel='age [years]', **kwargs)
def add_line(x,y, label, offset=5, **kwargs):
params = dict(label=label, lw=3, alpha=0.5, color='r')
params.update(kwargs)
outline(pylab.plot(x,y, **params))
text(x[-1]+offset, y[-1], label, rotation=90, ha='center', va='top', fontsize=12, color=params['color'])
setup_plot(figsize=(12,6), subplt=(1,2,1))
den = density(fournum['username_number'], fournum['age'],
bins=[np.arange(1970,2018), np.arange(24,34), ],
label='Number of Profiles', vmin=0, vmax=200,
ynorm=False, cmap=pylab.cm.Spectral_r, colorbar=False)
icolorbar(den, loc=1, borderpad=-4)
setup_plot(subplt=(1,2,2), xr=[1970,2017], yticks=False)
den = density(fournum['username_number'], fournum['age'], logvrange=True,
bins=[np.arange(1970,2018), np.arange(24,34), ],
label='Number of Profiles', #vmin=0, vmax=1000,
cmap=pylab.cm.Blues, colorbar=False)
age = np.arange(24,34)
add_line(2015-age, age, 'Birth Year')
add_line(2015+18-age, age, '18 year olds', color='purple', offset=-2)
add_line(2015+22-age, age, '22 year olds', color='orange')
pylab.tight_layout()
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier, AdaBoostRegressor
from sklearn.svm import SVC, LinearSVC
from sklearn import cross_validation
rcf = RandomForestClassifier(
random_state=0,
# n_estimators=150,
# min_samples_split=4,
# min_samples_leaf=2
)
scores = cross_validation.cross_val_score(rcf,
num['username_number'].reshape(-1, 1),
num["age"],
cv=3
)
print('{:0.1f} +/- {:0.1f}'.format(100*scores.mean(), 200*scores.std()))
predicted = cross_validation.cross_val_predict(rcf, num['username_number'].reshape(-1, 1), num["age"], cv=10)
setup(figsize=(12,6), subplt=(1,2,1),
title='All profiles: {:,d}'.format(len(num)),
xlabel='Age', ylabel='Predicted Age')
den = density(num["age"], predicted,
bins=(np.arange(24,33),np.arange(24,33)),
colorbar=False, cmap=pylab.cm.Spectral_r)
icolorbar(den)
isgood = np.where( ( (num['username_number'] > 22) & (num['username_number'] < 34) ) |
( (num['username_number'] > 82) & (num['username_number'] < 91) ) )[0]
setup(subplt=(1,2,2), yticks=False,
title='Profile Subset {:,d}'.format(len(isgood)),
xlabel='Age', ylabel='Predicted Age')
den = density(num.iloc[isgood]['age'], predicted[isgood],
bins=(np.arange(24,33),np.arange(24,33)),
colorbar=False, cmap=pylab.cm.Spectral_r)
icolorbar(den)
pylab.tight_layout()
scores = cross_validation.cross_val_score(
GradientBoostingClassifier(
# n_estimators=100,
# learning_rate=1.0,
# max_depth=1,
random_state=0
),
num['username_number'].reshape(-1, 1),
num["age"],
cv=3
)
print('{:0.1f} +/- {:0.1f}'.format(100*scores.mean(), 200*scores.std()))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating Numpy Arrays
Step2: You can initialize an array (of any dimension) of all ones or all zeroes with the ones() and zeros() functions
Step3: You can also initialize an empty array which will be filled with values. This is the fastest way to initialize a fixed-size numpy array; however, you must ensure that you replace all of the values.
Step4: Accessing array elements
Step5: 2D arrays are accessed similarly by referring to the row and column index separated by a comma
Step6: Sequences of indices can be accessed using ':'
Step7: You can also pass a list of indices.
Step8: You can also use true/false values to select values
Step9: For 2D arrays you can select specific columns and specific rows. Passing ':' selects all rows or all columns
Step10: Operations on Arrays
Step11: You can compute the sum with np.sum() and the average with np.average()
Step12: The dot product
Step13: Recall that the Euclidean length (or magnitude) of a vector is the square root of the sum of the squares of the components. This is just the square root of the dot product of the vector with itself.
Step14: We can also use the dot product when we have a 2D array (or matrix). When you have a vector with the same number of elements as the matrix (2D array) has columns, you can right-multiply the matrix by the vector to get another vector with the same number of elements as the matrix has rows. For example, this is how you compute the predicted values given a matrix of features and an array of weights.
Step15: Similarly, if you have a vector with the same number of elements as the matrix has rows, you can left-multiply them.
Step16: Multiplying Matrices
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np # importing this way allows us to refer to numpy as np
mylist = [1., 2., 3., 4.]
mynparray = np.array(mylist)
mynparray
one_vector = np.ones(4)
print one_vector # using print removes the array() portion
one2Darray = np.ones((2, 4)) # an 2D array with 2 "rows" and 4 "columns"
print one2Darray
zero_vector = np.zeros(4)
print zero_vector
empty_vector = np.empty(5)
print empty_vector
mynparray[2]
my_matrix = np.array([[1, 2, 3], [4, 5, 6]])
print my_matrix
print my_matrix[1, 2]
print my_matrix[0:2, 2] # recall 0:2 = [0, 1]
print my_matrix[0, 0:3]
fib_indices = np.array([1, 1, 2, 3])
random_vector = np.random.random(10) # 10 random numbers between 0 and 1
print random_vector
print random_vector[fib_indices]
my_vector = np.array([1, 2, 3, 4])
select_index = np.array([True, False, True, False])
print my_vector[select_index]
select_cols = np.array([True, False, True]) # 1st and 3rd column
select_rows = np.array([False, True]) # 2nd row
print my_matrix[select_rows, :] # just 2nd row but all columns
print my_matrix[:, select_cols] # all rows and just the 1st and 3rd column
my_array = np.array([1., 2., 3., 4.])
print my_array*my_array
print my_array**2
print my_array - np.ones(4)
print my_array + np.ones(4)
print my_array / 3
print my_array / np.array([2., 3., 4., 5.]) # = [1.0/2.0, 2.0/3.0, 3.0/4.0, 4.0/5.0]
print np.sum(my_array)
print np.average(my_array)
print np.sum(my_array)/len(my_array)
array1 = np.array([1., 2., 3., 4.])
array2 = np.array([2., 3., 4., 5.])
print np.dot(array1, array2)
print np.sum(array1*array2)
array1_mag = np.sqrt(np.dot(array1, array1))
print array1_mag
print np.sqrt(np.sum(array1*array1))
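# Equivalent shortcut: np.linalg.norm computes the same Euclidean length in one call.
print np.linalg.norm(array1)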
my_features = np.array([[1., 2.], [3., 4.], [5., 6.], [7., 8.]])
print my_features
my_weights = np.array([0.4, 0.5])
print my_weights
my_predictions = np.dot(my_features, my_weights) # note that the weights are on the right
print my_predictions # which has 4 elements since my_features has 4 rows
my_matrix = my_features
my_array = np.array([0.3, 0.4, 0.5, 0.6])
print np.dot(my_array, my_matrix) # which has 2 elements because my_matrix has 2 columns
matrix_1 = np.array([[1., 2., 3.],[4., 5., 6.]])
print matrix_1
matrix_2 = np.array([[1., 2.], [3., 4.], [5., 6.]])
print matrix_2
print 2 * np.dot(matrix_1, matrix_2)
print np.dot(matrix_1, matrix_2)
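# The same matrix product can also be written with the array's dot method (equivalent to np.dot):
print matrix_1.dot(matrix_2)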
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: plot the time courses....
Step2: plot brain in 3D with mne.viz.Brain if available
|
<ASSISTANT_TASK:>
Python Code:
# Author: Luke Bloy <luke.bloy@gmail.com>
# Alex Gramfort <alexandre.gramfort@inria.fr>
# License: BSD-3-Clause
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.minimum_norm import read_inverse_operator, apply_inverse
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
fname_inv = (
data_path / 'MEG' / 'sample' / 'sample_audvis-meg-oct-6-meg-inv.fif')
fname_evoked = data_path / 'MEG' / 'sample' / 'sample_audvis-ave.fif'
subjects_dir = data_path / 'subjects'
subject = 'sample'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
# Compute a label/ROI based on the peak power between 80 and 120 ms.
# The label bankssts-lh is used for the comparison.
aparc_label_name = 'bankssts-lh'
tmin, tmax = 0.080, 0.120
# Load data
evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0))
inverse_operator = read_inverse_operator(fname_inv)
src = inverse_operator['src'] # get the source space
# Compute inverse solution
stc = apply_inverse(evoked, inverse_operator, lambda2, method,
pick_ori='normal')
# Make an STC in the time interval of interest and take the mean
stc_mean = stc.copy().crop(tmin, tmax).mean()
# use the stc_mean to generate a functional label
# region growing is halted at 60% of the peak value within the
# anatomical label / ROI specified by aparc_label_name
label = mne.read_labels_from_annot(subject, parc='aparc',
subjects_dir=subjects_dir,
regexp=aparc_label_name)[0]
stc_mean_label = stc_mean.in_label(label)
data = np.abs(stc_mean_label.data)
stc_mean_label.data[data < 0.6 * np.max(data)] = 0.
# 8.5% of original source space vertices were omitted during forward
# calculation, suppress the warning here with verbose='error'
func_labels, _ = mne.stc_to_label(stc_mean_label, src=src, smooth=True,
subjects_dir=subjects_dir, connected=True,
verbose='error')
# take first as func_labels are ordered based on maximum values in stc
func_label = func_labels[0]
# load the anatomical ROI for comparison
anat_label = mne.read_labels_from_annot(subject, parc='aparc',
subjects_dir=subjects_dir,
regexp=aparc_label_name)[0]
# extract the anatomical time course for each label
stc_anat_label = stc.in_label(anat_label)
pca_anat = stc.extract_label_time_course(anat_label, src, mode='pca_flip')[0]
stc_func_label = stc.in_label(func_label)
pca_func = stc.extract_label_time_course(func_label, src, mode='pca_flip')[0]
# flip the pca so that the max power between tmin and tmax is positive
pca_anat *= np.sign(pca_anat[np.argmax(np.abs(pca_anat))])
pca_func *= np.sign(pca_func[np.argmax(np.abs(pca_anat))])
plt.figure()
plt.plot(1e3 * stc_anat_label.times, pca_anat, 'k',
label='Anatomical %s' % aparc_label_name)
plt.plot(1e3 * stc_func_label.times, pca_func, 'b',
label='Functional %s' % aparc_label_name)
plt.legend()
plt.show()
brain = stc_mean.plot(hemi='lh', subjects_dir=subjects_dir)
brain.show_view('lateral')
# show both labels
brain.add_label(anat_label, borders=True, color='k')
brain.add_label(func_label, borders=True, color='b')
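# Optional extra (an assumption, not required by the example): the functional label
# could be saved to disk for later reuse, e.g.
# func_label.save('sample_audvis_func_label')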
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Two kinds of vector products we'll see
Step2: 2) Use the function outer(vector1, vector2) to find the outer product of rv and cv. Does the order of the arguments matter?
Step3: II. Complex vectors
Step4: The complex conjugate changes the sign of the imaginary part
Step5: 3) Use dot() and .conjugate() to find the dot product of v1 and its own conjugate
Step6: III. Matrices
Step7: Can also use the matrix type which is like array but restricts to 2D. Also, matrix adds .H and .I methods for hermitian and inverse, respectively. For more information, see Stack Overflow question #4151128
Step8: Examples
Step9: Interpret this result
Step10: Signs are opposite compared to the book, but it turns out that an overall sign (-) doesn't matter in the interpretation of eigenvectors.
Step11: Part 2
Step12: Vector products in QuTiP
Step13: Matrix in QuTiP
Step14: Example from class
Step15: Some additional examples we have already done by hand
Step16: Problem 1.8
|
<ASSISTANT_TASK:>
Python Code:
from numpy import array, dot, outer, sqrt, matrix
from numpy.linalg import eig, eigvals
from matplotlib.pyplot import hist
%matplotlib inline
rv = array([1,2]) # a row vector
rv
cv = array([[3],[4]]) # a column vector
cv
rv
dot(rv,cv)
dot(cv,rv)
outer(rv,cv)
outer(cv,rv)
# Complex numbers in python have a j term:
a = 1+2j
a
v1 = array([1+2j, 3+2j, 5+1j, 4+0j])
v1
v1.conjugate()
dot(v1.conjugate(),v1)
# a two-dimensional array
m1 = array([[0,1],[-2,3]])
m1
# can find transpose with the T method:
m1.T
# find the eigenvalues and eigenvectors of a matrix:
eig(m1)
m2 = matrix( [[2,1],[2,1]])
m2.H
evals, evec = eig(m2)
evals
evec
# use a question mark to get help on a command
eig?
M14 = array([[0,1],[-2,3]])
eig(M14)
1/sqrt(2) # this is the value for both entries in the first eigenvector
1/sqrt(5) # this is the first value in the second eigenvector
2/sqrt(5) # this is the second value in the second eigenvector
eig(M14)
M16 = array([[0,-1j],[1j,0]])
evals, evecs = eig(M16)
evecs
evecs[:,0]
evecs[:,1]
dot(evecs[:,0].conjugate(),evecs[:,1])
from qutip import *
# Create a row vector:
qv = Qobj([[1,2]])
qv
# Find the corresponding column vector
qv.dag()
qv2 = Qobj([[1+2j,4-1j]])
qv2
qv2.dag()
qv2*qv2.dag() # inner product (dot product)
qv2.dag()*qv2 # outer product
qm = Qobj([[1,2],[2,1]])
qm
qm.eigenenergies() # in quantum (as we will learn) eigenvalues often correspond to energy levels
evals, evecs = qm.eigenstates()
evecs
evecs[0]
qv = Qobj([[0,1],[-2,3]])
qv
evals, evecs = qv.eigenstates()
evals
evecs
qv*evecs[0]
qv*evecs[0] == evals[0]*evecs[0]
qv*evecs[1] == evals[1]*evecs[1]
# Solution
data = [10,13,14,14,6,8,7,9,12,14,13,11,10,7,7]
# Fill in the hist() function:
n, bins, patches = hist(data, bins=9, range=(5,14))
# Partial Solution:
from sympy import *
c, a, x = symbols("c a x", positive=True)  # declare the symbols as positive so the integrals converge cleanly
first = integrate( c*exp(-a*x) ,(x,0,oo),conds='none')
first
check = integrate( a*exp(-a*x) ,(x,0,oo),conds='none')
check
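# One possible next step (a sketch, assuming the goal is to normalize the distribution):
# solve for c so that the integral over x equals 1.
solve(Eq(first, 1), c)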
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's set up a coplanar two planet system.
Step2: We're now going to integrate the system forward in time. We assume the observer of the system is in the direction of the positive x-axis. We want to measure the time when the inner planet transits. In this geometry, this happens when the y coordinate of the planet changes sign. Whenever we detect a change in sign between two steps, we try to find the transit time, which must lie somewhere within the last step, by bisection.
Step3: Next, we do a linear least square fit to remove the linear trend from the transit times, thus leaving us with the transit time variations.
Step4: Finally, let us plot the TTVs.
|
<ASSISTANT_TASK:>
Python Code:
import rebound
import numpy as np
sim = rebound.Simulation()
sim.add(m=1)
sim.add(m=1e-5, a=1,e=0.1,omega=0.25)
sim.add(m=1e-5, a=1.757)
sim.move_to_com()
N=174
transittimes = np.zeros(N)
p = sim.particles
i = 0
while i<N:
y_old = p[1].y - p[0].y # (Thanks to David Martin for pointing out a bug in this line!)
t_old = sim.t
sim.integrate(sim.t+0.5) # check for transits every 0.5 time units. Note that 0.5 is shorter than one orbit
t_new = sim.t
if y_old*(p[1].y-p[0].y)<0. and p[1].x-p[0].x>0.: # sign changed (y_old*y<0), planet in front of star (x>0)
while t_new-t_old>1e-7: # bisect until prec of 1e-7 reached
if y_old*(p[1].y-p[0].y)<0.:
t_new = sim.t
else:
t_old = sim.t
sim.integrate( (t_new+t_old)/2.)
transittimes[i] = sim.t
i += 1
sim.integrate(sim.t+0.05) # integrate 0.05 to be past the transit
A = np.vstack([np.ones(N), range(N)]).T
c, m = np.linalg.lstsq(A, transittimes, rcond=-1)[0]
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10,5))
ax = plt.subplot(111)
ax.set_xlim([0,N])
ax.set_xlabel("Transit number")
ax.set_ylabel("TTV [hours]")
plt.scatter(range(N), (transittimes-m*np.array(range(N))-c)*(24.*365./2./np.pi));
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Line plot of sunspot data
Step2: Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts.
Step3: Make a line plot showing the sunspot count as a function of year.
Step4: Describe the choices you have made in building this visualization and how they make it effective.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os
assert os.path.isfile('yearssn.dat')
data = np.loadtxt('yearssn.dat','float')
year = np.array(range(len(data)),'float')
ssc = np.array(range(len(data)),'float')
for x in range(len(data)):
year[x] = data[x,0]
ssc[x] = data[x,1]
assert len(year)==315
assert year.dtype==np.dtype(float)
assert len(ssc)==315
assert ssc.dtype==np.dtype(float)
plt.figure(figsize=(18,1))
plt.plot(year,ssc)
plt.xlabel('Year')
plt.ylabel('Mean Sunspot \n Number')
plt.title('Sunspot Data from 1700-2014')
plt.xlim(1700.0, 2015.0)
plt.ylim(0, 200)
plt.yticks([100,200])
assert True # leave for grading
plt.figure(figsize=(13,8))
plt.subplot(4,1,1)
plt.plot(year,ssc)
plt.xlabel('Year')
plt.ylabel('Mean Sunspot \n Number')
plt.title('Sunspot Data from 1700-2014')
plt.xlim(1700.0, 1800.0)
plt.ylim(0, 200)
plt.yticks([100,200])
plt.subplot(4,1,2) # 2 rows x 1 col, plot 2
plt.plot(year,ssc)
plt.xlabel('Year')
plt.ylabel('Mean Sunspot \n Number')
plt.xlim(1800.0, 1900.0)
plt.ylim(0, 200)
plt.yticks([100,200])
plt.subplot(4,1,3) # 2 rows x 1 col, plot 2
plt.plot(year,ssc)
plt.xlabel('Year')
plt.ylabel('Mean Sunspot \n Number')
plt.xlim(1900.0, 2000.0)
plt.ylim(0, 200)
plt.yticks([100,200])
plt.subplot(4,1,4) # 2 rows x 1 col, plot 2
plt.plot(year,ssc)
plt.xlabel('Year')
plt.ylabel('Mean Sunspot \n Number')
plt.xlim(2000.0, 2015.0)
plt.ylim(0, 200)
plt.yticks([100,200])
plt.tight_layout()
assert True # leave for grading
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Count the number of tweets per day for every news item and calculate the cumulative diffusion
Step2: Plot diffusion for every day for all news together
Step3: Plot the cumulative diffusion of all news together
Step4: Plot the cumulative diffusion for every news headline
Step5: Average diffusion per day for all news
Step6: The same graph but in logarithmic scale
Step7: Calculate and plot the standard deviation
Step8: Calculate and plot share of values inside one standard deviation for every day
Step9: Store the average diffusion data on the hard drive for use by another Jupyter notebook
Step10: Plot average diffusion for both real and fake news on one graph
Step11: In logarithmic scale
Step12: Calculate the average diffusion duration (number of days until the diffusion dies out)
|
<ASSISTANT_TASK:>
Python Code:
import datetime
import numpy as np
import pymongo
import matplotlib.pyplot as plt
client = pymongo.MongoClient("46.101.236.181")
db = client.allfake
# get collection names
collections = sorted([collection for collection in db.collection_names()])
day = {} # number of tweets per day per collection
diff = {} # cumulative diffusion per day per collection
for collection in collections:
# timeframe
relevant_from = db[collection].find().sort("timestamp", pymongo.ASCENDING).limit(1)[0]['timestamp']
relevant_till = db[collection].find().sort("timestamp", pymongo.DESCENDING).limit(1)[0]['timestamp']
i = 0
day[collection] = [] # number of tweets for every collection for every day
diff[collection] = [] # cumulative diffusion for every collection for every day
averagediff = [] # average diffusion speed for every day for all news
d = relevant_from
delta = datetime.timedelta(days=1)
while d <= relevant_till:
# tweets per day per collection
day[collection].append(db[collection].find({"timestamp":{"$gte": d, "$lt": d + delta}}).count())
# cumulative diffusion per day per collection
if i == 0:
diff[collection].append( day[collection][i] )
else:
diff[collection].append( diff[collection][i-1] + day[collection][i] )
d += delta
i += 1
# the longest duration of diffusion among all news headlines
max_days = max([len(day[coll]) for coll in \
[days_col for days_col in day] ])
summ_of_diffusions = [0] * max_days # summary diffusion for every day
# calculate summary diffusion for every day
for d in range(max_days):
for c in collections:
# if there is an entry for this day for this collection, add its number of tweets to the number of this day
if d < len(day[c]):
summ_of_diffusions[d] += day[c][d]
plt.step(range(len(summ_of_diffusions)),summ_of_diffusions, 'g')
plt.xlabel('Day')
plt.ylabel('Number of tweets')
plt.title('Diffusion of all real news together')
plt.show()
summ_of_diffusions_cumulative = [0] * max_days #
summ_of_diffusions_cumulative[0] = summ_of_diffusions[0]
for d in range(1, max_days):
summ_of_diffusions_cumulative[d] += summ_of_diffusions_cumulative[d-1] + summ_of_diffusions[d]
plt.step(range(len(summ_of_diffusions_cumulative)),summ_of_diffusions_cumulative, 'g')
plt.xlabel('Day')
plt.ylabel('Cumulative number of tweets')
plt.title('Cumulative diffusion of all real news together')
plt.show()
for collection in collections:
plt.step([d+1 for d in range(len(diff[collection]))], diff[collection])
plt.xlabel('Day')
plt.ylabel('Cumulative number of tweets')
plt.title('Cumulative diffusion of real news headlines')
plt.show()
averagediff = [0 for _ in range(max_days)] # average diffusion for every day
for collection in collections:
for i,d in enumerate(day[collection]):
averagediff[i] += d / len(collections)
plt.xlabel('Day')
plt.ylabel('Average number of tweets')
plt.step(range(1,len(averagediff)+1),averagediff, 'g')
plt.title('Average diffusion of real news')
plt.show()
plt.ylabel('Average number of tweets')
plt.xlabel('Day')
plt.yscale('log')
plt.step(range(1,len(averagediff)+1),averagediff, 'g')
plt.show()
avgdiff_std = [0 for _ in range(max_days)] # standard deviation for every day for all collections
number_tweets = [[] for _ in range(max_days)] # number of tweets for every day for every collection
for d in range(max_days):
for c in collections:
# if there is an entry for this day for this collection
if d < len(day[c]):
# add number of tweets for this day for this colletion to the number_tweets for this day
number_tweets[d].append(day[c][d])
# calculate standard deviation for this day
avgdiff_std[d] = np.std(number_tweets[d])
plt.ylabel('Standard deviation for average number of tweets per day')
plt.xlabel('Day')
plt.step(range(1,len(avgdiff_std)+1),avgdiff_std, 'g')
plt.title('Standard deviation for real news average')
plt.show()
inside_std = [0 for _ in range(max_days)] # number of values inside one standard deviation for every day
inside_std_share = [0 for _ in range(max_days)] # share of values inside one standard deviation for every day
for d in range(max_days):
for c in collections:
# set borders of mean plusminus one std
lowest = averagediff[d] - avgdiff_std[d]
highest = averagediff[d] + avgdiff_std[d]
# if there is entray for this day for this collection and its value is inside the borderes
if d < len(day[c]) and (day[c][d] >= lowest and day[c][d] <= highest):
# increment number of values inside one std for this day
inside_std[d] += 1
# calculate the share of values inside one std for this day
inside_std_share[d] = inside_std[d] / float(len(number_tweets[d]))
plt.ylabel('Percent of values in 1 std from average')
plt.xlabel('Day')
plt.scatter(range(1,len(inside_std_share)+1),inside_std_share, c='g')
plt.title('Percentage of values inside the range\n of one standard deviation from mean for real news')
plt.show()
averagediff_real = averagediff
%store averagediff_real
# from hard drive, load data for average diffusion of fake news
%store -r averagediff_fake
plt.xlabel('Day')
plt.ylabel('Average number of tweets')
plt.step(range(1,len(averagediff)+1),averagediff, 'g', label="real news")
plt.step(range(1,len(averagediff_fake)+1),averagediff_fake, 'r', label="fake news")
plt.legend()
plt.title('Average diffusion for both types of news')
plt.show()
plt.ylabel('Average number of tweets')
plt.xlabel('Day')
plt.yscale('log')
plt.step(range(1,len(averagediff_fake)+1),averagediff_fake, 'r', range(1,len(averagediff)+1),averagediff, 'g')
plt.show()
diffDurationAvg = 0; # average duration of diffusion
durations = [len(day[col]) for col in collections] # all durations
diffDurationAvg = np.mean(durations) # mean duration
diffDurationAvg_std = np.std(durations) # standard deviation for the mean
print "Average diffusion duration: %.2f days" % diffDurationAvg
print "Standard deviation: %.2f days" % diffDurationAvg_std
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: This query finds the average prescription cost per claim by state.
Step4: This query finds the average tip based on payment type.
Step6: This query groups on a discrete number and will be inherently faster than the name-based queries (which group on strings).
Step7: Without BI Engine
Step8: Here, I'm running the query without BI Engine turned on.
Step9: Slot milliseconds are a proxy for the cost if you have a reservation -- they measure how much your BigQuery slots are getting used.
|
<ASSISTANT_TASK:>
Python Code:
from google.cloud import bigquery
from timeit import default_timer as timer
from datetime import timedelta
def show_query(query):
client = bigquery.Client()
query_job = client.query(query, bigquery.job.QueryJobConfig(use_query_cache=False))
df = query_job.result().to_dataframe()
print("Compute: {} slotms Bytes: {:.1f} MB".format(query_job.slot_millis, query_job.total_bytes_processed/(1024*1024)))
return df
COST_BY_STATE = """
SELECT
nppes_provider_state, SUM(total_drug_cost)/SUM(total_claim_count) AS avg_cost
FROM `bigquery-public-data.medicare.part_d_prescriber_2014`
GROUP BY 1
ORDER BY 2 DESC
LIMIT 10
"""
show_query(COST_BY_STATE)
TIPS_BY_TYPE = """
SELECT payment_type, AVG(tips) AS avg_tip
FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`
GROUP BY 1
ORDER BY 2 DESC
"""
show_query(TIPS_BY_TYPE)
AIR_QUALITY = """
SELECT
site_num,
ANY_VALUE(state_name) AS state,
AVG(aqi) AS air_quality_index
FROM `bigquery-public-data.epa_historical_air_quality.pm10_daily_summary`
GROUP BY site_num
ORDER BY air_quality_index DESC
LIMIT 10
"""
show_query(AIR_QUALITY)
from google.cloud import bigquery
from timeit import default_timer as timer
from datetime import timedelta
# Construct a BigQuery client object.
client = bigquery.Client()
def run_query(query, n=5):
tot_slotmillis, tot_timeelapsed = 0, timedelta(0)
for iter in range(n):
query_job = client.query(query, bigquery.job.QueryJobConfig(use_query_cache=False))
df = query_job.result().to_dataframe()
tot_timeelapsed += (query_job.ended - query_job.started)
tot_slotmillis += query_job.slot_millis
print("Job stat: slot_mills={} server_time={}".format(tot_slotmillis/n, tot_timeelapsed/n))
run_query(COST_BY_STATE)
run_query(TIPS_BY_TYPE)
run_query(AIR_QUALITY)
run_query(COST_BY_STATE)
run_query(TIPS_BY_TYPE)
run_query(AIR_QUALITY)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Compare the results between standard AM1.5d and the SMARTS-generated spectrum
|
<ASSISTANT_TASK:>
Python Code:
# Note: build_smarts_file, send_to_smarts, Illumination, and get_astm_airmass are
# assumed to be provided by the notebook's SMARTS wrapper module (imported earlier).
import numpy as np
import matplotlib.pyplot as plt
file = build_smarts_file(
**astmg_173_03_m
)
data = send_to_smarts(file)
plt.plot(data.iloc[:,0],data.iloc[:,1],hold=True)
plt.plot(data.iloc[:,0],data.iloc[:,4])
plt.show()
data.columns
ill=Illumination("AM1.5d")
x,y=ill.get_spectrum("nm")
plt.plot(x,y,hold=True,label="AM1.5d")
plt.plot(data['WVLGTH'],data['BEAM_NORM+'],hold=True,label="SMART")
ill_g=Illumination("AM1.5g")
x,y=ill_g.get_spectrum("nm")
plt.plot(x,y,hold=True,label="AM1.5g")
plt.plot(data['WVLGTH'],data['GLOBL_TILT'],hold=True,label="SMART")
data=get_astm_airmass(1.5)
plt.plot(x,y,hold=True,label="AM1.5g")
plt.plot(data['WVLGTH'],data['GLOBL_TILT'],hold=True,label="SMART")
amass=np.linspace(0.5,3,num=10)
for a in amass:
data=get_astm_airmass(a)
plt.plot(data['WVLGTH'],data['GLOBL_TILT'],hold=True,label='%s'%a)
amass=np.linspace(0.5,3,num=10)
for a in amass:
data=get_astm_airmass(a)
plt.plot(data['WVLGTH'],data['GLOB_HORIZ'],hold=True,label='%s'%a)
data.columns
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part 2 - Adding Latitude and Longitude
Step2: Part 3 - Clustering
Step3: The plot above shows the regions in Toronto; however, the clusters are not visible
|
<ASSISTANT_TASK:>
Python Code:
import requests
import lxml
import pandas as pd
import numpy as np
from bs4 import BeautifulSoup
wiki_page = requests.get('https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M').text
soup = BeautifulSoup(wiki_page, 'lxml')
table = soup.find('table')
# table
toronto_table = soup.find('table',{'class':'wikitable sortable'})
links = toronto_table.findAll('td')
pincodes = []
count = 0
for x in links:
if count == 0:
x1 = x.text
count += 1
elif count == 1:
x2 = x.text
count +=1
elif count == 2:
x3 = x.text
x3 = x3.replace('\n','')
count = 0
if x3 == 'Not assigned':
x3 = x2
if x2 != 'Not assigned':
pincodes.append((x1,x2,x3))
# print (pincodes)
result = {}
for x in pincodes:
if x[0] in result:
result[x[0]] = [x[0], x[1], result[x[0]][1] + ', ' + x[2]]
else:
result[x[0]] = [x[0], x[1], x[2]]
results = {}
for count, x in enumerate(result):
results[count] = [x, result[x][1], result[x][2]]
# print(results)
toronto_data = pd.DataFrame.from_dict(results, orient='index', columns=['PostalCode', 'Borough', 'Neighborhood'])
toronto_data
# Trial - not working, or taking too long
import geocoder # import geocoder
update_results = {}
for postal_code in toronto_data['PostalCode']:
lat_lng_coords = None
while(lat_lng_coords is None):
geo_info = geocoder.google('{}, Toronto, Ontario'.format(postal_code))
lat_lng_coords = geo_info.latlng
latitude = lat_lng_coords[0]
longitude = lat_lng_coords[1]
update_results[postal_code] = {"latitude":latitude, "longitude":longitude}
toronto_data['PostalCode']
coordinates = pd.read_csv('http://cocl.us/Geospatial_data')
coordinates.rename(columns={'Postal Code': 'PostalCode'}, inplace=True)
final_result = pd.merge(toronto_data, coordinates, on='PostalCode')
final_result
import matplotlib.pyplot as plt
lat_lons = []
lats = []
lons = []
for index, row in final_result.iterrows():
lat_lons.append([row['Longitude'], row['Latitude']])
lats.append(row['Latitude'])
lons.append(row['Longitude'])
plt.scatter(lons, lats)
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.title("Toronto Postal Codes Geo Location")
plt.show()
# I have referred to some clustering examples from Kaggle
# https://www.kaggle.com/xxing9703/kmean-clustering-of-latitude-and-longitude
import folium
toronto_latitude = 43.6532; toronto_longitude = -79.3832
map_toronto = folium.Map(location = [toronto_latitude, toronto_longitude], zoom_start = 10.7)
# adding markers to map
for lat, lng, borough, neighborhood in zip(final_result['Latitude'], final_result['Longitude'], final_result['Borough'], final_result['Neighborhood']):
label = '{}, {}'.format(neighborhood, borough)
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='red',
fill=True,
fill_color='#110000',
fill_opacity=0.7).add_to(map_toronto)
map_toronto
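# A minimal clustering sketch (added illustration; assumes scikit-learn is available,
# and the choice of 5 clusters is arbitrary, not taken from the original notebook):
from sklearn.cluster import KMeans
coords = final_result[['Latitude', 'Longitude']].values
kmeans = KMeans(n_clusters=5, random_state=0).fit(coords)
final_result['Cluster'] = kmeans.labels_
final_result.head()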
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Minimal example
Step2: Advanced example
Step3: Progress monitoring and control using callback argument of fit method
Step4: Counting total iterations that will be used to explore all subspaces
|
<ASSISTANT_TASK:>
Python Code:
print(__doc__)
import numpy as np
from skopt import BayesSearchCV
from sklearn.datasets import load_digits
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
X, y = load_digits(10, True)
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.75, test_size=.25, random_state=0)
# log-uniform: understand as search over p = exp(x) by varying x
opt = BayesSearchCV(
SVC(),
{
'C': (1e-6, 1e+6, 'log-uniform'),
'gamma': (1e-6, 1e+1, 'log-uniform'),
'degree': (1, 8), # integer valued parameter
'kernel': ['linear', 'poly', 'rbf'], # categorical parameter
},
n_iter=32,
cv=3
)
opt.fit(X_train, y_train)
print("val. score: %s" % opt.best_score_)
print("test score: %s" % opt.score(X_test, y_test))
from skopt import BayesSearchCV
from skopt.space import Real, Categorical, Integer
from sklearn.datasets import load_digits
from sklearn.svm import LinearSVC, SVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
X, y = load_digits(10, True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# pipeline class is used as estimator to enable
# search over different model types
pipe = Pipeline([
('model', SVC())
])
# single categorical value of 'model' parameter is
# sets the model class
# We will get ConvergenceWarnings because the problem is not well-conditioned.
# But that's fine, this is just an example.
linsvc_search = {
'model': [LinearSVC(max_iter=1000)],
'model__C': (1e-6, 1e+6, 'log-uniform'),
}
# explicit dimension classes can be specified like this
svc_search = {
'model': Categorical([SVC()]),
'model__C': Real(1e-6, 1e+6, prior='log-uniform'),
'model__gamma': Real(1e-6, 1e+1, prior='log-uniform'),
'model__degree': Integer(1,8),
'model__kernel': Categorical(['linear', 'poly', 'rbf']),
}
opt = BayesSearchCV(
pipe,
[(svc_search, 20), (linsvc_search, 16)], # (parameter space, # of evaluations)
cv=3
)
opt.fit(X_train, y_train)
print("val. score: %s" % opt.best_score_)
print("test score: %s" % opt.score(X_test, y_test))
from skopt import BayesSearchCV
from sklearn.datasets import load_iris
from sklearn.svm import SVC
X, y = load_iris(True)
searchcv = BayesSearchCV(
SVC(gamma='scale'),
search_spaces={'C': (0.01, 100.0, 'log-uniform')},
n_iter=10,
cv=3
)
# callback handler
def on_step(optim_result):
score = searchcv.best_score_
print("best score: %s" % score)
if score >= 0.98:
print('Interrupting!')
return True
searchcv.fit(X, y, callback=on_step)
from skopt import BayesSearchCV
from sklearn.datasets import load_iris
from sklearn.svm import SVC
X, y = load_iris(True)
searchcv = BayesSearchCV(
SVC(),
search_spaces=[
({'C': (0.1, 1.0)}, 19), # 19 iterations for this subspace
{'gamma':(0.1, 1.0)}
],
n_iter=23
)
print(searchcv.total_iterations)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: static versus dynamic websites
|
<ASSISTANT_TASK:>
Python Code:
from IPython.core.display import display, HTML
from meta import (
define_website,
video_map,
keys,
url,
)
define_website
# from a package I wrote: https://github.com/dm-wyncode/pynotebook-youtube-editor
from youtube_editor.youtube_editor import (
get_embed_string_from,
)
for key in keys:
display(HTML(get_embed_string_from(video_map.get(key), url)))
html = get_embed_string_from(video_map['entire video'], url)
display(HTML(html))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Ice Albedo
Step7: 1.4. Atmospheric Coupling Variables
Step8: 1.5. Oceanic Coupling Variables
Step9: 1.6. Prognostic Variables
Step10: 2. Key Properties --> Software Properties
Step11: 2.2. Code Version
Step12: 2.3. Code Languages
Step13: 3. Grid
Step14: 3.2. Adaptive Grid
Step15: 3.3. Base Resolution
Step16: 3.4. Resolution Limit
Step17: 3.5. Projection
Step18: 4. Glaciers
Step19: 4.2. Description
Step20: 4.3. Dynamic Areal Extent
Step21: 5. Ice
Step22: 5.2. Grounding Line Method
Step23: 5.3. Ice Sheet
Step24: 5.4. Ice Shelf
Step25: 6. Ice --> Mass Balance
Step26: 7. Ice --> Mass Balance --> Basal
Step27: 7.2. Ocean
Step28: 8. Ice --> Mass Balance --> Frontal
Step29: 8.2. Melting
Step30: 9. Ice --> Dynamics
Step31: 9.2. Approximation
Step32: 9.3. Adaptive Timestep
Step33: 9.4. Timestep
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-3', 'landice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a id="quick_summary">
Step2: df.info() shows data types, number of rows and columns, and memory usage of your data frame
Step3: <a id='sorting'></a>
Step4: <a id='removing_duplicates'></a>
Step5: Define duplicates by column name(s)
Step6: <a id="map"></a>
Step7: Suppose you wanted to add a column indicating the type of animal that each food came from. Let’s write down a mapping of each distinct meat type to the kind of animal using a dictionary and a function
Step8: Or we could use apply() and apply the meat2animal() function to the new column
Step9: We can also use the DataFrame.assign() function, which was added in pandas version 0.16
Step10: [back to top]
Step11: <a id="replacing_values"></a>
Step12: If you want to replace -999 with NaN
Step13: If you want to replace multiple values at once, you instead pass a list then the substitute value
Step14: <a name="drop_fill_na"></a>
Step15: Filling/replacing NaN values with something else (replace NaN with 0/zero)
Step16: <a name="renaming_axis"></a>
Step17: If you want to create a transformed version of a data set without modifying the original, a useful method is rename
Step18: <a id="renaming_indexes_columns"></a>
Step19: You can also apply str functions to modify the index or column labels
Step20: <a id="binning"></a>
Step21: Let’s divide these into bins of 18 to 25, 26 to 35, 35 to 60, and finally 60 and older. To do so, you have to use <i>cut()</i> function
Step22: If you want the right value to be exclusive in the intervals
Step23: You can also pass your own bin names by passing a list or array to the labels option
Step24: [back to top]
Step25: <a id='slicing'></a>
Step26: Getting first n rows of data frame using index slicing syntax
Step27: Slicing based on data frame's index range
Step28: <a id='loc'></a>
Step29: Slicing based on row index label and column label combined using loc
Step30: <a id='iloc'></a>
Step31: <a id='boolean'></a>
Step32: Or I've seen some people do boolean filtering by passing a "criteria" variable to the data frame, like so
Step33: We can also do partial string matching. Let's say you don't know the exact spelling of the word you want to match; you can do this
Step34: Using ~ to do a "NOT"
Step35: Filtering using query() method
Step36: <a id='partial_column_labels'></a>
Step37: <a id='value_counts'></a>
Step38: <a id='cum_sum'></a>
Step39: <a id='pivot_table'></a>
Step40: Calculating the means of each group
Step41: Getting counts by group
Step42: Getting cumulative sum or running total of the group counts
Step43: <a id="transform">
Step44: Goal
Step45: For those curious how to do this using dplyr / dplython
Step46: Here's a comparison I made between dplython and pandas.
Step47: <a id='perc_columns'>
Step48: <a id='transpose'>
Step49: You can also do df.dtypes to accomplish similar output, but it's nice to see actual data along with the columns.
Step50: <a id="index2column">
Step51: <a id="column2index">
Step52: To remove the index name ("a"), do
Step53: <a id='missing_dates'>
Step54: From above, we have holes in our data. Let's fill in those missing holes using the idx date_range we specified above.
Step56: <a id='database'>
Step57: Also check out Yhat's db.py. Currently, doesn't support IBM DB2. So I'm stuck with pyodbc at work for now.
Step58: Below is what data looks like in 'wide' format
Step59: This is what data looks like in 'long' format
Step60: Now we can plot the stock prices. Since the data is in long format, we can take advantage of ggplot's color= parameter, which will magically make line charts with different colors for each stock. If the data was in wide format, you would have had to manually specify all the different stocks to plot or use a loop construct.
Step61: Sadly, ggplot's development appears to have stagnated. I would recommend using other charting libraries instead.
Step62: <a id='category'>
Step63: Now define the order of the cetgorical data using set_categories()
Step64: <a id='merge'>
Step65: By default, merge() does an inner join, but you can specify 'left', 'right', or 'outer' joins. These types of joins are familiar if you have SQL background. Here's a good SQL joins visualizer.
Step66: NOTE
Step67: <a id='convert_type'>
Step68: <a id="matplotlib">
Step69: With version 1.5+
Step70: <a id="subplots">
Step71: For box plots, instead of making sub-plots, pandas intelligently splits the data using the by= parameter"
Step72: <a id="bokeh">
Step73: <a id="plotly">
Step74: <a id="chaining">
Step75: Also check out pipe() method.
Step76: Let's fetch the exchange rate data
Step77: Sanity check on the data
Step78: We'll use this one-liner %R magic command to send the Pandas data frame to R as input using the "-i" flag. See here for other flags.
Step79: Alternatively, you can use pandas built-in facility to do this
Step80: Next, we'll use %%R magic command which will make the entire code cell R executable
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
pd.set_option("display.max_rows",1000) # or pd.options.display.max_rows=1000
pd.set_option("display.max_columns",20) # or pd.options.display.max_columns=20
pd.set_option('precision',7)
pd.set_option('large_repr', 'truncate')
import pandas as pd
data = pd.DataFrame({'group': ['a', 'a', 'a', 'b','b', 'b', 'c', 'c','c'],
'ounces': [4, 3, 12, 6, 7.5, 8, 3, 5, 6]})
data
data.describe()
data.info()
data.sort_values(by=['group','ounces'], ascending=[False, True], inplace=True)
data
import pandas as pd
data = pd.DataFrame({'k1': ['one'] * 3 + ['two'] * 4, 'k2': [3, 2, 1, 3, 3, 4, 4]})
data.sort_values(by='k2')
data
data.drop_duplicates() # by default, duplicate is defined by all columns
data.drop_duplicates(subset='k1') # duplicate in column k1 only
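# A couple of extra variations (in recent pandas versions): keep the last occurrence
# instead of the first, or define duplicates by several columns at once.
data.drop_duplicates(subset='k1', keep='last')
data.drop_duplicates(subset=['k1', 'k2'])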
import pandas as pd
data = pd.DataFrame({'food': ['bacon', 'pulled pork', 'bacon', 'Pastrami','corned beef', 'Bacon', 'pastrami', 'honey ham','nova lox'],
'ounces': [4, 3, 12, 6, 7.5, 8, 3, 5, 6]})
data
meat_to_animal = {
'bacon': 'pig',
'pulled pork': 'pig',
'pastrami': 'cow',
'corned beef': 'cow',
'honey ham': 'pig',
'nova lox': 'salmon'
}
def meat2animal(series):
if series["food"]=='bacon':
return 'pig'
elif series["food"]=='pulled pork':
return 'pig'
elif series["food"]=='pastrami':
return 'cow'
elif series["food"]=='corned beef':
return 'cow'
elif series["food"]=='honey ham':
return 'pig'
else:
return 'salmon'
data['animal'] = data['food'].map(str.lower).map(meat_to_animal)
data
# axis=1 means to apply the function for each row, but I prefer to use axis='columns' instead
data['animal2'] = data.apply(meat2animal,axis='columns')
data
import pandas as pd
import numpy as np
df = pd.DataFrame({'data1' : np.random.randn(5),
'data2' : np.random.randn(5)})
df.assign(ratio = df['data1'] / df['data2'])
data.drop('animal2', axis='columns', inplace=True) # dropping/removing a column
data
import pandas as pd
import numpy as np
data = pd.Series([1., -999., 2., -999., -1000., 3.])
data
data.replace(-999, np.nan, inplace=True)
data
import pandas as pd
data = pd.Series([1., -999., 2., -999., -1000., 3.])
data
data.replace([-999, -1000], np.nan, inplace=True)
data
data.dropna()
data.fillna(value=0)
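# A few more fill strategies (extra examples): forward-fill the last valid value,
# or fill with the mean of the series.
data.fillna(method='ffill')
data.fillna(data.mean())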
import pandas as pd
data = pd.DataFrame(np.arange(12).reshape((3, 4)),index=['Ohio', 'Colorado', 'New York'],columns=['one', 'two', 'three', 'four'])
data
data.index.map(str.upper)
data.index = data.index.map(str.upper)
data
data.rename(index=str.title, columns=str.upper, inplace=True) # str.title means to make the 1st letter capitalized only
data
import pandas as pd
data = pd.DataFrame(np.arange(12).reshape((3, 4)),index=['Ohio', 'Colorado', 'New York'],columns=['one', 'two', 'three', 'four'])
data
data.rename(index={'Ohio': 'INDIANA'},columns={'three': 'peekaboo'},inplace=True)
data
data.rename(index=str.title, columns=str.upper, inplace=True) # str.title means to make the 1st letter capitalized only
data
ages = [20, 22, 25, 27, 21, 23, 37, 31, 61, 45, 41, 32]
bins = [18, 25, 35, 60, 100]
cats = pd.cut(ages, bins)
cats
cats.labels
cats.levels
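# Note: in newer pandas versions the integer codes and the bin labels are exposed as
# cats.codes and cats.categories instead of .labels / .levels.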
pd.value_counts(cats)
pd.cut(ages, [18, 26, 36, 61, 100], right=False)
group_names = ['Youth', 'YoungAdult', 'MiddleAged', 'Senior']
pd.cut(ages, bins, labels=group_names)
pd.value_counts(pd.cut(ages, bins, labels=group_names))
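# Related (extra example): pd.qcut() bins by sample quantiles instead of fixed edges,
# so each bin ends up with roughly the same number of observations.
pd.value_counts(pd.qcut(ages, 4))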
import pandas as pd
df = pd.DataFrame({'key1' : ['a', 'a', 'b', 'b', 'a'],
'key2' : ['one', 'two', 'one', 'two', 'one'],
'data1' : np.random.randn(5),
'data2' : np.random.randn(5)})
df
grouped = df['data1'].groupby(df['key1'])
grouped.mean()
import numpy as np
import pandas as pd
dates = pd.date_range('20130101',periods=6)
df = pd.DataFrame(np.random.randn(6,4),index=dates,columns=list('ABCD'))
df
df[0:3] # get first 3 rows of the data frame
df['20130102':'20130104'] # get rows by index range
df.loc[:,['A','B']] # syntax is: df.loc[rows_index, cols_index]
df.loc['20130102':'20130104',['A','B']]
df.iloc[3] # returns 4th row (index=3) of the data frame
df.iloc[3:5,0:2] # returns specific range of rows and columns of the data frame
df.iloc[[1,5],[0,2]] # returns specific rows and columns using lists containing columns or row indexes
df.iloc[1:3,:] # returning specific rows and returning all columns
df.iloc[:,1:3] # returning all rows and specific columns
df.iloc[1,1] # getting specific scalar/single value
df[df.A > -0.5] # or df[df["A"] > -0.5], this syntax works when there is a space in the column name
criteria = df.A > -0.5
df[criteria]
df2 = df.copy()
df2['E']=['one', 'one','two','three','four','three']
df2
df2[df2['E'].isin(['two','four'])] # read as "return rows where column E contains two or four"
df2[df2.E.str.contains("tw|ou")]
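# Extra example: str.contains() also accepts case=False for case-insensitive matching.
df2[df2.E.str.contains("TW|OU", case=False)]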
df2[~df2['E'].isin(['two','four'])] # column E containing values not in two or four
import numpy as np
import pandas as pd
dates = pd.date_range('20130101',periods=6)
df = pd.DataFrame(np.random.randn(6,4),index=dates,columns=list('ABCD'))
df
df.query('A > C')
df.query('A > 0')
df.query('A > 0 & A < 1')
df.query('A > B | A > C') # where A is greater than B or A is greater than C
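# Extra example: query() can also reference Python variables using the @ prefix.
threshold = 0.5
df.query('A > @threshold')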
import pandas as pd
df = pd.DataFrame({'key1' : ['a', 'a', 'b', 'b', 'a'],
'key2' : ['one', 'two', 'one', 'two', 'one'],
'data1' : np.random.randn(5),
'data2' : np.random.randn(5)})
df
df.filter(like='data')
import pandas as pd
data = pd.DataFrame({'group': ['a', 'a', 'a', 'b','b', 'b', 'c', 'c','c'],
'ounces': [4, 3, 12, 6, 7.5, 8, 3, 5, 6]})
data
data.group.value_counts()
data.group.value_counts().cumsum()
import pandas as pd
data = pd.DataFrame({'group': ['a', 'a', 'a', 'b','b', 'b', 'c', 'c','c'],
'ounces': [4, 3, 12, 6, 7.5, 8, 3, 5, 6]})
data
data.pivot_table(values='ounces',index='group',aggfunc=np.mean)
data.pivot_table(values='ounces',index='group',aggfunc='count')
data.pivot_table(values='ounces',index='group',aggfunc='count').cumsum()
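# Illustrative extension: several aggregations can be computed at once by
# passing a list to aggfunc.
data.pivot_table(values='ounces', index='group', aggfunc=[np.mean, 'count'])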
import pandas as pd
# sample data can be dowloaded here: http://pbpython.com/extras/sales-funnel.xlsx
df = pd.read_csv('/home/pybokeh/Downloads/sales-funnel.csv')
df.head()
by_rep = pd.pivot_table(data=df, index='Rep', columns='Product', values='Quantity', aggfunc='count', fill_value=0)
by_rep
from dplython import *
diamonds.head()
(diamonds.assign(carat_diff=diamonds
.groupby('color')['carat']
.transform(lambda x:x-x.mean())
)
.sort_values(by=['color','carat_diff'])
.head(10)
)
# Using dplyr/dplython syntax
(diamonds >>
group_by(X.color) >>
mutate(carat_diff = X.carat - X.carat.mean()) >>
ungroup() >>
arrange(X.color, X.carat_diff) >>
head(10)
)
perc_of_rows = by_rep.apply(lambda x : x / x.sum() * 100, axis='columns') # or axis=1
perc_of_rows
perc_of_columns = by_rep.apply(lambda x : x / x.sum() * 100, axis='index') # or axis=0
perc_of_columns
import pandas as pd
df = pd.DataFrame({'key1' : ['a', 'a', 'b', 'b', 'a'],
'key2' : ['one', 'two', 'one', 'two', 'one'],
'data1' : np.random.randn(5),
'data2' : np.random.randn(5)})
df
df.head(1).transpose()
df.dtypes
import pandas as pd
data = pd.DataFrame({'group': ['a', 'a', 'a', 'b','b', 'b', 'c', 'c','c'],
'ounces': [4, 3, 12, 6, 7.5, 8, 3, 5, 6]})
data
data.reset_index(level=0, inplace=True)
data
import pandas as pd
col = ['a','b','c']
data = pd.DataFrame([[1,2,3],[10,11,12],[20,21,22]],columns=col)
data
data = data.set_index('a')
data
data.index.name = None
data
import pandas as pd
idx = pd.date_range('09-01-2013', '09-30-2013')
s = pd.Series({'09-02-2013': 2,
'09-03-2013': 10,
'09-06-2013': 5,
'09-07-2013': 1})
s
s.index = pd.DatetimeIndex(s.index)
s = s.reindex(idx, fill_value=0)
s
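# Illustrative extension: reindexing the original sparse series with
# method='ffill' carries the last observation forward instead of inserting 0
# (dates before the first observation stay NaN).
s_orig = pd.Series({'09-02-2013': 2, '09-03-2013': 10, '09-06-2013': 5, '09-07-2013': 1})
s_orig.index = pd.DatetimeIndex(s_orig.index)
s_orig.reindex(idx, method='ffill')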
import pyodbc # If using PostgreSQL, MySQL, SQLite, Redshift, MS SQL Server, or Oracle, use db.py instead
import pandas as pd
from getpass import getpass # Module that will create a text input widget AND mask your password
userid = 'your_userid'
pw = getpass(prompt='Enter your password: ')
cnxn_string = 'DSN=your_dsn;UID=' + userid + ';PWD=' + pw
cnxn = pyodbc.connect(cnxn_string)
cursor = cnxn.cursor()
sql = """
SELECT * from your_table...
"""
df = pd.read_sql(sql, cnxn, index_col=None, parse_dates={'some_column':"%Y-%m-%d"})
# Close connections
cursor.close()
cnxn.close()
%matplotlib inline
import matplotlib.pyplot as plt
from pandas_datareader import data as web
from datetime import datetime
import pandas as pd
stocks = {'tech':['GOOGL', 'MSFT', 'LNKD', 'YHOO', 'FB','HPQ','AMZN'],
'auto':['TM','F','GM','HMC','NSANY','HYMTF'],
'housing':['HD','WMT','LOW']
}
start_date = datetime(2016,1,1)
end_date = datetime(2016, 4, 21)
def get_px(stock, start, end):
return web.get_data_yahoo(stock, start, end)['Adj Close']
df = pd.DataFrame({n: get_px(n, start_date, end_date) for n in stocks['tech']})
df.head(10)
# Create a column from the index values
df.reset_index(level=0, inplace=True)
df.head(10)
df_long = pd.melt(df, id_vars=['Date']).dropna()
df_long.head()
df_long.tail()
df_long.rename(columns={'variable': 'Stock', 'value':'Price'},inplace=True)
df_long.head(10)
%matplotlib inline
import matplotlib.pyplot as plt
from ggplot import *
stock_plot = ggplot(df_long, aes(x='Date', y='Price', color='Stock')) + \
geom_line() + \
ylab('Price (US$)') + \
ggtitle('Auto Stocks: Adj Close') + \
scale_x_date(labels='%b %d %y',breaks=date_breaks('3 months') )
# Have to use MATPLOTLIB api to rotate x tick labels. Hopefully some day, ggplot will implement a rotation option.
fig = stock_plot.draw()
ax = fig.gca()
labels = ax.get_xticklabels()
for label in labels:
label.set_rotation(270)
plt.show()
pivoted = df_long.pivot(index='Date', columns='Stock', values='Price')
pivoted.head()
# since data is now in wide format, we can plot data frames directly
pivoted.plot.line()
import pandas as pd
data = pd.DataFrame({'medal': ['bronze', 'silver', 'silver', 'gold','bronze', 'bronze', 'gold', 'gold','gold'],
'ounces': [4, 3, 12, 6, 7.5, 8, 3, 5, 6]})
data
data["medal"] = data["medal"].astype("category")
data.dtypes
data.sort_values(by=["medal"])
data["medal"].cat.set_categories([ "gold","silver","bronze"],inplace=True)
data.sort_values(by=["medal"])
import pandas as pd
left = pd.DataFrame({'group': ['a', 'a', 'a', 'b','b', 'b', 'c', 'c','c'],
'ounces': [4, 3, 12, 6, 7.5, 8, 3, 5, 6]})
left
right = pd.DataFrame({'label': ['a','b','c'],
'value': ['alpha','beta','charlie']})
right
inner_joined = pd.merge(left, right, how='inner', left_on='group', right_on='label')
inner_joined
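# Illustrative extension: a left join keeps every row of `left`; rows without a
# match in `right` would get NaN (here every group has a match).
pd.merge(left, right, how='left', left_on='group', right_on='label')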
import numpy as np
import pandas as pd
df = pd.DataFrame([range(3), [0, np.NaN, 0], [0, 0, np.NaN], range(3), range(3)])
df
df.isnull().any(axis=1)
df[df.isnull().any(axis=1)]
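# Illustrative extension: the complement of the selection above -- keep only the
# rows with no missing values.
df[df.notnull().all(axis=1)]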
import pandas as pd
a = [['a', '1.2', '4.2'], ['b', '70', '0.03'], ['x', '5', '0']]
df = pd.DataFrame(a, columns=['one', 'two', 'three'])
df
df.dtypes
df[['two', 'three']] = df[['two', 'three']].astype(float)
df.dtypes
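# Illustrative extension: pd.to_numeric with errors='coerce' is a more forgiving
# converter -- strings that cannot be parsed become NaN instead of raising.
pd.to_numeric(df['one'], errors='coerce')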
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
df = pd.DataFrame({"var1":[1,2,3,4,5,6], "var2":[1,2,3,4,5,6]})
plt.plot(df["var1"], df["var2"])
plt.show()
plt.plot("var1", "var2", data=df)
plt.show()
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
df = pd.read_excel('/home/pybokeh/Downloads/temp.xlsx', sheet_name='Claims')  # older pandas called this argument sheetname
df.hist(column='DTF', by='GRADE_SHORT')
plt.show()
df.boxplot(column='DTF', by='GRADE_SHORT')
plt.show()
from bokeh.sampledata.autompg import autompg as df
from bokeh.charts import BoxPlot, output_notebook, show, hplot
df.head()
box = BoxPlot(df, values='mpg', label='cyl', color='cyl',
title="MPG Box Plot by Cylinder Count", plot_width=400, plot_height=400)
# Embed the chart within this notebook
output_notebook()
show(box)
import pandas as pd
from plotly.offline import download_plotlyjs, init_notebook_mode, iplot
import numpy as np
import cufflinks as cf
init_notebook_mode()
df = pd.DataFrame(np.random.rand(10, 5), columns=['A', 'B', 'C', 'D', 'E'])
iplot(df.iplot(asFigure=True, kind='box', title='Plotly Pandas', dimensions=(800,500)), show_link=False)
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = (pd.read_csv('/home/pybokeh/temp/vehicles.csv',
usecols=['year', 'make', 'model', 'comb08', 'fuelType', 'fuelType1',
'fuelType2', 'atvType', 'cylinders', 'VClass'])
.rename(columns={'comb08':'combmpg'})
.query("make in('Honda','Acura','Toyota','Lexus') "
"& fuelType1 in('Regular Gasoline','Premium Gasoline','Midgrade Gasoline') "
"& cylinders in(4, 6) "
"& VClass in('Compact Cars','Subcompact Cars','Midsize Cars','Large Cars','Sport Utility','Minivan') "
"& ~(fuelType2 in('E85','Electricity','Natural Gas','Propane'))")
['combmpg'].plot.hist(alpha=0.5, label='Honda Motor Co')
)
plt.title("Histogram of Combined Hwy+City MPG", weight="bold")
plt.show()
# load the rpy2 extension
%load_ext rpy2.ipython
import pandas as pd
pandas_df = pd.read_csv('http://research.stlouisfed.org/fred2/data/EXJPUS.txt', sep=' ', skiprows=28, engine='python',
parse_dates=[0], header=None, names=['Month','Value'])
pandas_df.dtypes
pandas_df.tail()
%R -i pandas_df
from rpy2.robjects import pandas2ri
pandas2ri.activate()
%%R -w 800 -h 500 -u px # instead of px, you can also choose 'in', 'cm', or 'mm'
df = as.data.frame(pandas_df) # ensure data frame that was passed in from Python will be R's data frame type
library(ggplot2) # plotting library
library(scales) # enables us to change the x label time scale
p = ggplot(pandas_df, aes(pandas_df$Month, pandas_df$Value))
p = p +
geom_line() +
scale_x_datetime(labels=date_format("%Y-%m"), breaks="5 years", minor_breaks="1 year") +
ggtitle("Yen to US Dollars") +
xlab("Time") +
ylab("Yen Value")
print(p)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Use XLA with tf.function
Step2: Then define some necessary constants and prepare the MNIST dataset.
Step3: Finally, define the model and the optimizer. The model uses a single dense layer.
Step4: Define the training function
Step5: Train and test the model
Step6: And, finally, check the accuracy
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# In TF 2.4 jit_compile is called experimental_compile
!pip install tf-nightly
import tensorflow as tf
tf.compat.v1.enable_eager_execution()
# Size of each input image, 28 x 28 pixels
IMAGE_SIZE = 28 * 28
# Number of distinct number labels, [0..9]
NUM_CLASSES = 10
# Number of examples in each training batch (step)
TRAIN_BATCH_SIZE = 100
# Number of training steps to run
TRAIN_STEPS = 1000
# Loads MNIST dataset.
train, test = tf.keras.datasets.mnist.load_data()
train_ds = tf.data.Dataset.from_tensor_slices(train).batch(TRAIN_BATCH_SIZE).repeat()
# Casting from raw data to the required datatypes.
def cast(images, labels):
images = tf.cast(
tf.reshape(images, [-1, IMAGE_SIZE]), tf.float32)
labels = tf.cast(labels, tf.int64)
return (images, labels)
layer = tf.keras.layers.Dense(NUM_CLASSES)
optimizer = tf.keras.optimizers.Adam()
@tf.function(jit_compile=True)
def train_mnist(images, labels):
images, labels = cast(images, labels)
with tf.GradientTape() as tape:
predicted_labels = layer(images)
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=predicted_labels, labels=labels
))
layer_variables = layer.trainable_variables
grads = tape.gradient(loss, layer_variables)
optimizer.apply_gradients(zip(grads, layer_variables))
for images, labels in train_ds:
if optimizer.iterations > TRAIN_STEPS:
break
train_mnist(images, labels)
images, labels = cast(test[0], test[1])
predicted_labels = layer(images)
correct_prediction = tf.equal(tf.argmax(predicted_labels, 1), labels)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print("Prediction accuracy after training: %s" % accuracy)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Usage examples
Step2: Task 1 (5 points)
Step3: Task 2 (10 points)
Step4: Also, print_line_sum_of_file can be written, for example, as follows.
Step5: Applying the above function to the example file written earlier gives the expected result.
Step6: Task 3 (5 points)
Step7: Repeating the same pattern, a linear_n function can be defined for any natural number n. That is,
Step8: After defining the linear_gen function as above, if linear_n is needed for a specific n, it can simply be defined and used as below.
Step9: Task 5 (10 points)
Step10: The list consisting only of the names that start with K is
Step11: Task 7 (15 points)
Step12: The approach without using the map function is as follows.
Step13: Task 9 (10 points)
Step14: A way to build a list of (city name, population) pairs is shown below.
Step15: Using city_pop, the population of, for example, city C can be looked up as follows.
|
<ASSISTANT_TASK:>
Python Code:
def interval_point(a, b, x):
if a < b:
return (b-a)*x + a
else:
return a - (a-b)*x
interval_point(0, 1, 0.5)
interval_point(3, 2, 0.2)
while True:
try:
x = float(raw_input("Please type a new number: "))
inverse = 1.0 / x
print("The inverse of {} is {}.".format(x, inverse))
break
except ValueError:
print("You should have given either an int or a float")
except ZeroDivisionError:
print("The input number is {} which cannot be inversed.".format(int(x)))
f = open("test.txt", 'w')
f.write("1,3,5,8\n0,4,7\n1,18")
f.close()
def print_line_sum_of_file(filename):
g = open(filename, 'r')  # use the filename argument rather than a hard-coded name
h = g.readlines()
g.close()
for line in h:
sum = 0
k = line.strip().split(',')
for i in range(len(k)):
if i < len(k) -1:
print(k[i] + " +"),
else:
print(k[i] + " ="),
sum = sum + int(k[i])
print(sum)
print_line_sum_of_file("test.txt")
def linear_1(a, b):
return a + b
def linear_2(a, b):
return a * 2 + b
def linear_gen(n, a, b):
return a * n + b
def linear_10(a, b):
return linear_gen(10, a, b)
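# Illustrative alternative (not part of the assignment): functools.partial can
# freeze the first argument of linear_gen to the same effect.
from functools import partial
linear_10_partial = partial(linear_gen, 10)
print(linear_10_partial(2, 3))   # 10*2 + 3 = 23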
names = ["Koh", "Kang", "Park", "Kwon", "Lee", "Yi", "Kim", "Jin"]
def StartsWithK(s):
return s[0] == 'K'
K_names = filter(StartsWithK, names)
K_names
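# Illustrative alternative: the same list via a list comprehension, without a
# helper function.
[name for name in names if name.startswith('K')]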
map(lambda x : x ** 2, range(5))
def list_square(num):
L = []
for i in range(num):
L.append(i ** 2)
return L
list_square(5)
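# Illustrative alternative: a list comprehension gives the same result in one line.
[i ** 2 for i in range(5)]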
cities = ['A', 'B', 'C', 'D', 'E']
populations = [20, 30, 140, 80, 25]
city_pop = []
for i in range(len(cities)):
city_pop.append((cities[i], populations[i]))
city_pop
city_pop[2][1]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: None
Step2: Control Flow
Step3: There is also a single line version
Step4: for loop
Step5: The Python for loop has different semantics
Step6: break and continue are also available.
Step7: pass statement
Step8: Exception handling
Step9: A slightly more useful example
Step10: Sometimes you want to suppress an exception but want some code to be executed regardless of whether the code in the try block succeeded or not.
Step11: Data structures
Step12: The same can be achieved with parentheses.
Step13: In general parentheses are not required, but in some situations they are. Hence, it's pretty common to use them regardless of the situation.
Step14: Individual elements are accessed using the [] notation as with most other sequence types in Python.
Step15: Tuples can be concatinated.
Step16: Unpacking tuples
Step17: Nested structures can also be unpacked.
Step18: tuple methods
Step19: List
Step20: The insert method is computationally expensive compared to append, since the elements after the insertion point must be shifted, so use it sparingly
Step21: The keyword in can be used to check for membership
Step22: List methods
Step23: Slicing
Step24: Notice
Step25: Shorthand versions can be used when the first index refers to the first element in the sequence or the last index refers to the last element.
Step26: This notation can also be used to assign to a sequence.
Step27: Negative indices can be used to do slicing relative to the end of the list rather than the beginning.
Step28: There is also an optional step parameter, so the full form looks like start:stop:step
Step29: Built-in sequence functions
Step30: sorted does what it sounds like
Step31: zip pairs up elements of sequences.
Step32: reversed reverses a sequence.
Step33: Rule of thumb
Step34: Dict
Step35: Accessing a dict's elements is done using the same syntax as is used for tuples and lists
Step36: To check whether a key is in a dict, the in syntax can be used.
Step37: Elements can be removed by using the del keyword or the pop method. The latter returns the removed value.
Step38: Adding objects is also very intuitive.
Step39: Multiple values can be added simultaneously by joining two dicts, using the update method.
Step40: The keys and values methods return all keys and values.
Step41: While the values of a dict can be any Python object, the keys need to be immutable like int, float, string or tuple. The elements of a tuple must also be immutable. This is called hashability.
Step42: To use a list as key, it can be converted to a tuple.
Step43: Set
Step44: Sets support mathematical set operations like union, intersection, differences and symmetric difference.
Step45: It can also be checked if a set is a super or sub set of another set
Step46: Two sets are equal if their contents are equal
Step47: Other useful methods are
Step48: List, Set, and Dict Comprehensions
Step49: Similarly dicts and sets can be created.
Step50: List comprehensions can also be nested.
|
<ASSISTANT_TASK:>
Python Code:
True and True
True or False
bool(1)
bool(0)
a = None
a is None
b = 'is something'
b is not None
a = 4
if a > 0:
print('more than 0')
if a <= 0:
print('0 or less')
elif a > 0 and a < 10:
print('more than zero, less than 10')
else:
print('something else')
<expression if True> if <condition> else <expression if False>
a = -4
print('foo') if a > 0 else print('bar')
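# Illustrative extension: the conditional expression is typically used to assign
# one of two values to a name.
label = 'positive' if a > 0 else 'non-positive'
label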
for(<init>; <condition>; <statement>):
# do something with the index
arr = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
for(i = 0; i < arr.length(); i++):
print(arr[i])
arr = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
for item in arr:
print(item)
for item in range(10):
print(item)
x = 256
total = 0
while x > 0:
if total > 500:
break
total = total + x # shorthand for this is total += x
x = x / 2
total
def f():
pass
f()
try:
raise Exception('Some exception')
except:
print('An exception occured')
try:
float('a string')
except ValueError as e:
print(e)
fh = open('material/foo.py')
try:
float(fh.read())
except:
print('an exception occured')
finally:
fh.close()
a = 1, 2, 3
a
a = (1, 2, 3)
a
a = tuple('foo bar')
a
a[0]
a[0] = 'f'
a = (1, 'a', [1, 2])
a[2].append(3)
a
(1, 2) + (2, 3)
(1, 2) * 3
tup = 1, 2, 3
tup
a, b, c = 1, 2, 3
print(a, b, c)
tup = (1, 2, (3, 4))
tup
a, b, (c, d) = tup
d
tup.<TAB>
a = [1, 2, 3, None, 'foo']
a
a.append(1.2e4)
a
a.insert(2, 'new text')
a
a.pop(2)
a
a.remove(1)
a
3 in a
1 in a
[4, 5, 6] + [1, 2, 3]
a = [4, 5, 6]
a.extend([1, 2, 3])
a
a.sort()
a
b = ['one', 'two', 'three', 'four']
b.sort(key=len)
b
a = list(range(10))
a
a[2:4]
a[:3]
a[5:]
a[:]
b = (2, 3, 4, 5, 6)
b[3:5]
a[:4] = [3, 2, 1, 0]
a
a[-1] # is the last element of the list
a[-4:-2]
from IPython.display import Image
Image('images/slicing.png')
a[::1]
a[::2]
a[::-2]
a[::-1]
a[1:6:2]
words = ['this', 'is', 'a', 'short', 'sentence']
for index, word in enumerate(words):
print('{} {}'.format(index, word))
sorted('this is a short sentence')
sorted([(5, 1), (17, 2), (22, 3)], key=lambda x: x[1])
a = [1, 2, 3, 4]
b = [5, 6, 7, 8]
for x, y in zip(a, b):
print(x, y)
c = [True, False]
for x, y, z in zip(a, b, c):
print(x, y, z)
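# Illustrative extension: zip is also a convenient way to build a dict from two
# parallel sequences.
dict(zip(a, b))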
heros = [('Peter', 'Parker'), ('Bruce', 'Wayne'), ('Clark', 'Kent')]
first, last = zip(*heros)
first, last
list(reversed(range(10)))
Image('images/youre-doing-it-wrong.jpg')
a = {}
b = {'a': 'foo', 'b': 'bar', 'c': 3}
c = dict(a = 'grok')
b, c
b['a']
c['a'] = 'bar'
c['a']
'a' in c
'c' in c
b
del b['c']
b
b.pop('b')
b
b['b'] = 2
b
b.update({'c': 3, 'd': 4})
b
b.keys()
b.values()
print(hash(412))
print(hash(412.431))
print(hash('this is a string'))
hash([1, 2, 3])
hash(tuple([1, 2, 3]))
set([1, 2, 3, 3, 4])
set((1, 2, 3, 3, 4))
set('Python is great!')
{1, 2, 3, 3, 4}
a = {1, 2, 3, 4, 5}
b = {3, 4, 5, 6, 7, 8}
a | b # union (or)
#a.union(b)
Image('images/Union.png')
a & b # intersection (and)
#a.intersection(b)
Image('images/Intersection.png')
a ^ b # symmetric difference (xor)
#a.symmetric_difference(b)
Image('images/symDifference.png')
a - b # difference
#a.difference(b)
Image('images/Difference.png')
a_set = {1, 2, 3, 4, 5}
{1, 2, 3}.issubset(a_set)
a_set.issuperset({1, 2, 3})
{1, 2, 3} == {3, 2, 1}
a.add(x) # Add element x to the set a
a.remove(x) # Remove element x from the set a
a.isdisjoint(b) # True if a and b have no elements in common.
[word.lower() for word in 'This is a simple sentence'.split()]
[word.lower() for word in 'This is a simple sentence'.split() if len(word) < 3]
results = []
for word in 'This is a simple sentence'.split():
if len(word) < 3:
results.append(word.lower())
results
{num: num + 1 for num in range(10)}
{num for num in range(10)}
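# Illustrative extension: comprehensions accept the same optional filter clause,
# e.g. squares of the even numbers only.
{num: num ** 2 for num in range(10) if num % 2 == 0}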
physicists = [
['Albert', 'Paul', 'Marie', 'Richard'],
['Einstein', 'Dirac', 'Curie', 'Feynman']
]
physicists
[[name.upper() for name in physicist] for physicist in physicists]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basics
Step2: In the words of little Carter's dad (well, this is actually the author of some Python book whose name I can't recall): it is as if a and b are both stuck onto [1,2,3]; after a.append(4), they are effectively stuck onto [1,2,3,4]. Since b is attached to a, b's value changes too and becomes [1,2,3,4].
Step3: Dynamic references, strong typing
Step4: Variables are really just names for objects in a particular namespace. The type information of an object is stored in the object itself. If you conclude from this that Python is not a typed language, you would be wrong.
Step5: From the above we can see that the types must match for arithmetic to work. We also saw that type can be used to inspect an object's type; conversely, in Python the isinstance function checks whether an object is an instance of a given data type.
Step6: isinstance can accept a tuple of types. To check whether an object's type is among the types listed in the tuple, use the approach shown in the following example:
Step7: Attributes and methods
Step8: "Duck" typing
Step9: Magic methods. Of course, there is a somewhat better way to verify this, namely using the iter function:
Step10: This feature is often used when writing functions that need to handle inputs of several types. Another common use case is writing functions that accept any sequence (list, tuple, ndarray) or iterator. For example, we can first check whether the object is a list (or NumPy array) and, if it is not, convert it into one:
Step11: Importing with import
Step12: If we want to import the variables and functions defined in simple02.py, we can create another file in the same directory named simple03.py
Step13: Run it: python simple03.py
Step14: Binary operators and comparison operators
Step15: Strict versus lazy evaluation
Step16: In Python, as soon as these statements are evaluated, the corresponding computation happens immediately. In the example above, the value of d is set to 30 right away. In some other languages (e.g. Haskell), the value of d would not be computed until it is actually used.
Step17: Some other objects, such as tuples and strings, are immutable.
Step18: Standard types
Step19: In Python 2.x, however, this floating-point result is not produced by default. To obtain it, a Python 2.x program needs to include
Step20: Without that statement, the conversion can also be done explicitly, e.g.
Step21: 4) To get C-style integer division, i.e. discarding the fractional part when the division is not exact, use the floor-division operator //
Step22: 5) In Python, the imaginary part of a complex number is written with j, for example
Step23: Strings
Step24: As mentioned before, strings in Python cannot be changed, i.e. they are immutable; to modify a string you have to create a new one.
Step25: Since a string is really a sequence of characters, it can be processed like other sequence types (lists, tuples, and so on)
Step26: A string containing many backslashes can be annoying, because the backslash usually escapes the following character. In Python, to make a backslash stand for itself instead of escaping, prefix the string with the letter r. This is quite useful when adding Greek-letter annotations to matplotlib plots.
Step27: Here we focus on some of the newer string-formatting facilities that came with Python 3. I already use them routinely, but they deserve a short discussion here because they come up frequently in data analysis.
Step28: Above, %s formats the argument as a string and %d formats it as an integer; these placeholders must be filled by actual arguments, so in the second line of the code above we pass a tuple of values into the placeholder positions.
Step29: Booleans
Step30: ** Almost all built-in Python types, as well as any class that defines the __nonzero__ magic method, can be interpreted as True or False in an if statement
Step31: To find out which boolean value an object will be coerced to, use the bool function.
Step32: Type conversion
Step33: None
Step34: None is also a common default value for optional function arguments (although I know it can be used this way, I cannot immediately think of a scenario where I would need it)
Step35: Note that None is not a reserved keyword; it is just an instance of NoneType
Step36: Given a datetime instance, you can extract the corresponding date and time objects by calling its date and time methods
Step37: The strftime method formats a datetime as a string:
Step38: Control flow
Step39: if can be combined with multiple elif blocks and a catch-all else block (taken if all conditions are False), for example
Step40: for loops
Step41: As we will see later, if the elements of a collection or iterator are sequences themselves, such as tuples or lists, they can conveniently be unpacked into multiple variables in the for statement.
Step42: while loops
Step43: pass
Step44: While developing a new feature, pass is often used as a placeholder in the code:
Step45: Exception handling
|
<ASSISTANT_TASK:>
Python Code:
%run appendix-A/simple01.py
a = [1, 2, 3]
b = a
a.append(4)
print(a)
print(b)
def append_element(fanglist, element):
fanglist.append(element)
data = [1,2,3]
append_element(data, 5)
print(data)
a = 5
type(a)
a = 2.5
type(a)
a = 'hello'
b = 5
print('a is %s, and b is %s' % (type(a), type(b)))
c = a + b
a = 5
isinstance(a, int)
a = 2.2
isinstance(a, int)
a = 5
b = 2.2
isinstance(a,(int,float))
a = 5
b = 2.2
isinstance(b,(int,float))
a = 'foo' #定义了对象
a.count #访问对象的方法
#访问对象的方法还可以使用 getattr 函数 (getattr 是 getattribute 的缩写)
getattr(a, 'center')
# the __iter__ magic method
def isiterable(obj):
try:
iter(obj)
return True
except TypeError: #不可迭代
return False
#对于字符串以及大部分Python集合类型,该函数返回True:
print(isiterable([1,2]))
print(isiterable((1,2)))
print(isiterable('1'))
print(isiterable(1))
x = (1,2) #定义了一个tuple
print(type(x))
print(isinstance(x,list) )#因为x是tuple,不是list,所以返回 False
print(isiterable(x))
if not isinstance(x,list) and isiterable(x):
#if not isiterable(x):
x = list(x)
print(x)
#set up a module
PI = 3.14159
def f(x):
return (x + 2 )
def g(a, b):
return (a + b)
import simple02 as s2
result = s2.f(5)
print(result)
pi = s2.PI
print(pi)
sumvalue = s2.g(2,2)
print(sumvalue)
#Fang code
import sys
sys.path.append("./appendix-A")  # append the directory containing simple02.py, not the file itself
import simple02
result=simple02.f(5)
print(result)
a + b # sum
a - b
a*b
a/b
a//b # divide a by b and round down, discarding the fractional part
a**b # a to the power b
a & b # True if both a and b are True; for integers, a bitwise AND https://zh.wikipedia.org/wiki/%E4%BD%8D%E6%93%8D%E4%BD%9C
a|b # True if a or b is True; for integers, a bitwise OR
a^b # for booleans, True if a or b (but not both) is True; for integers, a bitwise XOR
a==b # True if a equals b
a !=b # True if a and b are not equal
a <= b, a<b # True if a is less than or equal to (or less than) b
a > b, a >= b # True if a is greater than (or greater than or equal to) b
a is b # True if a and b reference the same Python object
a is not b # True if a and b reference different Python objects
# examples using these operators
a = (1,2,3)
b = a
c = 1.2
d = 2.0
e = 2
f = 3
sumtuple = a + b
print (sumtuple)
suminteger = e + f
print(suminteger)
print(c/d)
print(c//d)
print(e/f)
print(e//f)
print(e & f)
print (True & True)
print(e is f)
print (e is not f)
print (True^True)
print (True^False)
if a == b:
print('a=b')
a=b=c=5
d=a+b*c
a = [1,2,3,4]
a[0]=2
print(a)
a_tuple = (2,3,4)
a_tuple[1]=2
3/2
from __future__ import division
3/float(2)
3//2
a=1+2j
b=3-3j
c=a-b
print(c)
a='Hi, this is Yue-Wen FANG from NYU SHANGHAI'
b="He is a visiting researcher!"
c='''
##########################
##########################
##########################
'''
print(a)
print(b)
print(c)
a="He is a visiting researcher!"
b=a.replace('is a visiting researcher','graduated from East China Normal University') #创建了一个新的字符串,b引用了这个新的字符串
print(a)
print(b)
# Python allows converting other types to strings with the str function, for example:
a=5
str(a)
print(a)
a="ALOHA!"
print(a)
list(a)
print(a[:3])
s = 'This\\is\\a\\flower'
s1 = r'This\\is\\a\\flower'
print(s,'\t',s1)
template = '%s was born in %d'
template % ('yue-wen',1990)
# 再看一个例子
a=2e6
b=2.9
print('a is %d and b is %f' %(a, b))
True and False
True or False
a = [1,2,3]
b = a[:]
c = []
if a:
print('a is %s' % a)
if not b:
print('b is %s' % b)
if not c: # 因为c是空的,所以会当作False处理,not False则为True
print('c is none')
a = [1,2,3]
b = a[:]
c = []
print('The bool values of a, b, and c are %s, %s, and %s, respectively' % (bool(a), bool(b), bool(c)))
bool([]), bool([1]), bool('hello'), bool()
import math
s = math.pi
a = str(s)
print(type(s))
print(type(a))
b = float(a)
int(float(a))
print(type(a))
a = None
a is None
b = 5
print(b is not None)
if b is None:
print(b)
def add_and_maybe_multipy(a, b, c=None):
if a is not None and b is not None and c is None:
return(a+b)
else:
return(a*c)
x = add_and_maybe_multipy(1, 1, None)
y = add_and_maybe_multipy(1, 2, 3)
print('x = %d and y = %d' % (x, y))
from datetime import datetime, date, time
dt = datetime(2017, 10, 28, 19, 20, 22)
day = dt.day
print(day)
min = dt.minute
print(min)
dt.date()
dt.time()
from datetime import datetime, date, time
dt = datetime(2017, 9, 22, 12, 30, 20)
print(dt.day)
datetime.strptime('20170923', '%Y%m%d')
#完整的格式化定义请参考表书本10-2
#Page 417
x=-1
if x<0:
print(repr("hello"))
print("%s" % ("hello"))
a = [2, 3, 4, 5, 6, 8, 10]
if type(a)==list:
print(a)
x = 10
if x<0:
print('x is negative')
elif x == 0:
print('x = 0')
elif x > 0:
print('x > 0')
else:
print('x is a complex')
sequence = [1, 2, None, 4, None, 5]
total = 0
for value in sequence:
if value is None:
continue
total += value
print('total is %f in this cyle' % (total))
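# Illustrative alternative: the same sum as a one-liner with a generator expression.
sum(value for value in sequence if value is not None)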
#使用的语法是
#for a,b,c in iterator:
# do something
#我这里只写一个简单的例子,但是这个例子可能还不是上述语法最正统的例子。
# date: 20170923
sequence = [(1,2,3),(4,5,6)]
total = 0
for (i,j,k) in sequence:
print((i,j,k))
coor = (i,j,k)
for i in coor:
i=i+1
print(i, end=' ')
print('\n')
x = 256
total = 0
while x > 0:
if total > 500:
break
total += x
print('total = %d' %total)
x = x//2
print('x = %d' %x)
if x < 0:
print 'negative'
elif x==0:
#TO DO: here you can add codes in future
pass
else:
print 'positive'
def f(x,y,z):
#TODO: realize a function here
pass
float('1.2345')
float('something')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A list is a contiguous chunk of memory!
Step2: Methods of the list itself
Step3: Strings, lists, slices
Step4: Splitting and joining
Step5: Let's try it ourselves
Step6: Loops
Step7: A loop with a stop condition
Step8: Exercises
Step9: Set
Step10: Set as a mathematical set
Step11: Dictionary
Step12: Templating and formatting
Step13: A couple more examples
|
<ASSISTANT_TASK:>
Python Code:
s = 'abc' # строка
s = "abc" # тоже строка
n = 2 # int
n = 3.5 # float
l = [] # пустой список
l = [1, 2, 3] # тоже список, но с числами
l = ['a', 1, None] # разные типы в одном списке
t = (1,) # кортеж с одним элементом (не забудьте про запятую!)
d = {} # пустой словарь
d = {'a': 1, 'b': 2} # словарь с парами ключ-значение
s = set([1, 2]) # множество, еще можно {1, 2}
l = [3,2,1]
len(l) # посчитать длину списка или строки
sum(l) # просуммировать элементы списка
sorted(l) # вернуть отсортированный список, reverse=True для сортировки в обратном порядке
max(l) # максимальный элемент; min(l) # минимальный
l.append(9) # добавить элемент в конец
l.extend([3, 4, 5]) # расширить один список другим
l.insert(3, 4) # вставить элемент 4 в позицию 3
l.remove(3) # удалить первый элемент со значением 3
l.count(3) # посчитать число элементов со значением 3
# s - строка или список
s = "string"
s[0] # нулевой элемент (индексация с нуля)
s[2:4] # элементы 2 и 3
s[1:8:2] # элементы 1, 3, 5, 7 (шаг 2)
s[-1] # обратный индекс - последний элемент
s[::2] # все элементы с шагом 2
s[::-1] # развернутая строка/список
s.split("a") # разделяем строку по "a"
s.split("\t") # разделяем строку по табуляции
"\t".join(list_of_strings) # объединяем
s = "У Мэри есть овечка"
print(s[0] == ...)
print(s[7:10] == ...)
print(s[1:8:2] == ...)
print(s[-3] == ...)
print(s[::4] == ...)
print(s[::-1] == ...)
lst = [3,2,1]
for i in lst: # перебираем элементы в контейнере
print(i)
for i, item in enumerate(lst): # перебираем вместе с индексами
print(str(i) + ". " + str(item))
for j in range(1, 10, 2): # что напечатает?
print(j)
a = 0
while a < 100: # проверка условия
a += 1 # инкремент (увеличение значения на 1)
print(a)
if a == 20:
break # выход из цикла
# сгенерировать список можно, например, так:
import random
a = [random.randint(-10, 10) for _ in range(10)]
b = [random.randint(-10, 10) for _ in range(10)]
mx = a[0]
mn = b[0]
s = {1, 2, 3}
s.add(6) # добавление элемента
s.remove(2) # удаление элемента
3 in s # проверка наличия элемента
4 not in s # проверка отсутствия элемента
s1, s2 = {1, 2, 3}, {2, 3}
s2.issubset(s1) # является ли s2 подсетом s1?
s1.issuperset(s2) # является ли s1 суперсетом над s2?
s1.union(s2) # объединить два множества в одно
s1.intersection(s2) # пересечение множеств
s1.difference(s2) # разность множеств
# сокращенные версии (не рекомендуются к использованию)
s2 <= s1 # можно также s1<s2
s1 >= s2 # можно также s1>s2
s1 | s2
s1 & s2
s1 - s2
d = {"foo": 1, 42: "bar"}
d["a"] = 125 # добавить значение по ключу в словарь
del d["a"] # удалить значение по ключу из словаря
d.keys() # список ключей (в Python 3 - итератор)
d.values() # список значений (в Python 3 - итератор)
d.get(12, "freedom") # значение по умолчанию
d.update({42: "zord", "obama": "trump"}) # обновить значения из другого словаря
s = "ТеКсТ ДоМиКом"
print(s.lower()) # нижний регистр
print(s.upper()) # верхний регистр
print("У Пети было {0} яблок".format(15))
print("Что лучше - {0} ящиков пива или {1} ящиков водки?".format(30, 20))
print("У Пети было {how_much} яблок".format(how_much="дофига"))
"""Взвесьте мне {0:.5f}
килограмм хурмы""".format(21.35236172)
# f-строки (Python 3.6+):
a = 2
b = 8
f"a + b = {a + b}"
# старый формат: printf-нотация:
"Жили в квартире %d веселых чижа" % 44
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note that there are two nested MeterGroups
Step2: Putting these meters into a MeterGroup allows us to easily sum together the power demand recorded by both meters to get the total power demand for the entire appliance (but it's also very easy to see the individual meter power demand too).
Step3: We can easily get the power data for both mains meters summed together
Step4: Stats for MeterGroups
Step5: Note that NILMTK has raised a warning that Mains uses a different type of power measurement than all the submeters, so it's not an entirely accurate comparison. Which raises the question
Step6: Total Energy
Step7: Energy per submeter
Step8: column headings are the ElecMeter instance numbers.
Step9: Draw wiring diagram
Step10: It's not very pretty but it shows that meters (1,2) (the site meters) are upstream of all other meters.
Step11: Stats and info for individual meters
Step12: Get upstream meter
Step13: Metadata about the class of meter
Step14: Dominant appliance
Step15: Total energy
Step16: Get good sections
Step17: We can automatically identify the 'good sections' (i.e. the sections where every pair of consecutive samples is less than max_sample_period specified in the dataset metadata)
Step18: The blue chunks show where the data is good. The white gap is the large gap seen in the raw power data. There are lots of smaller gaps that we cannot see at this zoom level.
Step19: Dropout rate
Step20: Note that the dropout rate has gone down (which is good!) now that we are ignoring the gaps. This value is probably more representative of the performance of the wireless system.
Step21: Or all appliances in the 'heating' category
Step22: Or all appliances in building 1 with a single-phase induction motor(!)
Step23: (NILMTK imports the 'common metadata' from the NILM Metadata project, which includes a wide range of different category taxonomies)
Step24: Select a group of meters from properties of the meters (not the appliances)
Step25: Select a single meter from a MeterGroup
Step26: Appliances are uniquely identified within a building by a type (fridge, kettle, television, etc.) and an instance number. If we do not specify an instance number then ElecMeter retrieves instance 1 (instance numbering starts from 1). If you want a different instance then just do this
Step27: To uniquely identify an appliance in nilmtk.global_meter_group then we must specify the dataset name, building instance number, appliance type and appliance instance in a dict
Step28: Search for a meter using details of the ElecMeter
Step29: Instance numbering
Step30: Select nested MeterGroup
Step31: We can also specify the mains by asking for meter instance 0
Step32: which is equivalent to elec.mains()
|
<ASSISTANT_TASK:>
Python Code:
from matplotlib import rcParams
import matplotlib.pyplot as plt
%matplotlib inline
rcParams['figure.figsize'] = (13, 6)
plt.style.use('ggplot')
from nilmtk import DataSet
redd = DataSet('/data/REDD/redd.h5')
elec = redd.buildings[1].elec
elec
elec.nested_metergroups()
elec.mains()
elec.mains().power_series_all_data().head()
elec.submeters()
elec.proportion_of_energy_submetered()
mains = elec.mains()
mains.available_power_ac_types()
elec.submeters().available_power_ac_types()
elec.load()
elec.mains().total_energy() # returns kWh
energy_per_meter = elec.submeters().energy_per_meter() # kWh, again
energy_per_meter
# energy_per_meter is a DataFrame where each row is a
# power type ('active', 'reactive' or 'apparent').
# All appliance meters in REDD are record 'active' so just select
# the 'active' row:
energy_per_meter = energy_per_meter.loc['active']
more_than_20 = energy_per_meter[energy_per_meter > 20]
more_than_20
instances = more_than_20.index
instances
elec.draw_wiring_graph()
elec.meters_directly_downstream_of_mains()
fridge_meter = elec['fridge']
fridge_meter.upstream_meter() # happens to be the mains meter group!
fridge_meter.device
fridge_meter.dominant_appliance()
fridge_meter.total_energy() # kWh
fridge_meter.plot()
good_sections = fridge_meter.good_sections(full_results=True)
# specifying full_results=False would give us a simple list of
# TimeFrames. But we want the full GoodSectionsResults object so we can
# plot the good sections...
good_sections.plot()
good_sections.combined()
fridge_meter.dropout_rate()
import nilmtk
nilmtk.global_meter_group.select_using_appliances(type='washer dryer')
nilmtk.global_meter_group.select_using_appliances(category='heating')
nilmtk.global_meter_group.select_using_appliances(building=1, category='single-phase induction motor')
nilmtk.global_meter_group.select_using_appliances(building=2, category='laundry appliances')
elec.select(device_model='REDD_whole_house')
elec.select(sample_period=3)
elec['fridge']
elec['light', 2]
import nilmtk
nilmtk.global_meter_group[{'dataset': 'REDD', 'building': 1, 'type': 'fridge', 'instance': 1}]
elec[1]
from nilmtk.elecmeter import ElecMeterID
# ElecMeterID is a namedtuple for uniquely identifying each ElecMeter
nilmtk.global_meter_group[ElecMeterID(instance=8, building=1, dataset='REDD')]
elec[[ElecMeterID(instance=3, building=1, dataset='REDD'),
ElecMeterID(instance=4, building=1, dataset='REDD')]]
elec[ElecMeterID(instance=(3,4), building=1, dataset='REDD')]
elec[ElecMeterID(instance=0, building=1, dataset='REDD')]
elec.mains() == elec[ElecMeterID(instance=0, building=1, dataset='REDD')]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Write your equation for magnetic energy in words (and Latex) here.
Step2: Problem 1b
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt # import the plotting module
%matplotlib inline
# This allows us to plot in the notebook environment
thetas=np.arange(0,180,1) # makes an array of thetas from 0 to 180 at 1 degree increments.
Es=np.cos(np.radians(thetas)) # replace this with YOUR equation - this is just and EXAMPLE.
plt.plot(thetas,Es) # make a nice plot
plt.title("Write your title here")
plt.xlabel("Write your X-axis Label here")
plt.ylabel("Write your Y-axis Label here")
# figure out thermal energy here and print it out
YOUR_NUMBER=1 # obviously this is not the real number. what is it?
print 'thermal energy is: ',YOUR_NUMBER
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Iris Flower Dataset
Step2: Imbalanced Iris Flower Classes
Step3: Standardize Features
Step4: Train Support Vector Classifier With Weighted Classes
|
<ASSISTANT_TASK:>
Python Code:
# Load libraries
from sklearn.svm import SVC
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
import numpy as np
#Load data with only two classes
iris = datasets.load_iris()
X = iris.data[:100,:]
y = iris.target[:100]
# Make class highly imbalanced by removing first 40 observations
X = X[40:,:]
y = y[40:]
# Create target vector indicating if class 0, otherwise 1
y = np.where((y == 0), 0, 1)
# Standardize features
scaler = StandardScaler()
X_std = scaler.fit_transform(X)
# Create support vector classifier
svc = SVC(kernel='linear', class_weight='balanced', C=1.0, random_state=0)
# Train classifier
model = svc.fit(X_std, y)
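# Illustrative extension (not in the original): predict the class of a new,
# already-standardized observation (values here are made up).
new_observation = [[-1.0, 0.5, -1.0, -0.5]]
model.predict(new_observation)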
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The second table is the sample_files table, which provides, among other information, the GitHub repo that each file belongs to (in the repo_name column). The first several rows of this table are printed below.
Step3: Next, we write a query that uses information in both tables to determine how many files are released in each license.
Step4: It's a big query, and so we'll investigate each piece separately.
|
<ASSISTANT_TASK:>
Python Code:
#$HIDE_INPUT$
from google.cloud import bigquery
# Create a "Client" object
client = bigquery.Client()
# Construct a reference to the "github_repos" dataset
dataset_ref = client.dataset("github_repos", project="bigquery-public-data")
# API request - fetch the dataset
dataset = client.get_dataset(dataset_ref)
# Construct a reference to the "licenses" table
licenses_ref = dataset_ref.table("licenses")
# API request - fetch the table
licenses_table = client.get_table(licenses_ref)
# Preview the first five lines of the "licenses" table
client.list_rows(licenses_table, max_results=5).to_dataframe()
#$HIDE_INPUT$
# Construct a reference to the "sample_files" table
files_ref = dataset_ref.table("sample_files")
# API request - fetch the table
files_table = client.get_table(files_ref)
# Preview the first five lines of the "sample_files" table
client.list_rows(files_table, max_results=5).to_dataframe()
# Query to determine the number of files per license, sorted by number of files
query = """
SELECT L.license, COUNT(1) AS number_of_files
FROM `bigquery-public-data.github_repos.sample_files` AS sf
INNER JOIN `bigquery-public-data.github_repos.licenses` AS L
    ON sf.repo_name = L.repo_name
GROUP BY L.license
ORDER BY number_of_files DESC
"""
# Set up the query (cancel the query if it would use too much of
# your quota, with the limit set to 10 GB)
safe_config = bigquery.QueryJobConfig(maximum_bytes_billed=10**10)
query_job = client.query(query, job_config=safe_config)
# API request - run the query, and convert the results to a pandas DataFrame
file_count_by_license = query_job.to_dataframe()
# Print the DataFrame
file_count_by_license
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Define telescope properties for ray tracer and construct array
Step2: 3. Construct PE imager and ray processor
Step3: 4. Construct ray generator for each off-axis angle and pass it to ray processor
Step4: 5. Plot the results
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import calin.math.geometry
import calin.math.hex_array
import calin.simulation.vs_optics
import calin.simulation.ray_processor
def dms(d,m,s):
# Note this function fails for "negative" d=0 (e.g. -00:30:00)
sign = 1
if(d<0):
sign = -1
d = abs(d)
return sign * (d + m/60.0 + s/3600.0)
mst = calin.ix.simulation.vs_optics.IsotropicDCArrayParameters()
mst.mutable_array_origin().set_latitude(dms(28, 45, 47.36))
mst.mutable_array_origin().set_longitude(dms(-17, 53, 23.93))
mst.mutable_array_origin().set_elevation(2147 * 100.0)
scope = mst.mutable_prescribed_array_layout().add_scope_positions();
scope.set_z(mst.array_origin().elevation())
mst.mutable_reflector_frame().set_optic_axis_rotation(-90);
dc = mst.mutable_reflector()
dc.set_curvature_radius(1920)
dc.set_aperture(1230)
dc.set_facet_num_hex_rings(5)
dc.mutable_psf_align().set_object_plane(inf) # 10 * 1e5);
dc.set_alignment_image_plane(1600)
dc.set_facet_spacing(122)
dc.set_facet_size(120)
dc.set_facet_focal_length(1607)
dc.set_facet_focal_length_dispersion(1)
dc.set_facet_spot_size_probability(0.8)
dc.set_facet_spot_size(0.5 * 2.8) # Spot size of 28mm at 2F
dc.set_facet_spot_size_dispersion(0.5 * 0.02)
dc.set_facet_labeling_parity(True)
dc.set_weathering_factor(1.0)
for id in [1,62,67,72,77,82,87]: dc.add_facet_missing_list(id-1)
mst.mutable_focal_plane().set_camera_diameter(235)
mst.mutable_focal_plane().mutable_translation().set_y(1/(1.0/dc.alignment_image_plane()-1/(10 * 1e5)))
mst.mutable_pixel().set_spacing(5)
mst.mutable_pixel().set_cone_inner_diameter(5)
mst.mutable_pixel().set_cone_survival_prob(1)
mst.mutable_pixel().set_hex_module_size(1)
mst.mutable_pixel().set_module_num_hex_rings(9)
u1,v1 = calin.math.hex_array.cluster_hexid_to_center_uv(1,1)
x1,y1 = calin.math.hex_array.uv_to_xy(u1,v1)
rot = arctan2(-y1,x1)/pi*180 - 30
mst.mutable_pixel().set_grid_rotation(rot)
obs_camera_box = mst.add_obscurations()
obs_camera_box.aligned_box().max_corner().set_x(150)
obs_camera_box.aligned_box().max_corner().set_y(mst.focal_plane().translation().y()+150)
obs_camera_box.aligned_box().max_corner().set_z(150)
obs_camera_box.aligned_box().min_corner().set_x(-150)
obs_camera_box.aligned_box().min_corner().set_y(mst.focal_plane().translation().y())
obs_camera_box.aligned_box().min_corner().set_z(-150)
obs_camera_box.aligned_box().set_incoming_only(True)
rng = calin.math.rng.RNG()
cta = calin.simulation.vs_optics.VSOArray()
cta.generateFromArrayParameters(mst, rng)
cta.pointTelescopesAzEl(0,90.0/180.0*pi);
scope = cta.telescope(0)
pe_imager = calin.simulation.ray_processor.TelescopePSFCalcPEProcessor()
processor = calin.simulation.ray_processor.VSORayProcessor(cta, pe_imager, rng)
dx = 1 # cm
dtheta = 0.1 / 180*pi # rad
fov_2 = 4.5 / 180.0*pi # rad
R = scope.reflectorIP()/2.0
x00 = scope.reflectorIPCenter()
scope.reflectorToGlobal_pos(x00)
all_theta = []
mean_x = []
mean_y = []
psf_x = []
psf_y = []
for theta in frange(0,fov_2,dtheta):
pos_gen = calin.math.ray_generator.HexGridPlanePositionGenerator(R, dx)
dir_gen = calin.math.ray_generator.SingleDirectionGenerator()
ray_gen = calin.math.ray_generator.PositionNestedByDirectionRayGenerator(x00,
calin.math.geometry.rotation_theta_phi(pi-theta,0), asarray([0,0,-100e5]), dir_gen, pos_gen)
pe_imager.clear()
processor.process_all_from_ray_generator(ray_gen)
all_theta.append(theta/pi*180)
mean_x.append(pe_imager.mom().mean_x())
mean_y.append(pe_imager.mom().mean_y())
psf_x.append(sqrt(pe_imager.mom().var_x()))
psf_y.append(sqrt(pe_imager.mom().var_y()))
plot(all_theta, asarray(psf_x)/scope.focalPlanePosition()[1]/pi*180, label='Tangential')
hold('on')
plot(all_theta, asarray(psf_y)/scope.focalPlanePosition()[1]/pi*180, label='Sagittal')
legend(loc=2)
xlabel('Field angle [deg]')
ylabel('PSF RMS [deg]')
grid()
a=list(axis())
a[2] = 0
axis(a);
plot(all_theta, asarray(mean_x)/scope.focalPlanePosition()[1]/pi*180, label='Tangential')
hold('on')
plot(all_theta, asarray(mean_y)/scope.focalPlanePosition()[1]/pi*180, label='Sagittal')
legend(loc=2)
xlabel('Field angle [deg]')
ylabel('Image centroid location [deg]')
grid()
a=list(axis())
a[2] = 0
axis(a);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Time to visualize all of this. We first create an object of the Panel class named panel1, with the end points $a=(0,-2)$ and $b=(0,2)$ and a volume flow rate per unit length and depth of $2~\text{m}/\text{s}$.
Step2: Our source panel already does (almost) what it is supposed to. It just has one remaining cosmetic flaw
Step3: Next we create the 4 panels
Step4: and then set up the system of equations
Step5: To solve the system of equations we use a solver that ships with Python (or rather NumPy)
Step6: Now that every panel has been assigned the correct strength $\lambda_i$, we can compute the velocity field
Step7: With the source-panel method we can compute the pressure distribution on arbitrary bodies in a flow, as long as the flow is steady and inviscid and the body produces no lift.
|
<ASSISTANT_TASK:>
Python Code:
import math
import numpy as np
from scipy import integrate
import matplotlib.pyplot as plt
%matplotlib inline
class Panel:
# Initialisiert ein Objekt der Klasse Panel
def __init__(self, ax, ay, bx, by, lamb=0):
# Panel-Stärke lambda
self.lamb = lamb
# Koordinaten der beiden Endpunkte des Panels als
# Attribute des Objekts übernehmen
self.ax = ax
self.ay = ay
self.bx = bx
self.by = by
# Koordinaten des Paneel-Mittelpunkts berechnen
self.mx = 0.5*(ax+bx)
self.my = 0.5*(ay+by)
# Länge des Paneels berechnen
self.laenge = math.sqrt((bx-ax)**2+(by-ay)**2)
# Winkel zwischen x-Achse und Panel-Normalen
if bx-ax <= 0:
self.beta = math.acos((by-ay)/self.laenge)
else:
self.beta = math.pi + math.acos(-(by-ay)/self.laenge)
def x(self, s):
return self.ax + s*(self.bx-self.ax)/self.laenge # Strahlensatz
def y(self, s):
return self.ay + s*(self.by-self.ay)/self.laenge # Strahlensatz
def phi(self, x,y):
def integrand(s, x, y, xs, ys):
return np.log(np.sqrt((x-xs(s))**2+(y-ys(s))**2))
def integral(s_min, s_max, x, y, xs, ys):
return integrate.quad(integrand, s_min, s_max,
args=(x,y,self.x,self.y))[0]
vec_integral = np.vectorize(integral)
return (self.lamb / (2 * math.pi)
* vec_integral(0, self.laenge, x, y, self.x, self.y))
def psi(self, x,y):
def integrand(s, x, y, xs, ys):
return np.arctan2(y-ys(s),x-xs(s))
def integral(s_min, s_max, x, y, xs, ys):
return integrate.quad(integrand, s_min, s_max,
args=(x,y,self.x,self.y))[0]
vec_integral = np.vectorize(integral)
return (self.lamb / (2 * math.pi)
* vec_integral(0, self.laenge, x, y, self.x, self.y))
def vel(self, x,y):
def integrand_u(s, x, y, xs, ys):
return (x-xs(s))/((x-xs(s))**2+(y-ys(s))**2)
def integrand_v(s, x, y, xs, ys):
return (y-ys(s))/((x-xs(s))**2+(y-ys(s))**2)
def integral_u(s_min, s_max, x, y, xs, ys):
return integrate.quad(integrand_u, s_min, s_max,
args=(x,y,self.x,self.y))[0]
def integral_v(s_min, s_max, x, y, xs, ys):
return integrate.quad(integrand_v, s_min, s_max,
args=(x,y,self.x,self.y))[0]
vec_integral_u = np.vectorize(integral_u)
vec_integral_v = np.vectorize(integral_v)
u = (self.lamb / (2 * math.pi)
* vec_integral_u(0, self.laenge, x, y, self.x, self.y))
v = (self.lamb / (2 * math.pi)
* vec_integral_v(0, self.laenge, x, y, self.x, self.y))
return u,v
lamb = 1.0
panel1 = Panel(ax=0,ay=-2,bx=0,by=2,lamb=lamb) # erzeuge neues Panel
nx = 400 # Anzahl der Punkte in x-Richtung
ny = 200 # Anzahl der Punkte in y-Richtung
x = np.linspace(-10, 10, nx, dtype=float) # 1D-Array mit x-Koordinaten
y = np.linspace(-5, 5, ny, dtype=float) # 1D-Array mit y-Koordinaten
X, Y = np.meshgrid(x, y ) # erzeugt das Gitter mit nx * ny Punkten
u,v = panel1.vel(X, Y)
u += 0.5
# Neuen Plot einrichten
plt.figure(figsize=(10, 5))
plt.xlabel('x')
plt.ylabel('y')
plt.xlim(-10,10)
plt.ylim(-5,5)
# Stromlinien mit Matplotlib-Funktion darstellen
plt.streamplot(X, Y, u, v,
density=2, linewidth=1, arrowsize=2, arrowstyle='->')
plt.plot([panel1.ax, panel1.bx],[panel1.ay, panel1.by],
color='red', linewidth=3);
def norm_int(panel_i, panel_j):
# Berechnet das Integral in der Koeffizientenmatrix
def integrand(s):
delta_x = panel_j.mx - panel_i.x(s)
delta_y = panel_j.my - panel_i.y(s)
return ((delta_x*math.cos(panel_j.beta)
+ delta_y*math.sin(panel_j.beta))
/ (delta_x*delta_x + delta_y*delta_y))
return integrate.quad(integrand, 0.0, panel_i.laenge)[0]
panels = [] # Liste der Paneele
#panels.append(Panel(ax=-5.0,ay= 0.0, bx= 0.0, by= 2.0))
#panels.append(Panel(ax= 0.0,ay= 2.0, bx= 5.0, by= 0.0))
#panels.append(Panel(ax= 5.0,ay= 0.0, bx= 0.0, by=-2.0))
#panels.append(Panel(ax= 0.0,ay=-2.0, bx=-5.0, by= 0.0))
panels.append(Panel(bx=-5.0,by= 0.0, ax= 0.0, ay= 2.0))
panels.append(Panel(bx= 0.0,by= 2.0, ax= 15, ay= 0.0))
panels.append(Panel(bx= 15,by= 0.0, ax= 0.0, ay=-2.0))
panels.append(Panel(bx= 0.0,by=-2.0, ax=-5.0, ay= 0.0))
N = len(panels)
u_oo = 1.0
v_oo = 0.0
# die Koeffizientenmatrix A_ij
A_ij = np.zeros((N,N), dtype=float)
for i, panel_i in enumerate(panels):
for j, panel_j in enumerate(panels):
if i == j:
A_ij[i,j] = 0.5
else:
A_ij[i,j] = norm_int(panel_i, panel_j) / (2 * math.pi)
# und die rechte Seite des Gleichungssystems
b_i = np.zeros(N, dtype=float)
for i, panel_i in enumerate(panels):
b_i[i] = -u_oo * math.cos(panel_i.beta) - v_oo * math.sin(panel_i.beta)
lambda_i = np.linalg.solve(A_ij, b_i)
for i, panel_i in enumerate(panels):
panel_i.lamb = lambda_i[i]
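# Illustrative check (assumption: closed, non-lifting body): the integrated source
# strength should come out close to zero, otherwise the body would add or remove mass.
sum(panel_i.lamb * panel_i.laenge for panel_i in panels)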
# Anteil der Translationsströmung:
u = np.full_like(X, u_oo)
v = np.full_like(Y, v_oo)
# Anteil der einzelnen Paneele
for i, panel_i in enumerate(panels):
u_i,v_i = panel_i.vel(X, Y)
u += u_i
v += v_i
# Neuen Plot einrichten
plt.figure(figsize=(10, 5))
plt.xlabel('x')
plt.ylabel('y')
plt.xlim(-10,10)
plt.ylim(-5,5)
# Stromlinien mit Matplotlib-Funktion darstellen
plt.streamplot(X, Y, u, v,
density=2, linewidth=1, arrowsize=2, arrowstyle='->');
for i, panel_i in enumerate(panels):
plt.plot([panel_i.ax, panel_i.bx],[panel_i.ay, panel_i.by],
color='red', linewidth=3);
from IPython.core.display import HTML
def css_styling():
styles = open('TFDStyle.css', 'r').read()
return HTML(styles)
css_styling()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Function 2
Step3: Assignment wrapup
|
<ASSISTANT_TASK:>
Python Code:
# Put your code here, using additional cells if necessary.
# Put your code here, using additional cells if necessary.
from IPython.display import HTML
HTML("""
<iframe
src="https://goo.gl/forms/M7YCyE1OLzyOK7gH3?embedded=true"
width="80%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
""")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: When you process text, you have a nice long series of steps, but let's say you're interested in three things
Step2: The scikit-learn package does a ton of stuff, some of which includes the above. We're going to start by playing with the CountVectorizer.
Step3: Let's take a look at what it found out!
Step4: Okay, that looks like trash and garbage. What's a "sparse array"??????
Step5: If we put on our Computer Goggles we see that the first sentence has the first word 3 times, the second word 1 time, the third word 1 time, etc... But we can't read it, really. It would look nicer as a dataframe.
Step6: What do all of those numbers mean????
Step7: So sentence #4 has "at" once, and the first sentence has "bought" once, and the last sentence has "the" three times. But hey, those are garbage words! They're cluttering up our dataframe! We need to add stopwords!
Step8: I still see meowed and meowing and fish and fishes - they seem the same, so let's lemmatize/stem them.
Step9: We're going to use one that features a STEMMER - something that strips the endings off of words (or tries to, at least). This one is from nltk.
Step10: Now let's look at the new version of that dataframe.
Step11: TF-IDF
Step12: Probably the one where fish appears three times.
Step13: Now our numbers have shifted a little bit. Instead of just being a count, it's the percentage of the words.
Step14: What's the highest combined score for 'fish' and 'meow'?
Step15: Indices 4 and 6 (numbers 5 and 7) are tied - but meow never even appears in one of them!
Step16: Let's take a look at our OLD values, then our NEW values, just for meow and fish.
Step17: Notice how 'meow' increased in value because it's an infrequent term, and fish dropped in value because it's so frequent.
Step18: LOOK AT HOW IMPORTANT MEOW IS. Meowing is out of this world important, because no one ever meows.
Step19: 4 categories of documents
|
<ASSISTANT_TASK:>
Python Code:
texts = [
"Penny bought bright blue fishes.",
"Penny bought bright blue and orange fish.",
"The cat ate a fish at the store.",
"Penny went to the store. Penny ate a bug. Penny saw a fish.",
"It meowed once at the bug, it is still meowing at the bug and the fish",
"The cat is at the store. The cat is orange. The cat is meowing at the fish.",
"Penny is a fish"
]
# Penny bought bright blue fishes
# tokenized -> penny bought bright blue fishes
"Penny bought bright blue fishes".split()
import re
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
count_vectorizer = CountVectorizer()
# .fit_transfer TOKENIZES and COUNTS
X = count_vectorizer.fit_transform(texts)
X
X.toarray()
pd.DataFrame(X.toarray())
# A fish is Penny
count_vectorizer.get_feature_names()
pd.DataFrame(X.toarray(), columns=count_vectorizer.get_feature_names())
# We'll make a new vectorizer
count_vectorizer = CountVectorizer(stop_words='english')
# .fit_transform TOKENIZES and COUNTS
X = count_vectorizer.fit_transform(texts)
print(count_vectorizer.get_feature_names())
pd.DataFrame(X.toarray(), columns=count_vectorizer.get_feature_names())
# This is what our normal tokenizer looks like
def boring_tokenizer(str_input):
words = re.sub(r"[^A-Za-z0-9\-]", " ", str_input).lower().split()
return words
count_vectorizer = CountVectorizer(stop_words='english', tokenizer=boring_tokenizer)
X = count_vectorizer.fit_transform(texts)
print(count_vectorizer.get_feature_names())
from nltk.stem.porter import PorterStemmer  # it doesn't know what words mean, it just chops off their endings
porter_stemmer = PorterStemmer()
#
print(porter_stemmer.stem('fishes'))
print(porter_stemmer.stem('meowed'))
print(porter_stemmer.stem('oranges'))
print(porter_stemmer.stem('meowing'))
print(porter_stemmer.stem('organge'))
porter_stemmer = PorterStemmer()
def stemming_tokenizer(str_input):
words = re.sub(r"[^A-Za-z0-9\-]", " ", str_input).lower().split()
words = [porter_stemmer.stem(word) for word in words]
return words
count_vectorizer = CountVectorizer(stop_words='english', tokenizer=stemming_tokenizer)
X = count_vectorizer.fit_transform(texts)
print(count_vectorizer.get_feature_names())
pd.DataFrame(X.toarray(), columns=count_vectorizer.get_feature_names())
pd.DataFrame(X.toarray(), columns=count_vectorizer.get_feature_names())
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_vectorizer = TfidfVectorizer(stop_words='english', tokenizer=stemming_tokenizer, use_idf=False, norm='l1')
X = tfidf_vectorizer.fit_transform(texts)
pd.DataFrame(X.toarray(), columns=tfidf_vectorizer.get_feature_names())
tfidf_vectorizer = TfidfVectorizer(stop_words='english', tokenizer=stemming_tokenizer, use_idf=False, norm='l1')
X = tfidf_vectorizer.fit_transform(texts)
df = pd.DataFrame(X.toarray(), columns=tfidf_vectorizer.get_feature_names())
df
# Just add the columns together
pd.DataFrame([df['fish'], df['meow'], df['fish'] + df['meow']], index=["fish", "meow", "fish + meow"]).T
# use_idf=True is default, but I'll leave it in,idf inverse document frequency
idf_vectorizer = TfidfVectorizer(stop_words='english', tokenizer=stemming_tokenizer, use_idf=True, norm='l1')
X = idf_vectorizer.fit_transform(texts)
idf_df = pd.DataFrame(X.toarray(), columns=idf_vectorizer.get_feature_names())
idf_df
# OLD dataframe
pd.DataFrame([df['fish'], df['meow'], df['fish'] + df['meow']], index=["fish", "meow", "fish + meow"]).T
# NEW dataframe
pd.DataFrame([idf_df['fish'], idf_df['meow'], idf_df['fish'] + idf_df['meow']], index=["fish", "meow", "fish + meow"]).T
# use_idf=True is default, but I'll leave it in
l2_vectorizer = TfidfVectorizer(stop_words='english', tokenizer=stemming_tokenizer, use_idf=True)
X = l2_vectorizer.fit_transform(texts)
l2_df = pd.DataFrame(X.toarray(), columns=l2_vectorizer.get_feature_names())
l2_df
# normal TF-IDF dataframe
pd.DataFrame([idf_df['fish'], idf_df['meow'], idf_df['fish'] + idf_df['meow']], index=["fish", "meow", "fish + meow"]).T
# L2 norm TF-IDF dataframe
pd.DataFrame([l2_df['fish'], l2_df['meow'], l2_df['fish'] + l2_df['meow']], index=["fish", "meow", "fish + meow"]).T
# Initialize a vectorizer
vectorizer = TfidfVectorizer(use_idf=True, tokenizer=boring_tokenizer, stop_words='english')
X = vectorizer.fit_transform(texts) #fit_transform
# KMeans groups the documents into clusters based on the similarity of their tf-idf vectors.
from sklearn.cluster import KMeans
number_of_clusters = 2
km = KMeans(n_clusters=number_of_clusters)
km.fit(X)
print("Top terms per cluster:")
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
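# argsort sorts ascending, so reversing with [::-1] puts each cluster's highest-weight terms first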
terms = vectorizer.get_feature_names()
for i in range(number_of_clusters):
top_ten_words = [terms[ind] for ind in order_centroids[i, :5]]
print("Cluster {}: {}".format(i, ' '.join(top_ten_words)))
results = pd.DataFrame()
results['text'] = texts
results['category'] = km.labels_
results
from sklearn.cluster import KMeans
number_of_clusters = 4
km = KMeans(n_clusters=number_of_clusters)
km.fit(X)
print("Top terms per cluster:")
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = vectorizer.get_feature_names()
for i in range(number_of_clusters):
top_ten_words = [terms[ind] for ind in order_centroids[i, :5]]
print("Cluster {}: {}".format(i, ' '.join(top_ten_words)))
results = pd.DataFrame()
results['text'] = texts
results['category'] = km.labels_
results
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Summary Statistics
Step2: Next let's take a look at their records and runs.
Step3: Last let's take a look at score differential
|
<ASSISTANT_TASK:>
Python Code:
from pybaseball import schedule_and_record
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
yankees = schedule_and_record(1927, 'NYY')
reds = schedule_and_record(1976, 'CIN')
mariners = schedule_and_record(2001, 'SEA')
mariners.describe()
reds.describe()
yankees.describe()
# create a numeric indicator of a win and take cumulative sum of wins-to-date at each point in the season
mariners['win-count'] = np.where(mariners['W/L']=='W', 1, 0).cumsum()
yankees['win-count'] = np.where(yankees['W/L']=='W', 1, 0).cumsum()
reds['win-count'] = np.where(reds['W/L']=='W', 1, 0).cumsum()
plt.plot(mariners['win-count'],label=" '01 Mariners")
plt.plot(yankees['win-count'],label=" '27 Yankees")
plt.plot(reds['win-count'],label=" '76 Reds")
plt.legend(loc=4)
plt.xlabel('Games into Season')
plt.ylabel('Win Count')
plt.title('Record Throughout Season');
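# cumulative run differential: runs scored (R) minus runs allowed (RA), summed game by game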
mariners['scorediff'] = (mariners['R'] - mariners['RA']).cumsum()
yankees['scorediff'] = (yankees['R'] - yankees['RA']).cumsum()
reds['scorediff'] = (reds['R'] - reds['RA']).cumsum()
plt.plot(mariners['scorediff'],label=" '01 Mariners")
plt.plot(yankees['scorediff'],label=" '27 Yankees")
plt.plot(reds['scorediff'],label=" '76 Reds")
plt.legend(loc=4)
plt.xlabel('Games into Season')
plt.ylabel('Runs Scored - Runs Against')
plt.title('Cumulative Run Differential');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Default parameter scan settings relating to a specific RateChar session can also be specified during instantiation
Step2: min_concrange_factor
Step3: Various optional arguments, similar to those used during object instantiation, can be used to override the default settings and customise any parameter scan
Step4: Accessing Results
Step5: .. note
Step6: Finally data needed to draw lines relating to metabolic control analysis coefficients are also included in scan_results. Data is supplied in 3 different forms
Step7: Metabolic Control Analysis Results
Step8: Naturally, coefficients can also be accessed individually
Step9: Plotting Results
Step10: Plots generated by RateChar do not have widgets for each individual line; lines are enabled or disabled in batches according to the category they belong to. By default the Fluxes, Demand and Supply categories are enabled when plotting. To display the partial response coefficient lines together with the flux lines for J_R3, for instance, we would click the J_R3 and the Partial Response Coefficients buttons (in addition to those that are enabled by default).
Step11: Modifying the status of individual lines is still supported, but has to take place via the toggle_line method. As an example prcJR3_C_R4 can be disabled as follows
Step12: .. note
Step13: When no path is supplied the dataset will be saved to the default directory (which should be "~/Pysces/lin4_fb/ratechar/save_data.npz" in this case).
Step14: Similarly results may be loaded using the load_session method, either with or without a specified path
Step15: Saving Results
Step16: A subdirectory will be created for each metabolite with the files ec_results_N, rc_results_N, prc_results_N, flux_results_N and mca_summary_N (where N is a number starting at "0" which increments after each save operation to prevent overwriting files).
|
<ASSISTANT_TASK:>
Python Code:
# imports needed by the code below
import pysces
import psctb
from os import path
from sys import platform

mod = pysces.model('lin4_fb.psc')
rc = psctb.RateChar(mod)
rc = psctb.RateChar(mod,min_concrange_factor=100,
max_concrange_factor=100,
scan_points=255,
auto_load=False)
mod.species
rc.do_ratechar()
rc.do_ratechar(fixed=['S1','S3'], scan_min=0.02, max_concrange_factor=110, scan_points=200)
# Each key represents a field through which results can be accessed
sorted(rc.S3.scan_results.keys())
# Single value results
# scan_min value
rc.S3.scan_results.scan_min
# fixed metabolite name
rc.S3.scan_results.fixed
# 1-dimensional ndarray results (only every 10th value of 200 value arrays)
# scan_range values
rc.S3.scan_results.scan_range[::10]
# J_R3 values for scan_range
rc.S3.scan_results.J_R3[::10]
# total_supply values for scan_range
rc.S3.scan_results.total_supply[::10]
# Note that J_R3 and total_supply are equal in this case, because S3
# only has a single supply reaction
# Metabolic Control Analysis coefficient line data
# Names of elasticity coefficients related to the 'S3' parameter scan
rc.S3.scan_results.ec_names
# The x, y coordinates for two points that will be used to plot a
# visual representation of ecR3_S3
rc.S3.scan_results.ecR3_S3
# The x,y coordinates for two points that will be used to plot a
# visual representation of ecR4_S3
rc.S3.scan_results.ecR4_S3
# The ecR3_S3 and ecR4_S3 data collected into a single array
# (horizontally stacked).
rc.S3.scan_results.ec_data
# Metabolic control analysis coefficient results
rc.S3.mca_results
# Control coefficient ccJR3_R1 value
rc.S3.mca_results.ccJR3_R1
# Rate characteristic plot for 'S3'.
S3_rate_char_plot = rc.S3.plot()
# Display plot via `interact` and enable certain lines by clicking category buttons.
# The two method calls below are equivalent to clicking the 'J_R3'
# and 'Partial Response Coefficients' buttons:
# S3_rate_char_plot.toggle_category('J_R3',True)
# S3_rate_char_plot.toggle_category('Partial Response Coefficients',True)
S3_rate_char_plot.interact()
S3_rate_char_plot.toggle_line('prcJR3_S3_R4', False)
S3_rate_char_plot.show()
# This points to a file under the Pysces directory
save_file = '~/Pysces/rc_doc_example.npz'
# Correct path depending on platform - necessary for platform independent scripts
if platform == 'win32' and pysces.version.current_version_tuple() < (0,9,8):
save_file = psctb.utils.misc.unix_to_windows_path(save_file)
else:
save_file = path.expanduser(save_file)
rc.save_session(file_name = save_file)
rc.save_session() # to "~/Pysces/lin4_fb/ratechar/save_data.npz"
rc.load_session(save_file)
# OR
rc.load_session() # from "~/Pysces/lin4_fb/ratechar/save_data.npz"
# This points to a subdirectory under the Pysces directory
save_folder = '~/Pysces/lin4_fb/'
# Correct path depending on platform - necessary for platform independent scripts
if platform == 'win32' and pysces.version.current_version_tuple() < (0,9,8):
save_folder = psctb.utils.misc.unix_to_windows_path(save_folder)
else:
save_folder = path.expanduser(save_folder)
rc.save_results(save_folder)
# Otherwise results will be saved to the default directory
rc.save_results(save_folder) # to sub folders in "~/Pysces/lin4_fb/ratechar/
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h2> Import the dataframe and remove any features that are all zero </h2>
Step2: <h2> Get mappings between sample names, file names, and sample classes </h2>
Step3: <h4>Control is 0, alzheimers is 1 </h4>
Step4: <h2> pqn normalize your features </h2>
Step5: <h2>Random Forest & adaBoost with PQN-normalized data</h2>
Step6: <h2> Great, you can classify things. But make null models and do a sanity check to make sure the classification isn't just noise </h2>
Step7: <h2> Let's check out some PCA plots </h2>
|
<ASSISTANT_TASK:>
Python Code:
import time
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
from sklearn import preprocessing
from sklearn.ensemble import RandomForestClassifier
from sklearn.cross_validation import StratifiedShuffleSplit
from sklearn.cross_validation import cross_val_score
#from sklearn.model_selection import StratifiedShuffleSplit
#from sklearn.model_selection import cross_val_score
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import roc_curve, auc
from sklearn.utils import shuffle
from scipy import interp
%matplotlib inline
def remove_zero_columns(X, threshold=1e-20):
# convert zeros to nan, drop all nan columns, the replace leftover nan with zeros
X_non_zero_colum = X.replace(0, np.nan).dropna(how='all', axis=1).replace(np.nan, 0)
#.dropna(how='all', axis=0).replace(np.nan,0)
return X_non_zero_colum
def zero_fill_half_min(X, threshold=1e-20):
# Fill zeros with 1/2 the minimum value of that column
# input dataframe. Add only to zero values
# Get a vector of 1/2 minimum values
half_min = X[X > threshold].min(axis=0)*0.5
# Add the half_min values to a dataframe where everything that isn't zero is NaN.
# then convert NaN's to 0
fill_vals = (X[X < threshold] + half_min).fillna(value=0)
# Add the original dataframe to the dataframe of zeros and fill-values
X_zeros_filled = X + fill_vals
return X_zeros_filled
toy = pd.DataFrame([[1,2,3,0],
[0,0,0,0],
[0.5,1,0,0]], dtype=float)
toy_no_zeros = remove_zero_columns(toy)
toy_filled_zeros = zero_fill_half_min(toy_no_zeros)
print toy
print toy_no_zeros
print toy_filled_zeros
### Subdivide the data into a feature table
local_path = '/home/irockafe/Dropbox (MIT)/Alm_Lab/'\
'projects'
data_path = local_path + '/revo_healthcare/data/processed/MTBLS72/positive_mode/'\
'mtbls_no_retcor_bw2.csv'
## Import the data and remove extraneous columns
df = pd.read_csv(data_path, index_col=0)
df.shape
df.head()
# Make a new index of mz:rt
mz = df.loc[:,"mz"].astype('str')
rt = df.loc[:,"rt"].astype('str')
idx = mz+':'+rt
df.index = idx
# separate samples from xcms/camera things to make feature table
not_samples = ['mz', 'mzmin', 'mzmax', 'rt', 'rtmin', 'rtmax',
'npeaks', 'positive_mode',
]
samples_list = df.columns.difference(not_samples)
mz_rt_df = df[not_samples]
# convert to samples x features
X_df_raw = df[samples_list].T
# Replace X's at the beginning of columns
new_idx = [i.replace('X', '') for i in X_df_raw.index]
X_df_raw.index = new_idx
# Remove zero-full columns and fill zeroes with 1/2 minimum values
X_df = remove_zero_columns(X_df_raw)
X_df_zero_filled = zero_fill_half_min(X_df)
print "original shape: %s \n# zeros: %f\n" % (X_df_raw.shape, (X_df_raw < 1e-20).sum().sum())
print "zero-columns and NaNs repalced? shape: %s \n# zeros: %f\n" % (X_df.shape,
(X_df < 1e-20).sum().sum())
print "zeros filled shape: %s \n#zeros: %f\n" % (X_df_zero_filled.shape,
(X_df_zero_filled < 1e-20).sum().sum())
# Convert to numpy matrix to play nicely with sklearn
#X = X_df.as_matrix()
#print X.shape
#print X_df.head()
# get mapping between sample name and sample class
path_sample_class_map = local_path + '/revo_healthcare/data/raw/MTBLS72/s_Plasma_AD_Lipidomics.txt'
class_column = 'Factor Value[Cognitive Status]'
class_df = pd.read_csv(path_sample_class_map,
sep='\t')
# Set index as sample name
class_df.set_index('Sample Name', inplace=True)
class_df = class_df[class_column]
# select only positive values
class_df = class_df[class_df.index.str.contains('POS')]
print class_df.head(10)
print "Class label shape: ", class_df.shape
print "feature table shape: ", X_df.shape
#class_df.rename('class', inplace=True)
print class_df.head()
pd.unique(class_df.values)
# Match between feature table and metadata and assert that they're in the same order.
# then define the numpy-arrays for X and y
class_df = class_df[X_df.index].sort_index()
X_df = X_df.sort_index()
assert (class_df.index == X_df.index).all()
X = X_df.as_matrix()
# convert classes to numbers
le = preprocessing.LabelEncoder()
le.fit(class_df)
y = le.transform(class_df)
print y
# TODO PQN normalization, and log-transformation,
# and some feature selection (above certain threshold of intensity, use principal components), etc.
def pqn_normalize(X, integral_first=False, plot=False):
'''
Take a feature table and run PQN normalization on it
'''
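    # Probabilistic Quotient Normalization: each sample is divided by the median of its
    # feature-wise quotients against a reference (here the median across samples),
    # which corrects for overall dilution differences between samples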
# normalize by sum of intensities in each sample first. Not necessary
if integral_first:
sample_sums = np.sum(X, axis=1)
X = (X / sample_sums[:,np.newaxis])
# Get the median value of each feature across all samples
mean_intensities = np.median(X, axis=0)
    # Divide each feature by the median value of each feature -
# these are the quotients for each feature
X_quotients = (X / mean_intensities[np.newaxis,:])
if plot: # plot the distribution of quotients from one sample
for i in range(1,len(X_quotients[:,1])):
print 'allquotients reshaped!\n\n',
#all_quotients = X_quotients.reshape(np.prod(X_quotients.shape))
all_quotients = X_quotients[i,:]
print all_quotients.shape
x = np.random.normal(loc=0, scale=1, size=len(all_quotients))
sns.violinplot(all_quotients)
plt.title("median val: %f\nMax val=%f" % (np.median(all_quotients), np.max(all_quotients)))
plt.xlim([-0.5, 5])
plt.show()
# Define a quotient for each sample as the median of the feature-specific quotients
# in that sample
sample_quotients = np.median(X_quotients, axis=1)
# Quotient normalize each samples
X_pqn = X / sample_quotients[:,np.newaxis]
return X_pqn
# Make a fake sample, with 2 samples at 1x and 2x dilutions
X_toy = np.array([[1,1,1,],
[2,2,2],
[3,6,9],
[6,12,18]], dtype=float)
print X_toy
print X_toy.reshape(1, np.prod(X_toy.shape))
X_toy_pqn_int = pqn_normalize(X_toy, integral_first=True, plot=True)
print X_toy_pqn_int
print '\n\n\n'
X_toy_pqn = pqn_normalize(X_toy)
print X_toy_pqn
X_pqn = pqn_normalize(X)
print X_pqn
def roc_curve_cv(X, y, clf, cross_val,
path='/home/irockafe/Desktop/roc.pdf',
save=False, plot=True):
t1 = time.time()
# collect vals for the ROC curves
tpr_list = []
mean_fpr = np.linspace(0,1,100)
auc_list = []
# Get the false-positive and true-positive rate
for i, (train, test) in enumerate(cross_val):
clf.fit(X[train], y[train])
y_pred = clf.predict_proba(X[test])[:,1]
# get fpr, tpr
fpr, tpr, thresholds = roc_curve(y[test], y_pred)
roc_auc = auc(fpr, tpr)
#print 'AUC', roc_auc
#sns.plt.plot(fpr, tpr, lw=10, alpha=0.6, label='ROC - AUC = %0.2f' % roc_auc,)
#sns.plt.show()
tpr_list.append(interp(mean_fpr, fpr, tpr))
tpr_list[-1][0] = 0.0
auc_list.append(roc_auc)
if (i % 10 == 0):
print '{perc}% done! {time}s elapsed'.format(perc=100*float(i)/cross_val.n_iter, time=(time.time() - t1))
# get mean tpr and fpr
mean_tpr = np.mean(tpr_list, axis=0)
# make sure it ends up at 1.0
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
std_auc = np.std(auc_list)
if plot:
# plot mean auc
plt.plot(mean_fpr, mean_tpr, label='Mean ROC - AUC = %0.2f $\pm$ %0.2f' % (mean_auc,
std_auc),
lw=5, color='b')
# plot luck-line
plt.plot([0,1], [0,1], linestyle = '--', lw=2, color='r',
label='Luck', alpha=0.5)
# plot 1-std
std_tpr = np.std(tpr_list, axis=0)
tprs_upper = np.minimum(mean_tpr + std_tpr, 1)
tprs_lower = np.maximum(mean_tpr - std_tpr, 0)
plt.fill_between(mean_fpr, tprs_lower, tprs_upper, color='grey', alpha=0.2,
label=r'$\pm$ 1 stdev')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC curve, {iters} iterations of {cv} cross validation'.format(
iters=cross_val.n_iter, cv='{train}:{test}'.format(test=cross_val.test_size, train=(1-cross_val.test_size)))
)
plt.legend(loc="lower right")
if save:
plt.savefig(path, format='pdf')
plt.show()
return tpr_list, auc_list, mean_fpr
rf_estimators = 1000
n_iter = 3
test_size = 0.3
random_state = 1
cross_val_rf = StratifiedShuffleSplit(y, n_iter=n_iter, test_size=test_size, random_state=random_state)
clf_rf = RandomForestClassifier(n_estimators=rf_estimators, random_state=random_state)
print cross_val_rf.n_iter
print cross_val_rf.test_size
tpr_vals, auc_vals, mean_fpr = roc_curve_cv(X, y, clf_rf, cross_val_rf,
save=False)
# For adaboosted
n_iter = 3
test_size = 0.3
random_state = 1
adaboost_estimators = 200
cross_val_adaboost = StratifiedShuffleSplit(y, n_iter=n_iter, test_size=test_size, random_state=random_state)
clf = AdaBoostClassifier(n_estimators=adaboost_estimators, random_state=random_state)
adaboost_tpr, adaboost_auc, adaboost_fpr = roc_curve_cv(X, y, clf, cross_val_adaboost,
)
# Make a null model AUC curve
def make_null_model(X, y, clf, cross_val, random_state=1, num_shuffles=5, plot=True):
'''
Runs the true model, then sanity-checks by:
Shuffles class labels and then builds cross-validated ROC curves from them.
TODO: Compare true AUC vs. shuffled auc by t-test (assumes normality of AUC curve)
'''
null_aucs = []
print y.shape
print X.shape
tpr_true, auc_true, fpr_true = roc_curve_cv(X, y, clf, cross_val, save=True)
# shuffle y lots of times
for i in range(0, num_shuffles):
#Iterate through the shuffled y vals and repeat with appropriate params
# Retain the auc vals for final plotting of distribution
y_shuffle = shuffle(y)
cross_val.y = y_shuffle
cross_val.y_indices = y_shuffle
        print 'Number of labels that still match after shuffling: %s' % (y == cross_val.y).sum()
# Get auc values for number of iterations
tpr, auc, fpr = roc_curve_cv(X, y_shuffle, clf, cross_val, plot=True)
null_aucs.append(auc)
#plot the outcome as violinplots of auc curves
if plot:
flattened_aucs = [j for i in null_aucs for j in i]
my_dict = {'true_auc': auc_true, 'null_auc': flattened_aucs}
df_poop = pd.DataFrame.from_dict(my_dict, orient='index').T
df_tidy = pd.melt(df_poop, value_vars=['true_auc', 'null_auc'],
value_name='auc', var_name='AUC_type')
#print flattened_aucs
sns.violinplot(x='AUC_type', y='auc',
inner='points', data=df_tidy)
# Plot distribution of AUC vals
plt.title("Distribution of aucs")
#sns.plt.ylabel('count')
plt.xlabel('AUC')
#sns.plt.plot(auc_true, 0, color='red', markersize=10)
plt.show()
# Do a quick t-test to see if odds of randomly getting an AUC that good
return auc_true, null_aucs
# Make a null model AUC curve & compare it to null-model
# Random forest magic!
rf_estimators = 1000
n_iter = 50
test_size = 0.3
random_state = 1
cross_val_rf = StratifiedShuffleSplit(y, n_iter=n_iter, test_size=test_size, random_state=random_state)
clf_rf = RandomForestClassifier(n_estimators=rf_estimators, random_state=random_state)
true_auc, all_aucs = make_null_model(X, y, clf_rf, cross_val_rf, num_shuffles=5)
# make dataframe from true and false aucs
flattened_aucs = [j for i in all_aucs for j in i]
my_dict = {'true_auc': true_auc, 'null_auc': flattened_aucs}
df_poop = pd.DataFrame.from_dict(my_dict, orient='index').T
df_tidy = pd.melt(df_poop, value_vars=['true_auc', 'null_auc'],
value_name='auc', var_name='AUC_type')
print df_tidy.head()
#print flattened_aucs
sns.violinplot(x='AUC_type', y='auc',
inner='points', data=df_tidy, bw=0.7)
plt.title("Classification is not possible when data is shuffled")
#sns.plt.ylabel('count')
plt.xlabel('True model vs. Null Model')
plt.ylabel('AUC')
#sns.plt.plot(auc_true, 0, color='red', markersize=10)
plt.savefig('/home/irockafe/Desktop/auc distribution', format='pdf')
plt.show()
from sklearn.decomposition import PCA
# Check PCA of things
def PCA_plot(X, y, n_components, plot_color, class_nums, class_names, title='PCA'):
pca = PCA(n_components=n_components)
X_pca = pca.fit(X).transform(X)
print zip(plot_color, class_nums, class_names)
for color, i, target_name in zip(plot_color, class_nums, class_names):
# plot one class at a time, first plot all classes y == 0
#print color
#print y == i
xvals = X_pca[y == i, 0]
print xvals.shape
yvals = X_pca[y == i, 1]
plt.scatter(xvals, yvals, color=color, alpha=0.8, label=target_name)
plt.legend(bbox_to_anchor=(1.01,1), loc='upper left', shadow=False)#, scatterpoints=1)
plt.title('PCA of Malaria data')
plt.show()
PCA_plot(X, y, 2, ['red', 'blue'], [0,1], ['Control', 'Alzheimers'])
#PCA_plot(X, y, 2, ['red', 'blue'], [0,1], ['malaria', 'non-malaria fever'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 2
Step2: Now that we have the SparkContext setup, let's download the Wikipedia data as an RDD. For this notebook we will only use a subset of the data, but there is code below to use the full dataset (which is about ~5GB).
Step3: Now that the rdd is defined, let's see the first few lines to confirm it is structured the way we want.
Step4: This looks good, it has a document on each line, with a subsequent index value. Since we want to split documents by space, it is important to remove any extra spaces that exist. When examining the document closely we see there are extra spaces, so let's clean those up and split each document by space. Also, let's put the index for the document as the first entry, so we have an 'id' key and then the words.
Step5: Now each document is a tuple of (index, list of words). Let's transform that into a list of (index, word) tuples instead. We will use flatMap for that.
Step6: Great, now we have things formatted the way we want, let's start aggregating to generate word counts per document.
Step7: And finally, let's create a dictionary with word as the key and count as the value.
Step8: Step 3
Step9: Looking at the most frequent words in the bag of words, it is obvious that 'stop words' are most prevalent. Let's remove them with one line, using GraphLab Create.
Step10: Great, now the most frequent words are no longer stop words. We are ready to train a Topic Model on the data.
Step 4
Step11: Step 5
Step12: Well, that is just showing predicted topic_id. Instead, let's add a column with the topic_id we just predicted, and create that as our results SFrame.
Step13: Now let's see which topic ids appear most frequently in this set of Wikipedia data
Step14: Looking at this tells us that topic ids 22 and 6 are more common in this dataset. Let's find out what words are associated with those topics.
Step15: Interesting. Wonder what this set of documents is about. Let's get the full list of topic words learned by the model.
Step16: That SFrame is less useful, let's groupby all the same topic ids and create a list of words.
Step17: This is the appropriate format for recording the topics learned, by topic_id.
Step 6
|
<ASSISTANT_TASK:>
Python Code:
# To use GraphLab Create within PySpark, you need to set the $SPARK_HOME and $PYTHONPATH
# environment variables on the driver. A common usage:
!export SPARK_HOME="your-spark-home-dir"
!export PYTHONPATH=$SPARK_HOME/python/:$SPARK_HOME/python/lib/py4j-0.8.2.1-src.zip:$PYTHONPATH
import graphlab as gl
from pyspark import SparkContext
import os
import requests
# Set up the SparkContext object
# this can be 'local' or 'yarn-client' in PySpark
# Remember if using yarn-client then all the paths should be accessible
# by all nodes in the cluster.
sc = SparkContext('local')
import requests
def download_file(url, save_path):
local_filename = url.split('/')[-1]
r = requests.get(url, stream=True)
with open(os.path.join(save_path, local_filename), 'wb') as f:
for chunk in r.iter_content(chunk_size=1024):
if chunk: # filter out keep-alive new chunks
f.write(chunk)
f.flush()
return local_filename
# File to download
file_list = [16]
# If you want to use this entire Wikipedia dataset, uncomment the following line.
# This will download ~5GB of data split over 36 files.
# file_list = range(37)
# Download location for Wikipedia data
save_path = '/tmp/wikipedia'
# Actually download the files, if the location doesn't exist yet.
if not os.path.exists(save_path):
os.mkdir(save_path)
for idx in file_list:
url = 'https://static.turi.com/datasets/wikipedia/raw/w%d' % idx
print "Downloading '%s', saving to: '%s'" % (url, save_path)
download_file(url, save_path) # This will download 146MB of data.
rawRdd = sc.textFile('file:///%s/' % save_path).zipWithIndex()
rawRdd.take(1)
# replace multiple spaces with a single space using re.sub, then trim whitespace and split on single space.
import re
splitRdd = rawRdd.map(lambda (a,b): (b, re.sub("[ ]+", " ", a).strip().split(" ")))
splitRdd.take(1)
zipRdd = splitRdd.flatMap(lambda (a,b): zip([a] * len(b),b))
zipRdd.take(1)
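# classic word count keyed by (doc_id, word): map each pair to 1, then reduceByKey sums the counts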
wordRdd = zipRdd.map(lambda composite_word: (composite_word, 1)).reduceByKey(lambda a, b: a + b)
wordRdd.take(2)
bagRdd = wordRdd.map(lambda (a,b):(a[0],(a[1],b))).groupByKey().map(lambda (a,b):(a,{word_count[0]:word_count[1] for word_count in b.data}))
bagRdd.take(1)
data = gl.SFrame.from_rdd(bagRdd,sc)
data = data.unpack('X1')
data.rename({'X1.0':'id','X1.1':'bag_of_words'})
gl.canvas.set_target('ipynb')
data.show()
# Trim out stopwords
data['bag_of_words'] = data['bag_of_words'].dict_trim_by_keys(gl.text_analytics.stopwords(), exclude=True)
data.show()
# If running on entire dataset, might want to increase num_topics and num_iterations
model = gl.topic_model.create(data['bag_of_words'], num_topics=30, num_iterations=50)
pred = model.predict(data['bag_of_words'])
pred
results = gl.SFrame({'doc_id':data['id'], 'topic_id':pred, 'bag_of_words':data['bag_of_words']})
results.swap_columns('doc_id', 'bag_of_words') # better SFrame formatting
results.print_rows(max_column_width=60)
gl.canvas.set_target('ipynb')
results['topic_id'].show('Categorical')
model.get_topics([22], output_type='topic_words').print_rows(max_column_width=100)
model.get_topics([6], output_type='topic_words').print_rows(max_column_width=100)
topics = model.get_topics()
topics = topics.rename({'topic':'topic_id'})
topics
topics.groupby(['topic_id'], {'topic_words':gl.aggregate.CONCAT("word")}).print_rows(max_column_width=80)
# to save the predictions as an RDD
predictions_rdd = data.to_rdd(sc)
predictions_rdd.saveAsTextFile('file:///tmp/predictions.rdd')
# save the topic_ids with their topic words
topics_rdd = topics.to_rdd(sc)
topics_rdd.saveAsTextFile('file:///tmp/topics.rdd')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Retrieving training and test data
Step2: Visualize the training data
Step3: Building the network
Step4: Training the network
Step5: Testing
|
<ASSISTANT_TASK:>
Python Code:
# Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by it's index in the MNIST set
def display_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the first (index 0) training image
display_digit(0)
# Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
# Inputs
net = tflearn.input_data([None, trainX.shape[1]])
# Hidden layer(s)
net = tflearn.fully_connected(net, 160, activation='ReLU')
net = tflearn.fully_connected(net, 64, activation='ReLU')
# Output layer and training model
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.01, loss='categorical_crossentropy')
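    # the regression layer attaches the optimizer, learning rate and loss that model.fit will use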
model = tflearn.DNN(net)
return model
# Build the model
model = build_model()
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=200)
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=75, n_epoch=200)
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=50, n_epoch=75)
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=25, n_epoch=100)
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Visualizing data
Step2: Form of linear regression
Step3: The Learned Linear Function is
Step4: Model evaluation metrics for regression
Step5: TIP
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
data_path = "data/Advertising.csv"
# or load the dataset directly from the link
# # data_link = "http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv"
data = pd.read_csv(data_path, index_col=0)
# display the first 5 rows
data.head()
import seaborn as sns
# allow plots to appear within the notebook
%matplotlib inline
# visualize the relationship between the features and the response using scatterplots
sns.pairplot(data, x_vars=['TV','Radio','Newspaper'], y_vars='Sales', size=7, aspect=0.7, kind='reg')
# feature names
feature_names = ['TV', 'Radio', 'Newspaper']
# matrix training instances
X = data[feature_names]
# target output
label = 'Sales'
y = data[label]
print(X.head())
print('-------------------------')
print(y.head())
# How many training examples do we have?
# (row, col) -> (number of training examples, number of variables/features)
X.shape
# Splitting X and y into training and testing sets
from sklearn.model_selection import train_test_split
# As we are splitting the data randomly,
# we use random_state=1 for reproducibility of this kernel's results on your machine,
# otherwise you would get different coefficients
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# default split is 75% for training and 25% for testing
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
# Linear Regression Model
# import model
from sklearn.linear_model import LinearRegression
# import module to calculate model performance metrics
from sklearn import metrics
# instantiate
linreg = LinearRegression()
# fit the model to the training data (learn the coefficients)
linreg.fit(X_train, y_train)
# print the intercept and coefficients
print(linreg.intercept_)
print(linreg.coef_)
# pair the feature names with the coefficients
list(zip(feature_names, linreg.coef_))
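# each coefficient is the expected change in Sales for a one-unit increase in that feature, holding the others fixed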
# make predictions on the testing set
y_pred = linreg.predict(X_test)
# We want to compute the RMSE using the true test values (y_test) and our predictions (y_pred)
print(np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
# create a Python list of feature names
feature_cols = ['TV', 'Radio']
# use the list to select a subset of the original DataFrame
X = data[feature_cols]
# select a Series from the DataFrame
y = data.Sales
# split into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# fit the model to the training data (learn the coefficients)
linreg.fit(X_train, y_train)
# make predictions on the testing set
y_pred = linreg.predict(X_test)
# compute the RMSE of our predictions
print(np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's implement this kernel and simulate some data from this model
Step2: One thing to note here is that we've used named parameters in a dictionary, instead of an array of parameters as in some of the other examples.
Step3: Using our optimized model, over-plot the conditional predictions
|
<ASSISTANT_TASK:>
Python Code:
try:
import tinygp
except ImportError:
%pip install -q tinygp
try:
import optax
except ImportError:
%pip install -q optax
import tinygp
import jax
import jax.numpy as jnp
class SpectralMixture(tinygp.kernels.Kernel):
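    # spectral mixture kernel: k(tau) = sum_q w_q * exp(-2 * pi^2 * tau^2 / scale_q^2) * cos(2 * pi * freq_q * tau)
    # (the same expression implemented in evaluate() below)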
def __init__(self, weight, scale, freq):
self.weight = jnp.atleast_1d(weight)
self.scale = jnp.atleast_1d(scale)
self.freq = jnp.atleast_1d(freq)
def evaluate(self, X1, X2):
tau = jnp.atleast_1d(jnp.abs(X1 - X2))[..., None]
return jnp.sum(
self.weight
* jnp.prod(
jnp.exp(-2 * jnp.pi**2 * tau**2 / self.scale**2) * jnp.cos(2 * jnp.pi * self.freq * tau),
axis=-1,
)
)
import numpy as np
import matplotlib.pyplot as plt
def build_gp(theta):
kernel = SpectralMixture(
jnp.exp(theta["log_weight"]),
jnp.exp(theta["log_scale"]),
jnp.exp(theta["log_freq"]),
)
return tinygp.GaussianProcess(kernel, t, diag=jnp.exp(theta["log_diag"]), mean=theta["mean"])
params = {
"log_weight": np.log([1.0, 1.0]),
"log_scale": np.log([10.0, 20.0]),
"log_freq": np.log([1.0, 1.0 / 3.0]),
"log_diag": np.log(0.1),
"mean": 0.0,
}
random = np.random.default_rng(546)
t = np.sort(random.uniform(0, 10, 50))
true_gp = build_gp(params)
y = true_gp.sample(jax.random.PRNGKey(123))
plt.plot(t, y, ".k")
plt.ylim(-4.5, 4.5)
plt.title("simulated data")
plt.xlabel("x")
_ = plt.ylabel("y")
import optax
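# the loss is the negative GP marginal log likelihood; jax.value_and_grad returns both its value and its gradient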
@jax.jit
@jax.value_and_grad
def loss(theta):
return -build_gp(theta).condition(y)
opt = optax.sgd(learning_rate=3e-4)
opt_state = opt.init(params)
for i in range(1000):
loss_val, grads = loss(params)
updates, opt_state = opt.update(grads, opt_state)
params = optax.apply_updates(params, updates)
opt_gp = build_gp(params)
tau = np.linspace(0, 5, 500)
plt.plot(tau, true_gp.kernel(tau[:1], tau)[0], "--k", label="true kernel")
plt.plot(tau, opt_gp.kernel(tau[:1], tau)[0], label="inferred kernel")
plt.legend()
plt.xlabel(r"$\tau$")
plt.ylabel(r"$k(\tau)$")
_ = plt.xlim(tau.min(), tau.max())
plt.savefig("gp-spectral-mixture-learned-kernel.pdf")
x = np.linspace(-2, 12, 500)
plt.plot(t, y, ".k", label="data")
mu, var = opt_gp.predict(y, x, return_var=True)
plt.fill_between(
x,
mu + np.sqrt(var),
mu - np.sqrt(var),
color="C0",
alpha=0.5,
label="prediction",
)
plt.plot(x, mu, color="C0", lw=2)
plt.xlim(x.min(), x.max())
plt.ylim(-4.5, 4.5)
plt.legend(loc=2)
plt.xlabel("x")
_ = plt.ylabel("y")
plt.savefig("gp-spectral-mixture-pred.pdf")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Here we'll initialize a default binary, but ask for it to be created as a contact system.
Step3: We'll compare this to the default detached binary
Step4: Hierarchy
Step5: As you can see, the overcontact system has an additional "component" with method "envelope" and component label "contact_envelope".
Step6: Now, of course, if we didn't originally know we wanted a contact binary and built the default detached system, we could still turn it into a contact binary just by changing the hierarchy.
Step7: However, since our system was detached, the system is not overflowing, and therefore doesn't pass system checks
Step8: And because of this, the potential and requiv@secondary constraints cannot be updated from the constraints.
Step9: Likewise, we can make a contact system detached again simply by removing the envelope from the hierarchy. The parameters themselves will still exist (unless you remove them), so you can always just change the hierarchy again to change back to an overcontact system.
Step10: Although the constraints have been removed, PHOEBE has lost the original value of the secondary radius (because of the failed contact constraints), so we'll have to reset that here as well.
Step11: Adding Datasets
Step12: For comparison, we'll do the same to our detached system
Step13: Running Compute
Step14: Synthetics
Step15: Plotting
Step16: Orbits
Step17: Light Curves
Step18: RVs
|
<ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.2,<2.3"
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b_cb = phoebe.default_binary(contact_binary=True)
b_detached = phoebe.default_binary()
print(b_detached.hierarchy)
print(b_cb.hierarchy)
print(b_cb.filter(component='contact_envelope', kind='envelope', context='component'))
print(b_cb.filter(component='primary', kind='star', context='component'))
b_cb['requiv@primary'] = 1.5
b_cb['pot@contact_envelope@component']
b_cb['fillout_factor@contact_envelope@component']
b_cb['requiv@secondary@component']
b_detached.add_component('envelope', component='contact_envelope')
hier = phoebe.hierarchy.binaryorbit(b_detached['binary'], b_detached['primary'], b_detached['secondary'], b_detached['contact_envelope'])
print(hier)
b_detached.filter(context='constraint',constraint_func='pitch',component='primary')
b_detached.set_hierarchy(hier)
print(b_detached.hierarchy)
print(b_detached.run_checks())
b_detached['pot@component']
b_detached['requiv@secondary@component']
hier = phoebe.hierarchy.binaryorbit(b_detached['binary'], b_detached['primary'], b_detached['secondary'])
print(hier)
b_detached.set_hierarchy(hier)
print(b_detached.hierarchy)
b_detached['requiv@secondary@component'] = 1.0
b_cb.add_dataset('mesh', compute_times=[0], dataset='mesh01')
b_cb.add_dataset('orb', compute_times=np.linspace(0,1,201), dataset='orb01')
b_cb.add_dataset('lc', times=np.linspace(0,1,21), dataset='lc01')
b_cb.add_dataset('rv', times=np.linspace(0,1,21), dataset='rv01')
b_detached.add_dataset('mesh', compute_times=[0], dataset='mesh01')
b_detached.add_dataset('orb', compute_times=np.linspace(0,1,201), dataset='orb01')
b_detached.add_dataset('lc', times=np.linspace(0,1,21), dataset='lc01')
b_detached.add_dataset('rv', times=np.linspace(0,1,21), dataset='rv01')
b_cb.run_compute(irrad_method='none')
b_detached.run_compute(irrad_method='none')
print(b_cb['mesh01@model'].components)
print(b_detached['mesh01@model'].components)
afig, mplfig = b_cb['mesh01@model'].plot(x='ws', show=True)
afig, mplfig = b_detached['mesh01@model'].plot(x='ws', show=True)
afig, mplfig = b_cb['orb01@model'].plot(x='ws',show=True)
afig, mplfig = b_detached['orb01@model'].plot(x='ws',show=True)
afig, mplfig = b_cb['lc01@model'].plot(show=True)
afig, mplfig = b_detached['lc01@model'].plot(show=True)
afig, mplfig = b_cb['rv01@model'].plot(show=True)
afig, mplfig = b_detached['rv01@model'].plot(show=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preparing the dataset
Step2: IMPORTANT
Step3: Understanding the sizes of the data
Step4: Are the classes equally distributed?
Step5: Flatten images
Step6: Splitting the train set into train and dev set
Step7: Helper class that creates minibatches
Step8: Testing the iterator
Step9: Neural network
Step10: Model instance
Step11: Loss function and optimizer
Step12: Sanity check
Step13: Training loop
Step14: Test accuracy
Step15: Plot loss functions, accuracies
Step16: Overfitting
Step17: Gotchas
Step18: Layers kept in a plain Python list are not part of the model
Step19: One solution is nn.ModuleList
Step20: Using the GPU
Step21: Moving things manually to the GPU
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from torchvision import datasets
import torch
import torch.nn as nn
import torch.optim as optim
train_data = datasets.FashionMNIST('data', download=True, train=True)
# we need FloatTensors as input
train_X = train_data.data.float()
train_y = train_data.targets
test_data = datasets.FashionMNIST('data', download=True, train=False)
test_X = test_data.data.float()
test_y = test_data.targets
labels = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
idx = np.random.randint(len(train_X))
sample_X = train_X[idx].numpy()
sample_y = train_y[idx].numpy()
print("Label: {}".format(labels[sample_y]))
plt.imshow(sample_X, 'gray')
train_X.size()
np.unique(train_y.numpy(), return_counts=True)
print("Before flattening:")
print("Train size:", train_X.size(), train_y.size())
print("Test size:", test_X.size(), test_y.size())
train_X = train_X.view(-1, 28 * 28).squeeze(1)
test_X = test_X.view(-1, 28 * 28).squeeze(1)
print("\nAfter flattening:")
print("Train size:", train_X.size(), train_y.size())
print("Test size:", test_X.size(), test_y.size())
all_idx = np.arange(len(train_X))
np.random.shuffle(all_idx)
train_idx = all_idx[:50000]
dev_idx = all_idx[50000:]
print("The overlap between train and dev should be an empty set:", set(train_idx) & set(dev_idx))
print("")
dev_X = train_X[dev_idx]
dev_y = train_y[dev_idx]
train_X = train_X[train_idx]
train_y = train_y[train_idx]
print("Train size:", train_X.size(), train_y.size())
print("Dev size:", dev_X.size(), dev_y.size())
print("Test size:", test_X.size(), test_y.size())
class BatchedIterator:
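    # minimal minibatch helper: iterate_once yields consecutive (X, y) slices of length batch_size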
def __init__(self, X, y, batch_size):
self.X = X
self.y = y
self.batch_size = batch_size
def iterate_once(self):
for start in range(0, len(self.X), self.batch_size):
end = start + self.batch_size
yield self.X[start:end], self.y[start:end]
train_iter = BatchedIterator(train_X, train_y, 33333)
for batch in train_iter.iterate_once():
print(batch[0].size(), batch[1].size())
class SimpleClassifier(nn.Module):
def __init__(self, input_dim, output_dim, hidden_dim):
super().__init__()
self.input_layer = nn.Linear(input_dim, hidden_dim)
self.relu = nn.ReLU()
self.output_layer = nn.Linear(hidden_dim, output_dim)
def forward(self, X):
h = self.input_layer(X)
h = self.relu(h)
out = self.output_layer(h)
return out
model = SimpleClassifier(
input_dim=train_X.size(1),
output_dim=10,
hidden_dim=50
)
model
for n, p in model.named_parameters():
print(n, p.size())
criterion = nn.CrossEntropyLoss()
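# CrossEntropyLoss applies log-softmax internally, so the output layer returns raw logits (no softmax needed)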
optimizer = optim.Adam(model.parameters())
test_pred = model(test_X).max(axis=1)[1]
test_acc = torch.eq(test_pred, test_y).sum().float() / len(test_X)
test_acc
batch_size = 1000
train_iter = BatchedIterator(train_X, train_y, batch_size)
dev_iter = BatchedIterator(dev_X, dev_y, batch_size)
test_iter = BatchedIterator(test_X, test_y, batch_size)
all_train_loss = []
all_dev_loss = []
all_train_acc = []
all_dev_acc = []
n_epochs = 10
for epoch in range(n_epochs):
# training loop
for bi, (batch_x, batch_y) in enumerate(train_iter.iterate_once()):
y_out = model(batch_x)
loss = criterion(y_out, batch_y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
# one train epoch finished, evaluate on the train and the dev set (NOT the test)
train_out = model(train_X)
train_loss = criterion(train_out, train_y)
all_train_loss.append(train_loss.item())
train_pred = train_out.max(axis=1)[1]
train_acc = torch.eq(train_pred, train_y).sum().float() / len(train_X)
all_train_acc.append(train_acc)
dev_out = model(dev_X)
dev_loss = criterion(dev_out, dev_y)
all_dev_loss.append(dev_loss.item())
dev_pred = dev_out.max(axis=1)[1]
dev_acc = torch.eq(dev_pred, dev_y).sum().float() / len(dev_X)
all_dev_acc.append(dev_acc)
print(f"Epoch: {epoch}\n train accuracy: {train_acc} train loss: {train_loss}")
print(f" dev accuracy: {dev_acc} dev loss: {dev_loss}")
test_pred = model(test_X).max(axis=1)[1]
test_acc = torch.eq(test_pred, test_y).sum().float() / len(test_X)
test_acc
plt.plot(all_train_loss, label='train')
plt.plot(all_dev_loss, label='dev')
plt.legend()
plt.plot(all_train_acc, label='train')
plt.plot(all_dev_acc, label='dev')
plt.legend()
toy_X = train_X[:5]
toy_y = train_y[:5]
model = SimpleClassifier(
input_dim=train_X.size(1),
output_dim=10,
hidden_dim=500
)
optimizer = optim.Adam(model.parameters())
batch_size = 20
toy_train_iter = BatchedIterator(toy_X, toy_y, batch_size)
all_train_loss = []
all_dev_loss = []
all_train_acc = []
all_dev_acc = []
n_epochs = 20
for epoch in range(n_epochs):
# training loop
for bi, (batch_x, batch_y) in enumerate(toy_train_iter.iterate_once()):
y_out = model(batch_x)
loss = criterion(y_out, batch_y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
# one train epoch finished, evaluate on the train and the dev set (NOT the test)
train_out = model(toy_X)
train_loss = criterion(train_out, toy_y)
all_train_loss.append(train_loss.item())
train_pred = train_out.max(axis=1)[1]
train_acc = torch.eq(train_pred, toy_y).sum().float() / len(toy_X)
all_train_acc.append(train_acc)
dev_out = model(dev_X)
dev_loss = criterion(dev_out, dev_y)
all_dev_loss.append(dev_loss.item())
dev_pred = dev_out.max(axis=1)[1]
dev_acc = torch.eq(dev_pred, dev_y).sum().float() / len(dev_X)
all_dev_acc.append(dev_acc)
fig, ax = plt.subplots(1, 2, figsize=(12, 5))
ax[0].set_title("Loss")
ax[1].set_title("Accuracy")
ax[0].set_xlabel("epoch")
ax[1].set_xlabel("epoch")
ax[0].plot(all_train_loss, label='train')
ax[0].plot(all_dev_loss, label='dev')
ax[1].plot(all_train_acc, label='train')
ax[1].plot(all_dev_acc, label='dev')
plt.legend()
class SimpleClassifier(nn.Module):
def __init__(self, input_dim, output_dim,
hidden_dim):
super().__init__()
self.input_layer = nn.Linear(
input_dim, hidden_dim)
# let's add some extra layers in a list
self.extra_layers = [
nn.Linear(hidden_dim, 100),
nn.ReLU(),
nn.Linear(100, hidden_dim),
nn.ReLU(),
]
self.relu = nn.ReLU()
self.output_layer = nn.Linear(
hidden_dim, output_dim)
def forward(self, X):
h = self.input_layer(X)
h = self.relu(h)
# passing through extra layers
for layer in self.extra_layers:
h = layer(h)
out = self.output_layer(h)
return out
m = SimpleClassifier(4, 5, 6)
print(m)
print("Parameters:")
for name, param in m.named_parameters():
print("Name: {}, size: {}".format(name, param.size()))
class SimpleClassifier(nn.Module):
def __init__(self, input_dim, output_dim,
hidden_dim):
super().__init__()
self.input_layer = nn.Linear(
input_dim, hidden_dim)
# use ModuleList
self.extra_layers = nn.ModuleList([
nn.Linear(hidden_dim, 100),
nn.ReLU(),
nn.Linear(100, hidden_dim),
nn.ReLU(),
])
self.relu = nn.ReLU()
self.output_layer = nn.Linear(
hidden_dim, output_dim)
def forward(self, X):
h = self.input_layer(X)
h = self.relu(h)
# passing through extra layers
for layer in self.extra_layers:
h = layer(h)
out = self.output_layer(h)
return out
m = SimpleClassifier(4, 5, 6)
print(m)
print("Parameters:")
for name, param in m.named_parameters():
print("Name: {}, size: {}".format(name, param.size()))
use_cuda = torch.cuda.is_available()
print(use_cuda)
if use_cuda:
model = model.cuda()
criterion = criterion.cuda()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Known coordinates of rectangle
Step2: Define the area inside x and y coordinates
Step3: Define boundaries as CLOSED
Step4: Make a new elevation field for display
|
<ASSISTANT_TASK:>
Python Code:
from landlab import RasterModelGrid
import numpy as np
from matplotlib.pyplot import show
%matplotlib inline
mg = RasterModelGrid((10, 10))
min_x = 2.5
max_x = 5.0
min_y = 3.5
max_y = 7.5
x_condition = np.logical_and(mg.x_of_node < max_x, mg.x_of_node > min_x)
y_condition = np.logical_and(mg.y_of_node < max_y, mg.y_of_node > min_y)
my_nodes = np.logical_and(x_condition, y_condition)
mg.status_at_node[my_nodes] = mg.BC_NODE_IS_CLOSED
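# closed boundary nodes are ignored by most Landlab components, effectively masking out the rectangle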
z = mg.add_zeros("topographic__elevation", at="node")
mg.imshow(z, at="node")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Alleles
Step2: Filtering with MTB srm data
Step3: Predict binders
Step4: Compute clusters of promiscuous binders
Step5: overlapping clusters?
|
<ASSISTANT_TASK:>
Python Code:
import os, math, time, pickle, subprocess
from importlib import reload
from collections import OrderedDict
import numpy as np
import pandas as pd
pd.set_option('display.width', 100)
import epitopepredict as ep
from epitopepredict import base, sequtils, plotting, peptutils, analysis
from IPython.display import display, HTML, Image
%matplotlib inline
import matplotlib as mpl
import pylab as plt
mhc2alleles = ep.get_preset_alleles('bovine_like_mhc2')
mhc1alleles = ['BoLA-1:01901', 'BoLA-2:00801','BoLA-2:01201','BoLA-4:02401','BoLA-2:07001',
'BoLA-3:01701','BoLA-1:02301','BoLA-2:01801','BoLA-6:01302','BoLA-3:00201']
#srm data
srm = pd.read_csv('srm_mtb.csv')
#srm = srm[srm.length<=400]
proteome = sequtils.genbank_to_dataframe('MTB-H37Rv.gb',cds=True)
#filter proteins on srm data
proteome = proteome.merge(srm[['locus_tag','concentration']],on='locus_tag',how='inner')
proteome = proteome[proteome.concentration>0]
print (len(proteome))
proteome.head()
P1 = base.get_predictor('tepitope')
b = P1.predict_sequences(proteome, alleles=mhc2alleles, threads=8, path='mtb_tepitope')
P2 = base.get_predictor('netmhciipan')
b = P2.predict_sequences(proteome, alleles=mhc2alleles, threads=8, path='mtb_netmhciipan',overwrite=False)
reload(base)
P3 = base.get_predictor('netmhcpan')
b = P3.predict_sequences(proteome[:200], alleles=mhc1alleles, threads=8, path='mtb_netmhcpan',overwrite=False)
reload(analysis)
predictors = ['netmhcpan','tepitope','netmhciipan']
clusters = {}
for name in predictors:
P = base.get_predictor(name)
path = 'mtb_'+name
P.load(path)
pb=P.promiscuous_binders(n=3)
#print (pb[:20])
cl = analysis.find_clusters(pb, min_binders=3)
clusters[name] = cl
c1=clusters['netmhciipan']
c2=clusters['tepitope']
x = analysis.get_overlaps(c1,c2,how='inside')
print (x)
final = ep.analysis.create_nmers(cl, proteome, key='20mer', length=20, margin=2)
final = analysis.peptide_properties(final, colname='20mer')
print (final.head())
reload(plotting)
#print (P.get_names())
ax = plotting.plot_tracks([P],name='Bla-g-5',cutoff=.95, cutoff_method='default',n=2, legend=True, figsize=(12,5),regions=cl)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Set Configuration
Step2: 3. Enter Storage Bucket Recipe Parameters
Step3: 4. Execute Storage Bucket
|
<ASSISTANT_TASK:>
Python Code:
!pip install git+https://github.com/google/starthinker
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
FIELDS = {
'auth_write':'service', # Credentials used for writing data.
'bucket_bucket':'', # Name of Google Cloud Bucket to create.
'bucket_emails':'', # Comma separated emails.
'bucket_groups':'', # Comma separated groups.
}
print("Parameters Set To: %s" % FIELDS)
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'bucket':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'bucket':{'field':{'name':'bucket_bucket','kind':'string','order':2,'default':'','description':'Name of Google Cloud Bucket to create.'}},
'emails':{'field':{'name':'bucket_emails','kind':'string_list','order':3,'default':'','description':'Comma separated emails.'}},
'groups':{'field':{'name':'bucket_groups','kind':'string_list','order':4,'default':'','description':'Comma separated groups.'}}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, learn about the data
Step2: Let's learn about this dataset, starting from the attributes.
Step3: Great, it follows the CF and ACDD conventions, so we don't need to wander around, but we know what to expect and where to find the information that we will need. For instance, does it conform with some Simple Geometry? If so, which one?
Step4: OK, this is a trajectory, so we expect that each measurement will have a time and position.
Step5: It looks like we are interested in TEMP_CTD_MEAN and SAL_MEAN. Let's confirm that. We can learn a lot by inspecting the Attributes.
Step6: Yes, we can see in the attributes of both variables the standard_name and long_name. We found what we need. Let's simplify our dataset and extract only what we need - temperature and salinity - and call it "tsg".
Step7: Notice that there is a trajectory dimension. Since this is a single trajectory, CF does not require keeping this dimension, but it is good practice. If we later want to merge this dataset with another trajectory - say, another Saildrone from another year - the two trajectories will merge seamlessly.
Step8: Now, if we look at the temperature, it will have only the dimension obs.
Step9: Actual QC
Step10: Great! You just finished QCing the temperature of the whole Saildrone Antarctic mission. It's probably not the best approach to use the gradient test only, but good enough for this example.
Step11: Yes, it seems right. We asked to inspect all variables of that type, i.e. with the standard_name sea_water_temperature.
Step12: Let's improve this. Let's evaluate temperature and salinity at the same time, but now let's add another test, the rate of change.
Step13: Nice, you can choose which tests to apply on each variable, and that includes which parameters to use on each test.
|
<ASSISTANT_TASK:>
Python Code:
import xarray as xr
from cotede.qc import ProfileQC
ds = xr.open_dataset('saildrone-antarctica.nc')
ds.attrs['Conventions']
ds.attrs['featureType']
list(ds.keys())
print(ds["SAL_MEAN"])
print("====")
print(ds["TEMP_CTD_MEAN"])
tsg = ds[['TEMP_CTD_MEAN', 'SAL_MEAN']]
tsg
tsg = tsg.isel(trajectory=0)
tsg['TEMP_CTD_MEAN']
tsg['SAL_MEAN'].attrs
tsg['TEMP_CTD_MEAN'][:10]
pqc = ProfileQC(tsg, {'sea_water_temperature':{'gradient': {'threshold': 5}}})
pqc.flags.keys()
pqc.flags['TEMP_CTD_MEAN']['gradient']
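# A minimal sketch (assumed formula, not CoTeDe's actual code) of what the gradient
# test computes on a 1-D series x: g_i = |x_i - (x_{i+1} + x_{i-1}) / 2|,
# flagging any point where g_i exceeds the threshold.
import numpy as np
x = np.array([1.0, 1.1, 9.0, 1.2, 1.3])
g = np.abs(x[1:-1] - (x[2:] + x[:-2]) / 2)
print(g > 5)  # only the spike in the middle exceeds the threshold of 5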
cfg = {
'sea_water_temperature':{
'gradient': {'threshold': 5},
'rate_of_change': {'threshold': 5}},
'SAL_MEAN': {
'rate_of_change': {'threshold': 2}}
}
pqc = ProfileQC(tsg, cfg)
pqc.flags
import matplotlib.pyplot as plt
plt.figure(figsize=(14,4))
idx = pqc.flags['TEMP_CTD_MEAN']['overall'] <= 2
plt.plot(pqc['time'][idx], pqc['TEMP_CTD_MEAN'][idx], '.')
plt.title('Temperature [$^\circ$C]')
plt.figure(figsize=(14,4))
idx = pqc.flags['SAL_MEAN']['overall'] <= 2
plt.plot(pqc['time'][idx], pqc['SAL_MEAN'][idx], '.')
plt.title('Salinity')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'ec-earth3-cc', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
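# Example (hypothetical values, for illustration only):
# DOC.set_contributor("Jane Doe", "jane.doe@example.org")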
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Other ideas about the datasets
Step3: Now that we have sample files let me try and understand exactly what kind of data we have in our tags
Step8: Now that we have an idea about what kind of data we have in our sample file, let us start by checking whether the keys that we have are fine or not
Step10: Let's load data
Step11: Number of unique users
Step12: Not a lot of users seem to be contributing to India's map
Step14: How many people are contributing
Step15: Out of 953 total users
Step16: Number of users making only 1 contribution
Step17: Number of nodes and ways
Step18: Looking at the other problems in the data
Step19: Country is correct.
Step20: Delhi, DL mean the same thing
Step21: These are a small number of cases, so they were probably done by the same user
Step22: My thoughts were right. Looking at the data, all the cases are in New Delhi so I can map these to New Delhi.
Step23: I was hoping that, since this is a map of Delhi, the cities would be only Delhi, Gurgaon and Faridabad. The spelling and case might differ, but still only these.
|
<ASSISTANT_TASK:>
Python Code:
r'''<pre>
+----------------------------------------------------------------------------+
| key                    | types    | occurrences | percents                 |
| ---------------------- | -------- | ----------- | ------------------------ |
| _id | ObjectId | 4063611 | 100.00000000000000000000 |
| created | Object | 4063611 | 100.00000000000000000000 |
| created.changeset | String | 4063611 | 100.00000000000000000000 |
| created.timestamp | String | 4063611 | 100.00000000000000000000 |
| created.uid | String | 4063611 | 100.00000000000000000000 |
| created.user | String | 4063611 | 100.00000000000000000000 |
| created.version | String | 4063611 | 100.00000000000000000000 |
| id | String | 4063611 | 100.00000000000000000000 |
| type | String | 4063611 | 100.00000000000000000000 |
| pos | Array | 3374750 | 83.04805750353564519628 |
| node_refs | Array | 688861 | 16.95194249646435125101 |
| address | Object | 2733 | 0.06725545333940674553 |
| address.housenumber | String | 1759 | 0.04328662364581649380 |
| address.street | String | 1022 | 0.02515004511996842343 |
| address.city | String | 922 | 0.02268917964834724424 |
| address.postcode | String | 766 | 0.01885022951261821136 |
| address.interpolation | String | 533 | 0.01311641296374086926 |
| address.country | String | 388 | 0.00954815802989016429 |
| address.housename | String | 180 | 0.00442955784891811699 |
| address.state | String | 89 | 0.00219017026974284695 |
| address.full | String | 60 | 0.00147651928297270574 |
| address.inclusion | String | 28 | 0.00068904233205392934 |
| address.buildingnumber | String | 23 | 0.00056599905847287051 |
| address.suburb | String | 12 | 0.00029530385659454114 |
| address.place | String | 8 | 0.00019686923772969410 |
| address.locality | String | 3 | 0.00007382596414863528 |
| address.district | String | 2 | 0.00004921730943242352 |
| address.area | String | 1 | 0.00002460865471621176 |
| address.block_number | String | 1 | 0.00002460865471621176 |
| address.city_1 | String | 1 | 0.00002460865471621176 |
| address.province | String | 1 | 0.00002460865471621176 |
| address.street_1 | String | 1 | 0.00002460865471621176 |
| address.street_2 | String | 1 | 0.00002460865471621176 |
| address.street_3 | String | 1 | 0.00002460865471621176 |
| address.subdistrict | String | 1 | 0.00002460865471621176 |
| address.unit | String | 1 | 0.00002460865471621176 |
+----------------------------------------------------------------------------+'''
None
from collections import defaultdict
import xml.etree.cElementTree as ET
import re
def get_element(osm_file, tags=('node', 'way', 'relation')):
    """Yield element if it is the right type of tag
    Reference:
    http://stackoverflow.com/questions/3095434/inserting-newlines-in-xml-file-generated-via-xml-etree-elementtree-in-python
    """
context = iter(ET.iterparse(osm_file, events=('start', 'end')))
_, root = next(context)
for event, elem in context:
if tags is not None and elem.tag not in tags:
continue
if event == 'end':
yield elem
root.clear()
def take_sample(k, osm_file, sample_file):
with open(sample_file, 'wb') as output:
output.write('<?xml version="1.0" encoding="UTF-8"?>\n')
output.write('<osm>\n ')
# Write every kth top level element
for i, element in enumerate(get_element(osm_file)):
if i % k == 0:
# print "i is {}".format(i)
output.write(ET.tostring(element, encoding='utf-8'))
output.write('</osm>')
#take_sample(10, "new-delhi_india.osm", "sample_10.osm")
#OSM_FILE = "new-delhi_india.osm"
OSM_FILE = "sample_100.osm"
def get_tag_types():
tag_types = set()
for element in get_element(OSM_FILE, tags=None):
tag_types.add(element.tag)
return tag_types
#get_tag_types()
def tag_attributes(osm_file, tags):
for element in get_element(osm_file, tags):
print element.attrib
#tag_attributes(OSM_FILE, ('node',))
#tag_attributes(OSM_FILE, ('nd',))
#tag_attributes(OSM_FILE, ('member',))
#tag_attributes(OSM_FILE, ('tag',))
#tag_attributes(OSM_FILE, ('relation',))
#tag_attributes(OSM_FILE, ('way',))
import re
lower = re.compile(r'^([a-z]|_)*$')
lower_colon = re.compile(r'^([a-z]|_)*:([a-z]|_)*$')
problem_chars = re.compile(r'[=\+/&<>;\'"\?%#$@\,\. \t\r\n]')
"""
Your task is to explore the data a bit more.
Before you process the data and add it into your database, you should check the
"k" value for each "<tag>" and see if there are any potential problems.
We have provided you with 3 regular expressions to check for certain patterns
in the tags. As we saw in the quiz earlier, we would like to change the data
model and expand the "addr:street" type of keys to a dictionary like this:
{"address": {"street": "Some value"}}
So, we have to see if we have such tags, and if we have any tags with
problematic characters.
Please complete the function 'key_type', such that we have a count of each of
four tag categories in a dictionary:
  "lower", for tags that contain only lowercase letters and are valid,
  "lower_colon", for otherwise valid tags with a colon in their names,
  "problemchars", for tags with problematic characters, and
  "other", for other tags that do not fall into the other three categories.
See the 'process_map' and 'test' functions for examples of the expected format.
"""
def _key_type(element, keys):
if element.tag == "tag":
k = element.attrib['k']
if problem_chars.search(k):
print "problemchars {}".format(k)
keys['problemchars'] += 1
elif lower_colon.search(k):
keys['lower_colon'] += 1
elif lower.search(k):
keys['lower'] += 1
else:
#print "other {}".format(k)
keys['other'] += 1
return keys
def keys_type():
keys = {"lower": 0, "lower_colon": 0, "problemchars": 0, "other": 0}
for element in get_element(OSM_FILE, ('tag',)):
keys = _key_type(element, keys)
return keys
keys_type()
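# A quick illustration (with made-up sample keys) of how the three regexes classify tags:
for k in ['highway', 'addr:street', 'addr street', 'FIXME']:
    if problem_chars.search(k):
        label = 'problemchars'
    elif lower_colon.search(k):
        label = 'lower_colon'
    elif lower.search(k):
        label = 'lower'
    else:
        label = 'other'
    print(k, '->', label)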
"""
Your task is to explore the data a bit more.
The first task is a fun one - find out how many unique users
have contributed to the map in this particular area!
The function process_map should return a set of unique user IDs ("uid")
"""
def unique_user_contributed(tags = ('node','relation',)):
users = set()
for element in get_element(OSM_FILE, tags):
users.add(element.attrib['user'])
return users
#len(unique_user_contributed())
CREATED = ["version", "changeset", "timestamp", "user", "uid"]
def ensure_key_value(_dict, key, val):
if key not in _dict:
_dict[key] = val
return _dict[key]
STATE_MAPPING = {
'delhi': 'DL',
'uttar pradesh': 'UP',
'u.p.': 'UP',
'ncr': 'DL'
}
CITY_MAPPING = {
'gurugram': 'Gurgaon',
'gurgram': 'Gurgaon',
'faridabad': 'Faridabad',
'delh': 'Delhi',
'new delhi': 'Delhi',
'neew delhi': 'Delhi',
'delhi': 'Delhi',
'old delhi': 'Delhi',
'noida': 'Noida',
'greater noida': 'Noida',
'ghaziabad': 'Ghaziabad',
'bahadurgarh': 'Bahadurgarh',
'meerut': 'Meerut'
}
CITY_TO_STATE = {
'Gurgaon': 'HR',
'Faridabad': 'HR',
'Delhi': 'DL',
'Noida': 'UP',
'Ghaziabad': 'UP',
'Bahadurgarh': 'HR',
'Meerut': 'UP'
}
def fix_address_value(address_type, value):
def if_lower_in_mapping_then_replace(value, mapping):
if value.lower() in mapping:
value = mapping[value.lower()]
if value not in set(mapping.values()):
#print "{} = {}".format(address_type, value)
pass
return value
if address_type == 'state':
value = if_lower_in_mapping_then_replace(value, STATE_MAPPING)
elif address_type == 'city':
value = if_lower_in_mapping_then_replace(value, CITY_MAPPING)
return value
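# Sanity check (hypothetical inputs): the mappings normalize spelling and case variants
print(fix_address_value('city', 'gurugram'))        # -> 'Gurgaon'
print(fix_address_value('state', 'uttar pradesh'))  # -> 'UP'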
def ensure_address(element_map):
if 'address' not in element_map:
element_map['address'] = {
'country': 'IN'
}
return element_map['address']
def map_city_to_states(address_map):
if 'city' in address_map:
city = address_map['city']
if city in CITY_TO_STATE:
address_map['state'] = CITY_TO_STATE[city]
def fix_address(element_map):
    """After we are done with general processing of individual address fields
    we process it as a whole"""
address_map = ensure_address(element_map)
map_city_to_states(address_map)
def process_tags(element, node):
for tag in element.iter('tag'):
key = tag.attrib['k']
value = tag.attrib['v']
if problem_chars.search(key):
continue
if key.startswith("addr:"):
_parts = key.split(":")
if len(_parts) > 2:
continue
obj = ensure_key_value(node, 'address', {})
address_type = _parts[1]
value = fix_address_value(address_type, value)
obj[address_type] = value
else:
node[key] = value
fix_address(node)
def shape_element(element):
    """Takes an element and shapes it to be ready for insertion into the database"""
node = {}
if element.tag == "node" or element.tag == "way":
node['type'] = element.tag
process_tags(element, node)
for nd in element.iter('nd'):
obj = ensure_key_value(node, 'node_refs', [])
obj.append(nd.attrib['ref'])
for key, value in element.attrib.iteritems():
if key in CREATED:
ensure_key_value(node, 'created', {})
node['created'][key] = value
elif key == 'lat':
ensure_key_value(node, 'pos', [0, 0])
node['pos'][0] = float(value)
elif key == 'lon':
ensure_key_value(node, 'pos', [0, 0])
node['pos'][1] = float(value)
else:
node[key] = value
return node
else:
return None
for element in get_element(OSM_FILE):
node = shape_element(element)
import pprint
def get_client():
from pymongo import MongoClient
return MongoClient('mongodb://localhost:27017/')
def get_collection():
collection = get_client().examples.osm
return collection
import codecs
import json
def process_map(file_in, pretty = False):
    """Saves file as a json ready for insertion into mongoDB using mongoimport"""
# You do not need to change this file
file_out = "{0}.json".format(file_in)
#data = []
with codecs.open(file_out, "w") as fo:
for element in get_element(file_in):
el = shape_element(element)
if el:
#data.append(el)
if pretty:
fo.write(json.dumps(el, indent=2)+"\n")
else:
fo.write(json.dumps(el) + "\n")
#return data
process_map(OSM_FILE)
collection = get_collection()
collection.count()
len(collection.distinct("created.user"))
# some helper functions for running mongo DB queries
def aggregate_to_list(collection, query):
result = collection.aggregate(query)
return list(r for r in result)
def aggregate_and_show(collection, query, limit = True):
_query = query[:]
if limit:
_query.append({"$limit": 5})
pprint.pprint(aggregate_to_list(collection, query))
def aggregate(query):
aggregate_and_show(collection, query, False)
def aggregate_distincts(field, limit = False):
query = [
{"$match": {field: {"$exists": 1}}},
{"$group": {"_id": "$" + field,
"count": {"$sum": 1}}},
{"$sort": {"count": -1}}
]
if limit:
query.append({"$limit": 10})
aggregate(query)
def contribution_of_top(n):
result = aggregate_to_list(collection, [
{"$group": {"_id": "$created.user",
"count": {"$sum": 1}}},
{"$sort": {"count": -1}},
{"$limit": n},
{"$group": {"_id": 1,
"count": {"$sum": "$count"}}}
])
return result[0]['count']
def contributions_of(top):
    """Given a list of numbers, returns a dictionary of contributions for those numbers of users"""
result = {}
for count in top:
result[count] = float(contribution_of_top(count) * 100) / collection.count()
return result
pprint.pprint(contributions_of([1, 5, 15, 30, 50]))
aggregate([
{"$group": {"_id": "$created.user",
"count": {"$sum": 1}}},
{"$sort": {"count": -1}},
{"$limit": 10}
])
aggregate([
{"$group": {"_id": "$created.user",
"count": {"$sum": 1}}},
{"$group": {"_id": "$count",
"num_users": {"$sum": 1}}},
{"$sort": {"_id": 1}},
{"$limit": 1}
])
collection.count({"type":"node"})
collection.count({"type":"way"})
collection.distinct("address.country")
collection.distinct("address.state")
ncr_cases = list(r for r in collection.find({"address.state": "NCR"}))
ncr_cases
len(ncr_cases)
set(element['created']['user'] for element in ncr_cases)
collection.distinct("address.city")
collection.distinct("address.street")
aggregate_distincts("address.country")
aggregate_distincts("address.state")
aggregate_distincts("address.city", True)
aggregate_distincts("address.street", True)
aggregate_to_list(collection, [
{"$match": {"address.city": 'Hira Colony, Siraspur, Delhi'}}
])
aggregate_distincts("amenity", True)
aggregate_distincts("landuse", True)
aggregate_distincts("place")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run model
Step2: Questions
Step3: Now let's look at the variations of the respective clusters, nmf topic and epithets
Step4: Visualize topic clusters
Step5: Kmeans based on the tfidf matrix
Step6: Kmeans based on nmf
|
<ASSISTANT_TASK:>
Python Code:
import datetime as dt
import os
import time
from cltk.corpus.greek.tlg.parse_tlg_indices import get_epithet_index
from cltk.corpus.greek.tlg.parse_tlg_indices import get_epithets
from cltk.corpus.greek.tlg.parse_tlg_indices import select_authors_by_epithet
from cltk.corpus.greek.tlg.parse_tlg_indices import get_epithet_of_author
from cltk.corpus.greek.tlg.parse_tlg_indices import get_id_author
from cltk.stop.greek.stops import STOPS_LIST as greek_stops
from cltk.tokenize.word import nltk_tokenize_words
from greek_accentuation.characters import base
import pandas # pip install pandas
from sklearn.decomposition import NMF # pip install scikit-learn scipy
from sklearn.externals import joblib
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import CountVectorizer
def stream_lemmatized_files(corpus_dir):
# return all docs in a dir
user_dir = os.path.expanduser('~/cltk_data/user_data/' + corpus_dir)
files = os.listdir(user_dir)
for file in files:
filepath = os.path.join(user_dir, file)
with open(filepath) as fo:
yield fo.read()
def map_file_order_to_name(corpus_dir):
# read contents of dir, order and their tlg name
user_dir = os.path.expanduser('~/cltk_data/user_data/' + corpus_dir)
files = os.listdir(user_dir)
map_id_author = get_id_author() # {…, '1648': 'Pyrgion Hist.', '4017': 'Syrianus Phil.', …}
map_count_name = {}
for count, file in enumerate(files):
file_id = str(file[3:-4])
name = map_id_author[file_id]
map_count_name[count] = name
return map_count_name
map_file_count_name = map_file_order_to_name('tlg_lemmatized_no_accents_no_stops')
t0 = dt.datetime.utcnow()
data_samples = []
for text in stream_lemmatized_files('tlg_lemmatized_no_accents_no_stops'):
data_samples.append(text)
print('... finished in {}'.format(dt.datetime.utcnow() - t0))
print('Number of texts:', len(data_samples))
# view all epithets:
get_epithets()
t0 = dt.datetime.utcnow()
# tf-idf features
n_samples = 2000
n_features = 1000 # TODO: increase
n_topics = len(get_epithets())
n_top_words = 20
tfidf_vectorizer = TfidfVectorizer(max_df=1.0,
min_df=1,
max_features=n_features,
stop_words=None)
tfidf = tfidf_vectorizer.fit_transform(data_samples)
# save features
vector_fp = os.path.expanduser('~/cltk_data/user_data/tlg_lemmatized_no_accents_no_stops_tfidf_{0}features.pickle'.format(n_features))
joblib.dump(tfidf, vector_fp)
print('... finished in {}'.format(dt.datetime.utcnow() - t0))
# time on good server:
# 1000 features: 0:01:22
t0 = dt.datetime.utcnow()
print("Fitting the NMF model with tf-idf features, "
"n_samples=%d and n_features=%d..."
% (n_samples, n_features))
nmf = NMF(n_components=n_topics, random_state=1,
alpha=.1, l1_ratio=.5).fit(tfidf)
# save model
nmf_fp = os.path.expanduser('~/cltk_data/user_data/tlg_lemmatized_no_accents_no_stops_tfidf_{0}features_nmf.pickle'.format(n_features))
joblib.dump(nmf, nmf_fp)
print('... finished in {}'.format(dt.datetime.utcnow() - t0))
def print_top_words(model, feature_names, n_top_words):
for topic_id, topic in enumerate(model.components_):
print('Topic #{}:'.format(int(topic_id)))
print(''.join([feature_names[i] + ' ' + str(round(topic[i], 2))
+' | ' for i in topic.argsort()[:-n_top_words - 1:-1]]))
print()
print("Topics in NMF model:")
tfidf_feature_names = tfidf_vectorizer.get_feature_names()
print_top_words(nmf, tfidf_feature_names, n_top_words)
tfidf.shape
doc_topic_distrib = nmf.transform(tfidf) # numpy.ndarray
doc_topic_distrib.shape
df = pandas.DataFrame(doc_topic_distrib)
df = df.rename(map_file_count_name)
df.head(100)
# for each topic (col), which author (row) has the highest value?
# TODO: get top 5 authors
for count in range(n_topics):
print('Top author of topic {0}: {1}'.format(count, df[count].idxmax()))
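# Addressing the TODO above: one way to list the top 5 authors per topic (a sketch)
for count in range(3):  # first few topics only, for brevity
    print('Topic {0} top authors:'.format(count))
    print(df[count].nlargest(5))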
# Now, transpose df and get top topic of each author
# for each author (col), which topic (row) has the highest value?
# TODO: get top 5 topics per author
df_t = df.T
df_t.head(10)
map_name_epithet_id = {}
for curr_name in df_t.columns:
print(curr_name)
try:
top_topic = int(df_t[curr_name].idxmax())
except TypeError: # there are some duplicate names, just take one value for now
top_topic = int(df_t[curr_name].idxmax().iloc[0])
print(' NMF topic:', top_topic)
for _id, name in get_id_author().items():
if curr_name == name:
epithet = get_epithet_of_author(_id)
print(' Traditional epithet:', epithet)
map_name_epithet_id[name] = {'id': _id,
'top_topic': top_topic,
'epithet': epithet}
print()
map_name_epithet_id['Silenus Trag.']
# Group by epithet, collect topics
# {<epithet>: [<topics>]}
from collections import defaultdict
map_epithet_topics = defaultdict(list)
for name, _dict in map_name_epithet_id.items():
epithet = _dict['epithet']
top_topic = _dict['top_topic']
map_epithet_topics[epithet].append(top_topic)
# import pprint
# pp = pprint.PrettyPrinter(indent=4)
# pp.pprint(dict(map_epithet_topics))
print(dict(map_epithet_topics))
# which epithet has the most topics associated with it?
map_epithet_count_topics = {}
for epithet, topic_list in map_epithet_topics.items():
map_epithet_count_topics[epithet] = len(topic_list)
sorted(map_epithet_count_topics.items(), key=lambda x:x[1], reverse=True)
# Group by topic, collect epithets
# {<topic>: [<epithets>]}
from collections import defaultdict
map_topic_epithets = defaultdict(list)
for name, _dict in map_name_epithet_id.items():
epithet = _dict['epithet']
top_topic = _dict['top_topic']
map_topic_epithets[top_topic].append(epithet)
dict(map_topic_epithets)
# least, most cohesive epithets
# which epithet has the most topics associated with it?
map_topics_count_epithet = {}
for topic, epithet_list in map_topic_epithets.items():
map_topics_count_epithet[topic] = len(epithet_list)
# map_topics_count_epithet
sorted_list_tuples = sorted(map_topics_count_epithet.items(), key=lambda x:x[1], reverse=True)
for topic_freq in sorted_list_tuples:
topic_number = str(topic_freq[0])
doc_freq = str(topic_freq[1])
print('Topic #{0} has {1} author-documents in it'.format(topic_number, doc_freq))
# also do this all w/ countvectorizer?
# http://scikit-learn.org/stable/modules/clustering.html
dataset_array = df.values
print(dataset_array.dtype) # kmeans needs to be homogeneous data type (here, float64)
print(dataset_array)
# do I need to normalize
# sklearn.preprocessing.StandardScaler
from sklearn import preprocessing
# http://scikit-learn.org/stable/modules/preprocessing.html
# first load scaler and train on given data set
scaler = preprocessing.StandardScaler().fit(df)
scaler.mean_
scaler.scale_
t0 = dt.datetime.utcnow()
# actually do normalization; can be reused for eg a training set
df_scaled = pandas.DataFrame(scaler.transform(df))
print('... finished in {}'.format(dt.datetime.utcnow() - t0))
from sklearn import cluster
# Convert DataFrame to matrix (numpy.ndarray)
matrix = df_scaled.as_matrix()
km = cluster.KMeans(n_clusters=n_topics)
km.fit(matrix)
# Get cluster assignment labels
labels = km.labels_ # these are the topics 0-54; array([53, 53, 16, ..., 42, 16, 13]
# Format results as a DataFrame
df_clusters = pandas.DataFrame([df_scaled.index, labels]).T # add author names to the 0 col
df_clusters.head(5)
%matplotlib inline
import matplotlib.pyplot as plt # pip install matplotlib
import matplotlib
matplotlib.style.use('ggplot')
# from pandas.tools.plotting import table
# this is a cluseter of the already-clustered kmeans topics; not very informative
plt.figure()
df_clusters.plot.scatter(x=0, y=1) # y is topics no., x is doc id
# try clustering the original tfidf
# tfidf_dense = tfidf.toarray()
scaler = preprocessing.StandardScaler(with_mean=False).fit(tfidf) # either with_mean=False or make dense
# save scaler
scaler_fp = os.path.expanduser('~/cltk_data/user_data/tlg_lemmatized_no_accents_no_stops_tfidf_{0}features_scaler.pickle'.format(n_features))
joblib.dump(df_scaled, scaler_fp)
import numpy as np
# direct Pandas conversion of sparse scipy matrix not supported
# Following http://stackoverflow.com/a/17819427
# df_scaled_tfidf = pandas.DataFrame(scaler.transform(tfidf))
# df_scaled_tfidf = pandas.DataFrame()
scaler_tfidf = scaler.transform(tfidf) # sparse matrix of type '<class 'numpy.float64'>
df_scaled_tfidf = pandas.SparseDataFrame([pandas.SparseSeries(scaler_tfidf[i].toarray().ravel()) for i in np.arange(scaler_tfidf.shape[0])])
# type(df) # pandas.sparse.frame.SparseDataFrame
# Convert DataFrame to matrix (numpy.ndarray)
matrix_tfidf = df_scaled_tfidf.as_matrix()
km_tfidf = cluster.KMeans(n_clusters=n_topics)
km_tfidf.fit(matrix_tfidf)
# Get cluster assignment labels
labels = km_tfidf.labels_ # these are the topics 0-54; array([53, 53, 16, ..., 42, 16, 13]
# Format results as a DataFrame
df_clusters_tfidf = pandas.DataFrame([df_scaled_tfidf.index, labels]).T # add author names to the 0 col
df_clusters_tfidf.head(10)
plt.figure()
df_clusters_tfidf.plot.scatter(x=0, y=1) # y is topics no., x is doc id
nmf_array = nmf.components_
t0 = dt.datetime.utcnow()
# nmf_dense = nmf_array.toarray()
scaler = preprocessing.StandardScaler().fit(nmf_array) # either with_mean=False or make dense
# save features
tfidf_matrix_scaler_fp = os.path.expanduser('~/cltk_data/user_data/tlg_lemmatized_no_accents_no_stops_tfidf_matrix_{0}features.pickle'.format(n_features))
joblib.dump(scaler, tfidf_matrix_scaler_fp)
print('... finished in {}'.format(dt.datetime.utcnow() - t0))
df_scaled_nmf = pandas.DataFrame(scaler.transform(nmf_array))
# Convert DataFrame to matrix (numpy.ndarray)
matrix_nmf = df_scaled_nmf.as_matrix()
km_nmf = cluster.KMeans(n_clusters=n_topics)
km_nmf.fit(matrix_nmf)
# Get cluster assignment labels
labels = km_nmf.labels_ # these are the clusters 0-54; array([ 1, 4, 11, 14, 28, 9, 30,
# Format results as a DataFrame
df_clusters_nmf = pandas.DataFrame([df_scaled_nmf.index, labels]).T # add author names to the 0 col
df_clusters_nmf.head(10)
plt.figure()
df_clusters_nmf.plot.scatter(x=0, y=1) # y is topics no., x is doc id
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h3>The http request response cycle</h3>
Step2: <h3>Set up the BeautifulSoup object</h3>
Step3: <h3>BS4 functions</h3>
Step4: <h4>find finds the first instance of a specified tag</h4>
Step5: <h4>bs4 functions can be recursively applied on elements</h4>
Step6: <h4>Both find as well as find_all can be qualified by css selectors</h4>
Step7: <h4>get_text() returns the marked up text (the content) enclosed in a tag.</h4>
Step8: <h4>get returns the value of a tag attribute</h4>
Step9: <h1>A function that returns a list containing recipe names, recipe descriptions (if any) and recipe urls</h1>
Step10: <h2>Let's write a function that</h2>
Step11: <h2>Construct a list of dictionaries for all recipes</h2>
Step12: <h1>Logging in to a web server</h1>
Step13: <h3>Construct an object that contains the data to be sent to the login page</h3>
Step14: <h3>get the value of the login token</h3>
Step15: <h3>Setup a session, login, and get data</h3>
|
<ASSISTANT_TASK:>
Python Code:
import requests
from bs4 import BeautifulSoup
url = "http://www.epicurious.com/search/Tofu Chili"
response = requests.get(url)
if response.status_code == 200:
print("Success")
else:
print("Failure")
keywords = input("Please enter the things you want to see in a recipe")
url = "http://www.epicurious.com/search/" + keywords
response = requests.get(url)
if response.status_code == 200:
print("Success")
else:
print("Failure")
results_page = BeautifulSoup(response.content,'lxml')
print(results_page.prettify())
#print(results_page)
all_a_tags = results_page.find_all('a')
print(type(all_a_tags))
all_a_tags[0:5]
div_tag = results_page.find('div')
print(div_tag)
type(div_tag)
div_tag.find('a')
#When using this method and looking for 'class' use 'class_' (because class is a reserved word in python)
#Note that we get a list back because find_all returns a list
type(results_page.find_all('article',class_="recipe-content-card"))
#Since we're using a string as the key, the fact that class is a reserved word is not a problem
#We get an element back because find returns an element
type(results_page.find('article',{'class':'recipe-content-card'}))
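# BeautifulSoup also supports genuine CSS selectors via select(); this is an
# equivalent query to the class-based lookup above
results_page.select('article.recipe-content-card')[:1]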
results_page.find('article',{'class':'recipe-content-card'}).get_text()
recipe_tag = results_page.find('article',{'class':'recipe-content-card'})
recipe_link = recipe_tag.find('a')
print("a tag:",recipe_link)
link_url = recipe_link.get('href')
print("link url:",link_url)
print(type(link_url))
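# A more robust way to build the absolute link (standard library) than string concatenation:
from urllib.parse import urljoin
print(urljoin(url, link_url))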
def get_recipes(keywords):
recipe_list = list()
import requests
from bs4 import BeautifulSoup
url = "http://www.epicurious.com/search/" + keywords
response = requests.get(url)
if not response.status_code == 200:
return None
try:
results_page = BeautifulSoup(response.content,'lxml')
recipes = results_page.find_all('article',class_="recipe-content-card")
for recipe in recipes:
recipe_link = "http://www.epicurious.com" + recipe.find('a').get('href')
recipe_name = recipe.find('a').get_text()
try:
recipe_description = recipe.find('p',class_='dek').get_text()
except:
recipe_description = ''
recipe_list.append((recipe_name,recipe_link,recipe_description))
return recipe_list
except:
return None
get_recipes("Tofu chili")
get_recipes('Nothing')
recipe_link = "http://www.epicurious.com" + '/recipes/food/views/spicy-lemongrass-tofu-233844'
def get_recipe_info(recipe_link):
recipe_dict = dict()
import requests
from bs4 import BeautifulSoup
try:
response = requests.get(recipe_link)
if not response.status_code == 200:
return recipe_dict
result_page = BeautifulSoup(response.content,'lxml')
ingredient_list = list()
prep_steps_list = list()
for ingredient in result_page.find_all('li',class_='ingredient'):
ingredient_list.append(ingredient.get_text())
for prep_step in result_page.find_all('li',class_='preparation-step'):
prep_steps_list.append(prep_step.get_text().strip())
recipe_dict['ingredients'] = ingredient_list
recipe_dict['preparation'] = prep_steps_list
return recipe_dict
except:
return recipe_dict
get_recipe_info(recipe_link)
def get_all_recipes(keywords):
results = list()
all_recipes = get_recipes(keywords)
for recipe in all_recipes:
recipe_dict = get_recipe_info(recipe[1])
recipe_dict['name'] = recipe[0]
recipe_dict['description'] = recipe[2]
results.append(recipe_dict)
return(results)
get_all_recipes("Tofu chili")
with open('wikidata.txt') as f:
contents = f.read().split('\n')
username = contents[0]
password = contents[1]
payload = {
'wpName': username,
'wpPassword': password,
'wploginattempt': 'Log in',
'wpEditToken': "+\\",
'title': "Special:UserLogin",
'authAction': "login",
'force': "",
'wpForceHttps': "1",
'wpFromhttp': "1",
#'wpLoginToken': '', # We need to read this from the page
}
def get_login_token(response):
soup = BeautifulSoup(response.text, 'lxml')
token = soup.find('input',{'name':"wpLoginToken"}).get('value')
return token
with requests.session() as s:
response = s.get('https://en.wikipedia.org/w/index.php?title=Special:UserLogin&returnto=Main+Page')
payload['wpLoginToken'] = get_login_token(response)
#Send the login request
response_post = s.post('https://en.wikipedia.org/w/index.php?title=Special:UserLogin&action=submitlogin&type=login',
data=payload)
#Get another page and check if we’re still logged in
response = s.get('https://en.wikipedia.org/wiki/Special:Watchlist')
data = BeautifulSoup(response.content,'lxml')
print(data.find('div',class_='mw-changeslist').get_text())
dir(requests.session())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the NYC Taxi Fares dataset
Step3: Conveniently, 2/3 of the data have fares under \$10, and 1/3 have fares \$10 and above.
Step4: Add a datetime column and derive useful statistics
Step5: Separate categorical from continuous columns
Step6: <div class="alert alert-info"><strong>NOTE
Step7: We can see that <tt>df['Hour']</tt> is a categorical feature by displaying some of the rows
Step8: Here our categorical names are the integers 0 through 23, for a total of 24 unique categories. These values <em>also</em> correspond to the codes assigned to each name.
Step9: <div class="alert alert-info"><strong>NOTE
Step10: <div class="alert alert-info"><strong>NOTE
Step11: We can feed all of our continuous variables into the model as a tensor. We're not normalizing the values here; we'll let the model perform this step.
Step12: Note
Step13: Set an embedding size
Step14: Define a TabularModel
Step15: <div class="alert alert-danger"><strong>This is how the categorical embeddings are passed into the layers.</strong></div>
Step16: Define loss function & optimizer
Step17: Perform train/test splits
Step18: Train the model
Step19: Plot the loss function
Step20: Validate the model
Step21: Now let's look at the first 50 predicted values
Step22: Save the model
Step23: Loading a saved model (starting from scratch)
Step24: Now define the model. Before we can load the saved settings, we need to instantiate our TabularModel with the parameters we used before (embedding sizes, number of continuous columns, output size, layer sizes, and dropout layer p-value).
Step25: Once the model is set up, loading the saved settings is a snap.
Step26: Next we'll define a function that takes in new parameters from the user, performs all of the preprocessing steps above, and passes the new data through our trained model.
Step27: Feed new data through the trained model
|
<ASSISTANT_TASK:>
Python Code:
import torch
import torch.nn as nn
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv('../Data/NYCTaxiFares.csv')
df.head()
df['fare_class'].value_counts()
def haversine_distance(df, lat1, long1, lat2, long2):
    """Calculates the haversine distance between 2 sets of GPS coordinates in df"""
r = 6371 # average radius of Earth in kilometers
phi1 = np.radians(df[lat1])
phi2 = np.radians(df[lat2])
delta_phi = np.radians(df[lat2]-df[lat1])
delta_lambda = np.radians(df[long2]-df[long1])
a = np.sin(delta_phi/2)**2 + np.cos(phi1) * np.cos(phi2) * np.sin(delta_lambda/2)**2
c = 2 * np.arctan2(np.sqrt(a), np.sqrt(1-a))
d = (r * c) # in kilometers
return d
df['dist_km'] = haversine_distance(df,'pickup_latitude', 'pickup_longitude', 'dropoff_latitude', 'dropoff_longitude')
df.head()
df['EDTdate'] = pd.to_datetime(df['pickup_datetime'].str[:19]) - pd.Timedelta(hours=4)
df['Hour'] = df['EDTdate'].dt.hour
df['AMorPM'] = np.where(df['Hour']<12,'am','pm')
df['Weekday'] = df['EDTdate'].dt.strftime("%a")
df.head()
df['EDTdate'].min()
df['EDTdate'].max()
df.columns
cat_cols = ['Hour', 'AMorPM', 'Weekday']
cont_cols = ['pickup_latitude', 'pickup_longitude', 'dropoff_latitude', 'dropoff_longitude', 'passenger_count', 'dist_km']
y_col = ['fare_class'] # this column contains the labels
# Convert our three categorical columns to category dtypes.
for cat in cat_cols:
df[cat] = df[cat].astype('category')
df.dtypes
df['Hour'].head()
df['AMorPM'].head()
df['AMorPM'].cat.categories
df['AMorPM'].head().cat.codes
df['Weekday'].cat.categories
df['Weekday'].head().cat.codes
hr = df['Hour'].cat.codes.values
ampm = df['AMorPM'].cat.codes.values
wkdy = df['Weekday'].cat.codes.values
cats = np.stack([hr, ampm, wkdy], 1)
cats[:5]
# Convert categorical variables to a tensor
cats = torch.tensor(cats, dtype=torch.int64)
# this syntax is ok, since the source data is an array, not an existing tensor
cats[:5]
# Convert continuous variables to a tensor
conts = np.stack([df[col].values for col in cont_cols], 1)
conts = torch.tensor(conts, dtype=torch.float)
conts[:5]
conts.type()
# Convert labels to a tensor
y = torch.tensor(df[y_col].values).flatten()
y[:5]
cats.shape
conts.shape
y.shape
# This will set embedding sizes for Hours, AMvsPM and Weekdays
cat_szs = [len(df[col].cat.categories) for col in cat_cols]
emb_szs = [(size, min(50, (size+1)//2)) for size in cat_szs]
emb_szs
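# Sanity check: with 24 hours, 2 AM/PM values and 7 weekdays, the rule above gives
# emb_szs == [(24, 12), (2, 1), (7, 4)], i.e. (category count, embedding dim) pairs.
# These are the same values hard-coded later when the saved model is reloaded from scratch.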
# This is our source data
catz = cats[:4]
catz
# This is passed in when the model is instantiated
emb_szs
# This is assigned inside the __init__() method
selfembeds = nn.ModuleList([nn.Embedding(ni, nf) for ni,nf in emb_szs])
selfembeds
list(enumerate(selfembeds))
# This happens inside the forward() method
embeddingz = []
for i,e in enumerate(selfembeds):
embeddingz.append(e(catz[:,i]))
embeddingz
# We concatenate the embedding sections (12,1,4) into one (17)
z = torch.cat(embeddingz, 1)
z
# This was assigned under the __init__() method
selfembdrop = nn.Dropout(.4)
z = selfembdrop(z)
z
class TabularModel(nn.Module):
def __init__(self, emb_szs, n_cont, out_sz, layers, p=0.5):
super().__init__()
self.embeds = nn.ModuleList([nn.Embedding(ni, nf) for ni,nf in emb_szs])
self.emb_drop = nn.Dropout(p)
self.bn_cont = nn.BatchNorm1d(n_cont)
layerlist = []
n_emb = sum((nf for ni,nf in emb_szs))
n_in = n_emb + n_cont
for i in layers:
layerlist.append(nn.Linear(n_in,i))
layerlist.append(nn.ReLU(inplace=True))
layerlist.append(nn.BatchNorm1d(i))
layerlist.append(nn.Dropout(p))
n_in = i
layerlist.append(nn.Linear(layers[-1],out_sz))
self.layers = nn.Sequential(*layerlist)
def forward(self, x_cat, x_cont):
embeddings = []
for i,e in enumerate(self.embeds):
embeddings.append(e(x_cat[:,i]))
x = torch.cat(embeddings, 1)
x = self.emb_drop(x)
x_cont = self.bn_cont(x_cont)
x = torch.cat([x, x_cont], 1)
x = self.layers(x)
return x
torch.manual_seed(33)
model = TabularModel(emb_szs, conts.shape[1], 2, [200,100], p=0.4) # out_sz = 2
model
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
batch_size = 60000
test_size = 12000
cat_train = cats[:batch_size-test_size]
cat_test = cats[batch_size-test_size:batch_size]
con_train = conts[:batch_size-test_size]
con_test = conts[batch_size-test_size:batch_size]
y_train = y[:batch_size-test_size]
y_test = y[batch_size-test_size:batch_size]
len(cat_train)
len(cat_test)
import time
start_time = time.time()
epochs = 300
losses = []
for i in range(epochs):
i+=1
y_pred = model(cat_train, con_train)
loss = criterion(y_pred, y_train)
losses.append(loss)
# a neat trick to save screen space:
if i%25 == 1:
print(f'epoch: {i:3} loss: {loss.item():10.8f}')
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f'epoch: {i:3} loss: {loss.item():10.8f}') # print the last line
print(f'\nDuration: {time.time() - start_time:.0f} seconds') # print the time elapsed
plt.plot(range(epochs), losses)
plt.ylabel('Cross Entropy Loss')
plt.xlabel('epoch');
# TO EVALUATE THE ENTIRE TEST SET
with torch.no_grad():
y_val = model(cat_test, con_test)
loss = criterion(y_val, y_test)
print(f'CE Loss: {loss:.8f}')
rows = 50
correct = 0
print(f'{"MODEL OUTPUT":26} ARGMAX Y_TEST')
for i in range(rows):
print(f'{str(y_val[i]):26} {y_val[i].argmax():^7}{y_test[i]:^7}')
if y_val[i].argmax().item() == y_test[i]:
correct += 1
print(f'\n{correct} out of {rows} = {100*correct/rows:.2f}% correct')
# Make sure to save the model only after the training has happened!
if len(losses) == epochs:
torch.save(model.state_dict(), 'TaxiFareClssModel.pt')
else:
print('Model has not been trained. Consider loading a trained model instead.')
import torch
import torch.nn as nn
import numpy as np
import pandas as pd
def haversine_distance(df, lat1, long1, lat2, long2):
r = 6371
phi1 = np.radians(df[lat1])
phi2 = np.radians(df[lat2])
delta_phi = np.radians(df[lat2]-df[lat1])
delta_lambda = np.radians(df[long2]-df[long1])
a = np.sin(delta_phi/2)**2 + np.cos(phi1) * np.cos(phi2) * np.sin(delta_lambda/2)**2
c = 2 * np.arctan2(np.sqrt(a), np.sqrt(1-a))
return r * c
class TabularModel(nn.Module):
def __init__(self, emb_szs, n_cont, out_sz, layers, p=0.5):
super().__init__()
self.embeds = nn.ModuleList([nn.Embedding(ni, nf) for ni,nf in emb_szs])
self.emb_drop = nn.Dropout(p)
self.bn_cont = nn.BatchNorm1d(n_cont)
layerlist = []
n_emb = sum((nf for ni,nf in emb_szs))
n_in = n_emb + n_cont
for i in layers:
layerlist.append(nn.Linear(n_in,i))
layerlist.append(nn.ReLU(inplace=True))
layerlist.append(nn.BatchNorm1d(i))
layerlist.append(nn.Dropout(p))
n_in = i
layerlist.append(nn.Linear(layers[-1],out_sz))
self.layers = nn.Sequential(*layerlist)
def forward(self, x_cat, x_cont):
embeddings = []
for i,e in enumerate(self.embeds):
embeddings.append(e(x_cat[:,i]))
x = torch.cat(embeddings, 1)
x = self.emb_drop(x)
x_cont = self.bn_cont(x_cont)
x = torch.cat([x, x_cont], 1)
return self.layers(x)
emb_szs = [(24, 12), (2, 1), (7, 4)]
model2 = TabularModel(emb_szs, 6, 2, [200,100], p=0.4)
model2.load_state_dict(torch.load('TaxiFareClssModel.pt'));
model2.eval() # be sure to run this step!
def test_data(mdl): # pass in the name of the new model
# INPUT NEW DATA
plat = float(input('What is the pickup latitude? '))
plong = float(input('What is the pickup longitude? '))
dlat = float(input('What is the dropoff latitude? '))
dlong = float(input('What is the dropoff longitude? '))
psngr = int(input('How many passengers? '))
dt = input('What is the pickup date and time?\nFormat as YYYY-MM-DD HH:MM:SS ')
# PREPROCESS THE DATA
dfx_dict = {'pickup_latitude':plat,'pickup_longitude':plong,'dropoff_latitude':dlat,
'dropoff_longitude':dlong,'passenger_count':psngr,'EDTdate':dt}
dfx = pd.DataFrame(dfx_dict, index=[0])
dfx['dist_km'] = haversine_distance(dfx,'pickup_latitude', 'pickup_longitude',
'dropoff_latitude', 'dropoff_longitude')
dfx['EDTdate'] = pd.to_datetime(dfx['EDTdate'])
# We can skip the .astype(category) step since our fields are small,
# and encode them right away
dfx['Hour'] = dfx['EDTdate'].dt.hour
dfx['AMorPM'] = np.where(dfx['Hour']<12,0,1)
dfx['Weekday'] = dfx['EDTdate'].dt.strftime("%a")
dfx['Weekday'] = dfx['Weekday'].replace(['Fri','Mon','Sat','Sun','Thu','Tue','Wed'],
[0,1,2,3,4,5,6]).astype('int64')
# CREATE CAT AND CONT TENSORS
cat_cols = ['Hour', 'AMorPM', 'Weekday']
cont_cols = ['pickup_latitude', 'pickup_longitude', 'dropoff_latitude',
'dropoff_longitude', 'passenger_count', 'dist_km']
xcats = np.stack([dfx[col].values for col in cat_cols], 1)
xcats = torch.tensor(xcats, dtype=torch.int64)
xconts = np.stack([dfx[col].values for col in cont_cols], 1)
xconts = torch.tensor(xconts, dtype=torch.float)
# PASS NEW DATA THROUGH THE MODEL WITHOUT PERFORMING A BACKPROP
with torch.no_grad():
z = mdl(xcats, xconts).argmax().item()
print(f'\nThe predicted fare class is {z}')
test_data(model2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: PWC-Net-large model training (with cyclical learning rate schedule)
Step2: TODO
Step3: Pre-train on FlyingChairs+FlyingThings3DHalfRes mix
Step4: Configure the training
Step5: Train the model
|
<ASSISTANT_TASK:>
Python Code:
"""
pwcnet_train.ipynb
PWC-Net model training.
Written by Phil Ferriere
Licensed under the MIT License (see LICENSE for details)
Tensorboard:
[win] tensorboard --logdir=E:\\repos\\tf-optflow\\tfoptflow\\pwcnet-lg-6-2-cyclic-chairsthingsmix
[ubu] tensorboard --logdir=/media/EDrive/repos/tf-optflow/tfoptflow/pwcnet-lg-6-2-cyclic-chairsthingsmix
"""
from __future__ import absolute_import, division, print_function
import sys
from copy import deepcopy
from dataset_base import _DEFAULT_DS_TRAIN_OPTIONS
from dataset_flyingchairs import FlyingChairsDataset
from dataset_flyingthings3d import FlyingThings3DHalfResDataset
from dataset_mixer import MixedDataset
from model_pwcnet import ModelPWCNet, _DEFAULT_PWCNET_TRAIN_OPTIONS
# TODO: You MUST set dataset_root to the correct path on your machine!
if sys.platform.startswith("win"):
_DATASET_ROOT = 'E:/datasets/'
else:
_DATASET_ROOT = '/media/EDrive/datasets/'
_FLYINGCHAIRS_ROOT = _DATASET_ROOT + 'FlyingChairs_release'
_FLYINGTHINGS3DHALFRES_ROOT = _DATASET_ROOT + 'FlyingThings3D_HalfRes'
# TODO: You MUST adjust the settings below based on the number of GPU(s) used for training
# Set controller device and devices
# A one-gpu setup would be something like controller='/device:GPU:0' and gpu_devices=['/device:GPU:0']
# Here, we use a dual-GPU setup, as shown below
gpu_devices = ['/device:GPU:0', '/device:GPU:1']
controller = '/device:CPU:0'
# TODO: You MUST adjust this setting below based on the amount of memory on your GPU(s)
# Batch size
batch_size = 8
# TODO: You MUST set the batch size based on the capabilities of your GPU(s)
# Load train dataset
ds_opts = deepcopy(_DEFAULT_DS_TRAIN_OPTIONS)
ds_opts['in_memory'] = False # Too many samples to keep in memory at once, so don't preload them
ds_opts['aug_type'] = 'heavy' # Apply all supported augmentations
ds_opts['batch_size'] = batch_size * len(gpu_devices) # Use a multiple of 8; here, 16 for dual-GPU mode (Titan X & 1080 Ti)
ds_opts['crop_preproc'] = (256, 448) # Crop to a smaller input size
ds1 = FlyingChairsDataset(mode='train_with_val', ds_root=_FLYINGCHAIRS_ROOT, options=ds_opts)
ds_opts['type'] = 'into_future'
ds2 = FlyingThings3DHalfResDataset(mode='train_with_val', ds_root=_FLYINGTHINGS3DHALFRES_ROOT, options=ds_opts)
ds = MixedDataset(mode='train_with_val', datasets=[ds1, ds2], options=ds_opts)
# Display dataset configuration
ds.print_config()
# Start from the default options
nn_opts = deepcopy(_DEFAULT_PWCNET_TRAIN_OPTIONS)
nn_opts['verbose'] = True
nn_opts['ckpt_dir'] = './pwcnet-lg-6-2-cyclic-chairsthingsmix/'
nn_opts['batch_size'] = ds_opts['batch_size']
nn_opts['x_shape'] = [2, ds_opts['crop_preproc'][0], ds_opts['crop_preproc'][1], 3]
nn_opts['y_shape'] = [ds_opts['crop_preproc'][0], ds_opts['crop_preproc'][1], 2]
nn_opts['use_tf_data'] = True # Use tf.data reader
nn_opts['gpu_devices'] = gpu_devices
nn_opts['controller'] = controller
# Use the PWC-Net-large model in quarter-resolution mode
nn_opts['use_dense_cx'] = True
nn_opts['use_res_cx'] = True
nn_opts['pyr_lvls'] = 6
nn_opts['flow_pred_lvl'] = 2
# Set the learning rate schedule. This schedule is for a single GPU using a batch size of 8.
# Below, we adjust the schedule to the size of the batch and the number of GPUs.
nn_opts['lr_policy'] = 'cyclic'
nn_opts['cyclic_lr_max'] = 5e-04 # Anything higher will generate NaNs
nn_opts['cyclic_lr_base'] = 1e-05
nn_opts['cyclic_lr_stepsize'] = 20000
nn_opts['max_steps'] = 200000
# Below, we adjust the schedule to the size of the batch and our number of GPUs (2).
nn_opts['cyclic_lr_stepsize'] /= len(gpu_devices)
nn_opts['max_steps'] /= len(gpu_devices)
nn_opts['cyclic_lr_stepsize'] = int(nn_opts['cyclic_lr_stepsize'] / (float(ds_opts['batch_size']) / 8))
nn_opts['max_steps'] = int(nn_opts['max_steps'] / (float(ds_opts['batch_size']) / 8))
# Instantiate the model and display the model configuration
nn = ModelPWCNet(mode='train_with_val', options=nn_opts, dataset=ds)
nn.print_config()
# Train the model
nn.train()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The class <code>plot_diagram</code> helps us to visualize the data space and the parameter space during training and has nothing to do with PyTorch.
Step2: <!--Empty Space for separating topics-->
Step3: Generate values from -3 to 3 that create a line with a slope of -3. This is the line you will estimate.
Step4: Let us plot the line.
Step5: Let us add some noise to the data in order to simulate the real data. Use <code>torch.randn(X.size())</code> to generate Gaussian noise that is the same size as <code>X</code> and has a standard deviation of 0.1.
Step6: Plot the <code>Y</code>
Step7: <!--Empty Space for separating topics-->
Step8: Define the cost or criterion function using MSE (Mean Square Error)
Step9: Define the learning rate <code>lr</code> and an empty list <code>LOSS</code> to record the loss for each iteration
Step10: Now, we create a model parameter by setting the argument <code>requires_grad</code> to <code> True</code> because the system must learn it.
Step11: Create a <code>plot_diagram</code> object to visualize the data space and the parameter space for each iteration during training
Step12: <!--Empty Space for separating topics-->
Step13: Let us try to run 4 iterations of gradient descent
Step14: Plot the cost for each iteration
Step15: <!--Empty Space for separating topics-->
Step16: Double-click <b>here</b> for the solution.
Step17: Double-click <b>here</b> for the solution.
Step18: Double-click <b>here</b> for the solution.
|
<ASSISTANT_TASK:>
Python Code:
# These are the libraries that will be used for this lab.
import numpy as np
import matplotlib.pyplot as plt
# The class for plotting
class plot_diagram():
# Constructor
def __init__(self, X, Y, w, stop, go = False):
start = w.data
self.error = []
self.parameter = []
self.X = X.numpy()
self.Y = Y.numpy()
self.parameter_values = torch.arange(start, stop)
self.Loss_function = [criterion(forward(X), Y) for w.data in self.parameter_values]
w.data = start
# Executor
def __call__(self, Yhat, w, error, n):
self.error.append(error)
self.parameter.append(w.data)
plt.subplot(212)
plt.plot(self.X, Yhat.detach().numpy())
plt.plot(self.X, self.Y,'ro')
plt.xlabel("A")
plt.ylim(-20, 20)
plt.subplot(211)
plt.title("Data Space (top) Estimated Line (bottom) Iteration " + str(n))
plt.plot(self.parameter_values.numpy(), self.Loss_function)
plt.plot(self.parameter, self.error, 'ro')
plt.xlabel("B")
plt.figure()
# Destructor
def __del__(self):
plt.close('all')
# Import the library PyTorch
import torch
# Create the f(X) with a slope of -3
X = torch.arange(-3, 3, 0.1).view(-1, 1)
f = -3 * X
# Plot the line with blue
plt.plot(X.numpy(), f.numpy(), label = 'f')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()
# Add some noise to f(X) and save it in Y
Y = f + 0.1 * torch.randn(X.size())
# Plot the data points
plt.plot(X.numpy(), Y.numpy(), 'rx', label = 'Y')
plt.plot(X.numpy(), f.numpy(), label = 'f')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()
# Create forward function for prediction
def forward(x):
return w * x
# Create the MSE function for evaluate the result.
def criterion(yhat, y):
return torch.mean((yhat - y) ** 2)
# Create Learning Rate and an empty list to record the loss for each iteration
lr = 0.1
LOSS = []
w = torch.tensor(-10.0, requires_grad = True)
gradient_plot = plot_diagram(X, Y, w, stop = 5)
# Define a function for train the model
def train_model(iter):
for epoch in range (iter):
# make the prediction as we learned in the last lab
Yhat = forward(X)
# calculate the iteration
loss = criterion(Yhat,Y)
# plot the diagram for us to have a better idea
gradient_plot(Yhat, w, loss.item(), epoch)
# store the loss into list
LOSS.append(loss)
# backward pass: compute gradient of the loss with respect to all the learnable parameters
loss.backward()
        # update the parameters
w.data = w.data - lr * w.grad.data
# zero the gradients before running the backward pass
w.grad.data.zero_()
# Give 4 iterations for training the model here.
train_model(4)
# Plot the loss for each iteration
plt.plot(LOSS)
plt.tight_layout()
plt.xlabel("Epoch/Iterations")
plt.ylabel("Cost")
# Practice: Create w with the initial value of -15.0
# Type your code here
# Practice: Create LOSS2 list
# Type your code here
# Practice: Create your own my_train_model
gradient_plot1 = plot_diagram(X, Y, w, stop = 15)
# Practice: Plot the list LOSS2 and LOSS
# Type your code here
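# One possible solution sketch for the practice cells above. It reuses forward, criterion
# and lr from earlier; the choice of 4 iterations mirrors the earlier training run and is
# otherwise arbitrary.
w = torch.tensor(-15.0, requires_grad=True)
LOSS2 = []
gradient_plot1 = plot_diagram(X, Y, w, stop=15)

def my_train_model(iter):
    for epoch in range(iter):
        Yhat = forward(X)
        loss = criterion(Yhat, Y)
        gradient_plot1(Yhat, w, loss.item(), epoch)
        LOSS2.append(loss.item())
        loss.backward()
        w.data = w.data - lr * w.grad.data
        w.grad.data.zero_()

my_train_model(4)

# Compare the two loss curves
plt.plot(LOSS, label="LOSS (w started at -10)")
plt.plot(LOSS2, label="LOSS2 (w started at -15)")
plt.xlabel("Epoch/Iterations")
plt.ylabel("Cost")
plt.legend()
plt.show()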
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Flower power
Step2: ConvNet Codes
Step3: Below I'm running images through the VGG network in batches.
Step4: Building the Classifier
Step5: Data prep
Step6: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
Step7: If you did it right, you should see these sizes for the training sets
Step9: Batches!
Step10: Training
Step11: Testing
Step12: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
|
<ASSISTANT_TASK:>
Python Code:
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
# Set the batch size higher if you can fit in in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
# Image batch to pass to VGG network
images = np.concatenate(batch)
feed_dict = {input_: images}
# TODO: Get the values from the relu6 layer of the VGG network
codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'w') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
from sklearn.preprocessing import LabelBinarizer
lb = LabelBinarizer()
lb.fit(labels)
labels_vecs = lb.transform(labels)# Your one-hot encoded labels array here
from sklearn.model_selection import StratifiedShuffleSplit
ss = StratifiedShuffleSplit(n_splits=2,test_size=.2)
train_idx,val_idx = next(ss.split(codes, labels))
half_val_length = int(len(val_idx)/2)
val_idx, test_idx = val_idx[:half_val_length],val_idx[half_val_length:]
train_x, train_y = codes[train_idx], labels_vecs[train_idx]
val_x, val_y = codes[val_idx], labels_vecs[val_idx]
test_x, test_y = codes[test_idx], labels_vecs[test_idx]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
# TODO: Classifier layers and operations
fc_layer = tf.contrib.layers.fully_connected(inputs_, 256)
logits = tf.contrib.layers.fully_connected(fc_layer, labels_vecs.shape[1], activation_fn=None)# output layer logits
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits)
cost = tf.reduce_mean(cross_entropy)# cross entropy loss
optimizer = tf.train.AdamOptimizer().minimize(cost)  # training optimizer (Adam is one reasonable choice)
# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
def get_batches(x, y, n_batches=10):
    """Return a generator that yields batches from arrays x and y."""
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
saver = tf.train.Saver()
with tf.Session() as sess:
# TODO: Your training code here
saver.save(sess, "checkpoints/flowers.ckpt")
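# A minimal training-loop sketch for the TODO above. It assumes 10 epochs and reuses the
# get_batches helper defined earlier; the epoch count and the per-epoch validation check
# are illustrative choices, not necessarily what the original author used.
epochs = 10
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for e in range(epochs):
        for batch_x, batch_y in get_batches(train_x, train_y):
            feed = {inputs_: batch_x, labels_: batch_y}
            batch_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
        # Evaluate on the validation split once per epoch
        val_acc = sess.run(accuracy, feed_dict={inputs_: val_x, labels_: val_y})
        print("Epoch: {}/{}".format(e + 1, epochs),
              "Loss: {:.5f}".format(batch_loss),
              "Validation Acc: {:.4f}".format(val_acc))
    saver.save(sess, "checkpoints/flowers.ckpt")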
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
print('"vgg" object already exists. Will not create again.')
else:
#create vgg
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: UPDATE REVENUE ORIGIN ACRONYM
Step4: UPDATE REVENUE SOURCE DESCRIPTION AND CODE
Step6: UPDATE DONOR STATE (UF)
Step8: UPDATE DONOR PARTY ACRONYM/NUMBER
Step11: PARTY ORGAN ORIGIN
Step15: If the donor has the CNPJ of a candidacy or of a party organ, or the CPF of a candidacy, then complement the donor's data with the recipient's data.
Step24: REVENUES DECLARED AS 'ROC' FROM LEGAL-ENTITY (PJ) DONORS
Step28: REVENUE ORIGIN: FINANCIAL INVESTMENT INCOME
Step31: CROWDFUNDING FUNDS ORIGIN
Step33: 'RONI' REVENUE ORIGIN
Step35: UNIDENTIFIED SPECIAL CASES
Step37: 'ROC' REVENUE ORIGIN FROM AN UNIDENTIFIED CANDIDATE - SPECIAL CASES
Step44: REVENUE SUMMARY
|
<ASSISTANT_TASK:>
Python Code:
ano_eleicao = '2018'
dbschema = f'rede{ano_eleicao}'
dbschema_tse = f'tse{ano_eleicao}'
table_receitas = f'{dbschema}.receitas_{ano_eleicao}'
table_receitas_candidatos = f'{dbschema_tse}.receitas_candidatos_{ano_eleicao}'
table_receitas_candidatos_doador_originario = f'{dbschema_tse}.receitas_candidatos_doador_originario_{ano_eleicao}'
table_receitas_orgaos_partidarios = f'{dbschema_tse}.receitas_orgaos_partidarios_{ano_eleicao}'
table_receitas_orgaos_partidarios_doador_originario= f'{dbschema_tse}.receitas_orgaos_partidarios_doador_originario_{ano_eleicao}'
table_consulta_cand = f'{dbschema_tse}.consulta_cand_{ano_eleicao}'
table_candidaturas = f"{dbschema}.candidaturas_{ano_eleicao}"
table_municipios = f"tse{ano_eleicao}.municipios_{ano_eleicao}"
table_partidos = f'{dbschema_tse}.partidos_{ano_eleicao}'
table_orgaos_partidarios = f"{dbschema}.orgaos_partidarios_{ano_eleicao}"
table_origem_receitas = f'tse.origem_receitas'
table_fonte_receitas = f'tse.fonte_receitas'
import os
import sys
sys.path.append('../')
import mod_tse as mtse
home = os.environ["HOME"]
mtse.execute_query(f'CREATE SCHEMA IF NOT EXISTS {dbschema};')
query_update_origem_receita = f"""
update {table_receitas} as r
set receita_origem_sg = o.sg_origem_receita,
receita_origem_cd = o.cd_origem_receita,
receita_origem_ds = o.ds_origem_receita
from {table_origem_receitas} as o
where upper(r.receita_origem_cd) = upper(o.cd_origem_receita)
or upper(r.receita_origem_ds) = upper(o.tx_origem_receita)
;
"""
mtse.execute_query(query_update_origem_receita)
print("Origen Receita Atualizadas.")
query_update_fonte_receita = f"""
update {table_receitas} as r
set receita_fonte_cd = f.cd_fonte_receita,
receita_fonte_ds = f.ds_fonte_receita
from {table_fonte_receitas} as f
where upper(r.receita_fonte_cd) = upper(f.cd_fonte_receita)
or upper(r.receita_fonte_ds) = upper(f.tx_fonte_receita)
;
"""
mtse.execute_query(query_update_fonte_receita)
print("Fonte Receita Atualizadas.")
siglas_uf = list(mtse.unidades_da_federacao.keys())
siglas_uf="'" +"','".join(siglas_uf) +"'"
query_update_uf_doador_OP = f"""
update {table_receitas} as r
set doador_uf = m.sg_uf,
doador_ue_nome = m.nm_municipio
from {table_municipios} as m
where
m.cd_municipio = doador_ue
;
"""
mtse.execute_query(query_update_uf_doador_OP)
print("doador_uf atualizada.")
query_update_doador_partido_sg = f"""
with partidos as
(
select sg_partido, nr_partido, nm_partido
from {table_partidos}
group by sg_partido, nr_partido, nm_partido
)
update {table_receitas} as r
set
doador_partido_sg = p.sg_partido,
doador_partido_nr = p.nr_partido,
doador_partido_nome = p.nm_partido
from partidos as p
where
upper(r.doador_partido_sg) = upper(p.sg_partido)
or
upper(r.doador_partido_nr) = upper(p.nr_partido)
;
"""
mtse.execute_query(query_update_doador_partido_sg)
print("Sigla do partido do doador atualizada.")
mtse.execute_query(f"""
update {table_receitas} as r
set doador_id = public.get_orgao_partidario_id(doador_uf,doador_partido_sg),
doador_tipo_cd = 'OP',
doador_tipo_ds = 'Orgão Partidário',
doador_label = public.get_orgao_partidario_label(
doador_partido_sg,
doador_uf
)
where receita_origem_sg = 'RPP'
and doador_tipo_cd in ('OP','PJ')
;
""")
mtse.pandas_query(f"""select doador_tipo_cd, count(*), sum(receita_valor)
from {table_receitas}
where tabela_id in ('RC','ROP')
group by doador_tipo_cd
order by doador_tipo_cd
""")
table_receptores = f"{dbschema}.receptores_{ano_eleicao}"
query_create_table_receptores = f"""
DROP TABLE IF EXISTS {table_receptores} CASCADE;
CREATE TABLE {table_receptores} (
receita_origem_sg varchar,
receptor_id varchar,
receptor_tipo_cd varchar,
receptor_tipo_ds varchar,
receptor_label varchar,
receptor_genero_ds varchar,
receptor_esfera_partidaria_cd varchar,
receptor_esfera_partidaria_ds varchar,
receptor_uf varchar,
receptor_ue varchar,
receptor_ue_nome varchar,
receptor_cnpj varchar,
receptor_cargo_cd varchar,
receptor_cargo_ds varchar,
receptor_candidato_seq varchar,
receptor_candidato_nr varchar,
receptor_candidato_nome varchar,
receptor_candidato_cpf varchar,
receptor_vice_candidato_cpf varchar,
receptor_partido_nr varchar,
receptor_partido_sg varchar,
receptor_partido_nome varchar
);
CREATE INDEX ON {table_receitas} (receptor_cnpj);
CREATE INDEX ON {table_receitas} (receptor_candidato_nr);
CREATE INDEX ON {table_receitas} (receptor_uf);
CREATE INDEX ON {table_receitas} (receptor_candidato_cpf);
CREATE INDEX ON {table_receitas} (receita_origem_sg);
insert into {table_receptores}
( select
receita_origem_sg,
receptor_id ,
receptor_tipo_cd ,
receptor_tipo_ds ,
receptor_label ,
receptor_genero_ds ,
receptor_esfera_partidaria_cd ,
receptor_esfera_partidaria_ds ,
receptor_uf ,
receptor_ue ,
receptor_ue_nome ,
receptor_cnpj ,
receptor_cargo_cd ,
receptor_cargo_ds ,
receptor_candidato_seq ,
receptor_candidato_nr ,
receptor_candidato_nome ,
receptor_candidato_cpf ,
receptor_vice_candidato_cpf ,
receptor_partido_nr ,
receptor_partido_sg ,
receptor_partido_nome
from {table_receitas}
group by
receita_origem_sg,
receptor_id ,
receptor_tipo_cd ,
receptor_tipo_ds ,
receptor_label ,
receptor_genero_ds ,
receptor_esfera_partidaria_cd ,
receptor_esfera_partidaria_ds ,
receptor_uf ,
receptor_ue ,
receptor_ue_nome ,
receptor_cnpj ,
receptor_cargo_cd ,
receptor_cargo_ds ,
receptor_candidato_seq ,
receptor_candidato_nr ,
receptor_candidato_nome ,
receptor_candidato_cpf ,
receptor_vice_candidato_cpf ,
receptor_partido_nr ,
receptor_partido_sg ,
receptor_partido_nome
)
;
"""
mtse.execute_query(query_create_table_receptores)
query_update_doador_origem_receita_ROC_PJ = f"""
update {table_receitas} as r
set
doador_id = rp.receptor_id,
doador_tipo_cd = rp.receptor_tipo_cd,
doador_tipo_ds = rp.receptor_tipo_ds,
doador_label = rp.receptor_label,
doador_genero_ds = rp.receptor_genero_ds,
doador_esfera_partidaria_cd = rp.receptor_esfera_partidaria_cd,
doador_esfera_partidaria_ds = rp.receptor_esfera_partidaria_ds,
doador_uf = rp.receptor_uf,
doador_ue = rp.receptor_ue,
doador_ue_nome = rp.receptor_ue_nome,
doador_candidato_sq = rp.receptor_candidato_seq,
doador_candidato_nr = rp.receptor_candidato_nr,
doador_candidato_nome = rp.receptor_candidato_nome,
doador_candidato_cargo_cd = rp.receptor_cargo_cd,
doador_candidato_cargo_ds = rp.receptor_cargo_ds,
doador_partido_nr = rp.receptor_partido_nr,
doador_partido_sg = rp.receptor_partido_sg,
doador_partido_nome = rp.receptor_partido_nome
from
{table_receptores} as rp
where
(
r.doador_cpf_cnpj = rp.receptor_cnpj
and
r.receita_origem_sg = 'ROC'
and
r.doador_tipo_cd = 'PJ'
)
;
"""
mtse.execute_query(query_update_doador_origem_receita_ROC_PJ)
mtse.pandas_query(f"""select doador_tipo_cd, count(*), sum(receita_valor)
from {table_receitas}
where tabela_id in ('RC','ROP')
group by doador_tipo_cd
order by doador_tipo_cd
""")
query_update_doador_origem_receita_ROC = f"""
update {table_receitas} as r
set
doador_id = rp.receptor_id,
doador_tipo_cd = rp.receptor_tipo_cd,
doador_tipo_ds = rp.receptor_tipo_ds,
doador_label = rp.receptor_label,
doador_genero_ds = rp.receptor_genero_ds,
doador_esfera_partidaria_cd = rp.receptor_esfera_partidaria_cd,
doador_esfera_partidaria_ds = rp.receptor_esfera_partidaria_ds,
doador_uf = rp.receptor_uf,
doador_ue = rp.receptor_ue,
doador_ue_nome = rp.receptor_ue_nome,
doador_candidato_sq = rp.receptor_candidato_seq,
doador_candidato_nr = rp.receptor_candidato_nr,
doador_candidato_nome = rp.receptor_candidato_nome,
doador_candidato_cargo_cd = rp.receptor_cargo_cd,
doador_candidato_cargo_ds = rp.receptor_cargo_ds,
doador_partido_nr = rp.receptor_partido_nr,
doador_partido_sg = rp.receptor_partido_sg,
doador_partido_nome = rp.receptor_partido_nome
from
{table_receptores} as rp
where
(
r.doador_candidato_nr = rp.receptor_candidato_nr
and
r.doador_uf = rp.receptor_uf
and
r.receita_origem_sg = 'ROC'
and
r.doador_tipo_cd = 'PJ'
)
;
"""
mtse.execute_query(query_update_doador_origem_receita_ROC)
mtse.pandas_query(f"""select doador_tipo_cd, count(*), sum(receita_valor)
from {table_receitas}
where tabela_id in ('RC','ROP')
group by doador_tipo_cd
order by doador_tipo_cd
""")
query_update_doador_origem_receita_ROC = f"""
update {table_receitas} as r
set
doador_id = c.candidato_id,
doador_tipo_cd = 'CA',
doador_tipo_ds = 'Candidato',
doador_label = c.candidato_label,
doador_genero_ds = c.ds_genero,
doador_uf = c.sg_uf,
doador_ue = c.sg_ue,
doador_ue_nome = c.nm_ue,
doador_candidato_sq = c.sq_candidato,
doador_candidato_nr = c.nr_candidato,
doador_candidato_nome = c.nm_candidato,
doador_candidato_cargo_cd = c.cd_cargo,
doador_candidato_cargo_ds = c.ds_cargo,
doador_partido_nr = c.nr_partido,
doador_partido_sg = c.sg_partido,
doador_partido_nome = c.nm_partido
from
{table_candidaturas} as c
where
(
r.doador_candidato_nr = c.nr_candidato
and
r.doador_uf = c.sg_uf
and
r.receita_origem_sg = 'ROC'
and
r.receptor_tipo_cd = 'CA'
and
(r.doador_candidato_sq = c.sq_candidato
or r.doador_nome_rfb like '%'||c.nm_candidato||'%'
)
)
;
"""
mtse.execute_query(query_update_doador_origem_receita_ROC)
mtse.pandas_query(f"""select doador_tipo_cd, count(*), sum(receita_valor)
from {table_receitas}
where tabela_id in ('RC','ROP')
group by doador_tipo_cd
order by doador_tipo_cd
""")
query_update_doador_origem_receita_RP = f"""
update {table_receitas} as r
set
doador_id = rp.receptor_id||'RP',
doador_tipo_cd = 'RP',
doador_tipo_ds = 'Candidato Recursos Própios',
doador_label = rp.receptor_label,
doador_genero_ds = rp.receptor_genero_ds,
doador_esfera_partidaria_cd = rp.receptor_esfera_partidaria_cd,
doador_esfera_partidaria_ds = rp.receptor_esfera_partidaria_ds,
doador_uf = rp.receptor_uf,
doador_ue = rp.receptor_ue,
doador_ue_nome = rp.receptor_ue_nome,
doador_candidato_sq = rp.receptor_candidato_seq,
doador_candidato_nr = rp.receptor_candidato_nr,
doador_candidato_nome = rp.receptor_candidato_nome,
doador_candidato_cargo_cd = rp.receptor_cargo_cd,
doador_candidato_cargo_ds = rp.receptor_cargo_ds,
doador_partido_nr = rp.receptor_partido_nr,
doador_partido_sg = rp.receptor_partido_sg,
doador_partido_nome = rp.receptor_partido_nome
from
{table_receptores} as rp
where
(
r.doador_cpf_cnpj = rp.receptor_candidato_cpf
and
r.receita_origem_sg = 'RP'
and
r.doador_tipo_cd = 'PF'
)
;
"""
mtse.execute_query(query_update_doador_origem_receita_RP)
mtse.pandas_query(f"""select doador_tipo_cd, count(*), sum(receita_valor)
from {table_receitas}
where tabela_id in ('RC','ROP')
group by doador_tipo_cd
order by doador_tipo_cd
""")
mtse.pandas_query(f"""select doador_tipo_cd, count(*), sum(receita_valor)
from {table_receitas}
where tabela_id in ('RC','ROP')
group by doador_tipo_cd
order by doador_tipo_cd
""")
mtse.pandas_query(f"""select doador_tipo_cd, count(*), sum(receita_valor)
from {table_receitas}
where tabela_id in ('RC','ROP')
group by doador_tipo_cd
order by doador_tipo_cd
""")
query_update_nome_doador_raf = f"""
update {table_receitas} as r
set doador_nome = 'Rendimentos de aplicações financeiras',
doador_id = 'RAF',
doador_tipo_cd = 'RAF',
doador_tipo_ds = 'Rendimentos de aplicações financeiras',
doador_label = ('Rendimentos de aplicações financeiras')
where receita_origem_sg = 'RAF'
and doador_nome in ('#NE','#NE#', '','#NULO','#NULO#', null)
;
"""
mtse.execute_query(query_update_nome_doador_raf)
mtse.pandas_query(f"""select receptor_tipo_cd, count(*), sum(receita_valor)
from {table_receitas}
where tabela_id in ('RC','ROP')
group by receptor_tipo_cd
order by receptor_tipo_cd
""")
mtse.pandas_query(f"""select doador_tipo_cd, count(*), sum(receita_valor)
from {table_receitas}
where tabela_id in ('RC','ROP')
group by doador_tipo_cd
order by doador_tipo_cd
""")
query_update_doador_tipo_cd_RFC = f"""
update {table_receitas}
set doador_tipo_cd = 'RFC',
doador_tipo_ds = 'Recursos de Financiamento Coletivo',
doador_label = 'Recursos de Financiamento Coletivo'
where
doador_tipo_cd = 'PJ'
and receita_origem_sg = 'RFC'
;
"""
mtse.execute_query(query_update_doador_tipo_cd_RFC)
## clear leftover candidate data from individual (PF) donors with 'RPF' origin
mtse.execute_query(f"""update {table_receitas}
set
doador_uf = '#NE' ,
doador_partido_sg = '#NE' ,
doador_candidato_cargo_ds = '#NE' ,
doador_genero_ds = '#NE'
where
doador_tipo_cd = 'PF'
--and
--receita_origem_sg = 'RPF'
""")
query_update_nome_doador_roni = f"""
update {table_receitas} as r
set doador_nome = ('Recursos de origens não identificadas'),
doador_id = 'RONI',
doador_tipo_cd = 'RONI',
doador_tipo_ds = 'Recursos de origens não identificadas',
doador_label = ('Recursos de origens não identificadas')
where receita_origem_sg = 'RONI'
and
(
doador_nome in ('#NE','#NE#', '','#NULO','#NULO#')
or
doador_tipo_cd = 'NI'
)
;
"""
mtse.execute_query(query_update_nome_doador_roni)
query_update_nome_doador_NI = f"""
update {table_receitas} as r
set doador_nome = ('Recursos de origens não identificadas'),
doador_id = 'RONI',
doador_tipo_cd = 'RONI',
doador_tipo_ds = 'Recursos de origens não identificadas',
doador_label = ('Recursos de origens não identificadas')
where
doador_tipo_cd = 'NI'
;
"""
mtse.execute_query(query_update_nome_doador_NI)
query_update_doador_origem_receita_ROC_PJ_NI = f"""
update {table_receitas} as r
set
doador_id = 'CA'||doador_candidato_sq,
doador_tipo_cd = 'CA',
doador_tipo_ds = 'Candidato'
where
(
r.doador_tipo_cd = 'PJ'
and
r.receita_origem_sg in ('ROC','CBRE')
)
;
"""
mtse.execute_query(query_update_doador_origem_receita_ROC_PJ_NI)
mtse.pandas_query(f"""select doador_tipo_cd, count(*), sum(receita_valor)
from {table_receitas}
where tabela_id in ('RC','ROP')
group by doador_tipo_cd
order by doador_tipo_cd
""")
mtse.pandas_query(f"""select receptor_tipo_cd,doador_tipo_cd, count(*), sum(receita_valor)
from {table_receitas}
where tabela_id in ('RC','ROP')
group by receptor_tipo_cd,doador_tipo_cd
order by receptor_tipo_cd,doador_tipo_cd
""")
q = f"""select receita_origem_ds, receita_origem_sg, count(*) as qtd, sum(receita_valor) total_receitas
from {table_receitas} as r
where tabela_id in ('RC','ROP')
group by receita_origem_ds, receita_origem_sg
order by total_receitas desc
;
"""
mtse.pandas_query(q)
mtse.pandas_query(f"""select doador_tipo_cd, count(*), sum(receita_valor)
from {table_receitas}
where tabela_id in ('RC','ROP')
group by doador_tipo_cd
order by doador_tipo_cd
""")
q = f"""select receita_origem_ds, receita_origem_sg, count(*) as qtd, sum(receita_valor) total_receitas
from {table_receitas} as r
where tabela_id in ('RC','ROP')
group by receita_origem_ds, receita_origem_sg
order by total_receitas desc
;
"""
mtse.pandas_query(q)
q = f"""select receptor_tipo_cd, count(*) as qtd, sum(receita_valor) total_receitas
from {table_receitas} as r
where tabela_id in ('RC','ROP')
group by receptor_tipo_cd
order by receptor_tipo_cd desc
;
"""
mtse.pandas_query(q)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Mmr
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Step59: 14.2. Shortwave Bands
Step60: 14.3. Longwave Bands
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Step62: 15.2. Twomey
Step63: 15.3. Twomey Minimum Ccn
Step64: 15.4. Drizzle
Step65: 15.5. Cloud Lifetime
Step66: 15.6. Longwave Bands
Step67: 16. Model
Step68: 16.2. Processes
Step69: 16.3. Coupling
Step70: 16.4. Gas Phase Precursors
Step71: 16.5. Scheme Type
Step72: 16.6. Bulk Scheme Species
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'snu', 'sandbox-3', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Copying and pasting code can introduce bugs
Step2: Why write functions?
Step4: Mechanics of Writing a Function
Step5: Variable names and scope
Step6: If you modify an object (like a list or a dataframe) inside of a function, the modifications will affect its value outside of the function
Step7: Exercise
Step8: Once you've identified the bug in the above code, take the code and turn it into a function that takes a number as input and returns True if the number is prime and False if it is not.
Step9: Test cases
Step10: Exercise - Write a Function
|
<ASSISTANT_TASK:>
Python Code:
# Example: We're going to use Pandas dataframes to create a gradebook for this course
import pandas as pd
# Student Rosters:
section_1_students = ['Hao', 'Jennifer', 'Alex']
section_2_students = ['Christa', 'Troy', 'Xin']
# Gradebook columns:
columns = ['raw_grade', 'did_extra_credit', 'final_grade']
# Let's create two dataframes, one for each class section
section_1_gradebook = pd.DataFrame(index=section_1_students, columns=columns)
section_2_gradebook = pd.DataFrame(index=section_2_students, columns=columns)
print("Section 1 Gradebook:")
print(section_1_gradebook)
print("\n") # prints a newline character to put an empty line
print("Section 2 Gradebook:")
print(section_2_gradebook)
# Now let's add some data
# (in real life we might load this from a CSV or other file)
section_1_gradebook.loc['Hao']['raw_grade'] = 80
section_1_gradebook.loc['Hao']['did_extra_credit'] = True # python supports boolean (True/False) values
section_1_gradebook.loc['Jennifer']['raw_grade'] = 98
section_1_gradebook.loc['Jennifer']['did_extra_credit'] = False
section_1_gradebook.loc['Alex']['raw_grade'] = 85
section_1_gradebook.loc['Alex']['did_extra_credit'] = True
section_2_gradebook.loc['Christa']['raw_grade'] = 91
section_2_gradebook.loc['Christa']['did_extra_credit'] = False
section_2_gradebook.loc['Troy']['raw_grade'] = 89
section_2_gradebook.loc['Troy']['did_extra_credit'] = True
section_2_gradebook.loc['Xin']['raw_grade'] = 89
section_2_gradebook.loc['Xin']['did_extra_credit'] = False
print("Section 1 Gradebook:")
print(section_1_gradebook)
print("\n") # prints a newline character to put an empty line
print("Section 2 Gradebook:")
print(section_2_gradebook)
# Let's give everyone who did Extra Credit 5 additional points
# But we'll put a max of 100 points
# We'll do section 1 first:
for student in section_1_gradebook.index:
    final_grade = section_1_gradebook.loc[student]['raw_grade']
    if section_1_gradebook.loc[student]['did_extra_credit']:
        final_grade = min(final_grade + 5, 100)
    section_1_gradebook.loc[student]['final_grade'] = final_grade
# Now let's print out the results:
print("Section 1 Final Grades:")
print(section_1_gradebook)
# Now let's do Section 2 (copy-pasted from above -- exactly the pattern that invites bugs):
for student in section_2_gradebook.index:
    final_grade = section_2_gradebook.loc[student]['raw_grade']
    if section_2_gradebook.loc[student]['did_extra_credit']:
        final_grade = min(final_grade + 5, 100)
    section_2_gradebook.loc[student]['final_grade'] = final_grade
# Now let's print out the results:
print("Section 2 Final Grades:")
print(section_2_gradebook)
# Let's put our extra credit code in a function!
# Let's decompose our other function into two functions that each do one thing
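# A minimal sketch of the functions the calls below assume (an assumed
# implementation -- the original left these to be written live). Decomposed
# into two small functions that each do one thing:
def compute_final_grade(raw_grade, did_extra_credit, extra_credit_value):
    final_grade = raw_grade
    if did_extra_credit:
        final_grade = min(final_grade + extra_credit_value, 100)
    return final_grade

def add_final_grades_for_section(section_gradebook, extra_credit_value):
    for student in section_gradebook.index:
        row = section_gradebook.loc[student]
        section_gradebook.loc[student]['final_grade'] = compute_final_grade(
            row['raw_grade'], row['did_extra_credit'], extra_credit_value)
    print(section_gradebook)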
extra_credit_value = 10
add_final_grades_for_section(section_1_gradebook, extra_credit_value)
add_final_grades_for_section(section_2_gradebook, extra_credit_value)
def example_addition_function(num_1, num_2):
This function adds two numbers.
example_addition_function is the function name
Parameters:
num_1: This is the first formal parameter
num_2: This is the second formal parameter
Returns:
sum of num_1 and num_2
added_value = num_1 + num_2
return added_value
arg_1 = 5
arg_2 = 10
result_value = example_addition_function(arg_1, arg_2) # arg_1 and arg_2 are the arguments to the function
def print_message(message):
message_to_print = "Here is your message: " + message
print(message_to_print)
my_message = "Hello, class!"
print_message(my_message)
#print(message_to_print) # this will cause an error. This variable only exists within the function.
def add_name_to_list(name_list, new_name):
name_list.append(new_name)
teachers = ["Bernease", "Dave", "Joe"]
print(teachers)
add_name_to_list(teachers, "Colin")
print(teachers)
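# Contrast (an aside): REBINDING a parameter inside a function does NOT affect
# the caller's variable -- only in-place mutation (like .append above) does.
def replace_list(name_list):
    name_list = ['completely', 'new', 'list']  # rebinds the local name only

replace_list(teachers)
print(teachers)  # unchanged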
# Determine if num is prime
# This code has a bug. What is it?
# Also, the efficiency of the code can be improved. How?
num = 3
is_prime = True
for integer in range(1, num):
if num % integer == 0:
# The "==" operator checks for equality and returns True or False.
# Note the difference between "==" and "=", which assigns a value to a variable.
#
# The "%" operator calculates the remainder of a division operation
# if the remainder is zero, integer is a divisor of num, so num is not prime
print("Not prime!")
is_prime = False
if is_prime:
print("Is prime!")
# Write a function that takes a number as input and determines if it is prime.
# If it is prime, return True
# Otherwise, return False
# Check your function on several test cases to make sure you are getting the correct output:
# Function header: name, arguments
# Function logic
# return something
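# One possible solution (an example sketch -- try writing your own first).
# It fixes the bug above (the range must start at 2, since every number is
# divisible by 1) and tests divisors only up to sqrt(num) for efficiency.
def is_prime(num):
    if num < 2:
        return False
    for integer in range(2, int(num ** 0.5) + 1):
        if num % integer == 0:
            return False
    return True

# Quick test cases:
assert is_prime(2) and is_prime(13)
assert not is_prime(1) and not is_prime(9) and not is_prime(100)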
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Jordan-Wigner and Bravyi-Kitaev Transforms
Step2: Ladder operators and the canonical anticommutation relations
Step3: The parity transform
Step4: The purpose of the string of Pauli $Z$'s is to introduce the phase factor $(-1)^{\sum_{q=0}^{p-1} n_q}$ when acting on a computational basis state; when $e$ is the identity encoder, the modulo-2 sum $\sum_{q=0}^{p-1} n_q$ is computed as $\sum_{q=0}^{p-1} z_q$, which requires reading $p$ bits and leads to a Pauli $Z$ string with weight $p$. A simple solution to this problem is to consider instead the encoder defined by
Step5: Now let's map one of the FermionOperators again but with the total number of modes set to 100.
Step6: Note that with the JWT, it is not necessary to specify the total number of modes in the system because $\tilde{a}_p$ only acts on qubits $0, \ldots, p$ and not any higher ones.
Step7: For the JWT, $U(j) = {j}$ and $P(j) = {0, \ldots, j}$, whereas for the parity transform, $U(j) = {j, \ldots, N-1}$ and $P(j) = {j}$. The size of these sets can be as large as $N$, the total number of modes. These sets are determined by the encoding function $e$.
Step8: The advantage of the Bravyi-Kitaev transform is not apparent in a system with so few modes. Let's look at a system with 100 modes.
Step9: Now let's go back to a system with 10 modes and check that the Bravyi-Kitaev transformed operators satisfy the expected relations.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
try:
import openfermion
except ImportError:
!pip install git+https://github.com/quantumlib/OpenFermion.git@master#egg=openfermion
from openfermion import *
# Create some ladder operators
annihilate_2 = FermionOperator('2')
create_2 = FermionOperator('2^')
annihilate_5 = FermionOperator('5')
create_5 = FermionOperator('5^')
# Construct occupation number operators
num_2 = create_2 * annihilate_2
num_5 = create_5 * annihilate_5
# Map FermionOperators to QubitOperators using the JWT
annihilate_2_jw = jordan_wigner(annihilate_2)
create_2_jw = jordan_wigner(create_2)
annihilate_5_jw = jordan_wigner(annihilate_5)
create_5_jw = jordan_wigner(create_5)
num_2_jw = jordan_wigner(num_2)
num_5_jw = jordan_wigner(num_5)
# Create QubitOperator versions of zero and identity
zero = QubitOperator()
identity = QubitOperator(())
# Check the canonical anticommutation relations
assert anticommutator(annihilate_5_jw, annihilate_2_jw) == zero
assert anticommutator(annihilate_5_jw, annihilate_5_jw) == zero
assert anticommutator(annihilate_5_jw, create_2_jw) == zero
assert anticommutator(annihilate_5_jw, create_5_jw) == identity
# Check that the occupation number operators commute
assert commutator(num_2_jw, num_5_jw) == zero
# Print some output
print("annihilate_2_jw = \n{}".format(annihilate_2_jw))
print('')
print("create_2_jw = \n{}".format(create_2_jw))
print('')
print("annihilate_5_jw = \n{}".format(annihilate_5_jw))
print('')
print("create_5_jw = \n{}".format(create_5_jw))
print('')
print("num_2_jw = \n{}".format(num_2_jw))
print('')
print("num_5_jw = \n{}".format(num_5_jw))
print(jordan_wigner(FermionOperator('99')))
# Set the number of modes in the system
n_modes = 10
# Define a function to perform the parity transform
def parity(fermion_operator, n_modes):
return binary_code_transform(fermion_operator, parity_code(n_modes))
# Map FermionOperators to QubitOperators using the parity transform
annihilate_2_parity = parity(annihilate_2, n_modes)
create_2_parity = parity(create_2, n_modes)
annihilate_5_parity = parity(annihilate_5, n_modes)
create_5_parity = parity(create_5, n_modes)
num_2_parity = parity(num_2, n_modes)
num_5_parity = parity(num_5, n_modes)
# Check the canonical anticommutation relations
assert anticommutator(annihilate_5_parity, annihilate_2_parity) == zero
assert anticommutator(annihilate_5_parity, annihilate_5_parity) == zero
assert anticommutator(annihilate_5_parity, create_2_parity) == zero
assert anticommutator(annihilate_5_parity, create_5_parity) == identity
# Check that the occupation number operators commute
assert commutator(num_2_parity, num_5_parity) == zero
# Print some output
print("annihilate_2_parity = \n{}".format(annihilate_2_parity))
print('')
print("create_2_parity = \n{}".format(create_2_parity))
print('')
print("annihilate_5_parity = \n{}".format(annihilate_5_parity))
print('')
print("create_5_parity = \n{}".format(create_5_parity))
print('')
print("num_2_parity = \n{}".format(num_2_parity))
print('')
print("num_5_parity = \n{}".format(num_5_parity))
print(parity(annihilate_2, 100))
# Create a Majorana operator from our existing operators
c_5 = annihilate_5 + create_5
# Set the number of modes (required for the parity transform)
n_modes = 10
# Transform the Majorana operator to a QubitOperator in two different ways
c_5_jw = jordan_wigner(c_5)
c_5_parity = parity(c_5, n_modes)
# Print some output
print("c_5_jw = \n{}".format(c_5_jw))
print('')
print("c_5_parity = \n{}".format(c_5_parity))
c_5_bk = bravyi_kitaev(c_5, n_modes)
print("c_5_bk = \n{}".format(c_5_bk))
n_modes = 100
# Initialize some Majorana operators
c_17 = FermionOperator('[17] + [17^]')
c_50 = FermionOperator('[50] + [50^]')
c_73 = FermionOperator('[73] + [73^]')
# Map to QubitOperators
c_17_jw = jordan_wigner(c_17)
c_50_jw = jordan_wigner(c_50)
c_73_jw = jordan_wigner(c_73)
c_17_parity = parity(c_17, n_modes)
c_50_parity = parity(c_50, n_modes)
c_73_parity = parity(c_73, n_modes)
c_17_bk = bravyi_kitaev(c_17, n_modes)
c_50_bk = bravyi_kitaev(c_50, n_modes)
c_73_bk = bravyi_kitaev(c_73, n_modes)
# Print some output
print("Jordan-Wigner\n"
"-------------")
print("c_17_jw = \n{}".format(c_17_jw))
print('')
print("c_50_jw = \n{}".format(c_50_jw))
print('')
print("c_73_jw = \n{}".format(c_73_jw))
print('')
print("Parity\n"
"------")
print("c_17_parity = \n{}".format(c_17_parity))
print('')
print("c_50_parity = \n{}".format(c_50_parity))
print('')
print("c_73_parity = \n{}".format(c_73_parity))
print('')
print("Bravyi-Kitaev\n"
"-------------")
print("c_17_bk = \n{}".format(c_17_bk))
print('')
print("c_50_bk = \n{}".format(c_50_bk))
print('')
print("c_73_bk = \n{}".format(c_73_bk))
# Set the number of modes in the system
n_modes = 10
# Map FermionOperators to QubitOperators using the Bravyi-Kitaev transform
annihilate_2_bk = bravyi_kitaev(annihilate_2, n_modes)
create_2_bk = bravyi_kitaev(create_2, n_modes)
annihilate_5_bk = bravyi_kitaev(annihilate_5, n_modes)
create_5_bk = bravyi_kitaev(create_5, n_modes)
num_2_bk = bravyi_kitaev(num_2, n_modes)
num_5_bk = bravyi_kitaev(num_5, n_modes)
# Check the canonical anticommutation relations
assert anticommutator(annihilate_5_bk, annihilate_2_bk) == zero
assert anticommutator(annihilate_5_bk, annihilate_5_bk) == zero
assert anticommutator(annihilate_5_bk, create_2_bk) == zero
assert anticommutator(annihilate_5_bk, create_5_bk) == identity
# Check that the occupation number operators commute
assert commutator(num_2_bk, num_5_bk) == zero
# Print some output
print("annihilate_2_bk = \n{}".format(annihilate_2_bk))
print('')
print("create_2_bk = \n{}".format(create_2_bk))
print('')
print("annihilate_5_bk = \n{}".format(annihilate_5_bk))
print('')
print("create_5_bk = \n{}".format(create_5_bk))
print('')
print("num_2_bk = \n{}".format(num_2_bk))
print('')
print("num_5_bk = \n{}".format(num_5_bk))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A simple classification model using Keras with Cloud TPUs
Step2: Resolve TPU Address
Step3: FLAGS used as model params
Step5: Download training input data and define prediction input & output
Step6: Define model (2 hidden layers with 10 neurons in each)
Step7: Creating a TPU model from a Keras Model
Step8: Training of the model on TPU
Step9: Evaluation of the model
Step10: Save the model
Step11: Prediction
Step12: Prediction on TPU
Step13: Prediction on CPU
|
<ASSISTANT_TASK:>
Python Code:
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
An Example of a classification model using Keras for the Iris dataset.
import json
import os
import pandas as pd
import pprint
import tensorflow as tf
import time
import numpy as np
from tensorflow import keras
print(tf.__version__)
use_tpu = True #@param {type:"boolean"}
if use_tpu:
assert 'COLAB_TPU_ADDR' in os.environ, 'Missing TPU; did you request a TPU in Notebook Settings?'
if 'COLAB_TPU_ADDR' in os.environ:
TF_MASTER = 'grpc://{}'.format(os.environ['COLAB_TPU_ADDR'])
else:
TF_MASTER=''
with tf.Session(TF_MASTER) as session:
print ('List of devices:')
pprint.pprint(session.list_devices())
# Model specific parameters
# TPU address
tpu_address = TF_MASTER
# Number of epochs
epochs = 50
# Number of steps_per_epoch
steps_per_epoch = 20
# NOTE: Total number of training steps = Number of epochs * Number of steps_per_epochs
# Total number of evaluation steps. If '0', evaluation after training is skipped
eval_steps = 50
TRAIN_URL = "http://download.tensorflow.org/data/iris_training.csv"
TEST_URL = "http://download.tensorflow.org/data/iris_test.csv"
CSV_COLUMN_NAMES = ['SepalLength', 'SepalWidth',
'PetalLength', 'PetalWidth', 'Species']
SPECIES = ['Setosa', 'Versicolor', 'Virginica']
PREDICTION_INPUT_DATA = {
'SepalLength': [6.9, 5.1, 5.9, 6.0, 5.5, 6.2, 5.5, 6.3],
'SepalWidth': [3.1, 3.3, 3.0, 3.4, 2.5, 2.9, 4.2, 2.8],
'PetalLength': [5.4, 1.7, 4.2, 4.5, 4.0, 4.3, 1.4, 5.1],
'PetalWidth': [2.1, 0.5, 1.5, 1.6, 1.3, 1.3, 0.2, 1.5],
}
PREDICTION_OUTPUT_DATA = ['Virginica', 'Setosa', 'Versicolor', 'Versicolor', 'Versicolor', 'Versicolor', 'Setosa', 'Virginica']
def maybe_download():
train_path = tf.keras.utils.get_file(TRAIN_URL.split('/')[-1], TRAIN_URL)
test_path = tf.keras.utils.get_file(TEST_URL.split('/')[-1], TEST_URL)
return train_path, test_path
def load_data(y_name='Species'):
Returns the iris dataset as (train_x, train_y), (test_x, test_y).
train_path, test_path = maybe_download()
train = pd.read_csv(train_path, names=CSV_COLUMN_NAMES, header=0, dtype={'SepalLength': pd.np.float32,
'SepalWidth': pd.np.float32, 'PetalLength': pd.np.float32, 'PetalWidth': pd.np.float32, 'Species': pd.np.int32})
train_x, train_y = train, train.pop(y_name)
test = pd.read_csv(test_path, names=CSV_COLUMN_NAMES, header=0, dtype={'SepalLength': pd.np.float32,
'SepalWidth': pd.np.float32, 'PetalLength': pd.np.float32, 'PetalWidth': pd.np.float32, 'Species': pd.np.int32})
test_x, test_y = test, test.pop(y_name)
return (train_x, train_y), (test_x, test_y)
def get_model():
return keras.Sequential([
keras.layers.Dense(10, input_shape=(4,), activation=tf.nn.relu, name = "Dense_1"),
keras.layers.Dense(10, activation=tf.nn.relu, name = "Dense_2"),
keras.layers.Dense(3, activation=None, name = "logits"),
keras.layers.Dense(3, activation=tf.nn.softmax, name = "softmax")
])
dnn_model = get_model()
dnn_model.compile(optimizer=tf.train.AdagradOptimizer(learning_rate=0.1),
loss='sparse_categorical_crossentropy',
metrics=['sparse_categorical_crossentropy'])
dnn_model.summary()
tpu_model = tf.contrib.tpu.keras_to_tpu_model(
dnn_model,
strategy=tf.contrib.tpu.TPUDistributionStrategy(
tf.contrib.cluster_resolver.TPUClusterResolver(TF_MASTER)))
tpu_model.summary()
# Fetch the data
(train_x, train_y), (test_x, test_y) = load_data()
# Train the model
tpu_model.fit(
train_x, train_y,
steps_per_epoch = steps_per_epoch,
epochs=epochs,
)
tpu_model.evaluate(test_x, test_y,
steps = eval_steps)
tpu_model.save_weights('./DNN_TPU_1024.h5', overwrite=True)
COLUMNS_NAME=['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth']
data = pd.DataFrame(PREDICTION_INPUT_DATA, columns=COLUMNS_NAME)
print(data)
predictions = tpu_model.predict(data)
template = ('\nPrediction is "{}" ({:.1f}%), expected "{}"')
for pred_dict, expec in zip(predictions, PREDICTION_OUTPUT_DATA):
class_index = np.argmax(pred_dict)
class_probability = np.max(pred_dict)
print(template.format(SPECIES[class_index], 100*class_probability, expec))
cpu_model = tpu_model.sync_to_cpu()
cpu_predictions = cpu_model.predict(data)
template = ('\nPrediction is "{}" ({:.1f}%), expected "{}"')
for pred_dict, expec in zip(cpu_predictions, PREDICTION_OUTPUT_DATA):
class_index = np.argmax(pred_dict)
class_probability = np.max(pred_dict)
print(template.format(SPECIES[class_index], 100*class_probability, expec))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: The results of this notebook are available for download on the team archive (Prep_premade_GS1.tar.gz)
Step3: <h1><center>Contamination models</center></h1>
Step4: Creating new contamination models
Step5: <h1><center>Fitting a single object, ID = 43403 </center></h1>
Step6: Retrieve and write-out the 2D spectrum of a single object. These cutouts are referred to as "beams".
Step7: Generate the photometric catalog from the 3DHST catalog
Step8: Run the fit
Step9: <h1><center>Grizli products</center></h1>
Step10: SED fit
Step11: Emission line maps
Step12: Fits files
Step13: The results of the fit are stored in *full.fits
Step14: Results of the fit at the best determined redshift
Step15: 1D line fluxes
Step16: <h1><center>Batch-mode fitting</center></h1>
Step17: Fitting every object in the field with jh mag < 22
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import time
import os
import numpy as np
import matplotlib.pyplot as plt
from astropy.io import fits
import drizzlepac
import grizli
import glob
from grizli import utils
import importlib
from grizli.prep import process_direct_grism_visit
from hsaquery import query, overlaps
from grizli.pipeline import auto_script
from grizli.multifit import GroupFLT, MultiBeam, get_redshift_fit_defaults
import os
from grizli.pipeline import photoz
from astropy.table import Table
import eazy
from IPython.display import Image
field = 'GS1'
ref_filter = 'F105W'
HOME_PATH = '/Users/rsimons/Desktop/clear/for_hackday/%s'%field
PATH_TO_SCRIPTS = '/Users/rsimons/Desktop/git/clear_local/example_notebooks'
PATH_TO_CATS = '/Users/rsimons/Desktop/clear/Catalogs'
PATH_TO_RAW = glob.glob(HOME_PATH + '/*/RAW')[0]
PATH_TO_PREP = glob.glob(HOME_PATH + '/*/PREP')[0]
class Pointing():
Generalization of GN1, GS1, ERSPRIME, etc
To change field-dependent catalog, seg map, ref image, and padding
only need to change them here.
def __init__(self, field, ref_filter):
if 'N' in field.upper():
self.pad = 500 # really only necessary for GDN
self.radec_catalog = PATH_TO_CATS + '/goodsN_radec.cat'
self.seg_map = PATH_TO_CATS + '/Goods_N_plus_seg.fits'
self.catalog = PATH_TO_CATS + '/goodsn-F105W-astrodrizzle-v4.4_drz_sub_plus.cat'
self.ref_image = PATH_TO_CATS + '/goodsn-F105W-astrodrizzle-v4.4_drz_sci.fits'
self.params = {}
self.params['CATALOG_FILE'] = PATH_TO_CATS + '/{0}_3dhst.{1}.cats/Catalog/{0}_3dhst.{1}.cat'.format('goodsn', 'v4.1')
self.params['Z_STEP'] = 0.002
self.params['Z_MAX'] = 4
self.params['MAIN_OUTPUT_FILE'] = '{0}_3dhst.{1}.eazypy'.format('goodsn', 'v4.1')
self.params['PRIOR_FILTER'] = 205
self.params['MW_EBV'] = {'aegis':0.0066, 'cosmos':0.0148, 'goodss':0.0069,
'uds':0.0195, 'goodsn':0.0103}['goodsn']
self.params['TEMPLATES_FILE'] = 'templates/fsps_full/tweak_fsps_QSF_12_v3.param'
self.translate_file = PATH_TO_CATS + '/{0}_3dhst.{1}.cats/Eazy/{0}_3dhst.{1}.translate'.format('goodsn', 'v4.1')
elif 'S' in field.upper():
self.pad = 200 # grizli default
self.radec_catalog = PATH_TO_CATS + '/goodsS_radec.cat'
self.seg_map = PATH_TO_CATS + '/Goods_S_plus_seg.fits'
self.catalog = PATH_TO_CATS + '/goodss-F105W-astrodrizzle-v4.3_drz_sub_plus.cat'
self.ref_image = PATH_TO_CATS + '/goodss-F105W-astrodrizzle-v4.3_drz_sci.fits'
self.params = {}
self.params['CATALOG_FILE'] = PATH_TO_CATS + '/{0}_3dhst.{1}.cats/Catalog/{0}_3dhst.{1}.cat'.format('goodss', 'v4.1')
self.params['Z_STEP'] = 0.002
self.params['Z_MAX'] = 4
self.params['MAIN_OUTPUT_FILE'] = '{0}_3dhst.{1}.eazypy'.format('goodss', 'v4.1')
self.params['PRIOR_FILTER'] = 205
self.params['MW_EBV'] = {'aegis':0.0066, 'cosmos':0.0148, 'goodss':0.0069,
'uds':0.0195, 'goodsn':0.0103}['goodss']
self.params['TEMPLATES_FILE'] = 'templates/fsps_full/tweak_fsps_QSF_12_v3.param'
self.translate_file = PATH_TO_CATS + '/{0}_3dhst.{1}.cats/Eazy/{0}_3dhst.{1}.translate'.format('goodss', 'v4.1')
os.chdir(PATH_TO_PREP)
files = glob.glob('%s/*flt.fits'%PATH_TO_RAW)
info = grizli.utils.get_flt_info(files)
visits, filters = grizli.utils.parse_flt_files(info=info, uniquename=True)
product_names = np.array([visit['product'] for visit in visits])
filter_names = np.array([visit['product'].split('-')[-1] for visit in visits])
basenames = np.array([visit['product'].split('.')[0]+'.0' for visit in visits])
all_grism_files = []
all_direct_files = []
ref_filter_1 = 'F105W'
ref_filter_2 = 'F140W'
ref_grism_1 = 'G102'
ref_grism_2 = 'G141'
for v, visit in enumerate(visits):
product = product_names[v]
basename = basenames[v]
filt1 = filter_names[v]
if (ref_filter_1.lower() in filt1) or (ref_filter_2.lower() in filt1):
all_direct_files.extend(visit['files'])
grism_index_1 = np.where((basenames == basename) & (filter_names == ref_grism_1.lower()))[0]
grism_index_2 = np.where((basenames == basename) & (filter_names == ref_grism_2.lower()))[0]
if len(grism_index_1) > 0:
all_grism_files.extend(visits[grism_index_1[0]]['files'])
if len(grism_index_2) > 0:
all_grism_files.extend(visits[grism_index_2[0]]['files'])
print ('Number of direct files:', len(all_direct_files))
print ('Number of grism files:', len(all_grism_files))
p = Pointing(field = field, ref_filter = ref_filter_1)
print('Initializing (or loading pre-existing) contamination models...')
grp = GroupFLT(grism_files=all_grism_files,
direct_files=[],
ref_file = p.ref_image,
seg_file = p.seg_map,
catalog = p.catalog,
pad=p.pad,
cpu_count=8)
compute_models = False
if compute_models:
print('Computing first-pass flat continuum models...')
grp.compute_full_model(mag_limit = 25)
fig, axes = plt.subplots(1,3, figsize=[30,10])
axes[0].imshow(grp.FLTs[0].grism['SCI'], vmin=-0.02, vmax=0.2, cmap='cubehelix_r',interpolation='Nearest', origin='lower')
axes[1].imshow(grp.FLTs[0].model, vmin=-0.02, vmax=0.2, cmap='cubehelix_r',interpolation='Nearest', origin='lower')
axes[2].imshow(grp.FLTs[0].grism['SCI'] - grp.FLTs[0].model, vmin=-0.02, vmax=0.2, cmap='cubehelix_r',interpolation='Nearest', origin='lower')
axes[0].set_title('data', fontsize = 40)
axes[1].set_title('model', fontsize = 40)
axes[2].set_title('data - model', fontsize = 40)
for ax in axes:
ax.set_xlim(200,1200)
ax.set_ylim(200,1200)
if compute_models:
print('Re-computing continuum models with higher-order polynomials and subtracting off contamination..')
grp.refine_list(poly_order=2, mag_limits=[16, 24], verbose=False)
fig.clf()
fig, axes = plt.subplots(1,3, figsize=[30,10])
axes[0].imshow(grp.FLTs[0].grism['SCI'], vmin=-0.02, vmax=0.2, cmap='cubehelix_r',interpolation='Nearest', origin='lower')
axes[1].imshow(grp.FLTs[0].model, vmin=-0.02, vmax=0.2, cmap='cubehelix_r',interpolation='Nearest', origin='lower')
axes[2].imshow(grp.FLTs[0].grism['SCI'] - grp.FLTs[0].model, vmin=-0.02, vmax=0.2, cmap='cubehelix_r',interpolation='Nearest', origin='lower')
axes[0].set_title('data', fontsize = 40)
axes[1].set_title('model', fontsize = 40)
axes[2].set_title('data - model', fontsize = 40)
for ax in axes:
ax.set_xlim(200,1200)
ax.set_ylim(200,1200)
if compute_models:
print('Saving contamination models')
grp.save_full_data()
# First is set with combined emission line complexes for the redshift fit
# (don't allow infinite freedom) of the line ratios / fluxes
templ0 = grizli.utils.load_templates(fwhm=1200, line_complexes=True, stars=False,
full_line_list=None, continuum_list=None,
fsps_templates=True)
# Second set has individual line templates for fitting the line fluxes
templ1 = grizli.utils.load_templates(fwhm=1200, line_complexes=False, stars=False,
full_line_list=None, continuum_list=None,
fsps_templates=True)
fmt = '{0:<36s} {1:<36s}'
print(fmt.format('templ0', 'templ1'))
print(fmt.format('------', '------'))
for i in range(len(templ1)):
if i > len(templ0)-1:
print(fmt.format('', list(templ1.keys())[i]))
else:
print(fmt.format(list(templ0.keys())[i], list(templ1.keys())[i]))
# Parameters for drizzled line maps
pline = {'kernel': 'point', 'pixfrac': 0.2, 'pixscale': 0.1, 'size': 8, 'wcs': None}
'''
from multifit.drizzle_to_wavelength
---
wcs : `~astropy.wcs.WCS` or None
Pre-determined WCS. If not specified, generate one based on `ra`,
`dec`, `pixscale` and `pixscale`
size : float
Size of the output thumbnail, in arcsec
pixscale : float
Pixel scale of the output thumbnail, in arcsec
pixfrac : float
Drizzle PIXFRAC (for `kernel` = 'point')
kernel : str, ('square' or 'point')
Drizzle kernel to use
'''
eazy.symlink_eazy_inputs(path=os.path.dirname(eazy.__file__)+'/data',
path_is_env=False)
ez = eazy.photoz.PhotoZ(param_file=None, translate_file=p.translate_file,
zeropoint_file=None, params=p.params,
load_prior=True, load_products=False)
ep = photoz.EazyPhot(ez, grizli_templates=templ0, zgrid=ez.zgrid)
id_fit = 43403
beams = grp.get_beams(id_fit, size=80)
print("beams: ", beams)
mb = grizli.multifit.MultiBeam(beams, fcontam=1.0, group_name=field)
mb.write_master_fits()
# Fit polynomial model for initial continuum subtraction
wave = np.linspace(2000,2.5e4,100)
poly_templates = grizli.utils.polynomial_templates(
wave=wave,
order=7,
line=False)
pfit = mb.template_at_z(
z=0,
templates=poly_templates,
fit_background=True,
fitter='lstsq',
fwhm=1400,
get_uncertainties=2)
hdu, fig = mb.drizzle_grisms_and_PAs(
size=32,
fcontam=0.2,
flambda=False,
scale=1,
pixfrac=0.5,
kernel='point',
make_figure=True,
usewcs=False,
zfit=pfit,
diff=True)
# Save drizzled ("stacked") 2D trace as PNG and FITS
fig.savefig('{0}_{1:05d}.stack.png'.format(field, id_fit))
hdu.writeto('{0}_{1:05d}.stack.fits'.format(field, id_fit), clobber=True)
print('GDS %i'%id_fit)
print('\tRA:', mb.ra)
print('\tDEC:', mb.dec)
print('Finding nearest source in 3DHST photometric catalog..')
phot, ii, dd = ep.get_phot_dict(mb.ra, mb.dec)
print('\tmatched source within %.5f arcsec'%dd.value)
# The order of the polynomial used to scale the photometry to the spectrum
# phot_scale_order = 0, multiplicative
# phot_scale_order = 1, linear
# phot_scale_order = 2, quadratic
# etc.
phot_scale_order = 0
# run the fit
out = grizli.fitting.run_all(id_fit,
t0=templ0,
t1=templ1,
fwhm=1200,
zr=[0.0, 3.5],
dz=[0.004, 0.0005],
fitter='nnls',
group_name=field,
fit_stacks=True,
prior=None,
fcontam=0.,
pline=pline,
mask_sn_limit=7,
fit_only_beams=False,
fit_beams=True,
root=field,
fit_trace_shift=False,
phot=phot,
verbose=False,
scale_photometry= phot_scale_order,
show_beams=True)
Image(filename = PATH_TO_PREP + '/GS1_43403.stack.png', width = 1000, height = 1000)
Image(filename = PATH_TO_PREP + '/GS1_43403.sed.png', width = 1000, height = 1000)
Image(filename = PATH_TO_PREP + '/GS1_43403.full.png', width = 1000, height = 1000)
# Results of the fit are saved in *full.fits
fit_hdu = fits.open('{0}_{1:05d}.full.fits'.format(field, id_fit))
print('{0} has lines [{1}]'.format(fit_hdu.filename(), fit_hdu[0].header['HASLINES']))
# Helper script for plotting line maps, not generated automatically
fig = grizli.fitting.show_drizzled_lines(fit_hdu, size_arcsec=1.6, cmap='plasma_r')
fig.savefig('{0}_{1:05d}.line.png'.format(field, id_fit))
plt.close('all')
Image(filename = PATH_TO_PREP + '/GS1_43403.line.png', width = 1000, height = 1000)
fits_files = glob.glob('GS1_43403*fits')
for file in fits_files:
im = fits.open(file)
print('\n\n{1}\n{0}\n{1}\n\n'.format(file, '='*len(file)))
im.info()
full_fits = fits.open('GS1_43403.full.fits')
zfit_stack = Table(fit_hdu['ZFIT_STACK'].data)
plt.plot(zfit_stack['zgrid'], zfit_stack['pdf'], label='Stacked')
# zoom in around the initial best-guess with the individual "beam" spectra
fit_beam = Table(fit_hdu['ZFIT_BEAM'].data)
plt.plot(fit_beam['zgrid'], fit_beam['pdf'], label='Zoom, beams')
plt.xlim(0.5, 2.5); plt.semilogy(); plt.grid()
plt.ylim(1.e-200, 1.e10)
plt.xlabel('z'); plt.ylabel('PDF(z)'); plt.legend()
mb, st, fit, tfit, line_hdu = out
print(tfit.keys())
print('z = {0}'.format(tfit['z']))
print('Continuum template, cont1d: ', tfit['cont1d'].__class__)
fig, ax = plt.subplots(1,1, figsize = (10, 8))
ax.plot(tfit['cont1d'].wave/1.e4, tfit['cont1d'].flux, label='continuum')
ax.plot(tfit['line1d'].wave/1.e4, tfit['line1d'].flux, label='total')
ax.set_xlim(0.8, 1.7); ax.set_ylim(0,3.e-18); plt.grid()
ax.set_xlabel(r'$\lambda$, microns'); ax.set_ylabel(r'$f_\lambda$, erg/s/cm2/A'); plt.legend()
# cfit, coeffs, covar are coefficients of the template fit
# and their covariance
full_fits = fits.open('GS1_43403.full.fits')
full_hdr = full_fits[0].header
numlines = full_hdr['NUMLINES']
print ('%i lines detected'%numlines)
for n in np.arange(numlines):
line_name = full_hdr['LINE%.3i'%(n+1)]
line_flux = full_hdr['FLUX%.3i'%(n+1)]
line_err = full_hdr['ERR%.3i'%(n+1)]
print ('\t\t\t' + line_name)
print ('\t flux = %.2f'%(1.e17 * line_flux), 'x 10^-17 erg/s/cm^2')
print ('\t eflux = %.2f'%(1.e17 * line_err), 'x 10^-17 erg/s/cm^2')
print ('\t\t flux/eflux = %.1f'%(line_flux/line_err))
print ('\n\n\n')
full_hdr
def grizli_fit(grp, id_fit, field = '', ref_filter = 'F105W', use_pz_prior = True, use_phot = True, scale_phot = True, templ0 = None, templ1 = None, ep = None, pline = None):
beams = grp.get_beams(id_fit, size=80)
if beams != []:
print("beams: ", beams)
mb = grizli.multifit.MultiBeam(beams, fcontam=1.0, group_name=field)
mb.write_master_fits()
# Fit polynomial model for initial continuum subtraction
wave = np.linspace(2000,2.5e4,100)
poly_templates = grizli.utils.polynomial_templates(
wave=wave,
order=7,
line=False)
pfit = mb.template_at_z(
z=0,
templates=poly_templates,
fit_background=True,
fitter='lstsq',
fwhm=1400,
get_uncertainties=2)
if pfit != None:
# Drizzle grisms / PAs
hdu, fig = mb.drizzle_grisms_and_PAs(
size=32,
fcontam=0.2,
flambda=False,
scale=1,
pixfrac=0.5,
kernel='point',
make_figure=True,
usewcs=False,
zfit=pfit,
diff=True)
# Save drizzled ("stacked") 2D trace as PNG and FITS
fig.savefig('{0}_{1:05d}.stack.png'.format(field, id_fit))
hdu.writeto('{0}_{1:05d}.stack.fits'.format(field, id_fit), clobber=True)
if use_pz_prior:
# use redshift prior from z_phot (assumes eazy 'tempfilt' and 'pz' outputs were attached to the Pointing object)
prior = np.zeros((2, len(p.tempfilt['zgrid'])))
prior[0] = p.tempfilt['zgrid']
prior[1] = p.pz['chi2fit'][:, id_fit]  # was ':,id', which referenced the id() builtin instead of the object ID
else:
prior = None
order = 0
tab = utils.GTable()
tab['ra'] = [mb.ra]
tab['dec'] = [mb.dec]
tab['id'] = id_fit
phot, ii, dd = ep.get_phot_dict(tab['ra'][0], tab['dec'][0])
out = grizli.fitting.run_all(
id_fit,
t0=templ0,
t1=templ1,
fwhm=1200,
zr=[0.0, 3.5],
dz=[0.004, 0.0005],
fitter='nnls',
group_name=field,
fit_stacks=True,
prior=prior,  # pass the (optional) photo-z prior computed above instead of hard-coding None
fcontam=0.,
pline=pline,
mask_sn_limit=7,
fit_only_beams=False,
fit_beams=True,
root=field,
fit_trace_shift=False,
phot=phot,
verbose=True,
scale_photometry=order,
show_beams=True)
mb, st, fit, tfit, line_hdu = out
fit_hdu = fits.open('{0}_{1:05d}.full.fits'.format(field, id_fit))
fit_hdu.info()
# same as the fit table above, redshift fit to the stacked spectra
fit_stack = Table(fit_hdu['ZFIT_STACK'].data)
# zoom in around the initial best-guess with the individual "beam" spectra
fit_beam = Table(fit_hdu['ZFIT_BEAM'].data)
templ = Table(fit_hdu['TEMPL'].data)
print('{0} has lines [{1}]'.format(fit_hdu.filename(), fit_hdu[0].header['HASLINES']))
# Helper script for plotting them, not generated automatically
fig = grizli.fitting.show_drizzled_lines(fit_hdu, size_arcsec=1.6, cmap='plasma_r')
fig.savefig('{0}_{1:05d}.line.png'.format(field, id_fit))
plt.close('all')
#Fit all objects with MAG_AUTO < 22
if False:
good = np.where(np.array(grp.catalog['MAG_AUTO']) < 22)[0]
for g in good:
id_fit = np.array(grp.catalog['NUMBER'])[g]
mag_fit = grp.catalog['MAG_AUTO'][g]
grizli_fit(grp, id_fit = id_fit, field = field,
use_pz_prior = False, use_phot = True, scale_phot = True,
templ0 = templ0, templ1 = templ1, ep = ep, pline = pline,)
'''
x = templ1['fsps/fsps_QSF_12_v3_nolines_002.dat']
fig, ax = plt.subplots(1,1, figsize = (10,10))
ax.plot(x.wave, x.flux)
ax.semilogx()
ax.set_xlim(4000, 7000)
'''
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In most cases, anything a dict comprehension can do can also be done by creating a sequence of tuples and passing it to the dict() func
Step2: But a dict comprehension expresses the intent more clearly and also runs faster (nearly twice as fast)<br>the second example program can likewise be rewritten
Step3: The two comprehensions above -- the difference between '&' and 'and':
Step4: 1.18 Mapping names to sequence elements
Step5: A namedtuple instance looks like an ordinary class instance, is interchangeable with the tuple type, and supports all normal tuple operations such as indexing and unpacking
Step6: A namedtuple's main use is to free your code from subscript operations: if a database call returns a large list of tuples and you access the elements by index, your code breaks when a new column is added to the table; with a namedtuple it will not
Step7: Subscript operations usually make code unclear and very dependent on the structure of the records<br>if ambiguity arises
Step8: Another use of namedtuple is as a replacement for a dict, because dict storage needs more memory; if you need to build a very large data structure containing dictionaries, using namedtuples is more efficient. BUT unlike a dict, a namedtuple is immutable
Step9: So s.shares = 98 cannot be used above<br> to change an attribute, use the namedtuple instance's _replace() method, which creates a brand-new namedtuple with the corresponding field replaced by the new value
Step10: A handy feature of the _replace() method is that when your namedtuple has optional or missing fields, it is a very convenient way to fill in data: first create a prototype (initial-state) tuple containing default values, then use _replace() to create instances with the values updated
Step11: If your goal is an efficient data structure whose instance attributes need frequent updating, a namedtuple is not the best choice; you can use a class containing __slots__ instead (a sketch follows the namedtuple code below)<br>REF
Step12: The above is the elegant syntax for passing a generator expression as the single argument to a func (no extra pair of parentheses needed); with or without the parentheses the two are equivalent
Step13: Using a generator expression as the argument is more efficient and elegant than first creating a temporary list
Step14: Creating the extra temporary list above is slower -- it builds a huge temporary data structure that is used only once and then thrown away!
|
<ASSISTANT_TASK:>
Python Code:
prices = {
'ACME': 45.23,
'AAPL': 612.78,
'IBM': 205.55,
'HPQ': 37.20,
'FB': 10.75
}
# Make a dictionary of all prices over 200
p1 = {key: value for key, value in prices.items() if value > 200}
# Make a dictionary of tech stocks
tech_names = {'AAPL', 'IBM', 'HPQ', 'MSFT'}
p2 = {key: value for key, value in prices.items() if key in tech_names}
print(p1,'\n',p2)
p3 = dict((key, value) for key, value in prices.items() if value > 200)
print(p3)
# Make a dictionary of tech stocks
tech_names = {'AAPL', 'IBM', 'HPQ', 'MSFT'}
p4 = {key: prices[key] for key in prices.keys() & tech_names}
print(p4)
# p4 = {key: prices[key] for key in prices.keys() and tech_names}
prices.keys()
tech_names
type(prices.keys()) == type(tech_names)
prices.keys() & tech_names
type(prices.keys() & tech_names)
from collections import namedtuple
Subsciber = namedtuple('Subscriber',['addr','joined'])
sub = Subsciber('jonesy@exit.com','2012-10-19')
sub
sub.addr
sub.joined
len(sub)
addr, joined = sub
print(addr,'\n',joined)
def compute_cost(records):
total = 0.0
for rec in records:
total += rec[1] * rec[2]
return total
Stock = namedtuple('Stock',['name','shares','price'])
def compute_cost2(records):
total = 0.0
for rec in records:
s = Stock(*rec)
total += s.shares * s.price
return total
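# Quick usage check (hypothetical records of (name, shares, price)): both
# versions compute the same total.
records = [('ACME', 50, 91.1), ('IBM', 100, 45.23)]
print(compute_cost(records), compute_cost2(records))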
s = Stock('Ace',100,98.9)
s
s.shares
s.shares = 98
s2 = s._replace(shares=98)
print(s,'\n',s2)
# Create a ST type
ST = namedtuple('ST',['name','share','price','date','time'])
# Create a prototype instance
ST_prototype = ST('', 0, 0.0, None, None)
# Function to convert a dictionary to a ST
def dict_to_ST(s):
return ST_prototype._replace(**s)
a = ('hi',1,12,'2016-09-10','18:19:18')
dict_to_ST(a)
a = {'name':'hi','share':1,'price':12,'date':'2016-09-10','time':'18:19:18'}
dict_to_ST(a)
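# Sketch of the __slots__ alternative mentioned above (assumed field names
# mirror the Stock namedtuple): instances stay memory-efficient but, unlike a
# namedtuple, remain mutable.
class MutableStock(object):
    __slots__ = ['name', 'shares', 'price']
    def __init__(self, name, shares, price):
        self.name = name
        self.shares = shares
        self.price = price

ms = MutableStock('Ace', 100, 98.9)
ms.shares = 98  # in-place updates are allowed here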
# want the sum of squares
nums = [1,2,3,4,5,6]
s = sum(x * x for x in nums)
s
# Determine if any .py files exist in a directory
# determine whether any Python files exist in this directory
# any() returns True as soon as a single .py file exists
import os
files = os.listdir('f:\Save\python')
if any(name.endswith('.py') for name in files):
print('There be python file!')
else:
print('Sorry no python.')
# Output a tuple as CSV
s = ('ACME',50,123.34)
print(','.join(str(x) for x in s))
# Data reduction across fileds of a data structure
portfolio = [
{'name':'GOOG','share':50},
{'name':'Yahoo','share':75},
{'name':'ALO','share':20},
{'name':'CSX','share':85}
]
min_share = min(s['share'] for s in portfolio)
min_share
s = sum((x * x for x in nums)) # explicitly pass a generator expression object
s = sum(x * x for x in nums) # more elegant equivalent -- the extra parentheses are omitted
s = sum([x * x for x in nums])
s
# Odiginal : Return 20
min_s1 = min(s['share'] for s in portfolio)
# Alternative : Return ['name':'AOL,'share':20]
min_s2 = min(portfolio, key=lambda s:s['share'])
print(min_s1,'\n',min_s2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Face Generation
Step3: Explore the Data
Step5: CelebA
Step7: Preprocess the Data
Step10: Input
Step13: Discriminator
Step16: Generator
Step19: Loss
Step22: Optimization
Step25: Neural Network Training
Step27: Train
Step29: MNIST
Step31: CelebA
|
<ASSISTANT_TASK:>
Python Code:
data_dir = './data'
# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
#data_dir = '/input'
DON'T MODIFY ANYTHING IN THIS CELL
import helper
helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
show_n_images = 25
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
show_n_images = 25
DON'T MODIFY ANYTHING IN THIS CELL
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(mnist_images, 'RGB'))
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
import problem_unittests as tests
def model_inputs(image_width, image_height, image_channels, z_dim):
Create the model inputs
:param image_width: The input image width
:param image_height: The input image height
:param image_channels: The number of image channels
:param z_dim: The dimension of Z
:return: Tuple of (tensor of real input images, tensor of z data, learning rate)
inputs_real = tf.placeholder(tf.float32, (None, image_width, image_height, image_channels), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
learning_rate = tf.placeholder(tf.float32, (None))
return inputs_real, inputs_z, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def discriminator(images, reuse=False):
Create the discriminator network
:param images: Tensor of input image(s)
:param reuse: Boolean if the weights should be reused
:return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
alpha=0.2
x = images
with tf.variable_scope('discriminator', reuse=reuse):
x = tf.layers.conv2d(x, 64, 4, strides=2, padding="same")
x = tf.layers.batch_normalization(x, training=True)
x = tf.maximum(alpha * x, x)
#x = tf.layers.dropout(x, 0.5)
x = tf.layers.conv2d(x, 128, 4, strides=2, padding="same")
x = tf.layers.batch_normalization(x, training=True)
x = tf.maximum(alpha * x, x)
#x = tf.layers.dropout(x, 0.5)
x = tf.layers.conv2d(x, 256, 4, strides=2, padding="same")
x = tf.layers.batch_normalization(x, training=True)
x = tf.maximum(alpha * x, x)
#x = tf.layers.dropout(x, 0.5)
x = tf.reshape(x, (-1, 4 * 4 * 256))
logits = tf.layers.dense(x, 1)
out = tf.sigmoid(logits)
return out, logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_discriminator(discriminator, tf)
def generator(z, out_channel_dim, is_train=True):
Create the generator network
:param z: Input z
:param out_channel_dim: The number of channels in the output image
:param is_train: Boolean if generator is being used for training
:return: The tensor output of the generator
reuse = not is_train
alpha= 0.2
with tf.variable_scope('generator', reuse=reuse):
x = tf.layers.dense(z, 4 * 4 * 512)
x = tf.reshape(x, (-1, 4, 4, 512))
x = tf.layers.batch_normalization(x, training=is_train)
#x = tf.layers.dropout(x, 0.5)
x = tf.maximum(alpha * x, x)
#print(x.shape)
x = tf.layers.conv2d_transpose(x, 256, 4, strides=1, padding="valid")
x = tf.layers.batch_normalization(x,training=is_train)
x = tf.maximum(alpha * x, x)
#print(x.shape)
x = tf.layers.conv2d_transpose(x, 128, 4, strides=2, padding="same")
x = tf.layers.batch_normalization(x,training=is_train)
x = tf.maximum(alpha * x, x)
#print(x.shape)
x = tf.layers.conv2d_transpose(x, out_channel_dim, 4, strides=2, padding="same")
#x = tf.maximum(alpha * x, x)
logits = x
out = tf.tanh(logits)
return out
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_generator(generator, tf)
def model_loss(input_real, input_z, out_channel_dim):
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
smooth = 0.1
_, d_logits_real = discriminator(input_real, reuse=False)
fake = generator(input_z, out_channel_dim, is_train=True)
_, d_logits_fake = discriminator(fake, reuse=True)  # discriminator returns (output, logits); unpack to get the logits
# Calculate losses
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake)))
return d_loss, g_loss
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_loss(model_loss)
def model_opt(d_loss, g_loss, learning_rate, beta1):
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
all_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
g_update_ops = [var for var in all_update_ops if var.name.startswith('generator')]
d_update_ops = [var for var in all_update_ops if var.name.startswith('discriminator')]
with tf.control_dependencies(d_update_ops):
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
with tf.control_dependencies(g_update_ops):
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_opt(model_opt, tf)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
Show example output for the generator
:param sess: TensorFlow session
:param n_images: Number of Images to display
:param input_z: Input Z Tensor
:param out_channel_dim: The number of channels in the output image
:param image_mode: The mode to use for images ("RGB" or "L")
cmap = None if image_mode == 'RGB' else 'gray'
z_dim = input_z.get_shape().as_list()[-1]
example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])
samples = sess.run(
generator(input_z, out_channel_dim, False),
feed_dict={input_z: example_z})
images_grid = helper.images_square_grid(samples, image_mode)
pyplot.imshow(images_grid, cmap=cmap)
pyplot.show()
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
Train the GAN
:param epoch_count: Number of epochs
:param batch_size: Batch Size
:param z_dim: Z dimension
:param learning_rate: Learning Rate
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:param get_batches: Function to get batches
:param data_shape: Shape of the data
:param data_image_mode: The image mode to use for images ("RGB" or "L")
inputs_real, inputs_z, lr = model_inputs(data_shape[1], data_shape[2], data_shape[3], z_dim)
d_loss, g_loss = model_loss(inputs_real, inputs_z, data_shape[-1])
d_train_opt, g_train_opt = model_opt(d_loss, g_loss, learning_rate, beta1)
batch_num = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epoch_count):
for batch_images in get_batches(batch_size):
batch_num = batch_num+1
batch_images = batch_images * 2  # helper yields images in [-0.5, 0.5]; rescale to [-1, 1] to match the generator's tanh output
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_dim))
_ = sess.run(d_train_opt, feed_dict={inputs_real: batch_images, inputs_z: batch_z, lr:learning_rate})
_ = sess.run(g_train_opt, feed_dict={inputs_z: batch_z, lr:learning_rate})
if batch_num % 100 == 0:
train_loss_d = d_loss.eval({inputs_z:batch_z, inputs_real: batch_images})
train_loss_g = g_loss.eval({inputs_z:batch_z})
print("Epoch {}/{} batch {}...".format(epoch_i+1, epoch_count, batch_num),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
batch_size = 64
z_dim = 100
learning_rate = 0.001
beta1 = 0.6
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
epochs = 2
mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
mnist_dataset.shape, mnist_dataset.image_mode)
batch_size = 64
z_dim = 100
learning_rate = 0.001
beta1 = 0.6
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
epochs = 1
celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
celeba_dataset.shape, celeba_dataset.image_mode)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Perform the simulation
Step2: Plot pretty pictures of what we just did
|
<ASSISTANT_TASK:>
Python Code:
from tvb.simulator.lab import *
LOG.info("Configuring...")
#Initialise a Model, Coupling, and Connectivity.
oscillator = models.Generic2dOscillator()
white_matter = connectivity.Connectivity(load_default=True)
white_matter.speed = numpy.array([4.0])
white_matter_coupling = coupling.Linear(a=0.0126)
#Initialise an Integrator
heunint = integrators.HeunDeterministic(dt=2 ** -4)
#Initialise some Monitors with period in physical time
momo = monitors.TemporalAverage(period=1.0) # 1000Hz
mama = monitors.Bold(period=500) # defaults to one data point every 500ms
#Bundle them
what_to_watch = (momo, mama)
#Define the stimulus
#Specify a weighting for regions to receive stimuli...
white_matter.configure() # Because we want access to number_of_regions
nodes = [0, 7, 13, 33, 42]
weighting = numpy.zeros((white_matter.number_of_regions, ))
weighting[nodes] = numpy.array([2.0 ** -2, 2.0 ** -3, 2.0 ** -4, 2.0 ** -5, 2.0 ** -6])
eqn_t = equations.Gaussian()
eqn_t.parameters["midpoint"] = 15000.0
eqn_t.parameters["sigma"] = 4.0
stimulus = patterns.StimuliRegion(temporal=eqn_t,
connectivity=white_matter,
weight=weighting)
#Initialise Simulator -- Model, Connectivity, Integrator, Monitors, and stimulus.
sim = simulator.Simulator(model=oscillator, connectivity=white_matter,
coupling=white_matter_coupling,
integrator=heunint, monitors=what_to_watch,
stimulus=stimulus)
sim.configure()
LOG.info("Starting simulation...")
#Perform the simulation
tavg_time = []
tavg_data = []
bold_time = []
bold_data = []
for tavg, bold in sim(simulation_length=30000):
if not tavg is None:
tavg_time.append(tavg[0])
tavg_data.append(tavg[1])
if not bold is None:
bold_time.append(bold[0])
bold_data.append(bold[1])
LOG.info("Finished simulation.")
#Plot the stimulus
plot_pattern(sim.stimulus)
#Make the lists numpy.arrays for easier use.
TAVG = numpy.array(tavg_data)
BOLD = numpy.array(bold_data)
#Plot temporally averaged time series
figure(1)
plot(tavg_time, TAVG[:, 0, :, 0])
title("Temporal average")
#Plot bold time series
figure(2)
plot(bold_time, BOLD[:, 0, :, 0])
title("BOLD")
#Show them
show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download ReadMe and chandra.dat files and save them inside ./data/ dir
Step2: Dealing with null values from Vizier metadata
Step3: We can see records in columns logLSoft, logLHard, logLFull, for example, showing the values -999.0.
Step7: That's it. Once the null values are declared in the ReadMe (with placeholders matching each column's Format), such values are properly handled.
Step8: Here it is, the MOC catalog (the list of elements to be more precise)
Step9: The plot made by MOCPy
Step10: And here after importing it to Aladin
|
<ASSISTANT_TASK:>
Python Code:
baseurl = 'ftp://cdsarc.u-strasbg.fr/pub/cats/J/ApJ/817/172/'
readme_file = 'ReadMe'
chandra_file = 'chandra.dat'
import astropy
print "astropy version:",astropy.__version__
import mocpy
print "mocpy version:",mocpy.__version__
import healpy
print "healpy version:",healpy.__version__
def download(path,filename,outdir):
from urllib2 import urlopen
url = path+filename
f = urlopen(url)
data = f.read()
with open(outdir+filename, "wb") as fp:
fp.write(data)
import os
if not os.path.isdir('data'):
os.mkdir('data')
download(baseurl,readme_file,outdir='data/')
download(baseurl,chandra_file,outdir='data/')
!ls 'data/'
from astropy.table import Table
chandra = Table.read('data/chandra.dat',readme='data/ReadMe',format='ascii.cds')
chandra # Notice the '-999' values
from astropy.table import Table
chandra = Table.read('data/chandra.dat',readme='data/ReadMe_fix',format='ascii.cds')
chandra
# A function to find out which healpix level corresponds a given (typical) size of coverage
def size2level(size):
Returns nearest Healpix level corresponding to a given diamond size
The 'nearest' Healpix level is here to be the nearest greater level,
right before the first level smaller than 'size'.
# units
from astropy import units as u
# Structure to map healpix' levels to their angular sizes
#
healpix_levels = {
0 : 58.63 * u.deg,
1 : 29.32 * u.deg,
2 : 14.66 * u.deg,
3 : 7.329 * u.deg,
4 : 3.665 * u.deg,
5 : 1.832 * u.deg,
6 : 54.97 * u.arcmin,
7 : 27.48 * u.arcmin,
8 : 13.74 * u.arcmin,
9 : 6.871 * u.arcmin,
10 : 3.435 * u.arcmin,
11 : 1.718 * u.arcmin,
12 : 51.53 * u.arcsec,
13 : 25.77 * u.arcsec,
14 : 12.88 * u.arcsec,
15 : 6.442 * u.arcsec,
16 : 3.221 * u.arcsec,
17 : 1.61 * u.arcsec,
18 : 805.2 * u.milliarcsecond,
19 : 402.6 * u.milliarcsecond,
20 : 201.3 * u.milliarcsecond,
21 : 100.6 * u.milliarcsecond,
22 : 50.32 * u.milliarcsecond,
23 : 25.16 * u.milliarcsecond,
24 : 12.58 * u.milliarcsecond,
25 : 6.291 * u.milliarcsecond,
26 : 3.145 * u.milliarcsecond,
27 : 1.573 * u.milliarcsecond,
28 : 786.3 * u.microarcsecond,
29 : 393.2 * u.microarcsecond
}
assert size.unit
ko = None
for k, v in sorted(healpix_levels.items()):  # sort by level: plain dict iteration order is arbitrary in Python 2
if v < 2 * size: # extrapolating the error by one order of magnitude
break
ko = k
return ko
import numpy as np
from astropy import units as u
median_positional_error = np.median(chandra['e_Pos']) * u.arcsec
level = size2level(median_positional_error)
nside = 2**level
print "Typical (median) position error: \n{}".format(median_positional_error)
print "\nCorrespondig healpix level: {} \n\t and nsize value: {}".format(level,nside)
def healpix_radec2pix(nside, ra, dec, nest=True):
convert ra,dec to healpix elements
def radec2thetaphi(ra,dec):
convert equatorial ra, dec in degrees
to polar theta, phi in radians
def ra2phi(ra):
import math
return math.radians(ra)
def dec2theta(dec):
import math
return math.pi/2 - math.radians(dec)
_phi = ra2phi(ra)
_theta = dec2theta(dec)
return _theta,_phi
import healpy
_theta,_phi = radec2thetaphi(ra, dec)
return healpy.ang2pix(nside, _theta, _phi, nest=nest)
radec = zip(chandra['RAdeg'],chandra['DEdeg'])
hpix = [ healpix_radec2pix(nside,ra,dec) for ra,dec in radec ]
hpix
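# Equivalent shortcut (assumes healpy >= 1.9, whose ang2pix accepts lonlat
# directly), skipping the manual theta/phi conversion above:
hpix_alt = healpy.ang2pix(nside, chandra['RAdeg'], chandra['DEdeg'], nest=True, lonlat=True)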
moc = mocpy.MOC()
moc.add_pix_list(level,hpix)
moc.plot()
moc.write('data/MOC_chandra.fits')
from IPython.display import HTML
HTML('''
<figure>
<img src="data/MOC_on_Aladin.png" alt="MOC printed on Aladin">
<figcaption>Figure 1: MOC printed on Aladin</figcaption>
</figure>
''')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'sandbox-1', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1:
Step2: In order to factorize eP as F U F^{-1} for some matrix U, conjugate eP by the factorial diagonal matrix F
Step3:
|
<ASSISTANT_TASK:>
Python Code:
from sympy import *
from sympy.abc import n, i, N, x, lamda, phi, z, j, r, k, a, alpha
from commons import *
from matrix_functions import *
from sequences import *
import functions_catalog
init_printing()
m = 10
eP = Matrix(m, m, lambda n,k: factorial(n)*binomial(n,k)/factorial(k))
eP
inspect(eP)
eP_pm = production_matrix(eP)
eP_epm = production_matrix(eP, exp=True)
eP_pm, eP_epm
F = Matrix(m, m, diagonal_func_matrix(factorial))
F_inv = F**(-1)
F, F_inv
B = F_inv * eP * F
B
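# Hedged check (assumption: all entries are exact rationals, so equality is exact):
# conjugating B back by F should reproduce eP, confirming eP = F * B * F**(-1).
assert F * B * F_inv == eP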
U = Matrix(m, m, rows_shift_matrix(by=1))
U
F_inv * U * F
F_inv * U * F * B
B**(-1) * F_inv * U * F * B
F * B**(-1) * F_inv * U * F * B * F_inv
P = Matrix(m, m, binomial)
P_bar = Matrix(m, m, lambda i, j: binomial(i, j) if j < i else 0)
P_bar
production_matrix(P_bar[1:,:-1], exp=False), production_matrix(P_bar[1:,:-1], exp=True)
j=3
(P_bar**j).applyfunc(lambda i: i/factorial(j))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Perform the simulation
Step2: ####Plot pretty pictures of what we just did
|
<ASSISTANT_TASK:>
Python Code:
from tvb.simulator.lab import *
import datetime
START_TIME = datetime.datetime.now()
LOG.info("Configuring...")
#Initialise a Model, Coupling, and Connectivity.
oscillator = models.Generic2dOscillator()
white_matter = connectivity.Connectivity(load_default=True)
white_matter.speed = numpy.array([3.0])
white_matter_coupling = coupling.Linear()
#Initialise an Integrator
heunint = integrators.HeunDeterministic()
#Initialise some Monitors with period in physical time
momo = monitors.Raw()
mama = monitors.TemporalAverage()
#Bundle them
what_to_watch = (momo, mama)
#Initialise a Simulator -- Model, Connectivity, Integrator, and Monitors.
sim = simulator.Simulator(model=oscillator, connectivity=white_matter,
coupling=white_matter_coupling,
integrator=heunint, monitors=what_to_watch)
sim.configure()
LOG.info("Starting simulation...")
#Perform the simulation
raw_data = []
raw_time = []
tavg_data = []
tavg_time = []
for raw, tavg in sim(simulation_length=1000):
    if raw is not None:
        raw_time.append(raw[0])
        raw_data.append(raw[1])
    if tavg is not None:
        tavg_time.append(tavg[0])
        tavg_data.append(tavg[1])
LOG.info("Finished simulation.")
print 'It ran for %d sec.' % (datetime.datetime.now() - START_TIME).seconds
#Plot defaults in a few combinations
#Make the lists numpy.arrays for easier use.
RAW = numpy.array(raw_data)
TAVG = numpy.array(tavg_data)
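# Illustrative check (assumption about dimension ordering): TVB monitor output
# is 4D, ordered as (time, state variable, region/node, mode).
print 'RAW shape:', RAW.shape
print 'TAVG shape:', TAVG.shape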
#Plot raw time series
figure(1)
plot(raw_time, RAW[:, 0, :, 0])
title("Raw -- State variable 0")
figure(2)
plot(raw_time, RAW[:, 1, :, 0])
title("Raw -- State variable 1")
#Plot temporally averaged time series
figure(3)
plot(tavg_time, TAVG[:, 0, :, 0])
title("Temporal average")
#Show them
show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: And we'll add a single light curve dataset so that we can see how passband luminosities affect the resulting synthetic light curve model.
Step3: Lastly, just to make things a bit easier, we'll turn off limb-darkening and irradiation (reflection) and use blackbody atmospheres.
Step4: Relevant Parameters
Step5: The 'pblum' parameter is only relevant for each component-dataset pair in which pblum_ref=='self'. This component will then have its intensities scaled such that they match the value provided by pblum. In general, a pblum of 4pi will result in an out-of-eclipse flux of ~1.
Step6: NOTE
Step7: Now note that only a single pblum parameter is visible.
Step8: Let's see how changing the value of pblum affects the computed light curve. By default, pblum is set to be 4 pi, giving a total flux for the primary star of ~1.
Step9: If we now set pblum to be only 2 pi, we should expect the entire light curve to be scaled in half.
Step10: And if we halve the temperature of the secondary star - the resulting light curve changes to the new sum of fluxes, where the primary star dominates since the secondary star flux is reduced by a factor of 16, so we expect a total out-of-eclipse flux of ~0.5 + ~0.5/16 = ~0.53.
Step11: Let us undo our changes before we look at decoupled luminosities.
Step12: Decoupled Luminosities
Step13: Now we see that both pblums are available and can have different values.
Step14: If we set these to 4pi, then we'd expect each star to contribute 1.0 in flux units, meaning the baseline of the light curve should be at approximately 2.0
Step15: Now let's make a significant temperature-ratio by making a very cool secondary star. Since the luminosities are decoupled - this temperature change won't affect the resulting light curve very much (compare this to the case above with coupled luminosities). What is happening here is that even though the secondary star is cooler, its luminosity is being rescaled to the same value as the primary star, so the eclipse depth doesn't change (you would see a similar lack-of-effect if you changed the radii).
Step16: In most cases you will not want decoupled luminosities as they can easily break the self-consistency of your model.
Step17: Accessing Model Luminosities
Step18: By default this exposes pblums for all component-dataset pairs in the form of a dictionary. Alternatively, you can pass a label or list of labels to component and/or dataset.
Step19: Note that this same logic is applied (at t0) to initialize all passband luminosities within the backend, so does not need to be called before run_compute.
Step20: Since the luminosities are passband-dependent, they are stored with the same dataset as the light curve (or RV), but with the mesh method, and are available at each of the times at which a mesh was stored.
Step21: Now let's compare the value of the synthetic luminosities to those of the input pblum
Step22: In this case, since our two stars are identical, the synthetic luminosity of the secondary star should be the same as the primary (and the same as pblum@primary).
Step23: However, if we change the temperature of the secondary star again, since the pblums are coupled, we'd expect the synthetic luminosity of the primary to remain fixed but the secondary to decrease.
Step24: Now, we'll just undo our changes before continuing
Step25: Role of Pblum
Step26: 'abs_normal_intensities' are the intensities per triangle in absolute units, i.e. W/m^3.
Step27: The values of 'normal_intensities', however, are significantly smaller (in this case). These are the intensities in relative units which will eventually be integrated to give us flux for a light curve.
Step28: 'normal_intensities' are scaled from 'abs_normal_intensities' so that the computed luminosity matches the prescribed luminosity (pblum).
|
<ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.1,<2.2"
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
b.set_value_all('ld_func', 'logarithmic')
b.set_value_all('ld_coeffs', [0,0])
b.set_value_all('atm', 'blackbody')
b.set_value('irrad_method', 'none')
print b['pblum_ref']
print b['pblum_ref@primary']
print b['pblum']
b['pblum_ref@primary'] = 'self'
b['pblum_ref@secondary'] = 'primary'
print b['pblum']
b.run_compute()
afig, mplfig = b.plot(show=True)
b['pblum@primary'] = 2 * np.pi
b.run_compute()
afig, mplfig = b.plot(show=True)
b['teff@secondary'] = 0.5 * b.get_value('teff@primary')
print b['teff']
b.run_compute()
afig, mplfig = b.plot(show=True)
b.set_value_all('teff', 6000)
b.set_value_all('pblum', 4*np.pi)
b.set_value_all('pblum_ref', 'self')
print b['pblum']
b.set_value_all('pblum', 4*np.pi)
b.run_compute()
afig, mplfig = b.plot(show=True)
print b['teff']
b['teff@secondary'] = 3000
b.run_compute()
afig, mplfig = b.plot(show=True)
b.set_value_all('teff', 6000)
b.set_value_all('pblum', 4*np.pi)
b['pblum_ref@primary'] = 'self'
b['pblum_ref@secondary'] = 'primary'
print b.compute_pblums()
print b.compute_pblums(dataset='lc01', component='primary')
b.add_dataset('mesh', times=np.linspace(0,1,5), dataset='mesh01', columns=['areas', 'pblum@lc01', 'ldint@lc01', 'ptfarea@lc01', 'abs_normal_intensities@lc01', 'normal_intensities@lc01'])
b.run_compute()
print b.filter(qualifier='pblum', context='model').twigs
t0 = b.get_value('t0@system')
print b.get_value(qualifier='pblum', time=t0, component='primary', kind='mesh', context='model')
print b.get_value('pblum@primary@dataset')
print b.get_value(qualifier='pblum', time=t0, component='primary', kind='mesh', context='model')
print b.get_value(qualifier='pblum', time=t0, component='secondary', kind='mesh', context='model')
b['teff@secondary@component'] = 3000
print b.compute_pblums()
b.run_compute()
print b.get_value(qualifier='pblum', time=t0, component='primary', kind='mesh', context='model')
print b.get_value(qualifier='pblum', time=t0, component='secondary', kind='mesh', context='model')
b.set_value_all('teff@component', 6000)
areas = b.get_value(qualifier='areas', dataset='mesh01', time=t0, component='primary', unit='m^2')
ldint = b.get_value(qualifier='ldint', component='primary', time=t0)
ptfarea = b.get_value(qualifier='ptfarea', component='primary', time=t0)
abs_normal_intensities = b.get_value(qualifier='abs_normal_intensities', dataset='lc01', time=t0, component='primary')
normal_intensities = b.get_value(qualifier='normal_intensities', dataset='lc01', time=t0, component='primary')
np.median(abs_normal_intensities)
np.median(normal_intensities)
pblum = b.get_value(qualifier='pblum', component='primary', context='dataset')
print np.sum(normal_intensities * ldint * np.pi * areas) * ptfarea, pblum
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pyadjoint expects the data to be fully preprocessed, thus both observed and synthetic data are expected to have exactly the same length, sampling rate, and spectral content. Pyadjoint furthermore does not care about the actual components in question; it will use two traces and calculate misfit values and adjoint sources for them. To provide a familiar nomenclature we will always talk about observed and synthetic data, but Pyadjoint is independent of what the data actually represents.
Step2: Usage Options
Step3: All adjoint source types can also be plotted during the calculation. The type of plot produced depends on the type of misfit measurement and adjoint source.
Step4: Saving to Disc
|
<ASSISTANT_TASK:>
Python Code:
import obspy
import pyadjoint
# Helper function to get some example data used for
# illustrative purposes.
obs, syn = pyadjoint.utils.get_example_data()
# Select the vertical components of both.
obs = obs.select(component="Z")[0]
syn = syn.select(component="Z")[0]
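# Hedged sanity check (illustrative): Pyadjoint assumes fully preprocessed input,
# so both traces must share the same sampling rate and number of samples.
assert obs.stats.sampling_rate == syn.stats.sampling_rate
assert obs.stats.npts == syn.stats.npts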
adj_src = pyadjoint.calculate_adjoint_source(
# The type of misfit measurement and adjoint source.
adj_src_type="waveform_misfit",
# Pass observed and synthetic data traces.
observed=obs, synthetic=syn,
# The spectral content of the data.
min_period=20.0, max_period=100.0,
# The window borders in seconds since the first sample.
left_window_border=800.0,
right_window_border=900.0)
print(adj_src)
# Access misfit and adjoint sources. The misfit is a floating point number.
print(adj_src.misfit)
# The adjoint source is a numpy array.
print(adj_src.adjoint_source)
print(pyadjoint.calculate_adjoint_source(
adj_src_type="waveform_misfit", observed=obs, synthetic=syn,
min_period=20.0, max_period=100.0,
left_window_border=800.0, right_window_border=900.0, adjoint_src=False))
pyadjoint.calculate_adjoint_source(
adj_src_type="waveform_misfit", observed=obs, synthetic=syn,
min_period=20.0, max_period=100.0,
left_window_border=800.0, right_window_border=900.0, plot=True);
print(pyadjoint.calculate_adjoint_source(
adj_src_type="waveform_misfit", observed=obs, synthetic=syn,
min_period=20.0, max_period=100.0,
left_window_border=800.0, right_window_border=900.0,
taper_percentage=0.3, taper_type="cosine"))
adj_src.write(filename="NET.STA.CHA.adj_src",
format="SPECFEM", time_offset=-10)
!head NET.STA.CHA.adj_src
!rm NET.STA.CHA.adj_src
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Ricoh GEN5
Step2: KonicaMinolta KM1024i
Step3: Kyocera KJ4B_1200_64k CPLD
Step4: Jetmapping BitSwap
Step5: Manual Data Assignment
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
high = 1
low = 0
mask_elements = 64
bounds = np.empty(shape=[high+1, mask_elements], dtype=np.int32)
for i in range(mask_elements):
    bounds[high,i] = (i + 1) * 4 - 1
    bounds[low,i] = i * 4
print(" i | High Low \n---|-----------")
for i in range(mask_elements):
    print("{:2} | {:4} {:4}".format(i, bounds[high,i], bounds[low,i]))
import numpy as np
channels = 4
elements = 320
mask_address = np.empty(shape=[channels, elements], dtype=np.int32)
for i in range(elements):
    mask_address[0,i] = (319-i) * 4 + 0
    mask_address[1,i] = i * 4 + 2
    mask_address[2,i] = (319-i) * 4 + 1
    mask_address[3,i] = i * 4 + 3
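# Illustrative sanity check (not part of the original routine): every nozzle
# address 0..channels*elements-1 should be assigned exactly once across the channels.
assert np.array_equal(np.sort(mask_address.ravel()), np.arange(channels * elements))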
print(" i | A B C D \n----|---------------------")
for i in range(elements):
    print("{:3} | {:4} {:4} {:4} {:4}".format(i,mask_address[0,i],mask_address[1,i],mask_address[2,i],mask_address[3,i]))
import numpy as np
channels = 2
elements = 512
mask_address = np.empty(shape=[channels, elements], dtype=np.int32)
for i in range(elements):
    mask_address[0,i] = (511-i) * 2 + 0
    mask_address[1,i] = i * 2 + 1
print(" i | High Low \n----|-----------")
for i in range(elements):
    print("{:3} | {:4} {:4}".format(i,mask_address[0,i],mask_address[1,i]))
import numpy as np
channels = 10
data_size = 3
bounds = np.empty(shape=[channels,2], dtype=np.int32)
for i in range(1,channels+1):
    bounds[i-1][0] = data_size * i - 1 # High
    bounds[i-1][1] = data_size * (i - 1) # Low
print(" i | High Low \n----|-----------")
for i in range(1,channels+1):
    print("{:3} | {:4} {:4}".format(i,bounds[i-1,0],bounds[i-1,1]))
import numpy as np
g_RAM_FRAME_SIZE = 256
data_size = 3
bounds_left_msn = np.empty(shape=[g_RAM_FRAME_SIZE,2], dtype=np.int32)
bounds_right_msn = np.empty(shape=[g_RAM_FRAME_SIZE,2], dtype=np.int32)
bounds_left_lsn = np.empty(shape=[g_RAM_FRAME_SIZE,2], dtype=np.int32)
bounds_right_lsn = np.empty(shape=[g_RAM_FRAME_SIZE,2], dtype=np.int32)
for i in range(0,int(g_RAM_FRAME_SIZE / 8 - 1 + 1)):
    bounds_left_msn[i][0] = (i * 8) + 3
    bounds_left_msn[i][1] = (i * 8) + 0
    bounds_right_msn[i][0] = (i * 8) + 7
    bounds_right_msn[i][1] = (i * 8) + 4
    bounds_left_lsn[i][0] = (i * 8) + 7
    bounds_left_lsn[i][1] = (i * 8) + 4
    bounds_right_lsn[i][0] = (i * 8) + 3
    bounds_right_lsn[i][1] = (i * 8) + 0
print(" i | High Low <= High Low | High Low <= High Low\n----|------------------------|-----------------------")
for i in range(0,int(g_RAM_FRAME_SIZE / 8 - 1 + 1)):
    print("{:3} | {:4} {:4} <= {:4} {:4} | {:4} {:4} <= {:4} {:4}".format(i,bounds_left_msn[i,0],bounds_left_msn[i,1],bounds_right_msn[i,0],bounds_right_msn[i,1],bounds_left_lsn[i,0],bounds_left_lsn[i,1],bounds_right_lsn[i,0],bounds_right_lsn[i,1]))
import numpy as np
channels = 8
data_size = 3
bounds = np.empty(shape=[channels,2], dtype=np.int32)
for i in range(1,channels+1):
    bounds[i-1][0] = data_size * i - 1 # High
    bounds[i-1][1] = data_size * (i - 1) # Low
print(" i | High Low \n----|-----------")
for i in range(1,channels+1):
    print("{:3} | {:4} {:4}".format(i,bounds[i-1,0],bounds[i-1,1]))
<END_TASK>
|