| Unnamed: 0 (int64, 0-15.9k) | cleaned_code (string, 67-124k chars) | cleaned_prompt (string, 168-30.3k chars) |
|---|---|---|
1,600
|
<ASSISTANT_TASK:>
Python Code:
ssh-keygen -t rsa -b 4096 -C "fyuewen@hotmail.com"
ssh-add ~/.ssh/id_rsa_pycharm-git
ssh-add -l # to ensure the key is added
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: I named the private key file 'id_rsa_pycharm-git'; correspondingly, its public key is 'id_rsa_pycharm-git.pub'.
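To make git (and PyCharm) pick up the non-default key name automatically, the usual approach is a host entry in ~/.ssh/config. A minimal sketch (the github.com host is an assumption; the remote is not named here):
Host github.com
    IdentityFile ~/.ssh/id_rsa_pycharm-git
    IdentitiesOnly yes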
|
1,601
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
plt.rcParams['figure.figsize'] = (20.0, 10.0)
plt.rcParams['font.family'] = "serif"
df = pd.read_csv('../../../datasets/movie_metadata.csv')
df.head()
# split each movie's genre list, then form a set from the unwrapped list of all genres
categories = set([s for genre_list in df.genres.unique() for s in genre_list.split("|")])
# one-hot encode each movie's classification
for cat in categories:
df[cat] = df.genres.transform(lambda s: int(cat in s))
# drop other columns
df = df[['director_name','genres','duration'] + list(categories)]
df.head()
# convert from wide to long format and remove null classifications
df = pd.melt(df,
id_vars=['duration'],
value_vars = list(categories),
var_name = 'Category',
value_name = 'Count')
df = df.loc[df.Count>0]
top_categories = df.groupby('Category').aggregate(sum).sort_values('Count', ascending=False).index
howmany=10
df = df.loc[df.Category.isin(top_categories[:howmany])]
df.rename(columns={"duration":"Duration"},inplace=True)
df.head()
p = sns.swarmplot(data=df,
x = 'Category',
y = 'Duration')
df = df.loc[df.Duration < 250]
p = sns.violinplot(data=df,
x = 'Category',
y = 'Duration')
p = sns.violinplot(data=df,
x = 'Category',
y = 'Duration',
order = sorted(df.Category.unique()))
p = sns.violinplot(data=df,
y = 'Category',
x = 'Duration',
order = sorted(df.Category.unique()),
orient="h")
p = sns.violinplot(data=df,
x = 'Category',
y = 'Duration',
order = sorted(df.Category.unique()),
saturation=.25)
p = sns.violinplot(data=df,
x = 'Category',
y = 'Duration',
order = sorted(df.Category.unique()),
width=.25)
p = sns.violinplot(data=df,
x = 'Category',
y = 'Duration',
order = sorted(df.Category.unique()),
fliersize=20)
p = sns.violinplot(data=df,
x = 'Category',
y = 'Duration',
order = sorted(df.Category.unique()),
bw=.05)
p = sns.violinplot(data=df,
x = 'Category',
y = 'Duration',
order = sorted(df.Category.unique()),
bw=5)
sns.set(rc={"axes.facecolor":"#e6e6e6",
"axes.grid":False,
'axes.labelsize':30,
'figure.figsize':(20.0, 10.0),
'xtick.labelsize':25,
'ytick.labelsize':20})
p = sns.violinplot(data=df,
x = 'Category',
y = 'Duration',
palette = 'spectral',
order = sorted(df.Category.unique()),
notch=True)
plt.xticks(rotation=45)
l = plt.xlabel('')
plt.ylabel('Duration (min)')
plt.text(4.85,200, "Violin Plot", fontsize = 95, color="black", fontstyle='italic')
p.get_figure().savefig('../../figures/swarmplot.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For the bar plot, let's look at the number of movies in each category, allowing each movie to be counted more than once.
Step2: Basic plot
Step3: The outliers here are making things a bit squished, so I'll remove them since I am just interested in demonstrating the visualization tool.
Step4: Change the order of categories
Step5: Change the order that the colors are chosen
Step6: Desaturate
Step7: Adjust width of violins
Step8: Change the size of outlier markers
Step9: Adjust the bandwidth of the KDE smoothing parameter. Smaller values use a thinner kernel and thus preserve finer feature resolution, but potentially also noise. Low and high settings are shown to demonstrate the difference (a standalone sketch of the same trade-off appears after this list).
Step10: Finalize
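A standalone sketch of the bandwidth trade-off described in Step9, using scipy's gaussian_kde directly (illustrative only; the bimodal sample data is invented here, not taken from the notebook):
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde
# a bimodal stand-in for the duration data
data = np.concatenate([np.random.normal(90, 10, 500), np.random.normal(150, 15, 200)])
xs = np.linspace(40, 220, 400)
for bw in (0.05, 0.5, 5):
    # bw_method scales the kernel width: small -> noisy detail, large -> oversmoothed
    plt.plot(xs, gaussian_kde(data, bw_method=bw)(xs), label='bw={}'.format(bw))
plt.legend()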
|
1,602
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
import nengo
import numpy as np
import scipy.ndimage
import matplotlib.animation as animation
from matplotlib import pylab
from PIL import Image
import nengo.spa as spa
import cPickle
import random
from nengo_extras.data import load_mnist
from nengo_extras.vision import Gabor, Mask
#Encode categorical integer features using a one-hot aka one-of-K scheme.
def one_hot(labels, c=None):
assert labels.ndim == 1
n = labels.shape[0]
c = len(np.unique(labels)) if c is None else c
y = np.zeros((n, c))
y[np.arange(n), labels] = 1
return y
# --- load the data
img_rows, img_cols = 28, 28
(X_train, y_train), (X_test, y_test) = load_mnist()
X_train = 2 * X_train - 1 # normalize to -1 to 1
X_test = 2 * X_test - 1 # normalize to -1 to 1
train_targets = one_hot(y_train, 10)
test_targets = one_hot(y_test, 10)
rng = np.random.RandomState(9)
# --- set up network parameters
#Want to encode and decode the image
n_vis = X_train.shape[1]
n_out = X_train.shape[1]
#number of neurons/dimensions of semantic pointer
n_hid = 1000 #Try with more neurons for more accuracy
#Want the encoding/decoding done on the training images
ens_params = dict(
eval_points=X_train,
neuron_type=nengo.LIF(), #Why not use LIF? originally used LIFRate()
intercepts=nengo.dists.Choice([-0.5]),
max_rates=nengo.dists.Choice([100]),
)
#Least-squares solver with L2 regularization.
solver = nengo.solvers.LstsqL2(reg=0.01)
#solver = nengo.solvers.LstsqL2(reg=0.0001)
solver2 = nengo.solvers.LstsqL2(reg=0.01)
#network that generates the weight matrices between neuron activity and images and the labels
with nengo.Network(seed=3) as model:
a = nengo.Ensemble(n_hid, n_vis, seed=3, **ens_params)
v = nengo.Node(size_in=n_out)
conn = nengo.Connection(
a, v, synapse=None,
eval_points=X_train, function=X_train,#want the same thing out (identity)
solver=solver)
v2 = nengo.Node(size_in=train_targets.shape[1])
conn2 = nengo.Connection(
a, v2, synapse=None,
eval_points=X_train, function=train_targets, #Want to get the labels out
solver=solver2)
# linear filter used for edge detection as encoders, more plausible for human visual system
encoders = Gabor().generate(n_hid, (11, 11), rng=rng)
encoders = Mask((28, 28)).populate(encoders, rng=rng, flatten=True)
#Set the ensembles encoders to this
a.encoders = encoders
#Check the encoders were correctly made
plt.imshow(encoders[0].reshape(28, 28), vmin=encoders[0].min(), vmax=encoders[0].max(), cmap='gray')
#Get the one hot labels for the images
def get_outs(sim, images):
#The activity of the neurons when an image is given as input
_, acts = nengo.utils.ensemble.tuning_curves(a, sim, inputs=images)
#The activity multiplied by the weight matrix (calculated in the network) to give the one-hot labels
return np.dot(acts, sim.data[conn2].weights.T)
#Check how many of the labels were produced correctly
#def get_error(sim, images, labels):
# return np.argmax(get_outs(sim, images), axis=1) != labels
#Get label of the images
#def get_labels(sim,images):
# return np.argmax(get_outs(sim, images), axis=1)
#Get the neuron activity of an image or group of images (this is the semantic pointer in this case)
def get_activities(sim, images):
_, acts = nengo.utils.ensemble.tuning_curves(a, sim, inputs=images)
return acts
#Get the representation of the image after it has gone through the encoders (Gabor filters) but before it is in the neurons
#This must be computed to create the weight matrix for rotation from neuron activity to this step
# This allows a recurrent connection to be made from the neurons to themselves later
def get_encoder_outputs(sim,images):
#Pass the images through the encoders
outs = np.dot(images,sim.data[a].encoders.T) #before the neurons
return outs
dim =28
#Scale an image
def scale(img, scale):
newImg = scipy.ndimage.interpolation.zoom(np.reshape(img, (dim,dim), 'F').T,scale,cval=-1)
#If it's scaled up
if(scale >1):
newImg = newImg[len(newImg)/2-(dim/2):-(len(newImg)/2-(dim/2)),len(newImg)/2-(dim/2):-(len(newImg)/2-(dim/2))]
if len(newImg) >28:
newImg = newImg[:28,:28]
newImg = newImg.ravel()
else: #Scaled down
m = np.zeros((dim,dim))
m.fill(-1)
m[(dim-len(newImg))/2:(dim-len(newImg))/2+len(newImg),(dim-len(newImg))/2:(dim-len(newImg))/2+len(newImg)] = newImg
newImg = m
return newImg.ravel()
#Shift an image
def translate(img,x,y):
newImg = scipy.ndimage.interpolation.shift(np.reshape(img, (dim,dim), 'F'),(x,y), cval=-1)
return newImg.T.ravel()
#Images to train, starting at random orientation, size and translation
orig_imgs = X_train[:100000].copy()
for img in orig_imgs:
while True:
try:
img[:] = scale(img,random.uniform(0.5,1.5))
break
except:
img[:] = img
img[:] = scipy.ndimage.interpolation.rotate(np.reshape(img,(28,28)),
(np.random.randint(360)),reshape=False,mode="nearest").ravel()
img[:] = translate(img,random.randint(-6,6),random.randint(-6,6))
#Check to make sure images were generated correctly
plt.subplot(121)
plt.imshow(np.reshape(orig_imgs[random.randint(0,1000)],(28,28)), cmap='gray')
plt.subplot(122)
plt.imshow(np.reshape(orig_imgs[random.randint(0,1000)],(28,28)), cmap='gray')
# build the simulator so the solved connection weights exist
# (the original cell referenced `sim` without creating it; this standard nengo step is assumed here)
sim = nengo.Simulator(model)
filename = "activity_to_img_weights_all_transformations" + str(n_hid) +".p"
cPickle.dump(sim.data[conn].weights.T, open( filename, "wb" ) )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Represent each number using a one-hot encoding where the index of the one represents the digit value (a quick usage check appears after this list)
Step2: Load the MNIST training and testing images
Step3: The Network
Step4: Evaluating the network statically
Step5: Images
Step6: Saving weight matrices
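A quick usage check of the one_hot helper from Step1 (illustrative only):
# each row gets a single 1 at the column given by the label
print(one_hot(np.array([3, 1]), 10))
# [[0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
#  [0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]]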
|
1,603
|
<ASSISTANT_TASK:>
Python Code:
from pymatgen.electronic_structure.plotter import CohpPlotter
from pymatgen.electronic_structure.cohp import CompleteCohp
%matplotlib inline
COHPCAR_path = "lobster_data/GaAs/COHPCAR.lobster"
POSCAR_path = "lobster_data/GaAs/POSCAR"
completecohp=CompleteCohp.from_file(fmt="LOBSTER",filename=COHPCAR_path,structure_file=POSCAR_path)
#search for the number of the COHP you would like to plot in ICOHPLIST.lobster (the numbers in COHPCAR.lobster are different!)
label="16"
cp=CohpPlotter()
#get a nicer plot label
plotlabel=str(completecohp.bonds[label]['sites'][0].species_string)+'-'+str(completecohp.bonds[label]['sites'][1].species_string)
cp.add_cohp(plotlabel,completecohp.get_cohp_by_label(label=label))
#check which COHP you are plotting
print("This is a COHP between the following sites: "+str(completecohp.bonds[label]['sites'][0])+' and '+ str(completecohp.bonds[label]['sites'][1]))
x = cp.get_plot(integrated=False)
x.ylim([-10, 6])
x.show()
#labels of the COHPs that will be summed!
labelist = ["16", "21"]
cp = CohpPlotter()
# get a nicer plot label
plotlabel = "two Ga-As bonds"
cp.add_cohp(plotlabel, completecohp.get_summed_cohp_by_label_list(label_list=labelist, divisor=1))
x = cp.get_plot(integrated=False)
x.ylim([-10, 6])
x.show()
#search for the number of the COHP you would like to plot in ICOHPLIST.lobster (the numbers in COHPCAR.lobster are different!)
label="16"
cp=CohpPlotter()
#get orbital object
from pymatgen.electronic_structure.core import Orbital
#interactions between 4s and 4py, and between 4s and 4pz
orbitals=[[4,Orbital.s], [4,Orbital.py]]
orbitals2=[[4,Orbital.s], [4,Orbital.pz]]
#get a nicer plot label
plotlabel=str(completecohp.bonds[label]['sites'][0].species_string)+'(4s)'+'-'+str(completecohp.bonds[label]['sites'][1].species_string)+'(4py)'
plotlabel2=str(completecohp.bonds[label]['sites'][0].species_string)+'(4s)'+'-'+str(completecohp.bonds[label]['sites'][1].species_string)+'(4pz)'
cp.add_cohp(plotlabel,completecohp.get_orbital_resolved_cohp(label=label, orbitals=orbitals))
cp.add_cohp(plotlabel2,completecohp.get_orbital_resolved_cohp(label=label, orbitals=orbitals2))
#check which COHP you are plotting
#with integrated=True, you can plot the integrated COHP
x = cp.get_plot(integrated=False)
x.ylim([-10, 6])
x.show()
from pymatgen.io.lobster import Icohplist
icohplist=Icohplist(filename='lobster_data/GaAs/ICOHPLIST.lobster')
icohpcollection=icohplist.icohpcollection
#get icohp value by label (labelling according to ICOHPLIST.lobster)
#for spin polarized calculations you can also sum the spin channels
print('icohp value for certain bond by label')
label='16'
print(icohpcollection.get_icohp_by_label(label))
print()
#you can get all Icohpvalue objects for certain bond lengths
print('Icohp values for certain bonds with certain bond lengths')
for key,icohp in icohpcollection.get_icohp_dict_by_bondlengths(minbondlength=0.0, maxbondlength=3.0).items():
print(key+':'+str(icohp.icohp))
print()
#you can get all icohps for a certain site
print('ICOHP values of certain site')
for key,icohp in (icohpcollection.get_icohp_dict_of_site(site=0,minbondlength=0.0, maxbondlength=3.0).items()):
print(key+':'+str(icohp.icohp))
#relevant classes
from pymatgen.io.lobster import Doscar
from pymatgen.electronic_structure.plotter import DosPlotter
from pymatgen.core.composition import Element
%matplotlib inline
#read in DOSCAR.lobster
doscar=Doscar(doscar="lobster_data/GaAs/DOSCAR.lobster",structure_file="lobster_data/GaAs/POSCAR")
complete_dos=doscar.completedos
#get structure object
structure=complete_dos.structure
#plot total dos
Plotter=DosPlotter()
Plotter.add_dos("Total Dos",doscar.tdos)
Plotter.get_plot().show()
#plot DOS of s,p, and d orbitals for certain element
Plotter=DosPlotter()
el=Element("Ga")
Plotter.add_dos_dict(complete_dos.get_element_spd_dos(el=el))
Plotter.get_plot().show()
Plotter=DosPlotter()
#choose the sites you would like to plot
for isite,site in enumerate(structure[0:1]):
#name the orbitals you would like to include
#the other orbitals are named in a similar way. The orbitals are called: "s", "p_y", "p_z", "p_x", "d_xy", "d_yz", "d_z^2","d_xz", "d_x^2-y^2", "f_y(3x^2-y^2)", "f_xyz","f_yz^2", "f_z^3", "f_xz^2", "f_z(x^2-y^2)", "f_x(x^2-3y^2)"
for orbital in ["4s"]:
Plotter.add_dos("Ga"+str(isite+1)+":"+orbital,complete_dos.get_site_orbital_dos(site,orbital))
Plotter.get_plot().show()
from pymatgen.io.lobster import Charge
charge=Charge(filename='lobster_data/GaAs/CHARGE.lobster')
newstructure=charge.get_structure_with_charges(structure_filename='lobster_data/GaAs/POSCAR')
print(newstructure)
from pymatgen.io.lobster import Grosspop
grosspop=Grosspop(filename="lobster_data/GaAs/GROSSPOP.lobster")
print(grosspop.list_dict_grosspop)
newstructure=grosspop.get_structure_with_total_grosspop('lobster_data/GaAs/POSCAR')
print("Structure:")
print(newstructure)
from pymatgen.io.lobster import Fatband
from pymatgen.electronic_structure.plotter import BSPlotterProjected, BSDOSPlotter, BSPlotter
fatband=Fatband(filenames="lobster_data/GaAs",vasprun="lobster_data/GaAs/vasprun.xml",
Kpointsfile="lobster_data/GaAs/KPOINTS")
#get a band structure object
bssymline=fatband.get_bandstructure()
#print(bssymline.as_dict())
#this can be plotted with the classes to plot bandstructures from vasp
BSDOSPlotter(bs_projection="elements",dos_projection="elements").get_plot(bs=bssymline,dos=complete_dos).show()
#another plot type from pymatgen:
bsplotter=BSPlotterProjected(bssymline)
bsplotter.get_projected_plots_dots({"Ga":["4s","4p","3d"],"As":["4s","4p"]}).show()
from pymatgen.io.lobster import Lobsterout
lobsterout=Lobsterout("lobster_data/GaAs/lobsterout")
document=lobsterout.get_doc()
print(document["chargespilling"])
from pymatgen.io.lobster import Lobsterin
lobsterin = Lobsterin.standard_calculations_from_vasp_files("lobster_data/GaAs/POSCAR",
"lobster_data/GaAs/INCAR", "lobster_data/GaAs/POTCAR",
option='standard')
lobsterin.write_lobsterin(path="lobsterin")
file=open('./lobsterin','r')
print(file.read())
lobsterin.write_INCAR(incar_input="lobster_data/GaAs/INCAR", incar_output="INCAR.lobster",
poscar_input="lobster_data/GaAs/POSCAR", isym=-1, further_settings={"IBRION":-1})
file=open('./INCAR.lobster','r')
print(file.read())
lobsterin = Lobsterin.standard_calculations_from_vasp_files("lobster_data/GaAs/POSCAR", "lobster_data/GaAs/INCAR",
"lobster_data/GaAs/POTCAR", option='standard',
dict_for_basis={"Ga": '4s 4p', "As": '4s 4p'})
#writes lobsterin
lobsterin.write_lobsterin(path="lobsterin")
file=open('./lobsterin','r')
print(file.read())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: get a completecohp object to simplify the plotting
Step2: plot certain COHP
Step3: add several COHPs
Step4: focus on certain orbitals only
Step5: How to evaluate ICOHPLIST.lobster
Step6: read in ICOHPLIST.lobster and get Icohpcollection object
Step7: get interesting properties from ICOHPLIST.lobster (a follow-up sketch appears after this list)
Step8: How to plot DOSCAR.lobster
Step9: read in DOSCAR.lobster and get structure object for later
Step10: plot total density of states
Step11: plot DOS projected on s, p, and d orbitals for certain element
Step12: plot DOS for certain sites and orbitals
Step13: evaluate CHARGE.lobster
Step14: read in charge and produce a structure with the charge as a property
Step15: evaluate GROSSPOP.lobster
Step16: get a structure with total gross populations
Step17: FATBAND plot
Step18: get a bandstructure plot that is combined with a DOS plot
Step19: Read lobsterout
Step20: get all relevant infos from lobsterout
Step21: charge spilling can be accessed easily
Step22: Create input files for vasp and lobster automatically
Step23: a Lobsterin object with standard settings is created, a standard basis is used
Step24: writes lobsterin
Step25: will change ISYM to -1, NSW to 0, insert NBANDS, and set LWAVE to True in the INCAR
Step26: a Lobsterin object with standard settings is created, a basis given by the user is used
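Following up on Step7, a small sketch (not part of the original tutorial) that uses only the accessors shown above to pick out the most bonding interaction, i.e. the most negative ICOHP, among short bonds:
# labels of all bonds shorter than 3 Angstrom
short_bonds = icohpcollection.get_icohp_dict_by_bondlengths(minbondlength=0.0, maxbondlength=3.0)
# get_icohp_by_label returns the ICOHP value for a bond label, as printed above
strongest = min(short_bonds, key=lambda lab: icohpcollection.get_icohp_by_label(lab))
print('most bonding interaction:', strongest, icohpcollection.get_icohp_by_label(strongest))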
|
1,604
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
np.random.seed(4242)
n_samples = 500
n_features = 2
X1 = np.random.rand(n_samples, n_features)
y1 = np.ones((n_samples, 1))
idx_neg = (X1[:, 0] - 0.5) ** 2 + (X1[:, 1] - 0.5) ** 2 < 0.03
y1[idx_neg] = 0
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(10, 6))
plt.scatter(X1[:, 0], X1[:, 1], c=y1, s=100)
X2 = np.random.rand(n_samples, n_features)
y2 = np.ones((n_samples, 1))
idx_neg = (X2[:, 0] < 0.5) * (X2[:, 1] < 0.5) + (X2[:, 0] > 0.5) * (X2[:, 1] > 0.5)
y2[idx_neg] = 0
plt.figure(figsize=(10, 6))
plt.scatter(X2[:, 0], X2[:, 1], c=y2, s=100)
rho_pos = np.random.rand(n_samples // 2, 1) / 2.0 + 0.5
rho_neg = np.random.rand(n_samples // 2, 1) / 4.0
rho = np.vstack((rho_pos, rho_neg))
phi_pos = np.pi * 0.75 + np.random.rand(n_samples // 2, 1) * np.pi * 0.5
phi_neg = np.random.rand(n_samples // 2, 1) * 2 * np.pi
phi = np.vstack((phi_pos, phi_neg))
X3 = np.array([[r * np.cos(p), r * np.sin(p)] for r, p in zip(rho, phi)])
y3 = np.vstack((np.ones((n_samples // 2, 1)), np.zeros((n_samples // 2, 1))))
plt.figure(figsize=(10, 6))
plt.scatter(X3[:, 0], X3[:, 1], c=y3, s=100)
rho_pos = np.linspace(0, 2, n_samples // 2)
rho_neg = np.linspace(0, 2, n_samples // 2) + 0.5
rho = np.vstack((rho_pos, rho_neg))
phi_pos = 2 * np.pi * rho_pos
phi = np.vstack((phi_pos, phi_pos))
X4 = np.array([[r * np.cos(p), r * np.sin(p)] for r, p in zip(rho, phi)])
y4 = np.vstack((np.ones((n_samples // 2, 1)), np.zeros((n_samples // 2, 1))))
plt.figure(figsize=(10, 6))
plt.scatter(X4[:, 0], X4[:, 1], c=y4, s=100)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 1
Step2: Code up your own SVM solution below (an off-the-shelf baseline for comparison appears after this list)
Step3: Code up your own SVM solution below
Step4: Code up your own solution
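Before coding your own solver, it can help to sanity-check a dataset with an off-the-shelf kernel SVM. A minimal baseline sketch using scikit-learn (for comparison only; the exercises above ask for your own implementation):
from sklearn.svm import SVC
clf = SVC(kernel='rbf')  # an RBF kernel can separate these non-linear boundaries
clf.fit(X1, y1.ravel())  # scikit-learn expects a 1-d label array
print('training accuracy on dataset 1:', clf.score(X1, y1.ravel()))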
|
1,605
|
<ASSISTANT_TASK:>
Python Code:
# to generate gifs
!pip install imageio
from __future__ import absolute_import, division, print_function
# Import TensorFlow >= 1.9 and enable eager execution
import tensorflow as tf
tfe = tf.contrib.eager
tf.enable_eager_execution()
import os
import time
import numpy as np
import glob
import matplotlib.pyplot as plt
import PIL
import imageio
from IPython import display
(train_images, _), (test_images, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
test_images = test_images.reshape(test_images.shape[0], 28, 28, 1).astype('float32')
# Normalizing the images to the range of [0., 1.]
train_images /= 255.
test_images /= 255.
# Binarization
train_images[train_images >= .5] = 1.
train_images[train_images < .5] = 0.
test_images[test_images >= .5] = 1.
test_images[test_images < .5] = 0.
TRAIN_BUF = 60000
BATCH_SIZE = 100
TEST_BUF = 10000
train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(TRAIN_BUF).batch(BATCH_SIZE)
test_dataset = tf.data.Dataset.from_tensor_slices(test_images).shuffle(TEST_BUF).batch(BATCH_SIZE)
class CVAE(tf.keras.Model):
def __init__(self, latent_dim):
super(CVAE, self).__init__()
self.latent_dim = latent_dim
self.inference_net = tf.keras.Sequential(
[
tf.keras.layers.InputLayer(input_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(
filters=32, kernel_size=3, strides=(2, 2), activation=tf.nn.relu),
tf.keras.layers.Conv2D(
filters=64, kernel_size=3, strides=(2, 2), activation=tf.nn.relu),
tf.keras.layers.Flatten(),
# No activation
tf.keras.layers.Dense(latent_dim + latent_dim),
]
)
self.generative_net = tf.keras.Sequential(
[
tf.keras.layers.InputLayer(input_shape=(latent_dim,)),
tf.keras.layers.Dense(units=7*7*32, activation=tf.nn.relu),
tf.keras.layers.Reshape(target_shape=(7, 7, 32)),
tf.keras.layers.Conv2DTranspose(
filters=64,
kernel_size=3,
strides=(2, 2),
padding="SAME",
activation=tf.nn.relu),
tf.keras.layers.Conv2DTranspose(
filters=32,
kernel_size=3,
strides=(2, 2),
padding="SAME",
activation=tf.nn.relu),
# No activation
tf.keras.layers.Conv2DTranspose(
filters=1, kernel_size=3, strides=(1, 1), padding="SAME"),
]
)
def sample(self, eps=None):
if eps is None:
eps = tf.random_normal(shape=(100, self.latent_dim))
return self.decode(eps, apply_sigmoid=True)
def encode(self, x):
mean, logvar = tf.split(self.inference_net(x), num_or_size_splits=2, axis=1)
return mean, logvar
def reparameterize(self, mean, logvar):
eps = tf.random_normal(shape=mean.shape)
return eps * tf.exp(logvar * .5) + mean
def decode(self, z, apply_sigmoid=False):
logits = self.generative_net(z)
if apply_sigmoid:
probs = tf.sigmoid(logits)
return probs
return logits
def log_normal_pdf(sample, mean, logvar, raxis=1):
log2pi = tf.log(2. * np.pi)
return tf.reduce_sum(
-.5 * ((sample - mean) ** 2. * tf.exp(-logvar) + logvar + log2pi),
axis=raxis)
def compute_loss(model, x):
mean, logvar = model.encode(x)
z = model.reparameterize(mean, logvar)
x_logit = model.decode(z)
cross_ent = tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=x)
logpx_z = -tf.reduce_sum(cross_ent, axis=[1, 2, 3])
logpz = log_normal_pdf(z, 0., 0.)
logqz_x = log_normal_pdf(z, mean, logvar)
return -tf.reduce_mean(logpx_z + logpz - logqz_x)
def compute_gradients(model, x):
with tf.GradientTape() as tape:
loss = compute_loss(model, x)
return tape.gradient(loss, model.trainable_variables), loss
optimizer = tf.train.AdamOptimizer(1e-4)
def apply_gradients(optimizer, gradients, variables, global_step=None):
optimizer.apply_gradients(zip(gradients, variables), global_step=global_step)
epochs = 100
latent_dim = 50
num_examples_to_generate = 16
# keeping the random vector constant for generation (prediction) so
# it will be easier to see the improvement.
random_vector_for_generation = tf.random_normal(
shape=[num_examples_to_generate, latent_dim])
model = CVAE(latent_dim)
def generate_and_save_images(model, epoch, test_input):
predictions = model.sample(test_input)
fig = plt.figure(figsize=(4,4))
for i in range(predictions.shape[0]):
plt.subplot(4, 4, i+1)
plt.imshow(predictions[i, :, :, 0], cmap='gray')
plt.axis('off')
# tight_layout minimizes the overlap between 2 sub-plots
plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
plt.show()
generate_and_save_images(model, 0, random_vector_for_generation)
for epoch in range(1, epochs + 1):
start_time = time.time()
for train_x in train_dataset:
gradients, loss = compute_gradients(model, train_x)
apply_gradients(optimizer, gradients, model.trainable_variables)
end_time = time.time()
if epoch % 1 == 0:
loss = tfe.metrics.Mean()
for test_x in test_dataset.make_one_shot_iterator():
loss(compute_loss(model, test_x))
elbo = -loss.result()
display.clear_output(wait=False)
print('Epoch: {}, Test set ELBO: {}, '
'time elapse for current epoch {}'.format(epoch,
elbo,
end_time - start_time))
generate_and_save_images(
model, epoch, random_vector_for_generation)
def display_image(epoch_no):
return PIL.Image.open('image_at_epoch_{:04d}.png'.format(epoch_no))
display_image(epochs) # Display images
with imageio.get_writer('cvae.gif', mode='I') as writer:
filenames = glob.glob('image*.png')
filenames = sorted(filenames)
last = -1
for i,filename in enumerate(filenames):
frame = 2*(i**0.5)
if round(frame) > round(last):
last = frame
else:
continue
image = imageio.imread(filename)
writer.append_data(image)
image = imageio.imread(filename)
writer.append_data(image)
# this is a hack to display the gif inside the notebook
os.system('cp cvae.gif cvae.gif.png')
display.Image(filename="cvae.gif.png")
#from google.colab import files
#files.download('cvae.gif')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import TensorFlow and enable Eager execution
Step2: Load the MNIST dataset
Step3: Use tf.data to create batches and shuffle the dataset
Step4: Wire up the generative and inference network with tf.keras.Sequential
Step5: Define the loss function and the optimizer (the ELBO being estimated is written out after this list)
Step6: Training
Step7: Display an image using the epoch number
Step8: Generate a GIF of all the saved images.
Step9: To download the animation from Colab uncomment the code below
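For reference (Step5), the quantity estimated by compute_loss is the negative of a single-sample Monte Carlo ELBO:
$\log p(x) \ge \mathrm{ELBO} = \mathbb{E}_{q(z|x)}\left[\log p(x|z) + \log p(z) - \log q(z|x)\right]$
with one sample $z \sim q(z|x)$ per input, which is exactly -tf.reduce_mean(logpx_z + logpz - logqz_x) in the code above.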
|
1,606
|
<ASSISTANT_TASK:>
Python Code:
# tensorflow
import tensorflow as tf
# rnn common functions
from tensorflow.contrib.learn.python.learn.estimators import rnn_common
# visualization
import seaborn as sns
import matplotlib.pyplot as plt
# helpers
import numpy as np
import pandas as pd
import csv
# enable tensorflow logs
tf.logging.set_verbosity(tf.logging.INFO)
df = pd.read_csv('weather.csv')
number_of_rows = len(df)
print('number of rows in the dataset:', number_of_rows)
print('how a row looks like:')
print(df.head(11))
print()
print("we don't the year mo da columns, so let's forget about them")
df = df[['avg_tmp', 'avg_dewp', 'avg_slp']]
print(df.head(11))
SEQ_LEN = 10
VALID_ROWS = number_of_rows - SEQ_LEN - 1
NUM_FEATURES = 3
# then we can use indexes to access rows easily
df = np.asarray(df)
# sequences will have shape: [VALID_ROWS, SEQ_LEN, NUM_FEATURES]
sequences = np.zeros((VALID_ROWS, SEQ_LEN, NUM_FEATURES), dtype=np.float32)
labels = np.zeros((VALID_ROWS, 1))
# sequences are 10 days
# label is the avg_tmp for the following day (11th)
for i in range(VALID_ROWS):
sequences[i] = df[i: i + SEQ_LEN]
labels[i] = df[i + SEQ_LEN][0]
print('-' * 20)
print('Example')
print('-' * 20)
print('sequence:')
print(sequences[0])
print('prediction:', labels[0])
# these values are based on the number of valid rows which is 32083
TRAIN_SIZE = 30000
EVAL_SIZE = 2073
TEST_SIZE = 10
# TODO(@monteirom): shuffle
train_seq = sequences[:TRAIN_SIZE]
train_label = np.asarray(labels[:TRAIN_SIZE], dtype=np.float32)
eval_seq = sequences[TRAIN_SIZE: TRAIN_SIZE + EVAL_SIZE]
eval_label = np.asarray(labels[TRAIN_SIZE:TRAIN_SIZE + EVAL_SIZE], dtype=np.float32)
test_seq = sequences[TRAIN_SIZE + EVAL_SIZE: ]
test_label = np.asarray(labels[TRAIN_SIZE + EVAL_SIZE: ], dtype=np.float32)
print('train shape:', train_seq.shape)
print('eval shape:', eval_seq.shape)
print('test shape:', test_seq.shape)
# getting test labels
test_plot_data = [test_label[i][0] for i in range(TEST_SIZE)]
# plotting
sns.tsplot(test_plot_data)
plt.show()
BATCH_SIZE = 64
FEATURE_KEY = 'x'
SEQ_LEN_KEY = 'sequence_length'
def make_dict(x):
d = {}
d[FEATURE_KEY] = x
# shape [SIZE OF DATA SET]; each entry holds the sequence length
# of the corresponding sequence in the data set (always SEQ_LEN here)
d[SEQ_LEN_KEY] = np.asarray(x.shape[0] * [SEQ_LEN], dtype=np.int32)
return d
# Make input function for training:
# num_epochs=None -> will cycle through input data forever
# shuffle=True -> randomize order of input data
train_input_fn = tf.estimator.inputs.numpy_input_fn(x=make_dict(train_seq),
y=train_label,
batch_size=BATCH_SIZE,
shuffle=True,
num_epochs=None)
# Make input function for evaluation:
# shuffle=False -> do not randomize input data
eval_input_fn = tf.estimator.inputs.numpy_input_fn(x=make_dict(eval_seq),
y=eval_label,
batch_size=BATCH_SIZE,
shuffle=False)
# Make input function for testing:
# shuffle=False -> do not randomize input data
test_input_fn = tf.estimator.inputs.numpy_input_fn(x=make_dict(test_seq),
y=test_label,
batch_size=1,
shuffle=False)
N_OUTPUTS = 1 # 1 prediction
NUM_FEATURES = 3
def get_model_fn(rnn_cell_sizes,
label_dimension,
dnn_layer_sizes=[],
optimizer='SGD',
learning_rate=0.01):
def model_fn(features, labels, mode, params):
x = features[FEATURE_KEY]
sequence_length = features[SEQ_LEN_KEY]
# 1. configure the RNN
# Each RNN layer will consist of a LSTM cell
rnn_layers = [tf.nn.rnn_cell.LSTMCell(size) for size in rnn_cell_sizes]
# Construct the layers
multi_rnn_cell = tf.nn.rnn_cell.MultiRNNCell(rnn_layers)
outputs, _ = tf.nn.dynamic_rnn(multi_rnn_cell, x, dtype=tf.float32)
# Slice to keep only the last cell of the RNN
last_activations = rnn_common.select_last_activations(outputs,
sequence_length)
# Construct dense layers on top of the last cell of the RNN
for units in dnn_layer_sizes:
last_activations = tf.layers.dense(last_activations,
units,
activation=tf.nn.relu)
# Final dense layer for prediction
predictions = tf.layers.dense(last_activations, label_dimension)
# 2. Define the loss function for training/evaluation
loss = None
eval_metric_ops = None
train_op = None
# if predicting labels can be None
if mode != tf.estimator.ModeKeys.PREDICT:
loss = tf.losses.mean_squared_error(labels, predictions)
eval_metric_ops = {
"rmse": tf.metrics.root_mean_squared_error(labels, predictions)
}
# 3. Define the training operation/optimizer
train_op = tf.contrib.layers.optimize_loss(
loss=loss,
global_step=tf.contrib.framework.get_global_step(),
learning_rate=learning_rate,
optimizer=optimizer)
# 4. Create predictions
predictions_dict = {"predicted": predictions}
# 5. return ModelFnOps
return tf.estimator.EstimatorSpec(
mode=mode,
predictions=predictions_dict,
loss=loss,
train_op=train_op,
eval_metric_ops=eval_metric_ops)
return model_fn
model_fn = get_model_fn(rnn_cell_sizes=[64], # size of the hidden layers
label_dimension=1, # since is just 1 prediction
dnn_layer_sizes=[32], # size of units in the dense layers on top of the RNN
optimizer='Adam',
learning_rate=0.001)
estimator = tf.estimator.Estimator(model_fn=model_fn)
estimator.train(input_fn=train_input_fn, steps=10000)
ev = estimator.evaluate(input_fn=eval_input_fn)
print(ev)
preds = list(estimator.predict(input_fn=test_input_fn))
predictions = []
for p in preds:
print(p)
predictions.append(p["predicted"][0])
# plotting real values in black
sns.tsplot(test_plot_data, color="black")
# plotting predictions in red
sns.tsplot(predictions, color="red")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Describing the data set and the model
Step2: Separating training, evaluation and a small test data
Step3: What we want to predict
Step4: Defining Input functions
Step5: RNN Model (a sketch of the last-activation selection appears after this list)
Step6: Running model
Step7: Training
Step8: Evaluating
Step9: Testing
Step10: Visualizing predictions
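The model_fn above relies on rnn_common.select_last_activations to keep, for every sequence in the batch, the RNN output at its final valid timestep. A NumPy sketch of the same idea (an illustration, not the library code):
import numpy as np
def select_last_activations_np(outputs, sequence_lengths):
    # outputs: [batch, time, units]; pick each row's activation at t = length - 1
    rows = np.arange(outputs.shape[0])
    return outputs[rows, np.asarray(sequence_lengths) - 1]
# e.g. an outputs array of shape [64, 10, 64] with lengths of 10 yields shape [64, 64]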
|
1,607
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import time
import numpy as np
import matplotlib.pyplot as plt
import tridesclous as tdc
from tridesclous import DataIO, CatalogueConstructor, Peeler
#download dataset
localdir, filenames, params = tdc.download_dataset(name='locust')
print(filenames)
print(params)
#create a DataIO
import os, shutil
dirname = 'tridesclous_locust'
if os.path.exists(dirname):
#remove if it already exists
shutil.rmtree(dirname)
dataio = DataIO(dirname=dirname)
# feed DataIO
dataio.set_data_source(type='RawData', filenames=filenames, **params)
print(dataio)
#no need to setup the prb with dataio.set_probe_file() or dataio.download_probe()
#because it is a tetrode
cc = CatalogueConstructor(dataio=dataio)
print(cc)
# global params
cc.set_global_params(chunksize=1024,mode='dense')
# preprocessing: filtering and normalisation
cc.set_preprocessor_params(
common_ref_removal=False,
highpass_freq=300.,
lowpass_freq=5000.,
lostfront_chunksize=64)
cc.set_peak_detector_params(
peak_sign='-',
relative_threshold=6.5,
peak_span_ms=0.1)
cc.estimate_signals_noise(seg_num=0, duration=15.)
print(cc.signals_medians)
print(cc.signals_mads)
t1 = time.perf_counter()
cc.run_signalprocessor(duration=60.)
t2 = time.perf_counter()
print('run_signalprocessor', t2-t1, 's')
print(cc)
cc.clean_peaks(alien_value_threshold=60., mode='extremum_amplitude')
print(cc)
cc.set_waveform_extractor_params(wf_left_ms=-1.5, wf_right_ms=2.5)
cc.sample_some_peaks(mode='rand', nb_max=20000)
cc.extract_some_noise(nb_snippet=300)
cc.extract_some_features(method='global_pca', n_components=5)
print(cc)
cc.find_clusters(method='kmeans', n_clusters=12)
print(cc)
%gui qt5
import pyqtgraph as pg
app = pg.mkQApp()
win = tdc.CatalogueWindow(cc)
win.show()
app.exec_()
cc.auto_split_cluster()
cc.auto_merge_cluster()
cc.trash_low_extremum(min_extremum_amplitude=6.6)
cc.trash_small_cluster(minimum_size=10)
#order cluster by waveforms rms
cc.order_clusters(by='waveforms_rms')
#save the catalogue
cc.make_catalogue_for_peeler(inter_sample_oversampling=True)
catalogue = dataio.load_catalogue(chan_grp=0)
peeler = Peeler(dataio)
peeler.change_params(catalogue=catalogue)
t1 = time.perf_counter()
peeler.run()
t2 = time.perf_counter()
print('peeler.run', t2-t1)
print()
for seg_num in range(dataio.nb_segment):
spikes = dataio.get_spikes(seg_num)
print('seg_num', seg_num, 'nb_spikes', spikes.size)
%gui qt5
import pyqtgraph as pg
app = pg.mkQApp()
win = tdc.PeelerWindow(dataio=dataio, catalogue=catalogue)
win.show()
app.exec_()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download a small dataset
Step2: DataIO = define datasource and working dir
Step3: CatalogueConstructor
Step4: Set some parameters
Step5: Estimate the median and MAD of the noise on a small chunk of filtered signals.
Step6: Run the main loop
Step7: Clean peaks
Step8: sample some peaks for waveforms extraction
Step9: Extract some noise snippets.
Step10: Project to smaller space
Step11: find clusters
Step12: Manual clean with CatalogueWindow (or visual check)
Step13: Here is a snapshot of CatalogueWindow
Step14: Peeler
Step15: Open PeelerWindow for visual checking
|
1,608
|
<ASSISTANT_TASK:>
Python Code:
from lsst.cwfs.instrument import Instrument
from lsst.cwfs.algorithm import Algorithm
from lsst.cwfs.image import Image, readFile, aperture2image, showProjection
import lsst.cwfs.plots as plots
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
fieldXY = [0,0]
I1 = Image(readFile('../tests/testImages/AuxTel/I1_intra_20190912_HD21161_z05.fits'), fieldXY, Image.INTRA)
I2 = Image(readFile('../tests/testImages/AuxTel/I2_extra_20190912_HD21161_z05.fits'), fieldXY, Image.EXTRA)
plots.plotImage(I1.image,'intra')
plots.plotImage(I2.image,'extra')
inst=Instrument('AuxTel',I1.sizeinPix)
algo=Algorithm('exp',inst,0)
algo.runIt(inst,I1,I2,'paraxial')
print(algo.zer4UpNm)
plots.plotZer(algo.zer4UpNm,'nm')
print("Expected image diameter in pixels = %.0f"%(inst.offset/inst.fno/inst.pixelSize))
plots.plotImage(I1.image0,'original intra', mask=algo.pMask)
plots.plotImage(I2.image0,'original extra', mask=algo.pMask)
nanMask = np.ones(I1.image.shape)
nanMask[I1.pMask==0] = np.nan
fig, ax = plt.subplots(1,2, figsize=[10,4])
img = ax[0].imshow(algo.Wconverge*nanMask, origin='lower')
ax[0].set_title('Final WF = estimated + residual')
fig.colorbar(img, ax=ax[0])
img = ax[1].imshow(algo.West*nanMask, origin='lower')
ax[1].set_title('residual wavefront')
fig.colorbar(img, ax=ax[1])
fig, ax = plt.subplots(1,2, figsize=[10,4])
img = ax[0].imshow(I1.image, origin='lower')
ax[0].set_title('Intra residual image')
fig.colorbar(img, ax=ax[0])
img = ax[1].imshow(I2.image, origin='lower')
ax[1].set_title('Extra residual image')
fig.colorbar(img, ax=ax[1])
oversample = 10
projSamples = I1.image0.shape[0]*oversample
luty, lutx = np.mgrid[
-(projSamples / 2 - 0.5):(projSamples / 2 + 0.5),
-(projSamples / 2 - 0.5):(projSamples / 2 + 0.5)]
lutx = lutx / (projSamples / 2 / inst.sensorFactor)
luty = luty / (projSamples / 2 / inst.sensorFactor)
lutxp, lutyp, J = aperture2image(I1, inst, algo, algo.converge[:,-1], lutx, luty, projSamples, 'paraxial')
show_lutxyp = showProjection(lutxp, lutyp, inst.sensorFactor, projSamples, 1)
I1fit = Image(show_lutxyp, fieldXY, Image.INTRA)
I1fit.downResolution(oversample, I1.image0.shape[0], I1.image0.shape[1])
luty, lutx = np.mgrid[
-(projSamples / 2 - 0.5):(projSamples / 2 + 0.5),
-(projSamples / 2 - 0.5):(projSamples / 2 + 0.5)]
lutx = lutx / (projSamples / 2 / inst.sensorFactor)
luty = luty / (projSamples / 2 / inst.sensorFactor)
lutxp, lutyp, J = aperture2image(I2, inst, algo, algo.converge[:,-1], lutx, luty, projSamples, 'paraxial')
show_lutxyp = showProjection(lutxp, lutyp, inst.sensorFactor, projSamples, 1)
I2fit = Image(show_lutxyp, fieldXY, Image.EXTRA)
I2fit.downResolution(oversample, I2.image0.shape[0], I2.image0.shape[1])
#The atmosphere used here is just a random Gaussian smearing. We do not care much about the size at this point
from scipy.ndimage import gaussian_filter
atmSigma = .6/3600/180*3.14159*21.6/1.44e-5
I1fit.image[np.isnan(I1fit.image)]=0
a = gaussian_filter(I1fit.image, sigma=atmSigma)
fig, ax = plt.subplots(1,3, figsize=[15,4])
img = ax[0].imshow(I1fit.image, origin='lower')
ax[0].set_title('Forward prediction (no atm) Intra')
fig.colorbar(img, ax=ax[0])
img = ax[1].imshow(a, origin='lower')
ax[1].set_title('Forward prediction (w atm) Intra')
fig.colorbar(img, ax=ax[1])
img = ax[2].imshow(I1.image0, origin='lower')
ax[2].set_title('Real Image, Intra')
fig.colorbar(img, ax=ax[2])
I2fit.image[np.isnan(I2fit.image)]=0
b = gaussian_filter(I2fit.image, sigma=atmSigma)
fig, ax = plt.subplots(1,3, figsize=[15,4])
img = ax[0].imshow(I2fit.image, origin='lower')
ax[0].set_title('Forward prediction (no atm) Extra')
fig.colorbar(img, ax=ax[0])
img = ax[1].imshow(b, origin='lower')
ax[1].set_title('Forward prediction (w atm) Extra')
fig.colorbar(img, ax=ax[1])
img = ax[2].imshow(I2.image0, origin='lower')
ax[2].set_title('Real Image, Extra')
fig.colorbar(img, ax=ax[2])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define the image objects. Input arguments
Step2: Define the instrument. Input arguments
Step3: Define the algorithm being used. Input arguments
Step4: Run it
Step5: Print the Zernikes Zn (n>=4)
Step6: plot the Zernikes Zn (n>=4)
Step7: We check that the optical parameters provided are consistent with the image diameter. Otherwise the numerical solutions themselves do not make much sense.
Step8: Patrick asked the question
Step9: Now we do the forward raytrace using our wavefront solutions
Step10: We now trace the rays to the image plane. Lutxp and Lutyp are image coordinates for each (oversampled) ray. showProjection() makes the intensity image. Then, to down sample the image back to original resolution, we want to use the function downResolution() which is defined for the image class.
Step11: Now do the same thing for extra focal image
|
1,609
|
<ASSISTANT_TASK:>
Python Code:
# NumPy is the fundamental package for scientific computing with Python.
import numpy as np
def theta_init(in_size, out_size, epsilon = 0.12):
return np.random.rand(in_size + 1, out_size) * 2 * epsilon - epsilon
def sigmoid(x):
return np.divide(1.0, (1.0 + np.exp(-x)))
def sigmoid_derivative(x):
return np.multiply(x, (1.0 - x))
def mean_squared_error(X):
return np.power(X, 2).mean(axis=None)
def nn_train(X, y, desired_error = 0.001, max_iterations = 100000, hidden_nodes = 5):
m = X.shape[0]
input_nodes = X.shape[1]
output_nodes = y.shape[1]
a1 = np.insert(X, 0, 1, axis=1)
theta1 = theta_init(input_nodes, hidden_nodes)
theta2 = theta_init(hidden_nodes, output_nodes)
for x in range(0, max_iterations):
# Feedforward
a2 = np.insert(sigmoid(a1.dot(theta1)), 0, 1, axis=1)
a3 = sigmoid(a2.dot(theta2))
# Calculate error using backpropagation
a3_delta = np.subtract(y, a3)
mse = mean_squared_error(a3_delta)
if mse <= desired_error:
print "Achieved requested MSE %f at iteration %d" % (mse, x)
break
a2_error = a3_delta.dot(theta2.T)
a2_delta = np.multiply(a2_error, sigmoid_derivative(a2))
# Update thetas to reduce the error on the next iteration
theta2 += np.divide(a2.T.dot(a3_delta), m)
# the bias column of the hidden-layer delta is dropped so the shape matches theta1
theta1 += np.delete(np.divide(a1.T.dot(a2_delta), m), 0, 1)
return (theta1, theta2)
def nn_predict(X, theta1, theta2):
a2 = sigmoid(np.insert(X, 0, 1, axis=1).dot(theta1))
return sigmoid(np.insert(a2, 0, 1, axis=1).dot(theta2))
X = np.matrix('0 0; 0 1; 1 0; 1 1')
y = np.matrix('0; 1; 1; 0')
(theta1, theta2) = nn_train(X, y)
print "\nTrained weights for calculating the hidden layer from the input layer"
print theta1
print "\nTrained weights for calculating from the hidden layer to the output layer"
print theta2
# Our test input doesn't match our training input 'X'
X_test = np.matrix('1 1; 0 1; 0 0; 1 0')
y_test = np.matrix('0; 1; 0; 1')
y_calc = nn_predict(X_test, theta1, theta2)
y_diff = np.subtract(y_test, y_calc)
print "The MSE for our test set is %f" % (mean_squared_error(y_diff))
print np.concatenate((y_test, y_calc, y_diff), axis=1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The theta_init function is used to initialize the thetas (weights) in the network. It returns a random matrix with values in the range of [-epsilon, epsilon].
Step2: This network uses a sigmoid activation function. The sigmoid derivative is used during backpropagation (see the note after this list).
Step3: The mean squared error (MSE) provides measure of the distance between the actual value and what is estimated by the neural network.
Step4: The nn_train function trains an artificial neural network with a single hidden layer. Each column in X is a feature and each row in X is a single training observation. The y value contains the classifications for each observation. For multi-classification problems, y will have more than one column. After training, this function returns the calculated theta values (weights) that can be used for predictions.
Step5: The nn_predict function takes the theta values calculated by nn_train to make predictions about the data in X.
Step6: Example
Step7: Now that we've trained the neural network. We can make predictions for new data.
|
1,610
|
<ASSISTANT_TASK:>
Python Code:
# imports needed by this snippet (it uses numpy, matplotlib and astropy throughout)
import numpy as np
import matplotlib.pyplot as plt
from astropy import constants as cnst
from astropy import cosmology
from astropy import units as u
def NFW_escape_vel(r, Mvir, Rvir, CvirorRs, truncated=False):
"""NFW profile escape velocity
Parameters
----------
r : Quantity w/ length units
    Radial distance at which to compute the escape velocity
Mvir : Quantity w/ mass units
    Virial Mass
Rvir : Quantity w/ length units
    Virial radius
CvirorRs : Quantity w/ dimensionless or distance units
    (Virial) Concentration parameter (if dimensionless),
    or halo scale radius (if length units)
truncated : bool or float
    False for infinite-size NFW or a number to cut off the
    halo at this many times Rvir
"""
CvirorRs = u.Quantity(CvirorRs)
if CvirorRs.unit.is_equivalent(u.m):
Cvir = Rvir/CvirorRs
elif CvirorRs.unit.is_equivalent(u.one):
Cvir = CvirorRs
else:
raise TypeError('CvirorRs must be length or dimensionless')
a = Rvir / Cvir
#"f-function" from the NFW literature (e.g. klypin 02) evaluated at Cvir
fofC = np.log(1 + Cvir) - Cvir / (1 + Cvir)
# value of the NFW potential at that point
potential = (-cnst.G * Mvir / fofC) * np.log(1 + (r / a)) / r
if truncated:
rtrunc = Rvir * float(truncated)
Ctrunc = rtrunc / a
mtrunc = Mvir * (np.log(1 + Ctrunc) - Ctrunc / (1 + Ctrunc)) / fofC
outer = r >= rtrunc
# use cnst.G consistently (the original referenced an undefined Gkpc)
potential[outer] = -cnst.G * mtrunc / r[outer]
potential[~outer] = potential[~outer] + (cnst.G * Mvir / a) / (Ctrunc + 1) / fofC
vesc = (2 * np.abs(potential)) ** 0.5
return vesc.to(u.km/u.s)
def Deltavir(cosmo, z=0):
"""Standard Delta-vir from Bryan&Norman 98 (*not* Delta-c)"""
x = cosmo.Om(z) - 1
return (18*np.pi**2 + 82*x - 39*x**2)/(x+1)
def rvirmvir(rvirormvir, cosmo, z=0):
"""Convert between Rvir and Mvir
Parameters
----------
rvirormvir : Quantity w/ mass or length units
    Either Rvir or Mvir, depending on the input units
cosmo : astropy cosmology
    The cosmology to assume
z : float
    The redshift to assume for the conversion
Returns
-------
mvirorrvir : Quantity w/ mass or length units
    Whichever ``rvirormvir`` is *not*
"""
rhs = Deltavir(cosmo=cosmo, z=z) * cosmo.Om(z)*cosmo.H(z)**2 / (2*cnst.G)
if rvirormvir.unit.is_equivalent(u.solMass):
mvir = rvirormvir
return ((mvir / rhs)**(1/3)).to(u.kpc)
elif rvirormvir.unit.is_equivalent(u.kpc):
rvir = rvirormvir
return (rhs * rvir**3).to(u.solMass)
else:
raise ValueError('invalid input unit {}'.format(rvirormvir))
def mvir_to_cvir(mvir, z=0):
"""Power-law fit to the c_vir-M_vir relation from
Equations 12 & 13 of Dutton & Maccio 2014, arXiv:1402.7073.
"""
a = 0.537 + (1.025 - 0.537) * np.exp(-0.718 * z**1.08)
b = -0.097 + 0.024 * z
m0 = 1e12 * u.solMass
logc = a + b * np.log10(mvir / m0)
return 10**logc
def NFW_escape_vel_from_Mvir(r, Mvir, z=0,
cosmo=cosmology.Planck15,
truncated=False):
cvir = mvir_to_cvir(Mvir, z)
rvir = rvirmvir(Mvir, cosmo, z)
return NFW_escape_vel(r, Mvir=Mvir,
CvirorRs=cvir,
Rvir=rvir,
truncated=truncated)
r = np.linspace(0, 300,101)[1:]*u.kpc #0 has a singularity
vesc = NFW_escape_vel_from_Mvir(r, 1e12*u.solMass)
plt.plot(r, vesc, c='r', label=r'$V_{\rm esc}$')
plt.plot(r, -vesc, c='r')
plt.plot(r, 3**-0.5*vesc, c='r', ls=':', label=r'$V_{\rm esc}/\sqrt{3}$')
plt.plot(r, -3**-0.5*vesc, c='r', ls=':')
plt.legend(loc=0)
plt.xlabel('$r$ [kpc]', fontsize=18)
plt.ylabel(r'$v_{\rm esc}$ [km/s]', fontsize=18)
r = np.linspace(0, 300,101)[1:]*u.kpc #0 has a singularity
vesc0p5 = NFW_escape_vel_from_Mvir(r, 5e11*u.solMass)
vesc1 = NFW_escape_vel_from_Mvir(r, 1e12*u.solMass)
vesc2 = NFW_escape_vel_from_Mvir(r, 2e12*u.solMass)
plt.plot(r, vesc0p5, c='b', label=r'$M_{\rm vir}=5 \times 10^{11}$')
plt.plot(r, -vesc0p5, c='b')
plt.plot(r, vesc1, c='g', label=r'$M_{\rm vir}=1 \times 10^{12}$')
plt.plot(r, -vesc1, c='g')
plt.plot(r, vesc2, c='r', label=r'$M_{\rm vir}=2 \times 10^{12}$')
plt.plot(r, -vesc2, c='r')
plt.legend(loc=0)
plt.xlabel('$r$ [kpc]', fontsize=18)
plt.ylabel(r'$v_{\rm esc}$ [km/s]', fontsize=18)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Function to compute escape velocity given halo parameters (the implemented formulas are written out after this list)
Step3: Functions to compute halo parameters given cosmology and Mvir
Step6: Use these basic relations to get the rvir<->mvir conversion
Step7: A function to put all that together and use just Mvir
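For reference, the formulas implemented in Step1: with scale radius $a = R_{\rm vir}/c$ and $f(c) = \ln(1+c) - c/(1+c)$, the (untruncated) NFW potential and escape velocity are
$$\Phi(r) = -\frac{G M_{\rm vir}}{f(c)}\,\frac{\ln(1 + r/a)}{r}, \qquad v_{\rm esc}(r) = \sqrt{2\,\lvert\Phi(r)\rvert}.$$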
|
1,611
|
<ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
def parseRDD(point):
"""Parser for the current dataset. It receives a data point and returns
a sentence (third field).
Args:
    point (str): input data point
Returns:
    str: a string
"""
data = point.split('\t')
return (int(data[0]),data[2])
def notempty(point):
"""Returns whether the point string is not empty
Args:
    point (str): input string
Returns:
    bool: True if it is not empty
"""
return len(point[1])>0
filename = os.path.join("Data","Aula04","MovieReviews2.tsv")
rawRDD = sc.textFile(filename,100)
header = rawRDD.take(1)[0]
dataRDD = (rawRDD
#.sample(False, 0.1, seed=42)
.filter(lambda x: x!=header)
.map(parseRDD)
.filter(notempty)
#.sample( False, 0.1, 42 )
)
print 'Read {} lines'.format(dataRDD.count())
print 'Sample line: {}'.format(dataRDD.takeSample(False, 1)[0])
# EXERCISE
import re
split_regex = r'\W+'
stopfile = os.path.join("Data","Aula04","stopwords.txt")
stopwords = set(sc.textFile(stopfile).collect())
def tokenize(string):
"""An implementation of input string tokenization that excludes stopwords
Args:
    string (str): input string
Returns:
    list: a list of tokens without stopwords
"""
return <COMPLETAR>
wordsRDD = dataRDD.map(lambda x: tokenize(x[1]))
print wordsRDD.take(1)[0]
# TEST Tokenize a String (1a)
assert wordsRDD.take(1)[0]==[u'quiet', u'introspective', u'entertaining', u'independent', u'worth', u'seeking'], 'incorrect list!'
print 'ok!'
# EXERCISE
from pyspark.mllib.feature import Word2Vec
model = Word2Vec().<COMPLETAR>
print model.transform(u'entertaining')
print model.findSynonyms(u'entertaining', 2)
dist = np.abs(model.transform(u'entertaining')-np.array([-0.246186971664,-0.127226486802,0.0271916668862,0.0112947737798,-0.206053063273])).mean()
assert dist<1e-6, 'incorrect values'
print 'ok!'
assert model.findSynonyms(u'entertaining', 1)[0][0] == 'affair', 'incorrect values'
print 'ok!'
# EXERCISE
uniqueWords = (wordsRDD
.<COMPLETAR>
.<COMPLETAR>
.<COMPLETAR>
.<COMPLETAR>
.collect()
)
print '{} unique tokens'.format(len(uniqueWords))
w2v = {}
for w in uniqueWords:
w2v[w] = <COMPLETAR>
w2vb = sc.broadcast(w2v)
print 'Vetor entertaining: {}'.format( w2v[u'entertaining'])
vectorsRDD = (wordsRDD
.<COMPLETAR>
)
recs = vectorsRDD.take(2)
firstRec, secondRec = recs[0], recs[1]
print firstRec.shape, secondRec.shape
# TEST Tokenizing the small datasets (1c)
assert len(uniqueWords) == 3332, 'incorrect value'
print 'ok!'
assert np.mean(np.abs(w2v[u'entertaining']-[-0.24618697, -0.12722649, 0.02719167, 0.01129477, -0.20605306]))<1e-6,'incorrect value'
print 'ok!'
assert secondRec.shape == (10,5)
print 'ok!'
# EXERCISE
from pyspark.mllib.clustering import KMeans
vectors2RDD = sc.parallelize(np.array(w2v.values()),1)
print 'Sample vector: {}'.format(vectors2RDD.take(1))
modelK = KMeans.<COMPLETAR>
clustersRDD = vectors2RDD.<COMPLETAR>
print '10 first clusters allocation: {}'.format(clustersRDD.take(10))
# TEST Amazon record with the most tokens (1d)
assert clustersRDD.take(10)==[134, 126, 209, 221, 401, 485, 197, 269, 296, 265], 'incorrect value'
print 'ok'
# EXERCISE
def quantizador(point, model, k, w2v):
key = <COMPLETAR>
words = <COMPLETAR>
matrix = np.array( <COMPLETAR> )
features = np.zeros(k)
for v in matrix:
c = <COMPLETAR>
features[c] += 1
return (key, features)
quantRDD = dataRDD.map(lambda x: quantizador(x, modelK, 500, w2v))
print quantRDD.take(1)
# TEST Implement a TF function (2a)
assert quantRDD.take(1)[0][1].sum() == 5, 'incorrect values'
print 'ok!'
dataNorms = quantRDD.map(lambda rec: (rec[0],np.sqrt(rec[1].dot(rec[1]))))
dataNormsBroadcast = sc.broadcast(dataNorms.collectAsMap())
# EXERCISE
from itertools import product
def calcsim(rec):
items = list(rec[1])
return <COMPLETAR>
newRDD = (quantRDD
.<COMPLETAR>
.<COMPLETAR>
.<COMPLETAR>
.<COMPLETAR>
.<COMPLETAR>
.cache()
)
newcount = newRDD.count()
print newcount
assert newcount==11796442, 'incorrect value'
print 'ok'
# EXERCISE
def genklist(rec,k):
"""Generate the list of the k most similar documents to the key
Args:
    record: a pair, (doc, [(doc,sim)])
    k: number of most similar elements
Returns:
    pair: (doc, [(doc,sim)])
"""
<COMPLETAR>
return (key, docs[:k])
def knn(simRDD, k):
"""Generate the knn RDD for a given RDD.
Args:
    simRDD: RDD of ( (doc1,doc2), sim)
    k: number of most similar elements
Returns:
    RDD: RDD of ( doc1, [docs, sims])
"""
ksimRDD = (simRDD
.<COMPLETAR>
.<COMPLETAR>
.<COMPLETAR>
)
return ksimRDD
ksimReviewsRDD = knn(newRDD, 3)
ksimReviewsRDD.take(3)
print dataRDD.filter(lambda x: x[0] in [55300,39009,130973,66284]).collect()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Lab 5b - k-Means for Feature Quantization
Step4: Part 1
Step5: (1b) Applying the word2vec transformation
Step6: (1c) Generating an RDD of matrices
Step7: Part 2
Step8: (2b) Transforming the data matrix into quantized vectors
Step9: Part 3
Step10: (4b) Compute the cosine similarity between pairs of records (the formula is written out after this list)
Step13: (4f) k-NN
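For step (4b), the similarity computed between two quantized vectors is the cosine similarity
$$\mathrm{sim}(a, b) = \frac{a \cdot b}{\lVert a \rVert\,\lVert b \rVert},$$
which is why dataNorms precomputes $\lVert a \rVert = \sqrt{a \cdot a}$ for every document.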
|
1,612
|
<ASSISTANT_TASK:>
Python Code:
import skotree
skotree.VERSION
# this loads the library
import skotree
# this loads the experiment located
# in the ./tests directory
experiment = skotree.oTree("./tests")
experiment
experiment.settings
experiment.lsapps()
experiment.lssessions()
experiment.session_config("matching_pennies")
experiment.settings.REAL_WORLD_CURRENCY_CODE
all_data = experiment.all_data()
all_data
data = experiment.app_data("matching_pennies")
data
filtered = data[["participant.code", "player.penny_side", "player.payoff"]]
filtered
filtered.describe()
group = filtered.groupby("participant.code")
group.describe()
data.columns
tspent = experiment.time_spent()
tspent
# check the available columns
tspent.columns
# filter only the most important columns
tspent = tspent[["participant__code", "page_index", "seconds_on_page"]]
tspent
# let's describe the time spent by page
tspent.groupby("page_index").describe()
# and let's make a plot, grouped by participant
%matplotlib inline
tspent.groupby("participant__code")[["seconds_on_page"]].plot();
storage = experiment.bot_data("matching_pennies", 4)
storage
storage["matching_pennies"]
storage.matching_pennies
experiment.bot_data("matching_pennies", 1)
remote = skotree.oTree("http://localhost:8000")
remote
remote.lsapps()
remote.lssessions()
remote.app_data("matching_pennies")
skotree.oTree("http://localhost:9000")
# the credentials are not stored internally
exp = skotree.oTree("http://localhost:9000", username="admin", password="skotree")
exp
exp.all_data()
remote.bot_data("matching_pennies", 1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Philosophy
Step2: The previous code does a lot of things in the background
Step3: This is the traditional settings object that you get from oTree
Step4: or maybe you want to see all the sessions configured
Step5: Yikes! the app and the session have the same name. Let's check the full session configuration.
Step6: Finally you can access any content of the settings object using the attribute shown before. For example, maybe you want to see the "currency code"
Step7: The Data
Step8: 2. Per-App Data
Step9: With the power of pandas.DataFrame you can easily filter the data
Step10: Describe the data
Step11: group by participant
Step12: or check all the columns available
Step13: 3. Per-App Documentation
Step14:
Step15: as you can see, the only available app (as we saw before) is matching_pennies.
Step16: also for convenience the syntax storage.matching_pennies is available
Step17: If for some reason the experiment fails, this method raises an exception. For example, if we provide an invalid number of participants
Step18: Connect to a remote experiment
Step19: Connect to a remote experiment With Authentication
Step20: In this case you need to provide the parameters username and password
Step21: and now all works as before
Step22:
|
1,613
|
<ASSISTANT_TASK:>
Python Code:
import math
from astropy import units as u
pixel_pitch = 5.4 * u.micron / u.pixel # STF-8300M pixel pitch
focal_length = 400 * u.millimeter # Canon EF 400 mm f/2.8L IS II USM focal length
resolution = (3326, 2504) * u.pixel # STF-8300M resolution in pixels, (x, y)
sampling = (pixel_pitch / focal_length).to(u.radian/u.pixel, equivalencies = u.equivalencies.dimensionless_angles())
sampling.to(u.arcsec/u.pixel)
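# sanity check (illustrative arithmetic): 206265 arcsec/rad * 5.4e-6 m / 0.4 m
# is about 2.78 arcsec/pixel, inside the 2.8 +/- 0.1 arcsec/pixel requirement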
fov = resolution * sampling
fov.to(u.degree)
exposure_times = ((5, 10, 30) * u.minute)
exposure_times
n_units = (1, 4, 10)
n_units
coalignment_tolerance = 5 * u.arcminute
coalignment_tolerance
north_alignment_tolerance = 2.5 * u.degree
north_alignment_tolerance
central_fwhm = 1.5 * u.arcsecond
tilt_fwhm_degradation = 0.4 * u.arcsecond
max_fwhm = 2 * u.arcsecond
max_fwhm
max_zenith_distance = 60 * u.degree
max_zenith_distance
n_units
coalignment_tolerance
north_alignment_tolerance
fwhm_to_rms = (2 * (2 * math.log(2))**0.5)**-1
max_flexure_rms = fwhm_to_rms * (max_fwhm**2 - (central_fwhm + tilt_fwhm_degradation)**2)**0.5
max_flexure_rms
ha_angles = (exposure_times.to(u.hour) * (u.hourangle / u.hour)).to(u.degree)
ha_angles
max_zenith_distance
lens_mass = 4.1 * u.kilogram
camera_mass = 0.8 * u.kilogram
adaptor_mass = 0.2 * u.kilogram
imaging_unit_mass = lens_mass + camera_mass + adaptor_mass
max_payload_mass = 109 * u.kilogram
max_structure_mass = max_payload_mass - max(n_units) * imaging_unit_mass
max_structure_mass
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Each imaging unit shall deliver an on-sky spatial sampling of $2.8\pm 0.1'' /$ pixel
Step2: Each imaging unit shall deliver an instantaneous field of view of $2.6 \pm 0.1 \times 1.9 \pm 0.1$ degrees
Step3: The system shall meet all requirements with exposure times of up to 30 minutes
Step4: The system shall support up to at least 10 imaging units per telescope mount
Step5: All imaging units should point in the same direction to within a tolerance of 5 arcminutes radius on sky (TBC)
Step6: All imaging units shall have the camera y axis aligned with the North-South axis to within a tolerance of $\pm$2.5 degrees (TBC)
Step7: The system shall deliver a PSF with average FWHM $< 2''$ over the full field of view, as measured using a 3rd order polynomial fit performed with the SExtractor software
Step8: The system shall satisfy all functional requirements (e.g. image quality, alignment) while observing any sky position with a zenith distance less than 60 degrees. The system is not required to meet functional requirements if observing a sky position with a zenith distance of greater than 60 degrees
Step9: Imaging unit interface
Step10: Flexure
Step11: A given exposure time corresponds to an angle of rotation about the telescope mount hour angle axis.
Step12: The support structure(s) shall ensure that the pointing of all imaging units shall remain fixed relative to the telescope mount axes to within 0.27 arcseconds rms while the hour angle axis rotates through any 7.5 degree angle, for any position of the declination axis, within the sky coverage requirement's zenith distance range
Step13: Mass
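A quick arithmetic cross-check of the sampling requirement (a sketch using only the numbers quoted above, no astropy needed):
# 1 radian = 206265 arcsec; pixel pitch 5.4 um, focal length 400 mm
plate_scale = 206265 * (5.4e-6 / 0.400)   # arcsec per pixel
print(round(plate_scale, 2))              # ~2.78, inside the 2.8 +/- 0.1 spec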
|
1,614
|
<ASSISTANT_TASK:>
Python Code:
import io
# The legacy way:
file = open('/tmp/some_integers_1.txt', 'w')
file.write('{}\n'.format(1))
file.write('{}\n'.format(2))
file.write('{}\n'.format(3))
file.close()
!cat /tmp/some_integers_1.txt
# The modern (pythonic) alternative:
with io.open('/tmp/some_integers_2.txt', 'w') as file:
file.write('{}\n'.format(1))
file.write('{}\n'.format(2))
file.write('{}\n'.format(3))
file.closed
!cat /tmp/some_integers_2.txt
# The classic alternative:
file = io.open('/tmp/some_integers_1.txt', 'r')
while True:
line = file.readline()
if not line:
break
#for i in range(3):
print(int(line))
file.close()
# The pythonic alternative:
with io.open('/tmp/some_integers_2.txt', 'r') as file:
for line in file:
print(int(line))
file.closed
import struct # See https://docs.python.org/3/library/struct.html#format-characters
with io.open('/tmp/some_integers_2.bin', 'wb') as file:
file.write(struct.pack('h', 1)) # 2 bytes, signed int
file.write(struct.pack('h', 2)) # 2 bytes, signed int
file.write(struct.pack('i', 3)) # 4 bytes, signed int
with io.open('/tmp/some_integers_2.bin', 'rb') as file:
print(struct.unpack('h', file.read(struct.calcsize('h')))[0])
print(struct.unpack('h', file.read(struct.calcsize('h')))[0])
print(struct.unpack('i', file.read(struct.calcsize('i')))[0])
import pickle
colors = ['red', 'green', 'blue']   # renamed so the built-in `list` is not shadowed
pickle.dump(colors, open('list.dat', 'wb'))
colors2 = pickle.load(open('list.dat', 'rb'))
colors2
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Write some integers
Step2: Reading the file
Step3: Opening modes
Step4: Persistence of objects (serialization) ... on disk
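A short sketch of the opening modes behind Step3 ('r' read, 'w' write/truncate, 'a' append, trailing 'b' binary); the file name is reused from the cells above:
import io
with io.open('/tmp/some_integers_2.txt', 'a') as f:   # append instead of truncating
    f.write('{}\n'.format(4))
with io.open('/tmp/some_integers_2.txt', 'r') as f:
    print(f.read())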
|
1,615
|
<ASSISTANT_TASK:>
Python Code:
# code cell
name = "Jonathan"
import numpy as np
# don't do:
# from numpy import *
max("a")
np.max("a")
# %matplotlib inline
# %config InlineBackend.figure_format='retina'
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import os
from pivottablejs import pivot_ui
import sys
import warnings
warnings.filterwarnings("ignore")
first = 1
second = 2
third = 3
canadian_politics = pd.read_csv("../data/mps2.csv")
# recommend using .head()
canadian_politics.head(10)
sns.distplot(canadian_politics["Age"].dropna());
sns.set_context("poster", font_scale=1.3)
fig, ax = plt.subplots(figsize=(12, 8))
sns.distplot(canadian_politics["Age"].dropna())
fig.tight_layout()
# Province, Party, (deselect) Average, Age, Heatmap
pivot_ui(canadian_politics)
newdf = pd.read_clipboard(sep='\t')
newdf.fillna("")
canadian_politics['Age-bin'] = pd.cut(canadian_politics['Age'], [x for x in range(10, 100, 5)])
# pd.qcut# neat!
import numpy as np
from numpy.random import chisquare, choice
np.random.chisquare()
# pure tab right ↓ less useful
np.random.choice()
# shift-tab right ↓ more useful
np.linspace(start=50, stop=100, endpoint=False)
np.linspace(start=50, end=120)
np.linspace(start=50, stop=150, num=100, endpoint=False)
plt.plot(np.linspace(start, stop, num=50, ))
np.linspace?
?np.linspace
np.linspace??
!code ~/miniconda3/envs/dspy3/lib/python3.6/site-packages/numpy/core/function_base.py
import textwrap
def example_function():
    """Docstring for example function"""
    print(textwrap.dedent("""
        This is a multi-lined string
        that I want to write inside of a function.
        Notice what happens when I print this.
            And when something is indented more."""))
example_function()
# python3.6+
name
f"{name}'s name is not Alex."
age = 37
f"{age} plus 2 = {age + 2}"
# Note:
# f{example_dictionary["key"]}
# But first find and replace
def silly_function(xval):
    """Takes a value and returns the value."""
xval_sq = xval ** 2.0
3 + 1
xval_abs = np.sqrt(xval_sq)
return xval_abs
silly_function(-2,)
silly_function?
silly_function??
!ls ../data/
coal_years = !ls ../data/coal_prod_20*.csv
coal_years
from glob import glob
for filename in glob("../data/coal_prod_20*.csv"):
print(filename)
ex_dictionary = {}
# Indent/dedent/comment
for index in range(5):
ex_dictionary["float_one"] = 1
ex_dictionary["float_two"] = 2
ex_dictionary["float_three"] = 3
ex_dictionary["float_four"] = 4
ex_dictionary
example["one_better_neat"] = 1
example["two_better_neat"] = 2
example["three_better_neat"] = 3
example["four_better_neat"] = 4
%%latex
If you want to get crazier$\ldots$
\begin{equation}
\oint_S {E_n dA = \frac{1}{{\varepsilon _0 }}} Q_\textrm{inside}
\end{equation}
%%python2
print "hi"
%%bash
wget http://www.ast.cam.ac.uk/%7Erfc/vpfit12.2.tar.gz
mkdir -p vpfit12
cd vpfit12
tar -xvzf ../vpfit12.2.tar.gz
normal_argument = 12.4
second_argument = 98.4
arg_with_spaces = "the secret to life"
%%bash -s {normal_argument} {second_argument}
echo "This script knows the value of the argument: $1"
echo "It also has no trouble with the second argument: $2"
%%bash -s "$arg_with_spaces"
echo "This bash script knows $1."
# %%R -i df -o df2
# df2 <-
ls vpfit10/
tailthing = "*.ipynb"
tailthing
!ls {tailthing}
output = !ls
output
%env
!pwd
a = 3
a
print(canadian_politics.head().to_latex())
5 * 83
_
3 + 7
_
print(_81)
saved = _25
saved
%history
%history -opf alex.txt
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Tips and Tricks
Step2: Imports
Step3: Keyboard shortcuts
Step4: Split a cell with -
Step5: Enhanced Pandas Dataframe Display
Step6: Tab -- Your Friend
Step7: shift-tab
Step8: shift-tab-tab
Step9: shift-tab-tab-tab
Step10: shift-tab-tab-tab-tab
Step11: DO NOT TRY shift-tab-tab-tab-tab-tab
Step12: ??
Step16: Random stuff
Step18: Inspect everything and Find and Replace
Step19: Line numbers (lowercase "L")
Step20: Multicursor magic
Step21: Monospace
Step22: Scripting
Step23: Need to set or change environment variables
Step24: Danger zone
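A hedged sketch for Step23 (the variable name is made up for illustration): the %env magic sets an environment variable for the running kernel.
import os
%env MY_FLAG=1
print(os.environ.get('MY_FLAG'))   # '1'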
|
1,616
|
<ASSISTANT_TASK:>
Python Code:
from IPython.core.display import HTML
with open ("../style.css", "r") as file:
css = file.read()
HTML(css)
import ply.lex as lex
tokens = [
'NUMBER',
'PLUS',
'MINUS',
'TIMES',
'DIVIDE',
'LPAREN',
'RPAREN'
]
t_PLUS = r'\+'
t_MINUS = r'-'
t_TIMES = r'\*'
t_DIVIDE = r'/'
t_LPAREN = r'\('
t_RPAREN = r'\)'
def t_NUMBER(t):
r'0|[1-9][0-9]*'
t.value = int(t.value)
return t
def t_newline(t):
r'\n+'
t.lexer.lineno += len(t.value)
t_ignore = ' \t'
def t_error(t):
print(f"Illegal character {t.value[0]} at line {t.lexer.lineno}.")
t.lexer.skip(1)
__file__ = 'hugo'
lexer = lex.lex()
data = '''3 + 4 * 10 + 007 + (-20) * 2
3 + 4 * 10 + abc + (-20) * 2'''
lexer.input(data)
for tok in lexer:
print(tok)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This example has been extracted from the official documentation of Ply.
Step2: We start with a definition of the <em style="color
Step3: There are two ways to define these tokens
Step4: If we need to transform a token, we can define the token via a function. In that case, the first line of the function
Step5: The rule below is used to keep track of line numbers. We use the function len since there might be more than one newline character in the matched string.
Step6: The keyword t_ignore specifies those characters that should be discarded.
Step7: All characters not recognized by any of the defined tokens are handled by the function t_error.
Step8: Below the function lex.lex() creates the lexer specified above. Since this code is expected to be part
Step10: Let's test the generated scanner, which is stored in lexer, with the following string
Step11: Let us feed the scanner with the string data. This is done by calling the method input of the generated scanner.
Step12: Now we put the lexer to work by using it as an iterable. This way, we can simply iterate over all the tokens that our scanner recognizes.
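A small follow-up sketch reusing the lexer built above: feed it one more expression and collect (type, value) pairs.
lexer.input('(1 + 2) * 3')
print([(tok.type, tok.value) for tok in lexer])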
|
1,617
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
from sklearn.model_selection import StratifiedKFold
df = pd.read_csv('data/raw/train.csv')
# initialise a StratifiedKFold object with 5 folds and
# declare the column that we wish to stratify by, which in
# this case is the column called "label"
skf = StratifiedKFold(n_splits=5)
target = df.loc[:,'label']
# for each fold split the data into train and validation
# sets and save the fold splits to csv
fold_no = 1
for train_index, val_index in skf.split(df, target):
train = df.loc[train_index,:]
val = df.loc[val_index,:]
train.to_csv('data/processed/folds/' + 'train_fold_' + str(fold_no) + '.csv')
val.to_csv('data/processed/folds/' + 'val_fold_' + str(fold_no) + '.csv')
fold_no += 1
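# Optional sanity check (an added sketch, not part of the original snippet):
# the label distribution inside each fold should roughly match the full data.
print(df['label'].value_counts(normalize=True))
print(val['label'].value_counts(normalize=True))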
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
1,618
|
<ASSISTANT_TASK:>
Python Code:
def largest_smallest_integers(lst):
'''
Create a function that returns a tuple (a, b), where 'a' is
the largest of negative integers, and 'b' is the smallest
of positive integers in a list.
If there are no negative or positive integers, return them as None.
Examples:
largest_smallest_integers([2, 4, 1, 3, 5, 7]) == (None, 1)
largest_smallest_integers([]) == (None, None)
largest_smallest_integers([0]) == (None, None)
'''
    negatives = list(filter(lambda x: x < 0, lst))
    positives = list(filter(lambda x: x > 0, lst))
    return (max(negatives) if negatives else None, min(positives) if positives else None)
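# Quick checks (an added sketch; the first two cases come from the docstring,
# the third is computed by hand: max of {-3, -5} is -3, min of {2, 7} is 2):
assert largest_smallest_integers([2, 4, 1, 3, 5, 7]) == (None, 1)
assert largest_smallest_integers([]) == (None, None)
assert largest_smallest_integers([-3, -5, 2, 7]) == (-3, 2)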
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
1,619
|
<ASSISTANT_TASK:>
Python Code:
print "This is a python cell. It executes and its output renders below."
print "Running this cell next."
from IPython.display import Image
Image("https://pbs.twimg.com/media/CJsHH88UYAE0ewF.jpg")
from IPython.display import YouTubeVideo
YouTubeVideo("aIXED26Wppg")
Image("http://jupyter-client.readthedocs.org/en/latest/_images/frontend-kernel.png")
%cat requirements.txt
x = 5
y = 10
print x * y
!pip install requests
from IPython.html.widgets import *
%matplotlib inline
from IPython.display import display
slider = IntSlider()
display(slider)
slider.value
from IPython.html import widgets
[n for n in dir(widgets) if not n.endswith('Widget') and n[0] == n[0].upper() and not n[0] == '_']
%load_ext rmagic
%%R
r <- rnorm(100)
plot(r)
%load_ext julia.magic
%%julia
using DataFrames
df = DataFrame(x1=[1, 2], x2=["foo", "bar"])
%%HTML
<h3>Hi there</h3>
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can do most anything here that you could do in the Python REPL; indeed, this is basically a web front-end to the Python REPL, or more precisely, to the IPython REPL. IPython is an enhanced wrapper around the standard Python REPL that has been around (and that I've been using) for over ten years.
Step2: There are many nifty little hooks to do things like render remote images inline
Step3: And much, much more
Step4: The markdown rendering includes support for mathjax
Step5: Kernels exist for many languages. This is extremely useful.
Step6: This is the minimal set of requirements I had to install in a virtualenv to work with this notebook. The key one is ipython[notebook]. This brings in the IPython kernel (which you can then use on its own in the shell) along with all the other notebook dependencies, like pyzmq, tornado, jinja, and others.
Step7: Interactive widgets
Step8: There is a whole range of widget types
Step9: Working with other languages
Step10: ...and Julia...
Step11: Recall that this is a Python 2 kernel (see top right). Because of this, we are using python dependencies for R and Julia integration. Those are nifty tools in their own right - mixing data and logic among languages can have advantages. More to the point, though, you could just as easily have an IJulia notebook, or an R notebook, or a bash notebook, etc., where the default execution is performed in that language, like it is for Python in this one.
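In the same spirit as the %%R and %%julia cells above, IPython ships a %%bash cell magic out of the box; a minimal added example:
%%bash
echo "hello from a bash cell"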
|
1,620
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
rides[:24*10].plot(x='dteday', y='cnt')
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes)) #2x56
print("Weights - Input->Hidden: ", self.weights_input_to_hidden.shape)
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes)) #1x2
print("Weights - Hidden->Output: ", self.weights_hidden_to_output.shape)
self.lr = learning_rate
#### Set this to your implemented sigmoid function ####
# Activation function is the sigmoid function
# TODO: Activation Function
# ADDED
#self.activation_function = sigmoid if defined as a new function
self.activation_function = lambda x: 1 / (1 + np.exp(-x))
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T # 56x1
targets = np.array(targets_list, ndmin=2).T # 1x1
print("Inputs: ", inputs.shape)
print("Targets: ", targets.shape)
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer
# ADDED
hidden_inputs = inputs # signals into hidden layer
hidden_outputs = self.activation_function(np.dot(self.weights_input_to_hidden, inputs)) # signals from hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
print(hidden_inputs.shape) #56x1
print(hidden_outputs.shape) #2x1
        # Output layer -- note: no sigmoid here, the network output is linear
        final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)  # signals into final output layer
        final_outputs = final_inputs                                          # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
        # Output error: difference between the desired target and the actual output
        output_errors = targets - final_outputs
        # With a linear output unit, the error gradient at the output is just
        # the error itself (no sigmoid derivative term)
        # Backpropagated error
        hidden_grad = hidden_outputs * (1.0 - hidden_outputs)                    # sigmoid derivative
        hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors)  # error at the hidden layer
        # Update the weights with a gradient descent step
        self.weights_hidden_to_output += self.lr * np.dot(output_errors, hidden_outputs.T)
        self.weights_input_to_hidden += self.lr * np.dot(hidden_errors * hidden_grad, inputs.T)
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
        # Hidden layer
        hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)  # signals into hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)      # signals from hidden layer
        # Output layer (linear -- no sigmoid)
        final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
        final_outputs = final_inputs
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
import sys
### TODO: Set the hyperparameters here ###
epochs = 4000
learning_rate = 0.1
hidden_nodes = 20
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.ix[batch].values,
train_targets.ix[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=0.5)
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
print(network.run(inputs))
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and prepare the data
Step2: Checking out the data
Step3: Dummy variables
Step4: Scaling target variables
Step5: Splitting the data into training, testing, and validation sets
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step14: Time to build the network
Step15: Training the network
Step16: Check out your predictions
Step17: Thinking about your results
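A small numerical sketch (added for illustration) of the identity the backward pass above relies on: for the sigmoid s(x), the derivative is s(x) * (1 - s(x)).
import numpy as np
s = lambda x: 1 / (1 + np.exp(-x))
x, eps = 0.3, 1e-6
numeric = (s(x + eps) - s(x - eps)) / (2 * eps)
print(np.isclose(numeric, s(x) * (1 - s(x))))   # True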
|
1,621
|
<ASSISTANT_TASK:>
Python Code:
print("This is the first line.")
print("This is the second line.")
print("This is the third line.")
print("Hello, world!")
print("This is the first line.")
print("This is the second line.")
print("This is the third line.")
print("This is the first line.")
print("This is the second line.")
print("This is the third line.")
# This is my program
print("My program")
# This is my program
and it is very interesting
print("My program")
# This is my program
#and it is very interesting
print("My program")
2 + 3
5 - 8
8 * 13
5 / 2
(2 + 3) * 2 + 3
5 // 2
-5 // 2
9 % 2
37 % 10
-37 % 10
### This is a cool program for demonstrating Python's logical operations ###
# Demonstrate the operator "greater than"
print("Is it true that 5 is greater than 4?")
print(5 > 4)
# Demonstrate the operator "less than"
print("Is it true that 100 is less than 50?")
print(100 < 50)
# Demonstrate the operator "greater than or equal to"
print("Is it true that 3 is greater that or equal to 5?")
print(3 >= 5)
# Demonstrate the operator "less than or equal to"
print("Is it true that 6 is less than or equal to 6?")
print(6 <= 6)
# Print a number
print(2)
# Print a number which is a result of a mathematical operation
print(2 + 3 - 5 + 8)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Syntax highlighting in the Notepad++ editor
Step2: From the menu choose <i>Language</i> > <i>P</i> > <i>Python</i>. You will notice that the function name is highlighted in one color, and the quoted text passed to this function in another.
Step3: You will notice that <i>Notepad++</i> colors the <i>print</i> function and the quoted text differently, i.e. it "knows" that this is Python code. <i>Notepad++</i> can work out how to syntax-highlight the text from the extension of the file it is displaying, without the explicit language selection that we needed for the previous document.
Step4: In most of the exercises, we will type all Python code in the <i>Notepad++</i> editor, save it to files with the "py" extension, and run it from the console using the command
Step5: Then create the following program.
Step6: We got an "invalid syntax" error, by which Python tells us that it does not recognize the words we gave it as commands. There is a specific set of words and rules that make up the language. When we do not follow them, we cause errors like this one.<br /><br />
Step7: Python does not recognize words like "and", "it", "interesting" etc. However, it does recognize the hash sign (#), which in Python marks <i>comments</i>. In programming languages, comments are something that is ignored, as if it did not exist. Since they have no effect on the execution of the program itself (because they are ignored), they are mostly used for two things
Step8: The order of operations and parentheses work just as in mathematics. The technical documentation of the Python language explains this down to the last detail. For example
Step9: Try the integer division operation, i.e. "division rounding down" (floor division)
Step10: Try the <i>modulo</i> operation, i.e. "the remainder when dividing by". For example
Step11: The behavior of this operator is interesting when the left-hand side is a negative number. Namely, floor division takes place. For example, in computing -37 % 10, the "quotient" is computed first, which in this case is -4 (because of rounding down). The remainder is then computed as the difference between the dividend (-37) and the product of the quotient (-4) and the divisor (10): -37 - (-4 * 10). A worked check is shown after this list.
Step12: Try the comparison operations. You can try them in an interesting way by creating the following program (type the code into a file, save the file, and from the Windows console tell Python to run that file: <i>python file_name.py</i>).
Step13: The <i>print</i> function can display on the screen, or "print"
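A worked check of Step11 (floor division and remainder for a negative dividend):
print(-37 // 10)   # -4, the quotient rounded down
print(-37 % 10)    # 3, because -37 - (-4 * 10) == 3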
|
1,622
|
<ASSISTANT_TASK:>
Python Code:
from pynq.overlays.base import BaseOverlay
base = BaseOverlay('base.bit')
%%microblaze base.PMODA
#include <i2c.h>
#include <pmod_grove.h>
int read_adc() {
i2c device = i2c_open(PMOD_G4_B, PMOD_G4_A);
unsigned char buf[2];
buf[0] = 0;
i2c_write(device, 0x50, buf, 1);
i2c_read(device, 0x50, buf, 2);
return ((buf[0] & 0x0F) << 8) | buf[1];
}
read_adc()
%%microblaze base.PMODA
#include <timer.h>
#include <gpio.h>
#include <pmod_grove.h>
void flash_led() {
gpio led = gpio_open(PMOD_G1_A);
gpio_set_direction(led, GPIO_OUT);
int state = 0;
while (1) {
gpio_write(led, state);
state = !state;
delay_ms(500);
}
}
flash_led()
%%microblaze base.PMODA
#include <pyprintf.h>
int test_print(float value) {
pyprintf("Printing %f from the Microblaze!\n", value);
return 0;
}
test_print(1.5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can use the gpio and timer components in concert to flash an LED connected to G1. The timer header provides PWM and program delay functionality, although only one can be used simultaneously.
Step2: pyprintf
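A usage sketch (added): the Python stubs that %%microblaze generates can be called repeatedly from the notebook, for example to poll the Grove ADC defined above.
for _ in range(3):
    print(read_adc())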
|
1,623
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-1', 'ocnbgchem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Type
Step7: 1.4. Elemental Stoichiometry
Step8: 1.5. Elemental Stoichiometry Details
Step9: 1.6. Prognostic Variables
Step10: 1.7. Diagnostic Variables
Step11: 1.8. Damping
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Step13: 2.2. Timestep If Not From Ocean
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Step15: 3.2. Timestep If Not From Ocean
Step16: 4. Key Properties --> Transport Scheme
Step17: 4.2. Scheme
Step18: 4.3. Use Different Scheme
Step19: 5. Key Properties --> Boundary Forcing
Step20: 5.2. River Input
Step21: 5.3. Sediments From Boundary Conditions
Step22: 5.4. Sediments From Explicit Model
Step23: 6. Key Properties --> Gas Exchange
Step24: 6.2. CO2 Exchange Type
Step25: 6.3. O2 Exchange Present
Step26: 6.4. O2 Exchange Type
Step27: 6.5. DMS Exchange Present
Step28: 6.6. DMS Exchange Type
Step29: 6.7. N2 Exchange Present
Step30: 6.8. N2 Exchange Type
Step31: 6.9. N2O Exchange Present
Step32: 6.10. N2O Exchange Type
Step33: 6.11. CFC11 Exchange Present
Step34: 6.12. CFC11 Exchange Type
Step35: 6.13. CFC12 Exchange Present
Step36: 6.14. CFC12 Exchange Type
Step37: 6.15. SF6 Exchange Present
Step38: 6.16. SF6 Exchange Type
Step39: 6.17. 13CO2 Exchange Present
Step40: 6.18. 13CO2 Exchange Type
Step41: 6.19. 14CO2 Exchange Present
Step42: 6.20. 14CO2 Exchange Type
Step43: 6.21. Other Gases
Step44: 7. Key Properties --> Carbon Chemistry
Step45: 7.2. PH Scale
Step46: 7.3. Constants If Not OMIP
Step47: 8. Tracers
Step48: 8.2. Sulfur Cycle Present
Step49: 8.3. Nutrients Present
Step50: 8.4. Nitrous Species If N
Step51: 8.5. Nitrous Processes If N
Step52: 9. Tracers --> Ecosystem
Step53: 9.2. Upper Trophic Levels Treatment
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Step55: 10.2. Pft
Step56: 10.3. Size Classes
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Step58: 11.2. Size Classes
Step59: 12. Tracers --> Disolved Organic Matter
Step60: 12.2. Lability
Step61: 13. Tracers --> Particules
Step62: 13.2. Types If Prognostic
Step63: 13.3. Size If Prognostic
Step64: 13.4. Size If Discrete
Step65: 13.5. Sinking Speed If Prognostic
Step66: 14. Tracers --> Dic Alkalinity
Step67: 14.2. Abiotic Carbon
Step68: 14.3. Alkalinity
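Every numbered item above follows the same pattern; a hedged example with an illustrative value (not actual IPSL model metadata):
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
DOC.set_value("NPZD")   # must be one of the valid choices listed for that property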
|
1,624
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.datasets import load_digits
digits = load_digits()
digits.images.shape
idx = 14
digits.target[idx], digits.images[idx]
import matplotlib.pyplot as plt
fig, axes = plt.subplots(10, 10, figsize=(8, 8),
subplot_kw={'xticks':[], 'yticks':[]},
gridspec_kw=dict(hspace=0.1, wspace=0.1))
for i, ax in enumerate(axes.flat):
ax.imshow(digits.images[i], cmap='binary', interpolation='nearest')
ax.text(0.05, 0.05, str(digits.target[i]),
transform=ax.transAxes, color='green')
plt.show()
X = digits.data
X.shape
y = digits.target
y.shape
from sklearn.manifold import Isomap
iso = Isomap(n_components=2)
iso.fit(digits.data)
data_projected = iso.transform(digits.data)
data_projected.shape
import seaborn as sns
plt.scatter(data_projected[:, 0], data_projected[:, 1], c=digits.target,
edgecolor='none', alpha=0.5, s=20,
cmap=plt.cm.get_cmap('nipy_spectral', 10))
plt.colorbar(label='digit label', ticks=range(10))
plt.clim(-0.5, 9.5)
plt.show()
from sklearn.model_selection import train_test_split
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, random_state=0)
Xtrain.shape, Xtest.shape, ytrain.shape, ytest.shape
from sklearn.naive_bayes import GaussianNB
model = GaussianNB()
model.fit(Xtrain, ytrain)
y_model = model.predict(Xtest)
from sklearn.metrics import accuracy_score
accuracy_score(ytest, y_model)
from sklearn.metrics import confusion_matrix
mat = confusion_matrix(ytest, y_model)
sns.heatmap(mat, square=True, annot=True, cbar=False)
plt.xlabel('predicted value')
plt.ylabel('true value')
plt.show()
fig, axes = plt.subplots(10, 10, figsize=(8, 8),
subplot_kw={'xticks':[], 'yticks':[]},
gridspec_kw=dict(hspace=0.1, wspace=0.1))
test_images = Xtest.reshape(-1, 8, 8)
for i, ax in enumerate(axes.flat):
ax.imshow(test_images[i], cmap='binary', interpolation='nearest')
ax.text(0.05, 0.05, str(y_model[i]),
transform=ax.transAxes,
color='green' if (ytest[i] == y_model[i]) else 'red')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We need a two-dimensional, [n_samples, n_features] representation. We can accomplish this by treating each pixel in the image as a feature.
Step2: Unsupervised learning
Step3: Let's plot this data to see if we can learn anything from its structure
Step4: Classification on digits
Step5: Now that our model has made predictions, we can gauge its accuracy by comparing the true values of the test set to the predictions
Step6: With even this extremely simple model, we find about 80% accuracy for classification of the digits!
Step7: Another way to gain intuition into the characteristics of the model is to plot the inputs again, with their predicted labels. We'll use green for correct labels, and red for incorrect labels
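A short added sketch: row-normalizing the confusion matrix mat from above gives per-class recall.
import numpy as np
recall = mat.diagonal() / mat.sum(axis=1)
print(np.round(recall, 2))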
|
1,625
|
<ASSISTANT_TASK:>
Python Code:
# Measurement noise
noise_var = 0.05 ** 2
# Bounds on the inputs variable
bounds = [(-5., 5.), (-5., 5.)]
# Define Kernel
kernel = GPy.kern.RBF(input_dim=len(bounds), variance=2., lengthscale=1.0,
ARD=True)
# Initial safe point
x0 = np.zeros((1, len(bounds)))
# Generate function with safe initial point at x=0
def sample_safe_fun():
while True:
fun = safeopt.sample_gp_function(kernel, bounds, noise_var, 10)
if fun([0,0], noise=False) > 0.5:
break
return fun
# Define the objective function
fun = sample_safe_fun()
# The statistical model of our objective function
gp = GPy.models.GPRegression(x0, fun(x0), kernel, noise_var=noise_var)
# The optimization routine
opt = safeopt.SafeOptSwarm(gp, 0., bounds=bounds, threshold=0.2)
# parameter_set = safeopt.linearly_spaced_combinations(bounds, 100)
# opt = safeopt.SafeOpt(gp, parameter_set, 0., lipschitz=None, threshold=0.2)
opt.plot(100, plot_3d=False)
# Obtain next query point
x_next = opt.optimize()
# Get a measurement from the real system
y_meas = fun(x_next)
# Add this to the GP model
opt.add_new_data_point(x_next, y_meas)
opt.plot(100)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Interactive run of the algorithm
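The three calls in Step1 form one iteration; a sketch of running several iterations in a loop (reusing opt and fun from the cells above):
for _ in range(5):
    x_next = opt.optimize()
    y_meas = fun(x_next)
    opt.add_new_data_point(x_next, y_meas)
opt.plot(100)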
|
1,626
|
<ASSISTANT_TASK:>
Python Code:
df = pd.read_csv("../data/ign.csv")
print(df.info())
df = df.drop('title', axis=1)
df = df.drop('url', axis=1)
df = df.drop('Unnamed: 0', axis=1)
df = df.dropna()
print(df.info())
print(df.head())
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
for col in df.columns.values:
#Encode only the categorical variables
if df[col].dtype=='object':
le.fit(df[col].values)
print("Encoded classes are: {}\n".format(le.classes_))
df[col]=le.transform(df[col])
print(df.head())
# Now it's your turn
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Encode parameters
Step2: Tips and objectives
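A hedged sketch of decoding: after the loop, le holds the classes of the last column it fitted, and transform maps between the original strings and their integer codes.
encoded = le.transform(le.classes_)        # integer code for every class
print(dict(zip(encoded, le.classes_)))     # code -> original label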
|
1,627
|
<ASSISTANT_TASK:>
Python Code:
# Integer data
type( 17 )
# Floating-point data
type( 17.0 )
# A number inside a string
type( '17' )
count = 55
size = 42.0
print( count )
type( count )
# Operator: + (addition)
# Operands: 3 and 4
3 + 4
# Operator: - (subtraction)
# Operands: 3 and 4
3 - 4
# Operator: * (multiplication)
# Operands: 3 and 4
3 * 4
# Operator: / (division)
# Operands: 3 and 4
3 / 4
10 / 3
10 // 3
5 * 2
5 * 2.0
5.0 * 2
5.0 * 2.0
1.0 / ( 2 * 3.14159 )
# Can't divide a string by an integer
# Uncomment to demonstrate
# '13' / 42
'fizz' + 'buzz'
'la' * 3
# Poor example
# Calculate it
y = 2
x = 2 * 3.14 * y
# Good example
# Calculate circumference of a circle
pi = 3.14
radius = 2
circumference = 2 * pi * radius
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Variables
Step2: The print function displays the value of a variable
Step3: The type of a variable is the type of its value
Step4: Variable names and keywords
Step5: There is a special division operator // called floor division
Step6: If either operand is a float, the result is a float
Step7: Expressions
Step8: String operations
Step9: One exception is the + operator
Step10: Another exception is the * operator
Step11: Comments
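A recap sketch of the operator behaviors described above:
print(10 / 3)           # 3.333... -- / always returns a float
print(10 // 3)          # 3        -- floor division
print('la' * 3)         # 'lalala' -- * repeats a string
print('fizz' + 'buzz')  # 'fizzbuzz' -- + concatenates strings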
|
1,628
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install tensorflow==2.7.0 tensorflow-quantum
# Update package resources to account for version changes.
import importlib, pkg_resources
importlib.reload(pkg_resources)
import cirq
import sympy
import numpy as np
import tensorflow as tf
import tensorflow_quantum as tfq
# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
np.random.seed(1234)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
# Rescale the images from [0,255] to the [0.0,1.0] range.
x_train, x_test = x_train/255.0, x_test/255.0
print("Number of original training examples:", len(x_train))
print("Number of original test examples:", len(x_test))
def filter_03(x, y):
keep = (y == 0) | (y == 3)
x, y = x[keep], y[keep]
y = y == 0
return x,y
x_train, y_train = filter_03(x_train, y_train)
x_test, y_test = filter_03(x_test, y_test)
print("Number of filtered training examples:", len(x_train))
print("Number of filtered test examples:", len(x_test))
print(y_train[0])
plt.imshow(x_train[0, :, :])
plt.colorbar()
def truncate_x(x_train, x_test, n_components=10):
  """Perform PCA on image dataset keeping the top `n_components` components."""
n_points_train = tf.gather(tf.shape(x_train), 0)
n_points_test = tf.gather(tf.shape(x_test), 0)
# Flatten to 1D
x_train = tf.reshape(x_train, [n_points_train, -1])
x_test = tf.reshape(x_test, [n_points_test, -1])
# Normalize.
feature_mean = tf.reduce_mean(x_train, axis=0)
x_train_normalized = x_train - feature_mean
x_test_normalized = x_test - feature_mean
# Truncate.
e_values, e_vectors = tf.linalg.eigh(
tf.einsum('ji,jk->ik', x_train_normalized, x_train_normalized))
return tf.einsum('ij,jk->ik', x_train_normalized, e_vectors[:,-n_components:]), \
tf.einsum('ij,jk->ik', x_test_normalized, e_vectors[:, -n_components:])
DATASET_DIM = 10
x_train, x_test = truncate_x(x_train, x_test, n_components=DATASET_DIM)
print(f'New datapoint dimension:', len(x_train[0]))
N_TRAIN = 1000
N_TEST = 200
x_train, x_test = x_train[:N_TRAIN], x_test[:N_TEST]
y_train, y_test = y_train[:N_TRAIN], y_test[:N_TEST]
print("New number of training examples:", len(x_train))
print("New number of test examples:", len(x_test))
def single_qubit_wall(qubits, rotations):
    """Prepare a single qubit X,Y,Z rotation wall on `qubits`."""
wall_circuit = cirq.Circuit()
for i, qubit in enumerate(qubits):
for j, gate in enumerate([cirq.X, cirq.Y, cirq.Z]):
wall_circuit.append(gate(qubit) ** rotations[i][j])
return wall_circuit
SVGCircuit(single_qubit_wall(
cirq.GridQubit.rect(1,4), np.random.uniform(size=(4, 3))))
def v_theta(qubits):
    r"""Prepares a circuit that generates V(\theta)."""
ref_paulis = [
cirq.X(q0) * cirq.X(q1) + \
cirq.Y(q0) * cirq.Y(q1) + \
cirq.Z(q0) * cirq.Z(q1) for q0, q1 in zip(qubits, qubits[1:])
]
exp_symbols = list(sympy.symbols('ref_0:'+str(len(ref_paulis))))
return tfq.util.exponential(ref_paulis, exp_symbols), exp_symbols
test_circuit, test_symbols = v_theta(cirq.GridQubit.rect(1, 2))
print(f'Symbols found in circuit:{test_symbols}')
SVGCircuit(test_circuit)
def prepare_pqk_circuits(qubits, classical_source, n_trotter=10):
    """Prepare the pqk feature circuits around a dataset."""
n_qubits = len(qubits)
n_points = len(classical_source)
# Prepare random single qubit rotation wall.
random_rots = np.random.uniform(-2, 2, size=(n_qubits, 3))
initial_U = single_qubit_wall(qubits, random_rots)
# Prepare parametrized V
V_circuit, symbols = v_theta(qubits)
exp_circuit = cirq.Circuit(V_circuit for t in range(n_trotter))
# Convert to `tf.Tensor`
initial_U_tensor = tfq.convert_to_tensor([initial_U])
initial_U_splat = tf.tile(initial_U_tensor, [n_points])
full_circuits = tfq.layers.AddCircuit()(
initial_U_splat, append=exp_circuit)
# Replace placeholders in circuits with values from `classical_source`.
return tfq.resolve_parameters(
full_circuits, tf.convert_to_tensor([str(x) for x in symbols]),
tf.convert_to_tensor(classical_source*(n_qubits/3)/n_trotter))
qubits = cirq.GridQubit.rect(1, DATASET_DIM + 1)
q_x_train_circuits = prepare_pqk_circuits(qubits, x_train)
q_x_test_circuits = prepare_pqk_circuits(qubits, x_test)
def get_pqk_features(qubits, data_batch):
    """Get PQK features based on above construction."""
ops = [[cirq.X(q), cirq.Y(q), cirq.Z(q)] for q in qubits]
ops_tensor = tf.expand_dims(tf.reshape(tfq.convert_to_tensor(ops), -1), 0)
batch_dim = tf.gather(tf.shape(data_batch), 0)
ops_splat = tf.tile(ops_tensor, [batch_dim, 1])
exp_vals = tfq.layers.Expectation()(data_batch, operators=ops_splat)
rdm = tf.reshape(exp_vals, [batch_dim, len(qubits), -1])
return rdm
x_train_pqk = get_pqk_features(qubits, q_x_train_circuits)
x_test_pqk = get_pqk_features(qubits, q_x_test_circuits)
print('New PQK training dataset has shape:', x_train_pqk.shape)
print('New PQK testing dataset has shape:', x_test_pqk.shape)
def compute_kernel_matrix(vecs, gamma):
    """Computes the scaled squared distances d[i][j] = gamma' * ||vecs[i] - vecs[j]||**2 (note: the RBF exponential is not applied here; the function returns the kernel argument itself)."""
scaled_gamma = gamma / (
tf.cast(tf.gather(tf.shape(vecs), 1), tf.float32) * tf.math.reduce_std(vecs))
return scaled_gamma * tf.einsum('ijk->ij',(vecs[:,None,:] - vecs) ** 2)
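# Illustrative sanity check (the demo names are assumptions, not part of the
# tutorial): the result should be a square matrix with zeros on the diagonal.
_demo_vecs = tf.random.normal([4, 3])
_demo_km = compute_kernel_matrix(_demo_vecs, gamma=1.0)
print(_demo_km.shape)  # TensorShape([4, 4])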
def get_spectrum(datapoints, gamma=1.0):
    """Compute the eigenvalues and eigenvectors of the kernel of datapoints."""
KC_qs = compute_kernel_matrix(datapoints, gamma)
S, V = tf.linalg.eigh(KC_qs)
S = tf.math.abs(S)
return S, V
S_pqk, V_pqk = get_spectrum(
tf.reshape(tf.concat([x_train_pqk, x_test_pqk], 0), [-1, len(qubits) * 3]))
S_original, V_original = get_spectrum(
tf.cast(tf.concat([x_train, x_test], 0), tf.float32), gamma=0.005)
print('Eigenvectors of pqk kernel matrix:', V_pqk)
print('Eigenvectors of original kernel matrix:', V_original)
def get_stilted_dataset(S, V, S_2, V_2, lambdav=1.1):
    """Prepare new labels that maximize geometric distance between kernels."""
S_diag = tf.linalg.diag(S ** 0.5)
S_2_diag = tf.linalg.diag(S_2 / (S_2 + lambdav) ** 2)
scaling = S_diag @ tf.transpose(V) @ \
V_2 @ S_2_diag @ tf.transpose(V_2) @ \
V @ S_diag
# Generate new labels using the largest eigenvector.
_, vecs = tf.linalg.eig(scaling)
new_labels = tf.math.real(
tf.einsum('ij,j->i', tf.cast(V @ S_diag, tf.complex64), vecs[-1])).numpy()
# Create new labels and add some small amount of noise.
final_y = new_labels > np.median(new_labels)
noisy_y = (final_y ^ (np.random.uniform(size=final_y.shape) > 0.95))
return noisy_y
y_relabel = get_stilted_dataset(S_pqk, V_pqk, S_original, V_original)
y_train_new, y_test_new = y_relabel[:N_TRAIN], y_relabel[N_TRAIN:]
#docs_infra: no_execute
def create_pqk_model():
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(32, activation='sigmoid', input_shape=[len(qubits) * 3,]))
model.add(tf.keras.layers.Dense(16, activation='sigmoid'))
model.add(tf.keras.layers.Dense(1))
return model
pqk_model = create_pqk_model()
pqk_model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(learning_rate=0.003),
metrics=['accuracy'])
pqk_model.summary()
#docs_infra: no_execute
pqk_history = pqk_model.fit(tf.reshape(x_train_pqk, [N_TRAIN, -1]),
y_train_new,
batch_size=32,
epochs=1000,
verbose=0,
validation_data=(tf.reshape(x_test_pqk, [N_TEST, -1]), y_test_new))
#docs_infra: no_execute
def create_fair_classical_model():
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(32, activation='sigmoid', input_shape=[DATASET_DIM,]))
model.add(tf.keras.layers.Dense(16, activation='sigmoid'))
model.add(tf.keras.layers.Dense(1))
return model
model = create_fair_classical_model()
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(learning_rate=0.03),
metrics=['accuracy'])
model.summary()
#docs_infra: no_execute
classical_history = model.fit(x_train,
y_train_new,
batch_size=32,
epochs=1000,
verbose=0,
validation_data=(x_test, y_test_new))
#docs_infra: no_execute
plt.figure(figsize=(10,5))
plt.plot(classical_history.history['accuracy'], label='accuracy_classical')
plt.plot(classical_history.history['val_accuracy'], label='val_accuracy_classical')
plt.plot(pqk_history.history['accuracy'], label='accuracy_quantum')
plt.plot(pqk_history.history['val_accuracy'], label='val_accuracy_quantum')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Quantum data
Step2: 1. Data preparation
Step3: Filter the dataset to keep just the T-shirts/tops and dresses, remove the other classes. At the same time convert the label, y, to boolean
Step5: 1.2 Downscale the images
Step6: The last step is to reduce the size of the dataset to just 1000 training datapoints and 200 testing datapoints.
Step8: 2. Relabeling and computing PQK features
Step9: You can quickly verify this works by looking at the circuit
Step11: Next you can prepare $V(\hat{\theta})$ with the help of tfq.util.exponential which can exponentiate any commuting cirq.PauliSum objects
Step12: This circuit might be a little bit harder to verify by looking at, but you can still examine a two qubit case to see what is happening
Step14: Now you have all the building blocks you need to put your full encoding circuits together
Step15: Choose some qubits and prepare the data encoding circuits
Step17: Next, compute the PQK features based on the 1-RDM of the dataset circuits above and store the results in rdm, a tf.Tensor with shape [n_points, n_qubits, 3]. The entries in rdm[i][j][k] = $\langle \psi_i | OP^k_j | \psi_i \rangle$ where i indexes over datapoints, j indexes over qubits and k indexes over $\lbrace \hat{X}, \hat{Y}, \hat{Z} \rbrace$ .
Step20: 2.2 Re-labeling based on PQK features
Step22: Now you have everything you need to re-label the dataset! Consult the flowchart to better understand how to maximize performance separation when re-labeling the dataset
Step23: 3. Comparing models
Step24: 3.2 Create a classical model
Step25: 3.3 Compare performance
|
1,629
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import itertools
import matplotlib.pyplot as plt
import numpy as np
from sklearn import svm
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.metrics import confusion_matrix
from sklearn import metrics
pd.options.mode.chained_assignment = None
df = pd.read_csv( '../../data/dga-full.csv' )
# Filter to alexa and gameoverdga
df = df[df['dsrc'].isin(['alexa','gameoverdga'])]
df.dsrc.value_counts()
df['isMalicious'] = df['dsrc'].apply( lambda x: 0 if x == "alexa" else 1 )
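# Quick sanity check (illustrative, not in the original): the label counts
# should mirror the dsrc counts above.
df['isMalicious'].value_counts()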
train, test = train_test_split(df, test_size = 0.7)
features = ['length', 'dicts', 'entropy','numbers', 'ngram']
target = 'isMalicious'
#Create the Random Forest Classifier
random_forest_clf = RandomForestClassifier(n_estimators=10,
max_depth=None,
min_samples_split=2,
random_state=0)
random_forest_clf = random_forest_clf.fit( train[features], train[target])
#Next, create the SVM classifier
svm_classifier = svm.SVC()
svm_classifier = svm_classifier.fit(train[features], train[target])
scores = cross_val_score(random_forest_clf, train[features], train[target])
scores.mean()
test['predictions'] = random_forest_clf.predict( test[features] )
train['predictions'] = random_forest_clf.predict( train[features] )
test['svm-predictions'] = svm_classifier.predict( test[features])
train['svm-predictions'] = svm_classifier.predict( train[features])
test.head()
confusion_matrix( test['isMalicious'], test['predictions'])
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
    """This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Compute confusion matrix
cnf_matrix = confusion_matrix( test['isMalicious'], test['predictions'])
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['Not Malicious', 'Malicious'],
title='RF Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['Not Malicious', 'Malicious'], normalize=True,
title='RF Normalized confusion matrix')
plt.show()
# Compute confusion matrix
svm_cnf_matrix = confusion_matrix( test['isMalicious'], test['svm-predictions'])
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(svm_cnf_matrix, classes=['Not Malicious', 'Malicious'],
title='SVM Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure()
plot_confusion_matrix(svm_cnf_matrix, classes=['Not Malicious', 'Malicious'], normalize=True,
title='SVM Normalized confusion matrix')
plt.show()
importances = random_forest_clf.feature_importances_
importances
std = np.std([random_forest_clf.feature_importances_ for tree in random_forest_clf.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Feature ranking:")
for f in range(test[features].shape[1]):
print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]]))
# Plot the feature importances of the forest
plt.figure()
plt.title("Feature importances")
plt.bar(range(test[features].shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(test[features].shape[1]), indices)
plt.xlim([-1, test[features].shape[1]])
plt.show()
pscore = metrics.accuracy_score(test['isMalicious'], test['predictions'])
pscore_train = metrics.accuracy_score(train['isMalicious'], train['predictions'])
print( metrics.classification_report(test['isMalicious'], test['predictions'], target_names=['Not Malicious', 'Malicious'] ) )  # label 0 = alexa (not malicious), label 1 = malicious
svm_pscore = metrics.accuracy_score(test['isMalicious'], test['svm-predictions'])
svm_pscore_train = metrics.accuracy_score(train['isMalicious'], train['svm-predictions'])
print( metrics.classification_report(test['isMalicious'], test['svm-predictions'], target_names=['Not Malicious', 'Malicious'] ) )  # same label ordering as above
print( svm_pscore, svm_pscore_train)
print( pscore, pscore_train)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the data
Step2: Add a Target Column
Step3: Perform the Train/Test Split
Step4: Create the Classifiers
Step5: Comparing the Classifiers
Step6: We'll need to get the predictions from both classifiers, so we add columns to the test and training sets for the predictions.
Step7: Confusion Matrix
Step9: The code below generates a nicer presentation of the confusion matrix for the random forest classifier.
Step10: And again for the SVM classifier.
Step11: Feature Importance
Step12: You can also visualize this with the following code from
Step13: You can calculate the accuracy with the metrics.accuracy_score() method, and finally, there is the metrics.classification_report() which will calculate all the metrics except accuracy at once.
|
1,630
|
<ASSISTANT_TASK:>
Python Code:
# Setup your dependencies
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
USER_FLAG = ""
# Google Cloud Notebook requires dependencies to be installed with '--user'
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
# Upgrade the specified package to the newest available version
! pip install {USER_FLAG} --upgrade google-cloud-aiplatform
# Upgrade the specified package to the newest available version
! pip install {USER_FLAG} --upgrade google-cloud-storage
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "qwiklabs-gcp-04-c846b6079446" # @param {type:"string"}
# Import necessary libraries
from datetime import datetime
# Use a timestamp to ensure unique resources
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
BUCKET_NAME = "gs://qwiklabs-gcp-04-c846b6079446" # @param {type:"string"}
REGION = "us-central1" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://qwiklabs-gcp-04-c846b6079446":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
! gsutil mb -l $REGION $BUCKET_NAME
! gsutil ls -al $BUCKET_NAME
IMPORT_FILE = "petfinder-tabular-classification_toy.csv"
! gsutil cp gs://cloud-training/mlongcp/v3.0_MLonGC/toy_data/{IMPORT_FILE} {BUCKET_NAME}/data/
gcs_source = f"{BUCKET_NAME}/data/{IMPORT_FILE}"
# Import necessary libraries
import os
from google.cloud import aiplatform
aiplatform.init(project=PROJECT_ID, location=REGION)
ds = dataset = aiplatform.TabularDataset.create(
display_name="petfinder-tabular-dataset",
gcs_source=gcs_source,
)
ds.resource_name
# TODO 1
# Constructs a AutoML Tabular Training Job
job = aiplatform.AutoMLTabularTrainingJob(
display_name="train-petfinder-automl-1",
optimization_prediction_type="classification",
column_transformations=[
{"categorical": {"column_name": "Type"}},
{"numeric": {"column_name": "Age"}},
{"categorical": {"column_name": "Breed1"}},
{"categorical": {"column_name": "Color1"}},
{"categorical": {"column_name": "Color2"}},
{"categorical": {"column_name": "MaturitySize"}},
{"categorical": {"column_name": "FurLength"}},
{"categorical": {"column_name": "Vaccinated"}},
{"categorical": {"column_name": "Sterilized"}},
{"categorical": {"column_name": "Health"}},
{"numeric": {"column_name": "Fee"}},
{"numeric": {"column_name": "PhotoAmt"}},
],
)
# TODO 2a
# Create and train the model object
# This can take around two and a half hours to run
model = job.run(
dataset=ds,
target_column="Adopted",
# TODO 2b
# Define training, validation and test fraction for training
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
model_display_name="adopted-prediction-model",
disable_early_stopping=False,
)
# TODO 3
# Deploy the model resource to the serving endpoint resource
endpoint = model.deploy(
machine_type="n1-standard-4",
)
# TODO 4
# Make a prediction using the sample values
prediction = endpoint.predict(
[
{
"Type": "Cat",
"Age": "3",
"Breed1": "Tabby",
"Gender": "Male",
"Color1": "Black",
"Color2": "White",
"MaturitySize": "Small",
"FurLength": "Short",
"Vaccinated": "No",
"Sterilized": "No",
"Health": "Healthy",
"Fee": "100",
"PhotoAmt": "2",
}
]
)
print(prediction)
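# Illustrative post-processing. The 'classes'/'scores' field names are an
# assumption based on typical AutoML tabular classification responses.
pred = prediction.predictions[0]
best_class, best_score = max(zip(pred["classes"], pred["scores"]),
                             key=lambda cs: cs[1])
print("Predicted:", best_class, "with score", best_score)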
# TODO 5
# Undeploy the model resource
endpoint.undeploy(deployed_model_id=prediction.deployed_model_id)
delete_training_job = True
delete_model = True
delete_endpoint = True
# Warning: Setting this to true will delete everything in your bucket
delete_bucket = False
# Delete the training job
job.delete()
# Delete the model
model.delete()
# Delete the endpoint
endpoint.delete()
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil -m rm -r $BUCKET_NAME
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install the latest version of the Vertex AI client library.
Step2: Install the Cloud Storage library
Step3: Restart the kernel
Step4: Set your project ID
Step5: Otherwise, set your project ID here.
Step6: Timestamp
Step7: Create a Cloud Storage bucket
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Copy dataset into your Cloud Storage bucket
Step11: Import Vertex SDK for Python
Step12: Tutorial
Step13: Launch a Training Job to Create a Model
Step14: Deploy your model
Step15: Predict on the endpoint
Step16: Undeploy the model
Step17: Cleaning up
|
1,631
|
<ASSISTANT_TASK:>
Python Code:
from prody import *
from pylab import *
%matplotlib inline
structure = parsePDB('mdm2.pdb')
structure
ensemble = parseDCD('mdm2.dcd')
ensemble.setCoords(structure)
ensemble.setAtoms(structure.calpha)
ensemble
ensemble.superpose()
eda_ensemble = EDA('MDM2 Ensemble')
eda_ensemble.buildCovariance( ensemble )
eda_ensemble.calcModes()
eda_ensemble
dcd = DCDFile('mdm2.dcd')
dcd.link(structure)
dcd.setAtoms(structure.calpha)
dcd
eda_trajectory = EDA('MDM2 Trajectory')
eda_trajectory.buildCovariance( dcd )
eda_trajectory.calcModes()
eda_trajectory
printOverlapTable(eda_ensemble[:3], eda_trajectory[:3])
trajectory = Trajectory('mdm2.dcd')
trajectory.addFile('mdm2sim2.dcd')
trajectory
trajectory.link(structure)
trajectory.setCoords(structure)
trajectory.setAtoms(structure.calpha)
trajectory
eda = EDA('mdm2')
eda.buildCovariance( trajectory )
eda.calcModes()
eda
for mode in eda_trajectory[:4]:
print(calcFractVariance(mode).round(2))
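# Illustrative: cumulative fraction of variance captured by the first four
# modes (sums the per-mode values printed above).
print(sum(calcFractVariance(mode) for mode in eda_trajectory[:4]).round(2))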
mdm2ca_sim1 = trajectory[:500]
mdm2ca_sim1.superpose()
mdm2ca_sim2 = trajectory[500:]
mdm2ca_sim2.superpose()
showProjection(mdm2ca_sim1, eda[:3], color='red', marker='.');
showProjection(mdm2ca_sim2, eda[:3], color='blue', marker='.');
showProjection(mdm2ca_sim1[0], eda[:3], color='red', marker='o', ms=12);
showProjection(mdm2ca_sim2[0], eda[:3], color='blue', marker='o', ms=12);
showProjection(mdm2ca_sim1[-1], eda[:3], color='red', marker='s', ms=12);
showProjection(mdm2ca_sim2[-1], eda[:3], color='blue', marker='s', ms=12);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Parse reference structure
Step2: EDA calculations
Step3: If you are analyzing a large trajectory, you can pass the trajectory instance to the PCA.buildCovariance() method as follows
Step4: Comparison
Step5: Multiple files
Step6: Analysis
Step7: Plotting
|
1,632
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from scipy.interpolate import interp1d
with np.load('trajectory.npz') as work:
t=work['t']
x=work['x']
y=work['y']
assert isinstance(x, np.ndarray) and len(x)==40
assert isinstance(y, np.ndarray) and len(y)==40
assert isinstance(t, np.ndarray) and len(t)==40
newt = np.linspace(t.min(), t.max(), 200)
interp_x = interp1d(t, x, kind='cubic')
newx = interp_x(newt)
interp_y = interp1d(t, y, kind='cubic')
newy = interp_y(newt)
assert newt[0]==t.min()
assert newt[-1]==t.max()
assert len(newt)==200
assert len(newx)==200
assert len(newy)==200
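# Illustrative extension (not required by the assertions above): approximate
# the speed along the interpolated path with finite differences.
speed = np.hypot(np.gradient(newx, newt), np.gradient(newy, newt))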
plt.plot(x, y, marker='o', linestyle='', label='original points')
plt.plot(newx,newy,marker='.',label='Parameterization')
plt.legend();
assert True # leave this to grade the trajectory plot
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2D trajectory interpolation
Step2: Use these arrays to create interpolated functions $x(t)$ and $y(t)$. Then use those functions to create the following arrays
Step3: Make a parametric plot of ${x(t),y(t)}$ that shows the interpolated values and the original points
|
1,633
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'gfdl-cm4', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
|
1,634
|
<ASSISTANT_TASK:>
Python Code:
import requests
import pandas as pd
import matplotlib.pylab as plt
import seaborn as sns
import numpy as np
import scipy.stats as ss
# For inline pictures
%matplotlib inline
sns.set_context('notebook')
# For nicer output of Pandas dataframes
pd.set_option('float_format', '{:8.2f}'.format)
np.set_printoptions(precision = 3, suppress = True)
url = 'http://www.statsci.org/data/oz/rabbit.txt'
response = requests.get(url)
path = '../data/rabbit.txt'
with open(path, "wb") as file:
file.write(response.content)
df = pd.read_csv('../data/rabbit.txt', sep='\t')
print(df.head())
X, Y = np.array(df['Age']), np.array(df['Lens'])
plt.scatter(X, Y)
plt.xlabel('X')
plt.ylabel('Y')
plt.show()
N = 100
U = np.linspace(X.min(), X.max(), N)
fxhat1 = ss.gaussian_kde(X, 'silverman')
fxhat2 = ss.gaussian_kde(X, .2)
plt.plot(U, fxhat1(U), label='Silverman')
plt.plot(U, fxhat2(U), label='Undersmoothed')
plt.xlabel('$x$')
plt.ylabel('$\hat{f}(x)$')
plt.legend()
plt.show()
def indicator(x):
    return np.asarray((np.abs(x) <= 1.) & (np.abs(x) >= 0.), dtype=float)  # np.asfarray was removed in NumPy 2.0
def kernel(x, ktype = 'Truncated'):
if ktype == 'Truncated':
return .5 * indicator(x)
if ktype == 'Epanechnikov':
return 3./4. * (1 - x**2) * indicator(x)
if ktype == 'Biweight':
return 15./16. * (1 - x**2)**2 * indicator(x)
if ktype == 'Triweight':
return 35./36. * (1 - x**2)**3 * indicator(x)
if ktype == 'Gaussian':
return 1./np.sqrt(2. * np.pi) * np.exp(- .5 * x**2)
def roughness(ktype = 'Truncated'):
if ktype == 'Truncated':
return 1./2.
if ktype == 'Epanechnikov':
return 3./5.
if ktype == 'Biweight':
return 5./7.
if ktype == 'Triweight':
return 350./429.
if ktype == 'Gaussian':
return np.pi**(-.5)/2.
def sigmak(ktype = 'Truncated'):
if ktype == 'Truncated':
return 1./3.
if ktype == 'Epanechnikov':
return 1./5.
if ktype == 'Biweight':
return 1./7.
if ktype == 'Triweight':
return 1./9.
if ktype == 'Gaussian':
return 1.
x = np.linspace(0., 2., 100)
names = ['Truncated', 'Epanechnikov', 'Biweight', 'Triweight', 'Gaussian']
for name in names:
plt.plot(x, kernel(x, ktype = name), label = name, lw = 2)
plt.legend()
plt.show()
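# Illustrative check: each kernel should integrate to (approximately) one.
# A wide grid covers both the compact supports and the Gaussian tails.
grid = np.linspace(-5., 5., 2001)
dx = grid[1] - grid[0]
for name in names:
    print(name, (kernel(grid, ktype=name).sum() * dx).round(3))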
def weight(U, X, h=.1, ktype='Truncated'):
# X - N-array
# U - M-array
# XmU - M*N-array
XmU = (X - np.atleast_2d(U).T) / h
# K - M*N-array
K = kernel(XmU, ktype)
# K.sum(1) - M-array
# K.T - N*M-array
# K.T / K.sum(1) - N*M-array
return (K.T / K.sum(1)).T
def NW(U, X, Y, h=.1, ktype='Truncated'):
return np.dot(weight(U, X, h, ktype), Y)
def LL(U, X, Y, h=.1, ktype='Truncated'):
# X - N-array
# U - M-array
# K - M*N-array
W = weight(U, X, h, ktype)
alpha = np.empty(U.shape[0])
beta = np.empty(U.shape[0])
for i in range(U.shape[0]):
# N*N-array
K = np.diag(W[i])
# N-array
Z1 = (X - U[i]) / h
Z0 = np.ones(Z1.shape)
# 2*N-array
Z = np.vstack([Z0, Z1]).T
# 2*2-array
A = np.dot(Z.T, np.dot(K, Z))
# 2-array
B = np.dot(Z.T, np.dot(K, Y))
# 2-array
coef = np.dot(np.linalg.inv(A), B)
alpha[i] = coef[0]
beta[i] = coef[1]
return alpha, beta
N = 100
U = np.linspace(X.min(), X.max(), N)
h_silv = 1.06 * np.std(X) * N**(-1/5)
print('Silverman\'s Rule-of-Thumb = %.2f' % h_silv)
# Nadaraya-Watson estimator
Yhat_NW = NW(U, X, Y, h=h_silv, ktype='Gaussian')
# Local Linear estimator
Yhat_LL, dYhat_LL = LL(U, X, Y, h=h_silv, ktype='Gaussian')
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(10, 6), sharex=True)
axes[0].plot(U, Yhat_NW, lw=2, color='red', label='NW')
axes[0].plot(U, Yhat_LL, lw=2, color='blue', label='LL')
axes[0].scatter(X, Y, s=20, lw=.5, facecolor='none', label='Realized')
axes[0].set_ylabel('Y')
axes[0].legend(loc='upper left')
axes[0].set_title('Conditional expectation')
axes[1].plot(U, dYhat_LL)
axes[1].set_title('Regression derivative')
axes[1].set_xlabel('X')
axes[1].set_ylabel('dm(x)/dx')
plt.show()
def error(Y, X, h, ktype):
N = len(Y)
ehat = np.empty(N)
for i in range(N):
ehat[i] = Y[i] - NW(X[i], np.delete(X, i), np.delete(Y, i), h=h, ktype=ktype)
return ehat
h = 30
ktype = 'Gaussian'
ehat = error(Y, X, h, ktype)
sigma2hat = NW(U, X, ehat**2, h=h, ktype=ktype)
fxhat = ss.gaussian_kde(X)(U)
V2hat = roughness(ktype) * sigma2hat / fxhat / N / h
shat_NW = V2hat**.5
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(10, 8), sharex=True)
axes[0].scatter(X, Y, s=15, lw=.5, facecolor='none', label='Realized')
axes[0].fill_between(U, Yhat_NW - 2*shat_NW, Yhat_NW + 2*shat_NW,
lw=0, color='red', alpha=.2, label='+2s')
axes[0].plot(U, Yhat_NW, lw=2, color='red', label='Fitted')
axes[0].set_ylabel('Y')
axes[0].legend(loc='best')
axes[0].set_title('Data')
axes[1].plot(U, sigma2hat**.5, lw=2, color='blue')
axes[1].set_xlabel('X')
axes[1].set_ylabel('$\sigma(X)$')
axes[1].set_title('Conditional variance')
plt.show()
ktype = 'Gaussian'
H = np.linspace(1, 30, 100)
CV = np.array([])
for h in H:
ehat = error(Y, X, h, ktype)
CV = np.append(CV, np.nanmean(ehat**2))
h_CV = H[CV.argmin()]
plt.plot(H, CV)
plt.scatter(h_CV, CV.min(), facecolor='none', lw=2, s=100)
plt.xlabel('Bandwidth, h')
plt.ylabel('cross-validation, CV')
plt.show()
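# Illustrative: refit the Nadaraya-Watson estimator at the cross-validated
# bandwidth found above.
Yhat_CV = NW(U, X, Y, h=h_CV, ktype=ktype)
plt.scatter(X, Y, s=15, lw=.5, facecolor='none')
plt.plot(U, Yhat_CV, lw=2, color='green', label='NW at $h_{CV}$')
plt.legend()
plt.show()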
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import the data
Step2: Plot the data
Step3: Estimate the density
Step4: Kernels
Step5: Nadaraya-Watson (NW) or local constant estimator
Step6: Local Linear (LL) estimator
Step7: Estimate conditional expectation
Step8: Estimate conditional variance
Step9: Use errors to estimate the variance
Step10: Plot the result
Step11: Bandwidth selection
|
1,635
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
benchmark_data = pd.read_csv('sklearn-benchmark-data.tsv.gz', sep='\t')
benchmark_data.head()
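# The raw TSV has no header row, so the first data row was consumed as the
# column names; rename those values to meaningful labels.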
benchmark_data.rename(columns={'heart-c':'Dataset_Name',
'GradientBoostingClassifier':'Method_Name',
'loss=exponential,learning_rate=10.0,n_estimators=100,max_depth=3,max_features=sqrt,warm_start=True':
'Parameters',
'0.723684210526':'Test_Score'},inplace=True)
methodNames_list = benchmark_data['Method_Name'].unique().tolist()
#methodNames_list
methodWiseData = {}
for name in methodNames_list:
methodWiseData[name] = benchmark_data[(benchmark_data.Method_Name == name)]
#for i in names_list:
# print(d[i])
import os
if not os.path.isdir('newBenchmark_results'):
os.mkdir('newBenchmark_results')
gb = methodWiseData['GradientBoostingClassifier']
gb.to_pickle('newBenchmark_results/GradientBoostingClassifier_results.tsv.gz')
method_data = pd.read_pickle('newBenchmark_results/GradientBoostingClassifier_results.tsv.gz')  # must match the path written above
method_data
method_param = pd.DataFrame(method_data.Parameters.str.split(',').tolist(),
columns = ['Param1','Param2','Param3'])
method_param
method_data1 = method_data.drop(columns='Parameters')  # delete the Parameters column from the original dataframe
idx = method_param.index.to_numpy()  # get the index of the parameter dataframe (Index.get_values() was removed in pandas 1.0)
#idx
method_data2 = method_data1.set_index(idx) #set the index of method dataframe same as parameter dataframe
#kneighbor_data2
result = pd.concat([method_data2, method_param], axis = 1) #finally add the parameter columns to get the result (desired format)
#result
import os
if not os.path.isdir('HPCC_benchmark_results'):
os.mkdir('HPCC_benchmark_results')
result.to_pickle('HPCC_benchmark_results/GradientBoostingClassifier-hpcc_results.tsv.gz')
data = pd.read_pickle('HPCC_benchmark_results/GradientBoostingClassifier-hpcc_results.tsv.gz')
data
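# Illustrative summary (column names follow the rename performed earlier):
# distribution of test scores for each parameter combination.
data.groupby(['Param1', 'Param2', 'Param3'])['Test_Score'].describe()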
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get all the method names
Step2: Store the data method wise separately
Step3: Save the data method wise to a folder in tsv.gz format
Step4: Split the parameters into different columns
Step5: Save this result in the HPCC_benchmark_results folder
|
1,636
|
<ASSISTANT_TASK:>
Python Code:
import numpy
from matplotlib import pyplot
%matplotlib inline
### generate some random data
xdata = numpy.arange(15)
ydata = numpy.random.randn(15) + xdata
### initialize the "figure" and "axes" objects
fig, ax = pyplot.subplots()
points_plot = ax.plot(xdata, ydata, marker='o')
### initialize the figure
fig, ax = pyplot.subplots()
points_plot = ax.plot(xdata, ydata, ls='', marker='o')
### initialize the figure
fig, ax = pyplot.subplots()
points_plot = ax.plot(xdata, ydata, ls='', marker='o', ms=15)
### initialize the figure
fig, ax = pyplot.subplots()
points_plot = ax.plot(xdata, ydata, ls='', ms=8, marker='o')
#points_plot = ax.plot(xdata, ydata, ls='', ms=8, marker='s')
#points_plot = ax.plot(xdata, ydata, ls='', ms=8, marker='D')
#points_plot = ax.plot(xdata, ydata, ls='', ms=8, marker='^')
#points_plot = ax.plot(xdata, ydata, ls='', ms=8, marker='>')
#points_plot = ax.plot(xdata, ydata, ls='', ms=8, marker='<')
#points_plot = ax.plot(xdata, ydata, ls='', ms=8, marker='v')
### generate some random data
xdata2 = numpy.arange(15)
ydata2 = numpy.random.randn(15)
yerrors = numpy.random.randn(15)
### initialize the figure
fig, ax = pyplot.subplots()
ax.errorbar(xdata2, ydata2, yerr=yerrors)
### initialize the figure
fig, ax = pyplot.subplots()
eb = ax.errorbar(xdata2, ydata2, yerr=yerrors, ls='', # no lines connecting points
marker='*', # circular plot symbols
ms=20, # marker size
mfc='r', # marker face color
mew=2, # marker edge width
mec='k', # marker edge color
elinewidth=2, # error line width
ecolor='gray', # error color
capsize=6) # error hat size
### also try mfc="none"
pyplot.errorbar?
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Ok you got me, the plot function still generates a line by default... but we can turn it off
Step2: Markersize
Step3: Symbol
Step4: Errorbars
|
1,637
|
<ASSISTANT_TASK:>
Python Code:
!pip install -q -U google-cloud-bigquery pyarrow
import os
from google.cloud import bigquery
PROJECT_ID = "yourProject" # Change to your project.
BUCKET = "yourBucketName" # Change to the bucket you created.
SQL_SCRIPTS_DIR = "sql_scripts"
BQ_DATASET_NAME = "recommendations"
!gcloud config set project $PROJECT_ID
try:
from google.colab import auth
auth.authenticate_user()
print("Colab user is authenticated.")
except:
pass
%%bigquery --project $PROJECT_ID
CREATE TABLE IF NOT EXISTS recommendations.item_cooc
AS SELECT 0 AS item1_Id, 0 AS item2_Id, 0 AS cooc, 0 AS pmi;
%%bigquery --project $PROJECT_ID
CREATE MODEL IF NOT EXISTS recommendations.item_matching_model
OPTIONS(
MODEL_TYPE='matrix_factorization',
USER_COL='item1_Id',
ITEM_COL='item2_Id',
RATING_COL='score'
)
AS
SELECT 0 AS item1_Id, 0 AS item2_Id, 0 AS score;
client = bigquery.Client(project=PROJECT_ID)
sql_scripts = dict()
for script_file in [file for file in os.listdir(SQL_SCRIPTS_DIR) if ".sql" in file]:
script_file_path = os.path.join(SQL_SCRIPTS_DIR, script_file)
sql_script = open(script_file_path, "r").read()
sql_script = sql_script.replace("@DATASET_NAME", BQ_DATASET_NAME)
sql_scripts[script_file] = sql_script
for script_file in sql_scripts:
print(f"Executing {script_file} script...")
query = sql_scripts[script_file]
query_job = client.query(query)
result = query_job.result()
print("Done.")
query = f"SELECT * FROM {BQ_DATASET_NAME}.INFORMATION_SCHEMA.ROUTINES;"
query_job = client.query(query)
query_job.result().to_dataframe()
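# Illustrative invocation (procedure name and arguments are assumptions;
# substitute the routine names listed above from your SQL scripts):
# client.query(f"CALL {BQ_DATASET_NAME}.sp_ComputePMI(600, 1.0);").result()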
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import libraries
Step2: Configure GCP environment settings
Step3: Authenticate your GCP account
Step4: Create the stored procedure dependencies
Step5: Create the stored procedures
Step6: List the stored procedures
|
1,638
|
<ASSISTANT_TASK:>
Python Code:
# setup
import numpy as np
import sympy as sp
import scipy
from pprint import pprint
sp.init_printing(use_latex='mathjax')
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (12, 8) # (width, height)
plt.rcParams['font.size'] = 14
plt.rcParams['legend.fontsize'] = 16
from matplotlib import patches
#get_ipython().magic('matplotlib') # seperate window
get_ipython().magic('matplotlib inline') # inline plotting
pwd
import mechpy
import os ; os.chdir('..') # change to root from the examples folder
from mechpy.design import fastened_joint
## Bolted Joint Example
# fastener Location
fx = [0,1,2,3,0,1,2,3]
fy = [0,0,0,0,1,1,1,1]
# Force magnitude(x,y)
P = [-300,-500]
# Force location
l = [2,1]
df = fastened_joint(fx, fy, P, l)
df.plot(kind='scatter', x='x', y='y');
#df.plot(style='o', x='x', y='y')
plt.plot(df.xbar[0],df.ybar[0],'*')
df
#ax = plt.gca()
#ax.arrow(l[0], l[1], Pnorm[0],Pnorm[1], head_width=0.05, head_length=0.1, fc='k', ec='k')
#x.arrow(xbar, ybar, Pnorm[0],0, head_width=0.05, head_length=0.1, fc='k', ec='k')
#ax.arrow(xbar, ybar, 0,Pnorm[1], head_width=0.05, head_length=0.1, fc='k', ec='k')
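# Hedged cross-check (not in the original example): for fasteners of equal
# area, the joint centroid (xbar, ybar) returned by fastened_joint should be
# the plain average of the fastener coordinates.
print(np.mean(fx), np.mean(fy)) # expect (1.5, 0.5) for this 2x4 pattern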
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Materials
|
1,639
|
<ASSISTANT_TASK:>
Python Code:
import theano
import theano.tensor as T
k = T.iscalar('K')
a = T.vector('A')
i = T.vector('I')  # initial accumulator -- a distinct symbolic vector, not another 'A'
result, updates = theano.scan(fn=lambda prior, a_: prior * a_,  # multiply the running product by A each step
                              outputs_info=i,
                              non_sequences=a,
                              n_steps=k)
print(result)
print(updates)
final_result = result[-1]
power = theano.function(inputs=[a,k,i],outputs=[final_result],updates=updates)
print(power(range(10),2,[10]*10))
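# Hedged sanity check (not in the original cell): the scan performs K
# element-wise multiplications starting from the initial vector, so the
# Theano result should equal init * a**k computed with plain NumPy.
import numpy as np
a_np = np.arange(10, dtype='float64')
print(np.full(10, 10.0) * a_np ** 2)  # should match power(range(10), 2, [10]*10)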
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Write a function that computes A raised to the K-th power
Step2: result is captured as a tensor
|
1,640
|
<ASSISTANT_TASK:>
Python Code:
__author__ = 'Matt Wilber'
import sys
print(sys.version)
from abc import abstractmethod, ABC
class AbstractPouncer(ABC):
@abstractmethod
def pounce(self):
pass
class Fox(AbstractPouncer):
def pounce(self):
self.crouch()
self.leap()
self.attack()
def crouch(self):
print('Crouch crouch crouch...')
def leap(self):
print('Wheeee!')
def attack(self):
print('I GOTCHU 🦊')
try:
AbstractPouncer()
except TypeError as e:
print('TypeError:', e)
fox = Fox()
fox.pounce()
class cachedproperty:
"""The ``cachedproperty`` is used similar to :class:`property`, except
that the wrapped method is only called once. This is commonly used
to implement lazy attributes.
After the property has been accessed, the value is stored on the
instance itself, using the same name as the cachedproperty. This
allows the cache to be cleared with :func:`delattr`, or through
manipulating the object's ``__dict__``.
Copied from https://github.com/mahmoud/boltons/blob/master/boltons/cacheutils.py on 9/17/18
"""
def __init__(self, func):
self.__doc__ = getattr(func, '__doc__')
self.func = func
def __get__(self, obj, objtype=None):
if obj is None:
return self
value = obj.__dict__[self.func.__name__] = self.func(obj)
return value
def __repr__(self):
cn = self.__class__.__name__
return '<%s func=%s>' % (cn, self.func)
from datetime import datetime
from typing import Iterator
class ProcessTimeProvider(ABC):
"""Abstract interface for providing times for data pipelines + other processes"""
@cachedproperty
@abstractmethod
def process_time(self) -> datetime:
pass
class LocalProcessTimeProvider(ProcessTimeProvider):
@cachedproperty
def process_time(self) -> datetime:
return datetime.now()
class UTCProcessTimeProvider(ProcessTimeProvider):
@cachedproperty
def process_time(self) -> datetime:
return datetime.utcnow()
class FoxMention:
def __init__(self, offset: int, creation_time: datetime):
self.offset = offset
self.creation_time = creation_time
def __repr__(self):
return '<FoxMention(offset={}, creation_time={})>'.format(self.offset, self.creation_time)
class FoxExtractionProcess(UTCProcessTimeProvider):
"""Counts 🦊s!"""
def extract_foxes(self, text) -> Iterator[FoxMention]:
for offset, character in enumerate(text):
if character == '🦊':
yield FoxMention(
offset=offset,
creation_time=self.process_time
)
fox_extractor = FoxExtractionProcess()
text = 'The quick brown 🦊 jumps over the lazy 🦊'
for fox_mention in fox_extractor.extract_foxes(text):
print(fox_mention)
ProcessTimeProvider()
print(getattr(AbstractPouncer.pounce, '__isabstractmethod__', None))
print(getattr(ProcessTimeProvider.process_time, '__isabstractmethod__', None))
class cachedproperty:
def __init__(self, func):
self.__doc__ = getattr(func, '__doc__')
self.func = func
def __get__(self, obj, objtype=None):
if obj is None:
return self
value = obj.__dict__[self.func.__name__] = self.func(obj)
return value
def __repr__(self):
cn = self.__class__.__name__
return '<%s func=%s>' % (cn, self.func)
print(ProcessTimeProvider.process_time)
print(ProcessTimeProvider.process_time.__dict__)
print(ProcessTimeProvider.process_time.func.__dict__)
class cachedproperty:
def __init__(self, func):
self.__doc__ = getattr(func, '__doc__')
self.__isabstractmethod__ = getattr(func, '__isabstractmethod__', False) # The fix! (getattr keeps plain, non-abstract functions working)
self.func = func
def __get__(self, obj, objtype=None):
if obj is None:
return self
value = obj.__dict__[self.func.__name__] = self.func(obj)
return value
def __repr__(self):
cn = self.__class__.__name__
return '<%s func=%s>' % (cn, self.func)
class AbstractTimeProvider(ABC):
@cachedproperty
@abstractmethod
def time(self):
pass
try:
AbstractTimeProvider()
except TypeError as e:
print('TypeError:', e)
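# A hedged follow-up (not in the original post): with the fixed cachedproperty,
# a concrete subclass instantiates normally and the property is computed once.
class ConcreteTimeProvider(AbstractTimeProvider):
    @cachedproperty
    def time(self):
        return datetime.utcnow()

provider = ConcreteTimeProvider()
print(provider.time is provider.time)  # True: the second access hits the cached value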
from functools import wraps
def bad_decorator(func):
def wrapper(num):
return func(num)
return wrapper
def good_decorator(func):
@wraps(func)
def wrapper(num):
return func(num)
return wrapper
class BadTwoAdder(ABC):
@bad_decorator
@abstractmethod
def bad_add_two(num):
"""Add two to a number"""
return 2 + num
class GoodTwoAdder(ABC):
@good_decorator
@abstractmethod
def good_add_two(num):
"""Add two to a number"""
return 2 + num
print("Badly wrapped name:", BadTwoAdder.bad_add_two.__name__)
print("Badly wrapped docstring:", BadTwoAdder.bad_add_two.__doc__)
print("Badly wrapped __isabstractmethod__:", getattr(BadTwoAdder.bad_add_two, '__isabstractmethod__', None))
print()
print("Well wrapped name:", GoodTwoAdder.good_add_two.__name__)
print("Well wrapped docstring:", GoodTwoAdder.good_add_two.__doc__)
print("Well wrapped __isabstractmethod__:", getattr(GoodTwoAdder.good_add_two, '__isabstractmethod__', None))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Overview
Step3: Intro to @cachedproperty
Step5: Example of using both
Step7: For the sake of completeness, let's see an example of how these might work. The code below counts the # of times the 🦊 character appears in a text.
Step8: This use case for @abstractmethod, combined with @cachedproperty, is great! I like to use them in combination all the time. But here's the issue
Step9: Wait – isn't ProcessTimeProvider an abstract class that hasn't had all its methods implemented? As we learned above, this should throw a TypeError when we try to instantiate the abstract class. This was baffling to me at first, and I had to understand a little more about how Python decorators and abstract methods are implemented to fix it.
Step10: We're onto something! In AbstractPouncer.pounce, which didn't have a @cachedproperty annotation, we see __isabstractmethod__ is set, as expected. So what's happening in ProcessTimeProvider.process_time? It must be related to @cachedproperty. Let's look at the implementation again.
Step11: So if __isabstractmethod__ isn't in ProcessTimeProvider.process_time, what happened to it? Let's inspect the process_time method a little more.
Step12: It looks like the cachedproperty decorator has changed the top-level process_time into a cachedproperty object, which contains a func attribute that is the original process_time object! Is that where __isabstractmethod__ could be hiding?
Step13: Aha! So the __isabstractmethod__ hasn't disappeared at all! It's just been wrapped by @cachedproperty into the func attribute of the method. But that's not where Python 3 expects it to be. That should be an easy fix.
Step16: Voilà! Now we can declare abstract cached properties with all the benefits of Python's abc module. As noted above, this is now fixed in the newest version of boltons.
Step17: Both bad_decorator and good_decorator above make no real modifications to the functions they wrap, but the way they wrap is different. Only when using @wraps are important function attributes maintained.
|
1,641
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'sandbox-1', 'toplevel')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
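# Example of a completed authorship cell (the name and email below are
# illustrative placeholders only, not real document authors):
# DOC.set_author("Jane Doe", "jane.doe@example.org")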
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Flux Correction
Step7: 3. Key Properties --> Genealogy
Step8: 3.2. CMIP3 Parent
Step9: 3.3. CMIP5 Parent
Step10: 3.4. Previous Name
Step11: 4. Key Properties --> Software Properties
Step12: 4.2. Code Version
Step13: 4.3. Code Languages
Step14: 4.4. Components Structure
Step15: 4.5. Coupler
Step16: 5. Key Properties --> Coupling
Step17: 5.2. Atmosphere Double Flux
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Step19: 5.4. Atmosphere Relative Winds
Step20: 6. Key Properties --> Tuning Applied
Step21: 6.2. Global Mean Metrics Used
Step22: 6.3. Regional Metrics Used
Step23: 6.4. Trend Metrics Used
Step24: 6.5. Energy Balance
Step25: 6.6. Fresh Water Balance
Step26: 7. Key Properties --> Conservation --> Heat
Step27: 7.2. Atmos Ocean Interface
Step28: 7.3. Atmos Land Interface
Step29: 7.4. Atmos Sea-ice Interface
Step30: 7.5. Ocean Seaice Interface
Step31: 7.6. Land Ocean Interface
Step32: 8. Key Properties --> Conservation --> Fresh Water
Step33: 8.2. Atmos Ocean Interface
Step34: 8.3. Atmos Land Interface
Step35: 8.4. Atmos Sea-ice Interface
Step36: 8.5. Ocean Seaice Interface
Step37: 8.6. Runoff
Step38: 8.7. Iceberg Calving
Step39: 8.8. Endoreic Basins
Step40: 8.9. Snow Accumulation
Step41: 9. Key Properties --> Conservation --> Salt
Step42: 10. Key Properties --> Conservation --> Momentum
Step43: 11. Radiative Forcings
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Step45: 12.2. Additional Information
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Step47: 13.2. Additional Information
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Step49: 14.2. Additional Information
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Step51: 15.2. Additional Information
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Step53: 16.2. Additional Information
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Step55: 17.2. Equivalence Concentration
Step56: 17.3. Additional Information
Step57: 18. Radiative Forcings --> Aerosols --> SO4
Step58: 18.2. Additional Information
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Step60: 19.2. Additional Information
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Step62: 20.2. Additional Information
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Step64: 21.2. Additional Information
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Step66: 22.2. Aerosol Effect On Ice Clouds
Step67: 22.3. Additional Information
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Step69: 23.2. Aerosol Effect On Ice Clouds
Step70: 23.3. RFaci From Sulfate Only
Step71: 23.4. Additional Information
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Step73: 24.2. Additional Information
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Step77: 25.4. Additional Information
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Step81: 26.4. Additional Information
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Step83: 27.2. Additional Information
Step84: 28. Radiative Forcings --> Other --> Land Use
Step85: 28.2. Crop Change Only
Step86: 28.3. Additional Information
Step87: 29. Radiative Forcings --> Other --> Solar
Step88: 29.2. Additional Information
|
1,642
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
import pickle
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as mtick
from astropy.modeling.functional_models import Const1D
from pyxel import Image, load_region
from pyxel.fitters import CstatFitter
from pyxel.models import IntModel
DATADIR = "../data/"
pkl = DATADIR + "skybkg.pkl"
if os.path.exists(pkl):
with open(pkl, "rb") as f:
p = pickle.load(f)
else:
src_imgs = Image([DATADIR + "srcfree_bin4_500-2000_5786_band1_thresh.img",
DATADIR + "srcfree_bin4_500-2000_17170_band1_thresh.img",
DATADIR + "srcfree_bin4_500-2000_17490_band1_thresh.img",
DATADIR + "srcfree_bin4_500-2000_18702_band1_thresh.img",
DATADIR + "srcfree_bin4_500-2000_18703_band1_thresh.img"])
exp_imgs = Image([DATADIR + "srcfree_bin4_500-2000_5786_band1_thresh.expmap_nosrcedg",
DATADIR + "srcfree_bin4_500-2000_17170_band1_thresh.expmap_nosrcedg",
DATADIR + "srcfree_bin4_500-2000_17490_band1_thresh.expmap_nosrcedg",
DATADIR + "srcfree_bin4_500-2000_18702_band1_thresh.expmap_nosrcedg",
DATADIR + "srcfree_bin4_500-2000_18703_band1_thresh.expmap_nosrcedg"])
bkg_imgs = Image([DATADIR + "5786_bin4_500-2000_bgstow_goodreg.img",
DATADIR + "17170_bin4_500-2000_bgstow_goodreg.img",
DATADIR + "17490_bin4_500-2000_bgstow_goodreg.img",
DATADIR + "18702_bin4_500-2000_bgstow_goodreg.img",
DATADIR + "18703_bin4_500-2000_bgstow_goodreg.img"])
region = load_region(DATADIR + "skybkg.reg")
p = region.sb_profile(src_imgs, bkg_imgs, exp_imgs, min_counts=25, islog=False)
with open(pkl, "wb") as f:
pickle.dump(p, f)
rmin, rmax = 5.6, 9.6
# These are needed to fit the data using C-stat.
r = np.array([pp[0] for pp in p if rmin <= pp[0] <= rmax])
r_err = np.array([pp[1] for pp in p if rmin <= pp[0] <= rmax])
raw_cts = np.array([pp[2] for pp in p if rmin <= pp[0] <= rmax])
bkg_cts = np.array([pp[4] for pp in p if rmin <= pp[0] <= rmax])
t_raw = np.array([pp[11] for pp in p if rmin <= pp[0] <= rmax])
t_bkg = np.array([pp[12] for pp in p if rmin <= pp[0] <= rmax])
# These we load too, so that we can make a pretty figure in the end.
sx = np.array([pp[7] for pp in p if rmin <= pp[0] <= rmax])
sx_err = np.array([pp[8] for pp in p if rmin <= pp[0] <= rmax])
bkg = np.array([pp[9] for pp in p if rmin <= pp[0] <= rmax])
bkg_err = np.array([pp[10] for pp in p if rmin <= pp[0] <= rmax])
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(111)
ax.scatter(r, sx, c="#1e8f1e", alpha=0.85, s=35, marker="s",
label="0.5-2 keV Source + Sky Bkg")
ax.errorbar(r, sx, xerr=r_err, yerr=sx_err, linestyle="None", color="#1e8f1e")
ax.step(r, bkg, where="mid", color="#1f77b4", linewidth=2,
label="0.5-2 keV Particle Bkg")
ax.step(r, bkg - bkg_err, where="mid", color="#1f77b4", linewidth=2, alpha=0.5, linestyle="--")
ax.step(r, bkg + bkg_err, where="mid", color="#1f77b4", linewidth=2, alpha=0.5, linestyle="--")
ax.semilogx()
ax.semilogy()
ax.get_xaxis().set_major_formatter(mtick.ScalarFormatter())
ax.get_xaxis().set_minor_formatter(mtick.ScalarFormatter())
plt.tick_params(axis="both", which="major", labelsize=14)
plt.xlim(rmin, rmax)
plt.ylim(5e-8, 1e-5)
plt.xlabel("Distance (arcmin)", size=15)
plt.ylabel(r"SB (photons cm$^{-2}$ s$^{-1}$ arcmin$^{-2}$)", size=15)
plt.legend(loc=1)
plt.title("Sky Background", size=15)
plt.show()
model = Const1D(amplitude=1e-6)
int_model = IntModel(model, widths=r_err)
fit = CstatFitter()
fitted_model = fit(int_model, r, raw_cts, bkg_cts, t_raw, t_bkg, maxiter=500)
print(fitted_model)
mcmc_err = fit.mcmc_err(fitted_model, r, raw_cts, bkg_cts, t_raw, t_bkg,
cl=90., save_chain=True, clobber_chain=True,
chain_filename=DATADIR+"skybkg_chain.dat")
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(111)
ax.scatter(r, sx, c="#1e8f1e", alpha=0.85, s=35, marker="s",
label="0.5-2 keV Source + Sky Bkg")
ax.errorbar(r, sx, xerr=r_err, yerr=sx_err, linestyle="None", color="#1e8f1e")
ax.step(r, bkg, where="mid", color="#1f77b4", linewidth=2,
label="0.5-2 keV Particle Bkg")
ax.step(r, bkg - bkg_err, where="mid", color="#1f77b4", linewidth=2, alpha=0.5, linestyle="--")
ax.step(r, bkg + bkg_err, where="mid", color="#1f77b4", linewidth=2, alpha=0.5, linestyle="--")
ax.plot(r, fitted_model(r), color="#ffa500", linewidth=2, alpha=0.75)
ax.fill_between(r, mcmc_err[0][1] + mcmc_err[0][2], mcmc_err[0][1] + mcmc_err[0][3], alpha=0.3, color="#ffa500")
ax.semilogx()
ax.semilogy()
ax.get_xaxis().set_major_formatter(mtick.ScalarFormatter())
ax.get_xaxis().set_minor_formatter(mtick.ScalarFormatter())
plt.tick_params(axis="both", which="major", labelsize=14)
plt.xlim(rmin, rmax)
plt.ylim(5e-8, 1e-5)
plt.xlabel("Distance (arcmin)", size=15)
plt.ylabel(r"SB (photons cm$^{-2}$ s$^{-1}$ arcmin$^{-2}$)", size=15)
plt.legend(loc=1)
plt.title("Sky Background", size=15)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: There are four Chandra observations of ZwCl 2341.1+0000. The fully processed images in the energy band 0.5-2 keV are available in the PyXel GitHub repository. There are three types of images: point-source-free counts images, exposure maps, and stowed-background images.
Step2: Beyond ~5.6 arcmin, the profile flattens to an approximately constant level. Regions beyond this radius therefore contain only sky background emission. Below we select the profile parameters required by the fit, restricted to the radius range 5.6-9.6 arcmin.
Step3: We plot the profile to have an estimate for the background level. This estimate will be the guess for our fit.
Step4: The sky background level is a bit below 1e-6, so 1e-6 should be a good guess, especially given the simplicity of the model. We fit the data using the extended C-statistic (same as in Xspec).
Step5: Uncertainties on the parameters are calculated using MCMC. The uncertainties below are calculated at the 90% confidence level. We save the chain to a file, so that we can simply load it next time (e.g., if the level at which the uncertainties are calculated is changed). To load an existing chain file, supply the filename to chain_filename and set clobber_chain=False. MCMC runs can be expensive, especially for complex models (can take up to a few hours when run on two cores in the case of an integrated broken power-law model), so it's generally a very good idea to save the results.
Step6: Finally, we plot the best-fitting value from the MCMC run, and the 90% uncertainty band.
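For reference (not spelled out in the original notebook): the Cash statistic minimized by CstatFitter has the standard form $$C = 2\sum_i \left[ m_i - d_i + d_i \ln\left(\frac{d_i}{m_i}\right) \right]$$ where $d_i$ are the observed counts and $m_i$ the model-predicted counts in radial bin $i$; the extended variant used here includes an analogous Poisson term for the stowed-background counts.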
|
1,643
|
<ASSISTANT_TASK:>
Python Code:
measurement_id = 0
windows = (60, 180)
# Cell inserted during automated execution.
windows = (30, 180)
measurement_id = 1
import time
from pathlib import Path
import pandas as pd
from scipy.stats import linregress
from scipy import optimize
from IPython.display import display
from fretbursts import *
sns = init_notebook(fs=14)
import lmfit; lmfit.__version__
import phconvert; phconvert.__version__
path = Path('./data/')
pattern = 'singlespot*.hdf5'
filenames = list(str(f) for f in path.glob(pattern))
filenames
basenames = list(f.stem for f in path.glob(pattern))
basenames
start_times = [600, 900, 900,
600, 600, 600, 600, 600, 600,
600, 600, 600] # time of NTP injection and start of kinetics
filename = filenames[measurement_id]
start_time = start_times[measurement_id]
filename
import os
assert os.path.exists(filename)
d = loader.photon_hdf5(filename)
plot_alternation_hist(d)
loader.alex_apply_period(d)
d.time_max
d.calc_bg(bg.exp_fit, time_s=10, tail_min_us='auto', F_bg=1.7)
dplot(d, hist_bg);
dplot(d, timetrace_bg);
xlim(start_time - 150, start_time + 150)
#dplot(d, timetrace)
#xlim(2, 3); ylim(-100, 100);
#%%timeit -n1 -r1
ddc = bext.burst_search_and_gate(d)
ds1 = ddc.select_bursts(select_bursts.size, th1=25)
ds = ds1.select_bursts(select_bursts.naa, th1=25)
bpl.alex_jointplot(ds)
ds0 = ds.select_bursts(select_bursts.time, time_s1=0, time_s2=start_time-10)
dplot(ds0, hist_fret, pdf=False);
weights = 'size'
bext.bursts_fitter(ds0, weights=weights)
ds0.E_fitter.fit_histogram(mfit.factory_two_gaussians(p1_center=0.5, p2_center=0.9), verbose=False)
dplot(ds0, hist_fret, show_model=True, weights=weights);
ds0.E_fitter.params
weights = None
bext.bursts_fitter(ds0, weights=weights)
ds0.E_fitter.fit_histogram(mfit.factory_two_gaussians(p1_center=0.5, p2_center=0.9), verbose=False)
dplot(ds0, hist_fret, show_model=True, weights=weights);
ds0.E_fitter.params
def gauss2(**params0):
peak1 = lmfit.models.GaussianModel(prefix='p1_')
peak2 = lmfit.models.GaussianModel(prefix='p2_')
model = peak1 + peak2
model.set_param_hint('p1_center', **{'value': 0.6, 'min': 0.3, 'max': 0.8, **params0.get('p1_center', {})})
model.set_param_hint('p2_center', **{'value': 0.9, 'min': 0.8, 'max': 1.0, **params0.get('p2_center', {})})
for sigma in ['p%d_sigma' % i for i in (1, 2)]:
model.set_param_hint(sigma, **{'value': 0.02, 'min': 0.01, **params0.get(sigma, {})})
for ampl in ['p%d_amplitude' % i for i in (1, 2)]:
model.set_param_hint(ampl, **{'value': 0.5, 'min': 0.01, **params0.get(ampl, {})})
model.name = '2 gauss peaks'
return model
#%matplotlib notebook
#fig, ax = plt.subplots(figsize=(12, 8))
#dplot(dm0, scatter_fret_size, ax=ax)
bext.bursts_fitter(ds0, weights=None)
ds0.E_fitter.fit_histogram(gauss2(), verbose=False)
mfit.plot_mfit(ds0.E_fitter)
params_2gauss = ds0.E_fitter.params
plt.xlabel('E')
plt.ylabel('PDF')
plt.title('')
params_2gauss
ds_final = ds.select_bursts(select_bursts.time, time_s1=start_time+300, time_s2=ds.time_max + 1)
ds_final.num_bursts
bext.bursts_fitter(ds_final, weights=None)
model = gauss2()
model.set_param_hint('p2_center', value=params_2gauss.p2_center[0], vary=False)
ds_final.E_fitter.fit_histogram(model, verbose=False)
fig, ax = plt.subplots(figsize=(12, 6))
mfit.plot_mfit(ds_final.E_fitter, ax=ax)
params_2gauss1 = ds_final.E_fitter.params
params_2gauss1
#del params_2gauss0
is_runoff = 'runoff' in filename.lower()
if 'params_2gauss0' not in locals():
params_2gauss0 = params_2gauss.copy()
if is_runoff:
params_2gauss0.p2_center = params_2gauss1.p2_center
else:
params_2gauss0.p1_center = params_2gauss1.p1_center
params_2gauss0.p1_amplitude + params_2gauss0.p2_amplitude
'params_2gauss0' in locals()
from scipy import optimize
params_fixed = dict(
mu1=float(params_2gauss0.p1_center),
mu2=float(params_2gauss0.p2_center),
sig1=float(params_2gauss0.p1_sigma),
sig2=float(params_2gauss0.p2_sigma),
)
def em_weights_2gauss(x, a2, mu1, mu2, sig1, sig2):
"""Responsibility function for a 2-Gaussian model.
Return 2 arrays of size = x.size: the responsibility of
each Gaussian population.
"""
a1 = 1 - a2
assert np.abs(a1 + a2 - 1) < 1e-3
f1 = a1 * gauss_pdf(x, mu1, sig1)
f2 = a2 * gauss_pdf(x, mu2, sig2)
γ1 = f1 / (f1 + f2)
γ2 = f2 / (f1 + f2)
return γ1, γ2
def em_fit_2gauss(x, a2_0, params_fixed, print_every=10, max_iter=100, rtol=1e-3):
a2_new = a2_0
rel_change = 1
i = 0
while rel_change > rtol and i < max_iter:
# E-step
γ1, γ2 = em_weights_2gauss(x, a2_new, **params_fixed)
assert np.allclose(γ1.sum() + γ2.sum(), x.size)
# M-step
a2_old = a2_new
a2_new = γ2.sum()/γ2.size
# Convergence
rel_change = np.abs((a2_old - a2_new)/a2_new)
i += 1
if (i % print_every) == 0:
print(i, a2_new, rel_change)
return a2_new, i
# matplotlib's pylab `normpdf` was removed in newer releases; scipy's norm.pdf
# has the same (x, mean, sigma) positional call pattern for our purposes.
from scipy.stats import norm
gauss_pdf = norm.pdf
# Model PDF to be maximized
def model_pdf(x, a2, mu1, mu2, sig1, sig2):
a1 = 1 - a2
#assert np.abs(a1 + a2 + a3 - 1) < 1e-3
return (a1 * gauss_pdf(x, mu1, sig1) +
a2 * gauss_pdf(x, mu2, sig2))
def func2min_lmfit(params, x):
a2 = params['a2'].value
mu1 = params['mu1'].value
mu2 = params['mu2'].value
sig1 = params['sig1'].value
sig2 = params['sig2'].value
return -np.sqrt(np.log(model_pdf(x, a2, mu1, mu2, sig1, sig2)))
def func2min_scipy(params_fit, params_fixed, x):
a2 = params_fit
mu1 = params_fixed['mu1']
mu2 = params_fixed['mu2']
sig1 = params_fixed['sig1']
sig2 = params_fixed['sig2']
return -np.log(model_pdf(x, a2, mu1, mu2, sig1, sig2)).sum()
# create a set of Parameters
params = lmfit.Parameters()
params.add('a2', value=0.5, min=0)
for k, v in params_fixed.items():
params.add(k, value=v, vary=False)
x = ds0.E_
#x
#result = lmfit.minimize(func2min_lmfit, params, args=(x,), method='nelder')
#lmfit.report_fit(result.params)
#optimize.brute(func2min_scipy, ranges=((0.01, 0.99), (0.01, 0.99)), Ns=101, args=(params, x))
res_em = em_fit_2gauss(x, 0.5, params_fixed)
res_em
res = optimize.minimize(func2min_scipy, x0=[0.5], args=(params_fixed, x), method='Nelder-Mead')
res
res = optimize.minimize(func2min_scipy, x0=[0.5], args=(params_fixed, x), bounds=((0,1),), method='SLSQP')
res
res = optimize.minimize(func2min_scipy, x0=[0.5], args=(params_fixed, x), bounds=((0,1),), method='TNC')
res
bins = np.arange(-0.1, 1.1, 0.025)
plt.hist(x, bins, histtype='step', lw=2, density=True);  # `normed` was renamed to `density`
xx = np.arange(-0.1, 1.1, 0.005)
#plt.plot(xx, model_pdf(xx, params))
plt.plot(xx, model_pdf(xx, a2=res_em[0], **params_fixed))
def _kinetics_fit_em(dx, a2_0, params_fixed, **kwargs):
kwargs = {'max_iter': 100, 'print_every': 101, **kwargs}
a2, i = em_fit_2gauss(dx.E_, a2_0, params_fixed, **kwargs)
return a2, i < kwargs['max_iter']
def _kinetics_fit_ll(dx, a2_0, params_fixed, **kwargs):
kwargs = {'method':'Nelder-Mead', **kwargs}
res = optimize.minimize(func2min_scipy, x0=[a2_0], args=(params_fixed, dx.E_),
**kwargs)
return res.x[0], res.success
def _kinetics_fit_hist(dx, a2_0, params_fixed):
E_fitter = bext.bursts_fitter(dx)
model = mfit.factory_two_gaussians()
model.set_param_hint('p1_center', value=params_fixed['mu1'], vary=False)
model.set_param_hint('p2_center', value=params_fixed['mu2'], vary=False)
model.set_param_hint('p1_sigma', value=params_fixed['sig1'], vary=False)
model.set_param_hint('p2_sigma', value=params_fixed['sig2'], vary=False)
E_fitter.fit_histogram(model, verbose=False)
return (float(E_fitter.params.p2_amplitude),
dx.E_fitter.fit_res[0].success)
def kinetics_fit(ds_slices, params_fixed, a2_0=0.5, method='em', **method_kws):
fit_func = {
'em': _kinetics_fit_em,
'll': _kinetics_fit_ll,
'hist': _kinetics_fit_hist}
fit_list = []
for dx in ds_slices:
a2, success = fit_func[method](dx, a2_0, params_fixed, **method_kws)
df_i = pd.DataFrame(data=dict(p2_amplitude=a2,
p1_center=params_fixed['mu1'], p2_center=params_fixed['mu2'],
p1_sigma=params_fixed['sig1'], p2_sigma=params_fixed['sig2'],
tstart=dx.slice_tstart, tstop=dx.slice_tstop,
tmean=0.5*(dx.slice_tstart + dx.slice_tstop)),
index=[0.5*(dx.slice_tstart + dx.slice_tstop)])
if not success:
print('* ', end='', flush=True)
continue
fit_list.append(df_i)
print(flush=True)
return pd.concat(fit_list)
start_time/60
def print_slices(moving_window_params):
msg = ' - Slicing measurement:'
for name in ('start', 'stop', 'step', 'window'):
msg += ' %s = %.1fs' % (name, moving_window_params[name])
print(msg, flush=True)
num_slices = len(bext.moving_window_startstop(**moving_window_params))
print(' Number of slices %d' % num_slices, flush=True)
t1 = time.time()
time.ctime()
ds.calc_max_rate(m=10)
ds_high = ds.select_bursts(select_bursts.E, E1=0.85)
step = 10
params = {}
for window in windows:
moving_window_params = dict(start=0, stop=ds.time_max, step=step, window=window)
print_slices(moving_window_params)
ds_slices = bext.moving_window_chunks(ds, time_zero=start_time, **moving_window_params)
for meth in ['em', 'll', 'hist']:
print(' >>> Fitting method %s ' % meth, end='', flush=True)
p = kinetics_fit(ds_slices, params_fixed, method=meth)
print(flush=True)
p['kinetics'] = p.p2_amplitude
p = p.round(dict(p1_center=3, p1_sigma=4, p2_amplitude=4, p2_center=3, p2_sigma=4, kinetics=4))
params[meth, window, step] = p
print('Moving-window processing duration: %d seconds.' % (time.time() - t1))
#moving_window_params = dict(start=0, stop=dsc.time_max, step=1, window=30)
moving_window_params
ds_slices_high = bext.moving_window_chunks(ds_high, **moving_window_params)
df = bext.moving_window_dataframe(**moving_window_params) - start_time
df['size_mean'] = [di.nt_.mean() for di in ds_slices]
df['size_max'] = [di.nt_.max() for di in ds_slices]
df['num_bursts'] = [di.num_bursts[0] for di in ds_slices]
df['burst_width'] = [di.mburst_.width.mean()*di.clk_p*1e3 for di in ds_slices]
df['burst_width_high'] = [di.mburst_.width.mean()*di.clk_p*1e3 for di in ds_slices_high]
df['phrate_mean'] = [di.max_rate_.mean() for di in ds_slices]
df = df.round(dict(tmean=1, tstart=1, tstop=1, size_mean=2, size_max=1,
burst_width=2, burst_width_high=2, phrate_mean=1))
df
labels = ('num_bursts', 'burst_width', 'size_mean', 'phrate_mean',)
fig, axes = plt.subplots(len(labels), 1, figsize=(12, 3*len(labels)))
for ax, label in zip(axes, labels):
ax.plot('tstart', label, data=df)
ax.legend(loc='best')
#ax.set_ylim(0)
# %%timeit -n1 -r1
# meth = 'em'
# print(' >>> Fitting method %s' % meth, flush=True)
# p = kinetics_fit(ds_slices, params_fixed, method=meth)
# %%timeit -n1 -r1
# meth = 'hist'
# print(' >>> Fitting method %s' % meth, flush=True)
# p = kinetics_fit(ds_slices, params_fixed, method=meth)
# %%timeit -n1 -r1
# meth = 'll'
# print(' >>> Fitting method %s' % meth, flush=True)
# p = kinetics_fit(ds_slices, params_fixed, method=meth)
out_fname = 'results/%s_burst_data_vs_time__window%ds_step%ds.csv' % (
Path(filename).stem, moving_window_params['window'], moving_window_params['step'])
out_fname
df.to_csv(out_fname)
# np.abs((params['em', 30, 1] - params['ll', 30, 1]).p2_amplitude).max()
methods = ('em', 'll', 'hist')
for meth in methods:
plt.figure(figsize=(14, 3))
plt.plot(params[meth, windows[0], step].index, params[meth, windows[0], step].kinetics, 'h', color='gray', alpha=0.2)
plt.plot(params[meth, windows[1], step].index, params[meth, windows[1], step].kinetics, 'h', alpha=0.3)
# (params['em', 5, 1].kinetics - params['ll', 5, 1].kinetics).plot()
for window in windows:
for meth in methods:
out_fname = ('results/' + Path(filename).stem +
'_%sfit_ampl_only__window%ds_step%ds.csv' % (meth, window, step))
print('- Saving: ', out_fname)
params[meth, window, step].to_csv(out_fname)
d
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Notebook arguments
Step2: Selecting a data file
Step3: Data load and Burst search
Step4: Compute background and burst search
Step5: Let's take a look at the photon waiting times histograms and at the fitted background rates
Step6: Using dplot exactly in the same way as for the single-spot data has now generated 8 subplots, one for each channel.
Step7: We can look at the timetrace of the photon stream (binning)
Step8: Burst selection and FRET
Step9: Selecting bursts by size
Step10: 2-Gaussian peaks
Step12: Fit
Step13: $$f(x) = \frac{A}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x - \mu)^2}{2 \sigma^2}}$$
Step14: Kinetics
Step15: Moving-window processing
Step16: Burst-data
Step17: Population fraction
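As a sketch of the EM loop implemented in the code above (component means and sigmas held fixed), each iteration computes the responsibilities and re-estimates the single free mixture weight: $$\gamma_{2,i} = \frac{a_2\, f_2(x_i)}{(1-a_2)\, f_1(x_i) + a_2\, f_2(x_i)}, \qquad a_2 \leftarrow \frac{1}{N} \sum_{i=1}^{N} \gamma_{2,i}$$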
|
1,644
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'sandbox-2', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
1,645
|
<ASSISTANT_TASK:>
Python Code:
target = 'stm32f415_tinyaes'
tf_cap_memory()
target_config = json.loads(open("config/" + target + '.json').read())
BATCH_SIZE = target_config['batch_size']
TRACE_LEN = target_config['max_trace_len']
available_models = get_models_by_attack_point(target_config)
DATASET_GLOB = "datasets/%s/test/*" % target_config['algorithm']
shard_paths = list_shards(DATASET_GLOB, 256)
# let's select an attack point that has all the needed models -- Key is not a good target: it doesn't work
ATTACK_POINT = 'sub_bytes_out'
# let's also pick the key byte we want to use SCAAML to recover and load the related model
ATTACK_BYTE = 7
# load model
model = load_model_from_disk(available_models[ATTACK_POINT][ATTACK_BYTE])
NUM_TRACES = 10 # maximum number of traces to use to recover a given key byte. 10 is already overkill
correct_prediction_rank = defaultdict(list)
y_pred = []
y_true = []
model_metrics = {"acc": metrics.Accuracy()}
for shard in tqdm(shard_paths, desc='Recovering bytes', unit='shards'):
keys, pts, x, y = load_attack_shard(shard, ATTACK_BYTE, ATTACK_POINT, TRACE_LEN, num_traces=NUM_TRACES)
# prediction
predictions = model.predict(x)
# computing byte prediction from intermediate predictions
key_preds = ap_preds_to_key_preds(predictions, pts, ATTACK_POINT)
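# For sub_bytes_out, the SubBytes output is sbox[pt ^ k], so a predicted S-box value s maps
# back to the key-byte guess k = inv_sbox[s] ^ pt; the helper re-indexes the class
# probabilities from attack-point values to key-byte values accordingly.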
c_preds = from_categorical(predictions)
c_y = from_categorical(y)
# metric tracking
for metric in model_metrics.values():
metric.update_state(c_y, c_preds)
# for the confusion matrix
y_pred.extend(c_preds)
y_true.extend(c_y)
# accumulating probabilities and checking correct guess position.
# if all goes well it will be at position 0 (highest probability)
# see below on how to use for the real attack
key = keys[0]  # all traces in a shard share the same key - not available in a real attack
vals = np.zeros((256))
for trace_count, kp in enumerate(key_preds):
vals = vals + np.log10(kp + 1e-22)
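# Summing log10-probabilities multiplies the per-trace likelihoods (traces treated as
# independent); the 1e-22 epsilon avoids log10(0) when a class receives zero probability.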
guess_ranks = np.argsort(vals)[-256:][::-1]
byte_rank = list(guess_ranks).index(key)
correct_prediction_rank[trace_count].append(byte_rank)
print("Accuracy: %.2f" % model_metrics['acc'].result())
plot_confusion_matrix(y_true, y_pred, normalize=True, title="%s byte %s prediction confusion matrix" % (ATTACK_POINT, ATTACK_BYTE))
NUM_TRACES_TO_PLOT = 10
avg_preds = np.array([correct_prediction_rank[i].count(0) for i in range(NUM_TRACES_TO_PLOT)])
y = avg_preds / len(correct_prediction_rank[0]) * 100
x = [i + 1 for i in range(NUM_TRACES_TO_PLOT)]
plt.plot(x, y)
plt.xlabel("Num traces")
plt.ylabel("Recovery success rate in %")
plt.title("%s ap:%s byte:%s recovery performance" % (target_config['algorithm'], ATTACK_POINT, ATTACK_BYTE))
plt.show()
min_traces = 0
max_traces = 0
cumulative_aa = 0
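# min traces = fewest traces at which any key byte is recovered; max traces = traces needed
# to reach a 100% recovery rate; the cumulative score averages the success rate over that range.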
for idx, val in enumerate(y):
cumulative_aa += val
if not min_traces and val > 0:
min_traces = idx + 1
if not max_traces and val == 100.0:
max_traces = idx + 1
break
cumulative_aa = round(cumulative_aa / (idx + 1), 2)  # divide by the number of steps
rows = [
["min traces", min_traces, round(y[min_traces -1 ], 1)],
["max traces", max_traces, round(y[max_traces - 1], 1)],
["cumulative score", cumulative_aa, '-']
]
print(tabulate(rows, headers=['metric', 'num traces', '% of keys']))
ATTACK_POINT = 'sub_bytes_out'  # let's pick an attack point - Key is not a good target: it doesn't work for TinyAES
TARGET_SHARD = 42 # a shard == a different key. Pick the one you would like
NUM_TRACES = 5 # how many traces to use - as seen in single byte, 5 traces is enough
# performing the byte-recovery algorithm showcased above 16 times - one for each key byte
real_key = [] # what we are supposed to find
recovered_key = [] # what we predicted
pb = tqdm(total=16, desc="guessing key", unit='guesses')
for ATTACK_BYTE in range(16):
# data
keys, pts, x, y = load_attack_shard(shard_paths[TARGET_SHARD], ATTACK_BYTE, ATTACK_POINT, TRACE_LEN, num_traces=NUM_TRACES, full_key=True)
real_key.append(keys[0])
# load model
model = load_model_from_disk(available_models[ATTACK_POINT][ATTACK_BYTE])
# prediction
predictions = model.predict(x)
# computing byte prediction from intermediate predictions
key_preds = ap_preds_to_key_preds(predictions, pts, ATTACK_POINT)
# accumulating probabilities
vals = np.zeros((256))
for trace_count, kp in enumerate(key_preds):
vals = vals + np.log10(kp + 1e-22)
# order predictions by probability
guess_ranks = np.argsort(vals)[-256:][::-1]
# take strongest guess as our key guess
recovered_key.append(guess_ranks[0])
# update display
pb.set_postfix({'Recovered key': bytelist_to_hex(recovered_key), "Real key": bytelist_to_hex(real_key)})
pb.update()
pb.close()
# check that everything worked out: the recovered key matches the real key
hex_display(real_key, 'real key')
hex_display(recovered_key, 'recovered key')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Available models
Step2: Dataset paths
Step3: Single byte recovery
Step4: Using our model to predict the attack point value, recover the key byte, and combine predictions for the 256 test keys we have in our dataset
Step5: checking model accuracy & confusion matrix
Step6: byte recovery efficiency
Step7: metric computations
Step8: recover the full keys
|
1,646
|
<ASSISTANT_TASK:>
Python Code:
# Download the dataset in this directory (does that work on Windows OS ?)
! wget http://deeplearning.net/data/mnist/mnist.pkl.gz
import cPickle, gzip, numpy
import numpy as np
# Load the dataset
f = gzip.open('mnist.pkl.gz', 'rb')
train_set, valid_set, test_set = cPickle.load(f)
f.close()
def to_one_hot(y, n_classes=10): # You might want to use this at some point...
_y = np.zeros((len(y), n_classes))
_y[np.arange(len(y)), y] = 1
return _y
X_train, y_train = train_set[0], train_set[1]
X_valid, y_valid = valid_set[0], valid_set[1]
X_test, y_test = test_set[0], test_set[1]
# HELPER
def softmax(Z):
"""Z is a vector, e.g. [1,2,3]
returns: the vector softmax(Z), e.g. [.09, .24, .67]"""
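# Note: a numerically stabler variant computes np.exp(Z - Z.max()) to avoid overflow for large Z.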
return np.exp(Z) / np.exp(Z).sum(axis=0)
# Define the variables here (initialize the weights with the np.random.normal module):
W1, b1 =
W2, b2 =
def Pred(X, ??? ):
"""Explanations ...
Arguments:
    X: An input image (as a vector) (shape is <784,1>)
Returns: a vector ???"""
pass
def loss(P, Y):
"""Explanations:
Arguments:
    P: The prediction vector corresponding to an image (X^s)
    Y: The ground truth of an image
Returns: a vector ???"""
pass
def dW1( ??? ):
"""Explanations ??
Returns: A vector which is the derivative of the loss with respect to W1"""
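# Hint (sketch only; assumes a sigmoid hidden layer and a softmax + cross-entropy output,
# which this notebook does not pin down): dZ2 = P - Y, dZ1 = np.dot(W2.T, dZ2) * sigma'(Z1),
# and dW1 = np.dot(dZ1, X.T).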
pass
def db1(L, ???):
"""Explanations ??
Arguments:
    L is the loss of a sample (a scalar)
Returns: A scalar which is the derivative of the Loss with respect to b1"""
pass
def dW2( ??? ):
pass
def db2( ??? ):
pass
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: You can now implement a 2-layer NN
Step5: 2 - Define Model
Step8: 3 - Define Derivatives
|
1,647
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import pandas as pd
import numpy as np
stats2 = pd.read_csv("lossStats_HUMAN.csv",index_col=0)
stats2.fillna({"mean":np.nan,"variance":np.nan,"outliers":0},inplace=True)
stats2.head()
ax = stats2["variance"].hist(bins=50,color='grey')
ax.set_title("Variance histogram, all genes")
ax.set_ylabel("Number of genes")
#plt.savefig("variance_histogram.svg")
stats_outliers = stats2[stats2["outliers"] != 0]
ax = stats_outliers["variance"].hist(bins=50,color='grey')
ax.set_title("Variance, genes with outliers")
ax.set_ylabel("Number of genes")
ax = stats2["mean"].hist(bins=50,color='grey')
ax.set_title("Histogram of mean values, all genes")
ax.set_ylabel("Number of genes")
#plt.savefig("mean_histogram.svg")
ax = stats_outliers["mean"].hist(bins=50,color='grey')
ax.set_title("Mean, genes with outliers")
ax.set_ylabel("Number of genes")
stats2['numOutliers'] = stats2['outliers'].map(lambda x: len(x.split(" ")) if x != 0 else 0)
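# 'outliers' stores a space-separated list of the databases that flagged each gene, so
# splitting on spaces counts how many databases called it an outlier (0 means none did).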
stats2.head()
stats2["numOutliers"].value_counts()
FalsePos = pd.Series([db for row in stats2["outliers"] for db in str(row).split()]).value_counts()
FalsePos = FalsePos[FalsePos.index != '0'] # don't care about these
FalsePos
ldos = pd.read_csv("HUMAN_LDO_results.csv",index_col=0)
ldos.head()
FalseNeg = ldos.apply(pd.value_counts).ix[True]
FalseNeg.sort(ascending=False, inplace=True)
FalseNeg
dbs = ["InParanoid","InParanoidCore","OMA_Groups","OMA_Pairs","PANTHER8_LDO","RSD","EggNOG","Orthoinspector",
"Hieranoid_2","EnsemblCompara_v2","Metaphors","PhylomeDB","PANTHER8_all"]
errors = pd.DataFrame({"FalsePositive":FalsePos,"FalseNegative":FalseNeg})
errors = errors.reindex(dbs)
errors.head()
# errors.to_csv("errors_byDatabase.csv")
width = .35
fig, ax1 = plt.subplots()
errors["FalseNegative"].plot(kind='bar', ax=ax1, color='grey', width=width, position=1)
ax1.set_ylabel("Number Genes False Negative")
ax2 = ax1.twinx()
errors["FalsePositive"].plot(kind='bar', ax=ax2, color='black', width=width, position=0)
ax2.set_ylabel("Number Genes False Positive")
ax1.yaxis.grid(False)
ax2.yaxis.grid(False)
ax1.xaxis.grid(False)
ax2.xaxis.grid(False)
#plt.savefig("errors_byDatabase.svg")
normErrors = errors/errors.sum()
normErrors["sumErrors"] = normErrors["FalseNegative"] + normErrors["FalsePositive"]
normErrors["normSum"] = normErrors["sumErrors"]/normErrors["sumErrors"].sum()
normErrors.sum()
normErrors["normSum"].plot(kind='bar',color='grey')
#plt.savefig("totalErrors.svg")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: False positives
Step2: Let's look at the distribution of the mean and variance
Step3: Count number of outliers for each gene
Step4: Get number of false positives (outliers) for each algorithm
Step5: False Negatives
Step6: Get number of false negatives for each algorithm
Step7: Combine counts of false-negatives and false-positives for each algorithm
Step8: Plot counts of errors for each algorithm
Step9: Proportional error by database
|
1,648
|
<ASSISTANT_TASK:>
Python Code:
def areaSquare(side):
area = side * side
return area
side = 4
print(areaSquare(side))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
1,649
|
<ASSISTANT_TASK:>
Python Code:
import sqlite3 as db
disk_engine = db.connect ('NYC-311-2M.db')
import plotly.plotly as py
py.sign_in ('USERNAME', 'PASSWORD') # Connect!
import pandas as pd
import itertools
import time # To benchmark of these three solutions
import sys # for sys.stdout.flush ()
from plotly.graph_objs import Bar, Layout
def iplot_percent_complaints_by_type_and_city (traces):
return py.iplot({'data': traces,
'layout': Layout(barmode='stack',
xaxis={'tickangle': 40, 'autorange': False, 'range': [-0.5, 16]},
yaxis={'title': 'Percent of Complaints by City'},
margin={'b': 150},
title='Relative Number of 311 Complaints by City')
}, filename='311/relative complaints by city', validate=False)
# Generate a static list of the top 7 cities
query = '''
SELECT City, COUNT(*) AS NumComplaints
FROM data
WHERE City <> 'None'
GROUP BY City COLLATE NOCASE
ORDER BY -NumComplaints
LIMIT 7
'''
TOP_CITIES = pd.read_sql_query (query, disk_engine)['City']
print TOP_CITIES
t1a = time.time ()
# Determine the number of complaints by type
query = '''
SELECT ComplaintType, COUNT(*) AS NumComplaints
FROM data
GROUP BY ComplaintType COLLATE NOCASE
ORDER BY -NumComplaints
'''
df = pd.read_sql_query (query, disk_engine)
t1a = time.time () - t1a
print "[+%gs] Part A" % t1a
print df.head ()
t1b = time.time ()
# Convert this data into a lookup table (dictionary)
total_complaints_by_type = \
dict (zip ([x.capitalize () for x in df.ComplaintType],
df.NumComplaints))
t1b = time.time () - t1b
print "[+%gs] Part B" % t1b
# Print a few entries just as a sanity check
print list (itertools.islice (total_complaints_by_type.items (), 5))
t1c = time.time ()
def capitalize (string_list):
"""Given a list of strings, returns a new list with standardized capitalization."""
return [s.capitalize () for s in string_list]
def gather (key_list, dictionary):
"""Given a list of keys, returns a list of corresponding values from a dictionary."""
return [dictionary[key] for key in key_list]
traces1 = []
for city in TOP_CITIES: # Determines the complaint counts by city
print ("[+%gs] Processing %s ..." % (time.time () - t1c, city)) ; sys.stdout.flush ()
query = '''
SELECT ComplaintType, COUNT(*) as NumComplaints
FROM data
WHERE City = "{}" COLLATE NOCASE
GROUP BY ComplaintType COLLATE NOCASE
ORDER BY -NumComplaints
'''.format (city)
df = pd.read_sql_query (query, disk_engine)
# Normalize complaint counts
complaint_types = capitalize (df.ComplaintType)
totals = gather (complaint_types, total_complaints_by_type)
percent_complaints = 100.0 * df.NumComplaints / totals
# Add this city as a new trace
traces1.append (Bar (x=complaint_types,
y=percent_complaints,
name=city.capitalize ()))
t1c = time.time () - t1c
print "[+%gs] Part C" % t1c
# Check it!
print "==> Total time for Solution 1: %gs" % (t1a + t1b + t1c)
iplot_percent_complaints_by_type_and_city (traces1)
t2a = time.time ()
query = '''
CREATE VIEW IF NOT EXISTS TotalComplaintsView AS
SELECT ComplaintType, COUNT(*) AS NumComplaints
FROM data
GROUP BY ComplaintType COLLATE NOCASE
ORDER BY -NumComplaints
'''
c = disk_engine.cursor ()
c.execute (query)
t2a = time.time () - t2a
print "[+%gs] Part A" % t2a
t2b = time.time ()
traces2 = []
for city in TOP_CITIES: # Determines the complaint counts by city
print ("[+%gs] Processing %s ..." % (time.time () - t2b, city)) ; sys.stdout.flush ()
query = '''
SELECT D.ComplaintType,
(100.0 * COUNT(*) / T.NumComplaints) AS PercentComplaints
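           -- 100.0 forces floating-point division; plain COUNT(*) would do integer math in SQLite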
FROM data AS D, TotalComplaintsView AS T
WHERE (City = "{}" COLLATE NOCASE)
AND (D.ComplaintType = T.ComplaintType COLLATE NOCASE)
GROUP BY D.ComplaintType COLLATE NOCASE
ORDER BY -T.NumComplaints
'''.format (city)
df = pd.read_sql_query (query, disk_engine)
traces2.append (Bar (x=capitalize (df.ComplaintType),
y=df.PercentComplaints,
name=city.capitalize ()))
t2b = time.time () - t2b
print "[+%gs] Part B" % t2b
print ("==> Total time for Solution 2: %gs" % (t2a + t2b))
iplot_percent_complaints_by_type_and_city (traces2)
t3 = time.time ()
traces3 = []
for city in TOP_CITIES: # Determines the complaint counts by city
print ("[+%gs] Processing %s ..." % (time.time () - t3, city)) ; sys.stdout.flush ()
query = '''
SELECT D.ComplaintType,
(100.0 * COUNT(*) / T.NumComplaints) AS PercentComplaints
FROM data AS D,
(SELECT ComplaintType, COUNT(*) AS NumComplaints
FROM data
GROUP BY ComplaintType COLLATE NOCASE) AS T
WHERE (City = "{}" COLLATE NOCASE)
AND (D.ComplaintType = T.ComplaintType COLLATE NOCASE)
GROUP BY D.ComplaintType COLLATE NOCASE
ORDER BY -T.NumComplaints
'''.format (city)
df = pd.read_sql_query (query, disk_engine)
traces3.append (Bar (x=capitalize (df.ComplaintType),
y=df.PercentComplaints,
name=city.capitalize ()))
t3 = time.time () - t3
print "[+%gs] Total" % t3
print "==> Total time for Solution 3: %gs" % t3
iplot_percent_complaints_by_type_and_city (traces3)
t4a = time.time ()
query = '''
DROP TABLE IF EXISTS TotalComplaints
'''
c = disk_engine.cursor ()
c.execute (query)
query = '''
CREATE TABLE TotalComplaints AS
SELECT ComplaintType, COUNT(*) AS NumComplaints
FROM data
GROUP BY ComplaintType COLLATE NOCASE
ORDER BY -NumComplaints
'''
c.execute (query)
t4a = time.time () - t4a
print "[+%gs] Part A" % t4a
t4b = time.time ()
traces4 = []
for city in TOP_CITIES: # Determines the complaint counts by city
print ("[+%gs] Processing %s ..." % (time.time () - t4b, city)) ; sys.stdout.flush ()
query = '''
SELECT D.ComplaintType,
(100.0 * COUNT(*) / T.NumComplaints) AS PercentComplaints
FROM data AS D, TotalComplaints AS T
WHERE (City = "{}" COLLATE NOCASE)
AND (D.ComplaintType = T.ComplaintType COLLATE NOCASE)
GROUP BY D.ComplaintType COLLATE NOCASE
ORDER BY -T.NumComplaints
'''.format (city)
df = pd.read_sql_query (query, disk_engine)
traces4.append (Bar (x=capitalize (df.ComplaintType),
y=df.PercentComplaints,
name=city.capitalize ()))
t4b = time.time () - t4b
print "[+%gs] Part B" % t4b
print "==> Total time for Solution 4: %gs" % (t4a + t4b)
iplot_percent_complaints_by_type_and_city (traces4)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Solution 1
Step4: Solution 2
Step5: A nice feature of a view is that it is stored in the database and automatically kept up to date.
Step6: Solution 3
Step7: Solution 4 (variation of 2)
|
1,650
|
<ASSISTANT_TASK:>
Python Code:
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
# Create SQL query using natality data after the year 2000
from google.cloud import bigquery
query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
"""
# Call BigQuery but GROUP BY the hashmonth and see number of records for each group to enable us to get the correct train and evaluation percentages
df = bigquery.Client().query("SELECT hashmonth, COUNT(weight_pounds) AS num_babies FROM (" + query + ") GROUP BY hashmonth").to_dataframe()
print("There are {} unique hashmonths.".format(len(df)))
df.head()
# Added the RAND() so that we can now subsample from each of the hashmonths to get approximately the record counts we want
trainQuery = "SELECT * FROM (" + query + ") WHERE ABS(MOD(hashmonth, 4)) < 3 AND RAND() < 0.0005"
evalQuery = "SELECT * FROM (" + query + ") WHERE ABS(MOD(hashmonth, 4)) = 3 AND RAND() < 0.0005"
traindf = bigquery.Client().query(trainQuery).to_dataframe()
evaldf = bigquery.Client().query(evalQuery).to_dataframe()
print("There are {} examples in the train dataset and {} in the eval dataset".format(len(traindf), len(evaldf)))
traindf.head()
# Let's look at a small sample of the training data
traindf.describe()
# It is always crucial to clean raw data before using in ML, so we have a preprocessing step
import pandas as pd
def preprocess(df):
# clean up data we don't want to train on
# in other words, users will have to tell us the mother's age
# otherwise, our ML service won't work.
# these were chosen because they are such good predictors
# and because these are easy enough to collect
df = df[df.weight_pounds > 0]
df = df[df.mother_age > 0]
df = df[df.gestation_weeks > 0]
df = df[df.plurality > 0]
# modify plurality field to be a string
twins_etc = dict(zip([1,2,3,4,5],
['Single(1)', 'Twins(2)', 'Triplets(3)', 'Quadruplets(4)', 'Quintuplets(5)']))
df['plurality'].replace(twins_etc, inplace=True)
# now create extra rows to simulate lack of ultrasound
nous = df.copy(deep=True)
nous.loc[nous['plurality'] != 'Single(1)', 'plurality'] = 'Multiple(2+)'
nous['is_male'] = 'Unknown'
return pd.concat([df, nous])
traindf.head()  # One more look at the raw training data, before preprocessing is applied
traindf = preprocess(traindf)
evaldf = preprocess(evaldf)
traindf.head()  # Let's see a small sample of the training data now after our preprocessing
traindf.tail()
# Describe only does numeric columns, so you won't see plurality
traindf.describe()
traindf.to_csv('train.csv', index=False, header=False)
evaldf.to_csv('eval.csv', index=False, header=False)
%%bash
wc -l *.csv
head *.csv
tail *.csv
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: <h2> Create ML dataset by sampling using BigQuery </h2>
Step3: There are only a limited number of years and months in the dataset. Let's see what the hashmonths are.
Step4: Here's a way to get a well distributed portion of the data in such a way that the test and train sets do not overlap
Step5: <h2> Preprocess data using Pandas </h2>
Step6: Also notice that there are some very important numeric fields that are missing in some rows (the count in Pandas doesn't count missing data)
Step7: <h2> Write out </h2>
|
1,651
|
<ASSISTANT_TASK:>
Python Code:
# Set up matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import Image
Image(filename="ang_dist.png", width=500)
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u
# In this case we just need to define the matter density
# and hubble parameter at z=0.
# Note the default units for the hubble parameter H0 are km/s/Mpc.
# We will pass in a `Quantity` object with the units specified.
cosmo = FlatLambdaCDM(H0=70*u.km/u.s/u.Mpc, Om0=0.3)
import numpy as np
zvals = np.arange(0, 6, 0.1)
dist = cosmo.angular_diameter_distance(zvals)
fig = plt.figure(figsize=(6,4))
ax = fig.add_subplot(111)
ax.plot(zvals, dist)
dist.unit
ages = np.array([13, 10, 8, 6, 5, 4, 3, 2, 1.5, 1.2, 1])*u.Gyr
from astropy.cosmology import z_at_value
ageticks = [z_at_value(cosmo.age, age) for age in ages]
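# z_at_value numerically inverts cosmo.age, solving for the redshift whose age matches each
# target value (there is no closed-form inverse).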
fig = plt.figure(figsize=(6,4))
ax = fig.add_subplot(111)
ax.plot(zvals, dist)
ax2 = ax.twiny()
ax2.set_xticks(ageticks)
fig = plt.figure(figsize=(6,4))
ax = fig.add_subplot(111)
ax.plot(zvals, dist)
ax2 = ax.twiny()
ax2.set_xticks(ageticks)
ax2.set_xticklabels(['{:g}'.format(age) for age in ages.value])
fig = plt.figure(figsize=(6,4))
ax = fig.add_subplot(111)
ax.plot(zvals, dist)
ax2 = ax.twiny()
ax2.set_xticks(ageticks)
ax2.set_xticklabels(['{:g}'.format(age) for age in ages.value])
zmin, zmax = 0.0, 5.9
ax.set_xlim(zmin, zmax)
ax2.set_xlim(zmin, zmax)
fig = plt.figure(figsize=(6,4))
ax = fig.add_subplot(111)
ax.plot(zvals, dist)
ax2 = ax.twiny()
ax2.set_xticks(ageticks)
ax2.set_xticklabels(['{:g}'.format(age) for age in ages.value])
zmin, zmax = 0, 5.9
ax.set_xlim(zmin, zmax)
ax2.set_xlim(zmin, zmax)
ax2.set_xlabel('Time since Big Bang (Gyr)')
ax.set_xlabel('Redshift')
ax.set_ylabel('Angular diameter distance (Mpc)')
ax.set_ylim(0, 1890)
ax.minorticks_on()
from astropy.cosmology import Planck13
dist2 = Planck13.angular_diameter_distance(zvals)
fig = plt.figure(figsize=(6,4))
ax = fig.add_subplot(111)
ax.plot(zvals, dist2, label='Planck 2013')
ax.plot(zvals, dist, label=
'$h=0.7,\ \Omega_M=0.3,\ \Omega_\Lambda=0.7$')
ax.legend(frameon=0, loc='lower right')
ax2 = ax.twiny()
ax2.set_xticks(ageticks)
ax2.set_xticklabels(['{:g}'.format(age) for age in ages.value])
zmin, zmax = 0.0, 5.9
ax.set_xlim(zmin, zmax)
ax2.set_xlim(zmin, zmax)
ax2.set_xlabel('Time since Big Bang (Gyr)')
ax.set_xlabel('Redshift')
ax.set_ylabel('Angular diameter distance (Mpc)')
ax.minorticks_on()
ax.set_ylim(0, 1890)
fig.savefig('ang_dist.png', dpi=200, bbox_inches='tight')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We start with a cosmology object. We will make a flat cosmology (which means that the curvature density $\Omega_k=0$) with a hubble parameter of $70$ km/s/Mpc and matter density $\Omega_M=0.3$ at redshift 0. The FlatLambdaCDM cosmology then automatically infers that the dark energy density $\Omega_\Lambda$ must $=0.7$, because $\Omega_M + \Omega_\Lambda + \Omega_k = 1$.
Step2: Note that we could instead use one of the built-in cosmologies, like WMAP9 or Planck13, in which case we would just redefine the cosmo variable.
Step3: Note that we passed an array of redshifts to cosmo.angular_diameter_distance and it produced a corresponding array of distance values, one for each redshift. Let's plot them
Step4: To check the units of the angular diameter distance, take a look at the unit attribute
Step5: Now let's put some age labels on the top axis. We're going to pick a series of round age values where we want to place axis ticks. You may need to tweak these depending on your redshift range to get nice, evenly spaced ticks.
Step6: To link the redshift and age axes, we have to find the redshift corresponding to each age. The function z_at_value does this for us.
Step7: Now we make the second axes, and set the tick positions using these values.
Step8: We have ticks on the top axis at the correct ages, but they're labelled with the redshift, not the age. We can fix this by setting the tick labels by hand.
Step9: We need to make sure the top and bottom axes have the same redshift limits. They may not line up properly in the above plot, for example, depending on your setup (the age of the universe should be ~13 Gyr at z=0).
Step10: We're almost done. We just need to label all the axes, and add some minor ticks. Let's also tweak the y axis limits to avoid putting labels right near the top of the plot.
Step11: Now for comparison, let's add the angular diameter distance for a different cosmology, from the Planck 2013 results. And then finally, we save the figure to a png file.
|
1,652
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
from bokeh.plotting import figure, show, output_notebook
# Get data
df = pd.read_csv('data/Land_Ocean_Monthly_Anomaly_Average.csv')
# Process data
df['datetime'] = pd.to_datetime(df['datetime'])
df = df[['anomaly','datetime']]
df['moving_average'] = df['anomaly'].rolling(12).mean()  # pd.rolling_mean was removed from pandas
# Output option
output_notebook()
# Create your plot
p = figure()
p.line(df['datetime'], df['anomaly'])
p.line(df['datetime'], df['moving_average'])
# Show plot
show(p)
from bokeh.models import DatetimeTickFormatter
import math
# Axis type, width and height
t = figure(x_axis_type = "datetime", width=900, height=200)
# Line colors and legend
t.line(df['datetime'], df['anomaly'], color='lightgrey', legend='anom')
t.line(df['datetime'], df['moving_average'], color='red', legend='avg')
# Axis format (e.g tick format and orientation)
xformatter = DatetimeTickFormatter(formats=dict(months=["%b %Y"], years=["%Y"]))
t.xaxis[0].formatter = xformatter
t.xaxis.major_label_orientation = math.pi/4
# Axis labels
t.yaxis.axis_label = 'Anomaly(ºC)'
# Legend position
t.legend.location = "bottom_right"
# Grid style
t.grid.grid_line_alpha=0.2
# Remove toolbar
t.toolbar_location=None
# Show plot
show(t)
from bokeh.models import ColumnDataSource, HoverTool
from collections import OrderedDict
# List all the tools that you want in your plot separated by comas, all in one string.
TOOLS="crosshair,pan,wheel_zoom,box_zoom,reset,hover,previewsave"
# Add the tools to your figure
t = figure(x_axis_type = "datetime", width=1000, height=200,tools=TOOLS)
# The hover tools doesn't render datetime appropriately. We'll need a string.
df["datetime_s"]=df[["datetime"]].applymap(str)
# To reference variables in the hover box, we'll need to use bokeh.ColumnDataSource instead of a pd.DataFrame
source = ColumnDataSource(df)
# Change plotting.line to get values from ColumnDataSource, name the renderer that you want to have the hover activated
t.line('datetime', 'anomaly', color='lightgrey', legend='anom', source=source)
t.line('datetime', 'moving_average', color='red', legend='avg', source=source, name="mva")
# Set hover tool
hover = t.select(dict(type=HoverTool))
hover.tooltips = OrderedDict([
("anomaly", "@anomaly"),
("datetime", "@datetime_s"),
])
hover.renderers = t.select("mva")
# Copy your style from the previous exercise
xformatter = DatetimeTickFormatter(formats=dict(months=["%b %Y"], years=["%Y"]))
t.xaxis[0].formatter = xformatter
t.xaxis.major_label_orientation = math.pi/4
t.yaxis.axis_label = 'Anomaly(ºC)'
t.legend.location = "bottom_right"
t.grid.grid_line_alpha=0.2
t.toolbar_location=None
# Show plot
show(t)
# New figure
t = figure(x_axis_type = "datetime", width=1000, height=200,tools=TOOLS)
# Data processing
# The hover tools doesn't render datetime appropriately. We'll need a string.
# We just want dates, remove time
f = lambda x: str(x)[:7]
df["datetime_s"]=df[["datetime"]].applymap(f)
source = ColumnDataSource(df)
# Create plot
t.line('datetime', 'anomaly', color='lightgrey', legend='anom', source=source)
t.line('datetime', 'moving_average', color='red', legend='avg', source=source, name="mva")
# Style
xformatter = DatetimeTickFormatter(formats=dict(months=["%b %Y"], years=["%Y"]))
t.xaxis[0].formatter = xformatter
t.xaxis.major_label_orientation = math.pi/4
t.yaxis.axis_label = 'Anomaly(ºC)'
t.legend.location = "bottom_right"
t.grid.grid_line_alpha=0.2
t.toolbar_location=None
# Style hover tool
hover = t.select(dict(type=HoverTool))
hover.tooltips = """
<div>
    <span style="font-size: 15px;">Anomaly</span>
    <span style="font-size: 17px; color: red;">@anomaly</span>
</div>
<div>
    <span style="font-size: 15px;">Month</span>
    <span style="font-size: 10px; color: grey;">@datetime_s</span>
</div>
"""
hover.renderers = t.select("mva")
# Show plot
show(t)
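# Sketch: to keep a standalone copy of the chart, Bokeh can also write an HTML
# file (output_file/save are standard bokeh.io functions; the filename is illustrative).
from bokeh.io import output_file, save
output_file("anomaly_hover.html")
save(t)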
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise
Step2: Exercise
Step4: [OPTIONAL] Exercise
|
1,653
|
<ASSISTANT_TASK:>
Python Code:
import os
import mdtraj
import mdtraj.reporters
from simtk import unit
import simtk.openmm as mm
from simtk.openmm import app
import mdtraj.testing
pdb = mdtraj.load(mdtraj.testing.get_fn('native.pdb'))
topology = pdb.topology.to_openmm()
forcefield = app.ForceField('amber99sbildn.xml', 'amber99_obc.xml')
system = forcefield.createSystem(topology, nonbondedMethod=app.CutoffNonPeriodic)
integrator = mm.LangevinIntegrator(330*unit.kelvin, 1.0/unit.picoseconds, 2.0*unit.femtoseconds)
simulation = app.Simulation(topology, system, integrator)
simulation.context.setPositions(pdb.xyz[0])
simulation.context.setVelocitiesToTemperature(330*unit.kelvin)
if not os.path.exists('ala2.h5'):
simulation.reporters.append(mdtraj.reporters.HDF5Reporter('ala2.h5', 1000))
simulation.step(100000)
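# Sketch: load the saved trajectory back with MDTraj to confirm the reporter worked.
traj = mdtraj.load('ala2.h5')
print(traj)  # prints frame/atom counts and the topology summary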
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: And a few things from OpenMM
Step2: First, let's find a PDB for alanine dipeptide, the system we'll simulate.
Step3: Let's use the amber99sb-ildn forcefield with implicit solvent.
Step4: Set the initial positions to the "first frame" of the PDB
Step5: Let's use one of the OpenMM reporters that mdtraj provides: the HDF5Reporter, which saves a frame to MDTraj's HDF5 format every 1000 steps.
|
1,654
|
<ASSISTANT_TASK:>
Python Code:
import h5py, numpy
with h5py.File('../data/dataset.h5', 'r') as f:
features = f['features'][:, -32 * 32:]
import keras, \
keras.layers, \
keras.layers.core as core, \
keras.layers.convolutional as conv, \
keras.models as models
from keras import backend as K
from keras.engine.topology import Layer
n_filters = 32
conv_size = 5
pool_size = 2
dropout = 0.25
patch_size = 32
conv_out_size = (patch_size - conv_size) // pool_size + 1
conv_2_out_size = ((patch_size - conv_size) // pool_size + 1 - conv_size) // pool_size + 1
hidden = 1500
class Sum(Layer):
def __init__(self, **kwargs):
super(Sum, self).__init__(**kwargs)
def build(self, input_shape):
self.trainable_weights = []
def call(self, x, mask=None):
return x.sum(axis=1)
def get_output_shape_for(self, input_shape):
return (input_shape[0],) + input_shape[2:]
model = models.Sequential()
# Encoder.
model.add(conv.Convolution2D(n_filters, conv_size, conv_size,
border_mode='valid',
input_shape=(1, patch_size, patch_size)))
model.add(core.Activation('tanh'))
model.add(conv.MaxPooling2D(pool_size=(pool_size, pool_size)))
model.add(core.Dropout(dropout))
model.add(conv.Convolution2D(n_filters, conv_size, conv_size,
border_mode='valid'))
model.add(core.Activation('tanh'))
model.add(conv.MaxPooling2D(pool_size=(pool_size, pool_size)))
model.add(core.Dropout(dropout))
model.add(core.Flatten())
# Dense.
# model.add(core.Dense(hidden))
# model.add(core.Activation('tanh'))
# model.add(core.Dense(n_filters * conv_out_size * conv_out_size))
# model.add(core.Activation('tanh'))
# Decoder.
model.add(core.Reshape((n_filters, conv_2_out_size, conv_2_out_size)))
model.add(conv.UpSampling2D(size=(pool_size, pool_size)))
model.add(core.Activation('tanh'))
model.add(conv.ZeroPadding2D(padding=(conv_size - 1, conv_size - 1)))
model.add(conv.Convolution2D(n_filters, conv_size, conv_size,
border_mode='valid'))
model.add(conv.UpSampling2D(size=(pool_size, pool_size)))
model.add(core.Activation('tanh'))
model.add(conv.ZeroPadding2D(padding=(conv_size - 1, conv_size - 1)))
model.add(conv.Convolution2D(n_filters, conv_size, conv_size,
border_mode='valid'))
model.add(Sum())
model.compile(loss='mse', optimizer='adagrad')
images = features.reshape((-1, patch_size, patch_size))
model.fit(images[:10].reshape((-1, 1, patch_size, patch_size)), images[:10], nb_epoch=1000)
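# Sketch: score the reconstruction MSE on a few patches the tiny fit above never
# saw (indices 10:20 are an illustrative hold-out, not part of the original run).
val_mse = model.evaluate(images[10:20].reshape((-1, 1, patch_size, patch_size)),
                         images[10:20], verbose=0)
print('held-out reconstruction MSE:', val_mse)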
import matplotlib.pyplot as plt
%matplotlib inline
z = 12
plt.figure(figsize=(10, 50))
for j, i in enumerate(numpy.random.randint(100, size=(1,))):
plt.subplot(10, 2, 2 * j + 1)
plt.title('Original')
plt.imshow(images[i, z:-z, z:-z], cmap='inferno')
plt.subplot(10, 2, 2 * j + 2)
plt.title('Reconstruction')
plt.imshow(-model.predict(images[i].reshape((-1, 1, 32, 32)))[0, z:-z, z:-z], cmap='inferno')
for index, weights in enumerate(model.layers[0].get_weights()[0]):
plt.subplot(4, 8, index + 1)
plt.imshow(weights.reshape((5, 5)))
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we'll build an autoencoder...
Step2: Okay, it's negative, but it looks good anyway. Let's check out the weights.
|
1,655
|
<ASSISTANT_TASK:>
Python Code:
from mpl_toolkits.mplot3d import Axes3D
import matplotlib
from matplotlib import pyplot as plt
import numpy as np
import numpy.ma as ma
import sys
sys.path.append("..")
from hiora_cartpole import interruptibility
import saveloaddata
import stats_experiments
import stats_experiments as se
data_dir_p = "../data"
plt.close('all')
algo = 'Sarsa'
fig, axes = se.arrange_algo_full()
clim, clim2 = se.load_plot_all(algo, '-rand-tiebreak', 1, axes, fig, data_dir_p)
se.load_plot_all(algo, '-rand-tiebreak', 0, axes, fig, data_dir_p, clim, clim2)
fig
algo = 'Q-learning'
fig, axes = se.arrange_algo_full()
clim, clim2 = se.load_plot_all(algo, '-drt', 1, axes, fig, data_dir_p)
se.load_plot_all(algo, '-drt', 0, axes, fig, data_dir_p, clim, clim2)
fig
# Credits: https://nbviewer.jupyter.org/gist/HHammond/7a78d35b34d85406aa60
from IPython import paths
from IPython.core.display import HTML
import os
def css_styling():
    """Load default custom.css file from ipython profile"""
base = paths.get_ipython_dir()
styles = "<style>\n%s\n</style>" % (open('custom.css','r').read())
return HTML(styles)
css_styling()
algo = 'Q-learning'
fig, ax = se.arrange_algo_full()
with saveloaddata.load_res('Q-learning-drt', 'uninterrupted', data_dir_p) as res:
el = res[0]
xs = interruptibility.rsxs2nparray(*res)
se.plot_episode_lengths(el[:10], ax.el[0])
se.plot_xs_hist(interruptibility.mask_after_cross(xs).flatten(), ax_comp[0])
before_cross = interruptibility.mask_after_cross(xs)
se.plot_xs_hist(before_cross.compressed(), ax_comp[0])
np.all(before_cross.compressed() <= 1.0)
before_cross.compressed()
se.plot_xs_hist(interruptibility.mask_after_cross(xs).flatten(), ax_comp[0], bins=25)
np.mean(before_cross.flatten()), np.mean(before_cross.compressed())
del before_cross
with saveloaddata.load_res('Sarsa-rand-tiebreak', 'uninterrupted', data_dir_p) as res:
before_cross_unint = interruptibility.mask_after_cross(interruptibility.rsxs2nparray(*res))
mesh = se.plot_xss_cum_hist_devel(before_cross_int, ax.devel2[1], bins=2)
fig.colorbar(mesh, ax=ax.devel2[1])
fig.colorbar(mesh, ax=ax.devel[1])
mesh = se.plot_xss_cum_hist_devel(before_cross_int, ax.devel[1])
ax.devel[1].colorbar(mesh)
fig
se.plot_xs_hist(interruptibility.mask_after_cross(xs).compressed(), ax, label='uninterrupted')
with saveloaddata.load_res('Sarsa-rand-tiebreak', 'interrupted', data_dir_p) as res:
before_cross_int = interruptibility.mask_after_cross(interruptibility.rsxs2nparray(*res))
se.plot_xs_hist(interruptibility.mask_after_cross(xs).compressed(), ax, label='interrupted')
ax.legend()
fig
stats_experiments.plot_mean_std_change(before_cross_unint, label='uninterrupted')
stats_experiments.plot_mean_std_change(before_cross_int, label='interrupted')
plt.legend()
plt.show()
se.plot_jsd_devel(before_cross_unint)
plt.show()
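# Sketch (my addition, not part of the project's plotting helpers): the underlying
# Jensen-Shannon quantity can be computed directly from two histograms with SciPy.
from scipy.spatial.distance import jensenshannon
h_unint, edges = np.histogram(before_cross_unint.compressed(), bins=25, density=True)
h_int, _ = np.histogram(before_cross_int.compressed(), bins=edges, density=True)
print('JS distance (uninterrupted vs interrupted):', jensenshannon(h_unint, h_int))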
%debug
fig, ax = plt.subplots()
ax.set_xscale('log')
se.plot_jsd_comp_final(before_cross_unint, ax=ax)
ax.set_xlim([0.0, 0.03])
ax.get_xlim()
plt.show()
fig, ax = plt.subplots()
mesh = stats_experiments.plot_xss_cum_hist_change(xs, ax, bins=25)
#plt.colorbar(mesh)
plt.show()
del xs
import pickle
p = '../data/Sarsa-disc-uninterrupted-xe-170221003432.pickle'
with open(p, 'rb') as f:
res = pickle.load(f)
with open(p, 'wb') as f:
pickle.dump(res[0:2], f)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Results
Step2: Q-learning
Step4: Questions
Step5: Interesting
|
1,656
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Matti Hämäläinen <msh@nmr.mgh.harvard.edu>
#
# License: BSD (3-clause)
import mne
from mne import io
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id, tmin, tmax = 1, -0.2, 0.5
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
# Set up pick list: EEG + MEG - bad channels (modify to your needs)
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True, eog=True,
exclude='bads')
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks, baseline=(None, 0), preload=True,
reject=dict(grad=4000e-13, mag=4e-12, eog=150e-6))
evoked = epochs.average() # average epochs to get the evoked response
evoked.plot(time_unit='s')
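# Sketch: the same evoked response can also be summarised spatially with
# topographic maps (the times below are illustrative).
evoked.plot_topomap(times=[0.1, 0.2], time_unit='s')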
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Show result
|
1,657
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False)
# DO NOT MODIFY THIS CELL
def fully_connected(prev_layer, num_units):
    """Create a fully connected layer with the given layer as input and the given number of neurons.
    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param num_units: int
        The size of the layer. That is, the number of units, nodes, or neurons.
    :returns Tensor
        A new fully connected layer
    """
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
# DO NOT MODIFY THIS CELL
def conv_layer(prev_layer, layer_depth):
    """Create a convolutional layer with the given layer as input.
    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param layer_depth: int
        We'll set the strides and number of feature maps based on the layer's depth in the network.
        This is *not* a good way to make a CNN, but it helps us create this example with very little code.
    :returns Tensor
        A new convolutional layer
    """
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)
return conv_layer
# DO NOT MODIFY THIS CELL
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
def fully_connected(prev_layer, num_units, is_training):
    """Create a fully connected layer with the given layer as input and the given number of neurons.
    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param num_units: int
        The size of the layer. That is, the number of units, nodes, or neurons.
    :returns Tensor
        A new fully connected layer
    """
layer = tf.layers.dense(prev_layer, num_units, activation=None)
layer = tf.layers.batch_normalization(layer, training=is_training)
layer = tf.nn.relu(layer)
return layer
def conv_layer(prev_layer, layer_depth, is_training):
    """Create a convolutional layer with the given layer as input.
    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param layer_depth: int
        We'll set the strides and number of feature maps based on the layer's depth in the network.
        This is *not* a good way to make a CNN, but it helps us create this example with very little code.
    :returns Tensor
        A new convolutional layer
    """
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', use_bias=False, activation=None)
conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Add placeholder for is_training
is_training = tf.placeholder(tf.bool)
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
# Update population statistics while training
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels, is_training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels, is_training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training: False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
def fully_connected(prev_layer, num_units, is_training):
    """Create a fully connected layer with the given layer as input and the given number of neurons.
    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param num_units: int
        The size of the layer. That is, the number of units, nodes, or neurons.
    :returns Tensor
        A new fully connected layer
    """
layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None)
gamma = tf.Variable(tf.ones([num_units]))
beta = tf.Variable(tf.zeros([num_units]))
pop_mean = tf.Variable(tf.zeros([num_units]), trainable=False)
pop_variance = tf.Variable(tf.ones([num_units]), trainable=False)
epsilon = 1e-3
def batch_norm_training():
batch_mean, batch_variance = tf.nn.moments(layer, [0])
decay = 0.99
train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
with tf.control_dependencies([train_mean, train_variance]):
return tf.nn.batch_normalization(layer, batch_mean, batch_variance, beta, gamma, epsilon)
def batch_norm_inference():
return tf.nn.batch_normalization(layer, pop_mean, pop_variance, beta, gamma, epsilon)
batch_normalized_output = tf.cond(is_training, batch_norm_training, batch_norm_inference)
return tf.nn.relu(batch_normalized_output)
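# Minimal NumPy sketch of the same arithmetic (illustration only, separate from
# the network): normalize with batch statistics, then apply the learned scale and shift.
import numpy as np
def batch_norm_forward(x, gamma, beta, epsilon=1e-3):
    mean = x.mean(axis=0)
    variance = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(variance + epsilon)
    return gamma * x_hat + beta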
def conv_layer(prev_layer, layer_depth, is_training):
    """Create a convolutional layer with the given layer as input.
    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param layer_depth: int
        We'll set the strides and number of feature maps based on the layer's depth in the network.
        This is *not* a good way to make a CNN, but it helps us create this example with very little code.
    :returns Tensor
        A new convolutional layer
    """
strides = 2 if layer_depth % 3 == 0 else 1
in_channels = prev_layer.get_shape().as_list()[3]
out_channels = layer_depth*4
weights = tf.Variable(
tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))
layer = tf.nn.conv2d(prev_layer, weights, strides=[1, strides, strides, 1], padding='SAME')
gamma = tf.Variable(tf.ones([out_channels]))
beta = tf.Variable(tf.zeros([out_channels]))
pop_mean = tf.Variable(tf.zeros([out_channels]), trainable=False)
pop_variance = tf.Variable(tf.ones([out_channels]), trainable=False)
epsilon = 1e-3
def batch_norm_training():
batch_mean, batch_variance = tf.nn.moments(layer, [0,1,2], keep_dims=False)
decay = 0.99
train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * ( 1 - decay))
with tf.control_dependencies([train_mean, train_variance]):
return tf.nn.batch_normalization(layer, batch_mean, batch_variance, beta, gamma, epsilon)
def batch_norm_inference():
return tf.nn.batch_normalization(layer, pop_mean, pop_variance, beta, gamma, epsilon)
batch_normalized_output = tf.cond(is_training, batch_norm_training, batch_norm_inference)
return tf.nn.relu(batch_normalized_output)
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
is_training = tf.placeholder(tf.bool)
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
is_training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training: False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Batch Normalization using tf.layers.batch_normalization
Step6: We'll use the following function to create convolutional layers in our network. They are very basic: plain 3x3 convolutions with ReLU activations, striding by 2 on every third layer.
Step8: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
Step10: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Step12: TODO
Step13: TODO
Step15: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output: the accuracy when scoring the first 100 test images one at a time, which only comes out right when inference uses the stored population statistics instead of per-batch statistics.
Step17: TODO
Step18: TODO
|
1,658
|
<ASSISTANT_TASK:>
Python Code:
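# Assumed setup, not shown in this excerpt -- the values below are illustrative:
import numpy as np
starting_date = '2013-01-01'
dates_index = pd.date_range(starting_date, periods=6)
sample_df_2 = pd.DataFrame({'Numbers': np.arange(6),
                            'Fruits': ['apple'] * 6}, index=dates_index)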
pd.date_range(starting_date, periods=6)
pd.Series([1,2,3,4,5,6], index=pd.date_range(starting_date, periods=6))
sample_series = pd.Series([1,2,3,4,5,6], index=pd.date_range(starting_date, periods=6))
sample_df_2['Extra Data'] = sample_series * 3 + 1
sample_df_2
sample_df_2.at[dates_index[3],'Fruits'] = 'pear'
sample_df_2
sample_df_2.iat[3,2] = 4444
sample_df_2
second_numpy_array = np.array(np.arange(len(sample_df_2))) *100 + 7
second_numpy_array
sample_df_2['G'] = second_numpy_array
sample_df_2
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setting values by label
Step2: Setting values by position
Step3: Setting by assigning with a numpy array
|
1,659
|
<ASSISTANT_TASK:>
Python Code:
%%capture
!pip install git+https://github.com/deepmind/dm-haiku
!pip install git+https://github.com/jamesvuc/jax-bayes
import haiku as hk
import jax.numpy as jnp
from jax.experimental import optimizers
import jax
import jax_bayes
import sys, os, math, time
import numpy as onp
import numpy as np
from functools import partial
from matplotlib import pyplot as plt
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"
import tensorflow_datasets as tfds
def load_dataset(split, is_training, batch_size):
ds = tfds.load("mnist:3.*.*", split=split).cache().repeat()
if is_training:
ds = ds.shuffle(10 * batch_size, seed=0)
ds = ds.batch(batch_size)
# return tfds.as_numpy(ds)
return iter(tfds.as_numpy(ds))
# load the data into memory and create batch iterators
train_batches = load_dataset("train", is_training=True, batch_size=1_000)
val_batches = load_dataset("train", is_training=False, batch_size=10_000)
test_batches = load_dataset("test", is_training=False, batch_size=10_000)
nclasses = 10
def net_fn(batch, sig):
    """Standard LeNet-300-100 MLP"""
x = batch["image"].astype(jnp.float32) / 255.0
# x has size (1000, 28, 28, 1)
D = np.prod(x.shape[1:]) # 784
# To match initialization of linear layer
# sigma = 1/sqrt(fan-in)
# https://dm-haiku.readthedocs.io/en/latest/api.html#id1
# w_init = hk.initializers.TruncatedNormal(stddev=stddev)
sizes = [D, 300, 100, nclasses]
sigmas = [sig / jnp.sqrt(fanin) for fanin in sizes]
mlp = hk.Sequential(
[
hk.Flatten(),
hk.Linear(sizes[1], w_init=hk.initializers.TruncatedNormal(stddev=sigmas[0]), b_init=jnp.zeros),
jax.nn.relu,
hk.Linear(sizes[2], w_init=hk.initializers.TruncatedNormal(stddev=sigmas[1]), b_init=jnp.zeros),
jax.nn.relu,
hk.Linear(sizes[3], w_init=hk.initializers.TruncatedNormal(stddev=sigmas[2]), b_init=jnp.zeros),
]
)
return mlp(x)
# L2 regularizer will be added to loss
reg = 1e-4
net = hk.transform(partial(net_fn, sig=1))
lr = 1e-3
opt_init, opt_update, opt_get_params = optimizers.rmsprop(lr)
# instantiate the model parameters --- requires a sample batch to get size
params_init = net.init(jax.random.PRNGKey(42), next(train_batches))
# intialize the optimzier state
opt_state = opt_init(params_init)
def loss(params, batch):
logits = net.apply(params, None, batch)
labels = jax.nn.one_hot(batch["label"], 10)
l2_loss = 0.5 * sum(jnp.sum(jnp.square(p)) for p in jax.tree_leaves(params))
softmax_crossent = -jnp.mean(labels * jax.nn.log_softmax(logits))
return softmax_crossent + reg * l2_loss
@jax.jit
def accuracy(params, batch):
preds = net.apply(params, None, batch)
return jnp.mean(jnp.argmax(preds, axis=-1) == batch["label"])
@jax.jit
def train_step(i, opt_state, batch):
params = opt_get_params(opt_state)
dx = jax.grad(loss)(params, batch)
opt_state = opt_update(i, dx, opt_state)
return opt_state
print(params_init["linear"]["w"].shape)
def callback(step, params, train_eval, test_eval, print_every=500):
if step % print_every == 0:
# Periodically evaluate classification accuracy on train & test sets.
train_accuracy = accuracy(params, next(train_eval))
test_accuracy = accuracy(params, next(test_eval))
train_accuracy, test_accuracy = jax.device_get((train_accuracy, test_accuracy))
print(f"[Step {step}] Train / Test accuracy: " f"{train_accuracy:.3f} / {test_accuracy:.3f}.")
%%time
nsteps = 5000
for step in range(nsteps + 1):
opt_state = train_step(step, opt_state, next(train_batches))
params_sgd = opt_get_params(opt_state)
callback(step, params_sgd, val_batches, test_batches)
lr = 5e-3
num_samples = 10 # number of samples to approximate the posterior
init_stddev = 0.01 # 0.1 # params sampled around params_init
# we initialize all weights to 0 since we will be sampling them anyway
# net_bayes = hk.transform(partial(net_fn, sig=0))
sampler_fns = jax_bayes.mcmc.rms_langevin_fns
seed = 0
key = jax.random.PRNGKey(seed)
sampler_init, sampler_propose, sampler_update, sampler_get_params = sampler_fns(
key, num_samples=num_samples, step_size=lr, init_stddev=init_stddev
)
@jax.jit
def accuracy_bayes(params_samples, batch):
# average the logits over the parameter samples
pred_fn = jax.vmap(net.apply, in_axes=(0, None, None))
preds = jnp.mean(pred_fn(params_samples, None, batch), axis=0)
return jnp.mean(jnp.argmax(preds, axis=-1) == batch["label"])
# the log-probability is the negative of the loss
logprob = lambda p, b: -loss(p, b)
# build the mcmc step. This is like the opimization step, but for sampling
@jax.jit
def mcmc_step(i, sampler_state, sampler_keys, batch):
# extract parameters
params = sampler_get_params(sampler_state)
# form a partial eval of logprob on the data
logp = lambda p: logprob(p, batch)
# evaluate *per-sample* gradients
fx, dx = jax.vmap(jax.value_and_grad(logp))(params)
    # generate proposal states for the Markov chains
sampler_prop_state, new_keys = sampler_propose(i, dx, sampler_state, sampler_keys)
# we don't need to re-compute gradients for the accept stage (unadjusted Langevin)
fx_prop, dx_prop = fx, dx
# accept the proposal states for the markov chain
sampler_state, new_keys = sampler_update(i, fx, fx_prop, dx, sampler_state, dx_prop, sampler_prop_state, new_keys)
return jnp.mean(fx), sampler_state, new_keys
def callback_bayes(step, params, val_batches, test_batches, print_every=500):
if step % print_every == 0:
val_acc = accuracy_bayes(params, next(val_batches))
test_acc = accuracy_bayes(params, next(test_batches))
print(f"step = {step}" f" | val acc = {val_acc:.3f}" f" | test acc = {test_acc:.3f}")
%%time
# get a single sample of the params using the usual net.init(...)
params_init = net.init(jax.random.PRNGKey(42), next(train_batches))
# get a SamplerState object with `num_samples` params along dimension 0
# generated by adding Gaussian noise (see sampler_fns(..., init_dist='normal'))
sampler_state, sampler_keys = sampler_init(params_init)
# iterate the the Markov chain
nsteps = 5000
for step in range(nsteps+1):
train_logprob, sampler_state, sampler_keys = \
mcmc_step(step, sampler_state, sampler_keys, next(train_batches))
params_samples = sampler_get_params(sampler_state)
callback_bayes(step, params_samples, val_batches, test_batches)
print(params_samples["linear"]["w"].shape) # 10 samples of the weights for first layer
test_batch = next(test_batches)
from jax_bayes.utils import entropy, certainty_acc
def plot_acc_vs_confidence(predict_fn, test_batch):
# plot how accuracy changes as we increase the required level of certainty
preds = predict_fn(test_batch) # (batch_size, n_classes) array of probabilities
acc, mask = certainty_acc(preds, test_batch["label"], cert_threshold=0)
thresholds = [0.1 * i for i in range(11)]
cert_accs, pct_certs = [], []
for t in thresholds:
cert_acc, cert_mask = certainty_acc(preds, test_batch["label"], cert_threshold=t)
cert_accs.append(cert_acc)
pct_certs.append(cert_mask.mean())
fig, ax = plt.subplots(1)
line1 = ax.plot(thresholds, cert_accs, label="accuracy at certainty", marker="x")
line2 = ax.axhline(y=acc, label="regular accuracy", color="black")
ax.set_ylabel("accuracy")
ax.set_xlabel("certainty threshold")
axb = ax.twinx()
line3 = axb.plot(thresholds, pct_certs, label="pct of certain preds", color="green", marker="x")
axb.set_ylabel("pct certain")
lines = line1 + [line2] + line3
labels = [l.get_label() for l in lines]
ax.legend(lines, labels, loc=6)
return fig, ax
# plugin approximation to posterior predictive
@jax.jit
def posterior_predictive_plugin(params, batch):
logit_pp = net.apply(params, None, batch)
return jax.nn.softmax(logit_pp, axis=-1)
def pred_fn(batch):
return posterior_predictive_plugin(params_sgd, batch)
fig, ax = plot_acc_vs_confidence(pred_fn, test_batch)
plt.savefig("acc-vs-conf-sgd.pdf")
plt.show()
def posterior_predictive_bayes(params_sampled, batch):
    """Computes the posterior predictive P(class = c | inputs, params) using a histogram"""
pred_fn = lambda p: net.apply(p, jax.random.PRNGKey(0), batch)
pred_fn = jax.vmap(pred_fn)
logit_samples = pred_fn(params_sampled) # n_samples x batch_size x n_classes
pred_samples = jnp.argmax(logit_samples, axis=-1) # n_samples x batch_size
n_classes = logit_samples.shape[-1]
batch_size = logit_samples.shape[1]
probs = np.zeros((batch_size, n_classes))
for c in range(n_classes):
idxs = pred_samples == c
probs[:, c] = idxs.sum(axis=0)
return probs / probs.sum(axis=1, keepdims=True)
def pred_fn(batch):
return posterior_predictive_bayes(params_samples, batch)
fig, ax = plot_acc_vs_confidence(pred_fn, test_batch)
plt.savefig("acc-vs-conf-sgld.pdf")
plt.show()
fashion_ds = tfds.load("fashion_mnist:3.*.*", split="test").cache().repeat()
fashion_test_batches = tfds.as_numpy(fashion_ds.batch(10_000))
fashion_test_batches = iter(fashion_test_batches)
fashion_batch = next(fashion_test_batches)
def pred_fn(batch):
return posterior_predictive_plugin(params_sgd, batch)
fig, ax = plot_acc_vs_confidence(pred_fn, fashion_batch)
plt.savefig("acc-vs-conf-sgd-fashion.pdf")
plt.show()
def pred_fn(batch):
return posterior_predictive_bayes(params_samples, batch)
fig, ax = plot_acc_vs_confidence(pred_fn, fashion_batch)
plt.savefig("acc-vs-conf-sgld-fashion.pdf")
plt.show()
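# Sketch: a crude distribution-shift signal is the mean predictive entropy, which
# should come out noticeably higher on Fashion-MNIST than on the MNIST test set.
def mean_entropy(probs, eps=1e-12):
    return float(np.mean(-np.sum(probs * np.log(probs + eps), axis=1)))
print('mean entropy (MNIST):  ', mean_entropy(posterior_predictive_bayes(params_samples, test_batch)))
print('mean entropy (Fashion):', mean_entropy(posterior_predictive_bayes(params_samples, fashion_batch)))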
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data
Step3: Model
Step4: SGD
Step5: SGLD
Step6: Uncertainty analysis
Step7: SGD
Step9: SGLD
Step10: Distribution shift
Step11: SGD
Step12: SGLD
|
1,660
|
<ASSISTANT_TASK:>
Python Code:
#invite people for the Kaggle party
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from scipy.stats import norm
from sklearn.preprocessing import StandardScaler
from scipy import stats
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
# Download the files using kaggle-cli
# Note: each ! command runs in its own shell, so cd must be chained with the download
!cd ./input && kg download -u $KAGGLE_USERNAME -p $KAGGLE_PWD -c house-prices-advanced-regression-techniques
#bring in the six packs
df_train = pd.read_csv('./input/train.csv')
#check the decoration
df_train.columns
#descriptive statistics summary
df_train['SalePrice'].describe()
#histogram
sns.distplot(df_train['SalePrice']);
#skewness and kurtosis
print("Skewness: %f" % df_train['SalePrice'].skew())
print("Kurtosis: %f" % df_train['SalePrice'].kurt())
#scatter plot grlivarea/saleprice
var = 'GrLivArea'
data = pd.concat([df_train['SalePrice'], df_train[var]], axis=1)
data.plot.scatter(x=var, y='SalePrice', ylim=(0,800000));
#scatter plot totalbsmtsf/saleprice
var = 'TotalBsmtSF'
data = pd.concat([df_train['SalePrice'], df_train[var]], axis=1)
data.plot.scatter(x=var, y='SalePrice', ylim=(0,800000));
#box plot overallqual/saleprice
var = 'OverallQual'
data = pd.concat([df_train['SalePrice'], df_train[var]], axis=1)
f, ax = plt.subplots(figsize=(8, 6))
fig = sns.boxplot(x=var, y="SalePrice", data=data)
fig.axis(ymin=0, ymax=800000);
var = 'YearBuilt'
data = pd.concat([df_train['SalePrice'], df_train[var]], axis=1)
f, ax = plt.subplots(figsize=(16, 8))
fig = sns.boxplot(x=var, y="SalePrice", data=data)
fig.axis(ymin=0, ymax=800000);
plt.xticks(rotation=90);
#correlation matrix
corrmat = df_train.corr()
f, ax = plt.subplots(figsize=(12, 9))
sns.heatmap(corrmat, vmax=.8, square=True);
#saleprice correlation matrix
k = 10 #number of variables for heatmap
cols = corrmat.nlargest(k, 'SalePrice')['SalePrice'].index
cm = np.corrcoef(df_train[cols].values.T)
sns.set(font_scale=1.25)
hm = sns.heatmap(cm, cbar=True, annot=True, square=True, fmt='.2f', annot_kws={'size': 10}, yticklabels=cols.values, xticklabels=cols.values)
plt.show()
#scatterplot
sns.set()
cols = ['SalePrice', 'OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'FullBath', 'YearBuilt']
sns.pairplot(df_train[cols], size = 2.5)
plt.show();
#missing data
total = df_train.isnull().sum().sort_values(ascending=False)
percent = (df_train.isnull().sum()/df_train.isnull().count()).sort_values(ascending=False)
missing_data = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
missing_data.head(20)
#dealing with missing data
df_train = df_train.drop(missing_data[missing_data['Total'] > 1].index, axis=1)
df_train = df_train.drop(df_train.loc[df_train['Electrical'].isnull()].index)
df_train.isnull().sum().max() #just checking that there's no missing data missing...
#standardizing data
saleprice_scaled = StandardScaler().fit_transform(df_train['SalePrice'].values[:, np.newaxis])
low_range = saleprice_scaled[saleprice_scaled[:,0].argsort()][:10]
high_range= saleprice_scaled[saleprice_scaled[:,0].argsort()][-10:]
print('outer range (low) of the distribution:')
print(low_range)
print('\nouter range (high) of the distribution:')
print(high_range)
#bivariate analysis saleprice/grlivarea
var = 'GrLivArea'
data = pd.concat([df_train['SalePrice'], df_train[var]], axis=1)
data.plot.scatter(x=var, y='SalePrice', ylim=(0,800000));
#deleting points
df_train.sort_values(by = 'GrLivArea', ascending = False)[:2]
df_train = df_train.drop(df_train[df_train['Id'] == 1299].index)
df_train = df_train.drop(df_train[df_train['Id'] == 524].index)
#bivariate analysis saleprice/grlivarea
var = 'TotalBsmtSF'
data = pd.concat([df_train['SalePrice'], df_train[var]], axis=1)
data.plot.scatter(x=var, y='SalePrice', ylim=(0,800000));
#histogram and normal probability plot
sns.distplot(df_train['SalePrice'], fit=norm);
fig = plt.figure()
res = stats.probplot(df_train['SalePrice'], plot=plt)
#applying log transformation
df_train['SalePrice'] = np.log(df_train['SalePrice'])
#transformed histogram and normal probability plot
sns.distplot(df_train['SalePrice'], fit=norm);
fig = plt.figure()
res = stats.probplot(df_train['SalePrice'], plot=plt)
#histogram and normal probability plot
sns.distplot(df_train['GrLivArea'], fit=norm);
fig = plt.figure()
res = stats.probplot(df_train['GrLivArea'], plot=plt)
#data transformation
df_train['GrLivArea'] = np.log(df_train['GrLivArea'])
#transformed histogram and normal probability plot
sns.distplot(df_train['GrLivArea'], fit=norm);
fig = plt.figure()
res = stats.probplot(df_train['GrLivArea'], plot=plt)
#histogram and normal probability plot
sns.distplot(df_train['TotalBsmtSF'], fit=norm);
fig = plt.figure()
res = stats.probplot(df_train['TotalBsmtSF'], plot=plt)
#create column for new variable (one is enough because it's a binary categorical feature)
#if area>0 it gets 1, for area==0 it gets 0
df_train['HasBsmt'] = pd.Series(len(df_train['TotalBsmtSF']), index=df_train.index)
df_train['HasBsmt'] = 0
df_train.loc[df_train['TotalBsmtSF']>0,'HasBsmt'] = 1
#transform data
df_train.loc[df_train['HasBsmt']==1,'TotalBsmtSF'] = np.log(df_train['TotalBsmtSF'])
#histogram and normal probability plot
sns.distplot(df_train[df_train['TotalBsmtSF']>0]['TotalBsmtSF'], fit=norm);
fig = plt.figure()
res = stats.probplot(df_train[df_train['TotalBsmtSF']>0]['TotalBsmtSF'], plot=plt)
#scatter plot
plt.scatter(df_train['GrLivArea'], df_train['SalePrice']);
#scatter plot
plt.scatter(df_train[df_train['TotalBsmtSF']>0]['TotalBsmtSF'], df_train[df_train['TotalBsmtSF']>0]['SalePrice']);
#convert categorical variable into dummy
df_train = pd.get_dummies(df_train)
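# Sketch (my addition, beyond the original exploration): a quick baseline fit to
# confirm the cleaned features are usable; the model and split are illustrative.
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
X = df_train.drop(['SalePrice', 'Id'], axis=1)
y = df_train['SalePrice']
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
print('Baseline R^2 on held-out data:', LinearRegression().fit(X_tr, y_tr).score(X_te, y_te))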
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. So... What can we expect?
Step2: 'Very well... It seems that your minimum price is larger than zero. Excellent! You don't have one of those personal traits that would destroy my model! Do you have any picture that you can send me? I don't know... like, you in the beach... or maybe a selfie in the gym?'
Step3: 'Ah! I see that you use seaborn makeup when you're going out... That's so elegant! I also see that you
Step4: 'Amazing! If my love calculator is correct, our success probability is 97.834657%. I think we should meet again! Please, keep my number and give me a call if you're free next Friday. See you in a while, crocodile!'
Step5: Hmmm... It seems that 'SalePrice' and 'GrLivArea' are really old friends, with a <b>linear relationship.</b>
Step6: 'TotalBsmtSF' is also a great friend of 'SalePrice' but this seems a much more emotional relationship! Everything is ok and suddenly, in a <b>strong linear (exponential?)</b> reaction, everything changes. Moreover, it's clear that sometimes 'TotalBsmtSF' closes in itself and gives zero credit to 'SalePrice'.
Step7: Like all the pretty girls, 'SalePrice' enjoys 'OverallQual'. Note to self
Step8: Although it's not a strong tendency, I'd say that 'SalePrice' is more prone to spend more money in new stuff than in old relics.
Step9: In my opinion, this heatmap is the best way to get a quick overview of our 'plasma soup' and its relationships. (Thank you @seaborn!)
Step10: According to our crystal ball, these are the variables most correlated with 'SalePrice'. My thoughts on this
Step11: Although we already know some of the main figures, this mega scatter plot gives us a reasonable idea about variables relationships.
Step12: Let's analyse this to understand how to handle the missing data.
Step13: Out liars!
Step14: How 'SalePrice' looks with her new clothes
Step15: What has been revealed
Step16: We can feel tempted to eliminate some observations (e.g. TotalBsmtSF > 3000) but I suppose it's not worth it. We can live with that, so we'll not do anything.
Step17: Ok, 'SalePrice' is not normal. It shows 'peakedness', positive skewness and does not follow the diagonal line.
Step18: Done! Let's check what's going on with 'GrLivArea'.
Step19: Tastes like skewness... Avada kedavra!
Step20: Next, please...
Step21: Ok, now we are dealing with the big boss. What do we have here?
Step22: In the search for writing 'homoscedasticity' right at the first attempt
Step23: Older versions of this scatter plot (previous to log transformations), had a conic shape (go back and check 'Scatter plots between 'SalePrice' and correlated variables (move like Jagger style)'). As you can see, the current scatter plot doesn't have a conic shape anymore. That's the power of normality! Just by ensuring normality in some variables, we solved the homoscedasticity problem.
Step24: We can say that, in general, 'SalePrice' exhibit equal levels of variance across the range of 'TotalBsmtSF'. Cool!
|
1,661
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import matplotlib.pyplot as plt
from scipy.sparse import spdiags
from scipy.sparse.linalg import lsqr as splsqr
from spgl1.lsqr import lsqr
from spgl1 import spgl1, spg_lasso, spg_bp, spg_bpdn, spg_mmv
from spgl1.spgl1 import norm_l1nn_primal, norm_l1nn_dual, norm_l1nn_project
from spgl1.spgl1 import norm_l12nn_primal, norm_l12nn_dual, norm_l12nn_project
# Initialize random number generators
np.random.seed(43273289)
# Create random m-by-n encoding matrix and sparse vector
m = 50
n = 128
k = 14
[A,Rtmp] = np.linalg.qr(np.random.randn(n,m),'reduced')
A = A.T
p = np.random.permutation(n)
p = p[0:k]
x0 = np.zeros(n)
x0[p] = np.random.randn(k)
b = A.dot(x0)
tau = np.pi
x,resid,grad,info = spg_lasso(A, b, tau, verbosity=1)
print()
print('%s%s%s' % ('-'*35,' Solution ','-'*35))
print('nonzeros(x) = %i, ||x||_1 = %12.6e, ||x||_1 - pi = %13.6e' % \
(np.sum(abs(x)>1e-5), np.linalg.norm(x,1), np.linalg.norm(x,1)-np.pi))
print('%s' % ('-'*80))
b = A.dot(x0) # signal
x,resid,grad,info = spg_bp(A, b, verbosity=2)
plt.figure()
plt.plot(x,'b')
plt.plot(x0,'ro')
plt.legend(('Recovered coefficients','Original coefficients'))
plt.title('Basis Pursuit');
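# Quick sanity check (sketch): with only k=14 nonzeros, basis pursuit should
# recover x0 almost exactly.
print('recovery error ||x - x0||_2 = %.3e' % np.linalg.norm(x - x0))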
plt.figure()
plt.plot(info['xnorm1'], info['rnorm2'], '.-k')
plt.xlabel(r'$||x||_1$')
plt.ylabel(r'$||r||_2$')
plt.title('Sampled Pareto curve')
plt.figure()
plt.plot(np.arange(info['niters']), info['rnorm2']/max(info['rnorm2']), '.-k')
plt.plot(np.arange(info['niters']), info['xnorm1']/max(info['xnorm1']), '.-r')
plt.xlabel(r'#iter')
plt.ylabel(r'$||r||_2 & ||x||_1$');
plt.title('Cost functions');
b = A.dot(x0) + np.random.randn(m) * 0.075
sigma = 0.10  # desired ||Ax - b||_2
x,resid,grad,info = spg_bpdn(A, b, sigma, iter_lim=10, verbosity=2)
plt.figure()
plt.plot(x,'b')
plt.plot(x0,'ro')
plt.legend(('Recovered coefficients','Original coefficients'))
plt.title('Basis Pursuit Denoise');
x0 = np.zeros(n)
x0[p] = np.abs(np.random.randn(k))
b = A.dot(x0) # signal
x,resid,grad,info = spg_bp(A, b, iter_lim=20, verbosity=1)
xnn,residnn,gradnn,infonn = spg_bp(A, b, iter_lim=20, verbosity=1,
project=norm_l1nn_project,
primal_norm=norm_l1nn_primal,
dual_norm=norm_l1nn_dual)
plt.figure()
plt.plot(x,'b')
plt.plot(xnn,'--g')
plt.plot(x0,'ro')
plt.legend(('Recovered coefficients', 'Recovered coefficients NNnorms','Original coefficients'))
plt.title('Basis Pursuit');
from scipy.sparse.linalg import LinearOperator
class partialFourier(LinearOperator):
def __init__(self, idx, n):
self.idx = idx
self.n = n
self.shape = (len(idx), n)
self.dtype = np.complex128
    def _matvec(self, x):
        # y = P(idx) * FFT(x): subsample the normalized FFT at rows idx
        z = np.fft.fft(x) / np.sqrt(self.n)
        return z[self.idx]
    def _rmatvec(self, x):
        z = np.zeros(self.n, dtype=complex)
        z[self.idx] = x
        return np.fft.ifft(z) * np.sqrt(self.n)
# Create partial Fourier operator with rows idx
idx = np.random.permutation(n)
idx = idx[0:m]
opA = partialFourier(idx, n)
# Create sparse coefficients and b = A * z0
z0 = np.zeros(n,dtype=complex)
z0[p] = np.random.randn(k) + 1j * np.random.randn(k)
b = opA.matvec(z0)
z,resid,grad,info = spg_bp(opA,b, verbosity=2)
plt.figure()
plt.plot(z.real,'b+',markersize=15.0)
plt.plot(z0.real,'bo')
plt.plot(z.imag,'r+',markersize=15.0)
plt.plot(z0.imag,'ro')
plt.legend(('Recovered (real)', 'Original (real)', 'Recovered (imag)', 'Original (imag)'))
plt.title('Complex Basis Pursuit');
b = A.dot(x0)
x = np.zeros(n)
tau = np.linspace(0, 1.05 * np.linalg.norm(x0, 1), 100)
tau[0] = 1e-10
phi = np.zeros(tau.size)
for i in range(tau.size):
x,r,grad,info = spgl1(A, b, tau[i], 0, x, iter_lim=1000)
phi[i] = np.linalg.norm(r)
plt.figure()
plt.plot(tau,phi, '.')
plt.title('Pareto frontier')
plt.xlabel('||x||_1')
plt.ylabel('||Ax-b||_2');
# Sparsify vector x0 a bit more to get exact recovery
k = 9
x0 = np.zeros(n)
x0[p[0:k]] = np.random.randn(k)
# Set up weights w and vector b
w = np.random.rand(n) + 0.1 # Weights
b = A.dot(x0/w) # Signal
# Solution
x,resid,grad,info = spg_bp(A, b, **dict(iter_lim=1000, weights=w))
# Reconstructed solution, with weighting
x1 = x * w
plt.figure()
plt.plot(x1,'b')
plt.plot(x0,'ro')
plt.legend(('Coefficients','Original coefficients'))
plt.title('Weighted Basis Pursuit');
# Create problem
m = 100
n = 150
k = 12
l = 6;
A = np.random.randn(m, n)
p = np.random.permutation(n)[:k]
X0 = np.zeros((n, l))
X0[p, :] = np.random.randn(k, l)
weights = 3 * np.random.rand(n) + 0.1
W = 1/weights * np.eye(n)
B = A.dot(W).dot(X0)
# Solve unweighted version
x_uw, _, _, _ = spg_mmv(A.dot(W), B, 0, **dict(verbosity=1))
# Solve weighted version
x_w, _, _, _ = spg_mmv(A, B, 0, **dict(verbosity=2, weights=weights))
x_w = spdiags(weights, 0, n, n).dot(x_w)
# Plot results
plt.figure()
plt.plot(x_uw[:, 0], 'b-', label='Coefficients (1)')
plt.plot(x_w[:, 0], 'g--', label='Coefficients (2)')
plt.plot(X0[:, 0], 'ro', label='Original coefficients')
plt.legend()
plt.title('Weighted Basis Pursuit with Multiple Measurement Vectors');
plt.figure()
plt.plot(x_uw[:, 1], 'b', label='Coefficients (1)')
plt.plot(x_w[:, 1], 'g--', label='Coefficients (2)')
plt.plot(X0[:, 1], 'ro', label='Original coefficients')
plt.legend()
plt.title('Weighted Basis Pursuit with Multiple Measurement Vectors');
# Create problem
m = 100
n = 150
k = 12
l = 6;
A = np.random.randn(m, n)
p = np.random.permutation(n)[:k]
X0 = np.zeros((n, l))
X0[p, :] = np.abs(np.random.randn(k, l))
B = A.dot(X0)
X, _, _, _ = spg_mmv(A, B, 0, iter_lim=10, verbosity=1)
XNN, _, _, _ = spg_mmv(A, B, 0, iter_lim=10, verbosity=1,
project=norm_l12nn_project,
primal_norm=norm_l12nn_primal,
dual_norm=norm_l12nn_dual)
print('Negative entries in X:  ', np.any(X < 0))
print('Negative entries in XNN:', np.any(XNN < 0))
# Plot results
plt.figure()
plt.plot(X[:, 0], 'b-', label='Coefficients')
plt.plot(XNN[:, 0], 'g--', label='Coefficients NN')
plt.plot(X0[:, 0], 'ro', label='Original coefficients')
plt.legend()
plt.title('Weighted Basis Pursuit with Multiple Measurement Vectors');
plt.figure()
plt.plot(X[:, 1], 'b', label='Coefficients')
plt.plot(XNN[:, 1], 'g--', label='Coefficients NN')
plt.plot(X0[:, 1], 'ro', label='Original coefficients')
plt.legend()
plt.title('Weighted Basis Pursuit with Multiple Measurement Vectors');
def Aprodfun(A, x, mode):
    # mode 1: forward product A @ x; otherwise the adjoint product A^H @ x
    if mode == 1:
        return np.dot(A, x)
    return np.dot(np.conj(A.T), x)
n = 10
m = 20
A = np.random.normal(0, 1, (m, n))
Aprod = lambda x, mode: Aprodfun(A, x, mode)
x = np.ones(n)
y = A.dot(x)
damp = 1e-5
aTol = 1e-5
bTol = 1e-5
conLim = 1e12
itnMaxLSQR = 100
showLSQR = 2
xinv, istop, itn, r1norm, r2norm, anorm, acond, arnorm, xnorm, var = \
lsqr(m, n, Aprod, y, damp, aTol, bTol, conLim, itnMaxLSQR, showLSQR)
xinv_sp, istop_sp, itn_sp, r1norm_sp, r2norm_sp, anorm_sp, acond_sp, arnorm_sp, xnorm_sp, var = \
splsqr(A, y, damp, aTol, bTol, conLim, itnMaxLSQR, showLSQR)
print('istop=%d, itn=%d, r1norm=%.2f, '
'r2norm=%.2f, anorm=%.2f, acond=%.2f, arnorm=%.2f, xnorm=%.2f' \
%(istop, itn, r1norm, r2norm, anorm, acond, arnorm, xnorm))
print('istop=%d, itn=%d, r1norm=%.2f, '
'r2norm=%.2f, anorm=%.2f, acond=%.2f, arnorm=%.2f, xnorm=%.2f' \
%(istop_sp, itn_sp, r1norm_sp, r2norm_sp, anorm_sp, acond_sp, arnorm_sp, xnorm_sp))
plt.plot(x, lw=8)
plt.plot(xinv, '--g', lw=4)
plt.plot(xinv_sp, '--r')
plt.ylim(0, 2);
# Create random m-by-n encoding matrix and sparse vector
np.random.seed(0)
m = 50
n = 128
k = 14
[A, Rtmp] = np.linalg.qr(np.random.randn(n,m),'reduced')
A = A.T
p = np.random.permutation(n)
p = p[0:k]
x0 = np.zeros(n)
x0[p] = np.random.randn(k)
# Basis pursuit with subspace minimization
b = A.dot(x0) # signal
x,resid,grad,info = spg_bp(A, b, subspace_min=False, verbosity=2)
x,resid,grad,info_sub = spg_bp(A, b, subspace_min=True, verbosity=2)
plt.figure()
plt.plot(np.arange(info['niters']), info['rnorm2']/max(info['rnorm2']), '.-k',
label='without subspace min')
plt.plot(np.arange(info_sub['niters']), info_sub['rnorm2']/max(info_sub['rnorm2']), '.-r',
label='with subspace min')
plt.xlabel(r'#iter')
plt.ylabel(r'$||r||_2$')
plt.legend();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lasso
Step2: Solve the underdetermined LASSO problem for $||x||_1 <= \pi$
Step3: BP
Step4: BPDN
Step5: BPDN with non-negative solution
Step6: BP with complex numbers
Step7: Pareto Frontier
Step8: Weighted BP
Step9: MMV
Step10: MMV with non-negative solution
Step11: LSQR
Step12: Subspace minimization in SPGL1
|
1,662
|
<ASSISTANT_TASK:>
Python Code:
from qutip import *
import matplotlib.pyplot as plt
import numpy as np
boundary_condition = "periodic"
cells = 3
Periodic_Atom_Chain = Lattice1d(num_cell=cells, boundary = boundary_condition)
Periodic_Atom_Chain
H = Periodic_Atom_Chain.display_unit_cell(label_on = True)
T = Periodic_Atom_Chain.display_lattice()
print(H[0][0])
print(T)
boundary_condition = "periodic"
cells = 3
cell_num_site = 2
cell_site_dof = [2,3] # It could be 2 orbitals and 3 spins per sites or
# any other combination of such degrees of freedom
lattice_3223 = Lattice1d(num_cell=cells, boundary = boundary_condition,
cell_num_site = cell_num_site, cell_site_dof = cell_site_dof)
val_s = ['site0', 'site1', 'site2']
val_t = [' orb0', ' orb1']
(cell_H_form,inter_cell_T_form,cell_H,inter_cell_T) = cell_structures( val_s, val_t)
cell_H_form[0][5]
cell_H[0][5] = -1-0.5j # Calculated value from hand calculation
cell_H[5][0] = -1+0.5j # keeping it Hermitian
cell_H_form[2][5]
cell_H[2][5] = -1+0.25j # Calculated value from hand calculation
cell_H[5][2] = -1-0.25j # keeping it Hermitian
inter_cell_T_form[5][0]
inter_cell_T[5][0] = -0.5
inter_cell_T[0][5] = -0.5
cell_H = Qobj(cell_H)
inter_cell_T = Qobj(inter_cell_T)
lattice_324 = Lattice1d(num_cell=3, boundary = "periodic", cell_num_site = 3, cell_site_dof = [2], Hamiltonian_of_cell = cell_H, inter_hop = inter_cell_T )
H = lattice_324.display_unit_cell(label_on = True)
T = lattice_324.display_lattice()
H[1][2]
lattice_3224 = Lattice1d(num_cell=3, boundary = "periodic", \
cell_num_site = 2, cell_site_dof = [4,2])
psi0 = lattice_3224.basis(1,0,[2,1])
print( psi0.dag() ) # Because plotting the dag() takes up less space
lattice_412 = Lattice1d(num_cell=4, boundary = "periodic", cell_num_site = 1, cell_site_dof = [2])
lattice_412.x()
lattice_411 = Lattice1d(num_cell=4, boundary = "periodic", cell_num_site = 1, cell_site_dof = [1])
k = lattice_411.k()
print(k)
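# Sanity check (a sketch): the crystal momentum operator should be Hermitian,
# with eigenvalues spaced by 2*pi/N for an N-cell ring (here N = 4; the exact
# set of eigenvalues depends on QuTiP's convention).
print('Hermitian:', k.isherm)
print('Eigenvalues:', np.round(k.eigenenergies(), 4))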
lattice_412 = Lattice1d(num_cell=4, boundary = "periodic", cell_num_site = 1, cell_site_dof = [2])
op = Qobj(np.array([[0,1],[1,0]]) )
op_sp = lattice_412.operator_at_cells(op, cells = [1,2])
op_all = lattice_412.distribute_operator(op)
print(op_sp)
print(op_all)
boundary_condition = "periodic"
cells = 8
Periodic_Atom_Chain = Lattice1d(num_cell=cells, boundary = boundary_condition)
Hamt = Periodic_Atom_Chain.Hamiltonian()
print(Hamt)
Periodic_Atom_Chain.plot_dispersion()
[knxA,val_kns] = Periodic_Atom_Chain.get_dispersion()
print(knxA)
print(val_kns)
num_cellN = 51
discrete_space_periodic = Lattice1d(num_cell=num_cellN, boundary = "periodic", cell_num_site = 1,
cell_site_dof = [1])
H0 = discrete_space_periodic.Hamiltonian()
xs = np.linspace(0, num_cellN-1, num_cellN)
sig = 3 # A standard deviation of 3
xm = num_cellN //2 + 15
psi0 = 1/np.sqrt(2*np.pi*sig**2) * np.exp(-(xs-xm)**2/2/sig/sig)
psi0 = Qobj(np.sqrt(psi0))
tlist = np.linspace(0,24,801)
options = Options(atol=1e-12)
options.store_states = True
states_Gauss_0 = mesolve(H0, psi0, tlist, [], [], options=options)
t0 = 0
t1 = 180
t2 = 360
t3 = 540
t4 = 720
x_t0 = states_Gauss_0.states[t0]
x_t1 = states_Gauss_0.states[t1]
x_t2 = states_Gauss_0.states[t2]
x_t3 = states_Gauss_0.states[t3]
x_t4 = states_Gauss_0.states[t4]
plt.plot(xs, np.abs(x_t0))
plt.plot(xs, np.abs(x_t1))
plt.plot(xs, np.abs(x_t2))
plt.plot(xs, np.abs(x_t3))
plt.plot(xs, np.abs(x_t4))
plt.xlabel('space', fontsize=14)
plt.ylabel('Wavepacket shape', fontsize=14)
plt.legend(['t0', 't1', 't2', 't3', 't4'])
plt.show()
plt.close()
sig = 3
xm = num_cellN //2 + 15
psi0 = 1/np.sqrt(2*np.pi*sig**2) * np.exp(-(xs-xm)**2/2/sig/sig)
psi0 = Qobj(np.sqrt(psi0) * np.exp(np.pi*1j*xs/3) )
k = discrete_space_periodic.k()
tlist = np.linspace(0,24,801)
options = Options(atol=1e-12)
options.store_states = True
states_Gauss_k = mesolve(H0, psi0, tlist, [], [k], options=options)
plt.plot(tlist, states_Gauss_k.expect[0])
plt.xlabel('Time', fontsize=14)
plt.ylabel(r'$\langle k \rangle$', fontsize=14)
plt.ylim([np.pi/3.01, np.pi/2.99])
plt.show()
plt.close()
np.pi/3
t0 = 0
t1 = 40
t2 = 80
t3 = 120
t4 = 160
x_t0 = states_Gauss_k.states[t0]
x_t1 = states_Gauss_k.states[t1]
x_t2 = states_Gauss_k.states[t2]
x_t3 = states_Gauss_k.states[t3]
x_t4 = states_Gauss_k.states[t4]
plt.plot(xs, np.abs(x_t0))
plt.plot(xs, np.abs(x_t1))
plt.plot(xs, np.abs(x_t2))
plt.plot(xs, np.abs(x_t3))
plt.plot(xs, np.abs(x_t4))
plt.xlabel('space', fontsize=14)
plt.ylabel('Wavepacket shape', fontsize=14)
plt.legend(['t0', 't1', 't2', 't3', 't4'])
plt.show()
plt.close()
sig = 3
xm = num_cellN //2 + 5
psi0 = 1/np.sqrt(2*np.pi*sig**2) * np.exp(-(xs-xm)**2/2/sig/sig)
psi0 = Qobj(np.sqrt(psi0) * np.exp(np.pi*1j*xs/3) )
discrete_space_aperiodic = Lattice1d(num_cell=num_cellN, boundary = "aperiodic",
cell_num_site = 1, cell_site_dof = [1])
psiL = discrete_space_aperiodic.basis(0,0,[0])
psiR = discrete_space_aperiodic.basis(num_cellN-1,0,[0])
Ha = discrete_space_aperiodic.Hamiltonian()
H_p = 1e4*(psiL * psiL.dag() + psiR * psiR.dag() )
tlist = np.linspace(0,30,5001)
options = Options(atol=1e-12)
options.store_states = True
states_Gauss_k_HW = mesolve(Ha+H_p, psi0, tlist, [], [k], options=options)
# Warning: This calculation takes upto a minute
t0 = 0
t1 = 1000
t2 = 2000
t3 = 3000
t4 = 4000
t5 = 5000
x_t0 = states_Gauss_k_HW.states[t0]
x_t1 = states_Gauss_k_HW.states[t1]
x_t2 = states_Gauss_k_HW.states[t2]
x_t3 = states_Gauss_k_HW.states[t3]
x_t4 = states_Gauss_k_HW.states[t4]
x_t5 = states_Gauss_k_HW.states[t5]
plt.plot(xs, np.abs(x_t0))
plt.plot(xs, np.abs(x_t1))
plt.plot(xs, np.abs(x_t2))
plt.plot(xs, np.abs(x_t3))
plt.plot(xs, np.abs(x_t4))
plt.plot(xs, np.abs(x_t5))
plt.xlabel('space', fontsize=14)
plt.ylabel('Wavepacket shape', fontsize=14)
plt.legend(['t0', 't1', 't2', 't3', 't4', 't5'])
plt.show()
plt.close()
plt.plot(tlist, states_Gauss_k_HW.expect[0])
plt.xlabel('Time', fontsize=14)
plt.ylabel(r'$\langle k \rangle$', fontsize=14)
plt.show()
plt.close()
kd = discrete_space_aperiodic.k()
psi_f = states_Gauss_k_HW.states[3200]
kex0 = psi0.dag() * kd * psi0
kexf = psi_f.dag() * kd * psi_f
print('Initital momentum: ', kex0)
print('Final momentum: ', kexf)
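# Consistency check (a sketch): a hard-wall bounce reverses the momentum, so
# the two expectation values above should sum to (approximately) zero.
print('kex0 + kexf =', kex0 + kexf)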
num_cellN = 51
discrete_space_periodic = Lattice1d(num_cell=num_cellN, boundary = "periodic", cell_num_site = 1,
cell_site_dof = [1])
H0 = discrete_space_periodic.Hamiltonian()
xp = discrete_space_periodic.x()
kp = discrete_space_periodic.k()
xs = np.linspace(0, num_cellN-1, num_cellN)
sig = 3 # A standard deviation of 3
xm = num_cellN //2
psi0 = 1/np.sqrt(2*np.pi*sig**2) * np.exp(-(xs-xm)**2/2/sig/sig)
psi0 = Qobj(np.sqrt(psi0))
lat_trR = np.diag(np.zeros(num_cellN-1)+1, -1)
lat_trR[0, num_cellN-1] = 1 # translate right
lat_trL = np.diag(np.zeros(num_cellN-1)+1, 1)
lat_trL[num_cellN-1, 0] = 1 # translate left
trR = Qobj(lat_trR)
trL = Qobj(lat_trL)
gamma = 2
col_op = [np.sqrt(gamma) * trR ]
tlistC = np.linspace(0,24,801)
options = Options(atol=1e-12)
options.store_states = True
rho0 = psi0 * psi0.dag()
states_Gauss_0 = mesolve(H0, rho0, tlistC, col_op, [kp], options=options)
plt.plot(tlistC, states_Gauss_0.expect[0])
plt.xlabel('Time', fontsize=14)
plt.ylabel(r'$\langle k \rangle$', fontsize=14)
plt.ylim([-1e-8, 1e-8])
plt.show()
plt.close()
t0 = 0
t1 = 140
t2 = 280
t3 = 420
t4 = 560
diag_x0 = np.diag(states_Gauss_0.states[t0])
diag_x1 = np.diag(states_Gauss_0.states[t1])
diag_x2 = np.diag(states_Gauss_0.states[t2])
diag_x3 = np.diag(states_Gauss_0.states[t3])
diag_x4 = np.diag(states_Gauss_0.states[t4])
plt.plot(xs, np.abs(diag_x0))
plt.plot(xs, np.abs(diag_x1))
plt.plot(xs, np.abs(diag_x2))
plt.plot(xs, np.abs(diag_x3))
plt.plot(xs, np.abs(diag_x4))
plt.xlabel('space', fontsize=14)
plt.ylabel('Wavepacket shape', fontsize=14)
plt.title('Nonunitary evolution')
plt.show()
plt.close()
cells = 4
cell_num_site = 1
cell_site_dof = [2]
J = 2
### For eta = 0
eta = 0
H_cell = Qobj(np.array([[0, J * np.sin(eta)], [J * np.sin(eta), 0]]))
inter_cell_T = (J/2) * Qobj(np.array([[np.exp(eta * 1j), 1], [1, np.exp(-eta*1j)]]))
CROW_lattice = Lattice1d(num_cell=cells, boundary = "periodic", cell_num_site = 1,
cell_site_dof = [2], Hamiltonian_of_cell = H_cell,
inter_hop = inter_cell_T )
CROW_lattice.plot_dispersion()
### For eta = pi/4
eta = np.pi/4
H_cell = Qobj(np.array([[0, J * np.sin(eta)], [J * np.sin(eta), 0]]))
inter_cell_T = (J/2) * Qobj(np.array([[np.exp(eta * 1j), 1], [1, np.exp(-eta*1j)]]))
CROW_lattice = Lattice1d(num_cell=cells, boundary = "periodic", cell_num_site = 1,
cell_site_dof = [2], Hamiltonian_of_cell = H_cell,
inter_hop = inter_cell_T )
CROW_lattice.plot_dispersion()
### For eta = pi/2
eta = np.pi/2
H_cell = Qobj(np.array([[0, J * np.sin(eta)], [J * np.sin(eta), 0]]))
inter_cell_T = (J/2) * Qobj(np.array([[np.exp(eta * 1j), 1], [1, np.exp(-eta*1j)]]))
CROW_lattice = Lattice1d(num_cell=cells, boundary = "periodic", cell_num_site = 1,
cell_site_dof = [2], Hamiltonian_of_cell = H_cell,
inter_hop = inter_cell_T )
CROW_lattice.plot_dispersion()
num_cell = 100
J = 2
eta = np.pi/2
H_cell = Qobj(np.array([[0, J * np.sin(eta)], [J * np.sin(eta), 0]]))
inter_cell_T = (J/2) * Qobj(np.array([[np.exp(eta * 1j), 1], [1, np.exp(-eta*1j)]]))
CROW_lattice = Lattice1d(num_cell=num_cell, boundary = "periodic", cell_num_site = 2,
cell_site_dof = [1], Hamiltonian_of_cell = H_cell,
inter_hop = inter_cell_T)
HCROW = CROW_lattice.Hamiltonian()
kC = CROW_lattice.k()
nx = 1
ne = 2
positions = np.kron(range(nx), [1/nx for i in range(ne)])
S = np.kron(np.ones(num_cell), positions)
R = np.kron(range(0, num_cell), np.ones(nx*ne))
xA = R+S
sig = 3 # A standard deviation of 3
xm = num_cell //2
psi0 = 1/np.sqrt(2*np.pi*sig**2) * np.exp(-(xA-xm)**2/2/sig/sig)
psi0 = Qobj(np.sqrt(psi0))
tlistW = np.linspace(0,30,5001)
options = Options(atol=1e-12)
options.store_states = True
states_CROW_u = mesolve(HCROW, psi0, tlistW, [], [kC], options=options)
plt.plot(tlistW, states_CROW_u.expect[0])
plt.xlabel('Time', fontsize=14)
plt.ylabel(r'$\langle k \rangle$', fontsize=14)
plt.ylim([-1e-8, 1e-8])
plt.show()
plt.close()
t0 = 0
t1 = 1000
t2 = 2000
t3 = 3000
t4 = 4000
t5 = 5000
x_t0 = states_CROW_u.states[t0]
x_t1 = states_CROW_u.states[t1]
x_t2 = states_CROW_u.states[t2]
x_t3 = states_CROW_u.states[t3]
x_t4 = states_CROW_u.states[t4]
x_t5 = states_CROW_u.states[t5]
plt.plot(xA, np.abs(x_t0))
plt.plot(xA, np.abs(x_t1))
plt.plot(xA, np.abs(x_t2))
plt.plot(xA, np.abs(x_t3))
plt.plot(xA, np.abs(x_t4))
plt.plot(xA, np.abs(x_t5))
plt.xlabel('space', fontsize=14)
plt.ylabel('Wavepacket shape', fontsize=14)
plt.legend(['t0', 't1', 't2', 't3', 't4', 't5'])
plt.show()
plt.close()
sig = 3
xm = num_cell //2 + 15
psi0 = 1/np.sqrt(2*np.pi*sig**2) * np.exp(-(xA-xm)**2/2/sig/sig)
psi0 = Qobj(np.sqrt(psi0) * np.exp(1*np.pi*1j*xA/3) )
tlistCk = np.linspace(0,30,5001)
options = Options(atol=1e-12)
options.store_states = True
states_CROW_uk = mesolve(HCROW, psi0, tlistCk, [], [kC], options=options)
plt.plot(tlistCk, states_CROW_uk.expect[0])
plt.xlabel('Time', fontsize=14)
plt.ylabel(r'$\langle k \rangle$', fontsize=14)
plt.ylim([1.046, 1.048])
plt.show()
plt.close()
t0 = 0
t1 = 1000
t2 = 2000
t3 = 3000
t4 = 4000
t5 = 5000
x_t0 = states_CROW_uk.states[t0]
x_t1 = states_CROW_uk.states[t1]
x_t2 = states_CROW_uk.states[t2]
x_t3 = states_CROW_uk.states[t3]
x_t4 = states_CROW_uk.states[t4]
x_t5 = states_CROW_uk.states[t5]
plt.plot(xA, np.abs(x_t0))
plt.plot(xA, np.abs(x_t1))
plt.plot(xA, np.abs(x_t2))
plt.plot(xA, np.abs(x_t3))
plt.plot(xA, np.abs(x_t4))
plt.plot(xA, np.abs(x_t5))
plt.xlabel('space', fontsize=14)
plt.ylabel('Wavepacket shape', fontsize=14)
plt.legend(['t0', 't1', 't2', 't3', 't4', 't5'])
plt.show()
plt.close()
t0 = 0
t1 = 1000
t2 = 2000
t3 = 3000
t4 = 4000
t5 = 5000
x_t0 = states_CROW_uk.states[t0]
x_t1 = states_CROW_uk.states[t1]
x_t2 = states_CROW_uk.states[t2]
x_t3 = states_CROW_uk.states[t3]
x_t4 = states_CROW_uk.states[t4]
x_t5 = states_CROW_uk.states[t5]
plt.plot(xA[range(0,200,2)], np.abs(x_t0.full()[range(0,200,2)]))
plt.plot(xA[range(0,200,2)], np.abs(x_t1.full()[range(0,200,2)]))
plt.plot(xA[range(0,200,2)], np.abs(x_t2.full()[range(0,200,2)]))
plt.plot(xA[range(0,200,2)], np.abs(x_t3.full()[range(0,200,2)]))
plt.plot(xA[range(0,200,2)], np.abs(x_t4.full()[range(0,200,2)]))
plt.plot(xA[range(0,200,2)], np.abs(x_t5.full()[range(0,200,2)]))
plt.xlabel('space(left sublattice)', fontsize=14)
plt.ylabel('Wavepacket shape', fontsize=14)
plt.legend(['t0', 't1', 't2', 't3', 't4', 't5'])
plt.show()
plt.close()
t0 = 0
t1 = 1000
t2 = 2000
t3 = 3000
t4 = 4000
t5 = 5000
x_t0 = states_CROW_uk.states[t0]
x_t1 = states_CROW_uk.states[t1]
x_t2 = states_CROW_uk.states[t2]
x_t3 = states_CROW_uk.states[t3]
x_t4 = states_CROW_uk.states[t4]
x_t5 = states_CROW_uk.states[t5]
plt.plot(xA[range(1,200,2)], np.abs(x_t0.full()[range(1,200,2)]))
plt.plot(xA[range(1,200,2)], np.abs(x_t1.full()[range(1,200,2)]))
plt.plot(xA[range(1,200,2)], np.abs(x_t2.full()[range(1,200,2)]))
plt.plot(xA[range(1,200,2)], np.abs(x_t3.full()[range(1,200,2)]))
plt.plot(xA[range(1,200,2)], np.abs(x_t4.full()[range(1,200,2)]))
plt.plot(xA[range(1,200,2)], np.abs(x_t5.full()[range(1,200,2)]))
plt.xlabel('space(right sublattice)', fontsize=14)
plt.ylabel('Wavepacket shape', fontsize=14)
plt.legend(['t0', 't1', 't2', 't3', 't4', 't5'])
plt.show()
plt.close()
cells = 100
nx = 2
ne = 1
positions = np.kron(range(nx), [1/nx for i in range(ne)])
S = np.kron(np.ones(cells), positions)
R = np.kron(range(0, cells), np.ones(nx*ne))
xA = R+S
eta = np.pi/2
H_cell = Qobj(np.array([[0, J * np.sin(eta)], [J * np.sin(eta), 0]]))
inter_cell_T = (J/2) * Qobj(np.array([[np.exp(eta * 1j), 1], [1, np.exp(-eta*1j)]]))
CROW_lattice = Lattice1d(num_cell=cells, boundary = "periodic", cell_num_site = 2,
cell_site_dof = [1], Hamiltonian_of_cell = H_cell,
inter_hop = inter_cell_T)
HCROW = CROW_lattice.Hamiltonian()
kC = CROW_lattice.k()
lat_trR = np.diag(np.zeros(cells-1)+1, -1)
lat_trR[0, cells-1] = 1 # translate to the right
lat_trL = np.diag(np.zeros(cells-1)+1, 1)
lat_trL[cells-1, 0] = 1 # translate to the left
trR = Qobj(lat_trR)
trL = Qobj(lat_trL)
gamma = 0.5
col_op = [np.sqrt(gamma) * tensor(trL, qeye(2)) ] # We could have used trR for translation to the right
sig = 3
xm = cells //2 + 15
psi0 = 1/np.sqrt(2*np.pi*sig**2) * np.exp(-(xA-xm)**2/2/sig/sig)
psi0 = Qobj(np.sqrt(psi0))
tlistCN = np.linspace(0,30,601)
options = Options(atol=1e-12)
options.store_states = True
states_CROW_nu = mesolve(HCROW, psi0, tlistCN, col_op, [kC], options=options)
plt.plot(tlistCN, states_CROW_nu.expect[0])
plt.xlabel('Time', fontsize=14)
plt.ylabel(r'$\langle k \rangle$', fontsize=14)
plt.ylim([-1e-8, 1e-8])
plt.show()
plt.close()
t0 = 0
t1 = 100
t2 = 200
t3 = 300
t4 = 400
t5 = 500
x_t0 = np.diag(states_CROW_nu.states[t0])
x_t1 = np.diag(states_CROW_nu.states[t1])
x_t2 = np.diag(states_CROW_nu.states[t2])
x_t3 = np.diag(states_CROW_nu.states[t3])
x_t4 = np.diag(states_CROW_nu.states[t4])
x_t5 = np.diag(states_CROW_nu.states[t5])
plt.plot(xA, np.abs(x_t0))
plt.plot(xA, np.abs(x_t1))
plt.plot(xA, np.abs(x_t2))
plt.plot(xA, np.abs(x_t3))
plt.plot(xA, np.abs(x_t4))
plt.plot(xA, np.abs(x_t5))
plt.xlabel('space', fontsize=14)
plt.ylabel('Wavepacket shape', fontsize=14)
plt.legend(['t0', 't1', 't2', 't3', 't4', 't5'])
plt.show()
plt.close()
qutip.about()
qutip.cite()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Declaring a tight binding chain with a single site unit cell
Step2: The user can call Periodic_Atom_Chain to print all its information.
Step3: To define a lattice with more than one site per unit cell and one or more degrees of freedom per site, the cell_num_site and cell_site_dof arguments must be used. In a case like this, specifying the intra and inter cell interactions would also be necessary (through the arguments cell_Hamiltonian and inter_hop) in most cases. However, Lattice1d() will initialize the instance with default cell_Hamiltonian and inter_hop values
Step4: The user can review the attribute values of H and T from the retuned Qobjs.
Step5: Multiple sites per unit cell and multiple degrees of freedom per site
Step6: The use of cell_structures()
Step7: The cell_structures() function returns two lists of lists, cell_H_form and inter_cell_T_form
Step8: Similarly, we set more elements to non-zero values.
Step9: The user would have to enter all the nonzero elements in cell_H and inter_cell_T
Step10: cell_site_dof can take care of composite degrees of freedom such as orbitals, spins, and/or excitations. For example, if each site has 4 orbitals and 2 spins, setting cell_site_dof = [4,2] defines that lattice. With the aid of the Lattice1d.basis() method we can access particles localized at a specific cell, site, orbital, and spin.
Step11: The labels of the diagrams can be read off from the returned H and T Qobjs. For example, $H_{12}$ can be read off as follows.
Step12: Basis function
Step13: Position Operator
Step14: The crystal momentum operator
Step15: Distribute an Operator
Step16: Hamiltonian
Step17: Dispersion Relation
Step18: Unitary evolution of a Gaussian Wavepacket with mesolve
Step19: The wavepacket disperses over time while keeping its periodic nature in space, since we picked a periodic boundary condition for space.
Step20: The crystal momentum operator commutes with the Hamiltonian, so it is conserved in a Hamiltonian evolution, as expected.
Step21: Due to the initial momentum, the wave-packet moves to the right, keeping its momentum while it disperses.
Step22: We confirm that the final momentum is indeed exactly the opposite of the initial momentum.
Step23: The wave-packet disperses and translates to the right, but the momentum expectation remains zero, since the translation is induced by the dissipation.
Step24: The three dispersion relationships for the three values of $\eta$ can be compared with the published results in Ref [2].
Step25: With Initial momentum
Step26: translation by dissipation
Step27: References
|
1,663
|
<ASSISTANT_TASK:>
Python Code:
from poppy.creatures import PoppyErgo
poppy = PoppyErgo()
for m in poppy.motors:
m.compliant = False
m.goal_position = 0.0
# Import everything you need for recording, playing, saving, and loading Moves
# Move: object used to represent a movement
# MoveRecorder: object used to record a Move
# MovePlayer: object used to play (and re-play) a Move
from pypot.primitive.move import Move, MoveRecorder, MovePlayer
record_frequency = 50.0 # This means that a new position will be recorded 50 times per second.
recorded_motors = [poppy.m4, poppy.m5, poppy.m6] # We will record the position of the 3 last motors of the Ergo
# You can also use alias for the recorded_motors
# e.g. recorder = MoveRecorder(poppy, record_frequency, poppy.tip)
# or even to record all motors position
# recorder = MoveRecorder(poppy, record_frequency, poppy.motors)
recorder = MoveRecorder(poppy, record_frequency, recorded_motors)
for m in recorded_motors:
m.compliant = True
recorder.start()
recorder.stop()
for m in recorded_motors:
m.compliant = False
recorded_move = recorder.move
with open('mymove.json', 'w') as f:
recorded_move.save(f)
with open('mymove.json') as f:
loaded_move = Move.load(f)
player = MovePlayer(poppy, loaded_move)
player.start()
for _ in range(3):
player.start()
player.wait_to_stop()
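# Optional: some pypot versions also accept a play_speed argument to replay a
# Move faster or slower (an assumption -- check your installed pypot version):
# fast_player = MovePlayer(poppy, loaded_move, play_speed=2.0)
# fast_player.start()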
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import the Move, Recorder and Player
Step2: Create a Recorder for the robot Poppy
Step3: Start the recording
Step4: Start the recording when you are ready!
Step5: Stop the recording
Step6: Turn the compliance back off.
Step7: Get the recorded Move and store it on disk
Step8: Load a saved Move
Step9: Create a Move Player and Play Back a Recorded Move
Step10: You can start the playback whenever you want
Step11: You can play your move as many times as you want. Note that we use the wait_to_stop method to wait for the previous playback to end before running it again.
|
1,664
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
%matplotlib inline
dataset = pd.read_csv('dataset.csv')
dataset.head(5)
dataset.count_total.describe()
#add a new column to create a binary class for room occupancy
countmed = dataset.count_total.median()
# Label a room as occupied above a fixed threshold of 4 people
# (countmed is computed above for reference only).
dataset['room_occupancy'] = dataset['count_total'].apply(lambda x: 'occupied' if x > 4 else 'empty')
# map room occupancy to a number
dataset['room_occupancy_num'] = dataset.room_occupancy.map({'empty':0, 'occupied':1})
dataset.head(5)
dataset.room_occupancy.describe()
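# Quick look at the class balance before modeling:
dataset.room_occupancy.value_counts()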
import os
import sys
# Modify the path
sys.path.append("..")
import pandas as pd
import yellowbrick as yb
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (12, 8)
g = yb.anscombe()
from yellowbrick.features.rankd import Rank2D
from yellowbrick.features.radviz import RadViz
from yellowbrick.features.pcoords import ParallelCoordinates
# Load the classification data set
data = dataset
# Specify the features of interest
features = ['temperature','humidity','co2','light','noise','bluetooth_devices']
# Extract the numpy arrays from the data frame
X = data[features].as_matrix()
y = data['count_total'].as_matrix()
# Instantiate the visualizer with the Covariance ranking algorithm
visualizer = Rank2D(features=features, algorithm='covariance')
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof() # Draw/show/poof the data
# Instantiate the visualizer with the Pearson ranking algorithm
visualizer = Rank2D(features=features, algorithm='pearson')
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof() # Draw/show/poof the data
# Specify the features of interest and the classes of the target
features = ['temperature','humidity','co2','light','noise','bluetooth_devices']
classes = ['empty', 'occupied']
# Extract the numpy arrays from the data frame
X = data[features].as_matrix()
y = data.room_occupancy_num.as_matrix()
# Instantiate the visualizer
visualizer = RadViz(classes=classes, features=features)
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof() # Draw/show/poof the data
# Specify the features of interest and the classes of the target
#features = ['temperature','humidity','co2','light','noise','bluetooth_devices']
#classes = ['empty', 'occupied']
# Extract the numpy arrays from the data frame
#X = data[features].as_matrix()
#y = data.room_occupancy_num.as_matrix()
# Instantiate the visualizer
#visualizer = ParallelCoordinates(classes=classes, features=features)
#visualizer.fit(X, y) # Fit the data to the visualizer
#visualizer.transform(X) # Transform the data
#visualizer.poof() # Draw/show/poof the data
# Regression Evaluation Imports
from sklearn.linear_model import Ridge, Lasso
from sklearn.model_selection import train_test_split  # sklearn.cross_validation in very old versions
from yellowbrick.regressor import PredictionError, ResidualsPlot
# Load the data
df = data
feature_names = ['temperature','humidity','co2','light','noise','bluetooth_devices']
target_name = 'count_total'
# Get the X and y data from the DataFrame
X = df[feature_names].as_matrix()
y = df[target_name].as_matrix()
# Create the train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate the linear model and visualizer
ridge = Ridge()
visualizer = ResidualsPlot(ridge)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.poof() # Draw/show/poof the data
# Load the data
df = data
feature_names = ['temperature','humidity','co2','light','noise','bluetooth_devices']
target_name = 'count_total'
# Get the X and y data from the DataFrame
X = df[feature_names].as_matrix()
y = df[target_name].as_matrix()
# Create the train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate the linear model and visualizer
lasso = Lasso()
visualizer = PredictionError(lasso)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.poof() # Draw/show/poof the data
# Classifier Evaluation Imports
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split  # sklearn.cross_validation in very old versions
from yellowbrick.classifier import ClassificationReport, ROCAUC, ClassBalance
# Load the classification data set
data = dataset
# Specify the features of interest and the classes of the target
features = ['temperature','humidity','co2','light','noise','bluetooth_devices']
classes = ['empty', 'occupied']
# Extract the numpy arrays from the data frame
X = data[features].as_matrix()
y = data.room_occupancy_num.as_matrix()
# Create the train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate the classification model and visualizer
bayes = GaussianNB()
visualizer = ClassificationReport(bayes, classes=classes)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.poof() # Draw/show/poof the data
# Instantiate the classification model and visualizer
logistic = LogisticRegression()
visualizer = ROCAUC(logistic)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.poof() # Draw/show/poof the data
# Instantiate the classification model and visualizer
forest = RandomForestClassifier()
visualizer = ClassBalance(forest, classes=classes)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.poof() # Draw/show/poof the data
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Feature Analysis
Step2: Rank2D
Step3: RadViz
Step4: For regression, the RadViz visualizer should use a color sequence to display the target information, as opposed to discrete colors.
Step5: Regressor Evaluation
Step6: Residuals Plot
Step7: Prediction Error Plot
Step8: Classifier Evaluation
Step9: Classification report
Step10: ROCAUC
Step11: ClassBalance
|
1,665
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import xarray as xr
import cartopy.crs as ccrs
from matplotlib import pyplot as plt
print("numpy version : ", np.__version__)
print("pandas version : ", pd.__version__)
print("xarray version : ", xr.__version__)
ds = xr.tutorial.open_dataset('rasm').load()
ds
print(ds.xc.attrs)
print(ds.yc.attrs)
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(14,4))
ds.xc.plot(ax=ax1)
ds.yc.plot(ax=ax2)
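# Both physical coordinates are two-dimensional fields with the same shape as
# the data grid (a quick check):
print(ds.xc.shape, ds.yc.shape)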
ds.Tair[0].plot()
plt.figure(figsize=(14,6))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_global()
ds.Tair[0].plot.pcolormesh(ax=ax, transform=ccrs.PlateCarree(), x='xc', y='yc', add_colorbar=False)
ax.coastlines()
ax.set_ylim([0,90]);
# define two-degree wide latitude bins
lat_bins = np.arange(0,91,2)
# define a label for each bin corresponding to the central latitude
lat_center = np.arange(1,90,2)
# group according to those bins and take the mean
Tair_lat_mean = ds.Tair.groupby_bins('yc', lat_bins, labels=lat_center).mean(dim=xr.ALL_DIMS)
# plot the result
Tair_lat_mean.plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As an example, consider this dataset from the xarray-data repository.
Step2: In this example, the logical coordinates are x and y, while the physical coordinates are xc and yc, which represent the latitude and longitude of the data.
Step3: Plotting
Step4: Note that the variables xc (longitude) and yc (latitude) are two-dimensional scalar fields.
Step5: In order to visualize the data on a conventional latitude-longitude grid, we can take advantage of xarray's ability to apply cartopy map projections.
Step6: Multidimensional Groupby
|
1,666
|
<ASSISTANT_TASK:>
Python Code:
print("Let's print a newline\nVery good. Now let us create a newline\n\twith a nested text!")
print('It\'s Friday, Friday\nGotta get down on Friday')
print("Oscar Wild once said: \"Be yourself; everyone else is already taken.\"")
print("The path of the document is C:\nadia\tofes161\advanced_homework.docx")
print("The path of the document is C:\\nadia\\tofes161\\advanced_homework.docx")
print(r"The path of the document is C:\nadia\tofes161\advanced_homework.docx")
friday_song = """
It's Friday, Friday
Gotta get down on Friday
Everybody's lookin' forward to the weekend, weekend
Friday, Friday
Gettin' down on Friday
Everybody's lookin' forward to the weekend
Partyin', partyin' (Yeah)
Partyin', partyin' (Yeah)
Fun, fun, fun, fun
Lookin' forward to the weekend
It's Friday, Friday
Gotta get down on Friday
Everybody's lookin' forward to the weekend, weekend
Friday, Friday
Gettin' down on Friday
Everybody's lookin' forward to the weekend"""
print(friday_song)
age = 18
name = 'Yam'
print("My age is " + str(age) + " and my name is " + name + ".")
print(f"My age is {age} and my name is {name}.")
# Try entering lots of spaces before or after the username
username = input("Please enter your user: ")
username = username.strip()
print(f"This string is: {username}.")
strange_string = '!@#$%!!!^&This! Is! Sparta!!!!!!!!!&^%$!!!#@!'
print(strange_string.strip('~!@#$%^&*'))
strange = "This is a very long string which contains strange words, like ululation and lollygag."
strange.find("ululation")
strange.find("lollygag")
strange.index("lollygag")
strange.find('luculent')
strange.index('luculent')
test1 = "HeLlO WoRlD 123!"
test1
test1.upper()
test1.lower()
test1.capitalize() # רק האות הראשונה תהיה גדולה
test1.title() # מגדיל את האות הראשונה בכל מילה
test1
gettysburg_address = """
Four score and seven years ago our fathers brought forth, on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal.
Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure. We are met on a great battle-field of that war. We have come to dedicate a portion of that field, as a final resting place for those who here gave their lives that that nation might live. It is altogether fitting and proper that we should do this.
But, in a larger sense, we cannot dedicate—we cannot consecrate—we cannot hallow—this ground. The brave men, living and dead, who struggled here, have consecrated it, far above our poor power to add or detract. The world will little note, nor long remember what we say here, but it can never forget what they did here. It is for us the living, rather, to be dedicated here to the unfinished work which they who fought here have thus far so nobly advanced. It is rather for us to be here dedicated to the great task remaining before us—that from these honored dead we take increased devotion to that cause for which they here gave the last full measure of devotion—that we here highly resolve that these dead shall not have died in vain—that this nation, under God, shall have a new birth of freedom—and that government of the people, by the people, for the people, shall not perish from the earth."""
gettysburg_address = gettysburg_address.lower()
gettysburg_address.count('we')
gettysburg_address.count('dedicated')
gettysburg_address.count('nation')
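# Note: str.count matches substrings, so 'we' above also matches inside words
# such as 'power'. A rough whole-word count (a simple sketch, punctuation
# stripped naively):
sum(1 for word in gettysburg_address.split() if word.strip('.,;') == 'we')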
lyrics = """So let it out and let it in, hey Jude, begin
You're waiting for someone to perform with
And don't you know that it's just you, hey Jude, you'll do
The movement you need is on your shoulder
Na na na na na na na na na yeah"""
lyrics.replace('Jude', 'Dude')
print(lyrics.replace('Jude', 'Dude'))
lyrics = """So let it out and let it in, hey Jude, begin
You're waiting for someone to perform with
And don't you know that it's just you, hey Jude, you'll do
The movement you need is on your shoulder
Na na na na na na na na na yeah"""
print("Before: ")
lyrics.replace('Jude', 'Dude')
print(lyrics)
lyrics = lyrics.replace('Jude', 'Dude')
print('-' * 50)
print("After: ")
print(lyrics)
i_like_to_eat = 'chocolate, fudge, cream, cookies, banana, hummus'
i_like_to_eat.split(', ')
type(i_like_to_eat.split(', '))
i_like_to_eat.split(', ')[0]
some_paragraph = """
Gadsby is a 1939 novel by Ernest Vincent Wright written as a lipogram, which does not include words that contain the letter E. The plot revolves around the dying fictional city of Branton Hills, which is revitalized as a result of the efforts of protagonist John Gadsby and a youth group he organizes.
Though vanity published and little noticed in its time, the book is a favourite of fans of constrained writing and is a sought-after rarity among some book collectors. Later editions of the book have sometimes carried the alternative subtitle 50,000 Word Novel Without the Letter "E".
Despite Wright's claim, published versions of the book may contain a handful of uses of the letter "e". The version on Project Gutenberg, for example, contains "the" three times and "officers" once."""
some_paragraph.split()
i_love_to_eat = ['chocolate', 'fudge', 'cream', 'cookies', 'banana', 'hummus']
thing_to_join_by = ", "
thing_to_join_by.join(i_love_to_eat)
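# join is the inverse of split (for a consistent separator) -- a quick check:
', '.join(i_like_to_eat.split(', ')) == i_like_to_eat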
what_i_love = ["שוקולד", "עוגות גבינה", "ארטיק", "סוכריות", "תות גינה"]
vav_ha_hibur = ' ו'
song = "אני אוהב " + vav_ha_hibur.join(what_i_love)
print(song)
some_test = "Hello, my name is Inigo Montoya, you killed my father, prepare to die!"
is_welcoming = some_test.startswith('Hello,')
print(is_welcoming)
is_shouting = some_test.endswith('!')
print(is_shouting)
is_goodbye = some_test.endswith("Goodbye, my kind sir.")
print(is_goodbye)
address = "Python Street 5, Hadera, Israel"
print("Does the user live in Python Street?... " + str(address.startswith('Python Street')))
print("Does the user live in Scotland?... " + str(address.endswith('Scotland')))
test2 = "HELLO WORLD"
print("test2.isalnum(): " + str(test2.isalnum()))
print("test2.isalpha(): " + str(test2.isalpha()))
print("test2.isdecimal(): " + str(test2.isdecimal()))
test3 = "12345"
print("test3.isalnum(): " + str(test3.isalnum()))
print("test3.isalpha(): " + str(test3.isalpha()))
print("test3.isdecimal(): " + str(test3.isdecimal()))
test4 = "HELLOWORLD"
print("test4.isalnum(): " + str(test4.isalnum()))
print("test4.isalpha(): " + str(test4.isalpha()))
print("test4.isdecimal(): " + str(test4.isdecimal()))
test5 = "ABC123"
print("test5.isalnum(): " + str(test5.isalnum()))
print("test5.isalpha(): " + str(test5.isalpha()))
print("test5.isdecimal(): " + str(test5.isdecimal()))
gettysburg_address = """
Four score and seven years ago our fathers brought forth, on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal.
Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure. We are met on a great battle-field of that war. We have come to dedicate a portion of that field, as a final resting place for those who here gave their lives that that nation might live. It is altogether fitting and proper that we should do this.
But, in a larger sense, we cannot dedicate—we cannot consecrate—we cannot hallow—this ground. The brave men, living and dead, who struggled here, have consecrated it, far above our poor power to add or detract. The world will little note, nor long remember what we say here, but it can never forget what they did here. It is for us the living, rather, to be dedicated here to the unfinished work which they who fought here have thus far so nobly advanced. It is rather for us to be here dedicated to the great task remaining before us—that from these honored dead we take increased devotion to that cause for which they here gave the last full measure of devotion—that we here highly resolve that these dead shall not have died in vain—that this nation, under God, shall have a new birth of freedom—and that government of the people, by the people, for the people, shall not perish from the earth."""
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1:
Step2:
Step3:
Step5:
Step6:
Step7:
Step8:
Step9:
Step10:
Step11:
Step12:
Step13:
Step14:
Step15:
Step17:
Step18:
Step19:
Step21:
Step22:
Step24:
Step25:
Step26:
Step28:
Step29:
Step30:
Step31:
Step32:
Step34:
|
1,667
|
<ASSISTANT_TASK:>
Python Code:
data_path = '/content/gdrive/My Drive/amld_data'
# Alternatively, you can also store the data in a local directory. This method
# will also work when running the notebook in Jupyter instead of Colab.
# data_path = './amld_data
if data_path.startswith('/content/gdrive/'):
from google.colab import drive
assert data_path.startswith('/content/gdrive/My Drive/'), 'Google Drive paths must start with "/content/gdrive/My Drive/"!'
drive.mount('/content/gdrive')
if data_path.startswith('gs://'):
from google.colab import auth
auth.authenticate_user()
# In Jupyter, you would need to install TF 2 via !pip.
%tensorflow_version 2.x
# Always make sure you are using running the expected version.
# There are considerable differences between versions.
# This Colab was tested with 2.1.0.
import tensorflow as tf
tf.__version__
import base64, collections, io, itertools, functools, json, os, random, re, textwrap, time, urllib, xml
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from PIL import Image, ImageDraw
from IPython import display
# Retrieve list of categories.
def list_bucket(bucket, regexp='.*'):
    # Returns a filtered list of Keys in specified GCS bucket.
keys = []
fh = urllib.request.urlopen('https://storage.googleapis.com/%s' % bucket)
content = xml.dom.minidom.parseString(fh.read())
for e in content.getElementsByTagName('Contents'):
key = e.getElementsByTagName('Key')[0].firstChild.data
if re.match(regexp, key):
keys.append(key)
return keys
all_ndjsons = list_bucket('quickdraw_dataset', '.*ndjson$')
print('available: (%d)' % len(all_ndjsons))
print('\n'.join(textwrap.wrap(
'|'.join([key.split('/')[-1].split('.')[0] for key in all_ndjsons]),
width=100)))
# Mini group of two animals.
pets = ['cat', 'dog']
# Somewhat larger group of zoo animals.
zoo = ['camel', 'crocodile', 'dolphin', 'elephant', 'flamingo', 'giraffe',
'kangaroo', 'lion', 'monkey', 'penguin', 'rhinoceros']
# Even larger group of all animals.
animals = ['ant', 'bat', 'bear', 'bee', 'bird', 'butterfly', 'camel', 'cat',
'cow', 'crab', 'crocodile', 'dog', 'dolphin', 'dragon', 'duck',
'elephant', 'fish', 'flamingo', 'frog', 'giraffe', 'hedgehog',
'horse', 'kangaroo', 'lion', 'lobster', 'monkey', 'mosquito',
'mouse', 'octopus', 'owl', 'panda', 'parrot', 'penguin', 'pig',
'rabbit', 'raccoon', 'rhinoceros', 'scorpion', 'sea turtle', 'shark',
'sheep', 'snail', 'snake', 'spider', 'squirrel', 'swan']
# You could do something like:
# my_objects = ['shoe', 'shorts', 't-shirt']
# YOUR ACTION REQUIRED:
# Choose one of above groups for remainder of workshop.
# Note: This will result in ~100MB of download per class.
# `dataset_name` will be used to construct directories containing the data.
labels, dataset_name = zoo, 'zoo'
# Or use another dataset defined above:
# labels, dataset_name = pets, 'pets'
# labels, dataset_name = animals, 'animals'
# Download above chosen group.
def valid_ndjson(filename):
    # Checks presence + completeness of .ndjson file.
try:
json.loads(tf.io.gfile.GFile(filename).readlines()[-1])
return True
except (ValueError, IOError):
return False
def retrieve(bucket, key, filename):
    # Returns a file specified by its Key from a GCS bucket.
url = 'https://storage.googleapis.com/%s/%s' % (
bucket, urllib.parse.quote(key))
print('\n' + url)
if not tf.io.gfile.exists(filename):
with tf.io.gfile.GFile(filename, 'w') as f:
f.write(urllib.request.urlopen(url).read())
while not valid_ndjson(filename):
print('*** Corrupted download (%.2f MB), retrying...' % (
os.path.getsize(filename) / 2.**20))
with tf.io.gfile.GFile(filename, 'w') as f:
f.write(urllib.request.urlopen(url).read())
tf.io.gfile.makedirs(data_path)
print('\n%d labels:' % len(labels))
for name in labels:
print(name, end=' ')
dst = '%s/%s.ndjson' % (data_path, name)
retrieve('quickdraw_dataset', 'full/simplified/%s.ndjson' % name, dst)
print('%.2f MB' % (tf.io.gfile.stat(dst).length / 2.**20))
print('\nDONE :)')
print('\n'.join([
'%6.1fM : %s' % (tf.io.gfile.stat(path).length/1024**2, path)
for path in tf.io.gfile.glob('{}/*.ndjson'.format(data_path))
]))
path = sorted(tf.io.gfile.glob(os.path.join(data_path, '*.ndjson')))[0]
print(path)
print(tf.io.gfile.GFile(path).read()[:1000] + '...')
data_json = json.loads(tf.io.gfile.GFile(path).readline())
data_json.keys()
# So we have some meta information.
for k, v in data_json.items():
if k != 'drawing':
print('%20s -> %s' % (k, v))
# Extract the actual drawing.
drawing = data_json['drawing']
# The drawing consists of a series of strokes:
print('Shapes:', [np.array(stroke).shape for stroke in drawing])
print('Example stroke:', drawing[0])
# Draw the image -- the strokes all have have shape (2, n)
# so the first index seems to be x/y coordinate:
for stroke in drawing:
# Each array has X coordinates at [0, :] and Y coordinates at [1, :].
plt.plot(np.array(stroke[0]), -np.array(stroke[1]))
# Would YOU recognize this drawing successfully?
# Some more code to load many sketches at once.
# Let's ignore the difficult `unrecognized` sketches for now
# (i.e. unrecognized by the official quickdraw classifier).
def convert(line):
    # Converts single JSON line and converts 'drawing' to list of np.array.
d = json.loads(line)
d['drawing'] = [np.array(stroke) for stroke in d['drawing']]
return d
def loaditer(name, unrecognized=False):
    # Returns iterable of drawings in specified file.
    # Args:
    #   name: Name of the downloaded object (e.g. "elephant").
    #   unrecognized: Whether to include drawings that were not recognized
    #     by Google AI (i.e. the hard ones).
for line in tf.io.gfile.GFile('%s/%s.ndjson' % (data_path, name)):
d = convert(line)
if d['recognized'] or unrecognized:
yield d
def loadn(name, n, unrecognized=False):
    # Returns list of drawings.
    # Args:
    #   name: Name of the downloaded object (e.g. "elephant").
    #   n: Number of drawings to load.
    #   unrecognized: Whether to include drawings that were not recognized
    #     by Google AI (i.e. the hard ones).
it = loaditer(name, unrecognized=unrecognized)
return list(itertools.islice(it, 0, n))
n = 100
print('Loading {} instances of "{}"...'.format(n, labels[0]), end='')
sample = loadn(labels[0], 100)
print('done.')
# Some more drawings.
rows, cols = 3, 3
plt.figure(figsize=(3*cols, 3*rows))
for y in range(rows):
for x in range(cols):
i = y * cols + x
plt.subplot(rows, cols, i + 1)
for stroke in sample[i]['drawing']:
plt.plot(np.array(stroke[0]), -np.array(stroke[1]))
def dict_to_img(drawing, img_sz=64, lw=3, maximize=True):
    # Converts QuickDraw data to quadratic rasterized image.
    # Args:
    #   drawing: Dictionary instance of QuickDraw dataset.
    #   img_sz: Size of output image (in pixels).
    #   lw: Line width (in pixels).
    #   maximize: Whether to maximize drawing within image pixels.
    # Returns:
    #   A PIL.Image with the rasterized drawing.
img = Image.new('L', (img_sz, img_sz))
draw = ImageDraw.Draw(img)
lines = np.array([
stroke[0:2, i:i+2]
for stroke in drawing['drawing']
for i in range(stroke.shape[1] - 1)
], dtype=np.float32)
if maximize:
for i in range(2):
min_, max_ = lines[:,i,:].min() * 0.95, lines[:,i,:].max() * 1.05
lines[:,i,:] = (lines[:,i,:] - min_) / max(max_ - min_, 1)
else:
lines /= 1024
for line in lines:
draw.line(tuple(line.T.reshape((-1,)) * img_sz), fill='white', width=lw)
return img
# Show some examples.
def showimg(img):
    # Shows an image with an inline HTML <img> tag.
    # Args:
    #   img: Can be a PIL.Image or a numpy.ndarray.
if isinstance(img, np.ndarray):
img = Image.fromarray(img, 'L')
b = io.BytesIO()
img.convert('RGB').save(b, format='png')
enc = base64.b64encode(b.getvalue()).decode('utf-8')
display.display(display.HTML(
'<img src="data:image/png;base64,%s">' % enc))
# Fetch some images + shuffle order.
rows, cols = len(labels), 10
n_per_class = rows * cols // len(labels) + 1
drawings_list = [drawing for name in labels
for drawing in loadn(name, cols)]
# Create mosaic of rendered images.
lw = 4
img_sz = 64
tableau = np.zeros((img_sz * rows, img_sz * cols), dtype=np.uint8)
for y in range(rows):
for x in range(cols):
i = y * cols + x
img = dict_to_img(drawings_list[i], img_sz=img_sz, lw=lw, maximize=True)
tableau[y*img_sz:(y+1)*img_sz,
x*img_sz:(x+1)*img_sz] = np.asarray(img)
showimg(tableau)
print('{} samples of : {}'.format(cols, ' '.join(labels)))
# Create a new (empty) instance.
example = tf.train.Example()
# An empty example will not print anything.
print(example)
# An example contains a map from feature name to "Feature".
# Every "Feature" contains a list of elements of the same
# type, which is one of:
# - bytes_list (similar to Python's "str")
# - float_list (float number)
# - int64_list (integer number)
# These values can be accessed as follows (no need to understand
# details):
# Add float value "3.1416" to feature "magic_numbers"
example.features.feature['magic_numbers'].float_list.value.append(3.1416)
# Add some more values to the float list "magic_numbers".
example.features.feature['magic_numbers'].float_list.value.extend([2.7183, 1.4142, 1.6180])
### YOUR ACTION REQUIRED:
# Create a second feature named "adversaries" and add the elements
# b'Alice' and b'Bob'.
# One possible solution (the task above specifies the values):
example.features.feature['adversaries'].bytes_list.value.extend([b'Alice', b'Bob'])
# This will now print a serialized representation of our protocol buffer
# with features "magic_numbers" and "adversaries" set...
print(example)
# .. et voila : that's all you need to know about protocol buffers for this
# workshop.
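# Bonus (a small sketch): an Example serializes to a compact byte string and
# can be parsed back losslessly.
serialized = example.SerializeToString()
roundtrip = tf.train.Example()
roundtrip.ParseFromString(serialized)
print(len(serialized), roundtrip == example)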
# Let's first check how many [recognized=True] examples we have in each class.
for name in labels:
num_all_samples = len(list(tf.io.gfile.GFile('%s/%s.ndjson' % (data_path, name))))
num_recognized_samples = len(list(loaditer(name)))
print(name, num_all_samples, 'recognized', num_recognized_samples)
#@title `make_sharded_files()` code
#@markdown Helper code to create sharded recordio files.
#@markdown Simply **click "execute"** and continue to the next cell.
#@markdown No need to read through this code to understand the remainder of the Colab.
#@markdown
#@markdown If you want to have a look anyways, you can double-click this cell or click on the three dots
#@markdown and then select "Form" and then "Show Code" (shortcut `<Ctrl-M> <F>`).
# Helper code to create sharded recordio files.
# (No need to read through this.)
# The code in this cell simply takes a list of iterators and then
# randomly distributes the values returned by these iterators into sharded
# datasets (e.g. a train/eval/test split).
def rand_key(counts):
    # Returns a random key from "counts", using values as distribution.
r = random.randint(0, sum(counts.values()))
for key, count in counts.items():
if r > count or count == 0:
r -= count
else:
counts[key] -= 1
return key
def get_split(i, splits):
    # Returns key from "splits" for iteration "i".
i %= sum(splits.values())
for split in sorted(splits):
if i < splits[split]:
return split
i -= splits[split]
def make_counts(labels, total):
    # Generates counts for "labels" totaling "total".
counts = {}
for i, name in enumerate(labels):
counts[name] = total // (len(labels) - i)
total -= counts[name]
return counts
def example_to_dict(example):
    # Converts a tf.train.Example to a dictionary.
example_dict = {}
for name, value in example.features.feature.items():
if value.HasField('bytes_list'):
value = value.bytes_list.value
elif value.HasField('int64_list'):
value = value.int64_list.value
elif value.HasField('float_list'):
value = value.float_list.value
else:
raise 'Unknown *_list type!'
if len(value) == 1:
example_dict[name] = value[0]
else:
example_dict[name] = np.array(value)
return example_dict
def make_sharded_files(make_example, path, labels, iters, counts, splits,
shards=10, overwrite=False, report_dt=10, make_df=False):
    # Create sharded dataset from "iters".
    # Args:
    #   make_example: Converts object returned by elements of "iters"
    #     to tf.train.Example() proto.
    #   path: Directory that will contain recordio files.
    #   labels: Names of labels, will be written to "labels.txt".
    #   iters: List of iterables returning drawing objects.
    #   counts: Dictionary mapping class to number of examples.
    #   splits: Dictionary mapping filename to multiple examples. For example,
    #     splits=dict(a=2, b=1) will result in two examples being written to "a"
    #     for every example being written to "b".
    #   shards: Number of files to be created per split.
    #   overwrite: Whether a pre-existing directory should be overwritten.
    #   report_dt: Number of seconds between status updates (0=no updates).
    #   make_df: Also write data as pandas.DataFrame - do NOT use this with very
    #     large datasets that don't fit in memory!
    # Returns:
    #   Total number of examples written to disk per split.
assert len(iters) == len(labels)
# Prepare output.
if not tf.io.gfile.exists(path):
tf.io.gfile.makedirs(path)
paths = {
split: ['%s/%s-%05d-of-%05d' % (path, split, i, shards)
for i in range(shards)]
for split in splits
}
    assert overwrite or not tf.io.gfile.exists(next(iter(paths.values()))[0])  # dict views are not indexable in Python 3
writers = {
split: [tf.io.TFRecordWriter(ps[i]) for i in range(shards)]
for split, ps in paths.items()
}
t0 = time.time()
examples_per_split = collections.defaultdict(int)
i, n = 0, sum(counts.values())
counts = dict(**counts)
rows = []
# Create examples.
while sum(counts.values()):
name = rand_key(counts)
split = get_split(i, splits)
writer = writers[split][examples_per_split[split] % shards]
label = labels.index(name)
example = make_example(label, next(iters[label]))
writer.write(example.SerializeToString())
if make_df:
example.features.feature['split'].bytes_list.value.append(split.encode('utf8'))
rows.append(example_to_dict(example))
examples_per_split[split] += 1
i += 1
if report_dt > 0 and time.time() - t0 > report_dt:
print('processed %d/%d (%.2f%%)' % (i, n, 100. * i / n))
t0 = time.time()
# Store results.
for split in splits:
for writer in writers[split]:
writer.close()
with tf.io.gfile.GFile('%s/labels.txt' % path, 'w') as f:
f.write('\n'.join(labels))
with tf.io.gfile.GFile('%s/counts.json' % path, 'w') as f:
json.dump(examples_per_split, f)
if make_df:
df_path = '%s/dataframe.pkl' % path
print('Writing %s...' % df_path)
pd.DataFrame(rows).to_pickle(df_path)
return dict(**examples_per_split)
# Uses `dict_to_img()` from previous cell to create raster image.
def make_example_img(label, drawing):
    # Converts QuickDraw dictionary to example with rasterized data.
    # Args:
    #   label: Numerical representation of the label (e.g. '0' for labels[0]).
    #   drawing: Dictionary with QuickDraw data.
    # Returns:
    #   A tf.train.Example protocol buffer (with 'label', 'img_64', and
    #   additional metadata features).
example = tf.train.Example()
example.features.feature['label'].int64_list.value.append(label)
img_64 = np.asarray(dict_to_img(
drawing, img_sz=64, lw=4, maximize=True)).reshape(-1)
example.features.feature['img_64'].int64_list.value.extend(img_64)
example.features.feature['countrycode'].bytes_list.value.append(
drawing['countrycode'].encode())
example.features.feature['recognized'].int64_list.value.append(
drawing['recognized'])
example.features.feature['word'].bytes_list.value.append(
drawing['word'].encode())
ts = drawing['timestamp']
ts = time.mktime(time.strptime(ts[:ts.index('.')], '%Y-%m-%d %H:%M:%S'))
example.features.feature['timestamp'].int64_list.value.append(int(ts))
example.features.feature['key_id'].int64_list.value.append(
int(drawing['key_id']))
return example
# Create the (rasterized) dataset.
path = '%s/%s_img' % (data_path, dataset_name)
t0 = time.time()
examples_per_split = make_sharded_files(
make_example=make_example_img,
path=path,
labels=labels,
iters=[loaditer(name) for name in labels],
# Creating 50k train, 20k eval and 10k test examples.
counts=make_counts(labels, 80000),
splits=dict(train=5, eval=2, test=1),
overwrite=True,
# Note: Set this to False when generating large datasets.
make_df=True,
)
# If you don't see the final output below, it's probably because your VM
# has run out of memory and crashed!
# This can happen when make_df=True.
print('stored data to "%s"' % path)
print('generated %s examples in %d seconds' % (
examples_per_split, time.time() - t0))
# Convert stroke coordinates into normalized relative coordinates,
# one single list, and add a "third dimension" that indicates when
# a new stroke starts.
def dict_to_stroke(d):
norm = lambda x: (x - x.min()) / max(1, (x.max() - x.min()))
xy = np.concatenate([np.array(s, dtype=np.float32) for
s in d['drawing']], axis=1)
z = np.zeros(xy.shape[1])
if len(d['drawing']) > 1:
z[np.cumsum(np.array(list(map(lambda x: x.shape[1],
d['drawing'][:-1]))))] = 1
dxy = np.diff(norm(xy))
return np.concatenate([dxy, z.reshape((1, -1))[:, 1:]])
# Visualize and control output of `dict_to_stroke()`.
stroke = dict_to_stroke(sample[0])
# The first 2 dimensions are normalized dx/dy coordinates, and
# the third dimension indicates a new stroke.
xy = stroke[:2, :].cumsum(axis=1)
plt.plot(xy[0,:], -xy[1,:])
pxy = xy[:, stroke[2] != 0]
# Indicate the new stroke with a red circle.
plt.plot(pxy[0], -pxy[1], 'ro');
# Uses `dict_to_stroke()` from previous cell to create raster image.
def make_example_stroke(label, drawing):
    # Converts QuickDraw dictionary to example with stroke data.
    # Args:
    #   label: Numerical representation of the label (e.g. '0' for labels[0]).
    #   drawing: Dictionary with QuickDraw data.
    # Returns:
    #   A tf.train.Example protocol buffer (with 'label', 'stroke_x',
    #   'stroke_y', 'stroke_z', and additional metadata features).
example = tf.train.Example()
example.features.feature['label'].int64_list.value.append(label)
stroke = dict_to_stroke(drawing)
example.features.feature['stroke_x'].float_list.value.extend(stroke[0, :])
example.features.feature['stroke_y'].float_list.value.extend(stroke[1, :])
example.features.feature['stroke_z'].float_list.value.extend(stroke[2, :])
example.features.feature['stroke_len'].int64_list.value.append(
stroke.shape[1])
example.features.feature['countrycode'].bytes_list.value.append(
drawing['countrycode'].encode())
example.features.feature['recognized'].int64_list.value.append(
drawing['recognized'])
example.features.feature['word'].bytes_list.value.append(
drawing['word'].encode())
ts = drawing['timestamp']
ts = time.mktime(time.strptime(ts[:ts.index('.')], '%Y-%m-%d %H:%M:%S'))
example.features.feature['timestamp'].int64_list.value.append(int(ts))
example.features.feature['key_id'].int64_list.value.append(
int(drawing['key_id']))
return example
path = '%s/%s_stroke' % (data_path, dataset_name)
t0 = time.time()
examples_per_split = make_sharded_files(
make_example=make_example_stroke,
path=path,
labels=labels,
iters=[loaditer(name) for name in labels],
# Creating 50k train, 20k eval, 10k test examples. Takes ~2min
counts=make_counts(labels, 80000),
splits=dict(train=5, eval=2, test=1),
overwrite=True,
# Note: Set this to False when generating large datasets...
make_df=True,
)
print('stored data to "%s"' % path)
print('generated %s examples in %d seconds' % (examples_per_split, time.time() - t0))
# YOUR ACTION REQUIRED:
# Check out the files generated in $data_path
# Note that you can also inspect the files in http://drive.google.com if you
# used Drive as the destination.
# Let's look at a single file of the sharded dataset.
tf_record_path = '{}/{}_img/eval-00000-of-00010'.format(data_path, dataset_name)
# YOUR ACTION REQUIRED:
# Use `tf.data.TFRecordDataset()` to read a single record from the file and
# assign it to the variable `record`. What data type has this record?
# Hint: dataset is a Python "iterable".
# One possible solution (a sketch):
dataset = tf.data.TFRecordDataset([tf_record_path])
record = next(iter(dataset))  # a scalar tf.Tensor of dtype string
record
# Check out the features. They should correspond to what we generated in
# `make_example_img()` above.
example = tf.train.Example()
# Note: `.numpy()` returns the underlying string from the Tensor.
example.ParseFromString(record.numpy())
print(list(example.features.feature.keys()))
# YOUR ACTION REQUIRED:
# Extract the label and the image data from the example protobuf.
# Use above section "tf.train.Example" for reference.
# One possible solution: 'label' is a single int64, 'img_64' an int64 list.
label_int = example.features.feature['label'].int64_list.value[0]
img_64 = example.features.feature['img_64'].int64_list.value
# Visualize the image:
print(labels[label_int])
plt.matshow(np.array(img_64).reshape((64, 64)))
# YOUR ACTION REQUIRED:
# Check that we have an equal distribution of labels in the training files.
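# One possible approach (a sketch): count the labels in a single train shard.
label_counts = collections.Counter()
shard = '%s/%s_img/train-00000-of-00010' % (data_path, dataset_name)
for rec in tf.data.TFRecordDataset([shard]):
    ex = tf.train.Example()
    ex.ParseFromString(rec.numpy())
    label_counts[labels[ex.features.feature['label'].int64_list.value[0]]] += 1
print(label_counts)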
# If we want to create our own protocol buffers, we first need to install
# some programs.
!apt-get -y install protobuf-compiler python-pil python-lxml
# Step 1: Write a proto file that describes our data format.
# YOUR ACTION REQUIRED: Complete the definition of the "Person" message (you
# can use the slide for inspiration).
with open('person.proto', 'w') as f:
    # Possible answer -- the message fields are inferred from the usage below.
    f.write('''syntax = "proto3";

message Person {
  string name = 1;
  string email = 2;
  repeated int32 lucky_numbers = 3;
}''')
# Step 2: Compile proto definition to a Python file.
!protoc --python_out=. person.proto
!ls -lh
# Step 3: Import code from generated Python file.
from person_pb2 import Person
# Note: If you change the person_pb2 module, you'll need to restart the kernel
# to see the changes because Python will still remember the previous import.
person = Person()
person.name = 'John Doe'
person.email = 'john.doe@gmail.com'
person.lucky_numbers.extend([13, 99])
person.SerializeToString()
# YOUR ACTION REQUIRED:
# Compare the size of the serialized person structure in proto format
# vs. JSON encoded (you can use Python's json.dumps() and list members
# manually, or import google.protobuf.json_format).
# Which format is more efficient? Why?
# Which format is easier to use?
# Which format is more versatile?
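# Possible answer (sketch): compare the serialized sizes directly.
import json
proto_bytes = person.SerializeToString()
json_str = json.dumps(dict(name=person.name, email=person.email,
                           lucky_numbers=list(person.lucky_numbers)))
print('proto: %d bytes, json: %d bytes' % (len(proto_bytes), len(json_str)))
# Protos are usually smaller (numeric field tags instead of repeated field
# names) and strongly typed; JSON is human-readable and needs no schema.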
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Get the data
Step5: Create your own group -- the more categories you include the more challenging the classification task will be...
Step6: Inspect the data
Step7: Let's further explore what the NDJSON file format is.
Step11: As we can see, it's a format that contains one JSON dictionary per line.
Step14: Rasterize
Step15: Protobufs and tf.train.Example
Step16: Create datasets
Step22: Sharding
Step24: Create IMG dataset
Step25: We will now create a dataset with 80k samples consisting of 50k train, 20k eval, and 10k test examples.
Step27: Create STROKE dataset
Step28: ----- Optional part -----
Step29: More on protobufs
|
1,668
|
<ASSISTANT_TASK:>
Python Code:
# Imports needed below; `custom_funcs` is an assumption about the course's
# helper module that provides load_physicians_network().
import itertools
import matplotlib.pyplot as plt
import networkx as nx
import custom_funcs as cf

# Load the network.
G = cf.load_physicians_network()
# Make a Circos plot of the graph
import numpy as np
from circos import CircosPlot
nodes = sorted(G.nodes())
edges = G.edges()
edgeprops = dict(alpha=0.1)
nodecolor = plt.cm.viridis(np.arange(len(nodes)) / len(nodes))
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111)
c = CircosPlot(nodes, edges, radius=10, ax=ax, edgeprops=edgeprops, nodecolor=nodecolor)
c.draw()
# Example code.
def in_triangle(G, node):
    """Returns whether a given node is present in a triangle relationship or not."""
# We first assume that the node is not present in a triangle.
is_in_triangle = False
# Then, iterate over every pair of the node's neighbors.
for nbr1, nbr2 in itertools.combinations(G.neighbors(node), 2):
# Check to see if there is an edge between the node's neighbors.
# If there is an edge, then the given node is present in a triangle.
if G.has_edge(nbr1, nbr2):
is_in_triangle = True
# We break because any triangle that is present automatically
# satisfies the problem requirements.
break
return is_in_triangle
in_triangle(G, 3)
nx.triangles(G, 3)
# Possible answer
def get_triangles(G, node):
neighbors = set(G.neighbors(node))
triangle_nodes = set()
    # Fill in the rest of the code below.
    triangle_nodes.add(node)
    # Iterate over every pair of the node's neighbors.
    for nbr1, nbr2 in itertools.combinations(neighbors, 2):
        # If there is an edge between the node's neighbors, then all three
        # nodes form a triangle, so record both neighbors.
        if G.has_edge(nbr1, nbr2):
            triangle_nodes.add(nbr1)
            triangle_nodes.add(nbr2)
return triangle_nodes
# Verify your answer with the following funciton call. Should return something of the form:
# {3, 9, 11, 41, 42, 67}
get_triangles(G, 3)
# Then, draw out those nodes.
nx.draw(G.subgraph(get_triangles(G, 3)), with_labels=True)
# Compare for yourself that those are the only triangles that node 3 is involved in.
neighbors3 = list(G.neighbors(3))  # list() so we can append below (newer NetworkX returns an iterator)
neighbors3.append(3)
nx.draw(G.subgraph(neighbors3), with_labels=True)
# Fill in your code here.
def get_open_triangles(G, node):
    """Enumerate every open triangle centred on `node`.

    There are many ways to represent this. One may choose to represent only
    the nodes involved in an open triangle; this is not the approach taken
    here. Rather, this code explicitly enumerates every open triangle present.
    """
    open_triangle_nodes = []
    neighbors = set(G.neighbors(node))
    for nbr1, nbr2 in itertools.combinations(neighbors, 2):
        # If there is NO edge between the two neighbors, then
        # (nbr1, node, nbr2) forms an open triangle.
        if not G.has_edge(nbr1, nbr2):
            open_triangle_nodes.append([nbr1, node, nbr2])
return open_triangle_nodes
# # Uncomment the following code if you want to draw out each of the triplets.
nodes = get_open_triangles(G, 2)
for i, triplet in enumerate(nodes):
fig = plt.figure(i)
nx.draw(G.subgraph(triplet), with_labels=True)
print(get_open_triangles(G, 3))
len(get_open_triangles(G, 3))
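# Hedged sketch of a toy friend recommender built on triangle closure:
# rank non-neighbors of `node` by how many common friends they share with it.
from collections import Counter

def recommend_friends(G, node, k=3):
    candidates = Counter()
    for nbr in G.neighbors(node):
        for fof in G.neighbors(nbr):  # friends of friends
            if fof != node and not G.has_edge(node, fof):
                candidates[fof] += 1
    return candidates.most_common(k)

recommend_friends(G, 3)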
list(nx.find_cliques(G))
def maximal_cliques_of_size(size, G):
    # Possible answer: keep only the maximal cliques with exactly `size` nodes.
    return [clique for clique in nx.find_cliques(G) if len(clique) == size]

maximal_cliques_of_size(2, G)
ccsubgraphs = list(nx.connected_component_subgraphs(G))
len(ccsubgraphs)
# Start by labelling each node in the master graph G by some number
# that represents the subgraph that contains the node.
for i, g in enumerate(ccsubgraphs):
    # Possible answer (uses the older NetworkX node-attribute API):
    for n in g.nodes():
        G.node[n]['subgraph'] = i
# Then, pass in a list of nodecolors that correspond to the node order.
# Feel free to change the colours around!
node_cmap = {0: 'red', 1:'blue', 2: 'green', 3:'yellow'}
nodecolor = [node_cmap[G.node[n]['subgraph']] for n in sorted(G.nodes())]
nodes = sorted(G.nodes())
edges = G.edges()
edgeprops = dict(alpha=0.1)
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111)
c = CircosPlot(nodes, edges, radius=10, ax=ax, fig=fig, edgeprops=edgeprops, nodecolor=nodecolor)
c.draw()
plt.savefig('images/physicians.png', dpi=300)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Question
Step3: In reality, NetworkX already has a function that counts the number of triangles that any given node is involved in. This is probably more useful than knowing whether a node is present in a triangle or not, but the above code was simply for practice.
Step5: Exercise
Step7: Friend Recommendation
Step8: Triangle closure is also the core idea behind social networks' friend recommendation systems; of course, it's definitely more complicated than what we've implemented here.
Step9: Exercise
Step10: Connected Components
Step11: Exercise
|
1,669
|
<ASSISTANT_TASK:>
Python Code:
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2022"
import os
from sklearn.metrics import classification_report
import torch
import torch.nn as nn
import transformers
from transformers import BertModel, BertTokenizer
from torch_shallow_neural_classifier import TorchShallowNeuralClassifier
from torch_rnn_classifier import TorchRNNModel
from torch_rnn_classifier import TorchRNNClassifierModel
from torch_rnn_classifier import TorchRNNClassifier
import sst
import utils
utils.fix_random_seeds()
SST_HOME = os.path.join("data", "sentiment")
transformers.logging.set_verbosity_error()
weights_name = 'bert-base-cased'
bert_tokenizer = BertTokenizer.from_pretrained(weights_name)
bert_model = BertModel.from_pretrained(weights_name)
example_texts = [
"Encode sentence 1. [SEP] And sentence 2!",
"Bert knows Snuffleupagus"]
example_ids = bert_tokenizer.batch_encode_plus(
example_texts,
add_special_tokens=True,
return_attention_mask=True,
padding='longest')
example_ids.keys()
example_ids['input_ids']
example_ids['attention_mask']
X_example = torch.tensor(example_ids['input_ids'])
X_example_mask = torch.tensor(example_ids['attention_mask'])
with torch.no_grad():
reps = bert_model(X_example, attention_mask=X_example_mask)
reps.pooler_output.shape
reps.last_hidden_state.shape
def bert_phi(text):
input_ids = bert_tokenizer.encode(text, add_special_tokens=True)
X = torch.tensor([input_ids])
with torch.no_grad():
reps = bert_model(X)
return reps.last_hidden_state.squeeze(0).numpy()
def bert_classifier_phi(text):
reps = bert_phi(text)
#return reps.mean(axis=0) # Another good, easy option.
return reps[0]
train = sst.train_reader(SST_HOME)
dev = sst.dev_reader(SST_HOME)
X_str_train = train.sentence.values
y_train = train.label.values
X_str_dev = dev.sentence.values
y_dev = dev.label.values
%time X_train = [bert_classifier_phi(text) for text in X_str_train]
%time X_dev = [bert_classifier_phi(text) for text in X_str_dev]
model = TorchShallowNeuralClassifier(
early_stopping=True,
hidden_dim=300)
%time _ = model.fit(X_train, y_train)
preds = model.predict(X_dev)
print(classification_report(y_dev, preds, digits=3))
def fit_shallow_network(X, y):
mod = TorchShallowNeuralClassifier(
hidden_dim=300,
early_stopping=True)
mod.fit(X, y)
return mod
%%time
_ = sst.experiment(
sst.train_reader(SST_HOME),
bert_classifier_phi,
fit_shallow_network,
assess_dataframes=sst.dev_reader(SST_HOME),
vectorize=False) # Pass in the BERT reps directly!
def fit_rnn(X, y):
mod = TorchRNNClassifier(
vocab=[],
early_stopping=True,
use_embedding=False) # Pass in the BERT hidden states directly!
mod.fit(X, y)
return mod
%%time
_ = sst.experiment(
sst.train_reader(SST_HOME),
bert_phi,
fit_rnn,
assess_dataframes=sst.dev_reader(SST_HOME),
vectorize=False) # Pass in the BERT hidden states directly!
class HfBertClassifierModel(nn.Module):
def __init__(self, n_classes, weights_name='bert-base-cased'):
super().__init__()
self.n_classes = n_classes
self.weights_name = weights_name
self.bert = BertModel.from_pretrained(self.weights_name)
self.bert.train()
self.hidden_dim = self.bert.embeddings.word_embeddings.embedding_dim
# The only new parameters -- the classifier:
self.classifier_layer = nn.Linear(
self.hidden_dim, self.n_classes)
def forward(self, indices, mask):
reps = self.bert(
indices, attention_mask=mask)
return self.classifier_layer(reps.pooler_output)
class HfBertClassifier(TorchShallowNeuralClassifier):
def __init__(self, weights_name, *args, **kwargs):
self.weights_name = weights_name
self.tokenizer = BertTokenizer.from_pretrained(self.weights_name)
super().__init__(*args, **kwargs)
self.params += ['weights_name']
def build_graph(self):
return HfBertClassifierModel(self.n_classes_, self.weights_name)
def build_dataset(self, X, y=None):
data = self.tokenizer.batch_encode_plus(
X,
max_length=None,
add_special_tokens=True,
padding='longest',
return_attention_mask=True)
indices = torch.tensor(data['input_ids'])
mask = torch.tensor(data['attention_mask'])
if y is None:
dataset = torch.utils.data.TensorDataset(indices, mask)
else:
self.classes_ = sorted(set(y))
self.n_classes_ = len(self.classes_)
class2index = dict(zip(self.classes_, range(self.n_classes_)))
y = [class2index[label] for label in y]
y = torch.tensor(y)
dataset = torch.utils.data.TensorDataset(indices, mask, y)
return dataset
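# Minimal usage sketch (hyperparameters are illustrative; left commented out
# because full fine-tuning is slow):
# clf = HfBertClassifier('bert-base-cased', batch_size=8, max_iter=2)
# _ = clf.fit(X_str_train[:64], y_train[:64])
# preds = clf.predict(X_str_dev[:8])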
def bert_fine_tune_phi(text):
return text
def fit_hf_bert_classifier_with_hyperparameter_search(X, y):
basemod = HfBertClassifier(
weights_name='bert-base-cased',
batch_size=8, # Small batches to avoid memory overload.
max_iter=1, # We'll search based on 1 iteration for efficiency.
n_iter_no_change=5, # Early-stopping params are for the
early_stopping=True) # final evaluation.
param_grid = {
'gradient_accumulation_steps': [1, 4, 8],
'eta': [0.00005, 0.0001, 0.001],
'hidden_dim': [100, 200, 300]}
bestmod = utils.fit_classifier_with_hyperparameter_search(
X, y, basemod, cv=3, param_grid=param_grid)
return bestmod
%%time
bert_classifier_xval = sst.experiment(
sst.train_reader(SST_HOME),
bert_fine_tune_phi,
fit_hf_bert_classifier_with_hyperparameter_search,
assess_dataframes=sst.dev_reader(SST_HOME),
vectorize=False) # Pass in the BERT hidden state directly!
optimized_bert_classifier = bert_classifier_xval['model']
# Remove the rest of the experiment results to clear out some memory:
del bert_classifier_xval
def fit_optimized_hf_bert_classifier(X, y):
optimized_bert_classifier.max_iter = 1000
optimized_bert_classifier.fit(X, y)
return optimized_bert_classifier
test_df = sst.sentiment_reader(
os.path.join(SST_HOME, "sst3-test-labeled.csv"))
%%time
_ = sst.experiment(
sst.train_reader(SST_HOME),
bert_fine_tune_phi,
fit_optimized_hf_bert_classifier,
assess_dataframes=test_df,
vectorize=False) # Pass in the BERT hidden state directly!
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Contents
Step2: The transformers library does a lot of logging. To avoid ending up with a cluttered notebook, I am changing the logging level. You might want to skip this as you scale up to building production systems, since the logging is very good – it gives you a lot of insights into what the models and code are doing.
Step3: Hugging Face BERT models and tokenizers
Step4: There are lots other options for pretrained weights. See this Hugging Face directory.
Step5: For modeling (as opposed to creating static representations), we will mostly process examples in batches – generally very small ones, as these models consume a lot of memory. Here's a small batch of texts to use as the starting point for illustrations
Step6: We will often need to pad (and perhaps truncate) token lists so that we can work with fixed-dimensional tensors
Step7: The token_type_ids is used for multi-text inputs like NLI. The 'input_ids' field gives the indices for each of the two examples
Step8: Notice that the final two tokens of the second example are pad tokens.
Step9: Finally, we can run these indices and masks through the pretrained model
Step10: Hugging Face BERT models create a special pooler_output representation that is the final representation above the [CLS] extended with a single layer of parameters
Step11: We have two examples, each representented by a single vector of dimension 768, which is $d_{model}$ for BERT base using the notation from the original Transformers paper. This is an easy basis for fine-tuning, as we will see.
Step12: Here, we have 2 examples, each padded to the length of the longer one (12), and each of those representations has dimension 768. These representations can be used for sequence modeling, or pooled somehow for simple classifiers.
Step13: Simple feed-forward experiment
Step14: Next we read in the SST train and dev splits
Step15: Split the input/output pairs out into separate lists
Step16: In the next step, we featurize all of the examples. These steps are likely to be the slowest in these experiments
Step17: Now that all the examples are featurized, we can fit a model and evaluate it
Step18: A feed-forward experiment with the sst module
Step19: An RNN experiment with the sst module
Step20: BERT fine-tuning with Hugging Face
Step21: As you can see, self.bert does the heavy-lifting
Step22: HfBertClassifier experiment
Step23: And now on to the final test-set evaluation, using the best model from above
|
1,670
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'sandbox-1', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
1,671
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import importlib
import os, sys; sys.path.insert(1, os.path.join('../utils'))
from utils2 import *
import torch, torch.nn as nn, torch.nn.functional as F, torch.optim as optim
from torch.autograd import Variable
from torch.utils.serialization import load_lua
from torch.utils.data import DataLoader
from torchvision import transforms, models, datasets
path = '../data/nst/'
fnames = pickle.load(open(path+'fnames.pkl','rb'))
img = Image.open(path + fnames[0]); img
rn_mean = np.array([123.68, 116.779, 103.939], dtype=np.float32).reshape((1,1,1,3))
preproc = lambda x: (x - rn_mean)[:,:,:,::-1]
img_arr = preproc(np.expand_dims(np.array(img),0))
shp = img_arr.shape
deproc = lambda x: x[:,:,:,::-1] + rn_mean
def download_convert_vgg16_model():
model_url = 'http://cs.stanford.edu/people/jcjohns/fast-neural-style/models/vgg16.t7'
file = get_file(model_url, cache_subdir='models')
vgglua = load_lua(file).parameters()
vgg = models.VGGFeature()
for (src, dst) in zip(vgglua[0], vgg.parameters()): dst[:] = src[:]
torch.save(vgg.state_dict(), path + 'vgg16_feature.pth')
url = 'https://s3-us-west-2.amazonaws.com/jcjohns-models/'
fname = 'vgg16-00b39a1b.pth'
file = get_file(fname, url+fname, cache_subdir='models')
vgg = models.vgg.vgg16()
vgg.load_state_dict(torch.load(file))
optimizer = optim.Adam(vgg.parameters())
vgg.cuda();
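# Hedged sketch (not from the original notebook): one way to expose
# intermediate VGG activations for perceptual losses is a forward hook.
# The layer indices below are illustrative assumptions, not fixed choices.
vgg_activations = {}
def save_activation(name):
    def hook(module, inp, out):
        vgg_activations[name] = out
    return hook
for idx in [3, 8, 15, 22]:  # assumed ReLU outputs inside vgg.features
    vgg.features[idx].register_forward_hook(save_activation('feat_%d' % idx))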
arr_lr = bcolz.open(path + 'trn_resized_72.bc')[:]
arr_hr = bcolz.open(path + 'trn_resized_288.bc')[:]
arr = bcolz.open(path + 'trn_resized.bc')[:]
x = Variable(arr[0])
y = model(x)
url = 'http://www.files.fast.ai/models/'
fname = 'imagenet_class_index.json'
fpath = get_file(fname, url + fname, cache_subdir='models')
class ResidualBlock(nn.Module):
def __init__(self, num):
        super(ResidualBlock, self).__init__()
self.c1 = nn.Conv2d(num, num, kernel_size=3, stride=1, padding=1)
self.c2 = nn.Conv2d(num, num, kernel_size=3, stride=1, padding=1)
self.b1 = nn.BatchNorm2d(num)
self.b2 = nn.BatchNorm2d(num)
def forward(self, x):
h = F.relu(self.b1(self.c1(x)))
h = self.b2(self.c2(h))
return h + x
class FastStyleNet(nn.Module):
def __init__(self):
super(FastStyleNet, self).__init__()
        # nn.ModuleList (rather than a plain Python list) so the sub-modules'
        # parameters are registered with the network.
        self.cs = nn.ModuleList([
            nn.Conv2d(3, 32, kernel_size=9, stride=1, padding=4),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1)])
        self.b1s = nn.ModuleList([nn.BatchNorm2d(i) for i in [32, 64, 128]])
        self.rs = nn.ModuleList([ResidualBlock(128) for i in range(5)])
        # The first transposed conv must take the 128 channels produced above.
        self.ds = nn.ModuleList([
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1)])
        self.b2s = nn.ModuleList([nn.BatchNorm2d(i) for i in [64, 32]])
        self.d3 = nn.Conv2d(32, 3, kernel_size=9, stride=1, padding=4)

    def forward(self, x):
        h = x
        for i in range(3): h = F.relu(self.b1s[i](self.cs[i](h)))
        for r in self.rs: h = r(h)
        for i in range(2): h = F.relu(self.b2s[i](self.ds[i](h)))
        return self.d3(h)
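# Quick shape sanity check (a minimal sketch; Variable is only needed on the
# older PyTorch versions this notebook targets):
net = FastStyleNet()
out = net(Variable(torch.randn(1, 3, 288, 288)))
out.size()  # expected: (1, 3, 288, 288) for this stride-2 down/up architecture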
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup
Step2: Create Model
|
1,672
|
<ASSISTANT_TASK:>
Python Code:
import quandl
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
start='2015-01-01'
end='2017-01-01'
united = quandl.get("WIKI/UAL", start_date=start, end_date=end)
united.head()
american = quandl.get("WIKI/AAL", start_date=start, end_date=end)
american.head()
american['Adj. Close'].plot(label='AAL', figsize=(12,8))
united['Adj. Close'].plot(label='UAL')
plt.legend();
np.corrcoef(american['Adj. Close'], united['Adj. Close'])
spread = american['Adj. Close'] - united['Adj. Close']
spread.plot(label='Spread', figsize=(12,8))
plt.axhline(spread.mean(), c='r');
def zscore(stocks):
return (stocks - stocks.mean())/np.std(stocks)
zscore(spread).plot(figsize=(14,8))
plt.axhline(zscore(spread).mean(), c='black');
plt.axhline(1.0,c='g',ls='--')
plt.axhline(-1.0,c='r',ls='--');
plt.title('REVERSION TO THE MEAN STRATEGY')
spread_MA_1 = spread.rolling(1).mean() # one day moving average
spread_MA_30 = spread.rolling(30).mean() # 30 day moving average
std_30 = spread.rolling(30).std() # 30 day standard deviation
z_score_30_1 = (spread_MA_1 - spread_MA_30)/std_30
z_score_30_1.plot(figsize=(12,8), label='Rolling 30 day Z-score')
plt.axhline(0, color='black')
plt.axhline(1.0,c='g',ls='--')
plt.axhline(-1.0,c='r',ls='--');
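# Hedged sketch (not a tested strategy): turn the rolling z-score into a
# naive long/short signal on the spread -- long below -1, short above +1.
signals = pd.Series(0, index=z_score_30_1.index)
signals[z_score_30_1 > 1.0] = -1   # short the spread
signals[z_score_30_1 < -1.0] = 1   # long the spread
signals.plot(figsize=(12, 4), title='Naive mean-reversion signal');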
import math
import numpy
import numpy.random as nrand
"""
Note - for some of the metrics the absolute value is returned. This is because
if the risk (loss) is higher we want to discount the expected excess return
from the portfolio by a higher amount. Therefore risk should be positive.
"""
def vol(returns):
# Return the standard deviation of returns
return numpy.std(returns)
def beta(returns, market):
# Create a matrix of [returns, market]
m = numpy.matrix([returns, market])
    # Return the covariance of returns with the market divided by the
    # variance of the market returns (the textbook definition of beta)
    return numpy.cov(m)[0][1] / numpy.var(market)
def lpm(returns, threshold, order):
# This method returns a lower partial moment of the returns
# Create an array he same length as returns containing the minimum return threshold
threshold_array = numpy.empty(len(returns))
threshold_array.fill(threshold)
# Calculate the difference between the threshold and the returns
diff = threshold_array - returns
# Set the minimum of each to 0
diff = diff.clip(min=0)
# Return the sum of the different to the power of order
return numpy.sum(diff ** order) / len(returns)
def hpm(returns, threshold, order):
# This method returns a higher partial moment of the returns
# Create an array he same length as returns containing the minimum return threshold
threshold_array = numpy.empty(len(returns))
threshold_array.fill(threshold)
# Calculate the difference between the returns and the threshold
diff = returns - threshold_array
# Set the minimum of each to 0
diff = diff.clip(min=0)
# Return the sum of the different to the power of order
return numpy.sum(diff ** order) / len(returns)
def var(returns, alpha):
# This method calculates the historical simulation var of the returns
sorted_returns = numpy.sort(returns)
# Calculate the index associated with alpha
index = int(alpha * len(sorted_returns))
# VaR should be positive
return abs(sorted_returns[index])
def cvar(returns, alpha):
# This method calculates the condition VaR of the returns
sorted_returns = numpy.sort(returns)
# Calculate the index associated with alpha
index = int(alpha * len(sorted_returns))
# Calculate the total VaR beyond alpha
sum_var = sorted_returns[0]
for i in range(1, index):
sum_var += sorted_returns[i]
# Return the average VaR
# CVaR should be positive
return abs(sum_var / index)
def prices(returns, base):
# Converts returns into prices
s = [base]
for i in range(len(returns)):
s.append(base * (1 + returns[i]))
return numpy.array(s)
def dd(returns, tau):
# Returns the draw-down given time period tau
values = prices(returns, 100)
pos = len(values) - 1
pre = pos - tau
drawdown = float('+inf')
# Find the maximum drawdown given tau
while pre >= 0:
dd_i = (values[pos] / values[pre]) - 1
if dd_i < drawdown:
drawdown = dd_i
pos, pre = pos - 1, pre - 1
# Drawdown should be positive
return abs(drawdown)
def max_dd(returns):
# Returns the maximum draw-down for any tau in (0, T) where T is the length of the return series
max_drawdown = float('-inf')
for i in range(0, len(returns)):
drawdown_i = dd(returns, i)
if drawdown_i > max_drawdown:
max_drawdown = drawdown_i
# Max draw-down should be positive
return abs(max_drawdown)
def average_dd(returns, periods):
# Returns the average maximum drawdown over n periods
drawdowns = []
for i in range(0, len(returns)):
drawdown_i = dd(returns, i)
drawdowns.append(drawdown_i)
    drawdowns = sorted(drawdowns, reverse=True)  # largest drawdowns first
total_dd = abs(drawdowns[0])
for i in range(1, periods):
total_dd += abs(drawdowns[i])
return total_dd / periods
def average_dd_squared(returns, periods):
# Returns the average maximum drawdown squared over n periods
drawdowns = []
for i in range(0, len(returns)):
drawdown_i = math.pow(dd(returns, i), 2.0)
drawdowns.append(drawdown_i)
    drawdowns = sorted(drawdowns, reverse=True)  # largest drawdowns first
total_dd = abs(drawdowns[0])
for i in range(1, periods):
total_dd += abs(drawdowns[i])
return total_dd / periods
def treynor_ratio(er, returns, market, rf):
return (er - rf) / beta(returns, market)
def sharpe_ratio(er, returns, rf):
return (er - rf) / vol(returns)
def information_ratio(returns, benchmark):
diff = returns - benchmark
return numpy.mean(diff) / vol(diff)
def modigliani_ratio(er, returns, benchmark, rf):
np_rf = numpy.empty(len(returns))
np_rf.fill(rf)
rdiff = returns - np_rf
bdiff = benchmark - np_rf
return (er - rf) * (vol(rdiff) / vol(bdiff)) + rf
def excess_var(er, returns, rf, alpha):
return (er - rf) / var(returns, alpha)
def conditional_sharpe_ratio(er, returns, rf, alpha):
return (er - rf) / cvar(returns, alpha)
def omega_ratio(er, returns, rf, target=0):
return (er - rf) / lpm(returns, target, 1)
def sortino_ratio(er, returns, rf, target=0):
return (er - rf) / math.sqrt(lpm(returns, target, 2))
def kappa_three_ratio(er, returns, rf, target=0):
    return (er - rf) / math.pow(lpm(returns, target, 3), 1.0 / 3.0)
def gain_loss_ratio(returns, target=0):
return hpm(returns, target, 1) / lpm(returns, target, 1)
def upside_potential_ratio(returns, target=0):
return hpm(returns, target, 1) / math.sqrt(lpm(returns, target, 2))
def calmar_ratio(er, returns, rf):
return (er - rf) / max_dd(returns)
def sterling_ratio(er, returns, rf, periods):
return (er - rf) / average_dd(returns, periods)
def burke_ratio(er, returns, rf, periods):
return (er - rf) / math.sqrt(average_dd_squared(returns, periods))
def test_risk_metrics():
# This is just a testing method
r = nrand.uniform(-1, 1, 50)
m = nrand.uniform(-1, 1, 50)
print("vol =", vol(r))
print("beta =", beta(r, m))
print("hpm(0.0)_1 =", hpm(r, 0.0, 1))
print("lpm(0.0)_1 =", lpm(r, 0.0, 1))
print("VaR(0.05) =", var(r, 0.05))
print("CVaR(0.05) =", cvar(r, 0.05))
print("Drawdown(5) =", dd(r, 5))
print("Max Drawdown =", max_dd(r))
def test_risk_adjusted_metrics():
# Returns from the portfolio (r) and market (m)
r = nrand.uniform(-1, 1, 50)
m = nrand.uniform(-1, 1, 50)
# Expected return
e = numpy.mean(r)
# Risk free rate
f = 0.06
# Risk-adjusted return based on Volatility
print("Treynor Ratio =", treynor_ratio(e, r, m, f))
print("Sharpe Ratio =", sharpe_ratio(e, r, f))
print("Information Ratio =", information_ratio(r, m))
# Risk-adjusted return based on Value at Risk
print("Excess VaR =", excess_var(e, r, f, 0.05))
print("Conditional Sharpe Ratio =", conditional_sharpe_ratio(e, r, f, 0.05))
# Risk-adjusted return based on Lower Partial Moments
print("Omega Ratio =", omega_ratio(e, r, f))
print("Sortino Ratio =", sortino_ratio(e, r, f))
print("Kappa 3 Ratio =", kappa_three_ratio(e, r, f))
print("Gain Loss Ratio =", gain_loss_ratio(r))
print("Upside Potential Ratio =", upside_potential_ratio(r))
# Risk-adjusted return based on Drawdown risk
print("Calmar Ratio =", calmar_ratio(e, r, f))
print("Sterling Ratio =", sterling_ration(e, r, f, 5))
print("Burke Ratio =", burke_ratio(e, r, f, 5))
if __name__ == "__main__":
test_risk_metrics()
test_risk_adjusted_metrics()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Co-integration (advanced topic) is harder to find than correlation.
Step2: Conclusion
Step3: Custom Z-Score function
Step4: Calculating a rolling Z-Score of 30-day time period
Step6: Measures of Risk-adjusted Returns
|
1,673
|
<ASSISTANT_TASK:>
Python Code:
from seuif97 import *
# State 1
p1 = 8.0 # in MPa
t1 = px2t(p1, 1)
h1 = px2h(p1, 1) # h1 = 2758.0 From table A-3 kj/kg
s1 = px2s(p1, 1) # s1 = 5.7432 From table A-3 kj/kg.k
# State 2 ,p2=0.008
p2 = 0.008
s2 = s1
t2 = ps2t(p2, s2)
h2 = ps2h(p2, s2)
# State 3 is saturated liquid at 0.008 MPa
p3 = 0.008
t3 = px2t(p3, 0)
h3 = px2h(p3, 0) # kj/kg
s3 = px2s(p3, 0)
# State 4
p4 = p1
s4 = s3
h4 = ps2h(p4, s4)
t4 = ps2t(p4, s4)
# Part(a)
# Mass and energy rate balances for control volumes
# around the turbine and pump give, respectively
# turbine
wtdot = h1 - h2
# pump
wpdot = h4-h3
# The rate of heat transfer to the working fluid as it passes
# through the boiler is determined using mass and energy rate balances as
qindot = h1-h4
# thermal efficiency
eta = (wtdot-wpdot)/qindot
# Result for part a
print('(a) The thermal efficiency for the cycle is {:>.2f}%'.format(eta*100))
# Part(b)
# back work ratio:bwr, defined as the ratio of the pump work input to the work
# developed by the turbine.
bwr = wpdot/wtdot #
# Result
print('(b) The back work ratio is {:>.2f}%'.format(bwr*100))
# Part(c)
Wcycledot = 100.00 # the net power output of the cycle in MW
mdot = (Wcycledot*10**3*3600)/((h1-h2)-(h4-h3)) # mass flow rate in kg/h
# Result
print('(c) The mass flow rate of the steam is {:>.2f}kg/h'.format(mdot))
# Part(d)
Qindot = mdot*qindot/(3600*10**3) # in MW
# Results
print('(d) The rate of heat transfer Qindot into the working fluid as' +
' it passes through the boiler is {:>.2f}MW'.format(Qindot))
# Part(e)
Qoutdot = mdot*(h2-h3)/(3600*10**3) # in MW
# Results
print('(e) The rate of heat transfer Qoutdot from the condensing steam ' +
'as it passes through the condenser is {:>.2f}MW.'.format(Qoutdot))
# Part(f)
# Given:
tcwin = 15
tcwout = 35
hcwout = tx2h(tcwout, 0) # From table A-2,hcwout= 146.68 kj/kg
hcwin = tx2h(tcwin, 0) # hcwin 62.99
mcwdot = (Qoutdot*10**3*3600)/(hcwout-hcwin) # in kg/h
# Results
print('(f) The mass flow rate of the condenser cooling water is {:>.2f}kg/h.'.format(mcwdot))
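# Cross-check for part (a): thermal efficiency can equivalently be computed
# as 1 - Qout/Qin; this should match the earlier result up to rounding.
print('Check: eta = 1 - Qoutdot/Qindot = {:>.2f}%'.format(100*(1 - Qoutdot/Qindot)))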
from seuif97 import *
# State 1
p1 = 8.0 # in MPa
t1 =px2t(p1,1)
h1=px2h(p1,1) # h1 = 2758.0 From table A-3 kj/kg
s1=px2s(p1,1) # s1 = 5.7432 From table A-3 kj/kg.k
# State 2 ,p2=0.008
p2=0.008
s2s = s1
h2s=ps2h(p2,s2s)
t2s=ps2t(p2,s2s)
etat_t=0.85
h2=h1-etat_t*(h1-h2s)
t2 =ph2t(p2,h2)
s2 =ph2s(p2,h2)
# State 3 is saturated liquid at 0.008 MPa
p3 = 0.008
t3=px2t(p3,0)
h3 =px2h(p3,0) # kj/kg
s3 =px2s(p3,0)
#State 4
p4 = p1
s4s=s3
h4s =ps2h(p4,s4s)
t4s =ps2t(p4,s4s)
etat_p=0.85
h4=h3+(h4s-h3)/etat_p
t4 =ph2t(p4,h4)
s4 =ph2s(p4,h4)
# Part(a)
eta = ((h1-h2)-(h4-h3))/(h1-h4) # thermal efficiency
# Result for part (a)
print('Thermal efficiency is: {:>.2f}%'.format(100*eta))
# Part(b)
Wcycledot = 100 # given,a net power output of 100 MW
# Calculations
mdot = (Wcycledot*(10**3)*3600)/((h1-h2)-(h4-h3))
# Result for part (b)
print('The mass flow rate of steam for a net power output of 100 MW is {:>.2f}kg/h'.format(mdot))
# Part(c)
Qindot = mdot*(h1-h4)/(3600 * 10**3)
# Result
print('The rate of heat transfer Qindot into the working fluid as it passes through the boiler, is {:>.2f}MW.'.format(Qindot))
# Part(d)
Qoutdot = mdot*(h2-h3)/(3600*10**3)
# Result
print('The rate of heat transfer Qoutdot from the condensing steam as it passes through the condenser, is {:>.2f}MW.'.format(Qoutdot))
# Part(e)
tcwin = 15
tcwout = 35
hcwout = tx2h(tcwout, 0) # From table A-2,hcwout= 146.68 kj/kg
hcwin = tx2h(tcwin, 0) # hcwin 62.99
mcwdot = (Qoutdot*10**3*3600)/(hcwout-hcwin)
# Result
print('The mass flow rate of the condenser cooling water, is {:>.2f}kg/h'.format(mcwdot))
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
plt.figure(figsize=(10.0,5.0))
# saturated vapor and liquid entropy lines
npt = np.linspace(10,647.096-273.15,200) # range of temperatures
svap = [tx2s(t, 1) for t in npt]   # saturated vapor line
sliq = [tx2s(t, 0) for t in npt]   # saturated liquid line
plt.plot(svap, npt, 'r-')
plt.plot(sliq, npt, 'b-')
t=[t1,t2s,t3,t4s+15]
s=[s1,s2s,s3,s4s]
# point 5
t.append(px2t(p1,0))
s.append(px2s(p1,0))
t.append(t1)
s.append(s1)
plt.plot(s, t, 'ko-')
tb=[t1,t2]
sb=[s1,s2]
plt.plot(sb, tb, 'k--')
tist=[t2,t2s]
sist=[s2,s2s]
plt.plot(sist, tist, 'ko-')
sp=[s3,s3+0.3]
tp=[t3,ps2t(p4,s3+0.3)+15]
plt.plot(sp, tp, 'ko--')
tist=[t2,t2s]
sist=[s2,px2s(p2,1)]
plt.plot(sist, tist, 'g-')
plt.xlabel('Entropy (kJ/(kg K)')
plt.ylabel('Temperature (°C)')
plt.grid()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1.2 Analysis of the Cycle
Step2: (b) The back work ratio is
Step3: 2 Example8.2
Step4: 2.2 Analysis of the Cycle
Step5: 1.2.3 T-S Diagram
|
1,674
|
<ASSISTANT_TASK:>
Python Code:
from OCR_lib import word_to_vec, reshape_embeddings, detect_text
import spacy
import tensorflow as tf
import numpy as np
from PIL import Image
import IPython.display as display
TEST_STRING = "Test string"
word_embedding = word_to_vec("Test")
nlp = spacy.load("en_core_web_lg")
ground_truth_embedding = nlp("Test")
assert ground_truth_embedding.vector.all() == word_embedding.all()
DEFAULT_SHAPE = (64, 64, 1)
TEST_ARRAYS = [np.random.rand(0), np.random.rand(10), np.random.rand(1000)]
for array in TEST_ARRAYS:
new_embedding = reshape_embeddings(array)
assert new_embedding.numpy().shape == DEFAULT_SHAPE
IMG_PATH = './TC11/svt1/train/00_01.jpg'
img = Image.open(IMG_PATH, "r")
texts = detect_text(img)
print(texts)
display.display(img)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Test for function word_to_vec
Step2: Test for reshape_embeddings()
Step3: Test for detect_text()
|
1,675
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
ls -l /home/data/APD/COBRA-YTD*.csv.gz
df = pd.read_csv('/home/data/APD/COBRA-YTD-multiyear.csv.gz')
df.shape
df.dtypes
#brdf = pd.read_csv('/home/pmolnar/burglary_residence.csv')
#brdf.head()
dataDict = pd.DataFrame({'DataType': df.dtypes.values, 'Description': '', }, index=df.columns.values)
dataDict
with open("datadict2.py", "w") as io:
for i in dataDict.index:
io.write("dataDict.loc['%s'].Description = '' # type: %s\n" % (i, str(dataDict.loc[i].DataType)))
ls -l datadict2.py
# %load datadict.py
dataDict.loc['MI_PRINX'].Description = '' # type: int64
dataDict.loc['offense_id'].Description = 'Unique ID in the format YYDDDNNNN with the year YY, the day of the year DDD and a counter NNNN' # type: int64
dataDict.loc['rpt_date'].Description = 'Date the crime was reported' # type: object
dataDict.loc['occur_date'].Description = 'Estimated date when the crime occured' # type: object
dataDict.loc['occur_time'].Description = 'Estimated time when the crime occured' # type: object
dataDict.loc['poss_date'].Description = '' # type: object
dataDict.loc['poss_time'].Description = '' # type: object
dataDict.loc['beat'].Description = '' # type: int64
dataDict.loc['apt_office_prefix'].Description = '' # type: object
dataDict.loc['apt_office_num'].Description = '' # type: object
dataDict.loc['location'].Description = '' # type: object
dataDict.loc['MinOfucr'].Description = '' # type: int64
dataDict.loc['MinOfibr_code'].Description = '' # type: object
dataDict.loc['dispo_code'].Description = '' # type: object
dataDict.loc['MaxOfnum_victims'].Description = '' # type: float64
dataDict.loc['Shift'].Description = 'Zones have 8 or 10 hour shifts' # type: object
dataDict.loc['Avg Day'].Description = '' # type: object
dataDict.loc['loc_type'].Description = '' # type: float64
dataDict.loc['UC2 Literal'].Description = '' # type: object
dataDict.loc['neighborhood'].Description = '' # type: object
dataDict.loc['npu'].Description = '' # type: object
dataDict.loc['x'].Description = '' # type: float64
dataDict.loc['y'].Description = '' # type: float64
dataDict.to_csv("COBRA_Data_Dictionary.csv")
sorted(df.npu.unique())
len(df.neighborhood.unique())
df[['occur_date', 'occur_time']][0:5]
# closure factory ("currying" the column prefix)
def fixdatetime(fld):
def _fix(s):
date_col = '%s_date' % fld
time_col = '%s_time' % fld
if time_col in s.index:
return str(s[date_col])+' '+str(s[time_col])
else:
return str(s[date_col])+' 00:00:00'
return _fix
##df.apply(fixdatetime('occur'), axis=1)[:10]
for col in ['rpt', 'occur', 'poss']:
datser = df.apply(fixdatetime(col), axis=1)
df['%s_dt'%col] = pd.to_datetime(datser, format="%m/%d/%Y %H:%M:%S", errors='coerce')
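# Quick check (added): errors='coerce' silently turns unparseable timestamps
# into NaT, so count how many rows failed to parse in each new column.
for col in ['rpt_dt', 'occur_dt', 'poss_dt']:
    print("%s: %d rows failed to parse" % (col, df[col].isnull().sum()))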
df.head()
df.dtypes
df.beat[:10]
df['Zone'] = df['beat']//100
df.Zone[:4]
df['UC2 Literal'].unique()
##df[df['UC2 Literal']=='LARCENY-FROM VEHICLE']
df.occur_date.min(), df.occur_date.max()
df['Year'] = df.rpt_dt.map(lambda d: d.year)
df.groupby('Year').offense_id.count()
brdf = df[df['UC2 Literal']=='BURGLARY-RESIDENCE'].copy()
brdf.shape, df.shape
def gethour(d):
return d.hour
brdf.occur_dt.map(gethour)
##brdf['occur_dt'].map(gethour)
##brdf.occur_dt.map(lambda d: d.hour)
print(type(brdf.occur_dt))
brdf['Hour'] = brdf.occur_dt.apply(gethour)
brdf.head()
br_hr = brdf.groupby(['Hour']).offense_id.count()
plt.step(br_hr.index, br_hr.values)
plt.figure(figsize=(20,8))
for z in range(1,7):
plt.subplot(3,2,z)
plt.title("Zone %d" % z)
#brdf[brdf.Zone==z].hist(column='Hour', bins=24)
plt.hist(brdf[brdf.Zone==z].Hour, bins=24)
plt.show()
plt.figure(figsize=(30,15))
for h in range(24):
plt.subplot(4,6,h+1)
plt.title("Hour %d" % h)
#brdf[brdf.Zone==z].hist(column='Hour', bins=24)
plt.hist(brdf[brdf.Hour==h].Zone, bins=6)
plt.ylim(0,40) ## sets limit on Y-axis
plt.show()
df['UC2 Literal'].unique()
df.groupby(['UC2 Literal', 'Zone']).offense_id.count()
df['dayofweek'] = df.occur_dt.map(lambda d: d.dayofweek)
df.groupby(['UC2 Literal','dayofweek']).offense_id.count()
brdf.apply(lambda r: str(r.location)+', '+str(r.npu), axis=1)
brdf.apply(np.min, axis=0)
df.occur_dt.map(lambda d: d.year).unique()
df['Year'] = df.occur_dt.map(lambda d: d.year)
df2 = df[(df.Year>=2010) & (df.Year<=2017)]
df2.shape, df.shape
df_LarcenyFromVehicle = df2[(df2['UC2 Literal']=='LARCENY-FROM VEHICLE')&(df2.Year==2017)].copy()
agr_LarcenyFromVehicle = df_LarcenyFromVehicle.set_index('occur_dt').resample('W').offense_id.count()
agr_LarcenyFromVehicle
df_LarcenyFromVehicle["Hour"] = df_LarcenyFromVehicle.occur_dt.map(lambda d: d.hour)
df_LarcenyFromVehicle.groupby("Hour").offense_id.count()
hourly = df_LarcenyFromVehicle.resample('H', on='occur_dt').offense_id.count()
hourly.reset_index().occur_dt.map(lambda d: d.week)
df3 = pd.DataFrame({"N": hourly})
##df3['Day'] = df3.reset_index().occur_dt ##.map(lambda d: d.day)
df3
ls
df.columns
df['occur_month'] = df['occur_dt'].map(lambda dt: dt.month)
df['occur_year'] = df['occur_dt'].map(lambda dt: dt.year)
resdf = df.groupby(['UC2 Literal', 'occur_year', 'occur_month']).offense_id.count()
resdf.head()
resdf_tbl = resdf.reset_index()
resdf_tbl.head()
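# Added example: the same monthly counts reshaped wide, one column per month
# (convenient for heatmaps); this only pivots resdf, no new data is computed.
resdf_wide = resdf.unstack('occur_month').fillna(0)
resdf_wide.head()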
fig = plt.figure(figsize=(10,6))
for yy in range(2009, 2017):
plt.plot(resdf['BURGLARY-RESIDENCE'][yy].index, resdf['BURGLARY-RESIDENCE'][yy], marker='x', label=str(yy))
plt.legend()
plt.ylim(0, 1000)
plt.title('BURGLARY-RESIDENCE')
plt.xticks(range(13), ['', 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
##plt.savefig('quiz3-burglary-residence.png')
;
fig = plt.figure(figsize=(10,6)) # 10inx10in
#plt.plot(resdf['BURGLARY-RESIDENCE'].index, resdf['BURGLARY-RESIDENCE'])
plt.scatter(resdf['BURGLARY-RESIDENCE'].index, resdf['BURGLARY-RESIDENCE'], marker='x')
plt.scatter(resdf['BURGLARY-NONRES'].index, resdf['BURGLARY-NONRES'], marker='o')
plt.ylim(0, 500)
plt.title('BURGLARY-RESIDENCE')
plt.xticks(range(13), ['', 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
fig.savefig('BurglaryResidence_over_month.svg')
# NOTE (assumption): crime_year and crime_year_month are used below but were not
# defined in this cell; these aggregations reproduce the likely intent.
crime_year = df.groupby(['UC2 Literal', 'occur_year']).offense_id.count()
crime_year_month = df.groupby(['UC2 Literal', 'occur_year', 'occur_month']).offense_id.count()
fig = plt.figure(figsize=(40,30))
crime_types = crime_year.index.levels[0]
years = crime_year.index.levels[1]
for c in range(len(crime_types)):
y_max = max(crime_year.loc[crime_types[c]])
plt.subplot(4,3,c+1)
plt.hlines(crime_year.loc[crime_types[c]].iloc[-1]*100/y_max, years[0], years[-1], linestyles="dashed", color="r")
plt.bar(crime_year.loc[crime_types[c]].index, crime_year.loc[crime_types[c]]*100/y_max, label=crime_types[c], alpha=0.5)
##plt.legend()
plt.ylim(0, 100)
plt.xticks(years+0.4, [str(int(y)) for y in years], rotation=0, fontsize=24)
plt.yticks([0,20,40,60,80,100], ['0%','20%','40%','60%','80%','100%'], fontsize=24)
plt.title(crime_types[c], fontsize=30)
None
c = 3 ## 'BURGLARY-RESIDENCE'
resburglaries = crime_year_month.loc[crime_types[c]]
fig = plt.figure(figsize=(20,10))
for y in years:
plt.plot(resburglaries.loc[y].index, resburglaries.loc[y], label=("%4.0f"%y))
plt.legend()
plt.title("Seasonal Trends - %s"%crime_types[c], fontsize=20)
plt.xticks(range(13), ['', 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
plt.xlim(0,13)
None
c = 3 ## 'BURGLARY-RESIDENCE'
fig = plt.figure(figsize=(20,10))
for y in years:
avg = resburglaries.loc[y].mean()
std = resburglaries.loc[y].std()
##plt.hlines(avg, 1, 13, linestyle='dashed')
plt.plot(resburglaries.loc[y].index, (resburglaries.loc[y]-avg)/std, label=("%4.0f"%y))
plt.legend()
plt.title("Seasonal Trends - %s (normalized)"%crime_types[c], fontsize=20)
plt.xticks(list(range(1,13)), ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
plt.xlim(0,13)
plt.ylabel("Standard deviations $\sigma_y$")
None
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We need to enter the descriptions for each entry in our dictionary manually. However, why not just create the Python code automatically...
Step2: Fixing Data Types
Step3: Convert Columns
Step4: Beats and Zones
Step5: Descriptive Statistics
Step6: Plotting
Step7: Seasonal Model
|
1,676
|
<ASSISTANT_TASK:>
Python Code:
#@title Install software packages {'form-width':'30%'}
%reset -f
!apt-get update
!apt-get install -y xvfb python-opengl ffmpeg
!pip install gym
!pip install imageio
!pip install PILLOW
!pip install pyglet
!pip install pyvirtualdisplay
!pip install dm-acme
!pip install dm-acme[reverb,tf,envs]
from IPython.display import clear_output
clear_output()
#@title Import python libraries {'form-width':'30%'}
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import base64
import imageio
import IPython
import matplotlib
import matplotlib.pyplot as plt
import PIL.Image
import pyvirtualdisplay
import numpy as np
import gym
import dm_env
import reverb
import sonnet as snt
import tensorflow as tf
from acme import environment_loop
from acme.tf import networks
from acme.adders import reverb as adders
from acme.agents.tf import actors
from acme.datasets import reverb as datasets
from acme.wrappers import atari_wrapper, gym_wrapper
from acme import specs
from acme import wrappers
from acme.agents.tf import dqn
from acme.agents import agent
from acme.tf import utils
from acme.utils import loggers
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
plt.rcdefaults()
plt.xkcd()
# Set up a virtual display for rendering OpenAI gym environments.
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
#@title Set up utilities {'form-width':'30%'}
def step_agent_in_environment(env, agent=None, num_episodes=3):
Steps an agent in an environment.
frames = []
actions = []
for n in range(num_episodes):
timestep = env.reset()
while not timestep.last():
frames.append(env.render(mode='rgb_array'))
if callable(agent):
action = agent(timestep.observation)
else:
action = agent.select_action(timestep.observation)
actions.append(action)
timestep = env.step(action)
return frames, actions
def show_video(frames):
Show video.
video_filename = 'imageio.mp4'
# Write video:
with imageio.get_writer(video_filename, fps=60) as video:
for frame in frames:
video.append_data(frame)
# Read video and show it:
video = open(video_filename, 'rb').read()
b64 = base64.b64encode(video)
tag =
<video width="640" height="480" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>.format(b64.decode())
return IPython.display.HTML(tag)
print('All set!')
#@title Load an environment
environment_name = 'CartPole' #@param ['MountainCar', 'CartPole', 'Atari']
if 'CartPole' in environment_name:
environment_train = gym_wrapper.GymWrapper(gym.make('CartPole-v0'))
environment_train = wrappers.SinglePrecisionWrapper(environment_train)
environment = gym_wrapper.GymWrapper(gym.make('CartPole-v0'))
environment = wrappers.SinglePrecisionWrapper(environment)
# Just for visualisation / evaluation, we'll set different angle limits
environment.env.theta_threshold_radians = 10.0
elif 'MountainCar' in environment_name:
environment_train = gym_wrapper.GymWrapper(gym.make('MountainCar-v0'))
environment_train = wrappers.SinglePrecisionWrapper(environment_train)
environment = environment_train
elif 'Atari' in environment_name:
environment_train = gym_wrapper.GymAtariAdapter(gym.make('Pong-v0'))
environment_train = atari_wrapper.AtariWrapper(environment_train)
environment_train = wrappers.SinglePrecisionWrapper(environment_train)
environment = environment_train
else:
raise ValueError('Unknown environment: {}.'.format(environment_name))
action_space = environment.action_space
def int_random_action(state):
# state is unused for random agent
return action_space.sample()
output = environment.reset()
print('random action:', int_random_action(None))
print('random action:', int_random_action(None))
print('random action:', int_random_action(None))
print('random action:', int_random_action(None))
frames, actions = step_agent_in_environment(
env=environment, agent=int_random_action, num_episodes=5)
print('actions = {}'.format(actions))
show_video(frames)
def custom_action_for_cartpole(state):
# for cartpole only:
cart_position = state[0]
cart_velocity = state[1]
pole_angle = state[2]
pole_velocity_at_tip = state[3]
# Instead of making the action 0 (in cartpole: go left), try to come up with
# a better behavior.
action = 0
return action
output = environment.reset()
frames, actions = step_agent_in_environment(
env=environment, agent=custom_action_for_cartpole, num_episodes=5)
show_video(frames)
#@title Agent setup {'form-width':'30%'}
def setup_agent(
environment,
learning_rate,
batch_size=64,
max_replay_size=1000,
logger=None,
):
Setup the agent before training
environment_spec = specs.make_environment_spec(environment)
network = snt.Sequential([
lambda x: tf.cast(x, tf.float32),
snt.Flatten(),
snt.nets.MLP([100, environment_spec.actions.num_values])
])
# Construct the agent.
agent = dqn.DQN(
environment_spec=environment_spec,
learning_rate=learning_rate,
batch_size=batch_size,
max_replay_size=max_replay_size,
network=network,
checkpoint=False,
logger=logger,
)
return agent
#@title Training loop {'form-width':'30%'}
def train(environment, agent, num_training_episodes, log_every=10):
Train the agent via the DQN algorithm
min_actor_steps_before_learning = 1000
num_actor_steps_per_iteration = 1
num_learner_steps_per_iteration = 1
all_returns = []
learner_steps_taken = 0
actor_steps_taken = 0
for episode in range(num_training_episodes):
timestep = environment.reset()
agent.observe_first(timestep)
episode_return = 0
while not timestep.last():
# Get an action from the agent and step in the environment.
action = agent.select_action(timestep.observation)
next_timestep = environment.step(action)
# Record the transition.
agent.observe(action=action, next_timestep=next_timestep)
# Book-keeping.
episode_return += next_timestep.reward
actor_steps_taken += 1
timestep = next_timestep
# See if we have some learning to do.
if (actor_steps_taken >= min_actor_steps_before_learning and
actor_steps_taken % num_actor_steps_per_iteration == 0):
# Learn.
for learner_step in range(num_learner_steps_per_iteration):
agent.update()
learner_steps_taken += num_learner_steps_per_iteration
# Log quantities.
if episode % log_every == 0 or episode == num_training_episodes - 1:
print(f'Episode: {episode} | Return: {episode_return} | '
f'Learner steps: {learner_steps_taken} | '
f'Actor steps: {actor_steps_taken}')
all_returns.append(episode_return)
return all_returns
#@title Train the agent, using some specific hyperparameters
num_training_episodes = 200 # @param {type:"integer"}
learning_rate = 3e-4 # @param {type:"number"}
# Other parameters
batch_size = 64
max_replay_size = 100000
# Set how often to print logs
log_every = 10
# Setup the agent
class NoOpLogger(object):
Avoids logging from Acme
def write(self, data):
pass
agent_logger = NoOpLogger()
agent = setup_agent(
environment_train,
learning_rate,
batch_size=batch_size,
max_replay_size=max_replay_size,
logger=agent_logger)
# Use the training environment to train the agent
returns = train(environment_train, agent, num_training_episodes, log_every)
#@title Plot the training curve {'form-width':'30%'}
plt.figure(figsize=(10, 5))
plt.plot(range(0, num_training_episodes), returns)
plt.grid(True)
plt.xlabel('Episodes', fontsize=15)
plt.ylabel('Total reward', fontsize=15)
plt.tick_params(labelsize=15)
plt.locator_params(nbins=10)
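# Added example: episode returns are noisy, so overlay a simple moving average
# to make the learning trend easier to read (window size of 10 is an arbitrary choice).
window = 10
smoothed = np.convolve(returns, np.ones(window) / window, mode='valid')
plt.plot(range(window - 1, len(returns)), smoothed, label='moving average')
plt.legend()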
#@title Show video of the trained agent's behaviour {'form-width':'30%'}
frames, actions = step_agent_in_environment(
env=environment, agent=agent, num_episodes=5)
show_video(frames)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Learning about Reinforcement Learning
Step4: Environments
Step5: Random Agent
Step6: Custom Agent
Step8: How to train your agent?
Step10: How to evaluate success?
Step12: Train the agent!
Step13: Evaluate the agent
|
1,677
|
<ASSISTANT_TASK:>
Python Code:
print("C'est parti") # affiche le texte en dessous
# essayez de modifier le texte et ré-exécuter
# Exécutez cette cellule !
import platform
print("Vous travaillez actuellement sur la version", platform.python_version())
# Exécutez cette cellule !
from IPython.core.display import HTML
styles = "<style>\n.travail {\n background-size: 30px;\n background-image: url('https://cdn.pixabay.com/photo/2018/01/04/16/53/building-3061124_960_720.png');\n background-position: left top;\n background-repeat: no-repeat;\n padding-left: 40px;\n}\n\n.bilan {\n background-size: 30px;\n background-image: url('https://cdn.pixabay.com/photo/2016/10/18/19/40/anatomy-1751201_960_720.png');\n background-position: left top;\n background-repeat: no-repeat;\n padding-left: 40px;\n}\n</style>"
HTML(styles)
l'avion = "rafale"
tire-bouchon = True
7ici = "Vélizy"
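# (each of the three assignments above uses a deliberately invalid identifier:
# an apostrophe, a hyphen, a leading digit; each raises a SyntaxError)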
a = 12
print(type(a))
googol = 10_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000_000
print(googol)
# from binary
a = int("0101_1111_0101", 2)
print(a)
# from base 7
a = int("263", 7)
print(a)
octet = bytes.fromhex('20')
a = int.from_bytes(octet, byteorder='little', signed=False)
print(a)
a = 42
print("En binaire, 42 = ", bin(a))
print("En octal, 42 = ", oct(a))
print("En hexadécimal, 42 = ", hex(a))
a = 'réseaux' + 'télécom'
a = 1 + "fini"
a = 8 * "simple,basique,"
n = 35
a = 18
a = 18
a = 18
print(a == 12)
a = 18
b = 12
print(a >= 18 and b != 5)
bob = 17
n = 12
n = 12
m = -2
print(True, "or", True, "=", True or True)
# etc...
# 6 - 2
# 6 - 3.2
# 6 * 4.3
# 5 // 2
# 5 / 2
# 6 / 2
# 6 % 2
# "hello" + "ça va ?"
# "hello" * 3
# 2 < 4
# (2 < 4) or (x == 2) # give x a value
# not (2 < 4 and False)
# 2 <= x < 34 # give x a value
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Check which version of Python you are running
Step2: Run this cell to apply the CSS style used in this notebook
Step3: In the work sequences, you will come across certain logos
Step4: Types/classes
Step5: <div class="alert alert-block alert-danger travail">
Step6: Base and number systems
Step7: <div class="alert alert-block alert-danger travail">
Step8: <div class="alert alert-block alert-danger travail">
Step9: Numeric operators
Step10: <div class="alert alert-block alert-danger travail">
Step11: <div class="alert alert-block alert-info bilan">
Step12: <div class="alert alert-block alert-danger travail">
Step13: <div class="alert alert-block alert-danger travail">
Step14: <div class="alert alert-block alert-danger travail">
Step15: <div class="alert alert-block alert-info bilan">
Step16: The usual logical operators are and, or, not.
Step17: Exercise 6 - Quest for truth
Step18: <div class="alert alert-block alert-danger travail">
Step19: <div class="alert alert-block alert-danger travail">
Step20: <div class="alert alert-block alert-danger travail">
Step21: <img src="https
|
1,678
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
import numpy as np
import pandas as pd
import scipy as sp
import sklearn
import sklearn.metrics  # used below for sklearn.metrics.f1_score
import seaborn as sns
from matplotlib import pyplot as plt
from sklearn.cross_validation import cross_val_score
from sklearn.ensemble import RandomForestClassifier
dataDir = os.path.join(os.path.expanduser('~'),'data','ml','winequality')
wine_df = pd.read_csv(os.path.join(dataDir,'winequality-red.csv'), sep=';')
wine_df.head()
Y = wine_df.quality.values
wine_df = wine_df.drop('quality',axis=1)
print(Y[:10])
Y = np.asarray([1 if i>=7 else 0 for i in Y])
X = wine_df.as_matrix()
print(X.shape)
print(Y[:10])
scores = []
for val in range(1,21):
clf = RandomForestClassifier(n_estimators=val)
validated = cross_val_score(clf,X,Y,cv=10)
scores.append(validated)
#print len(scores)
fig = plt.figure()
plt.clf()
ax = fig.add_subplot(111)
ax.boxplot(scores)
ax.set_ylim((0,1))
ax.set_xlim((0,21))
#sns.boxplot(scores)
plt.xlabel("number trees")
plt.ylabel("classification scores")
plt.title("classification score per number of trees")
plt.show()
scores = []
for val in range(1,21):
clf = RandomForestClassifier(n_estimators=val)
validated = cross_val_score(clf,X,Y,cv=10,scoring='f1')
scores.append(validated)
fig = plt.figure()
plt.clf()
ax = fig.add_subplot(111)
ax.boxplot(scores)
ax.set_ylim((0,1))
ax.set_xlim((0,21))
plt.xlabel("number trees")
plt.ylabel("classification scores")
plt.title("classification score per number of trees")
plt.show()
print("total normals: %s/%s"%(np.where(Y==0)[0].size,Y.size))
def cutoff_predict(clf,X,cutoff):
return (clf.predict_proba(X)[:,1] > cutoff).astype(int)
scores = []
def custom_f1(cutoff):
def f1_cutoff(clf,X,Y):
ypred = cutoff_predict(clf,X,cutoff)
return sklearn.metrics.f1_score(Y,ypred)
return f1_cutoff
parmRange = np.arange(0.1,0.9,0.1)
for cutoff in parmRange:
clf = RandomForestClassifier(n_estimators=15)
validated = cross_val_score(clf,X,Y,cv=10,scoring=custom_f1(cutoff))
scores.append(validated)
fig = plt.figure()
plt.clf()
ax = fig.add_subplot(111)
ax.boxplot(scores)
ax.set_ylim((0,1))
ax.set_xticklabels(parmRange)
plt.xlabel("cutoff value")
plt.ylabel("custom f1-score")
plt.title("fscores for each tree")
plt.show()
clf = RandomForestClassifier(n_estimators=15)
clf.fit(X,Y)
imp = clf.feature_importances_
names = wine_df.columns
imp,names = zip(*sorted(zip(imp,names)))
fig = plt.figure()
plt.clf()
ax = fig.add_subplot(111)
print(np.array(list(imp)).sum())
ax.barh(range(len(names)),imp,align='center')
plt.yticks(range(len(names)),names)
plt.xlabel("Importance of features")
plt.ylabel("Features")
plt.title("Importance of each feature")
plt.show()
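# Added example (sketch): check the cutoff suggested by the boxplots above
# (~0.2 is read off the plot, so it is an assumption) on a held-out split.
from sklearn.cross_validation import train_test_split
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=15)
clf.fit(X_tr, Y_tr)
print(sklearn.metrics.classification_report(Y_te, cutoff_predict(clf, X_te, 0.2)))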
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data
Step2: Create the matrix and simplify the classification space
Step3: Random Forest
Step4: Unbalanced design
Step5: In short, we don't see much gain from increasing the number of trees. The predict_proba function returns a probability for each class, but for many classifiers these probabilities become unreliable when the class structure is highly unbalanced. Normally we predict a class when its probability exceeds 0.5; since that default threshold cannot be trusted here, we use cross-validation to find a better cutoff.
Step6: It is intuitive that the cutoff should be less than 0.5: the training data contains many fewer examples of 'good' wines, so the cutoff needs to be adjusted to reflect that good wines are rarer.
|
1,679
|
<ASSISTANT_TASK:>
Python Code:
numbers = [1, 2]
numbers = numbers + [3, 4]
print(numbers)
numbers = numbers * 2
print(numbers)
numbers == [1, 2, 3, 4]
1 in numbers
[1, 2] in numbers
print(numbers)
print("Flip the order: " + str(numbers[::-1]))
print("Only first 4 items: " + str(numbers[:4]))
print("Only first 4 items, in reversed order:: " + str(numbers[3::-1]))
animals = ['pig', 'shark', 'lion']
animals.append('duck')
print(animals)
animals = animals + ['duck']
print(animals)
animals.append(['duck', 'zebra'])
print(animals)
animals.pop()
print(animals)
animals = animals[:-1]
print(animals)
last_animal = animals.pop()
print(last_animal)
print(animals)
first_animal = animals.pop(0)
print(first_animal)
print(animals)
animals_to_add = ['duck', 'pig', 'zebra']
animals.extend(animals_to_add)
print(animals)
animals_to_add = ['duck', 'pig', 'zebra']
animals = animals + animals_to_add
print(animals)
string_numbers = ['1', '2', '3', '4', '5']
string_numbers.extend('6789') # string as argument
string_numbers.extend(['10', '11']) # list as argument
print(string_numbers)
print(animals)
zebra_counter = animals.count('zebra')
print(f"I've found {zebra_counter} zebras in your zoo!")
print(animals)
zebra_finder = animals.index('zebra')
print(f"The first zebra in your zoo hides in park number {zebra_finder}")
print(f"Animals: {animals}")
while 'pig' in animals:
animals.remove('pig')
print(f"Kosher Zoo: {animals}")
print(animals)
animals.sort()
print(animals)
strange_list = [1, 2, 3, 'dag maluah']
strange_list.sort()
def get_minimum(numbers):
numbers = numbers.sort()
return numbers[-1]
numbers = '8, 9, 10, 11, 12'.split(', ')
minimum_number = get_minimum(numbers)
print(f"The minimum number is {minimum_number}")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <p style="text-align
Step2: <p style="text-align
Step3: <p style="text-align
Step4: <p style="text-align
Step5: <p style="text-align
Step6: <p style="text-align
Step7: <div class="align-center" style="display
Step8: <p style="text-align
Step9: <p style="text-align
Step10: <p style="text-align
Step11: <p style="text-align
Step12: <p style="text-align
Step13: <p style="text-align
Step14: <div class="align-center" style="display
Step15: <span style="text-align
Step16: <div class="align-center" style="display
Step17: <div class="align-center" style="display
Step18: <div class="align-center" style="display
|
1,680
|
<ASSISTANT_TASK:>
Python Code:
# !!!! Also need to add MM folder to system PATH
# mm_version = 'C:\Micro-Manager-1.4'
# cfg = 'C:\Micro-Manager-1.4\SetupNumber2_05102016.cfg'
mm_version = 'C:\Program Files\Micro-Manager-2.0beta'
cfg = 'C:\Program Files\Micro-Manager-2.0beta\Setup2_20170413.cfg'
import sys
sys.path.insert(0, mm_version) # make it so python can find MMCorePy
import MMCorePy
import time
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
core = MMCorePy.CMMCore()
core.loadSystemConfiguration(cfg)
core.initializeCircularBuffer()
core.setCircularBufferMemoryFootprint(4096) # MiB
core.setProperty(core.getCameraDevice(), "Exposure", 300)
core.setProperty("Spectra", "White_Enable", "1")
core.waitForDevice("Spectra")
# NEED TO SET CAMERA TO 16 BIT (ceiling 12 BIT = 4096)
core.setProperty("Cam Andor_Zyla4.2", "Sensitivity/DynamicRange", "16-bit (low noise & high well capacity)")
core.setConfig('Channel','1_PBP')
core.snapImage()
img = core.getImage()
plt.imshow(img,cmap='gray')
image = Image.fromarray(img)
image.save('TESTIMAGE.tif')
for i in range(5):
x = core.getXPosition()
y = core.getYPosition()
core.setXYPosition(x-1500,y)
core.waitForDevice(core.getXYStageDevice())
core.snapImage()
img = core.getImage()
image = Image.fromarray(img)
image.save('images/images_{}.tif'.format(i))
core.unloadAllDevices()
core.reset()
print('closed')
core.getFocusDevice()
core.getCameraDevice()
core.XYStageDevice()
core.getDevicePropertyNames(core.getCameraDevice())
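# Added example: before setting a device property, list its allowed values
# (handy for checking strings like the camera mode used above).
cam = core.getCameraDevice()
for prop in core.getDevicePropertyNames(cam):
    print("%s: %s" % (prop, list(core.getAllowedPropertyValues(cam, prop))))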
# SHUTTER
# Auto
core.setAutoShutter(True)
core.snapImage()
# Manual
core.setAutoShutter(False) # disable auto shutter
core.setProperty("Shutter", "State", "1")
core.waitForDevice("Shutter")
core.snapImage()
core.setProperty("Shutter", "State", "0")
# cv2.startWindowThread()
cv2.namedWindow('Video')
cv2.imshow('Video',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
core.stopSequenceAcquisition()
import cv2
cv2.namedWindow('Video')
core.startContinuousSequenceAcquisition(1)
while True:
    if core.getRemainingImageCount() > 0:
        # img = core.popNextImage()
        img = core.getLastImage()
        cv2.imshow('Video', img)
    else:
        print('No frame')
if cv2.waitKey(20) >= 0:
break
cv2.destroyAllWindows()
core.stopSequenceAcquisition()
# core.reset()
core.enableStderrLog(True)
core.enableDebugLog(True)
## load devices
core.loadDevice("Camera", "DemoCamera", "DCam")
core.loadDevice("Emission", "DemoCamera", "DWheel")
core.loadDevice("Excitation", "DemoCamera", "DWheel")
core.loadDevice("Dichroic", "DemoCamera", "DWheel")
core.loadDevice("Objective", "DemoCamera", "DObjective")
core.loadDevice("X", "DemoCamera", "DStage")
core.loadDevice("Y", "DemoCamera", "DStage")
core.loadDevice("Z", "DemoCamera", "DStage")
core.initializeAllDevices()
## set labels for state devices
# emission filter
core.defineStateLabel("Emission", 0, "Chroma-D460")
core.defineStateLabel("Emission", 1, "Chroma-HQ620")
core.defineStateLabel("Emission", 2, "Chroma-HQ535")
core.defineStateLabel("Emission", 3, "Chroma-HQ700")
# excitation filter
core.defineStateLabel("Excitation", 2, "Chroma-D360")
core.defineStateLabel("Excitation", 3, "Chroma-HQ480")
core.defineStateLabel("Excitation", 4, "Chroma-HQ570")
core.defineStateLabel("Excitation", 5, "Chroma-HQ620")
# excitation dichroic
core.defineStateLabel("Dichroic", 0, "400DCLP")
core.defineStateLabel("Dichroic", 1, "Q505LP")
core.defineStateLabel("Dichroic", 2, "Q585LP")
# objective
core.defineStateLabel("Objective", 1, "Nikon 10X S Fluor")
core.defineStateLabel("Objective", 3, "Nikon 20X Plan Fluor ELWD")
core.defineStateLabel("Objective", 5, "Zeiss 4X Plan Apo")
## define configurations
core.defineConfiguration("FITC", "Emission", "State", "2")
core.defineConfiguration("FITC", "Excitation", "State", "3")
core.defineConfiguration("FITC", "Dichroic", "State", "1")
core.defineConfiguration("DAPI", "Emission", "State", "1")
core.defineConfiguration("DAPI", "Excitation", "State", "2")
core.defineConfiguration("DAPI", "Dichroic", "State", "0")
core.defineConfiguration("Rhodamine", "Emission", "State", "3")
core.defineConfiguration("Rhodamine", "Excitation", "State", "4")
core.defineConfiguration("Rhodamine", "Dichroic", "State", "2")
## set initial imaging mode
core.setProperty("Camera", "Exposure", "55")
core.setProperty("Objective", "Label", "Nikon 10X S Fluor")
core.setConfiguration("DAPI")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preset
Step2: Example
Step3: Example
Step4: Example
Step5: Example
Step6: Example
|
1,681
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import textmining_blackboxes as tm
#see if package imported correctly
tm.icantbelieve("butter")
title_info=pd.read_csv('data/na-slave-narratives/data/toc.csv')
#this is the "metadata" of these files--we didn't use today
#why does data appear twice?
#Let's use a brittle thing for reading in a directory of pure txt files.
our_texts=tm.readtextfiles('data/na-slave-narratives/data/texts')
#again, this is not a std python package
#returns a simple list of the document as very long strings
#note if you want the following notebook will work on any directory of text files.
len(our_texts)
our_texts[100][:300] # first 300 words of 100th text
lengths=[len(text) for text in our_texts]
our_texts=tm.data_cleanse(our_texts)
#more necessary when have messy text
#eliminate escaped characters
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer=TfidfVectorizer(min_df=0.5, stop_words='english', use_idf=True)
document_term_matrix=vectorizer.fit_transform(our_texts)
# now let's get our vocabulary--the names corresponding to the rows
# "feature" is the general term in machine learning and data mining
# we seek to characterize data by picking out features that will enable discovery
vocab=vectorizer.get_feature_names()
len(vocab)
document_term_matrix.shape
vocab[1000:1100]
document_term_matrix_dense=document_term_matrix.toarray()
dtmdf=pd.DataFrame(document_term_matrix_dense, columns=vocab)
dtmdf
#easy to program, but let's use a robust version from sklearn!
from sklearn.metrics.pairwise import cosine_similarity
similarity=cosine_similarity(document_term_matrix)
#Note here that the `cosine_similiary` can take
#an entire matrix as its argument
#what'd we get?
similarity
similarity.shape
similarity[100]
#this gives the similarity of row 100 to each of the other rows
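# Added example: use that similarity row to find the five documents most like
# document 100 (argsort is ascending; the last entry is document 100 itself, so skip it).
import numpy as np
most_similar = np.argsort(similarity[100])[-6:-1][::-1]
print(most_similar)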
term_document_matrix=document_term_matrix.T
# .T is the easy transposition method for a
# matrix in python's matrix packages.
# import a bunch of packages we need
import matplotlib.pyplot as plt
from sklearn.metrics.pairwise import cosine_similarity
from scipy.cluster.hierarchy import ward, dendrogram
#distance is 1-similarity, so:
dist=1-cosine_similarity(term_document_matrix)
# ward is an algorithm for hierarchical clustering
linkage_matrix=ward(dist)
#plot dendogram
f=plt.figure(figsize=(9,9))
R=dendrogram(linkage_matrix, orientation="right", labels=vocab)
plt.tight_layout()
vectorizer=TfidfVectorizer(min_df=.96, stop_words='english', use_idf=True)
#try a very high min_df
#rerun the model
document_term_matrix=vectorizer.fit_transform(our_texts)
vocab=vectorizer.get_feature_names()
#check the length of the vocab
len(vocab)
#switch again to the term_document_matrix
term_document_matrix=document_term_matrix.T
dist=1-cosine_similarity(term_document_matrix)
linkage_matrix=ward(dist)
#plot dendogram
f=plt.figure(figsize=(9,9))
R=dendrogram(linkage_matrix, orientation="right", labels=vocab)
plt.tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: IMPORTANT
Step2: Let's get some text
Step3: list comprehensions!
Step4: How to process text
Step5: Our first tool
Step6: for the documentation of sklearn's text data functionality, see http
Step7: so document_term_matrix is a matrix with 294 rows--the documents--and 1658 columns--the vocabulary or terms or features
Step8: right now stored super efficiently as a sparse matrix
Step9: While this data frame is lovely to look at and useful to think with, it's tough on your computer's memory
Step10: that is a symmetrical matrix relating each of the texts (rows) to another text (row)
Step11: HOMEWORK EXERCISE
Step12: OMG U...G...L...Y!
|
1,682
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os
assert os.path.isfile('yearssn.dat')
data=np.loadtxt('yearssn.dat')
year=data[:,0] # first column: years
ssc=data[:,1] # second column: sunspot counts
assert len(year)==315
assert year.dtype==np.dtype(float)
assert len(ssc)==315
assert ssc.dtype==np.dtype(float)
f=plt.figure(figsize=(25,1))#extends the scale
plt.plot(year,ssc,'b')#the data to be ploted
plt.xlabel('Years')
plt.ylabel('Sunspots')
plt.title('Years v. Sunspots')#lables and title for clarity as with the ticks
plt.tick_params(axis='y', direction='inout', length=10)
plt.tick_params(axis='x', direction='inout', length=10)
assert True # leave for grading
data=np.loadtxt('yearssn.dat')#creates the range for the first subplot
cent1=data[0:100,0]
ss1=data[0:100,1]
data=np.loadtxt('yearssn.dat')#creates the range for the second subplot
cent2=data[100:200,0]
ss2=data[100:200,1]
data=np.loadtxt('yearssn.dat')#creates the range for the third subplot
cent3=data[200:300,0]
ss3=data[200:300,1]
data=np.loadtxt('yearssn.dat')#creates the range for the fouth subplot
cent4=data[300:400,0]
ss4=data[300:400,1]
plt.subplot(2,2,1,)
plt.plot(cent1,ss1)#defines the first subplot location and data
plt.ylabel('Sunspot Count')
plt.subplot(2,2,2) #defines the second subplot location and data
plt.plot(cent2, ss2)
plt.subplot(2,2,3) #defines the third subplot location and data
plt.plot(cent3, ss3)
plt.ylabel('Sunspot Count')
plt.xlabel('Year')
plt.subplot(2,2,4) #defines the fourth subplot location and data
plt.plot(cent4,ss4)
plt.xlabel('Year')
plt.tight_layout()#makes things look nicer
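# Added example: a rough solar-cycle length, the mean spacing between local
# maxima of a lightly smoothed series (the 5-year window is an arbitrary choice).
smooth = np.convolve(ssc, np.ones(5) / 5, mode='same')
peaks = np.where((smooth[1:-1] > smooth[:-2]) & (smooth[1:-1] > smooth[2:]))[0] + 1
print("mean years between peaks: %.1f" % np.diff(year[peaks]).mean())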
assert True # leave for grading
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Line plot of sunspot data
Step2: Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts.
Step3: Make a line plot showing the sunspot count as a function of year.
Step4: Describe the choices you have made in building this visualization and how they make it effective.
|
1,683
|
<ASSISTANT_TASK:>
Python Code:
from stingray.simulator.simulator import Simulator
from scipy.ndimage.filters import gaussian_filter1d
from stingray.utils import baseline_als
from scipy.interpolate import interp1d
# Assumed to have been imported in an earlier cell of the original notebook:
import numpy as np
import matplotlib.pyplot as plt
from stingray import Lightcurve
np.random.seed(1034232)
# Simulate a light curve with increasing variability and flux
length = 10000
dt = 0.1
times = np.arange(0, length, dt)
# Create a light curve with powerlaw variability (index 1),
# and smooth it to eliminate some Gaussian noise. We will simulate proper
# noise with the `np.random.poisson` function.
# Both should not be used together, because they alter the noise properties.
sim = Simulator(dt=dt, N=int(length/dt), mean=50, rms=0.4)
counts_cont = sim.simulate(1).counts
counts_cont_init = gaussian_filter1d(counts_cont, 200)
# ---------------------
# Renormalize so that the light curve has increasing flux and r.m.s.
# variability.
# ---------------------
# The baseline function cannot be used with too large arrays.
# Since it's just an approximation, we will just use one every
# ten array elements to calculate the baseline
mask = np.zeros_like(times, dtype=bool)
mask[::10] = True
print (counts_cont_init[mask])
baseline = baseline_als(times[mask], counts_cont_init[mask], 1e10, 0.001)
base_func = interp1d(times[mask], baseline, bounds_error=False, fill_value='extrapolate')
counts_cont = counts_cont_init - base_func(times)
counts_cont -= np.min(counts_cont)
counts_cont += 1
counts_cont *= times * 0.003
# counts_cont += 500
counts_cont += 500
# Finally, Poissonize it!
counts = np.random.poisson(counts_cont)
plt.plot(times, counts_cont, zorder=10, label='Continuous light curve')
plt.plot(times, counts, label='Final light curve')
plt.legend()
# This function can be found in stingray.utils
def excess_variance(lc, normalization='fvar'):
Calculate the excess variance.
Vaughan et al. 2003, MNRAS 345, 1271 give three measurements of source
intrinsic variance: the *excess variance*, defined as
.. math:: \sigma_{XS} = S^2 - \overline{\sigma_{err}^2}
the *normalized excess variance*, defined as
.. math:: \sigma_{NXS} = \sigma_{XS} / \overline{x^2}
and the *fractional mean square variability amplitude*, or
:math:`F_{var}`, defined as
.. math:: F_{var} = \sqrt{\dfrac{\sigma_{XS}}{\overline{x^2}}}
Parameters
----------
lc : a :class:`Lightcurve` object
normalization : str
if 'fvar', return the fractional mean square variability :math:`F_{var}`.
If 'none', return the unnormalized excess variance variance
:math:`\sigma_{XS}`. If 'norm_xs', return the normalized excess variance
:math:`\sigma_{XS}`
Returns
-------
var_xs : float
var_xs_err : float
lc_mean_var = np.mean(lc.counts_err ** 2)
lc_actual_var = np.var(lc.counts)
var_xs = lc_actual_var - lc_mean_var
mean_lc = np.mean(lc.counts)
mean_ctvar = mean_lc ** 2
var_nxs = var_xs / mean_lc ** 2
fvar = np.sqrt(var_xs / mean_ctvar)
N = len(lc.counts)
var_nxs_err_A = np.sqrt(2 / N) * lc_mean_var / mean_lc ** 2
var_nxs_err_B = np.sqrt(mean_lc ** 2 / N) * 2 * fvar / mean_lc
var_nxs_err = np.sqrt(var_nxs_err_A ** 2 + var_nxs_err_B ** 2)
fvar_err = var_nxs_err / (2 * fvar)
if normalization == 'fvar':
return fvar, fvar_err
elif normalization == 'norm_xs':
return var_nxs, var_nxs_err
elif normalization == 'none' or normalization is None:
return var_xs, var_nxs_err * mean_lc **2
def fvar_fun(lc):
return excess_variance(lc, normalization='fvar')
def norm_exc_var_fun(lc):
return excess_variance(lc, normalization='norm_xs')
def exc_var_fun(lc):
return excess_variance(lc, normalization='none')
def rate_fun(lc):
return lc.meancounts, np.std(lc.counts)
lc = Lightcurve(times, counts, gti=[[-0.5*dt, length - 0.5*dt]], dt=dt)
start, stop, res = lc.analyze_lc_chunks(1000, np.var)
var = res
start, stop, res = lc.analyze_lc_chunks(1000, rate_fun)
rate, rate_err = res
start, stop, res = lc.analyze_lc_chunks(1000, fvar_fun)
fvar, fvar_err = res
start, stop, res = lc.analyze_lc_chunks(1000, exc_var_fun)
evar, evar_err = res
start, stop, res = lc.analyze_lc_chunks(1000, norm_exc_var_fun)
nvar, nvar_err = res
plt.errorbar(rate, fvar, xerr=rate_err, yerr=fvar_err, fmt='none')
plt.loglog()
plt.xlabel('Count rate')
plt.ylabel(r'$F_{\rm var}$')
tmean = (start + stop)/2
from matplotlib.gridspec import GridSpec
plt.figure(figsize=(15, 20))
gs = GridSpec(5, 1)
ax_lc = plt.subplot(gs[0])
ax_mean = plt.subplot(gs[1], sharex=ax_lc)
ax_evar = plt.subplot(gs[2], sharex=ax_lc)
ax_nvar = plt.subplot(gs[3], sharex=ax_lc)
ax_fvar = plt.subplot(gs[4], sharex=ax_lc)
ax_lc.plot(lc.time, lc.counts)
ax_lc.set_ylabel('Counts')
ax_mean.scatter(tmean, rate)
ax_mean.set_ylabel('Counts')
ax_evar.errorbar(tmean, evar, yerr=evar_err, fmt='o')
ax_evar.set_ylabel(r'$\sigma_{XS}$')
ax_fvar.errorbar(tmean, fvar, yerr=fvar_err, fmt='o')
ax_fvar.set_ylabel(r'$F_{var}$')
ax_nvar.errorbar(tmean, nvar, yerr=nvar_err, fmt='o')
ax_nvar.set_ylabel(r'$\sigma_{NXS}$')
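# Added example (sketch): quantify the trend in the r.m.s.-intensity diagram
# with a power-law fit F_var ~ rate^alpha (a straight line in log-log space).
good = np.isfinite(fvar) & (fvar > 0)
alpha, logA = np.polyfit(np.log10(rate[good]), np.log10(fvar[good]), 1)
print('F_var ~ rate^%.2f' % alpha)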
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: R.m.s. - intensity diagram
|
1,684
|
<ASSISTANT_TASK:>
Python Code:
# Assumed imports; GPC, GPCB and SqExp come from the author's own GP module,
# which is defined outside this cell.
import numpy as np
import matplotlib.pyplot as plt
train_size = 50
rng = np.random.RandomState(0)
x = rng.uniform(0, 5, 100)
y = np.array((x > 2.5)*2-1, dtype=int)
plt.scatter(x,y)
k1 = SqExp(1,1)
gpcb = GPCB(k1)
gpcb.train(x,y)
x_star = x
pi_hat_star_mean = gpcb.predict(x_star)
pi_star_mean = gpcb.predict(x_star,False)
plt.scatter(x_star,pi_hat_star_mean)
plt.scatter(x_star,pi_star_mean)
x_t = x
y_t = np.array((x_t > 2.5), dtype=int)
x = np.append(x_t,x_t)
y_c1 = y_t #1 if it belongs to class 1, 0 otherwise
y_c2 = (y_t-1)*-1 #1 if it belongs to class 2, 0 otherwise
y = np.append(y_c1,y_c2)
k1 = SqExp(1,1)
gpc= GPC(k1)
gpc.train(x,y,2)
x_star = x_t
pi_star_mean_1 = gpc.predict(x_star,1)
pi_star_mean_2 = gpc.predict(x_star,2)
plt.scatter(x_star,pi_star_mean_1)
plt.scatter(x_star,pi_star_mean_2)
lml = gpc.lml()
#print(lml)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Train the Model
Step2: Predict
Step3: Binary Classification (using GPC)
Step4: Train the Model
|
1,685
|
<ASSISTANT_TASK:>
Python Code:
from collections import namedtuple, defaultdict
import random
import numpy as np
from tqdm import tqdm
%matplotlib inline
import matplotlib.pyplot as plt
MAX_SPEED = 4
N_ACTIONS = 3 # number of actions along x and y: 0, 1, -1
track1 =
XXXXXXXXXXXXXF
XXXXXXXXXXXXXXF
XXXXXXXXXXXXXXF
XXXXXXXXXXXXXXXF
XXXXXXXXXXXXXXXXF
XXXXXXXXXXXXXXXXF
XXXXXXXXXX
XXXXXXXXX
XXXXXXXXX
XXXXXXXXX
XXXXXXXXX
XXXXXXXXX
XXXXXXXXX
XXXXXXXXX
XXXXXXXX
XXXXXXXX
XXXXXXXX
XXXXXXXX
XXXXXXXX
XXXXXXXX
XXXXXXXX
XXXXXXXX
XXXXXXX
XXXXXXX
XXXXXXX
XXXXXXX
XXXXXXX
XXXXXXX
XXXXXXX
XXXXXX
XXXXXX
SSSSSS
track2 =
XXXXXXXXXXXXXXXF
XXXXXXXXXXXXXXXXXXF
XXXXXXXXXXXXXXXXXXXF
XXXXXXXXXXXXXXXXXXXXF
XXXXXXXXXXXXXXXXXXXXF
XXXXXXXXXXXXXXXXXXXXF
XXXXXXXXXXXXXXXXXXXXF
XXXXXXXXXXXXXXXXXXXF
XXXXXXXXXXXXXXXXXXF
XXXXXXXXXXXXXXXX
XXXXXXXXXXXXX
XXXXXXXXXXXX
XXXXXXXXXX
XXXXXXXXX
XXXXXXXXXX
XXXXXXXXXXX
XXXXXXXXXXXX
XXXXXXXXXXXXX
XXXXXXXXXXXXXX
XXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXX
SSSSSSSSSSSSSSSSSSSSSSS
State = namedtuple('State', ['x', 'y', 'vx', 'vy']) # current position and speed
Action = namedtuple('Action', ['ax', 'ay']) # acceleration along each component
Transition = namedtuple('Transition', ['state1',
'action',
'reward',
'state2'])
class Racetrack(object):
def __init__(self, track_str):
rows = track_str.split('\n')
rows = rows[1:-1] # remove first and last rows
rows = rows[::-1] # flip vertically so [0,0] corresponds to bottom left corner
cells = map(list, rows) # convert rows of strings to rows of chars
self._track = np.array(list(cells))
self._state = None
# define all possible actions
self.actions = []
for ax in [-1, 0, 1]:
for ay in [-1, 0, 1]:
self.actions.append(Action(ax, ay))
def _track_cell(self, x, y):
max_y = self._track.shape[0] - 1
max_x = self._track.shape[1] - 1
if x < 0 or x > max_x:
return ' '
if y < 0 or y > max_y:
return ' '
return self._track[y, x]
def _is_on_track(self, state):
assert state.vx <= MAX_SPEED
assert state.vx >= 0
assert state.vy <= MAX_SPEED
assert state.vy >= 0
return self._track_cell(state.x, state.y) != ' '
def _has_finished(self, state):
return self._track_cell(state.x, state.y) == 'F'
def _transition(self, state, action):
# update speed
vx2 = state.vx + action.ax
vy2 = state.vy + action.ay
vx2 = np.clip(vx2, 0, MAX_SPEED)
vy2 = np.clip(vy2, 0, MAX_SPEED)
# keep the speed constant if both components are zero
if vx2 == 0 and vy2 == 0:
vx2, vy2 = state.vx, state.vy
# advance car position
x2 = state.x + vx2
y2 = state.y + vy2
# # additional random move
# if random.random() > 0.5:
# if random.random() > 0.5:
# x2 += 1 # move right
# else:
# y2 += 1 # move forward
collision_y = collision_x = False
# check and fix collisions along 'x'
while True:
s2 = State(x2, state.y, vx2, vy2)
if self._is_on_track(s2):
break
collision_x = True
x2 -= 1
vx2 = 1
assert x2 >= 0
# check and fix collision along 'y'
while True:
s2 = State(x2, y2, vx2, vy2)
if self._is_on_track(s2):
break
collision_y = True
y2 -= 1
vy2 = 1
assert y2 >= 0
if collision_y or collision_x:
r = -5
else:
r = -1
if state.x == x2 and state.y == y2:
# the car did not move
if collision_y:
x2 += 1
elif collision_x:
y2 += 1
else:
assert False, 'the car has to move'
s2 = State(x2, y2, vx2, vy2)
assert self._is_on_track(s2)
term = self._has_finished(s2)
if term:
r = 0
return r, s2, term
def reset(self):
max_x = self._track.shape[1]
while True:
x = random.randint(0, max_x)
vx = random.choice([0, 1])
vy = random.choice([0, 1])
if vx == 0 and vy == 0:
continue
s = State(x=x, y=0, vx=vx, vy=vy)
if self._is_on_track(s):
break
self._state = s
return s
def step(self, action):
r, s2, term = self._transition(self._state, action)
self._state = s2
return s2, r, term, {}  # empty info dict, gym-style
def track_as_np(self):
_track = self._track
track_np = np.zeros_like(_track, dtype=int)
track_np[_track == 'S'] = 1
track_np[_track == 'X'] = 2
track_np[_track == 'F'] = 3
return track_np
class OffPolicyMC(object):
def __init__(self, env, gamma=0.99):
self.env = env
self._policy = {}
self._Q = defaultdict(lambda: defaultdict(int)) # Q[s][a]
self._C = defaultdict(lambda: defaultdict(int)) # C[s][a]
self.gamma = gamma
def generate_episode(self, policy):
s = env.reset()
trajectory = []
while True:
a = policy(s)
s2, r, term, _ = self.env.step(a)
t = Transition(s, a, r, s2)
trajectory.append(t)
if term:
break
s = s2
return trajectory
def random_policy(self, state):
return random.choice(self.env.actions)
def greedy_policy(self, state):
if state in self._Q:
return max(self._Q[state], key=self._Q[state].get)
else:
return self.random_policy(state)
def optimize(self, n_iter):
myu = 1 / len(self.env.actions) # probability of action under random policy
for _ in tqdm(range(n_iter)):
traj = self.generate_episode(self.random_policy)
G = 0
W = 1
for tr in reversed(traj):
s, a, r = tr.state1, tr.action, tr.reward
G = self.gamma * G + r
self._C[s][a] += W
self._Q[s][a] = self._Q[s][a] + W / self._C[s][a] * (G - self._Q[s][a])
a_greedy = max(self._Q[s], key=self._Q[s].get)
if a_greedy != a:
break
W = W * 1 / myu
env = Racetrack(track2)
mc = OffPolicyMC(env)
mc.optimize(5000000)
plt.imshow(env.track_as_np())
plt.gca().invert_yaxis()
trajectory = mc.generate_episode(mc.greedy_policy)
for t in trajectory:
plt.plot(t.state2.x, t.state2.y, '.r')
rewards = map(lambda t: t.reward, trajectory)
print('return', sum(rewards))
plt.imshow(env.track_as_np())
plt.gca().invert_yaxis()
trajectory = mc.generate_episode(mc.greedy_policy)
for t in trajectory:
plt.plot(t.state2.x, t.state2.y, '.r')
rewards = map(lambda t: t.reward, trajectory)
print('return', sum(rewards))
plt.imshow(env.track_as_np())
plt.gca().invert_yaxis()
trajectory = mc.generate_episode(mc.greedy_policy)
for t in trajectory:
plt.plot(t.state2.x, t.state2.y, '.r')
rewards = map(lambda t: t.reward, trajectory)
print('return', sum(rewards))
V_xy = np.full_like(env.track_as_np(), -np.inf,dtype=float)
Q_max = defaultdict(list)
for s, actions in mc._Q.items():
Q_max[s.y, s.x].append(max(mc._Q[s].values()))
for pos, vals in Q_max.items():
V_xy[pos] = np.mean(vals)
plt.imshow(V_xy)
plt.colorbar()
plt.gca().invert_yaxis()
# count all possible car positions
n_start_positions = (env._track == 'S').sum()
n_track_positions = (env._track == 'X').sum()
n_start_positions, n_track_positions
# count all possible states, each state is car's position and speed
n_possible_states = n_start_positions * 3 + n_track_positions * (MAX_SPEED * MAX_SPEED - 1)
n_sampled_states = len(mc._Q)
print('n_possible_states = ', n_possible_states)
print('n_sampled_states = ', n_sampled_states)
print('n_sampled_states / n_possible_states = %d%%'
% round(n_sampled_states/n_possible_states * 100))
n_possible_state_actions = n_possible_states * len(env.actions)
n_sampled_state_actions = sum(map(lambda v: len(v), mc._Q.values()))
print('n_possible_state_actions = ', n_possible_state_actions)
print('n_sampled_state_actions = ', n_sampled_state_actions)
print('n_sampled_state_actions / n_possible_state_actions = %d%%'
% round(n_sampled_state_actions/n_possible_state_actions * 100))
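# Added example: a single greedy trajectory is noisy, so average the return
# over many episodes for a steadier performance estimate.
greedy_returns = [sum(t.reward for t in mc.generate_episode(mc.greedy_policy))
                  for _ in range(100)]
print('mean return over 100 greedy episodes:', np.mean(greedy_returns))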
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Define Racetracks
Step4: Create Racetrack environment
Step5: Off-Policy Monte Carlo Control
Step6: Solve Racetrack MDP
Step7: Visualize trajectories for the greedy policy
Step8: Note that at the last position before the finish line the car tries to go off the track. The way the game rules are defined it does not get penalized as long as it crosses the finish line.
Step9: Check action-state coverage by Monte Carlo
|
1,686
|
<ASSISTANT_TASK:>
Python Code:
import random
import pandas as pd
import matplotlib as mpl
import seaborn as sb
%matplotlib inline
class LotterySimulation(object):
def __init__(self, lottery, n_tickets, n_players):
self.lottery = lottery
self.n_tickets = n_tickets
self.n_players = n_players
self.winnings = [ sum(lottery.play_x_times(n_tickets))
for _ in range(n_players) ]
@property
def net(self):
outlay = self.n_tickets * self.lottery.ticket_cost
return [ win - outlay
for win in self.winnings ]
@property
def net_loss_proportion(self):
return float(sum(n < 0 for n in self.net)) / self.n_players
class Lottery(object):
def __init__(self, name, ticket_cost, total_combinations, winning_combinations):
self.name = name
self.ticket_cost = ticket_cost
self.total_combinations = total_combinations
self.winning_combinations = winning_combinations
self.cutoffs = self.construct_cutoffs()
def construct_cutoffs(self):
winning_cutoffs = []
cumulative = 0
for n, winnings in self.winning_combinations:
cumulative += n
winning_cutoffs.append((cumulative, winnings))
return winning_cutoffs
def get_odds_of_winning_anything(self):
return float(self.cutoffs[-1][0]) / self.total_combinations
def play_once(self):
rand = random.randint(1, self.total_combinations)
for i, winnings in self.cutoffs:
if rand <= i:
return winnings
return 0
def play_x_times(self, x):
return [ self.play_once() for n in range(x) ]
def simulate(self, n_tickets, n_players):
return LotterySimulation(self, n_tickets, n_players)
mega_millions = Lottery("Mega Millions", 1, 258890850, [
(1, 1000000000), # Hypothetical $1 billion jackpot
(14, 1000000), # $1 million
(350, 5000),
(4900, 500),
(24150, 50),
(338100, 5),
(547400, 5),
(4584475, 2),
(12103014, 1)
])
print("On a single Mega Millions ticket, "
"your odds of winning anything is "
"approximately {0:.5f}%, or 1 in {1:.2f}."\
.format(
mega_millions.get_odds_of_winning_anything() * 100,
1.0 / mega_millions.get_odds_of_winning_anything()
))
n_simulations = 1000 * 100
for n_tickets in (10, 50, 100):
sim = mega_millions.simulate(n_tickets, n_simulations)
print(
"Among 100k simulated players who each bought {0} tickets, "
"{1:.3f}% lost money overall.".format(
n_tickets,
sim.net_loss_proportion * 100
)
)
mm_1_million = mega_millions.simulate(50, 1000 * 1000)
print(
"Among 1 million simulated players who each bought {0} tickets, "
"{1:.3f}% lost money overall.".format(
mm_1_million.n_tickets,
mm_1_million.net_loss_proportion * 100.0
)
)
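# Added example: the analytic expected value of one ticket, computed directly
# from the odds table above, for comparison with the simulated losses.
ev = sum(n * w for n, w in mega_millions.winning_combinations) / float(mega_millions.total_combinations)
print("Expected winnings per $1 Mega Millions ticket: ${0:.3f}".format(ev))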
def plot_net(sim):
count = sim.n_tickets
stemmed = pd.Series(sim.net).apply(lambda x: min(count, x))
ax = stemmed.hist(bins=range(-count * sim.lottery.ticket_cost, count+1, 1),
figsize=(12, 6),
normed=True)
ax.set_title("Net Profit/Loss After {0} Tickets — {1}"\
.format(count, sim.lottery.name), fontsize=18, fontweight="bold")
ax.set_ylabel("Percentage of Simulations", fontsize=14)
ax.set_yticks(list(pd.np.arange(0, ax.get_ylim()[-1] + 0.01, 0.01)))
ax.set_yticklabels([ "{0:.0f}%".format(y * 100)
for y in ax.get_yticks() ], fontsize=12)
ax.set_xlim((-count * sim.lottery.ticket_cost, count))
tick_spacing = 10 if (count * sim.lottery.ticket_cost <= 100) else 20
ax.set_xticks(range(-count * sim.lottery.ticket_cost, count+1, tick_spacing))
ax.set_xticklabels([ ("${0}" if x >= 0 else "-${0}").format(abs(x)) + ("+" if x == count else "")
for x in ax.get_xticks() ], fontsize=12)
mpl.pyplot.setp(ax.patches[:count * sim.lottery.ticket_cost - 1], facecolor="darkred")
mpl.pyplot.axvline(0, color="black", linestyle="dashed")
return ax
plot_net(mm_1_million);
powerball = Lottery("Powerball", 2, 292201338, [
(1, 1000000000),
(25, 1000000),
(320, 50000),
(8000, 100),
(20160, 100),
(504000, 7),
(416640, 7),
(3176880, 4),
(7624512, 4)
])
print("On a single Powerball ticket, "
"your odds of winning anything is "
"approximately {0:.5f}%, or 1 in {1:.2f}."\
.format(
powerball.get_odds_of_winning_anything() * 100,
1.0 / powerball.get_odds_of_winning_anything()
))
n_simulations = 1000 * 100
for n_tickets in (10, 50, 100):
sim = powerball.simulate(n_tickets, n_simulations)
print(
"Among 100k simulated players who each bought {0} tickets, "
"{1:.3f}% lost money overall.".format(
n_tickets,
sim.net_loss_proportion * 100
)
)
pb_1_million = powerball.simulate(50, 1000 * 1000)
print(
"Among 1 million simulated players who each bought {0} tickets, "
"{1:.3f}% lost money overall.".format(
pb_1_million.n_tickets,
pb_1_million.net_loss_proportion * 100.0
)
)
plot_net(pb_1_million);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Lottery class takes four parameters
Step2: Mega Millions
Step3: 100,000 player simulations — Mega Millions
Step4: 1 million 50-ticket simulations — Mega Millions
Step5: How much should you expect to lose playing Mega Millions?
Step6: Powerball
Step7: 100,000 player simulations — Powerball
Step8: How much should you expect to lose playing Powerball?
|
1,687
|
<ASSISTANT_TASK:>
Python Code:
data = np.random.uniform(0,1, (100,3))
class SelfOrganizingMap(nengo.Process):
def __init__(self, weights, learning_rate=1e1, influence_sigma=1.5):
self.weights = weights
self.learning_rate = learning_rate
self.influence_sigma = influence_sigma
super().__init__(default_size_in=weights.shape[2],
default_size_out=weights.shape[0]*weights.shape[1])
def make_step(self, shape_in, shape_out, dt, rng, state=None):
# this called during the build process, so any computationally expensive
# pre-processing should be done here. There isn't really much for an SOM,
# but we can pre-generate the distance matrix to speed that part up
pos = np.array(np.meshgrid(np.arange(self.weights.shape[1]),
np.arange(self.weights.shape[0])))
def step_som(t, x, w=self.weights, pos=pos,
sigma=self.influence_sigma,
learning_rate=self.learning_rate):
# this will be called every timestep, with x as the current input
# first, find the closest element in the map
diff = np.sum((w - x[None,None,:])**2, axis=2)
best = np.argmin(diff)
best = np.array([best % diff.shape[1], best // diff.shape[1]])
#assert diff[best[1],best[0]] == np.min(diff)
# now compute how much to influence the elements
dist = np.sum((pos - best[:,None,None])**2, axis=0)
influence = np.exp(-dist/(2*sigma**2))
# update the weights
w += learning_rate * dt * influence[:,:,None] * (x - w)
# the output from the map every timestep will just be the influence
return influence.flatten()
return step_som
w = np.random.uniform(0, 1, (10, 12, 3))
plt.imshow(w)
model = nengo.Network()
with model:
stim = nengo.Node(nengo.processes.PresentInput(data, presentation_time=0.001))
som = nengo.Node(SelfOrganizingMap(w))
nengo.Connection(stim, som, synapse=None)
p = nengo.Probe(som)
sim = nengo.Simulator(model)
sim.run(10)
plt.imshow(w)
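# Added example: a quick quality check of the trained map, the mean distance
# from each data point to its best-matching unit (quantization error).
flat_w = w.reshape(-1, 3)
q_err = np.mean([np.sqrt(np.min(np.sum((flat_w - d) ** 2, axis=1))) for d in data])
print('mean quantization error:', q_err)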
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we implement the self-organizing map. Since this requires implementing our own learning rule, we'll have to add our own python code to do this.
Step2: Now let's try it out. Let's start with a randomly generated set of weights
Step3: Now we present things to the network
Step4: And plot the resulting map
|
1,688
|
<ASSISTANT_TASK:>
Python Code:
# Magics first (server issues)
%matplotlib inline
# Do below if you want interactive matplotlib plot ()
# %matplotlib notebook
# https://ipython.org/ipython-doc/dev/config/extensions/autoreload.html
%load_ext autoreload
%autoreload 2
# %install_ext http://raw.github.com/jrjohansson/version_information/master/version_information.py
%load_ext version_information
%version_information numpy, scipy, matplotlib, pandas
# Standard library
import os
import sys
sys.path.append("../src/")
# Third party imports
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
# Local imports
from simpleexample import example_func
# Customizations
sns.set() # matplotlib defaults
# Any tweaks that normally go in .matplotlibrc, etc., should explicitly go here
plt.rcParams['figure.figsize'] = (12, 12)
%config InlineBackend.figure_format='retina'
# Find the notebook the saved figures came from
fig_prefix = "../figures/2015-07-24-jw-"
from IPython.display import FileLink
FileLink("../deliver/coal_data_cleanup.ipynb")
dframe = pd.read_csv("../data/coal_prod_cleaned.csv")
plt.scatter(dframe['Year'], dframe['Production_short_tons'])
df2 = dframe.groupby('Mine_State').sum()
df2 = df2[df2.index != 'Wyoming']
sns.jointplot('Labor_Hours', 'Production_short_tons', data=df2, kind="reg", )
plt.xlabel("Labor Hours Worked")
plt.ylabel("Total Amount Produced")
plt.tight_layout()
plt.savefig(fig_prefix + "production-vs-hours-worked.png")
%load_ext autoreload
%autoreload 2
import sys
sys.path.append("../src/")
from simpleexample import example_func
example_func()
example_func()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Importing cleaned data
Step2: [Dead end] Does year predict production?
Step3: Does Hours worked correlate with output?
Step4: Advanced example, come back if time!
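As a quick follow-up to Step3, a minimal sketch (editor's addition, assuming the same df2 aggregate as above) that quantifies the jointplot relationship with a plain Pearson correlation:
r = df2['Labor_Hours'].corr(df2['Production_short_tons'])
print("Pearson r between labor hours and production: %.3f" % r)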
|
1,689
|
<ASSISTANT_TASK:>
Python Code:
# necessary imports
%pylab inline
import seaborn as sns
import pandas as pd
# locations of the results
results_filename="/home/chiroptera/workspace/QCThesis/CUDA/tests/test1v2/results.csv" #local
#results_filename="https://raw.githubusercontent.com/Chiroptera/QCThesis/master/CUDA/tests/test1v2/results.csv" #git repo
results = pd.read_csv(results_filename)
print "Structure of the results"
results.head()
N_labels=[1e3,5e3,1e4,5e4,1e5,5e5,1e6,2e6,4e6]
K_labels=[5,10,20,30,40,50,100,250,500]
results.drop(['R','NATC','D','iters'], axis=1, inplace=True)
results.head()
rounds = results.groupby(['type','N','K'],as_index = True)
results_mean = rounds.mean()
rounds.describe()
times = results_mean.loc["cuda"]
times['cuda']=times['time']
times['numpy']=results_mean.loc["numpy"]
times['python']=results_mean.loc["python"]
times['s_cuda_np']=times['numpy']/times['cuda']
times['s_cuda_py']=times['python']/times['cuda']
times['s_np_py']=times['python']/times['numpy']
times
a=times.groupby(level='K')
#a.get_group(20)['python'].plot(subplots=True,layout=(2,2))
p=a.get_group(20)[['python','numpy','cuda']].plot(title="Time evolution; 20 clusters",logy=True)
plt.xticks(range(len(N_labels)),N_labels)
plt.xlabel("Cardinality")
a.get_group(500)[['python','numpy','cuda']].plot(title="Time evolution; 500 clusters",logy=True)
plt.xticks(range(len(N_labels)),N_labels)
plt.xlabel("Cardinality")
b=times.groupby(level='N')
b.get_group(1e5)[['python','numpy','cuda']].plot(title="Time evolution by number of clusters; 1e5 datapoints",logy=True)
plt.xticks(range(len(K_labels)),K_labels)
plt.xlabel("Number of clusters")
b.get_group(1e5)[['numpy','cuda']].plot(title="Time evolution by number of clusters; 1e5 datapoints",logy=True)
plt.xticks(range(len(K_labels)),K_labels)
plt.xlabel("Number of clusters")
b.get_group(4e6)[['numpy','cuda']].plot(title="Time evolution by number of clusters; 4e6 datapoints",logy=True)
plt.xticks(range(len(K_labels)),K_labels)
plt.xlabel("Number of clusters")
s_cuda_np = results_mean.loc['numpy'] / results_mean.loc['cuda']
#s_cuda_np['speedup']=s_cuda_np['time']
s_cuda_np.groupby(level=['K']).describe()
for key, grp in s_cuda_np.groupby(level=['K']):
plt.plot(grp['time'],label=key)#grp.index.levels[0],
plt.legend(loc='best')
plt.title("Speedup by cardinality")
plt.plot([0, 8], [1, 1], 'k-', lw=2)
plt.ylabel("Speedup")
plt.xlabel("Cardinality")
plt.xticks(range(len(N_labels)),N_labels)
s_cuda_np.groupby(level=['N']).describe()
for key, grp in s_cuda_np.groupby(level=['N']):
plt.plot(grp['time'],label=key)#grp.index.levels[0],
plt.plot([0, 8], [1, 1], 'k-', lw=2) #slowdown/speedup threshold
plt.legend(loc='best')
plt.title("Speedup by cardinality")
plt.ylabel("Speedup")
plt.xlabel("Number of clusters")
plt.xticks(range(len(K_labels)),K_labels)
s_cuda_py = results_mean.loc['python'] / results_mean.loc['cuda']
for key, grp in s_cuda_py.groupby(level=['K']):
plt.plot(grp['time'],label=key)#grp.index.levels[0],
plt.plot([0, 8], [1, 1], 'k-', lw=2) #slowdown/speedup threshold
plt.legend(loc='best')
plt.title("Speedup by cardinality")
plt.ylabel("Speedup")
plt.xlabel("Cardinality")
plt.xticks(range(len(N_labels)),N_labels)
for key, grp in s_cuda_py.groupby(level=['N']):
plt.plot(grp['time'],label=key)#grp.index.levels[0],
plt.plot([0, 8], [1, 1], 'k-', lw=2) #slowdown/speedup threshold
plt.legend(loc='best')
plt.title("Speedup by cardinality")
plt.ylabel("Speedup")
plt.xlabel("Number of clusters")
plt.xticks(range(len(K_labels)),K_labels)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Some of the parameters don't change in these results, so we can delete them (natural number of clusters, dimensionality, and number of iterations). Furthermore, we can delete the rounds column because it becomes useless after averaging the times.
Step2: Below are some statistics about the timings for the rounds. The important thing to notice is that there is low variance in the data, which suggests that the results are consistent.
Step3: Time analysis
Step4: Speedup over NumPy
Step5: Speedup over Python
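The speedup columns above all compute the same ratio; a small helper sketch (editor's addition, argument names are illustrative) makes the pattern explicit:
def speedup(results_mean, slow, fast):
    # ratio of mean runtimes; values > 1 mean `fast` really is faster
    return results_mean.loc[slow]['time'] / results_mean.loc[fast]['time']
speedup(results_mean, 'python', 'cuda').describe()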
|
1,690
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
x = np.array([1,2,3,4,5])
print(x)
y = x**2
print(y)
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
x = np.arange(1,10,.1)
y = x**2
p = plt.plot(x,y)
#Example conditional statements
x = 1
y = 2
x<y #x is less than y
#x is greater than y
x>y
#x is less-than or equal to y
x<=y
#x is greater-than or equal to y
x>=y
#Example of and operator
(1<2) and (2<3)
#Example of or operator
(1<2) or (2>3)
#Example of not operator
not(1<2)
x = 1
y = 2
if (x < y):
print("Yup, totally true!")
else:
print("Nope, completely false!")
x = 2
y = 1
if (x > y):
print("x is greater than y")
x = 2
y = 2
if (x == y):
print("x and y are equal")
if (x != y):
print("x and y are not equal")
if (x > y or x < y):
print("x and y are not equal (again!)")
x = 1
while (x <= 10):
print(x)
x = x+1
x = 2
i = 0 #dummy variable
while (i<10):
x = 2*x
print(x)
i = i+1
#another way to write this is i+=1, but it's idiosyncratic and we won't use it here
#Defining a square root function
def sqrt(x):
if (x < 0):
print("Your input is not positive!")
else:
return x**(1/2)
sqrt(4)
sqrt(-4)
import math
print(math.sqrt(25))
print(math.sin(math.pi/2))
print(math.exp(math.pi)-math.pi)
def length(x, y):
    '''Calculates the length of a vector (x,y) using the Pythagorean theorem.'''
return math.sqrt(x**2+y**2)
length(3,4)
def pathLength(x_array,y_array):
#Your code goes here
if len(x_array) != len(y_array):
raise Exception("Vectors do not have the same length")
n = len(x_array)
i = 1
L = 0
while (i < n):
L = L + length(x_array[i]-x_array[i-1],y_array[i]-y_array[i-1])
i = i+1
return L
x = np.array([1,2,3,4,5])
y = np.array([1,2,3,4,5])
pathLength(x,y)
def approxPi(n):
# Initialize two numpy arrays x and y of size n+1 with the points defined above
# Hint: use np.arange()
# Plot the points in x and y
# Call the function pathLength() with the arguments x and y and set it equal to pi_approx
# Print the value for pi_approx
    # Calculate the error e = pi - pi_approx
return e
def approxPi(n):
if (type(n) != int):
raise Exception("n is not an integer")
# Initialize two numpy arrays x and y of size n+1 with the points defined above
# Hint: use np.arange()
x = 1/2*np.cos(2*math.pi*np.arange(n+1)/n)
y = 1/2*np.sin(2*math.pi*np.arange(n+1)/n)
# Plot the points in x and y
plt.plot(x,y)
# Call the function pathLength() with the arguments x and y and set it equal to pi_approx
pi_approx = pathLength(x,y)
# Print the value for pi_approx
print(pi_approx)
# Calculate the error e = pi - pi_approx
e = math.pi - pi_approx
return e
def piTolerance(tol):
#Your code goes here
n = 1
while (approxPi(n) > tol):
n = n+1
return n
data = np.zeros(10)
print(data)
data[0] = 137
print(data[0])
#Your code goes here
#Your code goes here
x = np.linspace(0,10,100)
x[0:3]
#Your code goes here
import numpy as np
time, velocity = np.loadtxt("./lecture2_data/droptower_vdata.txt",unpack = True)
n = len(velocity)
# Initialize accel as an array of zeros with size n-1
accel = np.zeros(n-1)
# Use a while loop to replace the value in element [i] with the acceleration at time i.
i = 0
while (i < n-1):
    accel[i] = (velocity[i+1] - velocity[i]) / (time[i+1] - time[i])
i = i+1
#Your code goes here
plt.plot(time,velocity,'o')
#Your code goes here
plt.plot(time[0:-1],accel,'o')
# Your code goes here
# Hint: think about the number of `while`-loops you might need to use
#Your code goes here
# Your code goes here
%matplotlib inline
import matplotlib.pyplot as plt
timeseriesData = np.loadtxt("./lecture2_data/timeseries_data.txt")
timeseriesData.shape
t = timeseriesData[0,:]
signal = timeseriesData[1,:]
#Your code goes here
plt.plot(t,signal)
cutOff = 15.
signalFix = signal[signal < cutOff]
tFix = t[signal < cutOff]
#Your code goes here
plt.plot(tFix,signalFix)
plt.show()
dataFix = np.array([tFix,signalFix])
np.save('./lecture2_data/dataFix.npy',dataFix)
np.savetxt('./lecture2_data/dataFix.txt',dataFix)
data = np.load('./lecture2_data/dataFix.npy')
t = data[0,:]
signal = data[1,:]
plt.plot(t,signal)
plt.show()
#Your code goes here
first_string = 'a'
second_string = 'b'
print(first_string + second_string)
first_string = 'a'
second_string = str(1)
print(first_string + second_string)
datalist = []
# Your code here
i = 1
while i <= 6:
datalist.append(np.loadtxt('./lecture2_data/c' + str(i) + '.dat'))
i = i+1
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here we are calling in the contents of numpy and giving it the shorthand name 'np' for convenience.
Step2: As we learned in Lecture 1, numpy arrays are convenient because they allow us to do math across the whole array and not just individual numbers.
Step3: Now let's say we wanted to plot y as a function of x.
Step4: Lecture 2 - Logic, Loops, and Arrays
Step5: If you let a and b be conditional statements (like the above statements, e.g. a = x < y), then you can combine the two together using logical operators, which can be thought of as functions for conditional statements.
Step6: Now, these might not seem especially useful at first, but they're the bread and butter of programming. Even more importantly, they are used when we are doing if/else statements or loops, which we will now cover.
Step7: The idea here is that Python checks to see if the statement (in this case "x < y") is True. If it is, then it will do what is below the if statement. The else statement tells Python what to do if the condition is False.
Step8: Here's a more complicated case. Here, we introduce some logic that helps you figure out if two objects are equal or not.
Step9: While-loops are similar to if statements, in the sense that they also have a conditional statement built into them. The code inside the loop will execute when the conditional is True. And then it will check the conditional and, if it evaluates to True, the code will execute again. And so on and so forth...
Step10: Note here that we tell Python to print the number x (x starts at 1) and then redefine x as itself +1 (so, x=1 gets redefined to x = x+1 = 1+1 = 2). Python then executes the loop again, but now x has been incremented by 1. We continue this process from x = 1 to x = 10, printing out x every time. Thus, with a fairly compact bit of code, you get 10 lines of output.
Step11: Food for thought
Step12: So the outline for a function is
Step14: When defining your own functions, you can also use multiple input variables. For example, if we want to calculate the length of a vector $(x,y)$, we can create a function that takes in the components $x$ and $y$ individually.
Step15: If we call this function on the vector (3,4), we should get 5.
Step16: In this lecture, we've learned about numpy arrays, loops, and defining functions. You'll have a chance to test these skills in the following exercise!
Step17: Test your function on the example below. Your answer should come out to $4\sqrt{2} \approx 5.657$
Step18: We can define $\pi$ to be the circumference of a circle whose radius is $1/2$. Using $n + 1$ points to describe a given circle, the $x$ and $y$ coordinates are given by $x_k = \frac{1}{2}\cos(2\pi k/n)$ and $y_k = \frac{1}{2}\sin(2\pi k/n)$ for $k = 0, 1, \dots, n$,
Step19: As an extra challenge, write a function piTolerance which takes a given tolerance, tol, as its input. This is the error to which you want to compute $\pi$ using the approxPi function. Find the smallest value of $n$ such that you can achieve $e < \text{tol}$.
Step20: C. Numpy Arrays - Review of Basics and Some More Advanced Topics
Step21: Now how do we assign a new value to an element of the array? We use the following "square bracket" notation
Step22: Now you try it. Store your second favorite number in the second position of your array and use a print statement to verify that you have done so.
Step23: Python array indexing is fairly straightforward once you get the hang of it.
Step24: Now, sometimes its useful to access more than one element of an array. Let's say that we have an array with 100 elements in the range [0,10] (including endpoints). If you recall, this can be done via the np.linspace() function.
Step25: Now then, in order to get a range of elements rather than simply a single one, we use the notation
Step26: If you want everything passed a certain point of the array (including that point), then you would just eliminate the right number, for example
Step27: Finally, simply using the ":" by itself selects every element of the array.
Step28: Now we've initialized two numpy arrays
Step29: Note that the size of the array of accelerations is one less than the size of the array of velocities. Why should this be so?
Step30: To plot the acceleration, we need to do a little more work. Note that the array of times is one element longer than the array of accelerations, so calling the function plot(time,accel) will give an error.
Step31: At what rate are the riders accelerating downwards during times t=1 to t=4? Does this match your physical intuition?
Step32: After doing this, plot the positions of the drop tower as a function of time.
Step33: As an extra challenge, consider the following problem
Step34: D. Loading And Saving Data Arrays
Step35: Now then, let's say we are doing a timing experiment, where we look at the brightness of an object as a function of time. This is actually a very common type of measurement that you may do in research, such as looking for dips in the brightness of stars as a way to detect planets.
Step36: Now we have the data loaded into Python as a numpy array, and one handy thing you can do is to use Python to find the dimensions of the array. This is done by using ".shape" as so.
Step37: In this format, we know that this is a 2x1000 array (two rows, 1000 columns). Another way you can think about this is that you have two 1000-element arrays contained within another array, where each of those arrays are elements (think of it as an array of arrays).
Step38: Here, you have 2 dimensions with the array timeseriesData, and as such must specify the row first and then the column. So,
Step39: Looking at our data, you see clear spikes that jump well above most of the signal. (I've added these to the data to represent outliers that may sometimes appear when you're messing with raw data, and those must be dealt with.) In astronomy, you sometimes have relativistic charged particles not from your source, known as cosmic rays, that hit the detector, and we often have to remove these.
Step40: In this case, the conditional statement that we have used is signal < cutOff.
Step41: Now let's plot it. You try.
Step42: Now that you have your data all cleaned up, it would be nice if we could save it for later and not have to go through the process of cleaning it up every time. Fear not! Python has you covered.
Step43: Then, we can use either the np.save() function or the np.savetxt function, the first saving the array into a '.npy' file and the other, into a '.txt' file. The syntax is pretty much the same for each.
Step44: Now that your data files are saved, you can load them up again, using np.loadtxt() and np.load() for .txt and .npy files respectively. We used np.loadtxt() above, and np.load works the same way. So, let's load in the .npy file and see if our data was saved correctly.
Step45: Now, let's see if you can do the same thing, but with the .txt file that we saved.
Step46: Loading data files automatically
Step47: You can also cast an integer to a string using the str command.
Step48: Now you try
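For comparison, the same files can be loaded in one line with a list comprehension (editor's sketch; same c1.dat-c6.dat files as above):
datalist = [np.loadtxt('./lecture2_data/c' + str(i) + '.dat') for i in range(1, 7)]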
|
1,691
|
<ASSISTANT_TASK:>
Python Code:
import sympy
import pyeda.boolalg.expr
import pyeda.boolalg.bfarray
xs = sympy.symbols(",".join("x%d" % i for i in range(64)))
ys = pyeda.boolalg.bfarray.exprvars('y', 64)
f = sympy.Xor(*xs[:4])
g = pyeda.boolalg.expr.Xor(*ys[:4])
f.atoms()
g.support
f.subs({xs[0]: 0, xs[1]: 1})
g.restrict({ys[0]: 0, ys[1]: 1})
sympy.to_nnf(f)
type(sympy.Not(xs[0]))
g.to_nnf()
type(~ys[0])
sympy.to_dnf(f)
g.to_dnf()
from sympy.logic import simplify_logic
simplify_logic(f)
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
N = 5
sympy_times = (.000485, .000957, .00202, .00426, .0103)
pyeda_times = (.0000609, .000104, .000147, .00027, .000451)
ind = np.arange(N) # the x locations for the groups
width = 0.35 # the width of the bars
fig, ax = plt.subplots()
rects1 = ax.bar(ind, sympy_times, width, color='r')
rects2 = ax.bar(ind + width, pyeda_times, width, color='y')
# add some text for labels, title and axes ticks
ax.set_ylabel('Time (s)')
ax.set_title('SymPy vs. PyEDA: Xor(x[0], x[1], ..., x[n-1]) to DNF')
ax.set_xticks(ind + width)
ax.set_xticklabels(('N=2', 'N=3', 'N=4', 'N=5', 'N=6'))
ax.legend((rects1[0], rects2[0]), ('SymPy', 'PyEDA'))
plt.show()
sympy.Equivalent(xs[0], xs[1], 0)
pyeda.boolalg.expr.Equal(ys[0], ys[1], 0)
sympy.ITE(xs[0], 0, xs[1])
pyeda.boolalg.expr.ITE(ys[0], 0, ys[1])
sympy.Or(xs[0], sympy.Or(xs[1], xs[2]))
pyeda.boolalg.expr.Or(ys[0], pyeda.boolalg.expr.Or(ys[1], ys[2]))
sympy.Xor(xs[0], sympy.Not(sympy.Xor(xs[1], xs[2])))
pyeda.boolalg.expr.Xor(ys[0], pyeda.boolalg.expr.Xnor(ys[1], ys[2]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Variables
Step2: Basic Boolean Functions
Step3: Create a PyEDA XOR function
Step4: SymPy atoms method is similar to PyEDA's support property
Step5: SymPy's subs method is similar to PyEDA's restrict method
Step6: Conversion to NNF
Step7: Conversion to DNF
Step8: PyEDA's DNF conversion is minimal
Step9: It's a little hard to do an apples-to-apples comparison, because 1) SymPy is pure Python and 2) the algorithms are probably different.
Step10: Running this experiment from N=2 to N=6 shows that PyEDA's runtime grows significantly slower.
Step11: Going a bit further, things get worse.
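A sketch of the timing harness behind the bar chart (editor's addition; the plotted values are hard-coded, so exact numbers will differ by machine):
import timeit
for n in range(2, 7):
    t_sympy = timeit.timeit(lambda: sympy.to_dnf(sympy.Xor(*xs[:n])), number=10) / 10
    t_pyeda = timeit.timeit(lambda: pyeda.boolalg.expr.Xor(*ys[:n]).to_dnf(), number=10) / 10
    print('N={}: sympy {:.2e}s, pyeda {:.2e}s'.format(n, t_sympy, t_pyeda))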
|
1,692
|
<ASSISTANT_TASK:>
Python Code:
# Show all outputs of a cell, not just the last one
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import numpy as np
def LU(A):
U = np.copy(A)
m, n = A.shape
L = np.eye(n)
for k in range(n-1):
for j in range(k+1,n):
L[j,k] = U[j,k]/U[k,k]
U[j,k:n] -= L[j,k] * U[k,k:n]
return L, U
A = np.array([[2,1,1,0],[4,3,3,1],[8,7,9,3],[6,7,9,8]]).astype(np.float)
L, U = LU(A)
L
U
A
L @ U
np.allclose(A, L @ U)
v=np.array([1,2,3])
v
v.shape
v1=np.expand_dims(v, -1)
v1
v1.shape
v2 = v[np.newaxis]
v2
v2.shape
v3 = v[:, np.newaxis]
v3
v3.shape
import sklearn
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: LU decomposition
Step2: The LU factorization is useful!
Step3: Broadcasting
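A sketch of why the factorization is useful (editor's addition; scipy.linalg.solve_triangular is used to keep it short): solving $Ax=b$ reduces to two triangular solves.
from scipy.linalg import solve_triangular
b = np.array([1., 2., 3., 4.])
y = solve_triangular(L, b, lower=True)   # forward substitution: L y = b
x = solve_triangular(U, y, lower=False)  # back substitution:    U x = y
np.allclose(A @ x, b)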
|
1,693
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division, print_function
import numpy as np
import qutip as qt
from qutip.ipynbtools import version_table
%matplotlib inline
qt.settings.colorblind_safe = True
qt.visualization.hinton(qt.identity([2, 3]).unit());
qt.visualization.hinton(qt.Qobj([
[1, 0.5],
[0.5, 1]
]).unit());
qt.visualization.hinton(qt.to_super(qt.sigmaz()));
qt.visualization.hinton(qt.to_super(qt.hadamard_transform()));
qt.visualization.hinton(qt.to_super(qt.tensor(qt.sigmaz(), qt.hadamard_transform())));
s_meas = qt.tensor_contract(qt.to_super(qt.identity([2, 2])), (1, 3))
s_meas
q = qt.tensor(qt.identity(2), qt.basis(2))
s_prep = qt.sprepost(q, q.dag())
s_prep
qt.visualization.hinton(qt.to_super(qt.cnot()))
qt.tensor_contract(qt.to_super(qt.cnot()), (1, 3)) * s_prep
qt.visualization.hinton(qt.tensor_contract(qt.to_super(qt.cnot()), (1, 3)) * s_prep);
version_table()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Imports
Step2: Plotting Support
Step3: Settings
Step4: Superoperator Representations and Plotting
Step5: We show superoperators as matrices in the Pauli basis, such that any Hermicity-preserving map is represented by a real-valued matrix. This is especially convienent for use with Hinton diagrams, as the plot thus carries complete information about the channel.
Step6: As a couple more examples, we also consider the supermatrix for a Hadamard transform and for $\sigma_z \otimes H$.
Step7: Reduced Channels
Step8: Meanwhile, the super_tensor function implements the swap on the right, such that we can quickly find the preparation map.
Step9: For a $\cnot$ system-environment model, the composition of these maps should give us a completely dephasing channel. The channel on both qubits is just the superunitary $\cnot$ channel
Step10: We now complete by multiplying the superunitary $\cnot$ by the preparation channel above, then applying the partial trace channel by contracting the second and fourth index indices. As expected, this gives us a dephasing map.
Step11: Epilogue
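One quick sanity check on Step10 (editor's sketch; relies only on QuTiP's standard iscptp flag) that the contracted map is a valid quantum channel:
deph = qt.tensor_contract(qt.to_super(qt.cnot()), (1, 3)) * s_prep
deph.iscptp  # a dephasing channel must be completely positive and trace preserving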
|
1,694
|
<ASSISTANT_TASK:>
Python Code:
import os

import tensorflow as tf
import tensorflow_datasets as tfds
import numpy as np
import uncertainty_baselines as ub
def _ensemble_accuracy(labels, logits_list):
  """Compute the accuracy resulting from the ensemble prediction."""
per_probs = tf.nn.softmax(logits_list)
probs = tf.reduce_mean(per_probs, axis=0)
acc = tf.keras.metrics.SparseCategoricalAccuracy()
acc.update_state(labels, probs)
return acc.result()
def _ensemble_cross_entropy(labels, logits):
logits = tf.convert_to_tensor(logits)
ensemble_size = float(logits.shape[0])
labels = tf.cast(labels, tf.int32)
ce = tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=tf.broadcast_to(labels[tf.newaxis, ...], tf.shape(logits)[:-1]),
logits=logits)
nll = -tf.reduce_logsumexp(-ce, axis=0) + tf.math.log(ensemble_size)
return tf.reduce_mean(nll)
def greedy_selection(val_logits, val_labels, max_ens_size, objective='nll'):
Greedy procedure from Caruana et al. 2004, with replacement.
assert_msg = 'Unknown objective type (received {}).'.format(objective)
assert objective in ('nll', 'acc', 'nll-acc'), assert_msg
# Objective that should be optimized by the ensemble. Arbitrary objectives,
# e.g., based on nll, acc or calibration error (or combinations of those) can
# be used.
if objective == 'nll':
get_objective = lambda acc, nll: nll
elif objective == 'acc':
get_objective = lambda acc, nll: acc
else:
get_objective = lambda acc, nll: nll-acc
best_acc = 0.
best_nll = np.inf
best_objective = np.inf
ens = []
def get_ens_size():
return len(set(ens))
while get_ens_size() < max_ens_size:
current_val_logits = [val_logits[model_id] for model_id in ens]
best_model_id = None
for model_id, logits in enumerate(val_logits):
acc = _ensemble_accuracy(val_labels, current_val_logits + [logits])
nll = _ensemble_cross_entropy(val_labels, current_val_logits + [logits])
obj = get_objective(acc, nll)
if obj < best_objective:
best_acc = acc
best_nll = nll
best_objective = obj
best_model_id = model_id
if best_model_id is None:
print('Ensemble could not be improved: Greedy selection stops.')
break
ens.append(best_model_id)
return ens, best_acc, best_nll
def parse_checkpoint_dir(checkpoint_dir):
  """Parse directory of checkpoints."""
paths = []
subdirectories = tf.io.gfile.glob(os.path.join(checkpoint_dir, '*'))
is_checkpoint = lambda f: ('checkpoint' in f and '.index' in f)
print('Load checkpoints')
for subdir in subdirectories:
for path, _, files in tf.io.gfile.walk(subdir):
if any(f for f in files if is_checkpoint(f)):
latest_checkpoint = tf.train.latest_checkpoint(path)
paths.append(latest_checkpoint)
print('.', end='')
break
print('')
return paths
DATASET = 'cifar10'
TRAIN_PROPORTION = 0.95
BATCH_SIZE = 64
ENSEMBLE_SIZE = 4
CHECKPOINT_DIR = 'gs://gresearch/reliable-deep-learning/checkpoints/baselines/cifar/hyper_ensemble/'
# Load data.
ds_info = tfds.builder(DATASET).info
num_classes = ds_info.features['label'].num_classes
# Test set.
steps_per_eval = ds_info.splits['test'].num_examples // BATCH_SIZE
test_dataset = ub.datasets.get(
DATASET,
split=tfds.Split.TEST).load(batch_size=BATCH_SIZE)
# Validation set.
validation_percent = 1 - TRAIN_PROPORTION
val_dataset = ub.datasets.get(
dataset_name=DATASET,
split=tfds.Split.VALIDATION,
validation_percent=validation_percent,
drop_remainder=False).load(batch_size=BATCH_SIZE)
steps_per_val_eval = int(ds_info.splits['train'].num_examples *
validation_percent) // BATCH_SIZE
# The model architecture we want to form the ensemble over
# here, we use the original ResNet-20 model by He et al. 2015.
model = ub.models.wide_resnet(
input_shape=ds_info.features['image'].shape,
depth=22,
width_multiplier=1,
num_classes=num_classes,
l2=0.,
version=1)
# Load checkpoints:
# These are 100 checkpoints and loading will take a few minutes.
ensemble_filenames = parse_checkpoint_dir(CHECKPOINT_DIR)
model_pool_size = len(ensemble_filenames)
checkpoint = tf.train.Checkpoint(model=model)
print('Model pool size: {}'.format(model_pool_size))
# Compute the logits on the validation set.
val_logits, val_labels = [], []
for m, ensemble_filename in enumerate(ensemble_filenames):
# Enforce memory clean-up.
tf.keras.backend.clear_session()
checkpoint.restore(ensemble_filename)
val_iterator = iter(val_dataset)
val_logits_m = []
for _ in range(steps_per_val_eval):
inputs = next(val_iterator)
features = inputs['features']
labels = inputs['labels']
val_logits_m.append(model(features, training=False))
if m == 0:
val_labels.append(labels)
val_logits.append(tf.concat(val_logits_m, axis=0))
if m == 0:
val_labels = tf.concat(val_labels, axis=0)
if m % 10 == 0 or m == model_pool_size - 1:
percent = (m + 1.) / model_pool_size
message = ('{:.1%} completion for prediction on validation set: '
'model {:d}/{:d}.'.format(percent, m + 1, model_pool_size))
print(message)
# Ensemble construction by greedy member selection on the validation set.
selected_members, val_acc, val_nll = greedy_selection(val_logits, val_labels,
ENSEMBLE_SIZE,
objective='nll')
unique_selected_members = list(set(selected_members))
message = ('Members selected by greedy procedure: model ids = {} (with {} '
'unique member(s)).').format(
selected_members, len(unique_selected_members))
print(message)
# Evaluate the following metrics on the test set.
metrics = {
'ensemble/negative_log_likelihood': tf.keras.metrics.Mean(),
'ensemble/accuracy': tf.keras.metrics.SparseCategoricalAccuracy(),
}
metrics_single = {
'single/negative_log_likelihood': tf.keras.metrics.SparseCategoricalCrossentropy(),
'single/accuracy': tf.keras.metrics.SparseCategoricalAccuracy(),
}
# Compute logits for each ensemble member on the test set.
logits_test = []
for m, member_id in enumerate(unique_selected_members):
ensemble_filename = ensemble_filenames[member_id]
checkpoint.restore(ensemble_filename)
logits = []
test_iterator = iter(test_dataset)
for _ in range(steps_per_eval):
features = next(test_iterator)['features']
logits.append(model(features, training=False))
logits_test.append(tf.concat(logits, axis=0))
logits_test = tf.convert_to_tensor(logits_test)
print('Completed computation of member logits on the test set.')
# Compute test metrics.
test_iterator = iter(test_dataset)
for step in range(steps_per_eval):
labels = next(test_iterator)['labels']
logits = logits_test[:, (step*BATCH_SIZE):((step+1)*BATCH_SIZE)]
labels = tf.cast(labels, tf.int32)
negative_log_likelihood = _ensemble_cross_entropy(labels, logits)
# Per member output probabilities.
per_probs = tf.nn.softmax(logits)
# Ensemble output probabilites.
probs = tf.reduce_mean(per_probs, axis=0)
metrics['ensemble/negative_log_likelihood'].update_state(
negative_log_likelihood)
metrics['ensemble/accuracy'].update_state(labels, probs)
# For comparison compute performance of the best single model,
# this is by definition the first model that was selected by the greedy
# selection method.
logits_single = logits_test[0, (step*BATCH_SIZE):((step+1)*BATCH_SIZE)]
probs_single = tf.nn.softmax(logits_single)
metrics_single['single/negative_log_likelihood'].update_state(labels, logits_single)
metrics_single['single/accuracy'].update_state(labels, probs_single)
percent = (step + 1) / steps_per_eval
if step % 25 == 0 or step == steps_per_eval - 1:
message = ('{:.1%} completion final test prediction'.format(percent))
print(message)
ensemble_results = {name: metric.result() for name, metric in metrics.items()}
single_results = {name: metric.result() for name, metric in metrics_single.items()}
print('Ensemble performance:')
for m, val in ensemble_results.items():
print(' {}: {}'.format(m, val))
print('\nFor comparison:')
for m, val in single_results.items():
print(' {}: {}'.format(m, val))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Hyperparameter Ensembles for Robustness and Uncertainty Quantification
Step4: Let's construct the hyper-deep ensemble over a ResNet-20 architecture
Step5: Step 2
Step6: Now we are ready to construct the ensemble.
Step7: Evaluation on the test set
Step8: Here is the final ensemble performance
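A minimal sketch (editor's addition, synthetic logits rather than the checkpoint pool) of calling greedy_selection in isolation:
fake_labels = tf.constant(np.random.randint(0, num_classes, size=32))
fake_logits = [tf.constant(np.random.randn(32, num_classes).astype('float32'))
               for _ in range(5)]
ens, acc, nll = greedy_selection(fake_logits, fake_labels, max_ens_size=3)
print('selected:', ens, 'acc:', float(acc), 'nll:', float(nll))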
|
1,695
|
<ASSISTANT_TASK:>
Python Code:
supplyside = ['start_branch', [ 'branch1', 'branch2', 'branch3'], 'end_branch']
demandside = ['d_start_branch', ['d_branch1', 'd_branch2', 'd_branch3'], 'd_end_branch']
# you would normaly install eppy by doing
# python setup.py install
# or
# pip install eppy
# or
# easy_install eppy
# if you have not done so, uncomment the following three lines
import sys
# pathnameto_eppy = 'c:/eppy'
pathnameto_eppy = '../'
sys.path.append(pathnameto_eppy)
from eppy.modeleditor import IDF
from eppy import hvacbuilder
from io import StringIO
iddfile = "../eppy/resources/iddfiles/Energy+V7_0_0_036.idd"
IDF.setiddname(iddfile)
# make the topology of the loop
idf = IDF(StringIO('')) # makes an empty idf file in memory with no file name
loopname = "p_loop"
sloop = ['sb0', ['sb1', 'sb2', 'sb3'], 'sb4'] # supply side of the loop
dloop = ['db0', ['db1', 'db2', 'db3'], 'db4'] # demand side of the loop
hvacbuilder.makeplantloop(idf, loopname, sloop, dloop)
idf.saveas("hhh1.idf")
import ex_inits #no need to know this code, it just shows the image below
for_images = ex_inits
for_images.display_png(for_images.plantloop1) # display the image below
# make a new branch chiller->pipe1-> pipe2
# make a new pipe component
pipe1 = idf.newidfobject("PIPE:ADIABATIC", 'np1')
# make a new chiller
chiller = idf.newidfobject("Chiller:Electric".upper(), 'Central_Chiller')
# make another pipe component
pipe2 = idf.newidfobject("PIPE:ADIABATIC", 'np2')
# get the loop we are trying to modify
loop = idf.getobject('PLANTLOOP', 'p_loop') # args are (key, name)
# get the branch we are trying to modify
branch = idf.getobject('BRANCH', 'sb0') # args are (key, name)
listofcomponents = [chiller, pipe1, pipe2] # the new components are connected in this order
try:
newbr = hvacbuilder.replacebranch(idf, loop, branch, listofcomponents, fluid='Water')
except hvacbuilder.WhichLoopError as e:
print(e)
# instead of passing chiller to the function, we pass a tuple (chiller, 'Chilled_Water_').
# This lets eppy know where the connection should be made.
# The idfobject pipe does not have this ambiguity. So pipes do not need this extra information
listofcomponents = [(chiller, 'Chilled_Water_'), pipe1, pipe2]
try:
newbr = hvacbuilder.replacebranch(idf, loop, branch, listofcomponents, fluid='Water')
except Exception as e:
print(e)
else: # else will run only if the try suceeds
print("no exception was thrown")
idf.saveas("hhh_new.idf")
import ex_inits #no need to know this code, it just shows the image below
for_images = ex_inits
for_images.display_png(for_images.plantloop2) # display the image below
# to traverse the loop we are going to call some functions ex_loopdiagrams.py,
# the program that draws the loop diagrams.
from eppy.useful_scripts import loopdiagram
fname = 'hhh_new.idf'
iddfile = '../eppy/resources/iddfiles/Energy+V8_0_0.idd'
edges = loopdiagram.getedges(fname, iddfile)
# edges are the lines that draw the nodes in the loop.
# The term comes from graph theory in mathematics
from eppy import walk_hvac
firstnode = "Central_Chiller"
nextnodes = walk_hvac.nextnode(edges, firstnode)
print(nextnodes)
nextnodes = walk_hvac.nextnode(edges, nextnodes[0])
print(nextnodes)
nextnodes = walk_hvac.nextnode(edges, nextnodes[0])
print(nextnodes)
nextnodes = walk_hvac.nextnode(edges, nextnodes[0])
print(nextnodes)
nextnodes = walk_hvac.nextnode(edges, nextnodes[0])
print(nextnodes)
nextnodes = walk_hvac.nextnode(edges, nextnodes[0])
print(nextnodes)
nextnodes = walk_hvac.nextnode(edges, nextnodes[0])
print(nextnodes)
lastnode = 'sb4_pipe'
prevnodes = walk_hvac.prevnode(edges, lastnode)
print(prevnodes)
prevnodes = walk_hvac.prevnode(edges, prevnodes[0])
print(prevnodes)
prevnodes = walk_hvac.prevnode(edges, prevnodes[0])
print(prevnodes)
prevnodes = walk_hvac.prevnode(edges, prevnodes[0])
print(prevnodes)
prevnodes = walk_hvac.prevnode(edges, prevnodes[0])
print(prevnodes)
prevnodes = walk_hvac.prevnode(edges, prevnodes[0])
print(prevnodes)
prevnodes = walk_hvac.prevnode(edges, prevnodes[0])
print(prevnodes)
condensorloop_idf = IDF(StringIO(''))
loopname = "c_loop"
sloop = ['sb0', ['sb1', 'sb2', 'sb3'], 'sb4'] # supply side
dloop = ['db0', ['db1', 'db2', 'db3'], 'db4'] # demand side
theloop = hvacbuilder.makecondenserloop(condensorloop_idf, loopname, sloop, dloop)
condensorloop_idf.saveas("c_loop.idf")
airloop_idf = IDF(StringIO(''))
loopname = "a_loop"
sloop = ['sb0', ['sb1', 'sb2', 'sb3'], 'sb4'] # supply side of the loop
dloop = ['zone1', 'zone2', 'zone3'] # zones on the demand side
hvacbuilder.makeairloop(airloop_idf, loopname, sloop, dloop)
airloop_idf.saveas("a_loop.idf")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Eppy will build the build the shape/topology of the loop using the two lists above. Each branch will have a placeholder component, like a pipe or a duct
Step2: We have made plant loop and saved it as hhh1.idf.
Step3: Modifying the topology of the loop
Step4: Now we are going to try to replace branch with the a branch made up of listofcomponents
Step5: The above code throws the exception. It says that the idfobject CHILLER has connection points for more than one loop, so eppy cannot tell which one to hook into the branch
Step6: Tangential note
Step7: This diagram shows the new components in the branch
Step8: The above code gets us the edges of the loop diagram. Once we have the edges, we can traverse through the diagram. Let us start with the "Central_Chiller" and work our way down.
Step9: This leads us to three components -> ['sb1_pipe', 'sb2_pipe', 'sb3_pipe']. Let us follow one of them
Step10: We have reached the end of this branch. There are no more components.
Step11: All the way to where the loop ends
Step12: Again, just as we did in the plant loop, we can change the components of the loop, by replacing the branchs and traverse the loop using the functions nextnode() and prevnode()
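The repeated nextnode() calls above can be rolled into a single loop (editor's sketch, using the same edges and start node):
node = "Central_Chiller"
while True:
    nxt = walk_hvac.nextnode(edges, node)
    if not nxt:
        break  # end of the branch: no more components
    print(nxt)
    node = nxt[0]  # follow the first component where the loop branches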
|
1,696
|
<ASSISTANT_TASK:>
Python Code:
from dkrz_forms import form_widgets
form_widgets.show_status('form-submission')
from dkrz_forms import form_handler, form_widgets
#please provide your last name - replacing ... below
MY_LAST_NAME = "ki"
form_info = form_widgets.check_pwd(MY_LAST_NAME)
sf = form_handler.init_form(form_info)
form = sf.sub.entity_out.form_info
import pprint
from dkrz_forms import form_handler
pprint.pprint(form_handler.form_to_dict(sf))
# (informal) type of data
form.data_type = "...." # e.g. model data, observational data, ..
# # free text describing scientific context of data
form.scientific_context ="..."
# free text describing the expected usage as part of the DKRZ CMIP Data pool
form.usage = "...."
# free text describing access rights (who is allowed to read the data)
form.access_rights = "...."
# generic terms of policy information
form.terms_of_use = "...." # e.g. unrestricted, restricted
# any additional comment on context
form.access_group = "...."
form.context_comment = "...."
# information on where the data is stored and can be accessed
# e.g. file system path if on DKRZ storage, url etc. if on web accessible resources (cloud,thredds server,..)
form.data_path = "...."
# timing constraints, when the data ingest should be completed
# (e.g. because the data source is only accessible in specific time frame)
form.best_ingest_before = "...."
# directory structure information, especially
form.directory_structure = "..." # e.g. institute/experiment/file.nc
form.directory_structure_convention = "..." # e.g. CMIP5, CMIP6, CORDEX, your_convention_name
form.directory_structure_comment = "..." # free text, e.g. with link describing the directory structure convention you used
# metadata information
form.metadata_convention_name = "..." # e.g. CF1.6 etc. None if not applicable
form.metadata_comment = "..." # information about metadata, e.g. links to metadata info etc.
# to be completed ..
form_handler.save_form(sf,"..my comment..") # edit my comment info
form_handler.email_form_info(sf)
form_handler.form_submission(sf)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Please provide information to unlock your form
Step2: Please provide the following information
Step3: technical information concerning your request
Step4: Check your submission form
Step5: Save your form
Step6: officially submit your form
|
1,697
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
train = pd.read_csv("data/train.csv", dtype={"Age": np.float64}, )
test = pd.read_csv("data/test.csv", dtype={"Age": np.float64}, )
def harmonize_data(titanic):
titanic["Age"] = titanic["Age"].fillna(titanic["Age"].median())
titanic["Age"].median()
titanic.loc[titanic["Sex"] == "male", "Sex"] = 0
titanic.loc[titanic["Sex"] == "female", "Sex"] = 1
titanic["Embarked"] = titanic["Embarked"].fillna("S")
titanic.loc[titanic["Embarked"] == "S", "Embarked"] = 0
titanic.loc[titanic["Embarked"] == "C", "Embarked"] = 1
titanic.loc[titanic["Embarked"] == "Q", "Embarked"] = 2
titanic["Fare"] = titanic["Fare"].fillna(titanic["Fare"].median())
return titanic
def create_submission(alg, train, test, predictors, filename):
alg.fit(train[predictors], train["Survived"])
predictions = alg.predict(test[predictors])
submission = pd.DataFrame({
"PassengerId": test["PassengerId"],
"Survived": predictions
})
submission.to_csv(filename, index=False)
train_data = harmonize_data(train)
test_data = harmonize_data(test)
from sklearn.ensemble import RandomForestClassifier
from sklearn import cross_validation
predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked"]
alg = RandomForestClassifier(
random_state=1,
n_estimators=150,
min_samples_split=4,
min_samples_leaf=2
)
scores = cross_validation.cross_val_score(
alg,
train_data[predictors],
train_data["Survived"],
cv=3
)
print(scores.mean())
create_submission(alg, train_data, test_data, predictors, "run-01.csv")
from sklearn.linear_model import LogisticRegression
from sklearn import cross_validation
predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked"]
alg = LogisticRegression(random_state=1)
scores = cross_validation.cross_val_score(
alg,
train_data[predictors],
train_data["Survived"],
cv=3
)
print(scores.mean())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Main part
Step2: Compare to Logistic Regression
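To see which features drive the random forest, a short sketch (editor's addition; feature_importances_ is standard scikit-learn and the output order follows predictors):
alg = RandomForestClassifier(random_state=1, n_estimators=150,
                             min_samples_split=4, min_samples_leaf=2)
alg.fit(train_data[predictors], train_data["Survived"])
for name, imp in zip(predictors, alg.feature_importances_):
    print("%-10s %.3f" % (name, imp))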
|
1,698
|
<ASSISTANT_TASK:>
Python Code:
#Imports
from numpy import *
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from scipy.integrate import quad
from scipy.special import erf
import sys
import os
#Import custom modules
sys.path.append('/home/drake/Documents/Physics/Research/Python/Modules')
from physics import *
%matplotlib notebook
Dcell = 55*10**-6
DPET = 175*10**-6
Dcell2 = 13*10**-6
rhocell = 1.44*10**3
rhoPET = 1.38*10**3
rhophos = 4.48*10**3
sigma = 70*10**-2
Dphos = (sigma - Dcell*rhocell - DPET*rhoPET - Dcell2*rhocell)/rhophos
print(Dphos*10**6)
N0 = 10**6 #Number of photons emitted at t=0
lambdas = 2.85*10**-6 #Diffusion length in m
D = lambdas*c/6 #Diffusion constant
A = 100*10**-6*100*10**-6 #Area of segment in m^2
L = 81*10**-6 #Depth of lanex in m
l = 10.0*10**-6 #Distance from top lanex edge to segment in m
d = L-l #Distance from bottom lanex edge to segment
def n(z,t):
'''Returns the photon density at position z and time t'''
n0 = N0/(2*A*sqrt(pi*D*t))
Sum = 0
maxm = 10
for m in range(-maxm,maxm+1):
Sum += exp(-(z-2*m*(l+d))**2/(4*D*t))-exp(-(z+2*m*(l+d)-2*d)**2/(4*D*t))
return n0*Sum
def particlecurrent(t):
'''Returns the particle current (photons per second per meter^2) at the boundary z=d at time t'''
Sum = 0
maxm = 10
for m in range(-maxm,maxm+1):
am = d-2*m*L
Sum += am*exp(-am**2/(4*D*t))
return N0/(A*sqrt(4*pi*D*t**3))*Sum
narray = []
zarray = np.linspace(-l,d,1000)
time = [1,10,10**2,10**3,10**4]
time = np.multiply(time,10**-15) #convert to s
for i in range(len(time)):
narray.append([])
for z in zarray:
narray[i].append(n(z,time[i])*10**-6)
zarray = np.multiply(zarray,10**6)
#Update the matplotlib configuration parameters
mpl.rcParams.update({'font.size': 18, 'font.family': 'serif'})
#Adjust figure size
plt.subplots(figsize=(12,6))
color = ['r','g','b','c','m','y','k']
legend = []
for i in range(5):
legend.append(str(int(time[i]*10**15))+' fs')
plt.plot(zarray,narray[i],color=color[i],linewidth=2,label=legend[i])
plt.xlim(np.min(zarray),np.max(zarray))
plt.ylim(1.0*10**6,np.max(narray[0]))
plt.xlabel('Position (um)')
plt.ylabel('Photon Density (m^-3)')
#plt.semilogy()
plt.legend(loc=1)
particlecurrentarray = []
tarray = []
for t in linspace(10**-15,50*10**-12,1000):
tarray.append(t*10**12)
particlecurrentarray.append(particlecurrent(t))
#Update the matplotlib configuration parameters
mpl.rcParams.update({'font.size': 18, 'font.family': 'serif'})
#Adjust figure size
plt.subplots(figsize=(12,6))
plt.plot(tarray,particlecurrentarray,linewidth=2)
plt.xlim(np.min(tarray),np.max(tarray))
plt.ylim(0)
plt.xlabel('time (ps)')
plt.ylabel('Photon Current at $z=d$ $(s^{-1} \cdot m^{-2})$')
#plt.semilogy()
plt.legend(loc=4)
Nabs = A*quad(particlecurrent,0,400*10**-12)[0] #Total number of photons absorbed at the boundary z=d
print(Nabs/N0)
def F(t,maxm,distance):
Sum1 = 0
Sum2 = 0
for m in range(-maxm,1):
am = distance-2*m*L
Sum1 += 1 - erf(am/sqrt(4*D*t))
for m in range(1,maxm+1):
am = distance-2*m*L
Sum2 += 1 + erf(am/sqrt(4*D*t))
return (Sum1 - Sum2)
FractionAbsArray = []
FractionAbsArrayAnalytic = []
tarray = []
for t in linspace(10**-12,50*10**-12,10000):
tarray.append(t*10**12)
#FractionAbsArray.append(A*quad(particlecurrent,0,t)[0]/N0)
FractionAbsArrayAnalytic.append(F(t,100,d))
#Adjust figure size
plt.subplots(figsize=(12,6))
plt.plot(tarray,FractionAbsArrayAnalytic,linewidth=2)
plt.xlim(np.min(tarray),np.max(tarray))
plt.ylim(0,1.0)
plt.xlim(0,50)
plt.xlabel('time (ps)')
plt.ylabel('Fraction Absorbed at $z=d$')
#plt.semilogy()
plt.legend(loc=4)
FractionAbsArrayAnalytic = []
distancearray = []
#Find the fraction of photons absorbed at z=d for various values of d ranging from 0 to L - 1 um (to avoid division by zero errors)
for distance in linspace(0,L-10**-6,100):
Integrationtime = 10**-12
TargetError = 10**-3
Error = 1.0
FractionAbsAnalytic=0
while Error>TargetError:
Error = abs(FractionAbsAnalytic-F(Integrationtime,100,distance))/F(Integrationtime,100,distance)
FractionAbsAnalytic = F(Integrationtime,100,distance)
Integrationtime *= 2
FractionAbsArrayAnalytic.append(FractionAbsAnalytic)
distancearray.append(distance*10**6)
#Update the matplotlib configuration parameters
mpl.rcParams.update({'font.size': 18, 'font.family': 'serif'})
#Adjust figure size
plt.subplots(figsize=(12,6))
plt.plot(distancearray,FractionAbsArrayAnalytic,linewidth=2)
#plt.xlim(np.min(tarray),np.max(tarray))
#plt.ylim(0,1.0)
#plt.xlim(0,50)
plt.xlabel('Segment Distance (um)')
plt.ylabel('Fraction Absorbed by CCD')
#plt.semilogy()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h3>Calculation of the phosphor layer thickness of lanex regular given its areal density</h3>
Step2: <h3>Define functions for photon density and photon current density</h3>
Step3: <h3>Plot photon density</h3>
Step4: <h3>Plot photon current density</h3>
Step5: <h3>Integrate photon current density</h3>
Step6: <h3>Calculate number of photons absorbed as a function of d</h3>
|
1,699
|
<ASSISTANT_TASK:>
Python Code:
# Import Python libaries
# ----------------------
import numpy as np # NumPy library
from denise_IO.denise_out import * # "DENISE" library
para["filename"] = "DENISE_marm_OBC.inp"
para["descr"] = "Marmousi-II"
para["PHYSICS"] = 1
para["MODE"] = 0
para["NX"] = 500 # number of grid points in x-direction
para["NY"] = 174 # number of grid points in y-direction
para["DH"] = 20. # spatial grid point distance [m]
# Define model basename
base_model = "model/marmousi_II_marine"
# Open vp-model and write IEEE-le binary data to vp array
# -------------------------------------------------------
f = open(base_model + ".vp")
data_type = np.dtype ('float32').newbyteorder ('<')
vp = np.fromfile (f, dtype=data_type)
f.close()
# Reshape (1 x nx*ny) vector to (ny x nx) matrix
vp = vp.reshape(para["NX"],para["NY"])
vp = np.transpose(vp)
vp = np.flipud(vp)
# Open vs-model and write IEEE-le binary data to vs array
# -------------------------------------------------------
f = open(base_model + ".vs")
data_type = np.dtype ('float32').newbyteorder ('<')
vs = np.fromfile (f, dtype=data_type)
f.close()
# Reshape (1 x nx*ny) vector to (ny x nx) matrix
vs = vs.reshape(para["NX"],para["NY"])
vs = np.transpose(vs)
vs = np.flipud(vs)
# Open rho-model and write IEEE-le binary data to rho array
# ---------------------------------------------------------
f = open(base_model + ".rho")
data_type = np.dtype ('float32').newbyteorder ('<')
rho = np.fromfile (f, dtype=data_type)
f.close()
# Reshape (1 x nx*ny) vector to (ny x nx) matrix
rho = rho.reshape(para["NX"],para["NY"])
rho = np.transpose(rho)
rho = np.flipud(rho)
x = np.arange(para["DH"], para["DH"] * (para["NX"] + 1), para["DH"])
y = np.arange(para["DH"], para["DH"] * (para["NY"] + 1), para["DH"])
# convert m -> km
x = np.divide(x,1000.0);
y = np.divide(y,1000.0);
cmap = "magma" # colormap
# define minimum and maximum material parameter values
vpmin = np.min(vp)
vpmax = np.max(vp)
vsmin = np.min(vs)
vsmax = np.max(vs)
rhomin = np.min(rho)
rhomax = np.max(rho)
# plot elastic model
plot_model(vp,vs,rho,x,y,cmap,vpmin,vpmax,vsmin,vsmax,rhomin,rhomax)
# model basename
model_basename = "marmousi_II_marine"
# location of model files during DENISE forward modelling run
para["MFILE"] = "start/" + model_basename
# writing P-wave velocity model to IEEE-le binary file
name_model = model_basename + ".vp"
f = open (name_model, mode='wb')
data_type = np.dtype ('float32').newbyteorder ('<')
vp1 = np.array(vp, dtype=data_type)
vp1 = np.rot90(vp1,3)
vp1.tofile(f)
f.close()
# writing S-wave velocity model to IEEE-le binary file
name_model = model_basename + ".vs"
f = open (name_model, mode='wb')
data_type = np.dtype ('float32').newbyteorder ('<')
vs1 = np.array(vs, dtype=data_type)
vs1 = np.rot90(vs1,3)
vs1.tofile(f)
f.close()
# writing density model to IEEE-le binary file
name_model = model_basename + ".rho"
f = open (name_model, mode='wb')
data_type = np.dtype ('float32').newbyteorder ('<')
rho1 = np.array(rho, dtype=data_type)
rho1 = np.rot90(rho1,3)
rho1.tofile(f)
f.close()
print("ximage n1=" + str(para["NY"]) + " < " + model_basename + ".vp")
print("ximage n1=" + str(para["NY"]) + " < " + model_basename + ".vs")
print("ximage n1=" + str(para["NY"]) + " < " + model_basename + ".rho")
# Order of spatial FD operator (2, 4, 6, 8, 10, 12)
para["FD_ORDER"] = 8
# Maximum relative group velocity error E
# (minimum number of grid points per shortest wavelength is defined by FD_ORDER and E)
# values:
# 0 = Taylor coefficients
# 1 = Holberg coeff.: E = 0.1 %
# 2 = E = 0.5 %
# 3 = E = 1.0 %
# 4 = E = 3.0 %
para["max_relative_error"] = 3
# maximum modelling frequency based on grid dispersion criterion for spatial FD operator
freqmax = calc_max_freq(vp,vs,para)
para["NPROCX"] = 5 # number of processors in x-direction
para["NPROCY"] = 3 # number of processors in y-direction
check_domain_decomp(para)
para["DT"] = check_stability(vp,vs,para)
para["TIME"] = 6.0 # time of wave propagation [s]
para["DT"] = 2.0e-3 # timestep [s]
para["L"] = 0 # number of relaxation mechanisms
para["FL"] = 40. # relaxation frequencies [Hz]
# free surface boundary condition
para["FREE_SURF"] = 1 # activate free surface boundary condition
# PML boundary frame
para["FW"] = 10
para["DAMPING"] = 1500.
para["FPML"] = 10.
para["npower"] = 4.
para["k_max_PML"] = 1.
# receiver x-coordinates
drec = 20. # receiver spacing [m]
xrec1 = 800. # 1st receiver position [m]
xrec2 = 8780. # last receiver position [m]
xrec = np.arange(xrec1, xrec2 + para["DH"], drec) # receiver positions in x-direction [m]
# place receivers at depth yrec [m]
depth_rec = 460. # receiver depth [m]
yrec = depth_rec * xrec/xrec
# assemble vectors into an array
tmp = np.zeros(xrec.size, dtype=[('var1', float), ('var2', float)])
tmp['var1'] = xrec
tmp['var2'] = yrec
check_src_rec_pml(xrec,yrec,para,1)
# write receiver positions to file
basename_rec = 'receiver_OBC'
np.savetxt(basename_rec + ".dat", tmp, fmt='%4.3f %4.3f')
# type of seismogram
para["SEISMO"] = 1
para["READREC"] = 1
para["REC_FILE"] = "./receiver/" + basename_rec
para["NDT"] = 1 # seismogram sampling rate in timesteps (has to be set to NDT=1 if you run FWI)
# location and name of seismogram output files in SU format
# particle velocities (if SEISMO=1 or SEISMO=4)
para["SEIS_FILE_VX"] = "su/DENISE_MARMOUSI_x.su" # filename for vx component
para["SEIS_FILE_VY"] = "su/DENISE_MARMOUSI_y.su" # filename for vy component
# curl and div of wavefield (if SEISMO=3 or SEISMO=4)
para["SEIS_FILE_CURL"] = "su/DENISE_MARMOUSI_rot.su" # filename for rot_z component ~ S-wave energy
para["SEIS_FILE_DIV"] = "su/DENISE_MARMOUSI_div.su" # filename for div component ~ P-wave energy
# pressure field (hydrophones) (if SEISMO=2 or SEISMO=4)
para["SEIS_FILE_P"] = "su/DENISE_MARMOUSI_p.su" # filename for pressure component
# source x-coordinates
dsrc = 80. # source spacing [m]
xsrc1 = 800. # 1st source position [m]
xsrc2 = 8780. # last source position [m]
xsrc = np.arange(xsrc1, xsrc2 + para["DH"], dsrc) # source positions in x-direction [m]
# place sources at depth ysrc [m]
depth_src = 40. # source depth [m]
ysrc = depth_src * xsrc/xsrc
# number of source positions
nshot = (int)(len(ysrc))
# z-coordinate = 0 due to 2D code [m]
zsrc = 0.0 * (xsrc / xsrc)
# time delay of source wavelet [s]
td = 0.0 * (xsrc / xsrc)
# center frequency of pre-defined source wavelet [Hz]
fc = 10.0 * (xsrc / xsrc)
# you can also use the maximum frequency computed from the grid dispersion
# criterion in section 3. based on spatial discretization and FD operator
# fc = (freqmax / 2.) * (xsrc / xsrc)
# amplitude of source wavelet [m]
amp = 1.0 * (xsrc / xsrc)
# angle of rotated source [°]
angle = 0.0 * (xsrc / xsrc)
# define source type:
# 2D PSV case
# -----------
# explosive sources (QUELLTYP=1)
# point forces in x- and y-direction (QUELLTYP=2,3)
# 2D SH case
# -----------
# point force in z-direction (QUELLTYP=1)
QUELLTYP = 1
src_type = QUELLTYP * (xsrc / xsrc)
check_src_rec_pml(xsrc,ysrc,para,2)
# write source positions and properties to file
basename_src = "source_OBC_VSP.dat"
# create and open source file
fp = open(basename_src, mode='w')
# write nshot to file header
fp.write(str(nshot) + "\n")
# write source properties to file
for i in range(0,nshot):
fp.write('{:4.2f}'.format(xsrc[i]) + "\t" + '{:4.2f}'.format(zsrc[i]) + "\t" + '{:4.2f}'.format(ysrc[i]) + "\t" + '{:1.2f}'.format(td[i]) + "\t" + '{:4.2f}'.format(fc[i]) + "\t" + '{:1.2f}'.format(amp[i]) + "\t" + '{:1.2f}'.format(angle[i]) + "\t" + str(src_type[i]) + "\t" + "\n")
# close source file
fp.close()
para["SOURCE_FILE"] = "./source/" + basename_src
para["RUN_MULTIPLE_SHOTS"] = 1
para["QUELLART"] = 6
para["SIGNAL_FILE"] = "./wavelet/wavelet_marmousi"
para["FC_SPIKE_1"] = -5.0 # lower corner frequency [Hz]
para["FC_SPIKE_2"] = 15.0 # upper corner frequency [Hz]
# you can also use the maximum frequency computed from the grid dispersion
# criterion in section 3. based on spatial discretization and FD operator
# para["FC_SPIKE_2"] = freqmax # upper corner frequency [Hz]
para["ORDER_SPIKE"] = 5 # order of Butterworth filter
para["TS"] = 8.0 # sweep length [s]
para["WRITE_STF"] = 1
cmap = "inferno"
plot_acq(vp,xrec/1000,yrec/1000,xsrc/1000,ysrc/1000,x,y,cmap,vpmin,vpmax)
para["SNAP"] = 0
para["SNAP_SHOT"] = 1 # compute and write snapshots for shot no. SNAP_SHOT
para["TSNAP1"] = 0.002 # first snapshot [s] (TSNAP1 has to fullfill the condition TSNAP1 > DT)
para["TSNAP2"] = 3.0 # first snapshot [s]
para["TSNAPINC"] = 0.06 # snapshot increment [s]
para["IDX"] = 1 # write only every IDX spatial grid point in x-direction to snapshot file
para["IDY"] = 1 # write only every IDY spatial grid point in y-direction to snapshot file
para["SNAP_FILE"] = "./snap/waveform_forward" # location and basename of the snapshot files
para["LOG_FILE"] = "log/Marmousi.log" # Log file name
para["ITERMAX"] = 600 # maximum number of TDFWI iterations at each FWI stage defined in FWI workflow file
para["JACOBIAN"] = "jacobian/gradient_Test" # location and basename of FWI gradients
para["DATA_DIR"] = "su/MARMOUSI_spike/DENISE_MARMOUSI" # location and basename of field data seismograms
para["INVMAT1"] = 1 # material parameterization for FWI (Vp,Vs,rho=1/Zp,Zs,rho=2/lam,mu,rho=3)
# Currently, only the Vp-Vs-rho parametrization (INVMAT1=1) can be used
para["GRAD_FORM"] = 1 # gradient formulation (time integration of adjoint sources = 1, no time integration = 2)
# Adjoint source type
# x-y components = 1; y-comp = 2; x-comp = 3; p-comp = 4; x-p-comp = 5; y-p-comp = 6; x-y-p-comp = 7
para["QUELLTYPB"] = 1
# Optimization method
para["GRAD_METHOD"] = 2 # PCG = 1; LBFGS = 2
# PCG_BETA (Fletcher_Reeves=1/Polak_Ribiere=2/Hestenes_Stiefel=3/Dai_Yuan=4)
para["PCG_BETA"] = 2
# store NLBFGS update during LBFGS optimization
para["NLBFGS"] = 20
# store wavefields only every DTINV time sample for gradient computation
para["DTINV"] = 3
# FWI log file location and name
para["MISFIT_LOG_FILE"] = "Marmousi_fwi_log.dat"
# gradient taper geometry
para["GRADT1"] = 21
para["GRADT2"] = 25
para["GRADT3"] = 490
para["GRADT4"] = 500
para["TAPERLENGTH"] = (int)(para["GRADT2"]-para["GRADT1"])
# apply vertical taper (SWS_TAPER_GRAD_VERT=1)
para["SWS_TAPER_GRAD_VERT"] = 0
# apply horizontal taper (SWS_TAPER_GRAD_HOR=1)
para["SWS_TAPER_GRAD_HOR"] = 1
# exponent of depth scaling for preconditioning
para["EXP_TAPER_GRAD_HOR"] = 2.0
# Circular taper around all sources (not at receiver positions)
para["SWS_TAPER_GRAD_SOURCES"] = 0
para["SWS_TAPER_CIRCULAR_PER_SHOT"] = 0
para["SRTSHAPE"] = 1 # SRTSHAPE: 1 = error_function; 2 = log_function
para["SRTRADIUS"] = 5. # --> minimum for SRTRADIUS is 5x5 gridpoints
# Read taper file from external file
para["SWS_TAPER_FILE"] = 0
# Location and basename of taper files
para["TFILE"] = "taper/taper"
# model location and basename
para["INV_MODELFILE"] = "model/modelTest"
# write inverted model after each iteration (yes=1)?
# Warning: Might require a lot of disk space
para["INV_MOD_OUT"] = 0
# upper limit for vp
para["VPUPPERLIM"] = 6000.
# lower limit for vp
para["VPLOWERLIM"] = 0.
# upper limit for vs
para["VSUPPERLIM"] = 4000.
# lower limit for vs
para["VSLOWERLIM"] = 0.
# upper limit for density
para["RHOUPPERLIM"] = 3000.
# lower limit for density
para["RHOLOWERLIM"] = 1000.
# upper limit for Qs
para["QSUPPERLIM"] = 100.
# lower limit for Qs
para["QSLOWERLIM"] = 10.
para["EPS_SCALE"] = 0.01 # initial model update during step length estimation
para["STEPMAX"] = 6 # maximum number of attemps to find a step length during line search
para["SCALEFAC"] = 2. # scale step during line search
# evaluate objective function only for a limited number of shots
para["TESTSHOT_START"] = 25
para["TESTSHOT_END"] = 75
para["TESTSHOT_INCR"] = 10
check_steplength(nshot,para)
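# Equivalent manual check of the test-shot selection (illustrative; the
# check_steplength() call above presumably performs a similar consistency test):
test_shots = list(range(para["TESTSHOT_START"],
                        para["TESTSHOT_END"] + 1,
                        para["TESTSHOT_INCR"]))
print("shots used for objective-function evaluation:", test_shots)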
# Activate trace muting (yes=1)
para["TRKILL"] = 0
# Location and name of trace mute file containing muting matrix
para["TRKILL_FILE"] = "./trace_kill/trace_kill.dat"
# Basename of picked traveltimes for each shot
# Time damping parameters are defined in the DENISE
# workflow file for each FWI stage
para["PICKS_FILE"] = "./picked_times/picks_"
write_denise_para(para)
para["filename_workflow"] = "FWI_workflow_marmousi.inp"
write_denise_workflow_header(para)
# Define FWI parameters for stage 1 ...
# Termination criterion
para["PRO"] = 0.01
# Frequency filtering
# TIME_FILT = 0 (apply no frequency filter to field data and source wavelet)
# TIME_FILT = 1 (apply low-pass filter to field data and source wavelet)
# TIME_FILT = 2 (apply band-pass filter to field data and source wavelet)
para["TIME_FILT"] = 1
# Low- (FC_LOW) and high-pass (FC_HIGH) corner frequencies of Butterworth filter
# of order ORDER
para["FC_LOW"] = 0.0
para["FC_HIGH"] = 2.0
para["ORDER"] = 6
# Time windowing
para["TIME_WIN"] = 0
para["GAMMA"] = 20.0
para["TWIN-"] = 0.0
para["TWIN+"] = 0.0
# Starting FWI of parameter class Vp, Vs, rho, Qs from iteration number
# INV_VP_ITER, INV_VS_ITER, INV_RHO_ITER, INV_QS_ITER
para["INV_VP_ITER"] = 0
para["INV_VS_ITER"] = 0
para["INV_RHO_ITER"] = 0
para["INV_QS_ITER"] = 0
# Apply spatial Gaussian filter to gradients
# SPATFILTER = 0 (apply no filter)
# SPATFILTER = 4 (Anisotropic Gaussian filter with half-width adapted to the local wavelength)
para["SPATFILTER"] = 0
# If Gaussian filter (SPATFILTER=4), define the fraction of the local wavelength in ...
# x-direction WD_DAMP and y-direction WD_DAMP1 used to define the half-width of the
# Gaussian filter
para["WD_DAMP"] = 0.5
para["WD_DAMP1"] = 0.5
# Preconditioning of the gradient directions
# EPRECOND = 0 - no preconditioning
# EPRECOND = 1 - approximation of the Pseudo-Hessian (Shin et al. 2001)
# EPRECOND = 3 - Hessian approximation according to Plessix & Mulder (2004)
para["EPRECOND"] = 3
# Define objective function
# LNORM = 2 - L2 norm
# LNORM = 5 - global correlation norm (Choi & Alkhalifah 2012)
# LNORM = 6 - envelope objective functions after Chi, Dong and Liu (2014) - EXPERIMENTAL
# LNORM = 7 - NIM objective function after Chauris et al. (2012) and Tejero et al. (2015) - EXPERIMENTAL
para["LNORM"] = 2
# Activate Random Objective Waveform Inversion (ROWI, Pan & Gao 2020)
# ROWI = 0 - no ROWI
# ROWI = 1 - 50% GCN l2 norm / 50% AGC l2 norm (AC, PSV, SH modules only)
para["ROWI"] = 0
# Source wavelet inversion
# STF = 0 - no source wavelet inversion
# STF = 1 - estimate source wavelet by stabilized Wiener Deconvolution
para["STF"] = 0
# If OFFSETC_STF > 0, limit source wavelet inversion to maximum offsets OFFSETC_STF
para["OFFSETC_STF"] = -4.0
# Source wavelet inversion stabilization term to avoid division by zero in the Wiener deconvolution
para["EPS_STF"] = 1e-1
# Apply Offset mute to field and modelled seismograms
# OFFSET_MUTE = 0 - no offset mute
# OFFSET_MUTE = 1 - mute far-offset data for offset >= OFFSETC
# OFFSET_MUTE = 2 - mute near-offset data for offset <= OFFSETC
para["OFFSET_MUTE"] = 0
para["OFFSETC"] = 10
# Scale density and Qs updates during multiparameter FWI by factors
# SCALERHO and SCALEQS, respectively
para["SCALERHO"] = 0.5
para["SCALEQS"] = 1.0
# If LNORM = 6, define type of envelope objective function (EXPERIMENTAL)
# ENV = 1 - L2 envelope objective function
# ENV = 2 - Log L2 envelope objective function
para["ENV"] = 1
# Integrate field and modelled data N_ORDER times (EXPERIMENTAL)
para["N_ORDER"] = 0
# Write parameters to DENISE workflow file
write_denise_workflow(para)
# Define FWI parameters for stage 2 ...
# Termination criterion
para["PRO"] = 0.01
# Frequency filtering
# TIME_FILT = 0 (apply no frequency filter to field data and source wavelet)
# TIME_FILT = 1 (apply low-pass filter to field data and source wavelet)
# TIME_FILT = 2 (apply band-pass filter to field data and source wavelet)
para["TIME_FILT"] = 1
# Low- (FC_LOW) and high-pass (FC_HIGH) corner frequencies of Butterworth filter
# of order ORDER
para["FC_LOW"] = 0.0
para["FC_HIGH"] = 5.0
para["ORDER"] = 6
# Time windowing
para["TIME_WIN"] = 0
para["GAMMA"] = 20.0
para["TWIN-"] = 0.0
para["TWIN+"] = 0.0
# Starting FWI of parameter class Vp, Vs, rho, Qs from iteration number
# INV_VP_ITER, INV_VS_ITER, INV_RHO_ITER, INV_QS_ITER
para["INV_VP_ITER"] = 0
para["INV_VS_ITER"] = 0
para["INV_RHO_ITER"] = 0
para["INV_QS_ITER"] = 0
# Apply spatial Gaussian filter to gradients
# SPATFILTER = 0 (apply no filter)
# SPATFILTER = 4 (Anisotropic Gaussian filter with half-width adapted to the local wavelength)
para["SPATFILTER"] = 0
# If Gaussian filter (SPATFILTER=4), define the fraction of the local wavelength in ...
# x-direction WD_DAMP and y-direction WD_DAMP1 used to define the half-width of the
# Gaussian filter
para["WD_DAMP"] = 0.5
para["WD_DAMP1"] = 0.5
# Preconditioning of the gradient directions
# EPRECOND = 0 - no preconditioning
# EPRECOND = 1 - approximation of the Pseudo-Hessian (Shin et al. 2001)
# EPRECOND = 3 - Hessian approximation according to Plessix & Mulder (2004)
para["EPRECOND"] = 3
# Define objective function
# LNORM = 2 - L2 norm
# LNORM = 5 - global correlation norm (Choi & Alkhalifah 2012)
# LNORM = 6 - envelope objective functions after Chi, Dong and Liu (2014) - EXPERIMENTAL
# LNORM = 7 - NIM objective function after Chauris et al. (2012) and Tejero et al. (2015) - EXPERIMENTAL
para["LNORM"] = 2
# Activate Random Objective Waveform Inversion (ROWI, Pan & Gao 2020)
# ROWI = 0 - no ROWI
# ROWI = 1 - 50% GCN l2 norm / 50% AGC l2 norm (AC, PSV, SH modules only)
para["ROWI"] = 0
# Source wavelet inversion
# STF = 0 - no source wavelet inversion
# STF = 1 - estimate source wavelet by stabilized Wiener Deconvolution
para["STF"] = 0
# If OFFSETC_STF > 0, limit source wavelet inversion to maximum offsets OFFSETC_STF
para["OFFSETC_STF"] = -4.0
# Source wavelet inversion stabilization term to avoid division by zero in the Wiener deconvolution
para["EPS_STF"] = 1e-1
# Apply Offset mute to field and modelled seismograms
# OFFSET_MUTE = 0 - no offset mute
# OFFSET_MUTE = 1 - mute far-offset data for offset >= OFFSETC
# OFFSET_MUTE = 2 - mute near-offset data for offset <= OFFSETC
para["OFFSET_MUTE"] = 0
para["OFFSETC"] = 10
# Scale density and Qs updates during multiparameter FWI by factors
# SCALERHO and SCALEQS, respectively
para["SCALERHO"] = 0.5
para["SCALEQS"] = 1.0
# If LNORM = 6, define type of envelope objective function (EXPERIMENTAL)
# ENV = 1 - L2 envelope objective function
# ENV = 2 - Log L2 envelope objective function
para["ENV"] = 1
# Integrate field and modelled data N_ORDER times (EXPERIMENTAL)
para["N_ORDER"] = 0
# Write parameters to DENISE workflow file
write_denise_workflow(para)
# Define FWI parameters for stage 3 ...
# Termination criterion
para["PRO"] = 0.01
# Frequency filtering
# TIME_FILT = 0 (apply no frequency filter to field data and source wavelet)
# TIME_FILT = 1 (apply low-pass filter to field data and source wavelet)
# TIME_FILT = 2 (apply band-pass filter to field data and source wavelet)
para["TIME_FILT"] = 1
# Low- (FC_LOW) and high-pass (FC_HIGH) corner frequencies of Butterworth filter
# of order ORDER
para["FC_LOW"] = 0.0
para["FC_HIGH"] = 10.0
para["ORDER"] = 6
# Time windowing
para["TIME_WIN"] = 0
para["GAMMA"] = 20.0
para["TWIN-"] = 0.0
para["TWIN+"] = 0.0
# Starting FWI of parameter class Vp, Vs, rho, Qs from iteration number
# INV_VP_ITER, INV_VS_ITER, INV_RHO_ITER, INV_QS_ITER
para["INV_VP_ITER"] = 0
para["INV_VS_ITER"] = 0
para["INV_RHO_ITER"] = 0
para["INV_QS_ITER"] = 0
# Apply spatial Gaussian filter to gradients
# SPATFILTER = 0 (apply no filter)
# SPATFILTER = 4 (Anisotropic Gaussian filter with half-width adapted to the local wavelength)
para["SPATFILTER"] = 0
# If Gaussian filter (SPATFILTER=4), define the fraction of the local wavelength in ...
# x-direction WD_DAMP and y-direction WD_DAMP1 used to define the half-width of the
# Gaussian filter
para["WD_DAMP"] = 0.5
para["WD_DAMP1"] = 0.5
# Preconditioning of the gradient directions
# EPRECOND = 0 - no preconditioning
# EPRECOND = 1 - approximation of the Pseudo-Hessian (Shin et al. 2001)
# EPRECOND = 3 - Hessian approximation according to Plessix & Mulder (2004)
para["EPRECOND"] = 3
# Define objective function
# LNORM = 2 - L2 norm
# LNORM = 5 - global correlation norm (Choi & Alkhalifah 2012)
# LNORM = 6 - envelope objective functions after Chi, Dong and Liu (2014) - EXPERIMENTAL
# LNORM = 7 - NIM objective function after Chauris et al. (2012) and Tejero et al. (2015) - EXPERIMENTAL
para["LNORM"] = 2
# Activate Random Objective Waveform Inversion (ROWI, Pan & Gao 2020)
# ROWI = 0 - no ROWI
# ROWI = 1 - 50% GCN l2 norm / 50% AGC l2 norm (AC, PSV, SH modules only)
para["ROWI"] = 0
# Source wavelet inversion
# STF = 0 - no source wavelet inversion
# STF = 1 - estimate source wavelet by stabilized Wiener Deconvolution
para["STF"] = 0
# If OFFSETC_STF > 0, limit source wavelet inversion to maximum offsets OFFSETC_STF
para["OFFSETC_STF"] = -4.0
# Source wavelet inversion stabilization term to avoid division by zero in the Wiener deconvolution
para["EPS_STF"] = 1e-1
# Apply Offset mute to field and modelled seismograms
# OFFSET_MUTE = 0 - no offset mute
# OFFSET_MUTE = 1 - mute far-offset data for offset >= OFFSETC
# OFFSET_MUTE = 2 - mute near-offset data for offset <= OFFSETC
para["OFFSET_MUTE"] = 0
para["OFFSETC"] = 10
# Scale density and Qs updates during multiparameter FWI by factors
# SCALERHO and SCALEQS, respectively
para["SCALERHO"] = 0.5
para["SCALEQS"] = 1.0
# If LNORM = 6, define type of envelope objective function (EXPERIMENTAL)
# ENV = 1 - L2 envelope objective function
# ENV = 2 - Log L2 envelope objective function
para["ENV"] = 1
# Integrate field and modelled data N_ORDER times (EXPERIMENTAL)
para["N_ORDER"] = 0
# Write parameters to DENISE workflow file
write_denise_workflow(para)
# Define FWI parameters for stage 4 ...
# Termination criterion
para["PRO"] = 0.01
# Frequency filtering
# TIME_FILT = 0 (apply no frequency filter to field data and source wavelet)
# TIME_FILT = 1 (apply low-pass filter to field data and source wavelet)
# TIME_FILT = 2 (apply band-pass filter to field data and source wavelet)
para["TIME_FILT"] = 1
# Low- (FC_LOW) and high-pass (FC_HIGH) corner frequencies of Butterworth filter
# of order ORDER
para["FC_LOW"] = 0.0
para["FC_HIGH"] = 20.0
para["ORDER"] = 6
# Time windowing
para["TIME_WIN"] = 0
para["GAMMA"] = 20.0
para["TWIN-"] = 0.0
para["TWIN+"] = 0.0
# Starting FWI of parameter class Vp, Vs, rho, Qs from iteration number
# INV_VP_ITER, INV_VS_ITER, INV_RHO_ITER, INV_QS_ITER
para["INV_VP_ITER"] = 0
para["INV_VS_ITER"] = 0
para["INV_RHO_ITER"] = 0
para["INV_QS_ITER"] = 0
# Apply spatial Gaussian filter to gradients
# SPATFILTER = 0 (apply no filter)
# SPATFILTER = 4 (Anisotropic Gaussian filter with half-width adapted to the local wavelength)
para["SPATFILTER"] = 0
# If Gaussian filter (SPATFILTER=4), define the fraction of the local wavelength in ...
# x-direction WD_DAMP and y-direction WD_DAMP1 used to define the half-width of the
# Gaussian filter
para["WD_DAMP"] = 0.5
para["WD_DAMP1"] = 0.5
# Preconditioning of the gradient directions
# EPRECOND = 0 - no preconditioning
# EPRECOND = 1 - approximation of the Pseudo-Hessian (Shin et al. 2001)
# EPRECOND = 3 - Hessian approximation according to Plessix & Mulder (2004)
para["EPRECOND"] = 3
# Define objective function
# LNORM = 2 - L2 norm
# LNORM = 5 - global correlation norm (Choi & Alkhalifah 2012)
# LNORM = 6 - envelope objective functions after Chi, Dong and Liu (2014) - EXPERIMENTAL
# LNORM = 7 - NIM objective function after Chauris et al. (2012) and Tejero et al. (2015) - EXPERIMENTAL
para["LNORM"] = 2
# Activate Random Objective Waveform Inversion (ROWI, Pan & Gao 2020)
# ROWI = 0 - no ROWI
# ROWI = 1 - 50% GCN l2 norm / 50% AGC l2 norm (AC, PSV, SH modules only)
para["ROWI"] = 0
# Source wavelet inversion
# STF = 0 - no source wavelet inversion
# STF = 1 - estimate source wavelet by stabilized Wiener Deconvolution
para["STF"] = 0
# If OFFSETC_STF > 0, limit source wavelet inversion to maximum offsets OFFSETC_STF
para["OFFSETC_STF"] = -4.0
# Source wavelet inversion stabilization term to avoid division by zero in the Wiener deconvolution
para["EPS_STF"] = 1e-1
# Apply Offset mute to field and modelled seismograms
# OFFSET_MUTE = 0 - no offset mute
# OFFSET_MUTE = 1 - mute far-offset data for offset >= OFFSETC
# OFFSET_MUTE = 2 - mute near-offset data for offset <= OFFSETC
para["OFFSET_MUTE"] = 0
para["OFFSETC"] = 10
# Scale density and Qs updates during multiparameter FWI by factors
# SCALERHO and SCALEQS, respectively
para["SCALERHO"] = 0.5
para["SCALEQS"] = 1.0
# If LNORM = 6, define type of envelope objective function (EXPERIMENTAL)
# ENV = 1 - L2 envelope objective function
# ENV = 2 - Log L2 envelope objective function
para["ENV"] = 1
# Integrate field and modelled data N_ORDER times (EXPERIMENTAL)
para["N_ORDER"] = 0
# Write parameters to DENISE workflow file
write_denise_workflow(para)
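# Summary of the multiscale frequency-continuation schedule defined above:
fc_schedule = [2.0, 5.0, 10.0, 20.0]    # FC_HIGH per FWI stage [Hz]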
print("mv " + model_basename + ".vp DENISE-Black-Edition/par/" + para["MFILE"] + ".vp")
print("mv " + model_basename + ".vs DENISE-Black-Edition/par/" + para["MFILE"] + ".vs")
print("mv " + model_basename + ".rho DENISE-Black-Edition/par/" + para["MFILE"] + ".rho")
print("mv " + basename_src + " DENISE-Black-Edition/par/" + para["SOURCE_FILE"][2::])
print("mv " + basename_rec + ".dat DENISE-Black-Edition/par" + para["REC_FILE"][1::] + ".dat")
print("mv " + para["filename"] + " DENISE-Black-Edition/par/")
print("mpirun -np " + str(para["NPROCX"]*para["NPROCY"]) + " ../bin/denise " + para["filename"])
print("mv " + para["filename_workflow"] + " DENISE-Black-Edition/par/")
print("mpirun -np " + str(para["NPROCX"]*para["NPROCY"]) + " ../bin/denise " + para["filename"] + "\t" + para["filename_workflow"])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Short description of modelling/FWI problem
Step2: Give a short description of your modelling/FWI problem
Step3: What kind of PHYSICS do you want to use? (2D-PSV=1; 2D-AC=2; 2D-PSV-VTI=3; 2D-PSV-TTI=4; 2D-SH=5)
Step4: Choose DENISE operation mode (MODE)
Step5: 2. Load external 2D elastic model
Step6: Load external elastic model
Step7: Define coordinate axis
Step8: Plot external model
Step9: Write model to IEEE-le binary file
Step10: To check if the models are correctly written to the binary files, you can use the Seismic Unix function ximage
Step11: 3. Define spatial FD operator
Step12: Estimate the maximum frequency in the source wavelet, which can be modelled by the given FD grid discretization and spatial FD operator, using the grid dispersion criterion
Step13: If you want to model higher frequency wave propagation, you have to decrease the spatial gridpoint distance DH by resampling the model
Step14: Check if the spatial domain decomposition is consistent with the spatial FD grid discretization. The following conditions have to be satisfied
Step15: If the domain decomposition conditions are not satisfied, you have to add additional gridpoints at the bottom and right model boundary.
Step16: If you want to apply an FWI, keep in mind that the FWI will change the velocity model. Therefore, the maximum seismic velocities in the model will increase and you should choose a smaller time step than the DT derived from the CFL criterion
Step17: 6. Q-approximation
Step18: 7. Boundary conditions
Step19: 8. Define acquisition geometry
Step20: Check if receivers are located in computational domain and not the PMLs
Step21: Write receiver positions to file
Step22: Define type of seismograms SEISMO
Step23: How does DENISE read receiver positions from a file? In case of a fixed spread geometry you only need a single receiver file (READREC=1). If you want to model a streamer geometry or, more generally, a variable acquisition geometry with changing receiver positions for each shot, you have to define a separate receiver file for each shot (READREC=2)
Step24: Define location and basename of receiver file, defined above, without ".dat" extension
Step25: Define the seismogram properties
Step26: b) Source properties and positions
Step27: Check if sources are located in computational domain and not the PMLs
Step28: Write source positions to file
Step29: Define location of the source file
Step30: Do you want to excite all source positions simultaneously (RUN_MULTIPLE_SHOTS=0) or start a separate modelling run for each shot (RUN_MULTIPLE_SHOTS=1)?
Step31: Define shape of the source signal (QUELLART)
Step32: If you read the wavelet from an ASCII file (QUELLART=3), you have to define the location of the signal file (SIGNAL_FILE)
Step33: In case of the bandlimited spike wavelet you have to define ...
Step34: In case of the Klauder wavelet you have to define the sweep length TS
Step35: Do you want to write the source wavelet to a SU file for each shot (WRITE_STF=1)?
Step36: Plot acquisition geometry relative to the subsurface model. Red stars denote the source positions and cyan triangles receiver positions
Step37: 9. Wavefield snapshots
Step38: 10. Log file name
Step39: FWI parameters
Step40: 12. FWI gradient taper functions
Step41: 13. FWI model output
Step42: 14. Bound constraints
Step43: 15. Step length estimation
Step44: Check step length estimation
Step45: 16. Trace muting
Step46: 17. Time damping
Step47: 18. Create DENISE parameter file
Step48: Define FWI workflow file
Step49: Create Header for DENISE FWI workflow file
Step50: 1. FWI parameters for each inversion stage
Step51: Instructions for preparing and starting a modelling/FWI run with DENISE Black-Edition
Step52: You can also copy the model files to an HPC cluster using SCP.
Step53: (c) Move receiver file(s) to the directory DENISE-Black-Edition/par/para["REC_FILE"]
Step54: (d) Move DENISE parameter file to the directory DENISE-Black-Edition/par/
Step55: (e) Within the DENISE-Black-Edition/par directory you can start the DENISE modelling run with
Step56: If you want to run a FWI, you also have to define a FWI workflow file ...
Step57: and run the FWI by typing
|