| Unnamed: 0 | cleaned_code | cleaned_prompt |
|---|---|---|
5,500
|
<ASSISTANT_TASK:>
Python Code:
from os.path import join
hot_dog_image_dir = 'inputs/seefood/train/hot_dog'
hot_dog_paths = [join(hot_dog_image_dir, filename) for filename in
['1000288.jpg',
'127117.jpg']]
not_hot_dog_image_dir = 'inputs/seefood/train/not_hot_dog'
not_hot_dog_paths = [join(not_hot_dog_image_dir, filename) for filename in
['823536.jpg',
'99890.jpg']]
img_paths = hot_dog_paths + not_hot_dog_paths
img_paths
import numpy as np
import tensorflow as tf
from tensorflow.python.keras.applications.resnet50 import preprocess_input
from tensorflow.python.keras.applications import ResNet50
from tensorflow.python.keras.preprocessing.image import load_img, img_to_array
image_size = 224
def read_and_prep_images(img_paths, img_height=image_size, img_width=image_size):
imgs = [load_img(img_path, target_size=(img_height, img_width)) for img_path in img_paths]
img_array = np.array([img_to_array(img) for img in imgs])
return preprocess_input(img_array)
my_model = ResNet50(weights='inputs/resnet50_weights_tf_dim_ordering_tf_kernels.h5')
test_data = read_and_prep_images(img_paths)
preds = my_model.predict(test_data)
import sys
sys.path.append('inputs/utils')
from decode_predictions import decode_predictions
from IPython.display import Image, display
most_likely_labels = decode_predictions(preds, top=1, class_list_path='inputs/imagenet_class_index.json')
for i, img_path in enumerate(img_paths):
display(Image(img_path))
print(most_likely_labels[i])
import sys
most_likely_labels = decode_predictions(preds, top=3, class_list_path='inputs/imagenet_class_index.json')
for i, img_path in enumerate(img_paths):
display(Image(img_path))
print(most_likely_labels[i])
from tensorflow.python.keras.applications import ResNet50
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Dense, Flatten, GlobalAveragePooling2D
num_classes = 2
resnet_weights_path = 'inputs/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5'
my_new_model = Sequential()
my_new_model.add(ResNet50(include_top=False, pooling='avg', weights=resnet_weights_path))
my_new_model.add(Dense(num_classes, activation='softmax'))
# No need to train the first layer of the ResNet model:
my_new_model.layers[0].trainable = False
my_new_model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])
from tensorflow.python.keras.applications.resnet50 import preprocess_input
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
image_size = 224
data_generator = ImageDataGenerator()
train_generator = data_generator.flow_from_directory(
'inputs/rural_and_urban_photos/train',
target_size=(image_size,image_size),
batch_size=24,
class_mode='categorical')
validation_generator = data_generator.flow_from_directory(
'inputs/rural_and_urban_photos/val',
target_size=(image_size,image_size),
class_mode='categorical')
my_new_model.fit_generator(
train_generator,
steps_per_epoch=3,
validation_data=validation_generator,
validation_steps=1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2) Set Up Preprocessing
Step2: 3) Modeling
Step3: 4) Visualize Your Results
Step4: Now you are ready to move on to transfer learning, which allows you to build fairly robust models with relatively little data.
Step5: Compile the Model
Step6: Fit the Model
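The `categorical_crossentropy` loss compiled in Step 5 is just the mean negative log-probability the model assigns to the true class. A minimal stdlib sketch (the sample values are made up for illustration, not taken from the hot-dog data):

```python
import math

def categorical_crossentropy(y_true, y_pred, eps=1e-12):
    """Mean negative log-likelihood of one-hot targets under predicted probabilities."""
    total = 0.0
    for onehot, probs in zip(y_true, y_pred):
        for t, p in zip(onehot, probs):
            # Only the true class (t == 1) contributes; eps guards against log(0).
            total -= t * math.log(max(p, eps))
    return total / len(y_true)

# Two samples, num_classes = 2 (hot dog vs. not hot dog).
y_true = [[1, 0], [0, 1]]
y_pred = [[0.9, 0.1], [0.2, 0.8]]
loss = categorical_crossentropy(y_true, y_pred)
print(round(loss, 4))  # -> 0.1643
```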
|
5,501
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
def big_list1(n):
l = []
for i in range(n):
l.append(i)
return l
def big_list2(n):
return list(range(n))
def big_list(n):
big_list1(n)
big_list2(n)
%prun -q -T profile_example.txt -D profile_example.stat big_list(100000)
with open('profile_example.txt', 'r') as f: content = f.read()
print(content)
import pstats
p = pstats.Stats('profile_example.stat')
p.strip_dirs().sort_stats('cumulative').print_stats()
from memory_profiler import memory_usage
mem_usage = memory_usage(-1, interval=.2, timeout=1)
mem_usage
%%file script_test.py
def big_list1(n):
l = []
for i in range(n):
l.append(i)
return l
def big_list2(n):
return list(range(n))
def big_list(n):
big_list1(n)
big_list2(n)
from script_test import big_list, big_list1, big_list2
%load_ext memory_profiler
prof = %mprun -r -f big_list1 -f big_list2 -T profile_example.mem big_list(100000)
with open('profile_example.mem', 'r') as f : content = f.read()
print(content)
%load_ext snakeviz
%system snakeviz --help
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: profiling with cProfile
Step2: Memory profile
Step3: The functions to test must be part of a file and cannot be implemented in the notebook, so we save the functions in a script and import them just afterwards.
Step4: We run the memory profiling
Step5: SnakeViz
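The constraint in Step 3 above — that `%mprun` needs the profiled functions to live in an importable file — can be sketched without the notebook magics. This Python 3 sketch writes a hypothetical module to a temporary directory and imports it programmatically:

```python
import importlib.util
import tempfile
from pathlib import Path

# Hypothetical module source: functions to be line-profiled must live in a
# real file on disk so the profiler can read their source.
SOURCE = '''
def big_list1(n):
    l = []
    for i in range(n):
        l.append(i)
    return l

def big_list2(n):
    return list(range(n))
'''

def save_and_import(source, name="script_test"):
    """Write `source` to <tmpdir>/<name>.py and import it as a module."""
    tmpdir = Path(tempfile.mkdtemp())
    path = tmpdir / (name + ".py")
    path.write_text(source)
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

mod = save_and_import(SOURCE)
print(mod.big_list1(5))  # -> [0, 1, 2, 3, 4]
print(mod.big_list2(5))  # -> [0, 1, 2, 3, 4]
```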
|
5,502
|
<ASSISTANT_TASK:>
Python Code:
import gzip
import requests
import zipfile
url = "https://dl.dropbox.com/s/lnly9gw8pb1xhir/overfitting.zip"
results = requests.get(url)
import StringIO
z = zipfile.ZipFile(StringIO.StringIO(results.content))
# z.extractall()
z.extractall()
z.namelist()
d = z.open('overfitting.csv')
d.readline()
import numpy as np
M = np.fromstring(d.read(), sep=",")
len(d.read())
np.fromstring?
data = np.loadtxt("overfitting.csv", delimiter=",", skiprows=1)
print
There are also 5 other fields,
case_id - 1 to 20,000, a unique identifier for each row
train - 1/0, this is a flag for the first 250 rows which are the training dataset
Target_Practice - we have provided all 20,000 Targets for this model, so you can develop your method completely off line.
Target_Leaderboard - only 250 Targets are provided. You submit your predictions for the remaining 19,750 to the Kaggle leaderboard.
Target_Evaluate - again only 250 Targets are provided. Those competitors who beat the 'benchmark' on the Leaderboard will be asked to make one further submission for the Evaluation model.
data.shape
ix_training = data[:, 1] == 1
ix_testing = data[:, 1] == 0
training_data = data[ix_training, 5:]
testing_data = data[ix_testing, 5:]
training_labels = data[ix_training, 2]
testing_labels = data[ix_testing, 2]
print "training:", training_data.shape, training_labels.shape
print "testing: ", testing_data.shape, testing_labels.shape
figsize(12, 4)
hist(training_data.flatten())
print training_data.shape[0] * training_data.shape[1]
import pymc as pm
to_include = pm.Bernoulli("to_include", 0.5, size=200)
coef = pm.Uniform("coefs", 0, 1, size=200)
@pm.deterministic
def Z(coef=coef, to_include=to_include, data=training_data):
ym = np.dot(to_include * training_data, coef)
return ym - ym.mean()
@pm.deterministic
def T(z=Z):
return 0.45 * (np.sign(z) + 1.1)
obs = pm.Bernoulli("obs", T, value=training_labels, observed=True)
model = pm.Model([to_include, coef, Z, T, obs])
map_ = pm.MAP(model)
map_.fit()
mcmc = pm.MCMC(model)
mcmc.sample(100000, 90000, 1)
(np.round(T.value) == training_labels).mean()
t_trace = mcmc.trace("T")[:]
(np.round(t_trace[-500:-400, :]).mean(axis=0) == training_labels).mean()
t_mean = np.round(t_trace).mean(axis=1)
imshow(t_trace[-10000:, :], aspect="auto")
colorbar()
figsize(23, 8)
coef_trace = mcmc.trace("coefs")[:]
imshow(coef_trace[-10000:, :], aspect="auto", cmap=pyplot.cm.RdBu, interpolation="none")
include_trace = mcmc.trace("to_include")[:]
figsize(23, 8)
imshow(include_trace[-10000:, :], aspect="auto", interpolation="none")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implementation of Tim Salimans' Don't Overfit submission
Step2: Develop Tim's model
Step3: looks pretty right
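The model in Step 2 squashes the centred linear score into a success probability via T(z) = 0.45·(sign(z) + 1.1), i.e. about 0.045 when z < 0 and 0.945 when z > 0. A stdlib sketch of that mapping:

```python
def T(z):
    # Integer sign without numpy: -1, 0, or +1.
    sign = (z > 0) - (z < 0)
    return 0.45 * (sign + 1.1)

# Negative scores map near 0, positive scores near 1, never exactly either,
# which keeps the Bernoulli likelihood well-defined.
print(round(T(-2.0), 3), round(T(3.0), 3))  # -> 0.045 0.945
```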
|
5,503
|
<ASSISTANT_TASK:>
Python Code:
p_hi = 0.8 # probability of success in the high probability subpopulation
p_lo = 0.2 # probability of success in the low probability subpopulation
delta_p = 0.05 # effect size
# probability of success under treatment
P_T_additive = delta_p + 0.5*p_hi+0.5*p_lo
# probability of success under control
P_C_additive = 0.5*p_hi+0.5*p_lo
def sample_additive_model(n):
# generate covariate and treatment variables and compute conditional outcome probabilities
covariate = np.random.binomial(1,0.5,n)
treatment = np.random.binomial(1,0.5,n)
prob = delta_p*treatment + (p_hi-p_lo)*covariate + p_lo
outcome = np.random.binomial(1, prob)
return covariate,treatment,outcome
pop_log_odds_additive=print_population_log_odds(P_T_additive,P_C_additive)
logistic_vs_plugin(sample_additive_model,pop_log_odds_additive)
# logistic model parameters
alpha_true = -1.5
beta_true = 0.4
gamma_true = 3.0
# probability of success under treatment
p_hi_T = logistic_map(alpha_true+beta_true+gamma_true)
p_lo_T = logistic_map(alpha_true+beta_true)
P_T_logistic = 0.5*p_hi_T+0.5*p_lo_T
# probability of success under control
p_hi_C = logistic_map(alpha_true+gamma_true)
p_lo_C = logistic_map(alpha_true)
P_C_logistic = 0.5*p_hi_C+0.5*p_lo_C
def sample_logistic_model(n):
# generate covariate and treatment variables and compute conditional outcome probabilities
covariate = np.random.binomial(1,0.5,n)
treatment = np.random.binomial(1,0.5,n)
prob = logistic_map(alpha_true + beta_true*treatment + gamma_true*covariate)
outcome = np.random.binomial(1, prob)
return covariate,treatment,outcome
pop_log_odds_logistic=print_population_log_odds(P_T_logistic,P_C_logistic)
logistic_vs_plugin(sample_logistic_model,pop_log_odds_logistic)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example 2
|
5,504
|
<ASSISTANT_TASK:>
Python Code:
import os
import pandas as pd
from sqlalchemy import create_engine
with open(os.environ["PGPASS"], "rb") as f:
content = f.readline().decode("utf-8").replace("\n", "").split(":")
engine = create_engine("postgresql://{user}:{passwd}@{host}/{db}".format(user=content[3],
passwd=content[4],
host=content[0],
db=content[2]))
df_examples = pd.read_sql("SELECT * FROM raw.frontpage_examples", con=engine)
test_example_data = df_examples[100:200]
test_example_data.to_csv('frontpage_examples.csv', index=False)
unique_crawls = test_example_data.crawlid.unique()
crawls_select_str = ', '.join([str(x) for x in unique_crawls])
df_crawls = pd.read_sql("SELECT * FROM raw.crawls WHERE crawlid IN ({})".format(crawls_select_str), con=engine)
df_crawls['ip'] = '127.0.0.1'
df_crawls['kernel_version'] = '1.2.3-4-generic'
df_crawls['os'] = 'Linux'
df_crawls['entry_node'] = '1A2B3C4D'
df_crawls.to_csv('crawls.csv', index=False)
unique_onion_services = test_example_data.hsid.unique()
onions_select_str = ', '.join([str(x) for x in unique_onion_services])
df_hs = pd.read_sql("SELECT * FROM raw.hs_history WHERE hsid IN ({})".format(onions_select_str), con=engine)
import random
word_file = "/usr/share/dict/words"
WORDS = open(word_file).read().splitlines()
word = random.choice(WORDS)
df_hs['hs_url'] = df_hs['hs_url'].map(lambda x: 'http://{}1234.onion'.format(random.choice(WORDS)))
df_hs.to_csv('hs_history.csv', index=False)
examples_select_str = ', '.join([str(x) for x in test_example_data.exampleid])
df_cells = pd.read_sql("SELECT * FROM raw.frontpage_traces WHERE exampleid IN ({})".format(examples_select_str), con=engine)
df_cells.to_csv('frontpage_traces.csv', index=False)
df_examples_w_hs_info = pd.merge(test_example_data, df_hs, on='hsid')
df_examples_and_cells = pd.merge(df_examples_w_hs_info, df_cells, on='exampleid')
df_examples_and_cells.to_csv('test_data.csv', index=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: frontpage_examples
Step2: Grab some examples
Step3: crawls
Step4: Remove VPS related info
Step5: onion services
Step6: Probably unnecessary, but replace each hs_url with a random (fake and invalid) address
Step7: frontpage_traces
Step8: Make a nice merged version for others to play with
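The anonymization in Step 6 boils down to building an obviously fake `.onion` address from a random dictionary word. A self-contained sketch (with a tiny made-up word list standing in for `/usr/share/dict/words`):

```python
import random

# Stand-in word list; the notebook reads the system dictionary instead.
WORDS = ["apple", "tiger", "river", "stone"]

def fake_onion_url(rng):
    # Deliberately invalid address: real v2/v3 onion names are base32 strings.
    return "http://{}1234.onion".format(rng.choice(WORDS))

rng = random.Random(0)  # seeded for reproducibility
url = fake_onion_url(rng)
print(url.startswith("http://") and url.endswith("1234.onion"))  # -> True
```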
|
5,505
|
<ASSISTANT_TASK:>
Python Code:
# boilerplate code
from __future__ import print_function
import os
from io import BytesIO
import numpy as np
from functools import partial
import PIL.Image
from IPython.display import clear_output, Image, display, HTML
import tensorflow as tf
#!wget https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip && unzip inception5h.zip
# model_fn = 'tensorflow_inception_graph.pb'
#CHANGED THIS TO MY GRAPH
model_fn = 'retrained_graph.pb'
# creating TensorFlow session and loading the model
graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
with tf.gfile.FastGFile(model_fn, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
t_input = tf.placeholder(np.float32, name='input') # define the input tensor
imagenet_mean = 117.0
t_preprocessed = tf.expand_dims(t_input-imagenet_mean, 0)
#HAD TO CHANGE THIS
tf.import_graph_def(graph_def, {'ResizeBilinear':t_preprocessed})
#tf.import_graph_def(graph_def, name='')
#tf.import_graph_def(graph_def, input_map={'input':t_preprocessed})
#HAD TO CHANGE THIS - NOT LAYERS WITH 'IMPORT/' IN NAME
layers = [op.name for op in graph.get_operations() if op.type=='Conv2D' and 'import/' in op.name]
#layers = [op.name for op in graph.get_operations() if op.type=='Conv2D']
feature_nums = [int(graph.get_tensor_by_name(name+':0').get_shape()[-1]) for name in layers]
#ADDED THIS LINE TO SEE ALL LAYERS PRINTED
print([op.name for op in graph.get_operations()])
print('Number of layers', len(layers))
print('Total number of feature channels:', sum(feature_nums))
# Helper functions for TF Graph visualization
def strip_consts(graph_def, max_const_size=32):
'''Strip large constant values from graph_def.'''
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = bytes("<stripped %d bytes>"%size, 'utf-8')
return strip_def
def rename_nodes(graph_def, rename_func):
res_def = tf.GraphDef()
for n0 in graph_def.node:
n = res_def.node.add()
n.MergeFrom(n0)
n.name = rename_func(n.name)
for i, s in enumerate(n.input):
n.input[i] = rename_func(s) if s[0]!='^' else '^'+rename_func(s[1:])
return res_def
def show_graph(graph_def, max_const_size=32):
'''Visualize TensorFlow graph.'''
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:800px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '&quot;'))
display(HTML(iframe))
# Visualizing the network graph. Be sure to expand the "mixed" nodes to see their
# internal structure. We are going to visualize "Conv2D" nodes.
tmp_def = rename_nodes(graph_def, lambda s:"/".join(s.split('_',1)))
show_graph(tmp_def)
# Picking some internal layer. Note that we use outputs before applying the ReLU nonlinearity
# to have non-zero gradients for features with negative initial activations.
#SWITCHED LAYER TO THE FINAL LAYER OF MY GRAPH
#layer = 'mixed4d_3x3_bottleneck_pre_relu'
layer = 'final_result'
channel = 139 # picking some feature channel to visualize
# start with a gray image with a little noise
img_noise = np.random.uniform(size=(224,224,3)) + 100.0
def showarray(a, fmt='jpeg'):
a = np.uint8(np.clip(a, 0, 1)*255)
f = BytesIO()
PIL.Image.fromarray(a).save(f, fmt)
display(Image(data=f.getvalue()))
def visstd(a, s=0.1):
'''Normalize the image range for visualization'''
return (a-a.mean())/max(a.std(), 1e-4)*s + 0.5
def T(layer):
'''Helper for getting layer output tensor'''
# return graph.get_tensor_by_name("import/%s:0"%layer)
#TRYING TO RESIZE THE TENSOR TO GET IT TO WORK, BUT JUST GUESSING HONESTLY
# print(tf.shape(tf.reshape(graph.get_tensor_by_name(layer), [-1,-1,-1,-1], name=None)))
# print(graph.get_tensor_by_name(layer))
#return tf.reshape(graph.get_tensor_by_name(layer), [2,1,1,1], name=None)
return graph.get_tensor_by_name("import/%s:0"%layer)
# return graph.get_tensor_by_name(layer)
def render_naive(t_obj, img0=img_noise, iter_n=20, step=1.0):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
print(t_score)
print(t_input)
img = img0.copy()
for i in range(iter_n):
g, score = sess.run([t_grad, t_score], {t_input:img})
#g, score = sess.run(t_obj, {t_input:img})
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print(score, end = ' ')
clear_output()
showarray(visstd(img))
render_naive(T(layer)[:][:,:,:,channel])
img_noise = np.random.uniform(size=(224,224,3)) + 100.0
def tffunc(*argtypes):
'''Helper that transforms TF-graph generating function into a regular one.
See "resize" function below.
'''
placeholders = list(map(tf.placeholder, argtypes))
def wrap(f):
out = f(*placeholders)
def wrapper(*args, **kw):
return out.eval(dict(zip(placeholders, args)), session=kw.get('session'))
return wrapper
return wrap
# Helper function that uses TF to resize an image
def resize(img, size):
img = tf.expand_dims(img, 0)
return tf.image.resize_bilinear(img, size)[0,:,:,:]
resize = tffunc(np.float32, np.int32)(resize)
def calc_grad_tiled(img, t_grad, tile_size=512):
'''Compute the value of tensor t_grad over the image in a tiled way.
Random shifts are applied to the image to blur tile boundaries over
multiple iterations.'''
sz = tile_size
h, w = img.shape[:2]
sx, sy = np.random.randint(sz, size=2)
img_shift = np.roll(np.roll(img, sx, 1), sy, 0)
grad = np.zeros_like(img)
for y in range(0, max(h-sz//2, sz),sz):
for x in range(0, max(w-sz//2, sz),sz):
sub = img_shift[y:y+sz,x:x+sz]
g = sess.run(t_grad, {t_input:sub})
grad[y:y+sz,x:x+sz] = g
return np.roll(np.roll(grad, -sx, 1), -sy, 0)
def render_multiscale(t_obj, img0=img_noise, iter_n=10, step=1.0, octave_n=3, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for octave in range(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print('.', end = ' ')
clear_output()
showarray(visstd(img))
render_multiscale(T(layer)[:,:,:,channel])
k = np.float32([1,4,6,4,1])
k = np.outer(k, k)
k5x5 = k[:,:,None,None]/k.sum()*np.eye(3, dtype=np.float32)
def lap_split(img):
'''Split the image into lo and hi frequency components'''
with tf.name_scope('split'):
lo = tf.nn.conv2d(img, k5x5, [1,2,2,1], 'SAME')
lo2 = tf.nn.conv2d_transpose(lo, k5x5*4, tf.shape(img), [1,2,2,1])
hi = img-lo2
return lo, hi
def lap_split_n(img, n):
'''Build Laplacian pyramid with n splits'''
levels = []
for i in range(n):
img, hi = lap_split(img)
levels.append(hi)
levels.append(img)
return levels[::-1]
def lap_merge(levels):
'''Merge Laplacian pyramid'''
img = levels[0]
for hi in levels[1:]:
with tf.name_scope('merge'):
img = tf.nn.conv2d_transpose(img, k5x5*4, tf.shape(hi), [1,2,2,1]) + hi
return img
def normalize_std(img, eps=1e-10):
'''Normalize image by making its standard deviation = 1.0'''
with tf.name_scope('normalize'):
std = tf.sqrt(tf.reduce_mean(tf.square(img)))
return img/tf.maximum(std, eps)
def lap_normalize(img, scale_n=4):
'''Perform the Laplacian pyramid normalization.'''
img = tf.expand_dims(img,0)
tlevels = lap_split_n(img, scale_n)
tlevels = list(map(normalize_std, tlevels))
out = lap_merge(tlevels)
return out[0,:,:,:]
# Showing the lap_normalize graph with TensorBoard
lap_graph = tf.Graph()
with lap_graph.as_default():
lap_in = tf.placeholder(np.float32, name='lap_in')
lap_out = lap_normalize(lap_in)
show_graph(lap_graph)
def render_lapnorm(t_obj, img0=img_noise, visfunc=visstd,
iter_n=10, step=1.0, octave_n=3, octave_scale=1.4, lap_n=4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# build the laplacian normalization graph
lap_norm_func = tffunc(np.float32)(partial(lap_normalize, scale_n=lap_n))
img = img0.copy()
for octave in range(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
g = lap_norm_func(g)
img += g*step
print('.', end = ' ')
clear_output()
showarray(visfunc(img))
render_lapnorm(T(layer)[:,:,:,channel])
render_lapnorm(T(layer)[:,:,:,65])
render_lapnorm(T('mixed3b_1x1_pre_relu')[:,:,:,101])
render_lapnorm(T(layer)[:,:,:,65]+T(layer)[:,:,:,139], octave_n=4)
def render_deepdream(t_obj, img0=img_noise,
iter_n=10, step=1.5, octave_n=4, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
# split the image into a number of octaves
img = img0
octaves = []
for i in range(octave_n-1):
hw = img.shape[:2]
lo = resize(img, np.int32(np.float32(hw)/octave_scale))
hi = img-resize(lo, hw)
img = lo
octaves.append(hi)
# generate details octave by octave
for octave in range(octave_n):
if octave>0:
hi = octaves[-octave]
img = resize(img, hi.shape[:2])+hi
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
img += g*(step / (np.abs(g).mean()+1e-7))
print('.',end = ' ')
clear_output()
showarray(img/255.0)
img0 = PIL.Image.open('eschertest.jpg')
img0 = np.float32(img0)
showarray(img0/255.0)
render_deepdream(tf.square(T('mixed4c')), img0)
render_deepdream(T(layer)[:,:,:,139], img0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a id='loading'></a>
Step6: To take a glimpse into the kinds of patterns that the network learned to recognize, we will try to generate images that maximize the sum of activations of a particular channel of a particular convolutional layer of the neural network. The network we explore contains many convolutional layers, each of which outputs tens to hundreds of feature channels, so we have plenty of patterns to explore.
Step7: <a id='naive'></a>
Step8: <a id="multiscale"></a>
Step9: <a id="laplacian"></a>
Step10: <a id="playing"></a>
Step11: Lower layers produce features of lower complexity.
Step12: There are many interesting things one may try. For example, optimizing a linear combination of features often gives a "mixture" pattern.
Step13: <a id="deepdream"></a>
Step14: Let's load some image and populate it with DogSlugs (in case you've missed them).
Step15: Note that results can differ from the Caffe's implementation, as we are using an independently trained network. Still, the network seems to like dogs and animal-like features due to the nature of the ImageNet dataset.
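The core idea behind Step 6 is gradient ascent: treat the channel activation as an objective and climb its gradient, as `render_naive` does. A 1-D toy sketch of that loop, with a made-up quadratic objective standing in for the network activation (gradient normalization omitted for simplicity):

```python
def objective(x):
    # Toy stand-in for "mean activation of a channel": peaked at x = 3.
    return -(x - 3.0) ** 2

def gradient(x):
    # Analytic derivative of the toy objective.
    return -2.0 * (x - 3.0)

def render_naive_1d(x0, iter_n=100, step=0.1):
    """Gradient ascent: repeatedly step uphill along the gradient."""
    x = x0
    for _ in range(iter_n):
        x += gradient(x) * step
    return x

# Starting from a "noise" point, the ascent converges to the maximizer x = 3.
x_final = render_naive_1d(x0=0.0)
print(round(x_final, 3))  # -> 3.0
```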
|
5,506
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division
import ipyparallel as ipp
import qinfer as qi
from functools import partial
%matplotlib inline
import matplotlib.pyplot as plt
try:
plt.style.use('ggplot')
except:
pass
client = ipp.Client()
print(client)
dview = client[:]
print(dview)
serial_model = qi.BinomialModel(qi.SimplePrecessionModel())
serial_model
parallel_model = qi.DirectViewParallelizedModel(serial_model, dview)
parallel_model
prior = qi.UniformDistribution([0, 1])
heuristic_class = partial(qi.ExpSparseHeuristic, t_field='x', other_fields={'n_meas': 20})
with qi.timing() as t:
performance = qi.perf_test_multiple(
100, parallel_model, 6000, prior, 200,
heuristic_class, progressbar=qi.IPythonProgressBar
)
print("Time elapsed: {:0.2f} s".format(t.delta_t))
plt.semilogy(performance['loss'].mean(axis=0))
plt.xlabel('# of Experiments')
plt.ylabel('Bayes Risk')
lbview = client.load_balanced_view()
with qi.timing() as t:
performance = qi.perf_test_multiple(
100, serial_model, 6000, prior, 200, heuristic_class,
progressbar=qi.IPythonProgressBar, apply=lbview.apply
)
print("Time elapsed: {:0.2f} s".format(t.delta_t))
plt.semilogy(performance['loss'].mean(axis=0))
plt.xlabel('# of Experiments')
plt.ylabel('Bayes Risk')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, we import the IPython parallelization library ipyparallel, as well as QInfer itself and some useful things from the Python standard library.
Step2: We'll make some nice graphical examples later, so we also import Matplotlib and point it to the ggplot stylesheet if available. We'll also tell Matplotlib to plot inline in our notebook.
Step3: Next, we setup the parallelization client. To start a cluster running, it's convienent to use the Jupyter Notebook extension. If you're running ipyparallel 5.0 or later, you can install the Notebook extension by running
Step4: Once we get a client up and running, the next step is to make a direct view of its engines. Since we want to use all of the engines provided by the client, we index it by a slice [:].
Step5: Next, we'll create a simple test model to try out parallelization support.
Step6: Having made this model, we can then wrap it in DirectViewParallelizedModel to parallelize its likelihood function.
Step7: Let's try it out by making a prior and specifying an experiment design heuristic.
Step8: The rest is handled by perf_test_multiple, which will run our example many times and report on its performance. In this case, parallelizing an individual model is a bit silly, since the simple precession likelihood is fast enough that communication costs blow away the savings from parallelization. That said, it makes an illustrative example. In practice, running a parallel model makes the most sense if the model likelihood is slow to evaluate, and if you're interested in a small number of runs.
Step9: Let's plot the performance data to make sure that our test actually works.
Step10: Note that, in the above plot, we see both the exponential scaling we expect as well the errors caused by numerical precision once we get very close to the true model.
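Step 8's caveat — that parallelizing a fast likelihood can be slower than running it serially — comes down to a simple cost model: total time is roughly per-task communication overhead plus the compute time split across engines. A sketch with made-up numbers (not measured ipyparallel overheads):

```python
def parallel_time(t_compute, n_engines, t_comm):
    # Crude cost model: fixed communication overhead plus evenly split work.
    return t_comm + t_compute / n_engines

# A fast likelihood (1 ms of work) loses to 10 ms of round-trip overhead...
fast = parallel_time(t_compute=0.001, n_engines=8, t_comm=0.010)
# ...while a slow likelihood (10 s of work) benefits enormously.
slow = parallel_time(t_compute=10.0, n_engines=8, t_comm=0.010)

print(fast > 0.001)  # parallel is slower than serial here -> True
print(slow < 10.0)   # parallel is faster than serial here -> True
```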
|
5,507
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pyasf
import pylab as pl
import sympy as sp
pl.rcParams.update({'font.size':14})
from IPython.display import display, Math
print_latex = lambda x: display(Math(sp.latex(x)))
sto = pyasf.unit_cell("sto_bulk_80873.cif") # init the cif file, this one has quite OK Debye Waller factors
print_latex(sto.AU_positions)
Sr_f1 = pl.loadtxt("STO_Sr-f1.dat")
Sr_f2 = pl.loadtxt("STO_Sr-f2.dat")
sto.feed_feff("Sr1", Sr_f1[:,0], Sr_f1[:,1], Sr_f2[:,1])
sto.fit_feff = True # hidden switch to scale fine structure to table data... have to implement this properly
Energy = pl.linspace(15500, 16500, 1001) # energy range for simulation
delta = sp.Symbol("delta", real=True)
sto.get_tensor_symmetry()
sto.build_unit_cell()
# indexes are: label, num in unit cell, direction (2=z)
sto.positions["O1"][0][2] += 3*delta
sto.positions["O1"][1][2] += 3*delta
sto.positions["O1"][2][2] += 3*delta
sto.positions["Ti1"][0][2] -= delta
print_latex(sto.positions)
R = 2,2,7 # choose reflection
delta_range = pl.linspace(-0.001, 0.001, 201) # choose range for displacement (lattice units)
sto.calc_structure_factor() # caching all reflections
print_latex(sto.F_0)
print_latex(sto.Uiso)
print_latex(sto.Uaniso)
Intensities = []
for delta_val in delta_range:
sto.subs[delta] = delta_val
F = sto.DAFS(Energy, R, force_refresh=False) # no refresh for speed
I = abs(F)**2
Intensities.append(I)
Intensities = pl.vstack(Intensities)
fig, ax = pl.subplots(1, 2, figsize=(18,5))
norm = pl.matplotlib.colors.LogNorm()
extent=(Energy[0], Energy[-1], delta_range[0], delta_range[-1])
ax[0].imshow(Intensities, norm=norm, origin="lower", extent=extent, aspect="auto", cmap=pl.cm.jet)
lines = ax[1].semilogy(Energy, Intensities[::50].T) # plot one of 50 lines
ax[1].legend(lines, ["$\\delta=%.3f$"%val for val in delta_range[::50]])
ax[0].set_ylabel("displacement $\\delta/c$")
ax[0].set_xlabel("Energy (eV)")
ax[1].set_xlabel("Energy (eV)")
ax[1].set_ylabel("Intensity")
mu = sto.get_absorption_isotropic(Energy)
pl.plot(Energy, mu)
sto.subs[delta] = 0.003
pl.semilogy(Energy, abs(sto.DAFS(Energy, (0,1,6)))**2)
pl.semilogy(Energy, abs(sto.DAFS(Energy, (0,2,7)))**2)
pl.semilogy(Energy, abs(sto.DAFS(Energy, (0,2,5)))**2)
pl.semilogy(Energy, abs(sto.DAFS(Energy, (0,0,6)))**2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: From databases we can only calculate the smooth part of the resonant corrections $f_1(E)$ and $f_2(E)$. The fine structure oscillations were obtained from powder EXAFS measurements and a Kramers-Kronig transform. They are in these files and can be processed by pyasf...
Step2: We define a Symbol for the polar displacement of Ti
Step3: This time we come from the high-symmetry .cif file. It would probably be better do load the lower Symmetry one ($P4mm$, like BaTiO3). Therefore we now cannot modify the asymmetric unit as it restricts us to high symmetry. Instead we work on the unit cell after building...
Step4: Here we randomly decided that the oxygen displacement is 3 times larger than and opposite to the titanium displacement. It's quite close to reality, but in practice we don't know that, and each non-equivalent atom in the unit cell (2x oxygen and 1x Ti in the MFP phase) has to get its own displacement parameter.
Step5: Let's calculate the generic structure factor and use some fancy sympy printing ;-)
Step6: U_ij are the atomic displacement parameters (due to thermal motion). Values are contained here
Step7: I hope you like this notebook. For the MFP phase, delta was roughly 0.003. A pity that we didn't reach the -0.01 ;)
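At its core, the structure factor computed in Step 5 is a phased sum over the atoms in the unit cell, F(hkl) = Σ f_j · exp(2πi(hx_j + ky_j + lz_j)). A minimal kinematic sketch with energy-independent, made-up scattering factors (the resonant f1/f2 corrections and Debye-Waller terms are what pyasf adds on top):

```python
import cmath

def structure_factor(hkl, atoms):
    """Kinematic structure factor over (f, (x, y, z)) pairs in fractional coordinates."""
    h, k, l = hkl
    return sum(f * cmath.exp(2j * cmath.pi * (h * x + k * y + l * z))
               for f, (x, y, z) in atoms)

# Toy cubic-perovskite-like cell; scattering factors are made-up constants.
atoms = [
    (38.0, (0.0, 0.0, 0.0)),   # Sr at the corner
    (22.0, (0.5, 0.5, 0.5)),   # Ti at the body centre
    (8.0,  (0.5, 0.5, 0.0)),   # the three O sites
    (8.0,  (0.5, 0.0, 0.5)),
    (8.0,  (0.0, 0.5, 0.5)),
]

# For (1,0,0): Sr +38, Ti -22, O sites -8, -8, +8  =>  |F| = 8.
F = structure_factor((1, 0, 0), atoms)
print(round(abs(F), 3))  # -> 8.0
```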
|
5,508
|
<ASSISTANT_TASK:>
Python Code:
documents = nltk.corpus.PlaintextCorpusReader('../data/EmbryoProjectTexts/files', 'https.+')
metadata = zotero.read('../data/EmbryoProjectTexts', index_by='link', follow_links=False)
wordcounts_per_document = nltk.ConditionalFreqDist([
(fileid, normalize_token(token))
for fileid in documents.fileids()
for token in documents.words(fileids=[fileid])
if filter_token(token)
])
from scipy import sparse
# We pick a single "focal" document that we want to characterize.
focal_fileid = documents.fileids()[3]
# Since this procedure will involve numerical matrices, we
# need to map documents and words onto row and column indices.
# These "dictionaries" will help us to keep track of those
# mappings.
document_index = {} # Maps int -> fileid (str).
vocabulary = {} # Maps int -> word (str).
lookup = {} # Reverse map for vocabulary (word (str) -> int).
# Containers for sparse data.
I = [] # Document vector.
J = [] # Word vector.
data = [] # Word count vector.
labels = [] # Vector of labels; either the URI of interest, or "Other".
# Here we transform the ConditionalFrequencyDist into three vectors (I, J, data)
# that sparsely describe the document-word count matrix.
for i, (fileid, counts) in enumerate(wordcounts_per_document.iteritems()):
document_index[i] = fileid
for token, count in counts.items():
# Removing low-frequency terms is optional, but speeds things up
# quite a bit for this demonstration.
if count < 3:
continue
# get() lets us look up the token's existing column index, or assign a new one if it is unseen.
j = lookup.get(token, len(vocabulary))
vocabulary[j] = token
lookup[token] = j
I.append(i)
J.append(j)
data.append(count)
labels.append(fileid if fileid == focal_fileid else 'Other')
print('\r', i, end='')
sparse_matrix = sparse.coo_matrix((data, (I, J)))
sparse_matrix.shape
from sklearn.feature_selection import chi2
from sklearn.feature_extraction.text import CountVectorizer
keyness, _ = chi2(sparse_matrix, labels)
ranking = np.argsort(keyness)[::-1]
_, words = zip(*sorted(vocabulary.items(), key=lambda i: i[0]))
words = np.array(words)
keywords = words[ranking]
list(zip(keywords[:20], keyness[ranking][:20]))
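```python
# For intuition: the chi2 call above scores each word by how strongly its counts
# separate the focal document from the rest. For a single word, an equivalent
# statistic can be computed by hand from a 2x2 contingency table (word vs.
# not-word, focal document vs. other documents). This is a hand-rolled sketch,
# not sklearn's vectorized implementation.
def chi2_keyness(count_focal, total_focal, count_rest, total_rest):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    observed = [
        [count_focal, total_focal - count_focal],
        [count_rest, total_rest - count_rest],
    ]
    row_sums = [sum(row) for row in observed]
    col_sums = [sum(col) for col in zip(*observed)]
    grand = sum(row_sums)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_sums[i] * col_sums[j] / grand
            stat += (observed[i][j] - expected) ** 2 / expected
    return stat

# A word appearing 30/1000 tokens in the focal doc vs. 50/100000 elsewhere:
stat = chi2_keyness(30, 1000, 50, 100000)
```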
def extract_keywords(fileid, n=20):
print('\r', fileid, end='')
document_index = {} # Maps int -> fileid (str).
vocabulary = {} # Maps int -> word (str).
lookup = {} # Reverse map for vocabulary (word (str) -> int).
I = []
J = []
data = []
labels = []
for i, (key, counts) in enumerate(wordcounts_per_document.items()):
document_index[i] = key
for token, count in counts.items():
if count < 3:
continue
j = lookup.get(token, len(vocabulary))
vocabulary[j] = token
lookup[token] = j
I.append(i)
J.append(j)
data.append(count)
labels.append(key if key == fileid else 'Other')
sparse_matrix = sparse.coo_matrix((data, (I, J)))
keyness, _ = chi2(sparse_matrix, labels)
ranking = np.argsort(keyness)[::-1]
_, words = zip(*sorted(vocabulary.items(), key=lambda i: i[0]))
words = np.array(words)
keywords = words[ranking]
return keywords[:n]
keywords = [extract_keywords(fileid) for fileid in documents.fileids()]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Our first step is to count up all of the words in each of the documents. This conditional frequency distribution should look familiar by now.
|
5,509
|
<ASSISTANT_TASK:>
Python Code:
# %load ../data/melanoma_data.py
from numpy import reshape, sum
melanoma_data = reshape([1.57808, 0.00000, 2, 1.27, 35.9945, 1, 1.48219,
0.00000, 2, 0.76, 41.9014, 1, 0.0, 7.33425, 1, 35.00, 70.2164, 2, 2.23288,
0.00000, 1, 1.70, 33.7096, 1, 0.0, 9.38356, 2, 1.00, 47.9726, 1, 3.27671,
0.00000, 1, 11.00, 31.8219, 2, 0.0, 9.64384, 1, 6.50, 32.9479, 1, 1.66575,
0.00000, 2, 3.62, 35.9205, 1, 0.94247, 0.00000, 1, 8.50, 40.5068, 2,
1.68767, 0.00000, 2, 4.20, 57.0384, 1, 2.34247, 0.00000, 2, 5.00, 62.0630, 1,
0.89863, 0.00000, 1, 2.25, 56.5342, 1, 0.0, 9.03288, 2, 2.30, 22.9945, 2,
0.0, 9.63014, 2, 10.00, 18.4712, 1, 0.52603, 0.00000, 1, 3.35, 41.2521, 1,
1.82192, 0.00000, 2, 3.80, 29.5178, 1, 0.93425, 0.00000, 1, 0.75, 59.0493, 2,
0.0, 8.98630, 2, 0.50, 32.2877, 1, 3.35068, 0.00000, 1, 0.46, 26.4822, 1,
8.67397, 0.00000, 1, 2.55, 55.0411, 1, 0.41096, 0.00000, 2, 1.95, 55.1233, 2,
2.78630, 0.00000, 1, 2.50, 22.4055, 2, 2.56438, 0.00000, 1, 2.10, 50.8466, 1,
0.0, 8.75342, 2, 9.00, 56.0274, 1, 0.56986, 0.00000, 2, 5.00, 55.4767, 1,
0.0, 8.40000, 1, 0.55, 41.2411, 1, 0.0, 7.25205, 1, 12.50, 32.7425, 1,
4.38630, 0.00000, 2, 1.16, 45.3479, 1, 0.0, 8.36712, 2, 4.25, 42.8438, 2,
0.0, 8.99178, 2, 15.00, 51.1068, 1, 0.86575, 0.00000, 2, 0.72, 30.1808, 1,
0.0, 4.76986, 1, 1.50, 58.7014, 2, 1.15616, 0.00000, 2, 6.50, 51.5397, 1,
0.0, 7.28767, 1, 2.75, 27.1973, 1, 3.13151, 0.00000, 1, 3.83, 67.6740, 1,
0.0, 8.55068, 2, 1.80, 64.4274, 2, 0.0, 8.45753, 2, 4.75, 35.4411, 1,
4.59452, 0.00000, 1, 5.80, 35.9452, 1, 2.88219, 0.00000, 2, 0.51, 48.1370, 1,
0.89589, 0.00000, 1, 3.25, 58.6082, 1, 1.76164, 0.00000, 2, 0.90, 40.0137, 2,
0.0, 7.81370, 1, 3.45, 26.0055, 1, 0.0, 8.33425, 2, 1.38, 36.9616, 1,
2.62192, 0.00000, 1, 5.28, 25.9068, 2, 0.16164, 0.00000, 2, 3.00, 63.8055, 1,
0.0, 8.24658, 1, 2.20, 29.6986, 2, 1.52603, 0.00000, 1, 7.00, 61.6384, 1,
5.30959, 0.00000, 1, 4.00, 49.9918, 1, 0.87123, 0.00000, 2, 2.36, 37.1068, 1,
0.41644, 0.00000, 1, 1.06, 53.4658, 2, 4.24110, 0.00000, 1, 6.50, 57.7425, 2,
0.13699, 0.00000, 1, 10.00, 29.1479, 1, 7.07671, 0.00000, 2, 1.20, 59.2466,
1, 0.13151, 0.00000, 2, 15.00, 61.3507, 2, 0.0, 8.02740, 1, 0.49, 33.9205,
2, 0.0, 6.16164, 2, 1.60, 43.1918, 1, 1.29863, 0.00000, 2, 11.50, 34.1890,
2, 1.29041, 0.00000, 2, 1.90, 58.3808, 2, 0.0, 7.99726, 1, 4.80, 21.9479,
2, 0.0, 8.34795, 1, 0.55, 35.1151, 1, 0.0, 7.30137, 2, 6.50, 31.6493, 1,
2.32877, 0.00000, 2, 12.00, 56.1890, 1, 0.56438, 0.00000, 1, 7.00, 60.7123,
1, 5.62740, 0.00000, 2, 6.50, 58.8329, 2, 1.23014, 0.00000, 1, 1.60,
44.4849, 2, 0.0, 7.94521, 1, 1.15, 51.1315, 2, 5.06301, 0.00000, 1, 2.65,
34.2164, 1, 3.27671, 0.00000, 2, 2.00, 35.2301, 1, 0.0, 0.60822, 2, 2.50,
32.7425, 2, 0.65753, 0.00000, 1, 4.38, 38.0986, 2, 0.84110, 0.00000, 2,
2.93, 45.7699, 1, 0.0, 8.40000, 2, 3.00, 44.2000, 1, 0.18356, 0.00000, 1,
2.50, 71.3260, 1, 2.62466, 0.00000, 2, 2.30, 59.0795, 1, 0.0, 7.96438, 2,
2.00, 35.3836, 2, 0.0, 7.77808, 1, 0.75, 58.0438, 2, 0.22192, 0.00000, 1,
5.00, 43.2164, 1, 2.33973, 0.00000, 1, 10.00, 60.4932, 1, 0.52329, 0.00000,
1, 0.87, 32.4795, 2, 0.0, 8.04110, 2, 1.33, 60.2986, 1, 0.0, 7.83288, 1,
5.60, 47.1342, 1, 0.64110, 0.00000, 1, 2.55, 42.3233, 1, 0.38356, 0.00000,
1, 6.50, 54.4164, 1, 0.0, 7.82192, 2, 1.20, 51.4219, 1, 0.51781, 0.00000,
2, 3.00, 46.5973, 1, 0.0, 8.09863, 2, 2.55, 58.3562, 1, 0.0, 8.16712, 2,
1.61, 25.6712, 2, 4.42740, 0.00000, 1, 1.40, 29.1726, 1, 0.88493, 0.00000,
1, 2.25, 18.6795, 1, 2.78356, 0.00000, 1, 4.50, 60.9671, 2, 2.64658,
0.00000, 2, 0.81, 63.8849, 2, 0.0, 8.21370, 2, 1.30, 37.9808, 2, 0.0,
7.41918, 2, 3.20, 32.3507, 2, 0.99726, 0.00000, 1, 1.29, 42.9589, 1,
5.88493, 0.00000, 2, 4.40, 40.9562, 1, 0.41644, 0.00000, 1, 6.00, 61.9753, 1,
3.53699, 0.00000, 1, 3.93, 55.3315, 2, 0.0, 7.56164, 1, 0.60, 36.0767, 1,
0.0, 7.53151, 1, 0.75, 50.6795, 1, 0.27671, 0.00000, 1, 0.73, 66.6986, 1,
0.76986, 0.00000, 2, 0.20, 29.3479, 2, 0.0, 7.62192, 2, 3.88, 33.1863, 1,
0.0, 7.79726, 1, 2.48, 48.6356, 2, 0.64110, 0.00000, 1, 2.50, 29.4877, 1,
1.14521, 0.00000, 2, 10.00, 42.6685, 1, 2.01644, 0.00000, 1, 16.00, 24.4055,
2, 2.84384, 0.00000, 1, 4.00, 40.3890, 1, 0.0, 7.00000, 2, 1.35, 45.4192,
1, 1.27397, 0.00000, 2, 3.00, 65.3945, 1, 0.0, 7.09589, 1, 10.72, 47.5753,
2, 2.04110, 0.00000, 1, 1.50, 58.4438, 2, 0.83562, 0.00000, 1, 3.50,
59.2767, 2, 0.92329, 0.00000, 1, 1.10, 30.2630, 2, 0.07397, 0.00000, 1,
1.00, 40.7370, 1, 0.0, 7.30685, 2, 5.10, 44.7452, 1, 2.07671, 0.00000, 2,
0.50, 67.8329, 1, 0.0, 7.70959, 2, 4.03, 27.7452, 1, 0.0, 6.15890, 1,
1.80, 25.9260, 2, 0.0, 6.89315, 2, 3.50, 31.2740, 1, 3.30685, 0.00000, 1,
1.15, 58.8822, 2, 0.36164, 0.00000, 1, 1.75, 57.0575, 1, 1.97808, 0.00000,
2, 2.50, 59.8137, 1, 1.23836, 0.00000, 2, 2.10, 77.5151, 1, 0.10685,
0.00000, 1, 1.35, 43.4219, 1, 0.0, 7.63836, 1, 4.50, 52.2082, 1, 2.06301,
0.00000, 1, 0.50, 36.3205, 2, 0.0, 7.42466, 2, 2.30, 25.9781, 1, 0.50959,
0.00000, 1, 4.00, 49.4411, 1, 0.65753, 0.00000, 1, 5.40, 57.9589, 1, 0.0,
6.93151, 1, 6.00, 65.5644, 1, 0.0, 7.23288, 2, 5.10, 72.3425, 1, 6.01096,
0.00000, 1, 4.50, 68.8548, 1, 0.33699, 0.00000, 1, 1.45, 50.4438, 2, 0.0,
6.47123, 2, 3.38, 48.2877, 1, 0.94795, 0.00000, 1, 3.00, 46.9479, 2,
2.91781, 0.00000, 2, 1.20, 33.6000, 2, 1.59726, 0.00000, 2, 7.30, 51.1644, 2,
0.84932, 0.00000, 2, 1.67, 47.7836, 1, 1.38356, 0.00000, 1, 4.00, 53.8795, 2,
3.81644, 0.00000, 2, 2.10, 38.7068, 2, 0.0, 7.06849, 1, 10.00, 69.3205, 2,
0.0, 7.04110, 2, 3.50, 66.0219, 1, 1.00274, 0.00000, 2, 1.10, 36.0329, 2,
0.0, 6.34795, 2, 0.40, 63.4603, 1, 1.18082, 0.00000, 1, 0.70, 48.8986, 2,
0.97534, 0.00000, 1, 5.00, 45.0575, 1, 2.16712, 0.00000, 1, 0.85, 57.6712, 2,
0.0, 6.85479, 1, 4.80, 45.2000, 1, 1.38356, 0.00000, 1, 1.20, 49.0438, 1,
1.71507, 0.00000, 2, 1.30, 51.4630, 1, 0.79452, 0.00000, 2, 5.80, 34.5479, 1,
0.0, 6.86301, 2, 6.00, 47.6438, 2, 0.0, 6.50411, 1, 3.00, 38.7233, 2,
0.42466, 0.00000, 2, 1.88, 54.0658, 1, 0.98630, 0.00000, 1, 2.60, 45.7397, 1,
0.0, 6.13699, 2, 2.70, 47.2822, 2, 3.80000, 0.00000, 2, 6.00, 62.6411, 1,
0.0, 6.48493, 1, 4.00, 62.0192, 2, 0.0, 6.96438, 2, 1.71, 41.0904, 2, 0.0,
6.78082, 2, 1.60, 50.2712, 2, 0.56164, 0.00000, 2, 1.50, 49.5288, 2,
2.67123, 0.00000, 1, 3.00, 70.8192, 1, 1.56712, 0.00000, 2, 0.90, 59.0712, 1,
2.07397, 0.00000, 2, 4.00, 53.9041, 1, 0.33973, 0.00000, 1, 2.80, 44.7342, 1,
3.37808, 0.00000, 2, 0.80, 22.1397, 1, 3.15068, 0.00000, 1, 0.70, 72.8575, 1,
0.0, 6.81096, 2, 0.90, 61.4521, 1, 3.20822, 0.00000, 2, 12.00, 61.2904, 1,
0.62740, 0.00000, 1, 5.78, 34.7507, 1, 1.64384, 0.00000, 1, 0.60, 67.4164, 2,
1.40822, 0.00000, 1, 12.00, 53.2493, 1, 0.0, 6.06575, 1, 4.00, 49.0082, 1,
1.66301, 0.00000, 2, 0.45, 56.7699, 1, 1.36986, 0.00000, 2, 1.30, 34.0247, 2,
5.46849, 0.00000, 1, 0.81, 34.3014, 2, 0.42740, 0.00000, 1, 3.20, 45.0712, 2,
1.13973, 0.00000, 2, 4.00, 54.7671, 2, 1.73699, 0.00000, 2, 4.77, 42.8548, 2,
0.0, 5.54521, 2, 2.20, 36.6301, 2, 0.85205, 0.00000, 1, 3.00, 43.2466, 1,
0.43014, 0.00000, 1, 3.00, 53.3562, 1, 1.20822, 0.00000, 2, 0.80, 35.3534, 1,
4.36164, 0.00000, 1, 4.00, 36.5233, 1, 0.52877, 0.00000, 2, 5.00, 52.7863, 1,
0.0, 6.51507, 1, 2.00, 24.4329, 2, 2.89863, 0.00000, 2, 3.85, 58.7178, 1,
0.0, 6.20274, 2, 0.76, 45.5479, 1, 1.21644, 0.00000, 2, 0.75, 43.3014, 2,
0.0, 6.00000, 2, 6.50, 51.4055, 2, 0.0, 6.25479, 1, 0.85, 38.9671, 2, 0.0,
6.49863, 1, 4.30, 68.2658, 1, 1.13699, 0.00000, 2, 2.10, 59.4493, 2,
1.69589, 0.00000, 1, 1.50, 30.0192, 1, 0.0, 6.41096, 2, 2.00, 22.1562, 2,
0.0, 6.02192, 1, 11.00, 54.7671, 1, 3.04932, 0.00000, 2, 4.88, 45.0384, 1,
0.0, 5.62740, 2, 5.20, 39.7589, 1, 0.72603, 0.00000, 1, 3.04, 41.3808, 1,
0.73425, 0.00000, 2, 8.00, 34.9671, 1, 1.47945, 0.00000, 2, 1.60, 46.3479, 1,
0.37808, 0.00000, 2, 1.10, 29.9233, 2, 0.0, 5.75890, 2, 3.00, 32.8740, 1,
1.48219, 0.00000, 2, 10.00, 39.5397, 2, 0.0, 5.88493, 1, 1.95, 55.4822, 1,
0.0, 1.80274, 1, 2.00, 32.3562, 1, 1.40548, 0.00000, 2, 3.70, 41.8027, 2,
0.0, 4.74795, 1, 2.90, 35.3452, 2, 0.0, 5.24658, 1, 1.80, 50.4795, 1,
0.29041, 0.00000, 1, 6.00, 61.3507, 2, 0.0, 5.83836, 1, 1.50, 67.3562, 1,
0.0, 5.32055, 2, 1.75, 53.8548, 2, 5.16712, 0.00000, 2, 5.00, 78.7315, 2,
0.0, 5.59178, 2, 0.63, 62.7233, 1, 0.0, 5.77808, 1, 1.15, 65.1507, 1,
0.53425, 0.00000, 2, 1.50, 34.8274, 1, 0.0, 2.22466, 1, 0.98, 33.8466, 2,
3.59726, 0.00000, 1, 5.00, 67.8822, 1, 0.0, 5.32329, 1, 5.50, 66.0712, 2,
1.78630, 0.00000, 2, 1.00, 55.0658, 2, 0.70411, 0.00000, 2, 10.00, 50.5123,
1, 0.0, 4.94795, 2, 5.00, 42.4055, 2, 0.0, 5.45479, 2, 3.75, 58.1068, 2,
4.32877, 0.00000, 1, 10.00, 26.0137, 1, 1.16164, 0.00000, 2, 3.00, 54.4685,
1, 0.0, 5.20274, 2, 8.00, 54.0630, 2, 0.0, 4.40822, 1, 1.64, 34.5589, 1,
1.41096, 0.00000, 1, 4.95, 58.5068, 1, 0.0, 4.92877, 2, 1.45, 63.9370, 1,
0.0, 5.42192, 2, 12.00, 49.8274, 2, 0.98904, 0.00000, 1, 2.05, 50.5562, 1,
0.36438, 0.00000, 1, 3.60, 40.4795, 2, 0.0, 4.38082, 1, 8.30, 61.7479, 2,
0.77260, 0.00000, 2, 0.45, 41.6712, 1, 4.90959, 0.00000, 2, 3.00, 25.5096, 1,
1.26849, 0.00000, 1, 4.40, 61.2000, 1, 0.58082, 0.00000, 2, 1.10, 53.1260, 1,
0.0, 4.95616, 1, 1.05, 40.4658, 1, 0.0, 5.12329, 1, 1.71, 60.3068, 1, 0.0,
4.74795, 1, 6.30, 48.7425, 2, 0.0, 4.90685, 2, 0.50, 46.7562, 2, 1.41918,
0.00000, 1, 5.10, 34.8932, 2, 0.44110, 0.00000, 1, 6.00, 33.3096, 1, 0.0,
4.29863, 2, 1.50, 35.7589, 1, 0.0, 4.63836, 2, 0.36, 49.8575, 1, 0.0,
4.81370, 1, 3.00, 57.3726, 2, 4.50137, 0.00000, 2, 1.24, 29.7726, 2,
3.92329, 0.00000, 2, 0.70, 51.8822, 2, 0.0, 4.86027, 2, 0.80, 65.3123, 2,
0.52603, 0.00000, 1, 1.00, 52.0658, 2, 2.10685, 0.00000, 2, 3.38, 60.9534, 2,
0.0, 4.24384, 1, 1.52, 32.6055, 2, 3.39178, 0.00000, 1, 2.20, 51.5123, 2,
0.0, 4.36164, 2, 2.10, 48.6548, 1, 0.0, 4.81918, 2, 1.40, 43.8438, 2],
(255, 6))
# Censoring indicator
censored = (melanoma_data[:, 0] == 0).astype(int)
# Time
t = sum(melanoma_data[:, 0:2], 1)
# Treatment
treat = melanoma_data[:, 2].astype(int) - 1
# Breslow scale
breslow = melanoma_data[:, 3]
# Age and sex
age = melanoma_data[:, 4]
sex = melanoma_data[:, 5].astype(int) - 1
from pymc3 import Normal, Model, DensityDist, sample, log, exp
with Model() as melanoma_survival:
# Convert censoring indicators to indicators for failure event
failure = (censored==0).astype(int)
# Parameters (intercept and treatment effect) for survival rate
beta = Normal('beta', mu=0.0, sd=1e5, shape=2)
# Survival rates, as a function of treatment
lam = exp(beta[0] + beta[1]*treat)
# Survival likelihood, accounting for censoring
def logp(failure, value):
return (failure * log(lam) - lam * value).sum()
x = DensityDist('x', logp, observed={'failure':failure, 'value':t})
from pymc3 import find_MAP
with melanoma_survival:
estimates = find_MAP()
from pymc3 import sample
with melanoma_survival:
trace = sample(1000)
from pymc3 import summary
summary(trace)
from theano import function
# Write your answer here
import numpy as np
data = np.random.randn(100)
import pymc3 as pm
with pm.Model() as model:
mu = pm.Normal('mu', mu=0, sd=1, testval=0)
sd = pm.HalfNormal('sd', sd=1)
n = pm.Normal('n', mu=mu, sd=sd, observed=data)
with model:
means, sds, elbos = pm.variational.advi(model=model, n=10000, accurate_elbo=True)
means
with model:
trace = pm.sample(1000)
%matplotlib inline
import seaborn as sns
from scipy import stats
ax = sns.distplot(trace['mu'], label='NUTS')
xlim = ax.get_xlim()
x = np.linspace(xlim[0], xlim[1], 100)
y = stats.norm(means['mu'], sds['mu']).pdf(x)
ax.plot(x, y, label='ADVI')
ax.set_title('mu')
ax.legend(loc=0)
from pymc3 import Normal, Metropolis, sample, MvNormal, Dirichlet, Model, DensityDist, find_MAP, NUTS, Slice
import theano.tensor as tt
from theano.tensor.nlinalg import det
import matplotlib.pyplot as plt
n_samples = 100
rng = np.random.RandomState(123)
ms = np.array([[-1, -1.5], [1, 1]])
ps = np.array([0.2, 0.8])
zs = np.array([rng.multinomial(1, ps) for _ in range(n_samples)]).T
xs = [z[:, np.newaxis] * rng.multivariate_normal(m, np.eye(2), size=n_samples)
for z, m in zip(zs, ms)]
data = np.sum(np.dstack(xs), axis=2)
plt.figure(figsize=(5, 5))
plt.scatter(data[:, 0], data[:, 1], c='g', alpha=0.5)
plt.scatter(ms[0, 0], ms[0, 1], c='r', s=100)
plt.scatter(ms[1, 0], ms[1, 1], c='b', s=100)
from pymc3.math import logsumexp
# Log likelihood of normal distribution
def logp_normal(mu, tau, value):
# log probability of individual samples
k = tau.shape[0]
delta = lambda mu: value - mu
return (-1 / 2.) * (k * tt.log(2 * np.pi) + tt.log(1./det(tau)) +
(delta(mu).dot(tau) * delta(mu)).sum(axis=1))
# Log likelihood of Gaussian mixture distribution
def logp_gmix(mus, pi, tau):
def logp_(value):
logps = [tt.log(pi[i]) + logp_normal(mu, tau, value)
for i, mu in enumerate(mus)]
return tt.sum(logsumexp(tt.stacklists(logps)[:, :n_samples], axis=0))
return logp_
with pm.Model() as model:
mus = [MvNormal('mu_%d' % i, mu=np.zeros(2), tau=0.1 * np.eye(2), shape=(2,))
for i in range(2)]
pi = Dirichlet('pi', a=0.1 * np.ones(2), shape=(2,))
xs = DensityDist('x', logp_gmix(mus, pi, np.eye(2)), observed=data)
with model:
trace = sample(1000, step=Metropolis(), start=find_MAP())
plt.figure(figsize=(5, 5))
plt.scatter(data[:, 0], data[:, 1], alpha=0.5, c='g')
mu_0, mu_1 = trace['mu_0'], trace['mu_1']
plt.scatter(mu_0[-500:, 0], mu_0[-500:, 1], c="r", s=10)
plt.scatter(mu_1[-500:, 0], mu_1[-500:, 1], c="b", s=10)
plt.xlim(-6, 6)
plt.ylim(-6, 6)
sns.barplot([1, 2], np.mean(trace['pi'][-5000:], axis=0),
palette=['red', 'blue'])
# with pm.Model() as model:
# mus = [MvNormal('mu_%d' % i, mu=np.zeros(2), tau=0.1 * np.eye(2), shape=(2,))
# for i in range(2)]
# pi = Dirichlet('pi', a=0.1 * np.ones(2), shape=(2,))
# xs = DensityDist('x', logp_gmix(mus, pi, np.eye(2)), observed=data)
with model:
%time means, sds, elbos = pm.variational.advi(model=model, n=1000, learning_rate=1e-1)
from copy import deepcopy
mu_0, sd_0 = means['mu_0'], sds['mu_0']
mu_1, sd_1 = means['mu_1'], sds['mu_1']
def logp_normal_np(mu, tau, value):
# log probability of individual samples
k = tau.shape[0]
delta = lambda mu: value - mu
return (-1 / 2.) * (k * np.log(2 * np.pi) + np.log(1./np.linalg.det(tau)) +
(delta(mu).dot(tau) * delta(mu)).sum(axis=1))
def threshold(zz):
zz_ = deepcopy(zz)
zz_[zz < np.max(zz) * 1e-2] = None
return zz_
def plot_logp_normal(ax, mu, sd, cmap):
f = lambda value: np.exp(logp_normal_np(mu, np.diag(1 / sd**2), value))
g = lambda mu, sd: np.arange(mu - 3, mu + 3, .1)
xx, yy = np.meshgrid(g(mu[0], sd[0]), g(mu[1], sd[1]))
zz = f(np.vstack((xx.reshape(-1), yy.reshape(-1))).T).reshape(xx.shape)
ax.contourf(xx, yy, threshold(zz), cmap=cmap, alpha=0.9)
fig, ax = plt.subplots(figsize=(5, 5))
plt.scatter(data[:, 0], data[:, 1], alpha=0.5, c='g')
plot_logp_normal(ax, mu_0, sd_0, cmap='Reds')
plot_logp_normal(ax, mu_1, sd_1, cmap='Blues')
plt.xlim(-6, 6)
plt.ylim(-6, 6)
plt.plot(elbos)
n_samples = 100000
zs = np.array([rng.multinomial(1, ps) for _ in range(n_samples)]).T
xs = [z[:, np.newaxis] * rng.multivariate_normal(m, np.eye(2), size=n_samples)
for z, m in zip(zs, ms)]
data = np.sum(np.dstack(xs), axis=2)
plt.figure(figsize=(5, 5))
plt.scatter(data[:, 0], data[:, 1], c='g', alpha=0.5)
plt.scatter(ms[0, 0], ms[0, 1], c='r', s=100)
plt.scatter(ms[1, 0], ms[1, 1], c='b', s=100)
plt.xlim(-6, 6)
plt.ylim(-6, 6)
with pm.Model() as model:
mus = [MvNormal('mu_%d' % i, mu=np.zeros(2), tau=0.1 * np.eye(2), shape=(2,))
for i in range(2)]
pi = Dirichlet('pi', a=0.1 * np.ones(2), shape=(2,))
xs = DensityDist('x', logp_gmix(mus, pi, np.eye(2)), observed=data)
start = find_MAP()
step = Metropolis()
trace = sample(1000, step, start=start)
plt.figure(figsize=(5, 5))
plt.scatter(data[:, 0], data[:, 1], alpha=0.1, c='g')
mu_0, mu_1 = trace['mu_0'], trace['mu_1']
plt.scatter(mu_0[-500:, 0], mu_0[-500:, 1], c="r", s=50, alpha=0.4)
plt.scatter(mu_1[-500:, 0], mu_1[-500:, 1], c="b", s=50, alpha=0.4)
plt.xlim(-6, 6)
plt.ylim(-6, 6)
data_t = tt.matrix()
data_t.tag.test_value = np.zeros((1, 2)).astype(float)
with pm.Model() as model:
mus = [MvNormal('mu_%d' % i, mu=np.zeros(2), tau=0.1 * np.eye(2), shape=(2,))
for i in range(2)]
pi = Dirichlet('pi', a=0.1 * np.ones(2), shape=(2,))
xs = DensityDist('x', logp_gmix(mus, pi, np.eye(2)), observed=data_t)
minibatch_tensors = [data_t]
minibatch_RVs = [xs]
def create_minibatch(data):
rng = np.random.RandomState(0)
while True:
ixs = rng.randint(len(data), size=200)
yield data[ixs]
minibatches = [create_minibatch(data)]
total_size = len(data)
# Wrap the call in a function so it fits on a single line for use with %time.
# (Is there a smarter way?)
def f():
return pm.variational.advi_minibatch(
model=model, n=1000, minibatch_tensors=minibatch_tensors,
minibatch_RVs=minibatch_RVs, minibatches=minibatches,
total_size=total_size, learning_rate=1e-1)
%time means, sds, elbos = f()
from copy import deepcopy
mu_0, sd_0 = means['mu_0'], sds['mu_0']
mu_1, sd_1 = means['mu_1'], sds['mu_1']
fig, ax = plt.subplots(figsize=(5, 5))
plt.scatter(data[:, 0], data[:, 1], alpha=0.2, c='g')
plt.scatter(mu_0[0], mu_0[1], c="r", s=50)
plt.scatter(mu_1[0], mu_1[1], c="b", s=50)
plt.xlim(-6, 6)
plt.ylim(-6, 6)
plt.plot(elbos)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The MAP estimate can be obtained in PyMC3 via the find_MAP function. As with sample, we run find_MAP inside a model context, or pass the model explicitly to the function as the model parameter.
Step2: For this model, the MAP estimates are comparable to those we would have obtained using MCMC sampling
Step3: find_MAP only returns estimates of unobserved random variables from the model, and does not include deterministic values. If we wish to evaluate a deterministic quantity, we can construct a Theano function and pass in the relevant parameter values as arguments.
Step4: The major limitation to using MAP for inference is that there is no associated measure of uncertainty. Hence, find_MAP cannot be used for inference. It is useful, however, for getting a sense of typical values the model may take for a particular dataset, and for PyMC3 it is intended to be used to get reasonable starting values for use in MCMC algorithms.
Step5: Example
Step6: Gaussian mixture models are usually constructed with categorical random variables. However, ADVI cannot fit models with discrete variables, since it uses the gradient of the model with respect to the parameters. Just as with the coal mining disasters model, the class assignment variables are marginalized out in the model below, giving a weighted sum of the probabilities of the Gaussian components. The log likelihood of the total probability is calculated using logsumexp, which is a standard technique for making this kind of calculation stable.
Step7: For comparison with ADVI, run MCMC.
Step8: Check posterior of component means and weights. We can see that the MCMC samples of the component means differed in variance due to the difference of the sample size of these clusters.
Step9: We can fit the same model with ADVI as follows.
Step10: The function returns three variables. means and sds are the mean and standard deviations of the variational posterior (Note that these values are in the transformed space, not in the original space).
Step11: elbos contains the trace of the evidence lower bound, showing stochastic convergence of the algorithm.
Step12: To demonstrate that ADVI works for a large dataset with mini-batches, let's create 100,000 samples from the same mixture distribution.
Step13: Posterior samples are concentrated on the true means, so they look like a single point for each component.
Step14: For ADVI with mini-batch, pass a Theano tensor to the likelihood (an ObservedRV). The tensor will iteratively be replaced with mini-batches during the ADVI run. Because the mini-batch size differs from the size of the whole sample, the log-likelihood term needs to be scaled appropriately.
Step15: Make a generator for mini-batches, each of size 200. Mini-batches are generated using Theano RandomState realizations.
Step16: The ADVI model fitting is much faster than MCMC,
Step17: ... but the result is almost the same (at least for this simple model).
Step18: The variance of the trace of ELBO is larger than without mini-batch because of the subsampling from the whole samples.
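Step7 above cites logsumexp as the standard trick for keeping mixture log-likelihoods numerically stable. A minimal pure-Python version (independent of the `pymc3.math.logsumexp` used in the code) shows the idea of factoring out the maximum before exponentiating:

```python
import math

def logsumexp(xs):
    """Compute log(sum(exp(x) for x in xs)) without overflow,
    by factoring out the maximum element."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# exp(1000) overflows a float, but logsumexp stays finite:
val = logsumexp([1000.0, 1000.0])  # 1000 + log(2)
```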
|
5,510
|
<ASSISTANT_TASK:>
Python Code:
from lxml import etree
tree = etree.parse("data/TEI/sonnet18.xml")
print(tree)
print(etree.tostring(tree))
print(etree.tostring(tree).decode())
print(etree.tostring(tree, pretty_print=True).decode())
for node in tree.iterfind("//rhyme"):
print(node)
for node in tree.iterfind("//rhyme"):
print(node.tag)
for node in tree.iterfind("//rhyme"):
print(node.text)
root_node = tree.getroot()
print(root_node.tag)
print(root_node.attrib["author"])
print(root_node.attrib["year"])
for key in root_node.attrib.keys():
print(root_node.attrib[key])
print(len(root_node))
for node in root_node:
print(node.tag)
for node in root_node:
if node.tag != "volta":
line_text = ""
for text in node.itertext():
line_text = line_text + text
print(line_text)
else:
print("=== Volta found! ===")
for node in root_node:
if node.tag == "line":
print(node.attrib["n"])
root_node = tree.getroot()
root_node.attrib["author"] = "J.K. Rowling"
root_node.attrib["year"] = "2015"
root_node.attrib["new_element"] = "dummy string!"
root_node.attrib["place"] = "maynooth"
print(etree.tostring(root_node).decode())
root_node.attrib["year"] = "2015"
break_el = etree.Element("break")
break_el.attrib["author"] = "Mike"
print(etree.tostring(break_el).decode())
for node in tree.iterfind("//line"):
break_el = etree.Element("break")
node.append(break_el)
print(etree.tostring(tree).decode())
break_el = etree.Element("break")
print(etree.tostring(break_el).decode())
break_el.text = "XXX"
print(etree.tostring(break_el).decode())
tree = etree.parse("data/TEI/sonnet18.xml")
root_node = tree.getroot()
for node in root_node:
if node.tag == "line":
v = node.attrib["n"]
break_el = etree.Element("break")
break_el.attrib["n"] = v
node.append(break_el)
print(etree.tostring(tree).decode())
tree = etree.parse("data/TEI/sonnet17.xml")
print(etree.tostring(tree).decode())
# add your parsing code here...
import os
dirname = "data/TEI/french_plays/"
for filename in os.listdir(dirname):
if filename.endswith(".xml"):
print(filename)
for filename in os.listdir(dirname):
if filename.endswith(".xml"):
print("*****")
print("\t-", filename)
tree = etree.parse(dirname+filename)
author_element = tree.find("//author") # find vs iterfind!
print("\t-", author_element.text)
title_element = tree.find("//title")
print("\t-", title_element.text)
# your code goes here
from IPython.core.display import HTML
def css_styling():
styles = open("styles/custom.css", "r").read()
return HTML(styles)
css_styling()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For the record, we should mention that there exist many other libraries in Python for parsing XML, such as minidom or BeautifulSoup, which is an interesting library when you intend to scrape data from the web. While these might come with more advanced bells and whistles than lxml, they can also be more complex to use, which is why we stick to lxml in this course. Let us now import our sonnet in Python, which has been saved in the file sonnet18.xml
Step2: Python has now read and parsed our xml-file via the etree.parse() function. We have stored our XML tree structure, which is returned by the parse() function, in the tree variable, so that we can access it later. If we print tree as such, we don't get a lot of useful information. To have a closer look at the XML in a printable text version, we need to pass the tree to the etree.tostring() function before printing it.
Step3: You'll notice that we actually get a string in a raw format
Step4: If we have more complex data, it might also be useful to set the pretty_print parameter to True, to obtain a more beautifully formatted string, with Python taking care of indentation etc. In our example, it doesn't change much
Step5: Now let us start processing the contents of our file. Suppose that we are not really interested in the full hierarchical structure of our file, but just in the rhyme words occuring in it. The high-level function interfind() allows us to easily select all rhyme-element in our tree, regardless of where exactly they occur. Because this functions returns a list of nodes, we can simply loop over them
Step6: Note that the search expression ("//rhyme") has two forward slashes before our actual search term. This is in fact XPath syntax, and the two slashes indicate that the search term can occur anywhere (e.g. not necessarily among a node's direct children). Unfortunately, printing the nodes themselves again isn't really insightful
Step7: To extract the actual rhyme word contained in the element, we can use the .text property of the nodes
Step8: That looks better!
Step9: We can access the value of the attributes of an element via .attrib, just like we would access the information in a Python dictionary, that is via key-based indexing. We know that our sonnet element, for instance, should have an author and year attribute. We can inspect the value of these as follows
Step10: If we wouldn't know which attributes were in fact available for a node, we could also retrieve the attribute names by calling keys() on the attributes property of a node, just like we would do with a regular dictionary
Step11: So far so good. Now that we have selected our root element, we can start drilling down our tree's structure. Let us first find out how many child nodes our root element has
Step12: Our root node turns out to have 15 child nodes, which makes a lot of sense, since we have 14 line elements and the volta. We can actually loop over these children, just as we would loop over any other list
Step13: To extract the actual text in our lines, we need one additional for-loop which will allow us to iteratre over the pieces of text under each line
Step14: Note that we get an empty line at the volta, since there isn't any actual text associated with this empty tag.
Step15: Manipulating XML in Python
Step16: That was easy, wasn't it? Did you see we can just add new attributes to an element? Just take care only to put strings as attribute values
Step17: Adding whole elements is fairly easy too. Let's add a single dummy element (<break/>) to indicate a line break at the end of each line. Importantly, we have to create this element inside our loop, before we can add it
Step18: You'll notice that we actually created an empty <break/> tag. Now, let's add it add the end of each line
Step19: Adding an element with actual content is just as easy by the way
Step20: Quiz
Step21: Python for TEI
Step22: Quiz
Step23: A hands-on case study
Step24: OK
Step25: As you can see, we have made you a nice subset selection of this data, containing only texts by the famous pair of brothers
Step26:
|
5,511
|
<ASSISTANT_TASK:>
Python Code:
import ctcsound
cs = ctcsound.Csound()
csd = '''
<CsoundSynthesizer>
<CsOptions>
-d -o dac -m0
</CsOptions>
<CsInstruments>
sr = 48000
ksmps = 100
nchnls = 2
0dbfs = 1
instr 1
idur = p3
iamp = p4
icps = cpspch(p5)
irise = p6
idec = p7
ipan = p8
kenv linen iamp, irise, idur, idec
kenv = kenv*kenv
asig poscil kenv, icps
a1, a2 pan2 asig, ipan
outs a1, a2
endin
</CsInstruments>
<CsScore>
f 0 14400 ; a 4 hours session should be enough
</CsScore>
</CsoundSynthesizer>
'''
cs.compileCsdText(csd)
cs.start()
pt = ctcsound.CsoundPerformanceThread(cs.csound())
pt.play()
pt.scoreEvent(False, 'i', (1, 0, 1, 0.5, 8.06, 0.05, 0.3, 0.5))
pt.scoreEvent(False, 'i', (1, 0.5, 1, 0.5, 9.06, 0.05, 0.3, 0.5))
pt.stop()
pt.join()
cs.reset()
csd = '''
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 64
nchnls = 2
0dbfs = 1
seed 0
instr 1
iPch random 60, 72
chnset iPch, "pch"
kPch init iPch
kNewPch chnget "new_pitch"
if kNewPch > 0 then
kPch = kNewPch
endif
aTone poscil .2, mtof(kPch)
out aTone, aTone
endin
</CsInstruments>
<CsScore>
i 1 0 600
</CsScore>
</CsoundSynthesizer>
'''
cs.compileCsdText(csd)
cs.start()
pt = ctcsound.CsoundPerformanceThread(cs.csound())
pt.play()
print(cs.controlChannel('pch'))
cs.setControlChannel('new_pitch',73)
pt.stop()
pt.join()
cs.reset()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then, let's start a new thread, passing the opaque pointer of the Csound instance as argument
Step2: Now, we can send messages to the performance thread
Step3: When we're done, we stop the performance thread and reset the csound instance
Step4: Note that we can still access the csound instance with other methods, like controlChannel() or setControlChannel()
Step5: We can ask for the values in the Csound instance ...
Step6: ... or we can set our own values to the Csound instance
Step7: At the end, stop and reset as usual
|
5,512
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy.stats as st
from sci_analysis import analyze
%matplotlib inline
# Create x-sequence and y-sequence from random variables.
np.random.seed(987654321)
x_sequence = st.norm.rvs(2, size=2000)
y_sequence = np.array([x + st.norm.rvs(0, 0.5, size=1) for x in x_sequence])
analyze(x_sequence, y_sequence)
analyze(x_sequence, y_sequence, boxplot_borders=True)
x_continuous = st.weibull_max.rvs(2.7, size=2000)
y_discrete = st.geom.rvs(0.5, loc=0, size=2000)
analyze(x_continuous, y_discrete, contours=True, fit=False)
# Create new x-grouped and y-grouped from independent groups A, B, and C.
a_x = st.norm.rvs(2, size=500)
a_y = np.array([x + st.norm.rvs(0, 0.5, size=1) for x in a_x])
b_x = st.norm.rvs(4, size=500)
b_y = np.array([1.5 * x + st.norm.rvs(0, 0.65, size=1) for x in b_x])
c_x = st.norm.rvs(1.5, size=500)
c_y = np.array([3 * x + st.norm.rvs(0, 0.95, size=1) - 1 for x in c_x])
x_grouped = np.concatenate((a_x, b_x, c_x))
y_grouped = np.concatenate((a_y, b_y, c_y))
grps = np.concatenate((['Group A'] * 500, ['Group B'] * 500, ['Group C'] * 500))
analyze(
x_grouped,
y_grouped,
groups=grps,
boxplot_borders=False,
)
analyze(
x_sequence,
y_sequence,
)
analyze(
x_sequence,
y_sequence,
fit=False,
)
analyze(
x_sequence,
y_sequence,
points=False,
)
analyze(
x_sequence,
y_sequence,
boxplot_borders=True,
)
analyze(
x_sequence,
y_sequence,
contours=True,
)
labels = np.random.randint(low=10000, high=99999, size=2000)
analyze(
x_sequence,
y_sequence,
labels=labels,
highlight=[66286]
)
# Create new x-grouped and y-grouped from independent groups A, B, and C.
a_x = st.norm.rvs(2, size=500)
a_y = np.array([x + st.norm.rvs(0, 0.5, size=1) for x in a_x])
b_x = st.norm.rvs(4, size=500)
b_y = np.array([1.5 * x + st.norm.rvs(0, 0.65, size=1) for x in b_x])
c_x = st.norm.rvs(1.5, size=500)
c_y = np.array([3 * x + st.norm.rvs(0, 0.95, size=1) - 1 for x in c_x])
x_grouped = np.concatenate((a_x, b_x, c_x))
y_grouped = np.concatenate((a_y, b_y, c_y))
grps = np.concatenate((['Group A'] * 500, ['Group B'] * 500, ['Group C'] * 500))
analyze(
x_grouped,
y_grouped,
groups=grps,
)
analyze(
x_grouped,
y_grouped,
groups=grps,
highlight=['Group A'],
)
analyze(
x_grouped,
y_grouped,
groups=grps,
highlight=['Group A', 'Group B'],
)
x_sequence = st.norm.rvs(2, size=2000)
y_sequence = np.array([x + st.norm.rvs(0, 0.5, size=1) for x in x_sequence])
analyze(
x_sequence,
y_sequence,
title='This is a Title',
)
analyze(
x_sequence,
y_sequence,
xname='This is the x-axis data'
)
analyze(
x_sequence,
y_sequence,
yname='This is the y-axis data'
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Scatter Plot
Step2: Boxplot Borders
Step3: Contours
Step4: Grouped Scatter Plot
Step5: Interpreting the Statistics
Step6: fit
Step7: points
Step8: boxplot_borders
Step9: contours
Step10: labels, highlight
Step11: groups
Step12: groups, highlight
Step13: Multiple groups can also be highlighted.
Step14: title
Step15: xname
Step16: yname
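The fit line drawn over these bivariate scatter plots comes from an ordinary least-squares regression; a minimal pure-Python sketch of that computation (illustrative only — not sci_analysis internals):

```python
def linear_fit(xs, ys):
    # ordinary least-squares fit of y = slope * x + intercept
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var_x
    intercept = mean_y - slope * mean_x
    return slope, intercept

slope, intercept = linear_fit([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

For the x/y sequences above, the fitted slope should land near 1, since y was generated as x plus zero-mean noise.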
|
5,513
|
<ASSISTANT_TASK:>
Python Code:
data = pd.read_csv( '../../data/dailybots.csv' )
#Look at a summary of the data
data.describe()
data['botfam'].value_counts()
grouped_df = data[data.botfam == "Ramnit"].groupby(['industry'])
grouped_df.sum()
group2 = data[['botfam','orgs']].groupby( ['botfam'])
summary = group2.agg([np.min, np.max, np.mean, np.median, np.std])
summary.sort_values( [('orgs', 'median')], ascending=False)
df3 = data[['date','hosts']].groupby('date').agg(['count'])
df3.sort_values(by=[('hosts', 'count')], ascending=False).head(10)
filteredData = data[ data['botfam'].isin(['Necurs', 'Ramnit', 'PushDo']) ][['date', 'botfam', 'hosts']]
groupedFilteredData = filteredData.groupby( ['date', 'botfam']).sum()
groupedFilteredData.unstack(level=1).plot(kind='line', subplots=False)
data.date = pd.to_datetime( data.date )
data['day'] = data.date.dt.weekday
data[['hosts', 'day']].boxplot( by='day')
grouped = data.groupby('day')
grouped.boxplot('hosts')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 1
Step2: Exercise 2
Step3: Exercise 3
Step4: Exercise 4
Step5: Exercise 5
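The filter-then-groupby-sum pattern in Exercise 2 reduces to a keyed accumulation; a stdlib sketch of the same logic on made-up rows (the data values are illustrative, not from the dailybots dataset):

```python
from collections import defaultdict

rows = [
    {"botfam": "Ramnit", "industry": "Finance", "hosts": 10},
    {"botfam": "Ramnit", "industry": "Finance", "hosts": 5},
    {"botfam": "Ramnit", "industry": "Retail", "hosts": 7},
    {"botfam": "Necurs", "industry": "Finance", "hosts": 3},
]

totals = defaultdict(int)
for row in rows:
    if row["botfam"] == "Ramnit":            # filter, like the boolean mask
        totals[row["industry"]] += row["hosts"]  # group by industry and sum
```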
|
5,514
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division
import graphlab
import math
import string
import numpy
products = graphlab.SFrame('amazon_baby.gl/')
products
products[269]
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
review_without_punctuation = products['review'].apply(remove_punctuation)
products['word_count'] = graphlab.text_analytics.count_words(review_without_punctuation)
products[269]['word_count']
products = products[products['rating'] != 3]
len(products)
products['sentiment'] = products['rating'].apply(lambda rating : +1 if rating > 3 else -1)
products
train_data, test_data = products.random_split(.8, seed=1)
print len(train_data)
print len(test_data)
sentiment_model = graphlab.logistic_classifier.create(train_data,
target = 'sentiment',
features=['word_count'],
validation_set=None)
sentiment_model
weights = sentiment_model.coefficients
weights.column_names()
weights[weights['value'] > 0]['value']
num_positive_weights = weights[weights['value'] >= 0]['value'].size()
num_negative_weights = weights[weights['value'] < 0]['value'].size()
print "Number of positive weights: %s " % num_positive_weights
print "Number of negative weights: %s " % num_negative_weights
sample_test_data = test_data[10:13]
print sample_test_data['rating']
sample_test_data
sample_test_data[0]['review']
sample_test_data[1]['review']
scores = sentiment_model.predict(sample_test_data, output_type='margin')
print scores
def margin_based_classifier(score):
return 1 if score > 0 else -1
sample_test_data['predictions'] = scores.apply(margin_based_classifier)
sample_test_data['predictions']
print "Class predictions according to GraphLab Create:"
print sentiment_model.predict(sample_test_data)
def logistic_classifier_prob(weight):
return 1.0 / (1.0 + math.exp(-1 * weight))
probabilities = scores.apply(logistic_classifier_prob)
probabilities
print "Class predictions according to GraphLab Create:"
print sentiment_model.predict(sample_test_data, output_type='probability')
print "Third"
a = graphlab.SArray([1,2,3])
b = graphlab.SArray([1,2,1])
print a == b
print (a == b).sum()
test_data['predicted_prob'] = sentiment_model.predict(test_data, output_type='probability')
test_data
test_data.topk('predicted_prob', 20).print_rows(20)
test_data.topk('predicted_prob', 20, reverse=True).print_rows(20)
def get_classification_accuracy(model, data, true_labels):
# First get the predictions
prediction = model.predict(data)
# Compute the number of correctly classified examples
correctly_classified = prediction == true_labels
# Then compute accuracy by dividing num_correct by total number of examples
accuracy = float(correctly_classified.sum()) / true_labels.size()
return accuracy
get_classification_accuracy(sentiment_model, test_data, test_data['sentiment'])
significant_words = ['love', 'great', 'easy', 'old', 'little', 'perfect', 'loves',
'well', 'able', 'car', 'broke', 'less', 'even', 'waste', 'disappointed',
'work', 'product', 'money', 'would', 'return']
len(significant_words)
train_data['word_count_subset'] = train_data['word_count'].dict_trim_by_keys(significant_words, exclude=False)
test_data['word_count_subset'] = test_data['word_count'].dict_trim_by_keys(significant_words, exclude=False)
train_data[0]['review']
print train_data[0]['word_count']
print train_data[0]['word_count_subset']
simple_model = graphlab.logistic_classifier.create(train_data,
target = 'sentiment',
features=['word_count_subset'],
validation_set=None)
simple_model
get_classification_accuracy(simple_model, test_data, test_data['sentiment'])
simple_model.coefficients
simple_model.coefficients.sort('value', ascending=False).print_rows(num_rows=21)
simple_model.coefficients[simple_model.coefficients['value'] > 0]['value'].size() - 1
positive_significant_words = simple_model.coefficients[simple_model.coefficients['value'] > 0]
positive_significant_words
for w in positive_significant_words['index']:
print sentiment_model.coefficients[sentiment_model.coefficients['index'] == w]
get_classification_accuracy(sentiment_model, train_data, train_data['sentiment'])
get_classification_accuracy(simple_model, train_data, train_data['sentiment'])
round(get_classification_accuracy(sentiment_model, test_data, test_data['sentiment']), 2)
get_classification_accuracy(simple_model, test_data, test_data['sentiment'])
num_positive = (train_data['sentiment'] == +1).sum()
num_negative = (train_data['sentiment'] == -1).sum()
print num_positive
print num_negative
num_positive_test = (test_data['sentiment'] == +1).sum()
num_negative_test = (test_data['sentiment'] == -1).sum()
print num_positive_test
print num_negative_test
majority_accuracy = float(num_positive_test) / test_data['sentiment'].size()
print round(majority_accuracy, 2)
print "Yes"
graphlab.version
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data preparation
Step2: Now, let us see a preview of what the dataset looks like.
Step3: Build the word count vector for each review
Step4: Now, we will perform 2 simple data transformations
Step5: Now, let us explore what the sample example above looks like after these 2 transformations. Here, each entry in the word_count column is a dictionary where the key is the word and the value is a count of the number of times the word occurs.
Step6: Extract sentiments
Step7: Now, we will assign reviews with a rating of 4 or higher to be positive reviews, while the ones with rating of 2 or lower are negative. For the sentiment column, we use +1 for the positive class label and -1 for the negative class label.
Step8: Now, we can see that the dataset contains an extra column called sentiment which is either positive (+1) or negative (-1).
Step9: Train a sentiment classifier with logistic regression
Step10: Aside. You may get a warning to the effect of "Terminated due to numerical difficulties --- this model may not be ideal". It means that the quality metric (to be covered in Module 3) failed to improve in the last iteration of the run. The difficulty arises as the sentiment model puts too much weight on extremely rare words. A way to rectify this is to apply regularization, to be covered in Module 4. Regularization lessens the effect of extremely rare words. For the purpose of this assignment, however, please proceed with the model above.
Step11: There are a total of 121713 coefficients in the model. Recall from the lecture that positive weights $w_j$ correspond to weights that cause positive sentiment, while negative weights correspond to negative sentiment.
Step12: Quiz question
Step13: Let's dig deeper into the first row of the sample_test_data. Here's the full review
Step14: That review seems pretty positive.
Step15: We will now make a class prediction for the sample_test_data. The sentiment_model should predict +1 if the sentiment is positive and -1 if the sentiment is negative. Recall from the lecture that the score (sometimes called margin) for the logistic regression model is defined as
Step16: Predicting sentiment
Step17: Run the following code to verify that the class predictions obtained by your calculations are the same as that obtained from GraphLab Create.
Step18: Checkpoint
Step19: Checkpoint
Step20: Quiz Question
Step21: Find the most positive (and negative) review
Step22: Quiz Question
Step23: Quiz Question
Step24: Now, let's compute the classification accuracy of the sentiment_model on the test_data.
Step25: Quiz Question
Step26: For each review, we will use the word_count column and trim out all words that are not in the significant_words list above. We will use the SArray dictionary trim by keys functionality. Note that we are performing this on both the training and test set.
Step27: Let's see what the first example of the dataset looks like
Step28: The word_count column had been working with before looks like the following
Step29: Since we are only working with a subset of these words, the column word_count_subset is a subset of the above dictionary. In this example, only 2 significant words are present in this review.
Step30: Train a logistic regression model on a subset of data
Step31: We can compute the classification accuracy using the get_classification_accuracy function you implemented earlier.
Step32: Now, we will inspect the weights (coefficients) of the simple_model
Step33: Let's sort the coefficients (in descending order) by the value to obtain the coefficients with the most positive effect on the sentiment.
Step34: Quiz Question
Step35: Quiz Question
Step36: Comparing models
Step37: Now, compute the classification accuracy of the simple_model on the train_data
Step38: Quiz Question
Step39: Next, we will compute the classification accuracy of the simple_model on the test_data
Step40: Quiz Question
Step41: Now compute the accuracy of the majority class classifier on test_data.
Step42: Quiz Question
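Steps 15-19 describe the score (margin) and the class probability of the logistic classifier; a plain-Python sketch of both, using illustrative coefficients rather than the trained model's:

```python
import math

def margin(weights, features):
    # score s = sum_j w_j * x_j over the words present in the review
    return sum(weights.get(word, 0.0) * count for word, count in features.items())

def probability(score):
    # P(y = +1 | x) via the logistic (sigmoid) function
    return 1.0 / (1.0 + math.exp(-score))

w = {"love": 1.5, "broke": -2.0}   # illustrative coefficients (not from the model)
x = {"love": 2, "broke": 1}        # word counts for one hypothetical review
s = margin(w, x)                   # 1.5*2 - 2.0*1 = 1.0
label = +1 if s > 0 else -1
```

The class prediction flips sign with the margin, while the probability is a smooth function of it centered at 0.5 when the score is 0.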
|
5,515
|
<ASSISTANT_TASK:>
Python Code:
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
from multiprocessing import Pool
import pickle
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import recommender.simulator as sim
from utils.analysis import value_eval
from recommender.agent_predictor import AgentPredictor
from functools import partial
from sklearn.externals import joblib
NUM_THREADS = 1
LOOKBACK = 252*3
STARTING_DAYS_AHEAD = 252
POSSIBLE_FRACTIONS = [0.0, 1.0]
DYNA = 20
BASE_DAYS = 112
# Get the data
SYMBOL = 'SPY'
total_data_train_df = pd.read_pickle('../../data/data_train_val_df.pkl').stack(level='feature')
data_train_df = total_data_train_df[SYMBOL].unstack()
total_data_test_df = pd.read_pickle('../../data/data_test_df.pkl').stack(level='feature')
data_test_df = total_data_test_df[SYMBOL].unstack()
if LOOKBACK == -1:
total_data_in_df = total_data_train_df
data_in_df = data_train_df
else:
data_in_df = data_train_df.iloc[-LOOKBACK:]
total_data_in_df = total_data_train_df.loc[data_in_df.index[0]:]
# Create many agents
index = np.arange(NUM_THREADS).tolist()
env, num_states, num_actions = sim.initialize_env(total_data_in_df,
SYMBOL,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS)
estimator_close = joblib.load('../../data/best_predictor.pkl')
estimator_volume = joblib.load('../../data/best_volume_predictor.pkl')
agents = [AgentPredictor(num_states=num_states,
num_actions=num_actions,
random_actions_rate=0.98,
random_actions_decrease=0.999,
dyna_iterations=DYNA,
name='Agent_{}'.format(i),
estimator_close=estimator_close,
estimator_volume=estimator_volume,
env=env,
prediction_window=BASE_DAYS) for i in index]
def show_results(results_list, data_in_df, graph=False):
for values in results_list:
total_value = values.sum(axis=1)
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(total_value))))
print('-'*100)
initial_date = total_value.index[0]
compare_results = data_in_df.loc[initial_date:, 'Close'].copy()
compare_results.name = SYMBOL
compare_results_df = pd.DataFrame(compare_results)
compare_results_df['portfolio'] = total_value
std_comp_df = compare_results_df / compare_results_df.iloc[0]
if graph:
plt.figure()
std_comp_df.plot()
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))
# Simulate (with new envs, each time)
n_epochs = 4
for i in range(n_epochs):
tic = time()
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL,
agents[0],
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_in_df)
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL, agents[0],
learn=False,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
other_env=env)
show_results([results_list], data_in_df, graph=True)
import pickle
with open('../../data/dyna_q_with_predictor.pkl', 'wb') as best_agent:
pickle.dump(agents[0], best_agent)
TEST_DAYS_AHEAD = 112
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=False,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
TEST_DAYS_AHEAD = 112
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=True,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_test_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's show the symbols data, to see how good the recommender has to be.
Step2: Let's run the trained agent, with the test set
Step3: And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).
Step4: What are the metrics for "holding the position"?
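value_eval reports Sharpe ratio, cumulative return and daily-return statistics for a series of portfolio values; a sketch of how such metrics are conventionally computed (the sqrt(252) annualization is a common convention, assumed here rather than taken from value_eval's actual definition):

```python
import math

def sharpe_ratio(values, periods_per_year=252):
    # daily simple returns from a series of portfolio values
    rets = [values[i] / values[i - 1] - 1.0 for i in range(1, len(values))]
    mean = sum(rets) / len(rets)
    std = math.sqrt(sum((r - mean) ** 2 for r in rets) / len(rets))
    # sqrt(252) annualization is a common convention (an assumption here)
    return math.sqrt(periods_per_year) * mean / std

values = [100.0, 101.0, 102.0, 101.5, 103.0]   # made-up portfolio values
sr = sharpe_ratio(values)
cum_ret = values[-1] / values[0] - 1.0         # cumulative return
```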
|
5,516
|
<ASSISTANT_TASK:>
Python Code:
# your code here
# your code here
plt.close() # leave this here. it makes sure that if you run this cell again, the plot appears below
# parameters
# calculate the trajectory
# plot-don't forget to label your axes!
plt.close() # keep this here
# your code here
# your code here
plt.close()
# your code here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now write a function that returns the total time the projectile will stay in the air (which means return to $y = 0$), in units of seconds, given $v_0$ (with units m /s), $\theta$ (in radians) , and $g$ (with units m / s$^2$).
Step2: In the cell below, make a plot of a projectile trajectory when $g$ = 9.8 m/$s^2$ in the -y direction, $v_0$ is 100 m/s, and $\theta$ is 45 degrees ($\pi$/4 radians).
Step3: Next, let's see what happens when we change the launch angle. Make the same plot as above, but for several different values of the launch angle $\theta$, all displayed together in a single plot. For example, you could use 9 angles linearly spaced between 5 and 85 degrees. Plot all the trajectories on the same graph in the cell below.
Step4: Now we want to estimate the total distance traveled by the projectile in the air. You've already written a path-length function. Rewrite the path-length function (and any helper functions) into the cell below.
Step5: Loop through the launch angles again, load the trajectories, and determine the total distance traveled in the air for each angle. Plot these distances versus launch angle.
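A possible sketch of the functions these steps ask for, assuming launch from the origin with constant gravity (parameter names are illustrative):

```python
import math

def flight_time(v0, theta, g=9.8):
    # time for a projectile launched from y=0 to return to y=0, in seconds
    return 2.0 * v0 * math.sin(theta) / g

def path_length(v0, theta, g=9.8, n=10000):
    # numerically integrate arc length along the trajectory
    t_total = flight_time(v0, theta, g)
    ts = [t_total * i / n for i in range(n + 1)]
    xs = [v0 * math.cos(theta) * t for t in ts]
    ys = [v0 * math.sin(theta) * t - 0.5 * g * t * t for t in ts]
    return sum(math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i])
               for i in range(n))

t45 = flight_time(100.0, math.pi / 4)   # flight time for the 45-degree case
```

The path length must exceed the horizontal range (the straight-line distance between launch and landing) but stay below the range plus twice the apex height.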
|
5,517
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import pandas as pd
import random
import thinkstats2
import thinkplot
import scipy.stats
def EvalNormalCdfInverse(p, mu=0, sigma=1):
return scipy.stats.norm.ppf(p, loc=mu, scale=sigma)
EvalNormalCdfInverse(0.05, mu=90, sigma=2.5)
EvalNormalCdfInverse(0.95, mu=90, sigma=2.5)
from normal import Normal
dist = Normal(90, 7.5**2)
dist
dist_xbar = dist.Sum(9) / 9
dist_xbar.sigma
dist_xbar.Percentile(5), dist_xbar.Percentile(95)
def MakeExpoSamples(beta=2.0, iters=1000):
Generates samples from an exponential distribution.
beta: parameter
iters: number of samples to generate for each size
returns: list of samples
samples = []
for n in [1, 10, 100]:
sample = [np.sum(np.random.exponential(beta, n))
for _ in range(iters)]
samples.append((n, sample))
return samples
def NormalPlotSamples(samples, plot=1, ylabel=''):
Makes normal probability plots for samples.
samples: list of samples
label: string
for n, sample in samples:
thinkplot.SubPlot(plot)
thinkstats2.NormalProbabilityPlot(sample)
thinkplot.Config(title='n=%d' % n,
legend=False,
xticks=[],
yticks=[],
xlabel='random normal variate',
ylabel=ylabel)
plot += 1
thinkplot.PrePlot(num=3, rows=2, cols=3)
samples = MakeExpoSamples()
NormalPlotSamples(samples, plot=1,
ylabel='sum of expo values')
def MakeLognormalSamples(mu=1.0, sigma=1.0, iters=1000):
Generates samples from a lognormal distribution.
mu: parameter
sigma: parameter
iters: number of samples to generate for each size
returns: list of samples
samples = []
for n in [1, 10, 100]:
sample = [np.sum(np.random.lognormal(mu, sigma, n))
for _ in range(iters)]
samples.append((n, sample))
return samples
thinkplot.PrePlot(num=3, rows=2, cols=3)
samples = MakeLognormalSamples()
NormalPlotSamples(samples, ylabel='sum of lognormal values')
def MakeParetoSamples(alpha=1.0, iters=1000):
Generates samples from a Pareto distribution.
alpha: parameter
iters: number of samples to generate for each size
returns: list of samples
samples = []
for n in [1, 10, 100]:
sample = [np.sum(np.random.pareto(alpha, n))
for _ in range(iters)]
samples.append((n, sample))
return samples
thinkplot.PrePlot(num=3, rows=2, cols=3)
samples = MakeParetoSamples()
NormalPlotSamples(samples, ylabel='sum of Pareto values')
def GenerateCorrelated(rho, n):
Generates a sequence of correlated values from a standard normal dist.
rho: coefficient of correlation
n: length of sequence
returns: iterator
x = random.gauss(0, 1)
yield x
sigma = np.sqrt(1 - rho**2)
for _ in range(n-1):
x = random.gauss(x * rho, sigma)
yield x
def GenerateExpoCorrelated(rho, n):
Generates a sequence of correlated values from an exponential dist.
rho: coefficient of correlation
n: length of sequence
returns: NumPy array
normal = list(GenerateCorrelated(rho, n))
uniform = scipy.stats.norm.cdf(normal)
expo = scipy.stats.expon.ppf(uniform)
return expo
def MakeCorrelatedSamples(rho=0.9, iters=1000):
Generates samples from a correlated exponential distribution.
rho: correlation
iters: number of samples to generate for each size
returns: list of samples
samples = []
for n in [1, 10, 100]:
sample = [np.sum(GenerateExpoCorrelated(rho, n))
for _ in range(iters)]
samples.append((n, sample))
return samples
thinkplot.PrePlot(num=3, rows=2, cols=3)
samples = MakeCorrelatedSamples()
NormalPlotSamples(samples, ylabel='sum of correlated exponential values')
import first
live, firsts, others = first.MakeFrames()
delta = firsts.prglngth.mean() - others.prglngth.mean()
delta
def SamplingDistMean(data, n):
Computes the sampling distribution of the mean.
data: sequence of values representing the population
n: sample size
returns: Normal object
mean, var = data.mean(), data.var()
dist = Normal(mean, var)
return dist.Sum(n) / n
dist1 = SamplingDistMean(live.prglngth, len(firsts))
dist2 = SamplingDistMean(live.prglngth, len(others))
dist_diff = dist1 - dist2
dist_diff
1 - dist_diff.Prob(delta)
dist_diff.Prob(-delta)
def StudentCdf(n):
Computes the CDF of correlations generated by uncorrelated variables, using the Student t distribution.
n: sample size
returns: Cdf
ts = np.linspace(-3, 3, 101)
ps = scipy.stats.t.cdf(ts, df=n-2)
rs = ts / np.sqrt(n - 2 + ts**2)
return thinkstats2.Cdf(rs, ps)
import hypothesis
class CorrelationPermute(hypothesis.CorrelationPermute):
Tests correlations by permutation.
def TestStatistic(self, data):
Computes the test statistic.
data: tuple of xs and ys
xs, ys = data
return np.corrcoef(xs, ys)[0][1]
def ResampleCorrelations(live):
Tests the correlation between birth weight and mother's age.
live: DataFrame for live births
returns: sample size, observed correlation, CDF of resampled correlations
live2 = live.dropna(subset=['agepreg', 'totalwgt_lb'])
data = live2.agepreg.values, live2.totalwgt_lb.values
ht = CorrelationPermute(data)
p_value = ht.PValue()
return len(live2), ht.actual, ht.test_cdf
n, r, cdf = ResampleCorrelations(live)
model = StudentCdf(n)
thinkplot.Plot(model.xs, model.ps, color='gray',
alpha=0.5, label='Student t')
thinkplot.Cdf(cdf, label='sample')
thinkplot.Config(xlabel='correlation',
ylabel='CDF',
legend=True, loc='lower right')
t = r * np.sqrt((n-2) / (1-r**2))
p_value = 1 - scipy.stats.t.cdf(t, df=n-2)
print(r, p_value)
def ChiSquaredCdf(n):
Discrete approximation of the chi-squared CDF with df=n-1.
n: sample size
returns: Cdf
xs = np.linspace(0, 25, 101)
ps = scipy.stats.chi2.cdf(xs, df=n-1)
return thinkstats2.Cdf(xs, ps)
data = [8, 9, 19, 5, 8, 11]
dt = hypothesis.DiceChiTest(data)
p_value = dt.PValue(iters=1000)
n, chi2, cdf = len(data), dt.actual, dt.test_cdf
model = ChiSquaredCdf(n)
thinkplot.Plot(model.xs, model.ps, color='gray',
alpha=0.3, label='chi squared')
thinkplot.Cdf(cdf, label='sample')
thinkplot.Config(xlabel='chi-squared statistic',
ylabel='CDF',
loc='lower right')
p_value = 1 - scipy.stats.chi2.cdf(chi2, df=n-1)
print(chi2, p_value)
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Analytic methods
Step2: Here's the confidence interval for the estimated mean.
Step3: normal.py provides a Normal class that encapsulates what we know about arithmetic operations on normal distributions.
Step4: We can use it to compute the sampling distribution of the mean.
Step5: And then compute a confidence interval.
Step7: Central Limit Theorem
Step9: This function generates normal probability plots for samples with various sizes.
Step10: The following plot shows how the sum of exponential variates converges to normal as sample size increases.
Step12: The lognormal distribution has higher variance, so it requires a larger sample size before it converges to normal.
Step14: The Pareto distribution has infinite variance, and sometimes infinite mean, depending on the parameters. It violates the requirements of the CLT and does not generally converge to normal.
Step18: If the random variates are correlated, that also violates the CLT, so the sums don't generally converge.
Step19: Difference in means
Step21: The following function computes the sampling distribution of the mean for a set of values and a given sample size.
Step22: Here are the sampling distributions for the means of the two groups under the null hypothesis.
Step23: And the sampling distribution for the difference in means.
Step24: Under the null hypothesis, here's the chance of exceeding the observed difference.
Step25: And the chance of falling below the negated difference.
Step27: The sum of these probabilities is the two-sided p-value.
Step30: The following is a HypothesisTest that uses permutation to estimate the sampling distribution of a correlation.
Step32: Now we can estimate the sampling distribution by permutation and compare it to the Student t distribution.
Step33: That confirms the analytic result. Now we can use the CDF of the Student t distribution to compute a p-value.
Step35: Chi-squared test
Step36: Again, we can confirm the analytic result by comparing values generated by simulation with the analytic distribution.
Step37: And then we can use the analytic distribution to compute p-values.
Step38: Exercises
Step39: Exercise
Step40: Exercise
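The analytic machinery behind these steps — the normal CDF, the two-sided p-value, and the Student t statistic for a correlation — can be sketched with the stdlib, without scipy:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    # normal CDF via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def two_sided_p_value(delta, mu, sigma):
    # P(|X - mu| >= |delta - mu|) for X ~ Normal(mu, sigma)
    z = abs(delta - mu) / sigma
    return 2.0 * (1.0 - normal_cdf(z))

def correlation_t_stat(r, n):
    # t statistic for a sample correlation r with n observations
    return r * math.sqrt((n - 2) / (1.0 - r * r))
```

As a sanity check, the two-sided p-value at 1.96 standard deviations should be close to the familiar 0.05.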
|
5,518
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from mpl_toolkits.mplot3d import Axes3D
from __future__ import unicode_literals
from matplotlib.gridspec import GridSpec
# %matplotlib notebook
my_data = np.loadtxt('../Catalogue/Match07_small_mags_doubleclean_type.csv', delimiter=',', dtype=str)
my_dictionary = {}
for i in range(len(my_data[0, :])): # Converting numpy array into dictionary
my_dictionary[my_data[0, i]] = np.array(my_data[0 + 1:, i], dtype=str)
print my_data.shape
print my_data.size
print my_data[0,:]
redshift = my_dictionary['Z'].astype(float)
cataid = my_dictionary['CATAID'].astype(str)
z_prob = my_dictionary['PROB'].astype(float)
z_quality = my_dictionary['NQ'].astype(int)
fuv_band = my_dictionary['MAG_AB_FUV'].astype(float)
nuv_band = my_dictionary['MAG_AB_NUV'].astype(float)
u_band = my_dictionary['MAG_AB_U'].astype(float)
g_band = my_dictionary['MAG_AB_G'].astype(float)
r_band = my_dictionary['MAG_AB_R'].astype(float)
mag_abs_r = my_dictionary['MAG_ABSOLUTE_R'].astype(float)
stellar_mass = my_dictionary['logmstar'].astype(float) # stellar mass from sed fitting - log scale
stellar_age = my_dictionary['logage'].astype(float) # stellar age - log scale
stellar_met = my_dictionary['metal'].astype(float) # stellar metallicity
dn4000 = my_dictionary['D4000N'].astype(float)
h_alpha_flux = my_dictionary['HA_FLUX_COMP'].astype(float)
h_alpha_ew = my_dictionary['HA_EW_COMP'].astype(float)
h_beta_flux = my_dictionary['HB_FLUX_COMP'].astype(float)
h_beta_ew = my_dictionary['HB_EW_COMP'].astype(float)
nii_flux = my_dictionary['NIIR_FLUX_COMP'].astype(float) # R for red or 6583A -- see http://www.gama-survey.org/dr3/schema/dmu.php?id=8
oiii_flux = my_dictionary['OIIIR_FLUX_COMP'].astype(float) # R for red or 5007A -- see http://www.gama-survey.org/dr3/schema/dmu.php?id=8
uv_class = my_dictionary['UV_CLASS_YI2011'].astype(str)
obj_type = my_dictionary['TYPE'].astype(int)
print np.unique(uv_class)
print my_data[:,0].shape
z_min=0.06
z_max=0.40
plt.hist(z_prob, bins=200)
plt.yscale('log')
plt.show()
indexes = np.arange(redshift.size)
index_all = indexes[(r_band>0)*(r_band<19.8)*(nuv_band>0)*(fuv_band>0)*((fuv_band-nuv_band)<50)
*((fuv_band-nuv_band)>(-20))*(redshift>=z_min)*(z_prob>0.8)*(obj_type==3)]
print my_data[index_all].shape
print (np.unique(cataid[index_all])).size
index_uvup = np.where((r_band>0)*(r_band<19.8)*(nuv_band>0)*(fuv_band>0)*((nuv_band-r_band)>5.4)
                      *((fuv_band-nuv_band)<0.9)*((fuv_band-r_band)<6.6)*((fuv_band-nuv_band)<50)
                      *((fuv_band-nuv_band)>(-20))*(redshift>=z_min))
index_rsf = np.where((r_band>0)*(r_band<19.8)*(nuv_band>0)*(fuv_band>0)*((nuv_band-r_band)<5.4)
                     *((fuv_band-nuv_band)<50)*((fuv_band-nuv_band)>(-20))*(redshift>=z_min))
index_uvweak = np.where((r_band>0)*(r_band<19.8)*(nuv_band>0)*(fuv_band>0)*((nuv_band-r_band)>5.4)
                        *((fuv_band-r_band)>6.6)*((fuv_band-nuv_band)<50)*((fuv_band-nuv_band)>(-20))
                        *(redshift>=z_min))
index_redsequence = np.where((r_band>0)*(r_band<19.8)*(nuv_band>0)*(fuv_band>0)*((nuv_band-r_band)>5.4)
                             *((fuv_band-nuv_band)<50)*((fuv_band-nuv_band)>(-20))*(redshift>=z_min))
sns.set_style("whitegrid")
plt.rcParams["axes.edgecolor"] = "0.15"
plt.rcParams["axes.linewidth"] = 1.
plt.subplots(1,1, figsize=(8,5))
plot01, = plt.plot((nuv_band - r_band)[index_rsf], (fuv_band - nuv_band)[index_rsf], 'o', markersize=6,
color = '#018571', alpha=0.5, label="RSF")
plot02, = plt.plot((nuv_band - r_band)[index_uvweak], (fuv_band - nuv_band)[index_uvweak], 's',
markersize=6, color = '#dfc27d', alpha=0.5, label="UV Weak")
plot03, = plt.plot((nuv_band - r_band)[index_uvup], (fuv_band - nuv_band)[index_uvup], 'D', markersize=6,
color = '#a6611a', alpha=0.5, label="UV upturn")
plt.legend(numpoints=1, loc='best', fontsize=14, frameon=True, framealpha=0.85)
# plt.text(0.0, 4, r"RSF", fontsize=18)
# plt.text(7.5, 4, r"UV Weak", fontsize=18)
# plt.text(7.5, -1.8, r"UV upturn", fontsize=18)
plt.axvline(x=5.4, color='black', linewidth=2.)
plt.axhline(y=0.9, xmin=0.659, xmax=2, color='black', linewidth=2.)
plt.xlabel("NUV-r", fontsize=15)
plt.ylabel("FUV-NUV", fontsize=15)
plt.tick_params('both', labelsize='14')
plt.grid(alpha=0.00)
plt.savefig('../Figs/g2_maglim_yi_diagram.pdf')
plt.savefig('../Figs/g2_maglim_yi_diagram.png')
plt.show()
xbpt_k01 = np.linspace(-2.2, 0.4, 1000)    # Kewley et al. 2001
xbpt_k03 = np.linspace(-2.2, -0.01, 1000)  # Kauffmann et al. 2003
xbpt_s06 = np.linspace(-2.2, -0.05, 1000)  # Stasinska et al. 2006
xbpt_s07 = np.linspace(-0.182, 2.0, 1000)  # Schawinski et al. 2007
# Demarcation curves, evaluated directly on the x grids (no per-element loops needed)
ybpt_k01 = 0.61 / (xbpt_k01 - 0.47) + 1.19   # Kewley et al. 2001
ybpt_k03 = 0.61 / (xbpt_k03 - 0.05) + 1.3    # Kauffmann et al. 2003
# Stasinska et al. 2006 -- note the quadratic term 0.27297*x**2, as published
ybpt_s06 = (-30.787 + 1.1358*xbpt_s06 + 0.27297*xbpt_s06**2) * np.tanh(5.7409*xbpt_s06) - 31.093
ybpt_s07 = 1.05 * xbpt_s07 + 0.45            # Schawinski et al. 2007
print(np.unique(np.isnan(h_alpha_flux[index_all])))  # is there any NaN value?
print(np.unique(np.isinf(h_alpha_flux[index_all])))  # is there any infinite value?
print(h_alpha_flux[index_all][h_alpha_flux[index_all]==0].size)  # total amount of zeroes
print(h_alpha_flux[index_all].size)  # total size of the sample
print(h_alpha_flux[index_all].size - h_alpha_flux[index_all][h_alpha_flux[index_all]==0].size)  # removing the zeroes
idx_bpt_clean = np.where((np.logical_not(np.isnan(np.log10(nii_flux[index_all]/h_alpha_flux[index_all]))))
*(np.logical_not(np.isinf(np.log10(nii_flux[index_all]/h_alpha_flux[index_all]))))
*(np.logical_not(np.isnan(np.log10(oiii_flux[index_all]/h_beta_flux[index_all]))))
*(np.logical_not(np.isinf(np.log10(oiii_flux[index_all]/h_beta_flux[index_all]))))
*(np.logical_not(h_alpha_flux[index_all]<=0))*(np.logical_not(h_beta_flux[index_all]<=0))
*(np.logical_not(nii_flux[index_all]<=0))*(np.logical_not(oiii_flux[index_all]<=0)))
idx_whan_clean = np.where((np.logical_not(np.isnan(np.log10(nii_flux[index_all]/h_alpha_flux[index_all]))))
*(np.logical_not(np.isinf(np.log10(nii_flux[index_all]/h_alpha_flux[index_all]))))
*(np.logical_not(np.isnan(np.log10(h_alpha_ew[index_all]))))
*(np.logical_not(np.isinf(np.log10(h_alpha_ew[index_all]))))
*(np.logical_not(h_alpha_flux[index_all]<=0))*(np.logical_not(nii_flux[index_all]<=0))
*(np.logical_not(h_alpha_ew[index_all]<=0)))
print(redshift[index_all][idx_bpt_clean].size)
print(redshift[index_all][idx_whan_clean].size)
print(redshift[index_all].size - redshift[index_all][idx_bpt_clean].size)
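A note on the cleaning masks above: each `np.logical_not(np.isnan(...)) * np.logical_not(np.isinf(...))` pair is equivalent to a single `np.isfinite(...)` test, and `np.logical_not(x <= 0)` is just `x > 0`. A minimal sketch with toy flux arrays (the variable names and values below are illustrative, not the catalogue's):

```python
import numpy as np

# Toy fluxes containing the failure modes filtered above: NaN, inf, zero, negative
flux_a = np.array([1.0, np.nan, 2.0, 0.0, -3.0, 4.0])
flux_b = np.array([2.0, 1.0, np.inf, 1.0, 1.0, 2.0])

with np.errstate(divide='ignore', invalid='ignore'):
    ratio = np.log10(flux_a / flux_b)  # nan/inf wherever the inputs are bad

# Verbose form, mirroring the chained logical_not calls in the notebook
verbose = (np.logical_not(np.isnan(ratio)) * np.logical_not(np.isinf(ratio))
           * np.logical_not(flux_a <= 0) * np.logical_not(flux_b <= 0))

# Compact equivalent
compact = np.isfinite(ratio) & (flux_a > 0) & (flux_b > 0)

print(compact.tolist())  # → [True, False, False, False, False, True]
```

Both forms keep exactly the rows where every flux is a finite positive number.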
# for i in range(h_alpha_flux[index_all][idx_bpt_clean].size):
# print h_alpha_flux[index_all][idx_bpt_clean][i]
h_alpha_flux_bpt = h_alpha_flux[index_all][idx_bpt_clean]
h_beta_flux_bpt = h_beta_flux[index_all][idx_bpt_clean]
nii_flux_bpt = nii_flux[index_all][idx_bpt_clean]
oiii_flux_bpt = oiii_flux[index_all][idx_bpt_clean]
uv_class_bpt = uv_class[index_all][idx_bpt_clean]
print(h_alpha_flux_bpt.size)
print(uv_class_bpt.size)
print(np.unique(uv_class_bpt))
idx_uvup_bpt = np.where(uv_class_bpt=='UV_UPTURN')
idx_uvwk_bpt = np.where(uv_class_bpt=='UV_WEAK')
idx_rsf_bpt = np.where(uv_class_bpt=='RSF')
h_alpha_flux_whan = h_alpha_flux[index_all][idx_whan_clean]
h_beta_flux_whan = h_beta_flux[index_all][idx_whan_clean]
nii_flux_whan = nii_flux[index_all][idx_whan_clean]
h_alpha_ew_whan = h_alpha_ew[index_all][idx_whan_clean]
uv_class_whan = uv_class[index_all][idx_whan_clean]
print(h_alpha_flux_whan.size)
print(uv_class_whan.size)
print(np.unique(uv_class_whan))
idx_uvup_whan = np.where(uv_class_whan=='UV_UPTURN')
idx_uvwk_whan = np.where(uv_class_whan=='UV_WEAK')
idx_rsf_whan = np.where(uv_class_whan=='RSF')
xbpt = np.log10(nii_flux_bpt/h_alpha_flux_bpt)
xbpt_uvup = np.log10(nii_flux_bpt[idx_uvup_bpt]/h_alpha_flux_bpt[idx_uvup_bpt])
xbpt_uvwk = np.log10(nii_flux_bpt[idx_uvwk_bpt]/h_alpha_flux_bpt[idx_uvwk_bpt])
xbpt_rsf = np.log10(nii_flux_bpt[idx_rsf_bpt]/h_alpha_flux_bpt[idx_rsf_bpt])
ybpt = np.log10(oiii_flux_bpt/h_beta_flux_bpt)
ybpt_uvup = np.log10(oiii_flux_bpt[idx_uvup_bpt]/h_beta_flux_bpt[idx_uvup_bpt])
ybpt_uvwk = np.log10(oiii_flux_bpt[idx_uvwk_bpt]/h_beta_flux_bpt[idx_uvwk_bpt])
ybpt_rsf = np.log10(oiii_flux_bpt[idx_rsf_bpt]/h_beta_flux_bpt[idx_rsf_bpt])
xwhan = np.log10(nii_flux_whan/h_alpha_flux_whan)
xwhan_uvup = np.log10(nii_flux_whan[idx_uvup_whan]/h_alpha_flux_whan[idx_uvup_whan])
xwhan_uvwk = np.log10(nii_flux_whan[idx_uvwk_whan]/h_alpha_flux_whan[idx_uvwk_whan])
xwhan_rsf = np.log10(nii_flux_whan[idx_rsf_whan]/h_alpha_flux_whan[idx_rsf_whan])
ywhan = np.log10(h_alpha_ew_whan)
ywhan_uvup = np.log10(h_alpha_ew_whan[idx_uvup_whan])
ywhan_uvwk = np.log10(h_alpha_ew_whan[idx_uvwk_whan])
ywhan_rsf = np.log10(h_alpha_ew_whan[idx_rsf_whan])
print(xbpt_rsf.size, ybpt_rsf.size)
print(xwhan_rsf.size, ywhan_rsf.size)
print(xwhan.size)
print(xwhan_uvup.size + xwhan_uvwk.size + xwhan_rsf.size)
# INITIAL SETTINGS
plt.rcParams["axes.edgecolor"] = "0.15"
plt.rcParams["axes.linewidth"] = 1.
plt.subplots(figsize=(10.5,5))
# FIRST PLOT -- BPT
plt.subplot(1,2,1)
plot01c = plt.scatter(xbpt_rsf, ybpt_rsf, c='#018571', s=5, alpha=0.3, marker='o')
plot01b = plt.scatter(xbpt_uvwk, ybpt_uvwk, c='#dfc27d', s=10, alpha=1, marker='s')
plot01a = plt.scatter(xbpt_uvup, ybpt_uvup, c='#a6611a', s=20, alpha=1, marker='D')
plot02, = plt.plot(xbpt_k01, ybpt_k01, ':', color='black', label='Kewley+01')
plot03, = plt.plot(xbpt_k03, ybpt_k03, '-', color='black', label='Kauffman+03')
plot04, = plt.plot(xbpt_s06, ybpt_s06, '-.', color='black', label='Stasinska+06')
plot05, = plt.plot(xbpt_s07, ybpt_s07, '--', color='black', label='Schawinski+07')
l1 = plt.legend([plot01a, plot01b, plot01c], [r"UV Upturn", r"UV Weak", r"RSF"], numpoints=1, loc='lower left',
fontsize=12, frameon=True, framealpha=0.85)
l2 = plt.legend([plot02, plot03, plot04, plot05], [r"Kewley+01", r"Kauffman+03", r"Stasińska+06", r"Schawinski+07"],
numpoints=3, loc='lower right', fontsize=12, frameon=True, framealpha=0.85)
l1.get_frame().set_edgecolor('black')
l2.get_frame().set_edgecolor('black')
plt.gca().add_artist(l1)
plt.fill_betweenx(ybpt_k01, xbpt_k01, xbpt_s06, where=(xbpt_s06>-1.242)*(ybpt_s06<0.835), facecolor='gray', alpha=0.2)
plt.fill_between(xbpt_s06, ybpt_k01, ybpt_s06, where=(xbpt_s06>-1.242)*(ybpt_k01<0.835), facecolor='gray', alpha=0.2)
plt.xlabel(r"$\log ([NII]/H{\alpha})$", fontweight='bold', size=14)
plt.ylabel(r"$\log (\left[OIII\right]/H \beta) $", fontweight='bold', fontsize=14)
plt.text(-1.9, 0.2, r"Star Forming", fontsize=16)
# plt.text(-0.5, 1.6, r"AGN", fontsize=13)
plt.text(-0.5, 1.3, r"Seyfert", fontsize=13)
plt.text(0.9, -1, r"LINER", fontsize=13)
plt.annotate(r"Composite", xy=(0., -2), xycoords='data', xytext=(0.5, 0.5), size=13,
arrowprops=dict(arrowstyle='wedge',facecolor='black', connectionstyle="angle3,angleA=90,angleB=0"))
plt.xlim([-2, 1.5])
plt.ylim([-5, 2])
plt.minorticks_on()
plt.tick_params('both', labelsize='13')
plt.grid(alpha=0.0)
# SECOND PLOT -- WHAN DIAGRAM
plt.subplot(1,2,2)
plot01c = plt.scatter(xwhan_rsf, ywhan_rsf, c='#018571', s=5, alpha=0.3, marker='o')
plot01b = plt.scatter(xwhan_uvwk, ywhan_uvwk, c='#dfc27d', s=10, alpha=1.0, marker='s')
plot01a = plt.scatter(xwhan_uvup, ywhan_uvup, c='#a6611a', s=20, alpha=1.0, marker='D')
l3 = plt.legend([plot01a, plot01b, plot01c], [r"UV Upturn", r"UV Weak", r"RSF"],numpoints=1, loc='upper right',
fontsize=12, frameon=True, framealpha=0.7)
l3.get_frame().set_edgecolor('black')
plt.axvline(x=-0.4, ymin=.332, ymax=3.5, color='black', linewidth=1.5)
plt.axhline(y=+0.5, color='black', linewidth=0.5)
plt.axhline(y=0.78, xmin=0.469, xmax=1, color='black', linewidth=1.5)
plt.xlabel(r"$\log ([NII]/H{\alpha})$", fontweight='bold', fontsize=14)
plt.ylabel(r"$\log EW(H{\alpha})$", fontweight='bold', fontsize=14)
plt.text(-1.75, -0.75, r"Retired/Passive", fontsize=13)
plt.text(0.75, 2.0, r"sAGN", fontsize=13)
plt.text(0.75, 0.6, r"wAGN", fontsize=13)
plt.text(-1.75, 3, r"Star Forming", fontsize=13)
plt.xlim([-2, 1.5])
plt.ylim([-1.0, 3.5])
plt.minorticks_on()
plt.tick_params('both', labelsize='13')
plt.grid(alpha=0.0)
# FINAL SETTINGS
plt.tight_layout()
plt.savefig('../Figs/g2_bptwhan.pdf')
plt.savefig('../Figs/g2_bptwhan.png')
plt.show()
# OVERALL SETTINGS
plt.subplots(figsize=(8,5))
plt.rcParams["axes.edgecolor"] = "0.15"
plt.rcParams["axes.linewidth"] = 1.
bpt_xlim = [-2, 1.5]
bpt_ylim = [-5, 2]
whan_xlim = [-2, 1.5]
whan_ylim = [-1.0, 3.5]
# BPT
## RSF
ax1=plt.subplot(2,3,1)
plot01c = plt.scatter(xbpt_rsf, ybpt_rsf, c='#018571', s=10, alpha=0.7, label='RSF')
plot02, = plt.plot(xbpt_k01, ybpt_k01, ':', color='black', label='Kewley+01')
plot03, = plt.plot(xbpt_k03, ybpt_k03, '-', color='black', label='Kauffman+03')
plot04, = plt.plot(xbpt_s06, ybpt_s06, '-.', color='black', label='Stasinska+06')
plot05, = plt.plot(xbpt_s07, ybpt_s07, '--', color='black', label='Schawinski+07')
plt.ylabel(r"$\log (\left[OIII\right]/H \beta) $", fontweight='bold', fontsize=15)
# plt.legend([plot01c], [r"RSF"], numpoints=500, loc='upper right', fontsize=10, frameon=True, framealpha=1.)
plt.title("RSF", fontsize=14)
plt.xlim(bpt_xlim)
plt.ylim(bpt_ylim)
plt.minorticks_on()
plt.tick_params('both', labelsize='14')
plt.xticks(np.arange(bpt_xlim[0], bpt_xlim[1], 1.))
plt.yticks(np.arange(bpt_ylim[0], bpt_ylim[1], 2))
plt.grid(alpha=0.0)
## UV Weak
ax2=plt.subplot(2,3,2)
plot01b = plt.scatter(xbpt_uvwk, ybpt_uvwk, c='#dfc27d', s=10, alpha=1, label='UV weak')
plot02, = plt.plot(xbpt_k01, ybpt_k01, ':', color='black', label='Kewley+01')
plot03, = plt.plot(xbpt_k03, ybpt_k03, '-', color='black', label='Kauffman+03')
plot04, = plt.plot(xbpt_s06, ybpt_s06, '-.', color='black', label='Stasinska+06')
plot05, = plt.plot(xbpt_s07, ybpt_s07, '--', color='black', label='Schawinski+07')
# plt.xlabel(r"$\log ([NII]/H{\alpha})$", fontweight='bold', size=19)
plt.title("UV weak", fontsize=14)
plt.xlim(bpt_xlim)
plt.ylim(bpt_ylim)
ax2.yaxis.set_visible(False)
plt.minorticks_on()
plt.tick_params('both', labelsize='14')
plt.xticks(np.arange(bpt_xlim[0], bpt_xlim[1], 1.))
plt.grid(alpha=0.0)
## UV Upturn
ax3=plt.subplot(2,3,3)
plot01a = plt.scatter(xbpt_uvup, ybpt_uvup, c='#a6611a', s=10, alpha=1, label='UV upturn')
plot02, = plt.plot(xbpt_k01, ybpt_k01, ':', color='black', label='Kewley+01')
plot03, = plt.plot(xbpt_k03, ybpt_k03, '-', color='black', label='Kauffman+03')
plot04, = plt.plot(xbpt_s06, ybpt_s06, '-.', color='black', label='Stasinska+06')
plot05, = plt.plot(xbpt_s07, ybpt_s07, '--', color='black', label='Schawinski+07')
plt.title("UV upturn", fontsize=14)
plt.xlim(bpt_xlim)
plt.ylim(bpt_ylim)
ax3.yaxis.set_visible(False)
plt.minorticks_on()
plt.tick_params('both', labelsize='14')
plt.xticks(np.arange(bpt_xlim[0], bpt_xlim[1], 1.))
plt.grid(alpha=0.0)
# WHAN
## RSF
ax4=plt.subplot(2,3,4)
plot01c = plt.scatter(xwhan_rsf, ywhan_rsf, c='#018571', s=10, alpha=1, label='RSF')
plt.axvline(x=-0.4, ymin=.332, ymax=3.5, color='black', linewidth=1.5)
plt.axhline(y=+0.5, color='black', linewidth=0.5)
plt.axhline(y=0.82, xmin=0.455, xmax=1, color='black', linewidth=1.5)
plt.ylabel(r"$\log EW(H{\alpha})$", fontweight='bold', fontsize=16)
plt.xlim(whan_xlim)
plt.ylim(whan_ylim)
plt.minorticks_on()
plt.tick_params('both', labelsize='14')
plt.xticks(np.arange(whan_xlim[0], whan_xlim[1], 1.))
plt.yticks(np.arange(whan_ylim[0], whan_ylim[1], 1.))
plt.grid(alpha=0.0)
ax5=plt.subplot(2,3,5)
plot01b = plt.scatter(xwhan_uvwk, ywhan_uvwk, c='#dfc27d', s=10, alpha=1., label='UV weak')
plt.axvline(x=-0.4, ymin=.332, ymax=3.5, color='black', linewidth=1.5)
plt.axhline(y=+0.5, color='black', linewidth=0.5)
plt.axhline(y=0.82, xmin=0.455, xmax=1, color='black', linewidth=1.5)
plt.xlabel(r"$\log ([NII]/H{\alpha})$", fontweight='bold', size=16)
plt.xlim(whan_xlim)
plt.ylim(whan_ylim)
ax5.yaxis.set_visible(False)
plt.minorticks_on()
plt.tick_params('both', labelsize='14')
plt.xticks(np.arange(whan_xlim[0], whan_xlim[1], 1.))
plt.grid(alpha=0.0)
ax6=plt.subplot(2,3,6)
plot01a = plt.scatter(xwhan_uvup, ywhan_uvup, c='#a6611a', s=10, alpha=1, label='UV upturn')
plt.axvline(x=-0.4, ymin=.332, ymax=3.5, color='black', linewidth=1.5)
plt.axhline(y=+0.5, color='black', linewidth=0.5)
plt.axhline(y=0.82, xmin=0.455, xmax=1, color='black', linewidth=1.5)
plt.xlim(whan_xlim)
plt.ylim(whan_ylim)
ax6.yaxis.set_visible(False)
plt.minorticks_on()
plt.tick_params('both', labelsize='14')
plt.xticks(np.arange(whan_xlim[0], whan_xlim[1], 1.))
plt.grid(alpha=0.0)
plt.tight_layout()
plt.savefig('../Figs/g2_bptwhan_split.pdf')
plt.savefig('../Figs/g2_bptwhan_split.png')
plt.show()
print(xbpt.size)
print(xwhan.size)
whan_class = []
for i in range(xwhan.size):
if (xwhan[i]<-0.4)*(ywhan[i]>0.5):
whan_class_i = 'SF'
elif (xwhan[i]>-0.4)*(ywhan[i]>0.82):
whan_class_i = 'sAGN'
elif (xwhan[i]>-0.4)*(ywhan[i]<0.82)*(ywhan[i]>0.5):
whan_class_i = 'wAGN'
elif (ywhan[i]<0.5):
whan_class_i = 'Retired/Passive'
else:
        print('error')
whan_class.append(whan_class_i)
whan_class = np.array(whan_class)
idx_sf = np.where(whan_class=='SF')
idx_sagn = np.where(whan_class=='sAGN')
idx_wagn = np.where(whan_class=='wAGN')
idx_rp = np.where(whan_class=='Retired/Passive')
print("RSF objects in SF region in WHAN diagram is %d" % list(uv_class_whan[idx_sf]).count('RSF'))
print("UV weak objects in SF region in WHAN diagram is %d" % list(uv_class_whan[idx_sf]).count('UV_WEAK'))
print("UV upturn objects in SF region in WHAN diagram is %d" % list(uv_class_whan[idx_sf]).count('UV_UPTURN'))
print("RSF objects in sAGN region in WHAN diagram is %d" % list(uv_class_whan[idx_sagn]).count('RSF'))
print("UV weak objects in sAGN region in WHAN diagram is %d" % list(uv_class_whan[idx_sagn]).count('UV_WEAK'))
print("UV upturn objects in sAGN region in WHAN diagram is %d" % list(uv_class_whan[idx_sagn]).count('UV_UPTURN'))
print("RSF objects in wAGN region in WHAN diagram is %d" % list(uv_class_whan[idx_wagn]).count('RSF'))
print("UV weak objects in wAGN region in WHAN diagram is %d" % list(uv_class_whan[idx_wagn]).count('UV_WEAK'))
print("UV upturn objects in wAGN region in WHAN diagram is %d" % list(uv_class_whan[idx_wagn]).count('UV_UPTURN'))
print("RSF objects in Retired/Passive region in WHAN diagram is %d" % list(uv_class_whan[idx_rp]).count('RSF'))
print("UV weak objects in Retired/Passive region in WHAN diagram is %d" % list(uv_class_whan[idx_rp]).count('UV_WEAK'))
print("UV upturn objects in Retired/Passive region in WHAN diagram is %d" % list(uv_class_whan[idx_rp]).count('UV_UPTURN'))
print("UV Classification & SF & sAGN & wAGN & Retired/Passive \\")
print("RSF & %d & %d & %d & %d \\ " % (list(uv_class_whan[idx_sf]).count('RSF'),
                                       list(uv_class_whan[idx_sagn]).count('RSF'),
                                       list(uv_class_whan[idx_wagn]).count('RSF'),
                                       list(uv_class_whan[idx_rp]).count('RSF')))
print("UV weak & %d & %d & %d & %d \\ " % (list(uv_class_whan[idx_sf]).count('UV_WEAK'),
                                           list(uv_class_whan[idx_sagn]).count('UV_WEAK'),
                                           list(uv_class_whan[idx_wagn]).count('UV_WEAK'),
                                           list(uv_class_whan[idx_rp]).count('UV_WEAK')))
print("UV upturn & %d & %d & %d & %d \\ " % (list(uv_class_whan[idx_sf]).count('UV_UPTURN'),
                                             list(uv_class_whan[idx_sagn]).count('UV_UPTURN'),
                                             list(uv_class_whan[idx_wagn]).count('UV_UPTURN'),
                                             list(uv_class_whan[idx_rp]).count('UV_UPTURN')))
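As a side note, cross-tabulations like the one printed above can also be built in one call with `pandas.crosstab` (pandas is already used later in this notebook). A toy sketch with illustrative labels, not the catalogue's actual per-object classes:

```python
import numpy as np
import pandas as pd

# Toy per-object labels (illustrative only)
uv = np.array(['RSF', 'UV_WEAK', 'RSF', 'UV_UPTURN', 'RSF'])
whan = np.array(['SF', 'sAGN', 'SF', 'Retired/Passive', 'wAGN'])

table = pd.crosstab(pd.Series(uv, name='UV_CLASS'),
                    pd.Series(whan, name='WHAN_CLASS'))
print(table)
```

`table.to_latex()` then produces the LaTeX rows directly, replacing the manual `%d & %d & ...` formatting.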
bpt_class = []
idx_co = []
idx_sf = []
idx_sy = []
idx_ln = []
for i in range(xbpt.size):
# checking the proximity of each observation to each curve
idx_k01 = np.abs(xbpt_k01-xbpt[i]).argmin() # index of the K01 curve closest to my observation i
idx_s06 = np.abs(xbpt_s06-xbpt[i]).argmin() # index of the S06 curve closest to my observation i
idx_s07 = np.abs(xbpt_s07-xbpt[i]).argmin() # index of the S07 curve closest to my observation i
if (ybpt[i]>ybpt_k01[idx_k01])*(ybpt[i]>ybpt_s07[idx_s07]):
idx_sy.append(i)
bpt_class.append('Seyfert')
elif (ybpt[i]>ybpt_k01[idx_k01])*(ybpt[i]<ybpt_s07[idx_s07]):
idx_ln.append(i)
bpt_class.append('LINER')
elif (ybpt[i]<ybpt_k01[idx_k01])*(ybpt[i]>ybpt_s06[idx_s06]):
idx_co.append(i)
bpt_class.append('Composite')
else:
idx_sf.append(i)
bpt_class.append('SF')
bpt_class = np.array(bpt_class)
print(bpt_class.size)
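The per-point nearest-index search in the loop above can also be done without an explicit loop: `np.interp` evaluates each demarcation curve at the observed x positions, after which the region tests become vectorised comparisons. A sketch with toy data (the curve is the Kewley+01 form used above; the observed values are illustrative):

```python
import numpy as np

# Toy observed line ratios (illustrative values, not catalogue data)
x_obs = np.array([-1.5, -0.2, 0.2, 0.6])
y_obs = np.array([-0.5, 0.0, 0.9, -0.3])

# Kewley+01 maximum-starburst curve, as defined earlier in the notebook
x_curve = np.linspace(-2.2, 0.4, 1000)
y_curve = 0.61 / (x_curve - 0.47) + 1.19

# Curve height at each observed x; points beyond the curve's x range are
# clamped to the endpoint value, which is fine for a region test like this
y_at_obs = np.interp(x_obs, x_curve, y_curve)

above_k01 = y_obs > y_at_obs  # True on the AGN side of Kewley+01
print(above_k01.tolist())  # → [False, False, True, True]
```

The same `np.interp` call with the S06 and S07 curves gives the other boolean masks, and the Seyfert/LINER/Composite/SF split becomes a few array operations.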
plot_co = plt.scatter(xbpt[idx_co], ybpt[idx_co], c='#a6611a', s=10, alpha=1)
plot_sf = plt.scatter(xbpt[idx_sf], ybpt[idx_sf], c='green', s=10, alpha=0.8)
plot_sy = plt.scatter(xbpt[idx_sy], ybpt[idx_sy], c='blue', s=10, alpha=0.8)
plot_ln = plt.scatter(xbpt[idx_ln], ybpt[idx_ln], c='magenta', s=10, alpha=0.8)
# plot_na = plt.scatter(xbpt[idx_na], ybpt[idx_na], c='red', s=10, alpha=1)
plot02, = plt.plot(xbpt_k01, ybpt_k01, ':', color='black', label='Kewley+01')
plot03, = plt.plot(xbpt_k03, ybpt_k03, '-', color='black', label='Kauffman+03')
plot04, = plt.plot(xbpt_s06, ybpt_s06, '-.', color='black', label='Stasinska+06')
plot05, = plt.plot(xbpt_s07, ybpt_s07, '--', color='black', label='Schawinski+07')
plt.xlim(bpt_xlim)
plt.ylim(bpt_ylim)
plt.minorticks_on()
plt.tick_params('both', labelsize='15')
plt.xticks(np.arange(bpt_xlim[0], bpt_xlim[1], 1.))
plt.grid(alpha=0.0)
plt.show()
idxx_sf = np.where(bpt_class=='SF')
idxx_sy = np.where(bpt_class=='Seyfert')
idxx_ln = np.where(bpt_class=='LINER')
idxx_co = np.where(bpt_class=='Composite')
print("UV Classification & SF & Seyfert & LINER & Composite \\")
print("RSF & %d & %d & %d & %d \\ " % (list(uv_class_bpt[idxx_sf]).count('RSF'),
                                       list(uv_class_bpt[idxx_sy]).count('RSF'),
                                       list(uv_class_bpt[idxx_ln]).count('RSF'),
                                       list(uv_class_bpt[idxx_co]).count('RSF')))
print("UV weak & %d & %d & %d & %d \\ " % (list(uv_class_bpt[idxx_sf]).count('UV_WEAK'),
                                           list(uv_class_bpt[idxx_sy]).count('UV_WEAK'),
                                           list(uv_class_bpt[idxx_ln]).count('UV_WEAK'),
                                           list(uv_class_bpt[idxx_co]).count('UV_WEAK')))
print("UV upturn & %d & %d & %d & %d \\ " % (list(uv_class_bpt[idxx_sf]).count('UV_UPTURN'),
                                             list(uv_class_bpt[idxx_sy]).count('UV_UPTURN'),
                                             list(uv_class_bpt[idxx_ln]).count('UV_UPTURN'),
                                             list(uv_class_bpt[idxx_co]).count('UV_UPTURN')))
bins = np.arange(0, (redshift[index_all]).max(), 0.05)
ratio_uvup_redseq = []
average_redshift = []
z_uv = []
z_rs = []
redshift_uvup = redshift[index_uvup]
for i in range(bins.size):
if i==0:
continue
else:
index_redseq_i = np.where((bins[i-1]<=redshift[index_redsequence])*(redshift[index_redsequence]<=bins[i]))
index_uvup_i = np.where((bins[i-1]<=redshift_uvup)*(redshift_uvup <= bins[i]))
redshift_bin_redseq = redshift[index_redseq_i]
redshift_bin_uvup = redshift_uvup[index_uvup_i]
if (redshift_bin_redseq.size==0):
ratio_uvup_i = 0
            print("There are no UV Upturn galaxies in this range of redshift: %.2f and %.2f" % (bins[i-1], bins[i]))
else:
ratio_uvup_i = (np.float(redshift_bin_uvup.size) / np.float(redshift_bin_redseq.size)) *100
average_redshift_i = np.average((bins[i], bins[i-1]))
average_redshift.append(average_redshift_i)
z_uv.append(redshift_bin_uvup.size)
z_rs.append(redshift_bin_redseq.size)
ratio_uvup_redseq.append(ratio_uvup_i)
ratio_uvup_redseq = np.array(ratio_uvup_redseq)
z_uv = np.array(z_uv)
z_rs = np.array(z_rs)
average_redshift = np.array(average_redshift)
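The binned counting loop above can also be expressed with `np.histogram`, which counts both samples on the same edges in one call. A toy sketch (the redshift values below are illustrative, not the catalogue's):

```python
import numpy as np

# Toy redshifts (illustrative values only)
z_redseq = np.array([0.02, 0.04, 0.07, 0.11, 0.12, 0.18])
z_uvup = np.array([0.04, 0.12])

edges = np.arange(0, 0.20, 0.05)          # same binning idea as above
n_rs, _ = np.histogram(z_redseq, bins=edges)
n_up, _ = np.histogram(z_uvup, bins=edges)

# Percentage of UV-upturn objects per bin, 0 where the red sequence is empty
ratio = np.where(n_rs > 0, 100.0 * n_up / np.maximum(n_rs, 1), 0.0)
print(ratio.tolist())  # → [50.0, 0.0, 50.0]
```

The bin centres follow from `0.5 * (edges[:-1] + edges[1:])`, replacing the `np.average((bins[i], bins[i-1]))` step.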
n_groups = bins.size
index = np.arange(1,n_groups,1)
sns.set_style('white')
plt.rcParams["axes.edgecolor"] = "0.15"
plt.rcParams["axes.linewidth"] = 1.
plt.rcParams['mathtext.fontset'] = u'stixsans'
plt.subplots(1,1, figsize=(8,5))
plt.bar(index[ratio_uvup_redseq != 0], ratio_uvup_redseq[ratio_uvup_redseq != 0], width=1., alpha=0.8,
        color='#a6611a', edgecolor='#a6611a')
for i in range(ratio_uvup_redseq[ratio_uvup_redseq != 0].size):
    plt.text(index[i+1]-0.2, ratio_uvup_redseq[i+1]+3.5, r"$\mathrm{\mathbf{\frac{%4d}{%4d}}}$" % (z_uv[i+1], z_rs[i+1]),
             fontsize=15)
plt.xticks(index, bins)
plt.ylabel("% of UV Upturn Galaxies", fontsize=15)
plt.xlabel("Redshift", fontsize=15)
plt.tick_params('both', labelsize='14')
plt.xlim(0.5, ratio_uvup_redseq[ratio_uvup_redseq != 0].size + 2.5)
plt.ylim(0, 75)
plt.savefig('../Figs/g2_barplot_uvred.pdf')
plt.savefig('../Figs/g2_barplot_uvred.png')
plt.show()
idx_not_bpt= np.where(np.logical_not((np.logical_not(np.isnan(np.log10(nii_flux[index_all]/h_alpha_flux[index_all]))))
*(np.logical_not(np.isinf(np.log10(nii_flux[index_all]/h_alpha_flux[index_all]))))
*(np.logical_not(np.isnan(np.log10(oiii_flux[index_all]/h_beta_flux[index_all]))))
*(np.logical_not(np.isinf(np.log10(oiii_flux[index_all]/h_beta_flux[index_all]))))
*(np.logical_not(h_alpha_flux[index_all]<=0))*(np.logical_not(h_beta_flux[index_all]<=0))
*(np.logical_not(nii_flux[index_all]<=0))*(np.logical_not(oiii_flux[index_all]<=0))))
print(idx_not_bpt)
print(idx_bpt_clean)
print((uv_class[index_all][idx_not_bpt]).size)
print((uv_class[index_all][idx_bpt_clean]).size)
print((uv_class[index_all][idx_not_bpt]).size + (uv_class[index_all][idx_bpt_clean]).size)
idx_not_whan = np.where(np.logical_not((np.logical_not(np.isnan(np.log10(nii_flux[index_all]/h_alpha_flux[index_all]))))
*(np.logical_not(np.isinf(np.log10(nii_flux[index_all]/h_alpha_flux[index_all]))))
*(np.logical_not(np.isnan(np.log10(h_alpha_ew[index_all]))))
*(np.logical_not(np.isinf(np.log10(h_alpha_ew[index_all]))))
*(np.logical_not(h_alpha_flux[index_all]<=0))*(np.logical_not(nii_flux[index_all]<=0))
*(np.logical_not(h_alpha_ew[index_all]<=0))))
print(np.array(idx_not_whan).size)
print(np.unique(uv_class[index_all][idx_not_whan]))
print((uv_class[index_all][idx_not_whan]).size + (uv_class[index_all][idx_whan_clean]).size)
print(list(uv_class[index_all][idx_not_bpt]).count('UV_UPTURN'))
print(list(uv_class[index_all][idx_not_bpt]).count('UV_WEAK'))
print(list(uv_class[index_all][idx_not_bpt]).count('RSF'))
print(list(uv_class[index_all][idx_not_whan]).count('UV_UPTURN'))
print(list(uv_class[index_all][idx_not_whan]).count('UV_WEAK'))
print(list(uv_class[index_all][idx_not_whan]).count('RSF'))
print(uv_class[index_all][idx_not_whan].size)
print(uv_class[index_all][idx_whan_clean].size)
emlines_not_bpt = []
for i in range(np.array(idx_not_bpt).size):
emlines_not_bpt.append('NA')
emlines_not_bpt = np.array(emlines_not_bpt)
emlines_not_whan = []
for i in range(np.array(idx_not_whan).size):
emlines_not_whan.append('NA')
emlines_not_whan = np.array(emlines_not_whan)
idxs_bpt_temp = idx_bpt_clean + idx_not_bpt
idxs_bpt = list(idxs_bpt_temp[0])+list(idxs_bpt_temp[1])
print(len(idxs_bpt))
bpt_catai_temp1 = pd.DataFrame(cataid[index_all][idx_bpt_clean])
bpt_class_temp1 = pd.DataFrame(bpt_class)
bpt_pos = pd.concat([bpt_catai_temp1, bpt_class_temp1], axis=1)
bpt_pos.columns = ['CATAID', 'BPT_CLASS']
bpt_catai_temp2 = pd.DataFrame(cataid[index_all][idx_not_bpt])
bpt_class_temp2 = pd.DataFrame(emlines_not_bpt)
bpt_neg = pd.concat([bpt_catai_temp2, bpt_class_temp2], axis=1)
bpt_neg.columns = ['CATAID', 'BPT_CLASS']
bpt_all = pd.concat([bpt_pos, bpt_neg])
print(np.unique(bpt_all['CATAID']).size)
whan_catai_temp1 = pd.DataFrame(cataid[index_all][idx_whan_clean])
whan_class_temp1 = pd.DataFrame(whan_class)
whan_pos = pd.concat([whan_catai_temp1, whan_class_temp1], axis=1)
whan_pos.columns = ['CATAID', 'WHAN_CLASS']
whan_catai_temp2 = pd.DataFrame(cataid[index_all][idx_not_whan])
whan_class_temp2 = pd.DataFrame(emlines_not_whan)
whan_neg = pd.concat([whan_catai_temp2, whan_class_temp2], axis=1)
whan_neg.columns = ['CATAID', 'WHAN_CLASS']
whan_all = pd.concat([whan_pos, whan_neg])
print(np.unique(whan_all['CATAID']).size)
my_df_temp = pd.DataFrame(my_data[index_all+1])
my_df_temp.columns = my_data[0,:]
my_df_temp2 = my_df_temp.set_index('CATAID').join(bpt_all.set_index('CATAID')).join(whan_all.set_index('CATAID'))
my_df_temp2['BPT_CLASS'].value_counts(dropna=False)
my_df_temp2['WHAN_CLASS'].value_counts(dropna=False)
my_df_temp2.to_csv('../Catalogue/Match07_smalldoubleclean_emlines.csv', index=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading Dataset
Step2: Redshift volume on which I intend to focus my analysis
Step3: Selecting the subsample
Step4: Characterizing the UV emission of the Galaxies
Step5: Plot 01
Step6: Characterizing the emission-lines profile of these objects
Step7: New indices for emission lines - cleaning the trash
Step8: Creating new arrays of clean h alpha, h beta, nii, oiii to simplify the notation
Step9: WHAN
Step10: All BPT settings
Step11: Y-axis
Step12: All WHAN settings
Step13: Y-axis
Step14: Plot 02
Step15: Plot 03
Step16: Analysing the impact of the cross-correlation between BPT, WHAN and UV classes
Step17: I now have both classifications
Step18: Printing the same information above for latex table
Step19: Estimating the amount of objects in each category of the BPT diagram
Step20: I now have both classifications
Step21: Now we are evaluating the impact of UV upturn sources in redshift
Step22: Now we are going to add these information into the whole dataset, in case we need it in the future
|
5,519
|
<ASSISTANT_TASK:>
Python Code:
def word_count(document, search_term):
    """Count how many times search_term appears in document."""
words = document.split()
answer = 0
for word in words:
if word == search_term:
answer += 1
return answer
def nearest_square(limit):
    """Find the largest square number smaller than limit."""
answer = 0
while (answer+1)**2 < limit:
answer += 1
return answer**2
egg_count = 0
def buy_eggs():
egg_count += 12 # purchase a dozen eggs
# buy_eggs()
egg_count = 0
def buy_eggs():
return egg_count + 12
egg_count = buy_eggs()
print(egg_count)
egg_count = buy_eggs()
print(egg_count)
# list of numbers of type Integer
numbers = [1, 2, 3, 4, 5]
print("List :", numbers)
print("Second element :", numbers[1]) ## 2
print("Length of list :",len(numbers)) ## 5
print() # Empty line
# list of strings
colors = ['red', 'blue', 'green']
print("List :", colors)
print ("First color :", colors[0]) ## red
print ("Third color :", colors[2]) ## green
print ("Last color :", colors[-1]) ## green
print ("Second last color :", colors[-2]) ## blue
print ("Length of list :",len(colors)) ## 3
print() # Empty line
# list with multiple variable types
me = ['Shantanu Kamath', 'Computer Science', 20, 1000000]
print("List :", me)
print("Fourth element :", me[3]) ## 1000000
print("Length of list :", len(me)) ## 4
myList = [1,2,3,4]
# Indexing
A = myList[2]
print(A)
# Repititoin
A = [A]*3
print(A)
# Concatenation
print(myList + A)
# Membership
print(1 in myList)
# Length
print(len(myList))
# Slicing [inclusive : exclusive]
print(myList[1:3])
# Leaving the exclusive parameter empty
print(myList[-3:])
# Creating sentence and list form of sentence
name = "Welcome to coding with Python v3.6"
words = ["Welcome", "to", "coding", "with", "Python", "v3.6"]
print(name[4])
print(words[4])
# This is okay
words[5] = "v2.7"
print(words)
# This is not
# name[5] = "d"
# print(name)
langs = ["Python", "Java", "C++", "C"]
languages = langs
langs.append("C#")
print(langs)
print(languages)
# list.append example
names = ['Hermione Granger', 'Ronald Weasley']
names.append('Harry Potter')
print("New list :", names) ## ['Hermione Granger', 'Ronald Weasley', 'Harry Potter']
# list.insert example
names = ['Ronald Weasley', 'Hermione Granger']
names.insert(1, 'Harry Potter')
print("New list :", names) ## ['Ronald Weasley', 'Harry Potter', 'Hermione Granger']
# list.extend example
MainChar = ['Ronald Weasley', 'Harry Potter', 'Hermione Granger']
SupChar = ['Neville Longbottom', 'Luna Lovegood']
MainChar.extend(SupChar)
print("Full list :", MainChar) ## ['Ronald Weasley', 'Harry Potter', 'Hermione Granger', 'Neville Longbottom', 'Luna Lovegood']
# list.index example
names = ['Ronald Weasley', 'Harry Potter', 'Hermione Granger']
index = names.index('Harry Potter')
print("Index of Harry Potter in list :",index) ## 1
# Throws a ValueError (Uncomment to see error.)
# index = names.index('Albus Dumbledore')
names = ['Ronald Weasley', 'Harry Potter', 'Hermione Granger']
names.remove('Harry Potter')  # list.remove returns None, so nothing to assign
print("Modified list :", names)
names = ['Ronald Weasley', 'Harry Potter', 'Hermione Granger']
popped = names.pop(1)  # pop returns the removed element, here 'Harry Potter'
print("Modified list :", names) ## ['Ronald Weasley', 'Hermione Granger']
alphabets = ['a', 'f','c', 'e','b', 'd']
alphabets.sort()
print ("Sorted list :", alphabets) ## ['a', 'b', 'c', 'd', 'e', 'f']
alphabets = ['a', 'b', 'c', 'd', 'e', 'f']
alphabets.reverse()
print("Reversed list :", alphabets) ## ['f', 'e', 'd', 'c', 'b', 'a']
# Using loops and list methods
squares = []
for x in range(10):
squares.append(x**2)
print("Squares :", squares) ## [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
exponents = []
for i in range(13):
exponents.append(2**i)
print("Exponents :", exponents) ## [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096]
evenSquares = []
for x in squares:
if x % 2 == 0:
evenSquares.append(x)
print("Even Squares :", evenSquares) ## [0, 4, 16, 36, 64]
# Using list comprehensions
squares = [x**2 for x in range(10)]
exponents = [2**i for i in range(13)]
evenSquares = [x for x in squares if x % 2 == 0]
print("Squares :", squares) ## [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
print("Exponents :", exponents) ## [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096]
print("Even Squares :", evenSquares) ## [0, 4, 16, 36, 64]
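The two steps above — mapping to squares, then filtering the even ones — can also be fused into a single comprehension. Since `x**2` is even exactly when `x` is, filtering the source values works too; a small illustrative sketch:

```python
# Map and filter in one comprehension: squares of even numbers below 10
evenSquaresFused = [x**2 for x in range(10) if x % 2 == 0]
print("Even Squares :", evenSquaresFused)  # → [0, 4, 16, 36, 64]
```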
# Using in to check if number is present in the list.
print(15 in [3,5,2,4,1])
print('Work' in 'Python Advanced Workshop')
# Using index to get position of the number if present in list.
# In case of lists, its important to remember that the index function will throw an error if the value isn't present in the list.
values = [3,5,2,4,1]
if 5 in values:
print("Value present at",values.index(5)) ## 1
else:
print("Value not present in list")
# Using find to get the index of the first occurrence of the word in a sentence.
sentence = "This be a string"
index = sentence.find("is")
if index == -1:
print("There is no 'is' here!")
else:
print("Found 'is' in the sentence at position "+str(index))
# Using index to find words in a list of words
sentence = "This be a string"
words = sentence.split(' ')
if 'is' in words:
print("Found 'is' in the list at position "+str(words.index('is')))
else:
print("There is no 'is' here!")
# Using sort() with a list.
values = [7, 4, 3, 6, 1, 2, 5]
print("Unsorted list :", values) ## [7, 4, 3, 6, 1, 2, 5]
newValues = values.sort()
print("New list :", newValues) ## None
print("Old list :", values) ## [1, 2, 3, 4, 5, 6, 7]
print()
# Using sorted() with a list.
values = [7, 4, 3, 6, 1, 2, 5]
print("Unsorted list :", values) ## [7, 4, 3, 6, 1, 2, 5]
newValues = sorted(values)
print("New list :", newValues) ## [1, 2, 3, 4, 5, 6, 7]
print("Old list :", values) ## [7, 4, 3, 6, 1, 2, 5]
# Using key in sorted
values = ['ccc', 'aaaa', 'd', 'bb']
print (sorted(values, key=len)) ## ['d', 'bb', 'ccc', 'aaaa']
# Remember case sensitivity : All upper case characters come before lower case character in an ascending sequence.
sentence = "This is a test string from Andrew"
print(sorted(sentence.split(), key=str.lower)) ## ['a', 'Andrew', 'from', 'is', 'string', 'test', 'This']
# Using reverse for ascending and descending
strs = ['aa', 'BB', 'zz', 'CC']
print (sorted(strs)) ## ['BB', 'CC', 'aa', 'zz'] (case sensitive)
print (sorted(strs, reverse=True)) ## ['zz', 'aa', 'CC', 'BB']
class Person:
pass # An empty block
p = Person()
print(p)
class Person:
def say_hi(self):
print('Hello, how are you?')
p = Person()
p.say_hi()
class Person:
def __init__(self, name):
self.name = name
def say_hi(self):
print('Hello, my name is', self.name)
p = Person('Shantanu')
p.say_hi()
class Robot:
## Represents a robot, with a name.
# A class variable, counting the number of robots
population = 0
def __init__(self, name):
## Initializes the data.
self.name = name
print("(Initializing {})".format(self.name))
# When this person is created, the robot
# adds to the population
Robot.population += 1
def die(self):
## I am dying.
print("{} is being destroyed!".format(self.name))
Robot.population -= 1
if Robot.population == 0:
print("{} was the last one.".format(self.name))
else:
print("There are still {:d} robots working.".format(
Robot.population))
def say_hi(self):
## Greeting by the robot. Yeah, they can do that.
print("Greetings, my masters call me {}.".format(self.name))
@classmethod
def how_many(cls):
## Prints the current population.
print("We have {:d} robots.".format(cls.population))
droid1 = Robot("R2-D2")
droid1.say_hi()
Robot.how_many()
droid2 = Robot("C-3PO")
droid2.say_hi()
Robot.how_many()
print("\nRobots can do some work here.\n")
print("Robots have finished their work. So let's destroy them.")
droid1.die()
droid2.die()
Robot.how_many()
# ( Uncomment to see Syntax error. )
# for i in range(10)
import math
anumber = int(input("Please enter an integer "))
# Give input as negative number and also see output from next code snippet
print(math.sqrt(anumber))
try:
print(math.sqrt(anumber))
except:
print("Bad Value for square root")
print("Using absolute value instead")
print(math.sqrt(abs(anumber)))
if anumber < 0:
raise RuntimeError("You can't use a negative number")
else:
print(math.sqrt(anumber))
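The bare except above catches everything, including unrelated bugs such as a mistyped name. A safer sketch (the helper name safe_sqrt is invented for illustration) catches only the exception math.sqrt actually raises:

```python
import math

def safe_sqrt(x):
    # math.sqrt raises ValueError for negative input; catch only that,
    # so any other error still surfaces normally.
    try:
        return math.sqrt(x)
    except ValueError:
        print("Bad value for square root; using absolute value instead")
        return math.sqrt(abs(x))

print(safe_sqrt(9))   ## 3.0
print(safe_sqrt(-9))  ## 3.0
```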
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Python Advanced
Step3: Since the variable answer here is defined within each function separately, you can reuse the same variable name, as each variable has its own scope.
Step4: In such situations, it's better to redefine the functions as below.
Step5: List Basics
Step6: Since lists are considered to be sequentially ordered, they support a number of operations that can be applied to any Python sequence.
Step7: Mutability
Step8: Passed by reference
Step9: List Methods
Step10: list.insert(index, element)
Step11: list.extend(list2)
Step12: list.index(element)
Step13: list.remove(element)
Step14: list.pop(index)
Step15: list.sort()
Step16: list.reverse()
Step17: Others methods include
Step18: These extend to more than one line. But by using list comprehensions you can bring it down to just one line.
Step19: Searching
Step20: Sometimes it can be important to get the position of the searched value. In that case, we can use the index method for lists and the find method for strings.
Step21: For more efficient Search Algorithms, look through the Algorithm Implementation section of this repository
Step22: Sorting using additional key
Step23: Basics on Class and OOP
Step24: Methods
Step25: The init
Step26: Object variables
Step27: How It Works
Step28: The other type of error, known as a logic error, denotes a situation where the program executes but gives the wrong result. This can be due to an error in the underlying algorithm or an error in your translation of that algorithm. In some cases, logic errors lead to very bad situations such as trying to divide by zero or trying to access an item in a list where the index of the item is outside the bounds of the list. In this case, the logic error leads to a runtime error that causes the program to terminate. These types of runtime errors are typically called exceptions.
Step29: We can handle this exception by calling the print function from within a try block. A corresponding except block catches the exception and prints a message back to the user in the event that an exception occurs. For example
Step30: It is also possible for a programmer to cause a runtime exception by using the raise statement. For example, instead of calling the square root function with a negative number, we could have checked the value first and then raised our own exception. The code fragment below shows the result of creating a new RuntimeError exception. Note that the program would still terminate but now the exception that caused the termination is something explicitly created by the programmer.
|
5,520
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from helpers.models import fit_model
from helpers.helpers import make_binary, class_info
# set random state for camparability
random_state = np.random.RandomState(0)
# read data
context = pd.read_csv('data/muestra_variables.csv')
# select variable columns
cols_select = context.columns[6:]
variables = context.ix[:,cols_select]
for c in ['no_se','uname','content','cve_mza']:
del variables[c]
# reclass intervalo as numerical
def intervalo_to_numbers(x):
equiv = {'mon':1,'tue':2,'wed':3,'thu':4,'fri':5,'sat':6,'sun':7}  # weekday codes, sunday = 7
interval = 0.16666*int(x.split('.')[1])
day = x.split('.')[0]
valor = equiv[day] + interval
return valor
reclass = variables['intervalo'].apply(intervalo_to_numbers)
# drop old 'intervalo' column and replace it with numerical values
del variables['intervalo']
variables = variables.join(reclass,how='inner')
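A quick standalone sanity check of this encoding, as a sketch reusing the same formula (the sample strings are made up):

```python
# Weekday code plus roughly one sixth of a day per interval step.
codes = {'mon': 1, 'tue': 2, 'wed': 3, 'thu': 4, 'fri': 5, 'sat': 6, 'sun': 7}

def interval_to_number(x):
    day, interval = x.split('.')
    return codes[day] + 0.16666 * int(interval)

print(interval_to_number('mon.3'))  # roughly 1.5
```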
data = variables.values
data_Y = data[:,0]
data_X = data[:,1:]
print("Initial label distribution")
class_info(data_Y)
data_X, data_Y = data_X[data_Y != 4], data_Y[data_Y != 4]
Y_pos_neu = make_binary(data_Y, set((1.,2.)))
Y_neg_neu = make_binary(data_Y, set((3.,2.)))
print("Label distribution after binarization")
print("Pos + Neu")
class_info(Y_pos_neu)
print()
print("Neg + Neu")
class_info(Y_neg_neu)
(X_train_pos_neu, X_test_pos_neu,
Y_train_pos_neu, Y_test_pos_neu) = train_test_split(data_X, Y_pos_neu,
test_size=0.4,
random_state=random_state)
(X_train_neg_neu, X_test_neg_neu,
Y_train_neg_neu, Y_test_neg_neu) = train_test_split(data_X, Y_neg_neu,
test_size=0.4,
random_state=random_state)
X_pos_neu_s = preprocessing.scale(X_train_pos_neu)
X_neg_neu_s = preprocessing.scale(X_train_neg_neu)
param_grid = {'C': [1, 10, 100, 1000], 'gamma': [0.01,0.001, 0.0001],
'kernel': ['rbf']}
metrics = ['f1','accuracy','average_precision','roc_auc','recall']
fitted_models_pos_neu = {}
for metric in metrics:
fitted_models_pos_neu[metric] = fit_model(X_pos_neu_s,Y_train_pos_neu,
param_grid,metric,6)
for metric, model in fitted_models_pos_neu.items():
print ("Using metric {}".format(metric))
print("Best parameters set found on development set:")
print()
print(model.best_params_)
print("Grid scores on development set:")
print()
for params, mean_score, scores in model.grid_scores_:
print("%0.3f (+/-%0.03f) for %r"
% (mean_score, scores.std() * 2, params))
print()
X_pos_neu_s_test = preprocessing.scale(X_test_pos_neu)
for metric, model in fitted_models_pos_neu.items():
this_estimator = fitted_models_pos_neu[metric].best_estimator_
this_score = this_estimator.score(X_pos_neu_s_test, Y_test_pos_neu)
y_pred = this_estimator.fit(X_pos_neu_s_test, Y_test_pos_neu).predict(X_pos_neu_s_test)
#conf_matrix = confusion_matrix(Y_test_pos_neu,y_pred)
df_confusion = pd.crosstab(Y_test_pos_neu, y_pred,
rownames=['Actual'],
colnames=['Predicted'], margins=True)
print ("Using metric {}".format(metric))
print("Validation score {}".format(this_score))
print("Confusion Matrix:")
print(df_confusion)
print()
Y_train_neg_neu = np.array([1 if val == 0 else 0 for val in Y_train_neg_neu])
fitted_models_neg_neu = {}
for metric in metrics:
fitted_models_neg_neu[metric] = fit_model(X_neg_neu_s,Y_train_neg_neu,
param_grid,metric,6)
for metric, model in fitted_models_neg_neu.items():
print ("Using metric {}".format(metric))
print("Best parameters set found on development set:")
print()
print(model.best_params_)
print("Grid scores on development set:")
print()
for params, mean_score, scores in model.grid_scores_:
print("%0.3f (+/-%0.03f) for %r"
% (mean_score, scores.std() * 2, params))
print()
X_neg_neu_s_test = preprocessing.scale(X_test_neg_neu)
for metric, model in fitted_models_neg_neu.items():
this_estimator = fitted_models_neg_neu[metric].best_estimator_
this_score = this_estimator.score(X_neg_neu_s_test, Y_test_neg_neu)
y_pred = this_estimator.fit(X_neg_neu_s_test, Y_test_neg_neu).predict(X_neg_neu_s_test)
#conf_matrix = confusion_matrix(Y_test_pos_neu,y_pred)
df_confusion = pd.crosstab(Y_test_neg_neu, y_pred,
rownames=['Actual'],
colnames=['Predicted'], margins=True)
print ("Using metric {}".format(metric))
print("Validation score {}".format(this_score))
print()
print("Confusion Matrix:")
print(df_confusion)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preprocessing
Step2: Get the data as an np.array and split it into predictor (X) and target (Y)
Step3: Remove the rows labeled 4 (unclear what they represent)
Step4: We perform two binarizations of the data: in one we merge the Pos and Neu classes (labels 1 and 2), and in the other we merge Neg and Neu (labels 3 and 2).
Step5: We split into test (40%) and training samples for both binarizations.
Step6: We rescale the training samples
Step7: Training with the unbalanced samples.
Step8: Now we fit the SVMs with different metrics, first for the Pos + Neu binarization
Step9: Now we evaluate on the test sample to obtain the validation scores
Step10: Now the same with the other binarization; to make the two cases comparable we flip the class labels
Step11: And its metrics on the test sample
5,521
|
<ASSISTANT_TASK:>
Python Code:
import csv
import yaml
reader = csv.reader(open("../data/questions.csv"))
question_1 = reader.next()
question_1
yaml.load(question_1[-1].replace(": u'", ": '"))
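Since yaml.load on untrusted strings can construct arbitrary objects, ast.literal_eval is a safer alternative when the field is really a Python-literal dict, as the u'...' prefixes suggest. A minimal sketch with a made-up field in the same style:

```python
import ast

raw = "{u'text': u'hello', u'answer': u'42'}"
parsed = ast.literal_eval(raw)  # evaluates literals only, never code
print(parsed['answer'])  ## 42
```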
reader = csv.reader(open("../data/train.csv"))
reader.next()
train_set = []
for row in reader:
train_set.append(row)
print len(train_set)
print len(train_set[0])
print train_set[0]
print train_set[-1]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read the first line and see the structure.
Step2: Yes, each line is converted into a list and it has 6 items as expected. However, how can we use the last item? It is a string, but it looks like a dictionary or JSON.
Step3: Now, you know how to convert csv files into other formats that you want. So, you can handle all the given files.
Step4: However, you know that train.csv has a header, which is not data we want to use. So, you might need to get rid of the first line. By the way, note that the reader returned by csv.reader is an iterator, not a list, so you can only iterate over it once. If you want to use it again, you need to call csv.reader again.
Step5: OK, now the reader is on the 2nd line of the csv file. Try to convert it into a list.
|
5,522
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
from scipy.stats import norm
import statsmodels.api as sm
import matplotlib.pyplot as plt
# Univariate Local Linear Trend Model
class LocalLinearTrend(sm.tsa.statespace.MLEModel):
def __init__(self, endog):
# Model order
k_states = k_posdef = 2
# Initialize the statespace
super(LocalLinearTrend, self).__init__(
endog, k_states=k_states, k_posdef=k_posdef,
initialization='approximate_diffuse',
loglikelihood_burn=k_states
)
# Initialize the matrices
self.ssm['design'] = np.array([1, 0])
self.ssm['transition'] = np.array([[1, 1],
[0, 1]])
self.ssm['selection'] = np.eye(k_states)
# Cache some indices
self._state_cov_idx = ('state_cov',) + np.diag_indices(k_posdef)
@property
def param_names(self):
return ['sigma2.measurement', 'sigma2.level', 'sigma2.trend']
@property
def start_params(self):
return [np.std(self.endog)]*3
def transform_params(self, unconstrained):
return unconstrained**2
def untransform_params(self, constrained):
return constrained**0.5
def update(self, params, *args, **kwargs):
params = super(LocalLinearTrend, self).update(params, *args, **kwargs)
# Observation covariance
self.ssm['obs_cov',0,0] = params[0]
# State covariance
self.ssm[self._state_cov_idx] = params[1:]
import requests
from io import BytesIO
from zipfile import ZipFile
# Download the dataset
ck = requests.get('http://staff.feweb.vu.nl/koopman/projects/ckbook/OxCodeAll.zip').content
zipped = ZipFile(BytesIO(ck))
df = pd.read_table(
BytesIO(zipped.read('OxCodeIntroStateSpaceBook/Chapter_2/NorwayFinland.txt')),
skiprows=1, header=None, sep='\s+', engine='python',
names=['date','nf', 'ff']
)
# Load Dataset
df.index = pd.date_range(start='%d-01-01' % df.date[0], end='%d-01-01' % df.iloc[-1, 0], freq='AS')
# Log transform
df['lff'] = np.log(df['ff'])
# Setup the model
mod = LocalLinearTrend(df['lff'])
# Fit it using MLE (recall that we are fitting the three variance parameters)
res = mod.fit(disp=False)
print(res.summary())
# Perform prediction and forecasting
predict = res.get_prediction()
forecast = res.get_forecast('2014')
fig, ax = plt.subplots(figsize=(10,4))
# Plot the results
df['lff'].plot(ax=ax, style='k.', label='Observations')
predict.predicted_mean.plot(ax=ax, label='One-step-ahead Prediction')
predict_ci = predict.conf_int(alpha=0.05)
predict_index = np.arange(len(predict_ci))
ax.fill_between(predict_index[2:], predict_ci.iloc[2:, 0], predict_ci.iloc[2:, 1], alpha=0.1)
forecast.predicted_mean.plot(ax=ax, style='r', label='Forecast')
forecast_ci = forecast.conf_int()
forecast_index = np.arange(len(predict_ci), len(predict_ci) + len(forecast_ci))
ax.fill_between(forecast_index, forecast_ci.iloc[:, 0], forecast_ci.iloc[:, 1], alpha=0.1)
# Cleanup the image
ax.set_ylim((4, 8));
legend = ax.legend(loc='lower left');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: To take advantage of the existing infrastructure, including Kalman filtering and maximum likelihood estimation, we create a new class which extends from statsmodels.tsa.statespace.MLEModel. There are a number of things that must be specified
Step3: Using this simple model, we can estimate the parameters from a local linear trend model. The following example is from Commandeur and Koopman (2007), section 3.4, modeling motor vehicle fatalities in Finland.
Step4: Since we defined the local linear trend model as extending from MLEModel, the fit() method is immediately available, just as in other statsmodels maximum likelihood classes. Similarly, the returned results class supports many of the same post-estimation results, like the summary method.
Step5: Finally, we can do post-estimation prediction and forecasting. Notice that the end period can be specified as a date.
|
5,523
|
<ASSISTANT_TASK:>
Python Code:
# Install datacommons_pandas
!pip install datacommons_pandas --upgrade --quiet
# Import Data Commons
import datacommons_pandas as dc
# Import other required libraries
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import pandas as pd
import json
# In the browser, we saw that the dcid for United States is country/USA
usa = 'country/USA'
# Get lists of states, counties, and cities within the United States, respectively.
states = dc.get_places_in([usa], 'State')[usa]
counties = dc.get_places_in([usa], 'County')[usa]
cities = dc.get_places_in([usa], 'City')[usa]
# Display the first 10 states
states[:10]
# Get StatVarObservations for states.
df_state = dc.build_multivariate_dataframe(states, ['Count_Person', 'Median_Age_Person'])
# Get StatVarObservations for counties.
df_county = dc.build_multivariate_dataframe(counties, ['Count_Person', 'Median_Age_Person'])
# Get StatVarObservations for cities.
df_city = dc.build_multivariate_dataframe(cities, ['Count_Person', 'Median_Age_Person'])
# View the first 5 rows of the state table.
df_state.head(5)
def add_name_col(df):
# Add a new column called name, where each value is the name for the place dcid in the index.
df['name'] = df.index.map(dc.get_property_values(df.index, 'name'))
# Keep just the first name, instead of a list of all names.
df['name'] = df['name'].str[0]
add_name_col(df_state)
df_state.head()
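The .str[0] trick above works because the pandas string accessor also indexes elementwise into list values; a minimal sketch with made-up data:

```python
import pandas as pd

names = pd.Series([['California', 'CA'], ['Texas']])
first = names.str[0]  # first element of each list
print(first.tolist())  ## ['California', 'Texas']
```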
# Filter for all cities that have at least one person
df_city = df_city[df_city['Count_Person'] >= 1]
def plot_data(title, pd_table):
"""Generate a scatter plot comparing median age and population count."""
plt.figure(figsize=(12, 8))
plt.title(title)
plt.xlabel('Median Age in Years')
plt.ylabel('Population Count (log scale)')
# Scatter plot the information
ax = plt.gca()
ax.set_yscale('log')
ax.scatter(pd_table['Median_Age_Person'], pd_table['Count_Person'], alpha=0.7)
# Generate the plot for state data
plot_data('Median Age vs. Population Count for States', df_state)
# Generate the plot for county data
plot_data('Median Age vs. Population Count for Counties', df_county)
# Generate the plot for city data
plot_data('Median Age vs. Population Count for Cities', df_city)
def plot_all_data(state_table, county_table, city_table):
plt.figure(figsize=(12, 8))
plt.title('Median Age vs. Population Count')
plt.xlabel('Median Age in Years')
plt.ylabel('Population Count (log scale)')
# Make things pretty
state_color = "#ffa600"
county_color = "#bc5090"
city_color = "#003f5c"
# Scatter plot the information
ax = plt.gca()
ax.set_yscale('log')
ax.scatter(state_table['Median_Age_Person'], state_table['Count_Person'], color=state_color, alpha=0.75)
ax.scatter(county_table['Median_Age_Person'], county_table['Count_Person'], color=county_color, alpha=0.5)
ax.scatter(city_table['Median_Age_Person'], city_table['Count_Person'], color=city_color, alpha=0.4)
# Create the legend
state_patch = mpatches.Patch(color=state_color, label='States')
county_patch = mpatches.Patch(color=county_color, label='Counties')
city_patch = mpatches.Patch(color=city_color, label='Cities')
plt.legend(handles=[state_patch, county_patch, city_patch])
# Plot all the data together.
plot_all_data(df_state, df_county, df_city)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example
Step2: Using get_places_in to Query Administrative Areas
Step3: Let's see what states are in the USA
Step4: Great! With the place dcids ready, we can build a dataframe with the population and median age for each state. To do that, we'll need to understand a little bit about querying statistical data.
Step5: We view the data we've queried for.
Step6: To get the name of places, we can use the get_property_values function
Step7: Cleaning and Plotting the Data
Step9: and finally, let's visualize our results.
Step10: We can also plot each administrative area granularity on the same plot to see how they relate.
|
5,524
|
<ASSISTANT_TASK:>
Python Code:
import cvxpy as cp
import numpy as np
# Ensure repeatably random problem data.
np.random.seed(0)
# Generate random data matrix A.
m = 10
n = 10
k = 5
A = np.random.rand(m, k).dot(np.random.rand(k, n))
# Initialize Y randomly.
Y_init = np.random.rand(m, k)
# Ensure same initial random Y, rather than generate new one
# when executing this cell.
Y = Y_init
# Perform alternating minimization.
MAX_ITERS = 30
residual = np.zeros(MAX_ITERS)
for iter_num in range(1, 1+MAX_ITERS):
# At the beginning of an iteration, X and Y are NumPy
# array types, NOT CVXPY variables.
# For odd iterations, treat Y constant, optimize over X.
if iter_num % 2 == 1:
X = cp.Variable(shape=(k, n))
constraint = [X >= 0]
# For even iterations, treat X constant, optimize over Y.
else:
Y = cp.Variable(shape=(m, k))
constraint = [Y >= 0]
# Solve the problem.
# increase max iters otherwise, a few iterations are "OPTIMAL_INACCURATE"
# (eg a few of the entries in X or Y are negative beyond standard tolerances)
obj = cp.Minimize(cp.norm(A - Y @ X, 'fro'))
prob = cp.Problem(obj, constraint)
prob.solve(solver=cp.SCS, max_iters=10000)
if prob.status != cp.OPTIMAL:
raise Exception("Solver did not converge!")
print('Iteration {}, residual norm {}'.format(iter_num, prob.value))
residual[iter_num-1] = prob.value
# Convert variable to NumPy array constant for next iteration.
if iter_num % 2 == 1:
X = X.value
else:
Y = Y.value
#
# Plot residuals.
#
import matplotlib.pyplot as plt
# Show plot inline in ipython.
%matplotlib inline
# Set plot properties.
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
font = {'weight' : 'normal',
'size' : 16}
plt.rc('font', **font)
# Create the plot.
plt.plot(residual)
plt.xlabel('Iteration Number')
plt.ylabel('Residual Norm')
plt.show()
#
# Print results.
#
print('Original matrix:')
print(A)
print('Left factor Y:')
print(Y)
print('Right factor X:')
print(X)
print('Residual A - Y * X:')
print(A - Y.dot(X))
print('Residual after {} iterations: {}'.format(iter_num, prob.value))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Perform alternating minimization
Step2: Output results
|
5,525
|
<ASSISTANT_TASK:>
Python Code:
# Setup feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.computer_vision.ex3 import *
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from matplotlib import gridspec
import learntools.computer_vision.visiontools as visiontools
plt.rc('figure', autolayout=True)
plt.rc('axes', labelweight='bold', labelsize='large',
titleweight='bold', titlesize=18, titlepad=10)
plt.rc('image', cmap='magma')
# Read image
image_path = '../input/computer-vision-resources/car_illus.jpg'
image = tf.io.read_file(image_path)
image = tf.io.decode_jpeg(image, channels=1)
image = tf.image.resize(image, size=[400, 400])
# Embossing kernel
kernel = tf.constant([
[-2, -1, 0],
[-1, 1, 1],
[0, 1, 2],
])
# Reformat for batch compatibility.
image = tf.image.convert_image_dtype(image, dtype=tf.float32)
image = tf.expand_dims(image, axis=0)
kernel = tf.reshape(kernel, [*kernel.shape, 1, 1])
kernel = tf.cast(kernel, dtype=tf.float32)
image_filter = tf.nn.conv2d(
input=image,
filters=kernel,
strides=1,
padding='VALID',
)
image_detect = tf.nn.relu(image_filter)
# Show what we have so far
plt.figure(figsize=(12, 6))
plt.subplot(131)
plt.imshow(tf.squeeze(image), cmap='gray')
plt.axis('off')
plt.title('Input')
plt.subplot(132)
plt.imshow(tf.squeeze(image_filter))
plt.axis('off')
plt.title('Filter')
plt.subplot(133)
plt.imshow(tf.squeeze(image_detect))
plt.axis('off')
plt.title('Detect')
plt.show();
# YOUR CODE HERE
image_condense = ____
# Check your answer
q_1.check()
#%%RM_IF(PROD)%%
image_condense = tf.nn.pool(
input=image_detect, # image in the Detect step above
window_shape=(2, 2),
pooling_type='MAX',
# we'll see what these do in the next lesson!
strides=(2, 2),
padding='SAME',
)
q_1.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_1.hint()
#_COMMENT_IF(PROD)_
q_1.solution()
plt.figure(figsize=(8, 6))
plt.subplot(121)
plt.imshow(tf.squeeze(image_detect))
plt.axis('off')
plt.title("Detect (ReLU)")
plt.subplot(122)
plt.imshow(tf.squeeze(image_condense))
plt.axis('off')
plt.title("Condense (MaxPool)")
plt.show();
REPEATS = 4
SIZE = [64, 64]
# Create a randomly shifted circle
image = visiontools.circle(SIZE, r_shrink=4, val=1)
image = tf.expand_dims(image, axis=-1)
image = visiontools.random_transform(image, jitter=3, fill_method='replicate')
image = tf.squeeze(image)
plt.figure(figsize=(16, 4))
plt.subplot(1, REPEATS+1, 1)
plt.imshow(image, vmin=0, vmax=1)
plt.title("Original\nShape: {}x{}".format(image.shape[0], image.shape[1]))
plt.axis('off')
# Now condense with maximum pooling several times
for i in range(REPEATS):
ax = plt.subplot(1, REPEATS+1, i+2)
image = tf.reshape(image, [1, *image.shape, 1])
image = tf.nn.pool(image, window_shape=(2,2), strides=(2, 2), padding='SAME', pooling_type='MAX')
image = tf.squeeze(image)
plt.imshow(image, vmin=0, vmax=1)
plt.title("MaxPool {}\nShape: {}x{}".format(i+1, image.shape[0], image.shape[1]))
plt.axis('off')
# View the solution (Run this code cell to receive credit!)
q_2.solution()
feature_maps = [visiontools.random_map([5, 5], scale=0.1, decay_power=4) for _ in range(8)]
gs = gridspec.GridSpec(1, 8, wspace=0.01, hspace=0.01)
plt.figure(figsize=(18, 2))
for i, feature_map in enumerate(feature_maps):
plt.subplot(gs[i])
plt.imshow(feature_map, vmin=0, vmax=1)
plt.axis('off')
plt.suptitle('Feature Maps', size=18, weight='bold', y=1.1)
plt.show()
# reformat for TensorFlow
feature_maps_tf = [tf.reshape(feature_map, [1, *feature_map.shape, 1])
for feature_map in feature_maps]
global_avg_pool = tf.keras.layers.GlobalAvgPool2D()
pooled_maps = [global_avg_pool(feature_map) for feature_map in feature_maps_tf]
img = np.array(pooled_maps)[:,:,0].T
plt.imshow(img, vmin=0, vmax=1)
plt.axis('off')
plt.title('Pooled Feature Maps')
plt.show();
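GlobalAvgPool2D is just a mean over the two spatial axes, one value per channel; a minimal NumPy sketch (shapes chosen arbitrarily):

```python
import numpy as np

# A fake batch of one 2x2 image with 3 channels.
fmap = np.arange(12, dtype=float).reshape(1, 2, 2, 3)
pooled = fmap.mean(axis=(1, 2))  # shape (1, 3), one average per channel
print(pooled)  ## [[4.5 5.5 6.5]]
```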
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.preprocessing import image_dataset_from_directory
# Load VGG16
pretrained_base = tf.keras.models.load_model(
'../input/cv-course-models/cv-course-models/vgg16-pretrained-base',
)
model = keras.Sequential([
pretrained_base,
# Attach a global average pooling layer after the base
layers.GlobalAvgPool2D(),
])
# Load dataset
ds = image_dataset_from_directory(
'../input/car-or-truck/train',
labels='inferred',
label_mode='binary',
image_size=[128, 128],
interpolation='nearest',
batch_size=1,
shuffle=True,
)
ds_iter = iter(ds)
car = next(ds_iter)
car_tf = tf.image.resize(car[0], size=[128, 128])
car_features = model(car_tf)
car_features = tf.reshape(car_features, shape=(16, 32))
label = int(tf.squeeze(car[1]).numpy())
plt.figure(figsize=(8, 4))
plt.subplot(121)
plt.imshow(tf.squeeze(car[0]))
plt.axis('off')
plt.title(["Car", "Truck"][label])
plt.subplot(122)
plt.imshow(car_features)
plt.title('Pooled Feature Maps')
plt.axis('off')
plt.show();
# View the solution (Run this code cell to receive credit!)
q_3.check()
# Line below will give you a hint
#_COMMENT_IF(PROD)_
q_3.hint()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run this cell to get back to where you left off in the previous lesson. We'll use a predefined kernel this time.
Step2: 1) Apply Pooling to Condense
Step3: Run the next cell to see what maximum pooling did to the feature!
Step4: We learned about how MaxPool2D layers give a convolutional network the property of translation invariance over small distances. In this exercise, you'll have a chance to observe this in action.
Step5: 2) Explore Invariance
Step6: Global Average Pooling
Step7: Since each of the $5 \times 5$ feature maps was reduced to a single value, global pooling reduced the number of parameters needed to represent these features by a factor of 25 -- a substantial savings!
Step8: Notice how we've attached a GlobalAvgPool2D layer after the pretrained VGG16 base. Ordinarily, VGG16 will produce 512 feature maps for each image. The GlobalAvgPool2D layer reduces each of these to a single value, an "average pixel", if you like.
Step9: 3) Understand the Pooled Features
|
5,526
|
<ASSISTANT_TASK:>
Python Code:
import sys
print sys.executable
%load_ext autoreload
%autoreload 2
%reload_ext autoreload
import sonnet as snt
import tensorflow as tf
import tflearn
import numpy as np
import dataset_utils
data = dataset_utils.load_data(filename="../synthetic_data/toy.pickle")
input_data_, output_mask_, target_data_ = dataset_utils.preprocess_data_for_rnn(data)
tflearn.init_graph()
n_hidden = 64
n_samples, n_timesteps, n_inputdim = input_data_.shape
_,_,n_outputdim = target_data_.shape
print n_timesteps
print n_inputdim
print n_outputdim
graph_to_use = tf.Graph()
with graph_to_use.as_default():
net = tflearn.input_data([None, n_timesteps, n_inputdim],dtype=tf.float32, name='input_data')
output_mask = tflearn.input_data([None, n_timesteps, n_outputdim], dtype=tf.float32, name='output_mask')
net = tflearn.lstm(net, n_hidden, return_seq=True, name="lstm_1")
net = tflearn.lstm(net, n_outputdim, return_seq=True, name="lstm_2")
net = tf.stack(net, axis=1)
preds = net
net = net * output_mask
net = tflearn.regression(net, optimizer='adam', learning_rate=0.001,
loss='mean_square')
model = tflearn.DNN(net, tensorboard_verbose=0)
model.fit([ input_data_, output_mask_], target_data_, validation_set=0.1)
tf.get_collection(tf.GraphKeys.INPUTS)
data = dataset_utils.load_data(filename="../synthetic_data/1000stud_100seq_expert.pickle")
input_data_, output_mask_, target_data_ = dataset_utils.preprocess_data_for_rnn(data)
n_samples, n_timesteps, n_inputdim = input_data_.shape
_,_,n_outputdim = target_data_.shape
print n_samples
print n_timesteps
print n_inputdim
print n_outputdim
graph_to_use = tf.Graph()
with graph_to_use.as_default():
net = tflearn.input_data([None, n_timesteps, n_inputdim],dtype=tf.float32, name='input_data')
output_mask = tflearn.input_data([None, n_timesteps, n_outputdim], dtype=tf.float32, name='output_mask')
net = tflearn.lstm(net, n_hidden, return_seq=True, name="lstm_1")
net = tflearn.lstm(net, n_outputdim, return_seq=True, name="lstm_2")
net = tf.stack(net, axis=1)
preds = net
net = net * output_mask
net = tflearn.regression(net, optimizer='adam', learning_rate=0.001,
loss='mean_square')
model = tflearn.DNN(net, tensorboard_verbose=2)
model.fit([ input_data_, output_mask_], target_data_, n_epoch=32, validation_set=0.1)
from dynamics_model import *
model = load_model(model_id="test_model", load_checkpoint=False, is_training=True)
train_data = (input_data_[:,:10,:], output_mask_[:,:10,:], target_data_[:,:10,:])
train(model, train_data)
predict(model, input_data_[:10, :10, :])
import dynamics_model_class as dm
dmodel = dm.DynamicsModel(model_id="test_model", load_checkpoint=False)
dmodel.train(train_data)
preds = dmodel.predict(input_data_[:10,:10, :])
generator_model = dm.DynamicsModel(model_id="test_model", timesteps=1, load_checkpoint=False)
preds = generator_model.predict(input_data_[:10,:1, :])
print preds[0]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load toy data set
Step2: Build RNN Model
|
5,527
|
<ASSISTANT_TASK:>
Python Code:
%%script bash
# Ignore this boring cell.
# It allows one to do C in Jupyter notebook.
cat >20170706_head.c <<EOF
#include <stdlib.h>
#include <stdio.h>
#define LINES (3)
#define COLUMNS (4)
void print_buf(char buf[LINES][COLUMNS])
{
for (int row = 0; row < LINES; row++) {
for (int column = 0; column < COLUMNS; column++)
putchar(buf[row][column]);
putchar('\n');
}
}
EOF
cat >20170706_tail.c <<EOF
int main(int argc, char *argv[])
{
char buf[LINES][COLUMNS];
for (int row = 0; row < LINES; row++)
for (int column = 0; column < COLUMNS; column++)
buf[row][column] = '@';
blank(buf, 1, 2);
print_buf(buf);
}
EOF
program_name="${PATH%%:*}/20170706_c_foo"
echo $program_name
cat >"$program_name" <<EOF
#!/usr/bin/env sh
cat 20170706_head.c >20170706_blank.c
cat >>20170706_blank.c
cat 20170706_tail.c >>20170706_blank.c
cc 20170706_blank.c -o 20170706_blank
./20170706_blank | tr ' ' '.'
EOF
chmod +x "$program_name"
%%script 20170706_c_foo
void blank(char buf[LINES][COLUMNS], int row, int column)
{
goto MIDDLE;
for ( ; row < LINES; row++)
for (column = 0; column < COLUMNS; column++)
MIDDLE: buf[row][column] = ' ';
}
%%script 20170706_c_foo
void blank(char buf[LINES][COLUMNS], int row, int column)
{
for ( ; row < LINES; row++) {
for ( ; column < COLUMNS; column++)
buf[row][column] = ' ';
column = 0;
}
}
%%script 20170706_c_foo
void blank_to_end_of_row(char buf[LINES][COLUMNS], int row, int column)
{
for ( ;column < COLUMNS; column++)
buf[row][column] = ' ';
}
void blank_row(char buf[LINES][COLUMNS], int row)
{
blank_to_end_of_row(buf, row, 0);
}
void blank(char buf[LINES][COLUMNS], int row, int column)
{
blank_to_end_of_row(buf, row++, column);
for ( ; row < LINES; row++)
blank_row(buf, row);
}
%%script 20170706_c_foo
void blank_to_end_of_row(char buf[LINES][COLUMNS], int row, int column)
{
for ( ;column < COLUMNS; column++)
buf[row][column] = ' ';
}
void blank(char buf[LINES][COLUMNS], int row, int column)
{
blank_to_end_of_row(buf, row++, column);
for ( ; row < LINES; row++)
blank_to_end_of_row(buf, row, 0);
}
LINES = 3
COLUMNS = 4
def foo(row=1, column=2):
buf = [
['@' for _ in range(COLUMNS)]
for _ in range(LINES)
]
blank(buf, row, column)
for row in buf:
print(''.join(row).replace(' ', '.'))
def blank(buf, row, column):
for row in range(row, LINES):
for column in range(column, COLUMNS):
buf[row][column] = ' '
column = 0
foo()
def blank_to_end_of_row(buf, row, column):
for column in range(column, COLUMNS):
buf[row][column] = ' '
def blank_row(buf, row):
blank_to_end_of_row(buf, row, 0)
def blank(buf, row, column):
blank_to_end_of_row(buf, row, column)
row += 1
for row in range(row, LINES):
blank_row(buf, row)
foo()
def blank_to_end_of_row(buf, row, column):
for column in range(column, COLUMNS):
buf[row][column] = ' '
def blank(buf, row, column):
blank_to_end_of_row(buf, row, column)
row += 1
for row in range(row, LINES):
blank_to_end_of_row(buf, row, 0)
foo()
%%script 20170706_c_foo
/* this is wrong: fails to clear beginning of following lines */
int blank(char buf[LINES][COLUMNS], int row_arg, int column_arg)
{
int row;
int column;
for (row = row_arg; row < LINES; row++)
for (column = column_arg; column < COLUMNS; column++)
buf[row][column] = ' ';
}
<END_TASK>
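As a sanity check on the rectangle-blanking logic above, here is a minimal self-contained Python version with an assertion on the expected output (constants mirror the notebook's `LINES`/`COLUMNS`; the single-function form is a sketch, not the notebook's exact code):

```python
LINES = 3
COLUMNS = 4

def blank(buf, row, column):
    # Blank everything from (row, column) to the end of the buffer:
    # the rest of the starting row, then all following rows in full.
    for r in range(row, LINES):
        start = column if r == row else 0
        for c in range(start, COLUMNS):
            buf[r][c] = ' '

buf = [['@'] * COLUMNS for _ in range(LINES)]
blank(buf, 1, 2)
rendered = [''.join(r).replace(' ', '.') for r in buf]
```

Starting at (1, 2) should leave row 0 untouched, clear the tail of row 1, and clear row 2 entirely.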
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The above is what the output should look like.
Step2: Now for some Python.
|
5,528
|
<ASSISTANT_TASK:>
Python Code:
u1 = ["green", "green", "blue", "green"]
a1 = set({("green", 3), ("blue", 1)})
assert a1 == set(urn_to_dict(u1).items())
u2 = ["red", "blue", "blue", "green", "yellow", "black", "black", "green", "blue", "yellow", "red", "green", "blue", "black", "yellow", "yellow", "yellow", "green", "blue", "red", "red", "blue", "red", "blue", "yellow", "yellow", "yellow"]
a2 = set({('black', 3), ('blue', 7), ('green', 4), ('red', 5), ('yellow', 8)})
assert a2 == set(urn_to_dict(u2).items())
import numpy.testing as t
c1 = {"blue": 3, "red": 1}
t.assert_allclose(chances_of_color(c1, "blue"), 0.75)
import numpy.testing as t
c2 = {"red": 934, "blue": 493859, "yellow": 31, "green": 3892, "black": 487}
t.assert_allclose(chances_of_color(c2, "green"), 0.007796427505443677)
import numpy.testing as t
c3 = {"red": 5, "blue": 5, "yellow": 5, "green": 5, "black": 5}
t.assert_allclose(chances_of_color(c3, "orange"), 0.0)
import numpy.testing as t
c1 = {"blue": 3, "red": 1}
t.assert_allclose(chances_of_not_color(c1, "blue"), 0.25)
import numpy.testing as t
c2 = {"red": 934, "blue": 493859, "yellow": 31, "green": 3892, "black": 487}
t.assert_allclose(chances_of_not_color(c2, "blue"), 0.010705063871811693)
import numpy.testing as t
c3 = {"red": 5, "blue": 5, "yellow": 5, "green": 5, "black": 5}
t.assert_allclose(chances_of_not_color(c3, "orange"), 1.0)
import numpy.testing as t
q1 = ["blue", "green", "red"]
t.assert_allclose(select_chances(q1, 2, "red"), 2/3)
q2 = ["red", "blue", "blue", "green", "yellow", "black", "black", "green", "blue", "yellow", "red", "green", "blue", "black", "yellow", "yellow", "yellow", "green", "blue", "red", "red", "blue", "red", "blue", "yellow", "yellow", "yellow"]
t.assert_allclose(select_chances(q2, 3, "red"), 0.4735042735042735)
import numpy.testing as t
q1 = ["blue", "green", "red"]
t.assert_allclose(select_chances_first(q1, 2, "red"), 2/6)
q2 = ["red", "blue", "blue", "green", "yellow", "black", "black", "green", "blue", "yellow", "red", "green", "blue", "black", "yellow", "yellow", "yellow", "green", "blue", "red", "red", "blue", "red", "blue", "yellow", "yellow", "yellow"]
t.assert_allclose(select_chances_first(q2, 3, "red"), 0.18518518518518517)
<END_TASK>
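The tests above exercise functions whose bodies are not shown. One possible set of implementations consistent with the asserted values (a sketch: the test values imply `select_chances` is the probability of drawing the color at least once in n draws without replacement, and `select_chances_first` is the probability that the n-th draw is the color, which by symmetry is simply the color's overall proportion):

```python
from collections import Counter
from math import comb

def urn_to_dict(urn):
    # Count occurrences of each color in the urn.
    return dict(Counter(urn))

def chances_of_color(counts, color):
    # Probability a single draw is `color`; 0.0 if the color is absent.
    return counts.get(color, 0) / sum(counts.values())

def chances_of_not_color(counts, color):
    return 1.0 - chances_of_color(counts, color)

def select_chances(urn, n, color):
    # P(color appears at least once in n draws without replacement).
    total, k = len(urn), urn.count(color)
    return 1.0 - comb(total - k, n) / comb(total, n)

def select_chances_first(urn, n, color):
    # P(the n-th draw is `color`); by symmetry this equals k/total for any n.
    return urn.count(color) / len(urn)

q1 = ["blue", "green", "red"]
q2 = ["red", "blue", "blue", "green", "yellow", "black", "black", "green",
     "blue", "yellow", "red", "green", "blue", "black", "yellow", "yellow",
     "yellow", "green", "blue", "red", "red", "blue", "red", "blue",
     "yellow", "yellow", "yellow"]
```

These reproduce the asserted values, e.g. `select_chances(q1, 2, "red")` is 1 − C(2,2)/C(3,2) = 2/3.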
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part B
Step2: Part C
Step3: Part D
Step4: Part E
|
5,529
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import math
import matplotlib.pyplot as plt
import numpy as np
import openmc
import openmc.mgxs
# 1.6 enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_nuclide('U235', 3.7503e-4)
fuel.add_nuclide('U238', 2.2625e-2)
fuel.add_nuclide('O16', 4.6007e-2)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide('H1', 4.9457e-2)
water.add_nuclide('O16', 2.4732e-2)
water.add_nuclide('B10', 8.0042e-6)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_nuclide('Zr90', 7.2758e-3)
# Create a materials collection and export to XML
materials = openmc.Materials((fuel, water, zircaloy))
materials.export_to_xml()
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(R=0.39218)
clad_outer_radius = openmc.ZCylinder(R=0.45720)
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-10.71, boundary_type='reflective')
max_x = openmc.XPlane(x0=+10.71, boundary_type='reflective')
min_y = openmc.YPlane(y0=-10.71, boundary_type='reflective')
max_y = openmc.YPlane(y0=+10.71, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-10., boundary_type='reflective')
max_z = openmc.ZPlane(z0=+10., boundary_type='reflective')
# Create a Universe to encapsulate a fuel pin
fuel_pin_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
fuel_pin_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
fuel_pin_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
fuel_pin_universe.add_cell(moderator_cell)
# Create a Universe to encapsulate a control rod guide tube
guide_tube_universe = openmc.Universe(name='Guide Tube')
# Create guide tube Cell
guide_tube_cell = openmc.Cell(name='Guide Tube Water')
guide_tube_cell.fill = water
guide_tube_cell.region = -fuel_outer_radius
guide_tube_universe.add_cell(guide_tube_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='Guide Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
guide_tube_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='Guide Tube Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
guide_tube_universe.add_cell(moderator_cell)
# Create fuel assembly Lattice
assembly = openmc.RectLattice(name='1.6% Fuel Assembly')
assembly.pitch = (1.26, 1.26)
assembly.lower_left = [-1.26 * 17. / 2.0] * 2
# Create array indices for guide tube locations in lattice
template_x = np.array([5, 8, 11, 3, 13, 2, 5, 8, 11, 14, 2, 5, 8,
11, 14, 2, 5, 8, 11, 14, 3, 13, 5, 8, 11])
template_y = np.array([2, 2, 2, 3, 3, 5, 5, 5, 5, 5, 8, 8, 8, 8,
8, 11, 11, 11, 11, 11, 13, 13, 14, 14, 14])
# Create universes array with the fuel pin and guide tube universes
universes = np.tile(fuel_pin_universe, (17,17))
universes[template_x, template_y] = guide_tube_universe
# Store the array of universes in the lattice
assembly.universes = universes
# Create root Cell
root_cell = openmc.Cell(name='root cell', fill=assembly)
# Add boundary planes
root_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z
# Create root Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(root_cell)
# Create Geometry and export to XML
geometry = openmc.Geometry(root_universe)
geometry.export_to_xml()
# OpenMC simulation parameters
batches = 50
inactive = 10
particles = 2500
# Instantiate a Settings object
settings = openmc.Settings()
settings.batches = batches
settings.inactive = inactive
settings.particles = particles
settings.output = {'tallies': False}
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-10.71, -10.71, -10, 10.71, 10.71, 10.]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings.source = openmc.Source(space=uniform_dist)
# Export to "settings.xml"
settings.export_to_xml()
# Plot our geometry
plot = openmc.Plot.from_geometry(geometry)
plot.pixels = (250, 250)
plot.color_by = 'material'
openmc.plot_inline(plot)
# Instantiate a 20-group EnergyGroups object
energy_groups = openmc.mgxs.EnergyGroups()
energy_groups.group_edges = np.logspace(-3, 7.3, 21)
# Instantiate a 1-group EnergyGroups object
one_group = openmc.mgxs.EnergyGroups()
one_group.group_edges = np.array([energy_groups.group_edges[0], energy_groups.group_edges[-1]])
# Instantiate a tally mesh
mesh = openmc.Mesh(mesh_id=1)
mesh.type = 'regular'
mesh.dimension = [17, 17, 1]
mesh.lower_left = [-10.71, -10.71, -10000.]
mesh.width = [1.26, 1.26, 20000.]
# Initialize an 20-energy-group and 6-delayed-group MGXS Library
mgxs_lib = openmc.mgxs.Library(geometry)
mgxs_lib.energy_groups = energy_groups
mgxs_lib.num_delayed_groups = 6
# Specify multi-group cross section types to compute
mgxs_lib.mgxs_types = ['total', 'transport', 'nu-scatter matrix', 'kappa-fission', 'inverse-velocity', 'chi-prompt',
'prompt-nu-fission', 'chi-delayed', 'delayed-nu-fission', 'beta']
# Specify a "mesh" domain type for the cross section tally filters
mgxs_lib.domain_type = 'mesh'
# Specify the mesh domain over which to compute multi-group cross sections
mgxs_lib.domains = [mesh]
# Construct all tallies needed for the multi-group cross section library
mgxs_lib.build_library()
# Create a "tallies.xml" file for the MGXS Library
tallies_file = openmc.Tallies()
mgxs_lib.add_to_tallies_file(tallies_file, merge=True)
# Instantiate a current tally
mesh_filter = openmc.MeshFilter(mesh)
current_tally = openmc.Tally(name='current tally')
current_tally.scores = ['current']
current_tally.filters = [mesh_filter]
# Add current tally to the tallies file
tallies_file.append(current_tally)
# Export to "tallies.xml"
tallies_file.export_to_xml()
# Run OpenMC
openmc.run()
# Load the last statepoint file
sp = openmc.StatePoint('statepoint.50.h5')
# Initialize MGXS Library with OpenMC statepoint data
mgxs_lib.load_from_statepoint(sp)
# Extract the current tally separately
current_tally = sp.get_tally(name='current tally')
# Set the half-lives (in seconds) for the delayed precursors; the decay constants below are in seconds^-1
precursor_halflife = np.array([55.6, 24.5, 16.3, 2.37, 0.424, 0.195])
precursor_lambda = math.log(2.0) / precursor_halflife
beta = mgxs_lib.get_mgxs(mesh, 'beta')
# Create a tally object with only the delayed group filter for the time constants
beta_filters = [f for f in beta.xs_tally.filters if type(f) is not openmc.DelayedGroupFilter]
lambda_tally = beta.xs_tally.summation(nuclides=beta.xs_tally.nuclides)
for f in beta_filters:
lambda_tally = lambda_tally.summation(filter_type=type(f), remove_filter=True) * 0. + 1.
# Set the mean of the lambda tally and reshape to account for nuclides and scores
lambda_tally._mean = precursor_lambda
lambda_tally._mean.shape = lambda_tally.std_dev.shape
# Set a total nuclide and lambda score
lambda_tally.nuclides = [openmc.Nuclide(name='total')]
lambda_tally.scores = ['lambda']
delayed_nu_fission = mgxs_lib.get_mgxs(mesh, 'delayed-nu-fission')
# Use tally arithmetic to compute the precursor concentrations
precursor_conc = beta.xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) * \
delayed_nu_fission.xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) / lambda_tally
# The difference is a derived tally which can generate Pandas DataFrames for inspection
precursor_conc.get_pandas_dataframe().head(10)
current_tally.get_pandas_dataframe().head(10)
# Extract the energy-condensed delayed neutron fraction tally
beta_by_group = beta.get_condensed_xs(one_group).xs_tally.summation(filter_type='energy', remove_filter=True)
beta_by_group.mean.shape = (17, 17, 6)
beta_by_group.mean[beta_by_group.mean == 0] = np.nan
# Plot the betas
plt.figure(figsize=(18,9))
fig = plt.subplot(231)
plt.imshow(beta_by_group.mean[:,:,0], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 1')
fig = plt.subplot(232)
plt.imshow(beta_by_group.mean[:,:,1], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 2')
fig = plt.subplot(233)
plt.imshow(beta_by_group.mean[:,:,2], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 3')
fig = plt.subplot(234)
plt.imshow(beta_by_group.mean[:,:,3], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 4')
fig = plt.subplot(235)
plt.imshow(beta_by_group.mean[:,:,4], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 5')
fig = plt.subplot(236)
plt.imshow(beta_by_group.mean[:,:,5], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 6')
<END_TASK>
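The precursor decay constants used in the cell above follow directly from the half-lives via λ = ln 2 / T½. A standalone numeric check of that arithmetic (values copied from the cell; no OpenMC required):

```python
import math

# Half-lives of the six delayed precursor groups, in seconds (from the cell above)
precursor_halflife = [55.6, 24.5, 16.3, 2.37, 0.424, 0.195]

# Decay constants in 1/s: lambda = ln(2) / T_half
precursor_lambda = [math.log(2.0) / t for t in precursor_halflife]
```

Shorter half-lives give larger decay constants, so the list is strictly increasing.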
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we need to define materials that will be used in the problem
Step2: With our three materials, we can now create a Materials object that can be exported to an actual XML file.
Step3: Now let's move on to the geometry. This problem will be a square array of fuel pins and control rod guide tubes for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem.
Step4: With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces.
Step5: Likewise, we can construct a control rod guide tube with the same surfaces.
Step6: Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch.
Step7: Next, we create a NumPy array of fuel pin and guide tube universes for the lattice.
Step8: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
Step9: We now must create a geometry that is assigned a root universe and export it to XML.
Step10: With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 2500 particles.
Step11: Let us also create a plot to verify that our fuel assembly geometry was created successfully.
Step12: As we can see from the plot, we have a nice array of fuel and guide tube pin cells with fuel, cladding, and water!
Step13: Next, we will instantiate an openmc.mgxs.Library for the energy and delayed groups with our the fuel assembly geometry.
Step14: Now, we can run OpenMC to generate the cross sections.
Step15: Tally Data Processing
Step16: The statepoint is now ready to be analyzed by the Library. We simply have to load the tallies from the statepoint into the Library and our MGXS objects will compute the cross sections for us under-the-hood.
Step17: Using Tally Arithmetic to Compute the Delayed Neutron Precursor Concentrations
Step18: Another useful feature of the Python API is the ability to extract the surface currents for the interfaces and surfaces of a mesh. We can inspect the currents for the mesh by getting the pandas dataframe.
Step19: Cross Section Visualizations
|
5,530
|
<ASSISTANT_TASK:>
Python Code:
a=5
a==6
i=6
i>5
i=2
i>5
i=2
i!=6
i=6
i!=6
"ACDC"=="Michael Jackson"
"ACDC"!="Michael Jackson"
'+'>'!'
'B'>'A'
'BA'>'AB'
age=19
#age=18
#expression that can be true or false
if age>18:
#within an indent, we have the expression that is run if the condition is true
print("you can enter" )
#The statements after the if statement will run regardless if the condition is true or false
print("move on")
age=18
#age=19
if age>18:
print("you can enter" )
else:
print("go see Meat Loaf" )
print("move on")
age=18
if age>18:
print("you can enter" )
elif age==18:
print("go see Pink Floyd")
else:
print("go see Meat Loaf" )
print("move on")
album_year = 1983
album_year=1970
if album_year > 1980:
print("Album year is greater than 1980")
print("")
print('do something..')
album_rating = 8.5
if album_rating > 8:
print("this album is Amazing !")
album_year = 1983
#album_year=1970
if album_year > 1980:
print("Album year is greater than 1980")
else:
print("less than 1980")
print("")
print('do something..')
album_rating = 7.5
if album_rating > 8:
print("this album is Amazing !")
else:
print("this album is ok")
album_year = 1980
if(album_year > 1979) and (album_year < 1990):
print ("Album year was in between 1981 and 1989")
print("")
print("Do Stuff..")
album_year = 1990
if(album_year < 1980) or (album_year > 1989):
print ("Album was not made in the 1980's")
else:
print("The Album was made in the 1980's ")
album_year = 1983
if not (album_year == 1984):
print ("Album year is not 1984")
album_year = 1983
if album_year < 1980 or album_year == 1989 or album_year == 1983:
print(album_year)
<END_TASK>
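The comparison and logical operators exercised above can be consolidated into a few self-checking lines. Python also allows chained comparisons, which the cells above do not show:

```python
album_year = 1983

# Comparison operators return booleans
assert (album_year == 1983) is True
assert ("BA" > "AB") is True  # the first differing character decides string order

# Logical operators combine conditions
in_eighties = (album_year > 1979) and (album_year < 1990)
outside = (album_year < 1980) or (album_year > 1989)

# Chained comparison: equivalent to (1979 < album_year) and (album_year < 1990)
assert 1979 < album_year < 1990
```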
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The result is false, as 5 does not equal 6.
Step2: If we set i=2 the condition is false as 2 is less than 5
Step3: Let's display some values for i in the figure. Set the values greater than 5 in green and the rest in red. The green region represents where the condition is True, the red where the statement is false. If the value of i is 2, we get False as the 2 falls in the red region. Similarly, if the value for i is 6 we get a True as the condition falls in the green region.
Step4: When i equals six the expression produces False.
Step5: We can use a number line as before, when the condition is True the corresponding numbers are marked in green and for where the condition is False the corresponding number is marked in red. If we set i equal to 2 the operator is true as 2 is in the green region. If we set i equal to 6 we get a False as the condition falls in the red region.
Step6: If we use the inequality operator, we get a True as the strings are not equal.
Step7: We can also perform inequality operations, the order of the letter depends on the ASCII value. The decimal value shown in the following table represents the order of the character
Step8: Similarly, the value for A is 101, and the value for B is 102 therefore
Step9: When there are multiple letters, the first letter takes precedence in ordering
Step10: <a id="ref2"></a>
Step11: Try uncommenting the age variable.
Step12: The process is demonstrated below, where each of the possibilities is illustrated on each side of the image. On the left is the case where the age is 17, we set the value of the variable age to 17, and this corresponds to the individual attending the Meatloaf concert. The right portion shows what happens when the individual is over 18, and the individual is granted access to the concert.
Step13: The three combinations are shown in the figure below. The left-most region shows what happens when the individual is less than 18 years of age. The central component shows when the individual is 18. The rightmost shows when the individual is over 18.
Step14: Feel free to change album_year value to other values -- you'll see that the result changes!
Step15: <div align="right">
Step16: Feel free to change the album_year value to other values -- you'll see that the result changes based on it!
Step17: <div align="right">
Step18: To determine if an album was released before 1980 or after 1990, we can use an or statement. Periods before 1981 or after 1989 satisfy this condition. This is demonstrated in the following figure, the color green in a and b represents periods where the statement is true. The color green in c represents where at least one of the conditions
Step19: The not statement checks if the statement is false
Step20: Write an if statement to determine if an album came out before 1980 or in the years
|
5,531
|
<ASSISTANT_TASK:>
Python Code:
import csv
import numpy as np
import scipy as sp
import pandas as pd
import sklearn as sk
import matplotlib.pyplot as plt
from IPython.display import Image
print('csv: {}'.format(csv.__version__))
print('numpy: {}'.format(np.__version__))
print('scipy: {}'.format(sp.__version__))
print('pandas: {}'.format(pd.__version__))
print('sklearn: {}'.format(sk.__version__))
Image(url='http://www.radford.edu/~rsheehy/Gen_flash/Tutorials/Linear_Regression/reg-tut_files/linreg3.gif')
filename = '/Users/jessicagronski/Downloads/bldgstories1.csv'
raw_data = open(filename, 'rt')
reader = csv.reader(raw_data, delimiter=',', quoting=csv.QUOTE_NONE)
x = list(reader)
data = np.array(x).astype('float')
# Load CSV with numpy
import numpy
raw_data = open(filename, 'rb')
data = numpy.loadtxt(raw_data, delimiter=",")
# Load CSV using Pandas
import pandas
colnames = ['year', 'height', 'stories']
data = pandas.read_csv(filename, names=colnames)
data = pandas.DataFrame(data, columns=colnames)
print('Dimensions:')
print(data.shape)
print('First six observations:')
print(data.head(6))
print('Correlation matrix:')
correlations = data.corr(method='pearson')
print(correlations)
pandas.set_option('precision', 3)
description = data.describe()
print(description)
from sklearn import linear_model
obj = linear_model.LinearRegression()
obj.fit(np.array(data.height.values.reshape(-1,1)), data.stories )#need this values.reshape(-1,1) to avoid deprecation warnings
print( obj.coef_, obj.intercept_ )
x_min, x_max = data.height.values.min() - .5, data.height.values.max() + .5 # for plotting
x_rng = np.linspace(x_min,x_max,200)
plt.plot(x_rng, x_rng * obj.coef_ + obj.intercept_, 'k')
plt.plot(data.height.values, data.stories.values,'ro', alpha = 0.5)
plt.show()
obj2 = linear_model.LinearRegression()
X = np.array( (data.height.values, data.year.values))
obj2.fit(X.transpose() , data.stories)
print(obj2.coef_, obj2.intercept_)
from mpl_toolkits.mplot3d import Axes3D
ax = plt.axes(projection = '3d')
#ax.plot(data.height.values, data.year.values , data.stories.values, 'bo')
ax.plot_surface(data.height.values, data.year.values, (np.dot(X.transpose(),obj2.coef_) \
+ obj2.intercept_), color='b')
ax.show()
#plt.close()
##### doesn't work - have the students try to solve it.
print(np.dot(X.transpose(),obj2.coef_).shape)
data.height.values.shape
<END_TASK>
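Step7 of the write-up mentions checking the residuals for normality, but no code for it appears in the notebook. A minimal sketch of the first half of that idea, using a hand-rolled least-squares fit on small made-up height/story data (the real notebook would use `data.height`/`data.stories`, and `scipy.stats.normaltest` or a Q-Q plot would be the usual next step):

```python
# Made-up (height, stories) pairs standing in for the CSV data
heights = [70.0, 100.0, 130.0, 160.0, 190.0]
stories = [5.0, 8.0, 10.0, 13.0, 15.0]
n = len(heights)

# Closed-form simple linear regression: slope = cov(x, y) / var(x)
mean_x = sum(heights) / n
mean_y = sum(stories) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(heights, stories)) / \
        sum((x - mean_x) ** 2 for x in heights)
intercept = mean_y - slope * mean_x

# Residuals of the fit; for OLS with an intercept, these sum to zero
residuals = [y - (slope * x + intercept) for x, y in zip(heights, stories)]
```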
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The easiest way to learn how regression works is by thinking about an example. Consider an imaginary dataset of buildings built in Denver containing three pieces of information for each building
Step2: But we can learn about the math later. Let's think about other interesting questions. Which would be better for predicting
Step3: Know Your Data
Step4: Remember we can access the five number summary + some using the describe function.
Step5: Regression Model
Step6: We show the data and the regression lines.
Step7: Check residuals for normality.
|
5,532
|
<ASSISTANT_TASK:>
Python Code:
imcontroller = ImageController(demo.image_info)
demo.image_info.items()
imcontroller.generate_image_obj()
imcontroller.channels
imcontroller
print('Numerical Labels that index image: ')
print(imcontroller.image_obj.labels)
print('Channels: ')
print(imcontroller.channels)
print('Image types: ' + str(imcontroller.feat_data.keys()))
print ('Features: ')
print(imcontroller.feat_data['IF'].keys())
print('ER features: ')
print(imcontroller.feat_data['IF']['intensity']['ER'].keys())
xy = imcontroller.feat_data['IF']['xy']
x = xy[:,0]
a = ((xy[:, 0] > 100) & (xy[:, 1] < 400))
print sum(a)
# print np.where([xy[:, 0] > 100], [xy[:, 1] < 300])
# print xy[:,0]
b = [True, False, True]
def generate_patches():
num_patches = 5
patches = np.zeros((num_patches, 3, 256, 256))
for i in xrange(num_patches):
a = np.ones((3, 256, 256))
patches[i] = a
return patches, np.ones(num_patches)
patches = []
labels = []
for i in xrange(5):
pats, labs =generate_patches()
patches.append(pats)
labels.append(labs)
print np.vstack(patches).shape
print np.hstack(labels).shape
# X = np.vstack(zip(*patches_labels)[0])
# a = np.ones((3, 256, 256))
# b = np.ones((3, 256, 256))
# aug_patches = []
# aug_patches2 = []
# aug_patches.append(a)
# aug_patches.append(b)
# print type(aug_patches)
# aug_patches2.append(a)
# aug_patches2.append(b)
# aug_patches = np.stack(aug_patches, axis=0)
# aug_patches2 = np.stack(aug_patches2, axis=0)
# print aug_patches.shape
# print aug_patches2.shape
# c = []
# c.append((aug_patches,1))
# c.append((aug_patches2,2))
# cz = zip(*c)
# print np.vstack(cz[0]).shape
# d = [(1, 3),(2, 4)]
# e = zip(*d)
# print e[0]
# f = np.stack(e[0], axis=0)
# print f
# print f.shape
ER = imcontroller.get_image_channel(('IF','ER'))
flash.show_channel(ER) # Note: this is the unadjusted image
print imcontroller.image_obj
flash.show_first_three_channels(imcontroller.image_obj)
patch_shape = (3, 256, 256)
imcontroller.patchify(patch_shape=patch_shape, overlap=64)
print imcontroller.patches.shape
patches = imcontroller.patches[0].reshape((-1,) + patch_shape)
print patches.shape
print np.stack(patches, axis=0).shape
class_num = 2
labels = np.ones(60) * int(class_num)
labels2 = np.ones(60) * int(class_num)
l = [labels, labels2]
print np.hstack(l)
print
plt.imshow(imcontroller.patches[0,1,0,0,:, :])
# plt.show()
plt.imshow(patches[8, 0,:, :])
# plt.show()
for k in xrange(64):
for i in xrange(8):
for j in xrange(8):
a = sum(sum(patches[k,0,:, :] == imcontroller.patches[0,i,j,0,:, :]))
if a > 60000:
print (i, j, k)
print'(%d, %d, %d)' % ((k / imcontroller.patches.shape[1]), (k % imcontroller.patches.shape[2]), k)
print 'found match'
print 'done'
p1 = patches
p2 = patches
p3 = patches
p = [p1, p2, p3]
# flash.show_subset_patches(imcontroller.patches, (8,8), 0)
imcontroller.patchify(patch_shape=(512, 512, 3),overlap=200)
flash.show_subset_patches(imcontroller.patches, (4,4), 0)
<END_TASK>
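The `patchify` calls above tile the image into overlapping patches. The core index arithmetic can be sketched in pure Python; this is a simplified version that assumes stride = patch_size − overlap with any leftover pixels dropped (the real method's exact stride and padding rules are not shown in the notebook):

```python
def patch_starts(length, patch, overlap):
    # Start offsets of each patch along one axis
    stride = patch - overlap
    return list(range(0, length - patch + 1, stride))

def patchify_2d(img, patch, overlap):
    # img is a list of rows; returns a list of patch-by-patch sub-grids
    patches = []
    for r0 in patch_starts(len(img), patch, overlap):
        for c0 in patch_starts(len(img[0]), patch, overlap):
            patches.append([row[c0:c0 + patch] for row in img[r0:r0 + patch]])
    return patches

# Toy 10x10 image whose pixel value encodes its (row, col) position
img = [[r * 10 + c for c in range(10)] for r in range(10)]
tiles = patchify_2d(img, patch=4, overlap=2)
```

With length 10, patch 4, and overlap 2, each axis yields starts 0, 2, 4, 6, so 4 × 4 = 16 tiles.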
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Normally the image controller would be passed images directly, but for now, we have to load them from disk by calling generate_image_obj
Step2: From above, it is possible to inspect basic properties such as whether or not the ImageController's image is running in local or Spark mode, its type and shape. (3, 2048, 2048) indicates that our image is 2048 x 2048 with 3 channels. Let's look at some of the other information housed by the ImageController
Step3: <b>Question</b>
Step4: Showing a pseudo-colored image
Step5: Patches
|
5,533
|
<ASSISTANT_TASK:>
Python Code:
import urllib, time, hashlib
hosts = ['http://www.scikit-learn.org', 'http://www.numpy.org', 'http://www.scipy.org', 'http://pandas.pydata.org']
start = time.time()
for host in hosts:
f = urllib.request.urlopen(host)
print(f.read().upper()[:20], host)
print("Elapsed time: {}".format(time.time() - start))
import threading
def print_page(host):
f = urllib.request.urlopen(host)
print(f.read().upper()[:20], host)
# generate jobs
jobs = list()
for host in hosts:
jobs.append(threading.Thread(target=print_page, args=(host,)))
start = time.time()
# start jobs
for job in jobs:
job.start()
# wait for jobs to finish
for job in jobs:
job.join()
print("Elapsed time: {}".format(time.time() - start))
# This code runs only with Python 2
from twisted.internet import defer, task
from twisted.web.client import getPage
start = time.time()
def print_capitalized(html, host):
print(html.upper()[:20], host)
def print_elapsed_time(result):
print("Elapsed time: {}".format(time.time() - start))
def main(react, hosts):
dlist = list()
for host in hosts:
d = getPage(host)
d.addCallback(print_capitalized, host)
dlist.append(d)
return defer.gatherResults(dlist).addCallback(print_elapsed_time)
task.react(main, (hosts,))
import asyncio
import aiohttp
start = time.time()
@asyncio.coroutine
def print_page(host):
response = yield from aiohttp.request('GET', host)
html = yield from response.read()
print(html.upper()[:20], host)
tasks = list()
for host in hosts:
tasks.append(print_page(host))
loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait(tasks))
print("Elapsed time: {}".format(time.time() - start))
def run(self):
self._client.on_connect.add_observer(self._on_connect)
self._client.on_disconnect.add_observer(self._on_disconnect)
loop = asyncio.get_event_loop()
# Set alarm
wait_time = self._get_secs_to_next_alarm()
loop.call_later(wait_time, self.set_alarm)
loop.run_until_complete(self._client.connect())
@asyncio.coroutine
def send_alarm(self):
_logger.info('Sending reminder...')
msg = 'Hey buddy, did you take your insulin?'
conv = self._get_conv_with_recipient()
yield from self._send_message(conv, msg)
self._asked = True
repeat_timeout = 60*20
yield from asyncio.sleep(repeat_timeout)
for _ in range(2):
if not self._asked: break
yield from self._send_message(conv, 'How about now?')
yield from asyncio.sleep(repeat_timeout)
else:
if self._asked:
yield from self._send_message(conv, "I'm giving up!")
wait_time = self._get_secs_to_next_alarm()
loop = asyncio.get_event_loop()
loop.call_later(wait_time, self.set_alarm)
self._asked = False
@asyncio.coroutine
def _on_event(self, conv_event):
_logger.info("Message received...")
conv = self._conv_list.get(conv_event.conversation_id)
user = conv.get_user(conv_event.user_id)
if isinstance(conv_event, hangups.ChatMessageEvent):
text = conv_event.text.strip()
else:
text = ''
from_recipient = self._recipient_id == user.id_
is_positive = text.lower().startswith('yes')
if from_recipient and is_positive and self._asked:
_logger.info("Positive reply received")
self._asked = False
yield from self._send_message(conv, "That's great!")
<END_TASK>
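The `@asyncio.coroutine` / `yield from` style above is the pre-Python-3.5 spelling. The same concurrent fan-out pattern reads much cleaner in modern `async`/`await`; here the network I/O is simulated with `asyncio.sleep` so the sketch runs offline (host names are placeholders):

```python
import asyncio
import time

async def fetch(host, delay=0.2):
    # Stand-in for an aiohttp/urllib request: pretend it takes `delay` seconds
    await asyncio.sleep(delay)
    return host.upper()

async def main(hosts):
    # All "requests" run concurrently; total time ~ max(delay), not the sum
    return await asyncio.gather(*(fetch(h) for h in hosts))

hosts = ["scikit-learn.org", "numpy.org", "scipy.org"]
start = time.time()
results = asyncio.run(main(hosts))
elapsed = time.time() - start
```

Three sequential 0.2 s waits would take ~0.6 s; gathered concurrently they finish in roughly 0.2 s.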
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Multi-threaded
Step2: Asynchronous
Step3: Some thoughts about callbacks
Step4: What should our MedBot do?
Step5: Sending the alarm
Step6: Handling a message
|
5,534
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.datasets import load_iris
from sklearn.pipeline import make_pipeline
from sklearn import preprocessing
from sklearn import model_selection
from sklearn import svm
# load iris data
iris = load_iris()
X = iris.data
y = iris.target
# Create a pipeline that scales the data then trains a support vector classifier
classifier_pipeline = make_pipeline(preprocessing.StandardScaler(), svm.SVC(C=1))
scores = model_selection.cross_val_score(classifier_pipeline, X, y, cv=3)
scores
scores.mean()
<END_TASK>
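`cross_val_score(..., cv=3)` above partitions the 150 iris samples into three folds and fits the pipeline three times, each fold serving once as the test set. The fold bookkeeping can be sketched without sklearn (sklearn's actual `KFold` may shuffle or stratify differently; this only shows the contiguous-split arithmetic):

```python
def kfold_indices(n_samples, cv):
    # Split 0..n_samples-1 into cv contiguous folds of near-equal size
    base, extra = divmod(n_samples, cv)
    folds, start = [], 0
    for i in range(cv):
        size = base + (1 if i < extra else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = kfold_indices(150, 3)
```

Each fold is disjoint from the others and together they cover every sample exactly once.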
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create classifier pipeline
Step2: Cross validation
Step3: Model evaluation
|
5,535
|
<ASSISTANT_TASK:>
Python Code:
from causalinfo import *
from numpy import log2
from numpy.testing import assert_allclose
# You only need this if you want to draw pretty pictures of the networks
from nxpd import draw, nxpdParams
nxpdParams['show'] = 'ipynb'
w, x, y, z = make_variables("W X Y Z", 2)
wdist = UniformDist(w)
eq1 = Equation('BR', [w], [x, y], equations.branch_same_)
eq2 = Equation('XOR', [x, y], [z], equations.xor_)
# Build the graph
eg3 = CausalGraph([eq1, eq2])
draw(eg3.full_network)
eg3
m_eg3 = MeasureCause(eg3, wdist)
# See the table on p29
assert m_eg3.mutual_info(x, y) == 1
assert m_eg3.mutual_info(x, y, w) == 0
assert m_eg3.mutual_info(w, z, y) == 0
assert m_eg3.causal_flow(x, y) == 0
assert m_eg3.causal_flow(x, y, w) == 0
assert m_eg3.causal_flow(w, z, y) == 1
def copy_first_(i1, i2, o1):
o1[i1] = 1.0
eq2 = Equation('COPY_FIRST', [x, y], [z], copy_first_)
eg51 = CausalGraph([eq1, eq2])
draw(eg51.full_network)
m_eg51 = MeasureCause(eg51, wdist)
# See paragraph at top of page 30
assert m_eg51.mutual_info(x, z, y) == 0
assert m_eg51.causal_flow(x, z, y) == 1
assert m_eg51.causal_flow(x, z) == 1
def random_sometimes_(i1, i2, o1):
if i1 != i2:
o1[:] = .5
else:
equations.xor_(i1, i2, o1)
eq2 = Equation('RAND', [x, y], [z], random_sometimes_)
eg52 = CausalGraph([eq1, eq2])
draw(eg52.full_network)
m_eg52 = MeasureCause(eg52, wdist)
# See pg 30
expected = 3.0 / 4.0 * log2(4.0 / 3.0)
assert_allclose(m_eg52.causal_flow(x, z, y), expected)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Ay & Polani, Example 3
Step2: Ay & Polani, Example 5.1
Step3: Ay & Polani, Example 5.2
|
5,536
|
<ASSISTANT_TASK:>
Python Code:
names = {}
for node in graph:
for edge in node:
if edge.guid == "169a81aefca74e92b45e3fa03c7021df":
value = node[edge].value
if value in names:
raise ValueError('name: "{}" defined twice'.format(value))
names[value] = node
names["ctor"]
def name_to_guid(name):
if name not in names:
return None
node = names[name]
if not hasattr(node, "guid"):
return None
return node.guid
import logging

from pikov.sprite import Bitmap, Clip, Frame, FrameList, Point, Resource, Sprite, Transition
resource = Resource(graph, guid=name_to_guid("spritesheet"))
spritesheet = []
for row in range(16):
for column in range(16):
sprite_number = row * 16 + column
bitmap_name = "bitmap[{}]".format(sprite_number)
bitmap = Bitmap(graph, guid=name_to_guid(bitmap_name))
spritesheet.append(bitmap)
def find_nodes(graph, ctor, cls):
nodes = set()
# TODO: With graph formats that have indexes, there should be a faster way.
for node in graph:
if node[names["ctor"]] == ctor:
node = cls(graph, guid=node.guid)
nodes.add(node)
return nodes
def find_frames(graph):
return find_nodes(graph, names["frame"], Frame)
def find_transitions(graph):
return find_nodes(graph, names["transition"], Transition)
def find_absorbing_frames(graph):
transitions = find_transitions(graph)
target_frames = set()
source_frames = set()
for transition in transitions:
target_frames.add(transition.target.guid)
source_frames.add(transition.source.guid)
return target_frames - source_frames # In but not out. Dead end!
MICROS_12_FPS = int(1e6 / 12) # 12 frames per second
MICROS_24_FPS = int(1e6 / 24)
def connect_frames(graph, transition_name, source, target):
transition = Transition(graph, guid=name_to_guid(transition_name))
transition.name = transition_name
transition.source = source
transition.target = target
return transition
def make_clip(graph, name, sprite_numbers, loop=False, duration=MICROS_12_FPS, guid=None):
clip_name = "clip[{}]".format(name)
clip_guid = guid or name_to_guid(clip_name)
clip = Clip(graph, guid=clip_guid)
clip.name = clip_name
if clip.frames:
logging.warning("Clip already has frames")
return clip
frame_list_name = "framelist[{}, 0]".format(name)
end_of_clip = FrameList(graph, guid=name_to_guid(frame_list_name))
clip.frames = end_of_clip
clip.frames.name = "framelist[{}, 0]".format(name)
previous_sprite_name = None
previous_frame = None
for sequence, sprite_number in enumerate(sprite_numbers):
sprite_name = "{}[{}]".format(name, sequence)
frame_name = "frames[{}]".format(sprite_name)
frame = Frame(graph, guid=name_to_guid(frame_name))
frame.name = frame_name
frame.bitmap = spritesheet[sprite_number]
frame.duration_microsections = duration
if previous_sprite_name:
transition_name = "transitions[{}, {}]".format(
previous_sprite_name,
sprite_name)
connect_frames(graph, transition_name, previous_frame, frame)
previous_sprite_name = sprite_name
previous_frame = frame
frame_list_name = "framelist[{}, {}]".format(name, sequence + 1)
end_of_clip = end_of_clip.append(frame, guid=name_to_guid(frame_list_name))
end_of_clip.name = frame_list_name
if loop:
transition_name = "transitions[{}, {}]".format(
previous_sprite_name,
"{}[0]".format(name))
connect_frames(graph, transition_name, previous_frame, clip.frames.head)
return clip
sit = make_clip(graph, "sit", [0], loop=True)
#sit[0].bitmap.image
sit
sit_to_stand = make_clip(graph, "sit_to_stand", [1,2,3,4])
sit_to_stand
stand_waggle = make_clip(graph, "stand_waggle", [26,4], loop=True)
stand_waggle
stand_to_sit = make_clip(graph, "stand_to_sit", [57, 58, 59, 60, 61])
stand_to_sit
origin = Point(graph, guid=name_to_guid("origin"))
origin.name = "origin"
origin.x = 0
origin.y = 0
origin
sprite = Sprite(graph, guid=name_to_guid("gamekitty"))
graph._properties["root"] = sprite.guid
sprite.name = "gamekitty"
sprite.position = origin
sprite.frame = sit[0]
sprite
sit_paw = make_clip(graph, "sit_paw", [62, 63, 64, 65])
sit_paw
sit_to_crouch = make_clip(graph, "sit_to_crouch", [69, 70, 71])
sit_to_crouch
crouch = make_clip(graph, "crouch", [72])
crouch
crouch_to_sit = make_clip(graph, "crouch_to_sit", [75, 76, 77])
crouch_to_sit
find_absorbing_frames(graph)
graph.save()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pikov Classes
Step2: Gamekitty
Step3: Create frames for each "clip"
Step4: Create the root node
Step5: More clips and transitions
|
5,537
|
<ASSISTANT_TASK:>
Python Code:
# Update the PIP version.
!python -m pip install --upgrade pip
!pip install kfp==1.1.1
!pip install kubeflow-katib==0.10.1
from IPython.display import display_html
display_html("<script>Jupyter.notebook.kernel.restart()</script>",raw=True)
import kfp
import kfp.dsl as dsl
from kfp import components
from kubeflow.katib import ApiClient
from kubeflow.katib import V1beta1ExperimentSpec
from kubeflow.katib import V1beta1AlgorithmSpec
from kubeflow.katib import V1beta1EarlyStoppingSpec
from kubeflow.katib import V1beta1EarlyStoppingSetting
from kubeflow.katib import V1beta1ObjectiveSpec
from kubeflow.katib import V1beta1ParameterSpec
from kubeflow.katib import V1beta1FeasibleSpace
from kubeflow.katib import V1beta1TrialTemplate
from kubeflow.katib import V1beta1TrialParameterSpec
# Experiment name and namespace.
experiment_name = "median-stop"
experiment_namespace = "anonymous"
# Trial count specification.
max_trial_count = 18
max_failed_trial_count = 3
parallel_trial_count = 2
# Objective specification.
objective=V1beta1ObjectiveSpec(
type="maximize",
goal= 0.99,
objective_metric_name="Validation-accuracy",
additional_metric_names=[
"Train-accuracy"
]
)
# Algorithm specification.
algorithm=V1beta1AlgorithmSpec(
algorithm_name="random",
)
# Early Stopping specification.
early_stopping=V1beta1EarlyStoppingSpec(
algorithm_name="medianstop",
algorithm_settings=[
V1beta1EarlyStoppingSetting(
name="min_trials_required",
value="2"
)
]
)
# Experiment search space.
# In this example we tune learning rate, number of layer and optimizer.
# Learning rate has bad feasible space to show more early stopped Trials.
parameters=[
V1beta1ParameterSpec(
name="lr",
parameter_type="double",
feasible_space=V1beta1FeasibleSpace(
min="0.01",
max="0.3"
),
),
V1beta1ParameterSpec(
name="num-layers",
parameter_type="int",
feasible_space=V1beta1FeasibleSpace(
min="2",
max="5"
),
),
V1beta1ParameterSpec(
name="optimizer",
parameter_type="categorical",
feasible_space=V1beta1FeasibleSpace(
list=[
"sgd",
"adam",
"ftrl"
]
),
),
]
# JSON template specification for the Trial's Worker Kubernetes Job.
trial_spec={
"apiVersion": "batch/v1",
"kind": "Job",
"spec": {
"template": {
"metadata": {
"annotations": {
"sidecar.istio.io/inject": "false"
}
},
"spec": {
"containers": [
{
"name": "training-container",
"image": "docker.io/kubeflowkatib/mxnet-mnist:v1beta1-e294a90",
"command": [
"python3",
"/opt/mxnet-mnist/mnist.py",
"--batch-size=64",
"--lr=${trialParameters.learningRate}",
"--num-layers=${trialParameters.numberLayers}",
"--optimizer=${trialParameters.optimizer}"
]
}
],
"restartPolicy": "Never"
}
}
}
}
# Configure parameters for the Trial template.
# We set the retain parameter to "True" to not clean-up the Trial Job's Kubernetes Pods.
trial_template=V1beta1TrialTemplate(
retain=True,
primary_container_name="training-container",
trial_parameters=[
V1beta1TrialParameterSpec(
name="learningRate",
description="Learning rate for the training model",
reference="lr"
),
V1beta1TrialParameterSpec(
name="numberLayers",
description="Number of training model layers",
reference="num-layers"
),
V1beta1TrialParameterSpec(
name="optimizer",
description="Training model optimizer (sdg, adam or ftrl)",
reference="optimizer"
),
],
trial_spec=trial_spec
)
experiment_spec=V1beta1ExperimentSpec(
max_trial_count=max_trial_count,
max_failed_trial_count=max_failed_trial_count,
parallel_trial_count=parallel_trial_count,
objective=objective,
algorithm=algorithm,
early_stopping=early_stopping,
parameters=parameters,
trial_template=trial_template
)
# Get the Katib launcher.
katib_experiment_launcher_op = components.load_component_from_url(
"https://raw.githubusercontent.com/kubeflow/pipelines/master/components/kubeflow/katib-launcher/component.yaml")
@dsl.pipeline(
name="Launch Katib early stopping Experiment",
description="An example to launch Katib Experiment with early stopping"
)
def median_stop():
# Katib launcher component.
# Experiment Spec should be serialized to a valid Kubernetes object.
op = katib_experiment_launcher_op(
experiment_name=experiment_name,
experiment_namespace=experiment_namespace,
experiment_spec=ApiClient().sanitize_for_serialization(experiment_spec),
experiment_timeout_minutes=60,
delete_finished_experiment=False)
# Output container to print the results.
op_out = dsl.ContainerOp(
name="best-hp",
image="library/bash:4.4.23",
command=["sh", "-c"],
arguments=["echo Best HyperParameters: %s" % op.output],
)
kfp.Client().create_run_from_pipeline_func(median_stop, arguments={})
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Restart the Notebook kernel to use the SDK packages
Step2: Import required packages
Step3: Define an Experiment
Step4: Define a Trial template
Step5: Define an Experiment specification
Step6: Create a Pipeline using Katib component
Step7: Run the Pipeline
|
5,538
|
<ASSISTANT_TASK:>
Python Code:
# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.minimum_norm import apply_inverse_epochs, read_inverse_operator
from mne.minimum_norm import apply_inverse
print(__doc__)
data_path = sample.data_path()
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
fname_raw = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
fname_event = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
label_name = 'Aud-lh'
fname_label = data_path + '/MEG/sample/labels/%s.label' % label_name
event_id, tmin, tmax = 1, -0.2, 0.5
# Using the same inverse operator when inspecting single trials Vs. evoked
snr = 3.0 # Standard assumption for average data but using it for single trial
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
# Load data
inverse_operator = read_inverse_operator(fname_inv)
label = mne.read_label(fname_label)
raw = mne.io.read_raw_fif(fname_raw)
events = mne.read_events(fname_event)
# Set up pick list
include = []
# Add a bad channel
raw.info['bads'] += ['EEG 053'] # bads + 1 more
# pick MEG channels
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,
include=include, exclude='bads')
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(mag=4e-12, grad=4000e-13,
eog=150e-6))
# Get evoked data (averaging across trials in sensor space)
evoked = epochs.average()
# Compute inverse solution and stcs for each epoch
# Use the same inverse operator as with evoked data (i.e., set nave)
# If you use a different nave, dSPM just scales by a factor sqrt(nave)
stcs = apply_inverse_epochs(epochs, inverse_operator, lambda2, method, label,
pick_ori="normal", nave=evoked.nave)
# Mean across trials but not across vertices in label
mean_stc = sum(stcs) / len(stcs)
# compute sign flip to avoid signal cancellation when averaging signed values
flip = mne.label_sign_flip(label, inverse_operator['src'])
label_mean = np.mean(mean_stc.data, axis=0)
label_mean_flip = np.mean(flip[:, np.newaxis] * mean_stc.data, axis=0)
# Get inverse solution by inverting evoked data
stc_evoked = apply_inverse(evoked, inverse_operator, lambda2, method,
pick_ori="normal")
# apply_inverse() does whole brain, so sub-select label of interest
stc_evoked_label = stc_evoked.in_label(label)
# Average over label (not caring to align polarities here)
label_mean_evoked = np.mean(stc_evoked_label.data, axis=0)
times = 1e3 * stcs[0].times # times in ms
plt.figure()
h0 = plt.plot(times, mean_stc.data.T, 'k')
h1, = plt.plot(times, label_mean, 'r', linewidth=3)
h2, = plt.plot(times, label_mean_flip, 'g', linewidth=3)
plt.legend((h0[0], h1, h2), ('all dipoles in label', 'mean',
'mean with sign flip'))
plt.xlabel('time (ms)')
plt.ylabel('dSPM value')
plt.show()
# Single trial
plt.figure()
for k, stc_trial in enumerate(stcs):
plt.plot(times, np.mean(stc_trial.data, axis=0).T, 'k--',
label='Single Trials' if k == 0 else '_nolegend_',
alpha=0.5)
# Single trial inverse then average.. making linewidth large to not be masked
plt.plot(times, label_mean, 'b', linewidth=6,
label='dSPM first, then average')
# Evoked and then inverse
plt.plot(times, label_mean_evoked, 'r', linewidth=2,
label='Average first, then dSPM')
plt.xlabel('time (ms)')
plt.ylabel('dSPM value')
plt.legend()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: View activation time-series to illustrate the benefit of aligning/flipping
Step2: Viewing single trial dSPM and average dSPM for unflipped pooling over label
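The sign-flip step above guards against cancellation when averaging signed source estimates. A toy illustration (pure Python, not MNE) of two dipoles with identical activity but opposite polarity:

```python
import math

# Two "dipole" time courses with the same shape but opposite polarity.
times = [i / 100.0 for i in range(100)]
sig_a = [math.sin(2 * math.pi * t) for t in times]
sig_b = [-s for s in sig_a]  # same activity, flipped sign

naive_mean = [(a + b) / 2 for a, b in zip(sig_a, sig_b)]   # cancels to zero
flip = [1.0, -1.0]                                         # sign-flip weights
flipped_mean = [(flip[0] * a + flip[1] * b) / 2
                for a, b in zip(sig_a, sig_b)]             # recovers the signal

peak_naive = max(abs(v) for v in naive_mean)
peak_flipped = max(abs(v) for v in flipped_mean)
```

The naive mean loses the signal entirely, while the sign-flipped mean preserves it — the same effect `mne.label_sign_flip` addresses across vertices in a label.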
|
5,539
|
<ASSISTANT_TASK:>
Python Code:
import os
import pandas as pd
import numpy as np
import xgboost as xgb
from xgboost.sklearn import XGBClassifier
from sklearn import cross_validation, metrics
from sklearn.grid_search import GridSearchCV
from sklearn.model_selection import train_test_split
import matplotlib.pylab as plt
%matplotlib inline
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 12, 4
path = "./data/allstate"
inputFilePath = os.path.join(path, "train.csv.zip")
df = pd.read_csv(inputFilePath, compression="zip", header=0)
msk = np.random.rand(len(df)) < 0.8
train = df[msk]
test = df[~msk]
train.shape, test.shape
target='loss'
IDcol = 'id'
train[target].value_counts()
test_results = pd.read_csv('test_results.csv')
def modelfit(alg, dtrain, dtest, predictors,useTrainCV=True, cv_folds=5, early_stopping_rounds=50):
if useTrainCV:
xgb_param = alg.get_xgb_params()
xgtrain = xgb.DMatrix(dtrain[predictors].values, label=dtrain[target].values)
xgtest = xgb.DMatrix(dtest[predictors].values)
cvresult = xgb.cv(xgb_param, xgtrain, num_boost_round=alg.get_params()['n_estimators'], nfold=cv_folds,
metrics='auc', early_stopping_rounds=early_stopping_rounds, show_progress=False)
alg.set_params(n_estimators=cvresult.shape[0])
#Fit the algorithm on the data
alg.fit(dtrain[predictors], dtrain[target],eval_metric='auc')
#Predict training set:
dtrain_predictions = alg.predict(dtrain[predictors])
dtrain_predprob = alg.predict_proba(dtrain[predictors])[:,1]
#Print model report:
print "\nModel Report"
print "Accuracy : %.4g" % metrics.accuracy_score(dtrain[target].values, dtrain_predictions)
print "AUC Score (Train): %f" % metrics.roc_auc_score(dtrain[target], dtrain_predprob)
# Predict on testing data:
dtest['predprob'] = alg.predict_proba(dtest[predictors])[:,1]
results = test_results.merge(dtest[['ID','predprob']], on='ID')
print 'AUC Score (Test): %f' % metrics.roc_auc_score(results[target], results['predprob'])
feat_imp = pd.Series(alg.booster().get_fscore()).sort_values(ascending=False)
feat_imp.plot(kind='bar', title='Feature Importances')
plt.ylabel('Feature Importance Score')
predictors = [x for x in train.columns if x not in [target, IDcol]]
xgb1 = XGBClassifier(
learning_rate =0.1,
n_estimators=1000,
max_depth=5,
min_child_weight=1,
gamma=0,
subsample=0.8,
colsample_bytree=0.8,
objective= 'binary:logistic',
nthread=4,
scale_pos_weight=1,
seed=27)
modelfit(xgb1, train, test, predictors)
#Grid seach on subsample and max_features
#Choose all predictors except target & IDcols
param_test1 = {
'max_depth':range(3,10,2),
'min_child_weight':range(1,6,2)
}
gsearch1 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=140, max_depth=5,
min_child_weight=1, gamma=0, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1, seed=27),
param_grid = param_test1, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch1.fit(train[predictors],train[target])
gsearch1.grid_scores_, gsearch1.best_params_, gsearch1.best_score_
#Grid seach on subsample and max_features
#Choose all predictors except target & IDcols
param_test2 = {
'max_depth':[4,5,6],
'min_child_weight':[4,5,6]
}
gsearch2 = GridSearchCV(estimator = XGBClassifier( learning_rate=0.1, n_estimators=140, max_depth=5,
min_child_weight=2, gamma=0, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27),
param_grid = param_test2, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch2.fit(train[predictors],train[target])
gsearch2.grid_scores_, gsearch2.best_params_, gsearch2.best_score_
#Grid seach on subsample and max_features
#Choose all predictors except target & IDcols
param_test2b = {
'min_child_weight':[6,8,10,12]
}
gsearch2b = GridSearchCV(estimator = XGBClassifier( learning_rate=0.1, n_estimators=140, max_depth=4,
min_child_weight=2, gamma=0, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27),
param_grid = param_test2b, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch2b.fit(train[predictors],train[target])
gsearch2b.grid_scores_, gsearch2b.best_params_, gsearch2b.best_score_
#Grid seach on subsample and max_features
#Choose all predictors except target & IDcols
param_test3 = {
'gamma':[i/10.0 for i in range(0,5)]
}
gsearch3 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=140, max_depth=4,
min_child_weight=6, gamma=0, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27),
param_grid = param_test3, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch3.fit(train[predictors],train[target])
gsearch3.grid_scores_, gsearch3.best_params_, gsearch3.best_score_
predictors = [x for x in train.columns if x not in [target, IDcol]]
xgb2 = XGBClassifier(
learning_rate =0.1,
n_estimators=1000,
max_depth=4,
min_child_weight=6,
gamma=0,
subsample=0.8,
colsample_bytree=0.8,
objective= 'binary:logistic',
nthread=4,
scale_pos_weight=1,
seed=27)
modelfit(xgb2, train, test, predictors)
#Grid seach on subsample and max_features
#Choose all predictors except target & IDcols
param_test4 = {
'subsample':[i/10.0 for i in range(6,10)],
'colsample_bytree':[i/10.0 for i in range(6,10)]
}
gsearch4 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=177, max_depth=4,
min_child_weight=6, gamma=0, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27),
param_grid = param_test4, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch4.fit(train[predictors],train[target])
gsearch4.grid_scores_, gsearch4.best_params_, gsearch4.best_score_
#Grid seach on subsample and max_features
#Choose all predictors except target & IDcols
param_test5 = {
'subsample':[i/100.0 for i in range(75,90,5)],
'colsample_bytree':[i/100.0 for i in range(75,90,5)]
}
gsearch5 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=177, max_depth=4,
min_child_weight=6, gamma=0, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27),
param_grid = param_test5, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch5.fit(train[predictors],train[target])
gsearch5.grid_scores_, gsearch5.best_params_, gsearch5.best_score_
#Grid seach on subsample and max_features
#Choose all predictors except target & IDcols
param_test6 = {
'reg_alpha':[1e-5, 1e-2, 0.1, 1, 100]
}
gsearch6 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=177, max_depth=4,
min_child_weight=6, gamma=0.1, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27),
param_grid = param_test6, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch6.fit(train[predictors],train[target])
gsearch6.grid_scores_, gsearch6.best_params_, gsearch6.best_score_
#Grid seach on subsample and max_features
#Choose all predictors except target & IDcols
param_test7 = {
'reg_alpha':[0, 0.001, 0.005, 0.01, 0.05]
}
gsearch7 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=177, max_depth=4,
min_child_weight=6, gamma=0.1, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27),
param_grid = param_test7, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch7.fit(train[predictors],train[target])
gsearch7.grid_scores_, gsearch7.best_params_, gsearch7.best_score_
xgb3 = XGBClassifier(
learning_rate =0.1,
n_estimators=1000,
max_depth=4,
min_child_weight=6,
gamma=0,
subsample=0.8,
colsample_bytree=0.8,
reg_alpha=0.005,
objective= 'binary:logistic',
nthread=4,
scale_pos_weight=1,
seed=27)
modelfit(xgb3, train, test, predictors)
xgb4 = XGBClassifier(
learning_rate =0.01,
n_estimators=5000,
max_depth=4,
min_child_weight=6,
gamma=0,
subsample=0.8,
colsample_bytree=0.8,
reg_alpha=0.005,
objective= 'binary:logistic',
nthread=4,
scale_pos_weight=1,
seed=27)
modelfit(xgb4, train, test, predictors)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Data
Step2: Define a function for modeling and cross-validation
Step3: Step 1- Find the number of estimators for a high learning rate
Step4: Tune subsample and colsample_bytree
Step5: tune subsample
Step6: Got the same value as assumed and no change required.
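`GridSearchCV` expands each `param_test` dict above into the Cartesian product of its value lists. A small sketch (standard library only) of that expansion:

```python
from itertools import product

param_grid = {
    'max_depth': [4, 5, 6],
    'min_child_weight': [4, 5, 6],
}
keys = sorted(param_grid)
combos = [dict(zip(keys, values))
          for values in product(*(param_grid[k] for k in keys))]
# 3 x 3 = 9 candidate settings; GridSearchCV fits each one cv-fold times
```

This is why adding parameters to a grid multiplies, rather than adds, the number of fits.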
|
5,540
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
def well2d(x, y, nx, ny, L=1.0):
    """Compute the 2d quantum well wave function."""
psi=(2/L)*np.sin((nx*np.pi*x)/L)*np.sin((ny*np.pi*y)/L)
return psi
psi = well2d(np.linspace(0,1,10), np.linspace(0,1,10), 1, 1)
assert len(psi)==10
assert psi.shape==(10,)
L=1.0
xlist=np.linspace(0.0,1.0,100)
ylist=np.linspace(0.0,1.0,100)
x,y=np.meshgrid(xlist,ylist)
f=plt.figure(figsize=(10,6))
a=plt.contourf(well2d(x,y,3,2), cmap='winter');
plt.colorbar(a)
plt.xticks((0,50,100),(0,L/2,L))
plt.yticks((0,50,100),(0,L/2,L))
plt.box(False)
plt.title('Contour of 2D Wavefunction')
plt.xlabel('X'), plt.ylabel('Y')
plt.tick_params(axis='x', top='off',direction='out')
plt.tick_params(axis='y', right='off',direction='out')
assert True # use this cell for grading the contour plot
f=plt.figure(figsize=(10,6))
plt.colorbar(plt.pcolormesh(well2d(x,y,3,2), cmap='cool'))
plt.title('Contour of 2D Wavefunction')
plt.xlabel('X'), plt.ylabel('Y')
plt.xticks((0,50,100),(0,L/2,L))
plt.yticks((0,50,100),(0,L/2,L))
plt.box(False)
plt.tick_params(axis='x', top='off',direction='out')
plt.tick_params(axis='y', right='off',direction='out')
assert True # use this cell for grading the pcolor plot
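As a sanity check on the wavefunction being plotted, the probability density should integrate to 1 over the box. A hedged NumPy sketch (midpoint-rule integration, grid size chosen arbitrarily):

```python
import numpy as np

L = 1.0
nx, ny = 3, 2
N = 400
x = (np.arange(N) + 0.5) * L / N   # midpoint grid in x
y = (np.arange(N) + 0.5) * L / N   # midpoint grid in y
X, Y = np.meshgrid(x, y)
psi = (2.0 / L) * np.sin(nx * np.pi * X / L) * np.sin(ny * np.pi * Y / L)

# Midpoint-rule integral of |psi|^2 over the box; should be ~1.
prob = (psi ** 2).mean() * L * L
```

The analytic check: each 1-d factor integrates to L/2, so (2/L)^2 * (L/2) * (L/2) = 1.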
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Contour plots of 2d wavefunctions
Step3: The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction
Step4: Next make a visualization using one of the pcolor functions
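A minimal `numpy.meshgrid` example (small, arbitrary grid sizes) showing the shapes the 2d plotting functions expect:

```python
import numpy as np

x = np.linspace(0, 1, 5)   # 5 points in x
y = np.linspace(0, 1, 3)   # 3 points in y
X, Y = np.meshgrid(x, y)   # both (3, 5): rows vary in y, columns in x

# Every (X[i, j], Y[i, j]) pair is one grid point, so a function of two
# variables can be evaluated on the whole grid at once:
Z = np.sin(np.pi * X) * np.sin(np.pi * Y)
```

`Z` then has the same shape as `X` and `Y`, which is exactly what `contourf` and `pcolormesh` consume.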
|
5,541
|
<ASSISTANT_TASK:>
Python Code:
import pypot.dynamixel
ports = pypot.dynamixel.get_available_ports()
if not ports:
raise IOError('no port found!')
print 'ports found', ports
my_port = "/dev/ttyACM1" #Change this value to match your setup
using_XL320 = False #Change this value to True if you use XL-320 motors
old_id = 1 #Should be 1 if the motor has never been configured
new_id = 10 #Change this value
old_baudrate = 57600 #Should be 57600 for new MX-28 or MX-64, 1000000 for new AX612A or XL-320
new_baudrate = 1000000 #Should be 1000000
if using_XL320:
dxl_io = pypot.dynamixel.Dxl320IO(my_port, baudrate=old_baudrate)
else:
dxl_io = pypot.dynamixel.DxlIO(my_port, baudrate=old_baudrate)
print "scanning"
found = dxl_io.scan(range(60))
print found
if old_id in found:
dxl_io.set_return_delay_time({old_id : 0})
dxl_io.change_id({old_id : new_id})
dxl_io.change_baudrate({new_id : new_baudrate})
else:
print "no motor found with ID ",old_id," at baudrate ",old_baudrate, " on port ",my_port
dxl_io.close()
dxl_io = pypot.dynamixel.DxlIO(my_port, baudrate=new_baudrate)
print "scanning"
found = dxl_io.scan(range(60))
print found
dxl_io.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: From the available ports, select the port where the new motor is pluggued.
Step2: Select the new ID and the new baudrate you wish for your motor.
Step3: Change the ID, baudrate and set the return time to 0.
Step4: Check that the new motor is detected with its new id and new baudrate.
|
5,542
|
<ASSISTANT_TASK:>
Python Code:
import math
import torch
import tqdm
import gpytorch
from matplotlib import pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
# Training data is 100 points in [0,1] inclusive regularly spaced
train_x_mean = torch.linspace(0, 1, 20)
# We'll assume the variance shrinks the closer we get to 1
train_x_stdv = torch.linspace(0.03, 0.01, 20)
# True function is sin(2*pi*x) with Gaussian noise
train_y = torch.sin(train_x_mean * (2 * math.pi)) + torch.randn(train_x_mean.size()) * 0.2
f, ax = plt.subplots(1, 1, figsize=(8, 3))
ax.errorbar(train_x_mean, train_y, xerr=(train_x_stdv * 2), fmt="k*", label="Train Data")
ax.legend()
from gpytorch.models import ApproximateGP
from gpytorch.variational import CholeskyVariationalDistribution
from gpytorch.variational import VariationalStrategy
class GPModel(ApproximateGP):
def __init__(self, inducing_points):
variational_distribution = CholeskyVariationalDistribution(inducing_points.size(0))
variational_strategy = VariationalStrategy(self, inducing_points, variational_distribution, learn_inducing_locations=True)
super(GPModel, self).__init__(variational_strategy)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
inducing_points = torch.randn(10, 1)
model = GPModel(inducing_points=inducing_points)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
# this is for running the notebook in our testing framework
import os
smoke_test = ('CI' in os.environ)
training_iter = 2 if smoke_test else 400
model.train()
likelihood.train()
# Empirically, plain SGD can also work well for variational regression, but we use Adam here
optimizer = torch.optim.Adam([
{'params': model.parameters()},
{'params': likelihood.parameters()},
], lr=0.01)
# Our loss object. We're using the VariationalELBO
mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=train_y.size(0))
iterator = tqdm.notebook.tqdm(range(training_iter))
for i in iterator:
# First thing: draw a sample set of features from our distribution
train_x_sample = torch.distributions.Normal(train_x_mean, train_x_stdv).rsample()
# Now do the rest of the training loop
optimizer.zero_grad()
output = model(train_x_sample)
loss = -mll(output, train_y)
iterator.set_postfix(loss=loss.item())
loss.backward()
optimizer.step()
# Get into evaluation (predictive posterior) mode
model.eval()
likelihood.eval()
# Test points are regularly spaced along [0,1]
# Make predictions by feeding model through likelihood
with torch.no_grad(), gpytorch.settings.fast_pred_var():
test_x = torch.linspace(0, 1, 51)
observed_pred = likelihood(model(test_x))
with torch.no_grad():
# Initialize plot
f, ax = plt.subplots(1, 1, figsize=(8, 3))
# Get upper and lower confidence bounds
lower, upper = observed_pred.confidence_region()
# Plot training data as black stars
ax.errorbar(train_x_mean.numpy(), train_y.numpy(), xerr=train_x_stdv, fmt='k*')
# Plot predictive means as blue line
ax.plot(test_x.numpy(), observed_pred.mean.numpy(), 'b')
# Shade between the lower and upper confidence bounds
ax.fill_between(test_x.numpy(), lower.numpy(), upper.numpy(), alpha=0.5)
ax.set_ylim([-3, 3])
ax.legend(['Observed Data', 'Mean', 'Confidence'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set up training data
Step2: Setting up the model
Step3: Training the model with uncertain features
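The `ScaleKernel(RBFKernel())` covariance used above computes k(x, x') = sigma^2 * exp(-(x - x')^2 / (2 * l^2)). A hedged NumPy sketch of that kernel matrix (parameter values are illustrative, not fitted):

```python
import numpy as np

def rbf_kernel(x1, x2, outputscale=1.0, lengthscale=0.2):
    # Squared-exponential (RBF) kernel matrix between two 1-d point sets.
    diff = x1[:, None] - x2[None, :]
    return outputscale * np.exp(-0.5 * (diff / lengthscale) ** 2)

x = np.linspace(0, 1, 20)
K = rbf_kernel(x, x)
```

The resulting matrix is symmetric with `outputscale` on the diagonal; GPyTorch learns `outputscale` and `lengthscale` during the training loop.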
|
5,543
|
<ASSISTANT_TASK:>
Python Code:
from datetime import date
from organizer.models import Tag, Startup, NewsLink
from blog.models import Post
edut = Tag(name='Education', slug='education')
edut
edut.save()
edut.delete()
edut # still in memory!
type(Tag.objects) # a model manager
Tag.objects.create(name='Video Games', slug='video-games')
# create multiple objects in a go!
Tag.objects.bulk_create([
Tag(name='Django', slug='django'),
Tag(name='Mobile', slug='mobile'),
Tag(name='Web', slug='web'),
])
Tag.objects.all()
Tag.objects.all()[0] # acts like a list
type(Tag.objects.all()) # is not a list
# managers are not accessible to model instances, only to model classes!
try:
edut.objects
except AttributeError as e:
print(e)
Tag.objects.all()
Tag.objects.count()
Tag.objects.get(slug='django')
type(Tag.objects.all())
type(Tag.objects.get(slug='django'))
# case-sensitive!
try:
Tag.objects.get(slug='Django')
except Tag.DoesNotExist as e:
print(e)
# the i is for case-Insensitive
Tag.objects.get(slug__iexact='DJANGO')
Tag.objects.get(slug__istartswith='DJ')
Tag.objects.get(slug__contains='an')
# get always returns a single object
try:
# djangO, mObile, videO-games
Tag.objects.get(slug__contains='o')
except Tag.MultipleObjectsReturned as e:
print(e)
## unlike get, can fetch multiple objects
Tag.objects.filter(slug__contains='o')
type(Tag.objects.filter(slug__contains='o'))
Tag.objects.filter(slug__contains='o').order_by('-name')
# first we call order_by on the manager
Tag.objects.order_by('-name')
# now we call filter on the manager, and order the resulting queryset
Tag.objects.filter(slug__contains='e').order_by('-name')
Tag.objects.values_list()
type(Tag.objects.values_list())
Tag.objects.values_list('name', 'slug')
Tag.objects.values_list('name')
Tag.objects.values_list('name', flat=True)
type(Tag.objects.values_list('name', flat=True))
jb = Startup.objects.create(
name='JamBon Software',
slug='jambon-software',
contact='django@jambonsw.com',
description='Web and Mobile Consulting.\n'
'Django Tutoring.\n',
founded_date=date(2013, 1, 18),
website='https://jambonsw.com/',
)
jb # this output only clear because of __str__()
jb.founded_date
jb.founded_date = date(2014,1,1)
# we're not calling save() !
jb.founded_date
# get version in database
jb = Startup.objects.get(slug='jambon-software')
# work above is all for nought because we didn't save()
jb.founded_date
djt = Post.objects.create(
title='Django Training',
slug='django-training',
text=(
"Learn Django in a classroom setting "
"with JamBon Software."),
)
djt
djt.pub_date = date(2013, 1, 18)
djt.save()
djt
type(djt.tags)
type(djt.startups)
djt.tags.all()
djt.startups.all()
django = Tag.objects.get(slug__contains='django')
djt.tags.add(django)
djt.tags.all()
django.blog_posts.all() # a "reverse" relation
django.startup_set.add(jb) # a "reverse" relation
django.startup_set.all()
jb.tags.all() # the "forward" relation
# one more time, for repetition!
djt
# "forward" relation
djt.startups.add(jb)
djt.startups.all()
jb.blog_posts.all() # "reverse" relation
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Interacting With the Database
Step2: Creation and Destruction with Managers
Step3: Methods of Data Retrieval
Step4: The get method
Step5: The filter method
Step6: Chaining Calls
Step7: values and values_list
Step8: Data in Memory vs Data in the Database
Step9: Connecting Data through Relations
|
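The Tag queries in the entry above only run inside a configured Django project, so here is a framework-free sketch of how `slug__iexact`-style field lookups behave. The names `apply_lookup` and `filter_rows` are invented helpers that mimic the ORM's double-underscore convention; they are not Django API.

```python
def apply_lookup(value, lookup, arg):
    # Mimic a few Django-style field lookups on plain strings.
    if lookup == "exact":
        return value == arg
    if lookup == "iexact":
        return value.lower() == arg.lower()
    if lookup == "contains":
        return arg in value
    if lookup == "istartswith":
        return value.lower().startswith(arg.lower())
    raise ValueError(f"unsupported lookup: {lookup}")

def filter_rows(rows, **kwargs):
    # Each keyword is "field__lookup"; a bare field name means "exact".
    out = rows
    for key, arg in kwargs.items():
        field, _, lookup = key.partition("__")
        lookup = lookup or "exact"
        out = [r for r in out if apply_lookup(r[field], lookup, arg)]
    return out

tags = [{"slug": "django"}, {"slug": "mobile"}, {"slug": "web"}]
print(filter_rows(tags, slug__iexact="DJANGO"))  # [{'slug': 'django'}]
print(filter_rows(tags, slug__contains="o"))     # django and mobile match
```

Unlike real QuerySets, this evaluates eagerly; it only illustrates the lookup semantics, not laziness.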
5,544
|
<ASSISTANT_TASK:>
Python Code:
pudl_settings = pudl.workspace.setup.get_defaults()
settings_file_name= 'etl_full.yml'
etl_settings = EtlSettings.from_yaml(
pathlib.Path(pudl_settings['settings_dir'],
settings_file_name))
validated_etl_settings = etl_settings.datasets
datasets = validated_etl_settings.get_datasets()
eia_settings = datasets["eia"]
eia860_tables = eia_settings.eia860.tables
eia860_years = eia_settings.eia860.years
eia860m = eia_settings.eia860.eia860m
eia923_tables = eia_settings.eia923.tables
eia923_years = eia_settings.eia923.years
ds = Datastore()
# Extract EIA forms 923, 860
eia923_raw_dfs = pudl.extract.eia923.Extractor(ds).extract(
settings=eia_settings.eia923
)
eia860_raw_dfs = pudl.extract.eia860.Extractor(ds).extract(
settings=eia_settings.eia860
)
# if we are trying to add the EIA 860M YTD data, then extract it and append
if eia860m:
eia860m_raw_dfs = pudl.extract.eia860m.Extractor(ds).extract(
settings=eia_settings.eia860
)
eia860_raw_dfs = pudl.extract.eia860m.append_eia860m(
eia860_raw_dfs=eia860_raw_dfs, eia860m_raw_dfs=eia860m_raw_dfs
)
# Transform EIA forms 923, 860
eia860_transformed_dfs = pudl.transform.eia860.transform(
eia860_raw_dfs, eia860_settings=eia_settings.eia860
)
eia923_transformed_dfs = pudl.transform.eia923.transform(
eia923_raw_dfs, eia923_settings=eia_settings.eia923
)
# create an eia transformed dfs dictionary
eia_transformed_dfs = eia860_transformed_dfs.copy()
eia_transformed_dfs.update(eia923_transformed_dfs.copy())
# Do some final cleanup and assign appropriate types:
eia_transformed_dfs = {
name: convert_cols_dtypes(df, data_source="eia")
for name, df in eia_transformed_dfs.items()
}
# we want to investigate the harvesting of the plants in this case...
entity = 'generators'
# create the empty entities df to fill up
entities_dfs = {}
entities_dfs, eia_transformed_dfs, col_dfs = (
pudl.transform.eia.harvesting(
entity, eia_transformed_dfs, entities_dfs, debug=True)
)
pmc = col_dfs['prime_mover_code']
pmc.prime_mover_code.unique()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can skip the settings step above and set these years/tables yourself here without using the settings files... just know that they are not validated below, so they could be wrong and fail later. It is HIGHLY RECOMMENDED that you use all of the years/tables
Step2: Run extract step & phase 1 transform step
Step3: You have to re-run this cell every time you want to re-run the harvesting cell below (because pudl.transform.eia.harvesting removes columns from the dfs). This cell lets you start with a fresh eia_transformed_dfs without needing to re-run the 860/923 transforms.
Step4: Run harvest w/ debug=True
Step5: Use col_dfs to explore harvested values
|
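The harvesting step in the entry above is easier to see on a toy example. This is only a sketch of the idea (keep the most frequently reported value per entity across tables), not PUDL's actual `harvesting` implementation; the table names and codes are invented.

```python
import pandas as pd

# Three raw tables report the same attribute for the same plant ids.
tables = {
    "t1": pd.DataFrame({"plant_id": [1, 2], "prime_mover_code": ["ST", "GT"]}),
    "t2": pd.DataFrame({"plant_id": [1, 2], "prime_mover_code": ["ST", "CT"]}),
    "t3": pd.DataFrame({"plant_id": [1, 2], "prime_mover_code": ["ST", "GT"]}),
}
# Stack every table's report, then "harvest" the modal value per entity.
stacked = pd.concat(tables.values(), ignore_index=True)
harvested = (stacked.groupby("plant_id")["prime_mover_code"]
             .agg(lambda s: s.mode().iloc[0]))
print(harvested.to_dict())  # {1: 'ST', 2: 'GT'}
```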
5,545
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
%matplotlib notebook
# use the 'seaborn-colorblind' style
plt.style.use('seaborn-colorblind')
# SOURCE: 2018: https://www.officeholidays.com/countries/usa/michigan/2018
# scraping tool: https://www.convertcsv.com/html-table-to-csv.htm
holidays = pd.read_csv('michigan-holidays_2018.csv')
holidays.set_index('Day',inplace=True)
holidays.head()
#SOURCE: https://crime-data-explorer.fr.cloud.gov/downloads-and-docs
#MI-2018
incidents = pd.read_csv('nibrs_incident_2018.csv')
incidents.columns = map(str.lower, incidents.columns)
incidents.head(3)
incidents.incident_date = pd.to_datetime(incidents.incident_date)
incidents.set_index('incident_date', inplace=True)
incidents.head()
incidents = (incidents.resample('D')
.incident_id.count()
.to_frame()
.rename(columns={'incident_id':'incident_count'}) )
incidents[incidents.index.month==1].head(2)
incidents_mean = (incidents.groupby(pd.Grouper(freq='M'))
.mean()
.rename(columns={'incident_count':'incident_mean'}))
incidents_mean[incidents_mean.index.month==1].head(2)
incidents['incident_mean'] = 0
for i in range(1,13):
incidents.at[incidents.index.month==i, 'incident_mean'] = incidents_mean[incidents_mean.index.month==i].incident_mean.mean()
incidents[incidents.index.month==1].head(2)
fig, ax = plt.subplots()#figsize=(9,5))
incidents.incident_count.plot(kind='line', color='skyblue', ax=ax, label='Daily incidents')
incidents.incident_mean.plot(ax=ax, label='Monthly incidents average')
#holidays
holiday_dates = holidays.index.values
holi = incidents.loc[incidents.index.isin(holiday_dates)]
ax.scatter(holiday_dates, holi.incident_count, zorder=10, color='black', marker='.', label='Incidents on holiday')
#styles
ax.legend(loc=4, fontsize='x-small', frameon=False)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.set_title('Crimes on Holidays, Michigan 2018', fontsize='medium')
ax.xaxis.set_minor_locator(mdates.MonthLocator())
ax.xaxis.set_major_locator(mdates.MonthLocator(bymonthday=15))
ax.xaxis.set_major_formatter(mdates.DateFormatter('%b'))
ax.tick_params(axis='x', which='major', bottom=False)
ax.tick_params(axis='y', which='major', left=False)
ax.set_xlabel('Month')
ax.set_ylabel('Number of Incidents');
ax.axhline(y=1750, linestyle='--', color='gray', linewidth=0.5, alpha=0.4)
ax.axhline(y=1500, linestyle='--', color='gray', linewidth=0.5, alpha=0.7)
ax.axhline(y=1000, linestyle='--', color='gray', linewidth=0.5, alpha=0.7)
ax.axhline(y=750, linestyle='--', color='gray', linewidth=0.5, alpha=0.4)
ax.set_ylim(0,2000)
ax.set_xlim(pd.Timestamp('2017-12-30'), pd.Timestamp('2019-01-2'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting Dataset
Step2: Processing Data
Step3: Plotting
|
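A self-contained miniature of the processing steps above: count incidents per day with `resample`, then average the daily counts per month. The dates and counts are invented, and `freq="MS"` (month start) is used here instead of `"M"`, which newer pandas deprecates.

```python
import pandas as pd

dates = pd.to_datetime(["2018-01-01", "2018-01-01", "2018-01-02",
                        "2018-01-03", "2018-01-03", "2018-01-03"])
incidents = pd.DataFrame({"incident_id": range(6)}, index=dates)

# Daily incident counts, as in the notebook's resample('D') step.
daily = (incidents.resample("D")["incident_id"].count()
         .to_frame(name="incident_count"))
# Monthly mean of the daily counts, as in the pd.Grouper step.
monthly_mean = daily.groupby(pd.Grouper(freq="MS")).mean()

print(daily["incident_count"].tolist())         # [2, 1, 3]
print(monthly_mean["incident_count"].tolist())  # [2.0]
```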
5,546
|
<ASSISTANT_TASK:>
Python Code:
import math
x = math.sin(1.2)
x
from math import pi
theta_d = 30.0
theta_r = pi / 180.0 * theta_d
print(theta_r)
from math import pi
def degrees_to_radians(theta_d):
    """Convert an angle from degrees to radians.

    Parameters
    ----------
    theta_d : float
        The angle in degrees.

    Returns
    -------
    theta_r : float
        The angle in radians.
    """
theta_r = pi / 180.0 * theta_d
return theta_r
print(degrees_to_radians(30.0))
print(degrees_to_radians(60.0))
print(degrees_to_radians(90.0))
help(degrees_to_radians)
help(math.sin)
%reset
from math import pi
def degrees_to_radians(theta_d):
    """Convert an angle from degrees to radians.

    Parameters
    ----------
    theta_d : float
        The angle in degrees.

    Returns
    -------
    theta_r : float
        The angle in radians.
    """
theta_r = pi / 180.0 * theta_d
return theta_r
angle = degrees_to_radians(45.0)
print(angle)
theta_d
x1 = 1.1
def print_x1():
print(x1)
print(x1)
print_x1()
x2 = 1.2
def print_x2():
x2 = 2.3
print(x2)
print(x2)
print_x2()
x3 = 1.3
def print_x3():
print(x3)
x3 = 2.4
print(x3)
print_x3()
from math import sqrt
def drop_time(height, speed, gravity):
    """Return how long it takes an object released from a height h,
    in a gravitational field of strength g, with initial vertical speed v,
    to hit the ground.

    Parameters
    ----------
    height : float
        Initial height h
    speed : float
        Initial vertical speed v
    gravity : float
        Gravitational field strength g

    Returns
    -------
    t : float
        Time at which the object hits the ground
    """
return (speed + sqrt(speed**2 + 2.0*height*gravity)) / gravity
print(drop_time(10.0, 0.0, 9.8))
print(drop_time(10.0, 1.0, 9.8))
print(drop_time(100.0, 9.8, 15.0))
print(drop_time(height=10.0, speed=0.0, gravity=9.8))
print(drop_time(height=100.0, gravity=9.8, speed=15.0))
def drop_time(height, speed, gravity=9.8):
    """Return how long it takes an object released from a height h,
    in a gravitational field of strength g, with initial vertical speed v,
    to hit the ground.

    Parameters
    ----------
    height : float
        Initial height h
    speed : float
        Initial vertical speed v
    gravity : float
        Gravitational field strength g

    Returns
    -------
    t : float
        Time at which the object hits the ground
    """
return (speed + sqrt(speed**2 + 2.0*height*gravity)) / gravity
print(drop_time(10.0, 0.0))
print(drop_time(height=50.0, speed=1.0))
print(drop_time(gravity=15.0, height=50.0, speed=1.0))
import math
x = 1.2
name = "Alice"
print("Hello")
print(6)
print(name)
print(x)
print(math.pi)
print(math.sin(x))
print(math.sin)
print(math)
print("Hello {}. We set x={}.".format(name, x))
print ("The function {} applied to x={} gives {}".format(math.sin, x, math.sin(x)))
name = "Alice"
number = "13"
sentence = " a b c d e "
print(name.upper())
print(name.lower())
print(name.isdigit())
print(number.isdigit())
print(sentence.strip())
print(sentence.split())
print("Hello" + "Alice")
print("Hello" * 3)
print(str(3.4))
print("Hello"[0])
print("Hello"[2])
print("Hello"[1:3])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Go to the editor in spyder and enter those commands in a file
Step2: Also, in the top right of the spyder window, select the "Variable explorer" tab. It shows the variables that it currently knows, which should include x, its type (float) and its value.
Step4: This is effective for a single angle. If we want to repeat this for many angles, we could copy and paste the code. However, this is dangerous. We could make a mistake in editing the code. We could find a mistake in our original code, and then have to remember to modify every location where we copied it to. Instead we want to have a single piece of code that performs an action, and use that piece of code without modification whenever needed.
Step5: We check that it works by printing the result for multiple angles
Step6: How does the function definition work?
Step8: This allows you to quickly use code correctly without having to look at the code. We can do the same with functions from packages, such as
Step9: You can put whatever you like in the docstring. The format used above in the degrees_to_radians function follows the numpydoc convention, but there are other conventions that work well. One reason for following this convention can be seen in spyder. Copy the function degrees_to_radians into the console, if you have not done so already. Then, in the top right part of the window, select the "Object inspector" tab. Ensure that the "Source" is "Console". Type degrees_to_radians into the "Object" box. You should see the help above displayed, but nicely formatted.
Step11: Then copy and paste the function definition again
Step12: (Alternatively you can use the history in the console by pressing the up arrow until the definition of the function you previously entered appears. Then click at the end of the function and press Return). Now call the function as
Step13: But the variables used internally, theta_d and theta_r, are not known outside the function
Step14: This is an example of scope
Step15: In the first (x1) example, the variable x1 was not defined within the function, but it was used. When x1 is printed, Python has to look for the definition outside of the scope of the function, which it does successfully.
Step17: The only significant change from the second example is the order of the print statement and the assignment to x3 inside the function. Because x3 is assigned inside the function, Python wants to use the local value within the function, and will ignore the value defined outside the function. However, the print function is called before x3 has been set within the function, leading to an error.
Step18: But when we start using it, it can be a bit confusing
Step19: Is that last case correct? Did we really want to change the gravitational field, whilst at the same time using an initial velocity of exactly the value we expect for $g$?
Step20: The result is exactly the same, but now it's explicitly clear what we're doing.
Step22: This is the same as the confusing case above, but now there is no ambiguity. Whilst it is good practice to match the order of the arguments to the function definition, it is only needed when you don't use the keywords. Using the keywords is always useful.
Step23: Note that there is only one difference here, in the very first line
Step24: So, we can still give a specific value for gravity when we don't want to use the value 9.8, but it isn't needed if we're happy for it to take the default value of 9.8. This works both if we use keyword arguments and if not, with certain restrictions.
Step25: We see that variables are converted to their values (such as name and math.pi) and functions are called to get values (such as math.sin(x)), which are then converted to strings displayed on screen. However, functions (math.sin) and modules (math) are also "printed", in that a string saying what they are, and where they come from, is displayed.
Step26: The format command takes the string (here "Hello {}. We set x={}.") and replaces the {} with the values of the variables (here name and x in order).
Step27: There are many more ways to use the format command which can be helpful.
Step28: The use of the "dot" notation appears here. We saw this with accessing functions in modules and packages above; now we see it with accessing attributes and methods. It appears repeatedly in Python. The format method used above is particularly important for our purposes, but there are a lot of methods available.
Step29: We can repeat strings using the * operator.
Step30: We can convert numbers to strings using the str function.
Step31: We can also access individual characters (starting from 0!), or a range of characters
|
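A quick consistency check on the `drop_time` examples above: with h = 10, v = 0, g = 9.8 we get v**2 + 2*h*g = 196, whose square root is exactly 14, so the fall time is 14/9.8 ≈ 1.4286. The sketch restates the function with its default `gravity` to exercise both positional defaults and keyword calls.

```python
from math import sqrt, isclose

def drop_time(height, speed=0.0, gravity=9.8):
    # Same formula as above: t = (v + sqrt(v**2 + 2*h*g)) / g
    return (speed + sqrt(speed**2 + 2.0 * height * gravity)) / gravity

t = drop_time(10.0)       # defaults fill in speed and gravity
print(round(t, 4))        # 1.4286
assert isclose(t, 14 / 9.8)

# Keyword arguments make a call unambiguous regardless of order:
assert drop_time(gravity=9.8, height=10.0, speed=0.0) == t
```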
5,547
|
<ASSISTANT_TASK:>
Python Code:
import os
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT ID
BUCKET = "cloud-training-demos-ml" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = "1.13"
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
import tensorflow as tf
print(tf.__version__)
from google.cloud import bigquery
bq = bigquery.Client(project = PROJECT)
sql = """
#standardSQL
WITH CTE_visitor_page_content AS (
SELECT
fullVisitorID,
(SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) AS latestContentId,
(LEAD(hits.time, 1) OVER (PARTITION BY fullVisitorId ORDER BY hits.time ASC) - hits.time) AS session_duration
FROM
`cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
GROUP BY
fullVisitorId,
latestContentId,
hits.time )
-- Aggregate web stats
SELECT
fullVisitorID as visitorId,
latestContentId as contentId,
SUM(session_duration) AS session_duration
FROM
CTE_visitor_page_content
WHERE
latestContentId IS NOT NULL
GROUP BY
fullVisitorID,
latestContentId
HAVING
session_duration > 0
ORDER BY
latestContentId
"""
df = bq.query(sql).to_dataframe()
df.head()
stats = df.describe()
stats
df[["session_duration"]].plot(kind="hist", logy=True, bins=100, figsize=[8,5])
# The rating is the session_duration scaled to be in the range 0-1. This will help with training.
median = stats.loc["50%", "session_duration"]
df["rating"] = 0.3 * df["session_duration"] / median
df.loc[df["rating"] > 1, "rating"] = 1
df[["rating"]].plot(kind="hist", logy=True, bins=100, figsize=[8,5])
del df["session_duration"]
%%bash
rm -rf data
mkdir data
df.to_csv(path_or_buf = "data/collab_raw.csv", index = False, header = False)
!head data/collab_raw.csv
import pandas as pd
import numpy as np
def create_mapping(values, filename):
with open(filename, 'w') as ofp:
value_to_id = {value:idx for idx, value in enumerate(values.unique())}
for value, idx in value_to_id.items():
ofp.write("{},{}\n".format(value, idx))
return value_to_id
df = pd.read_csv(filepath_or_buffer = "data/collab_raw.csv",
header = None,
names = ["visitorId", "contentId", "rating"],
                 dtype = {"visitorId": str, "contentId": str, "rating": float})  # np.float is removed in newer NumPy
df.to_csv(path_or_buf = "data/collab_raw.csv", index = False, header = False)
user_mapping = create_mapping(df["visitorId"], "data/users.csv")
item_mapping = create_mapping(df["contentId"], "data/items.csv")
!head -3 data/*.csv
df["userId"] = df["visitorId"].map(user_mapping.get)
df["itemId"] = df["contentId"].map(item_mapping.get)
mapped_df = df[["userId", "itemId", "rating"]]
mapped_df.to_csv(path_or_buf = "data/collab_mapped.csv", index = False, header = False)
mapped_df.head()
import pandas as pd
import numpy as np
mapped_df = pd.read_csv(filepath_or_buffer = "data/collab_mapped.csv", header = None, names = ["userId", "itemId", "rating"])
mapped_df.head()
NITEMS = np.max(mapped_df["itemId"]) + 1
NUSERS = np.max(mapped_df["userId"]) + 1
mapped_df["rating"] = np.round(mapped_df["rating"].values, 2)
print("{} items, {} users, {} interactions".format( NITEMS, NUSERS, len(mapped_df) ))
grouped_by_items = mapped_df.groupby("itemId")
iter = 0
for item, grouped in grouped_by_items:
print(item, grouped["userId"].values, grouped["rating"].values)
iter = iter + 1
if iter > 5:
break
import tensorflow as tf
grouped_by_items = mapped_df.groupby("itemId")
with tf.python_io.TFRecordWriter("data/users_for_item") as ofp:
for item, grouped in grouped_by_items:
example = tf.train.Example(features = tf.train.Features(feature = {
"key": tf.train.Feature(int64_list = tf.train.Int64List(value = [item])),
"indices": tf.train.Feature(int64_list = tf.train.Int64List(value = grouped["userId"].values)),
"values": tf.train.Feature(float_list = tf.train.FloatList(value = grouped["rating"].values))
}))
ofp.write(example.SerializeToString())
grouped_by_users = mapped_df.groupby("userId")
with tf.python_io.TFRecordWriter("data/items_for_user") as ofp:
for user, grouped in grouped_by_users:
example = tf.train.Example(features = tf.train.Features(feature = {
"key": tf.train.Feature(int64_list = tf.train.Int64List(value = [user])),
"indices": tf.train.Feature(int64_list = tf.train.Int64List(value = grouped["itemId"].values)),
"values": tf.train.Feature(float_list = tf.train.FloatList(value = grouped["rating"].values))
}))
ofp.write(example.SerializeToString())
!ls -lrt data
import os
import tensorflow as tf
from tensorflow.python.lib.io import file_io
from tensorflow.contrib.factorization import WALSMatrixFactorization
def read_dataset(mode, args):
def decode_example(protos, vocab_size):
# TODO
return
def remap_keys(sparse_tensor):
# Current indices of our SparseTensor that we need to fix
bad_indices = sparse_tensor.indices # shape = (current_batch_size * (number_of_items/users[i] + 1), 2)
# Current values of our SparseTensor that we need to fix
bad_values = sparse_tensor.values # shape = (current_batch_size * (number_of_items/users[i] + 1),)
# Since batch is ordered, the last value for a batch index is the user
        # Find where the batch index changes to extract the user rows
# 1 where user, else 0
user_mask = tf.concat(values = [bad_indices[1:,0] - bad_indices[:-1,0], tf.constant(value = [1], dtype = tf.int64)], axis = 0) # shape = (current_batch_size * (number_of_items/users[i] + 1), 2)
# Mask out the user rows from the values
good_values = tf.boolean_mask(tensor = bad_values, mask = tf.equal(x = user_mask, y = 0)) # shape = (current_batch_size * number_of_items/users[i],)
item_indices = tf.boolean_mask(tensor = bad_indices, mask = tf.equal(x = user_mask, y = 0)) # shape = (current_batch_size * number_of_items/users[i],)
user_indices = tf.boolean_mask(tensor = bad_indices, mask = tf.equal(x = user_mask, y = 1))[:, 1] # shape = (current_batch_size,)
good_user_indices = tf.gather(params = user_indices, indices = item_indices[:,0]) # shape = (current_batch_size * number_of_items/users[i],)
        # User and item indices are rank 1, need to make them rank 2 to concat
good_user_indices_expanded = tf.expand_dims(input = good_user_indices, axis = -1) # shape = (current_batch_size * number_of_items/users[i], 1)
good_item_indices_expanded = tf.expand_dims(input = item_indices[:, 1], axis = -1) # shape = (current_batch_size * number_of_items/users[i], 1)
good_indices = tf.concat(values = [good_user_indices_expanded, good_item_indices_expanded], axis = 1) # shape = (current_batch_size * number_of_items/users[i], 2)
remapped_sparse_tensor = tf.SparseTensor(indices = good_indices, values = good_values, dense_shape = sparse_tensor.dense_shape)
return remapped_sparse_tensor
def parse_tfrecords(filename, vocab_size):
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
else:
num_epochs = 1 # end-of-input after this
files = tf.gfile.Glob(filename = os.path.join(args["input_path"], filename))
# Create dataset from file list
dataset = tf.data.TFRecordDataset(files)
dataset = dataset.map(map_func = lambda x: decode_example(x, vocab_size))
dataset = dataset.repeat(count = num_epochs)
dataset = dataset.batch(batch_size = args["batch_size"])
dataset = dataset.map(map_func = lambda x: remap_keys(x))
return dataset.make_one_shot_iterator().get_next()
def _input_fn():
features = {
WALSMatrixFactorization.INPUT_ROWS: parse_tfrecords("items_for_user", args["nitems"]),
WALSMatrixFactorization.INPUT_COLS: parse_tfrecords("users_for_item", args["nusers"]),
WALSMatrixFactorization.PROJECT_ROW: tf.constant(True)
}
return features, None
return _input_fn
def input_cols():
return parse_tfrecords("users_for_item", args["nusers"])
return _input_fn#_subset
def try_out():
with tf.Session() as sess:
fn = read_dataset(
mode = tf.estimator.ModeKeys.EVAL,
args = {"input_path": "data", "batch_size": 4, "nitems": NITEMS, "nusers": NUSERS})
feats, _ = fn()
print(feats["input_rows"].eval())
print(feats["input_rows"].eval())
try_out()
def find_top_k(user, item_factors, k):
all_items = tf.matmul(a = tf.expand_dims(input = user, axis = 0), b = tf.transpose(a = item_factors))
topk = tf.nn.top_k(input = all_items, k = k)
return tf.cast(x = topk.indices, dtype = tf.int64)
def batch_predict(args):
import numpy as np
with tf.Session() as sess:
estimator = tf.contrib.factorization.WALSMatrixFactorization(
num_rows = args["nusers"],
num_cols = args["nitems"],
embedding_dimension = args["n_embeds"],
model_dir = args["output_dir"])
# This is how you would get the row factors for out-of-vocab user data
# row_factors = list(estimator.get_projections(input_fn=read_dataset(tf.estimator.ModeKeys.EVAL, args)))
# user_factors = tf.convert_to_tensor(np.array(row_factors))
# But for in-vocab data, the row factors are already in the checkpoint
user_factors = tf.convert_to_tensor(value = estimator.get_row_factors()[0]) # (nusers, nembeds)
        # In either case, we have to assume the catalog doesn't change, so col_factors are read in
item_factors = tf.convert_to_tensor(value = estimator.get_col_factors()[0])# (nitems, nembeds)
# For each user, find the top K items
topk = tf.squeeze(input = tf.map_fn(fn = lambda user: find_top_k(user, item_factors, args["topk"]), elems = user_factors, dtype = tf.int64))
with file_io.FileIO(os.path.join(args["output_dir"], "batch_pred.txt"), mode = 'w') as f:
for best_items_for_user in topk.eval():
f.write(",".join(str(x) for x in best_items_for_user) + '\n')
def train_and_evaluate(args):
train_steps = int(0.5 + (1.0 * args["num_epochs"] * args["nusers"]) / args["batch_size"])
steps_in_epoch = int(0.5 + args["nusers"] / args["batch_size"])
print("Will train for {} steps, evaluating once every {} steps".format(train_steps, steps_in_epoch))
def experiment_fn(output_dir):
return tf.contrib.learn.Experiment(
tf.contrib.factorization.WALSMatrixFactorization(
num_rows = args["nusers"],
num_cols = args["nitems"],
embedding_dimension = args["n_embeds"],
model_dir = args["output_dir"]),
train_input_fn = read_dataset(tf.estimator.ModeKeys.TRAIN, args),
eval_input_fn = read_dataset(tf.estimator.ModeKeys.EVAL, args),
train_steps = train_steps,
eval_steps = 1,
min_eval_frequency = steps_in_epoch
)
from tensorflow.contrib.learn.python.learn import learn_runner
learn_runner.run(experiment_fn = experiment_fn, output_dir = args["output_dir"])
batch_predict(args)
import shutil
shutil.rmtree(path = "wals_trained", ignore_errors=True)
train_and_evaluate({
"output_dir": "wals_trained",
"input_path": "data/",
"num_epochs": 0.05,
"nitems": NITEMS,
"nusers": NUSERS,
"batch_size": 512,
"n_embeds": 10,
"topk": 3
})
!ls wals_trained
!head wals_trained/batch_pred.txt
os.environ["NITEMS"] = str(NITEMS)
os.environ["NUSERS"] = str(NUSERS)
%%bash
rm -rf wals.tar.gz wals_trained
gcloud ai-platform local train \
--module-name=walsmodel.task \
--package-path=${PWD}/walsmodel \
-- \
--output_dir=${PWD}/wals_trained \
--input_path=${PWD}/data \
--num_epochs=0.01 --nitems=${NITEMS} --nusers=${NUSERS} \
--job-dir=./tmp
%%bash
gsutil -m cp data/* gs://${BUCKET}/wals/data
%%bash
OUTDIR=gs://${BUCKET}/wals/model_trained
JOBNAME=wals_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=walsmodel.task \
--package-path=${PWD}/walsmodel \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC_GPU \
--runtime-version=$TFVERSION \
-- \
--output_dir=$OUTDIR \
--input_path=gs://${BUCKET}/wals/data \
--num_epochs=10 --nitems=${NITEMS} --nusers=${NUSERS}
def get_factors(args):
with tf.Session() as sess:
estimator = tf.contrib.factorization.WALSMatrixFactorization(
num_rows = args["nusers"],
num_cols = args["nitems"],
embedding_dimension = args["n_embeds"],
model_dir = args["output_dir"])
row_factors = estimator.get_row_factors()[0]
col_factors = estimator.get_col_factors()[0]
return row_factors, col_factors
args = {
"output_dir": "gs://{}/wals/model_trained".format(BUCKET),
"nitems": NITEMS,
"nusers": NUSERS,
"n_embeds": 10
}
user_embeddings, item_embeddings = get_factors(args)
print(user_embeddings[:3])
print(item_embeddings[:3])
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn.decomposition import PCA
pca = PCA(n_components = 3)
pca.fit(user_embeddings)
user_embeddings_pca = pca.transform(user_embeddings)
fig = plt.figure(figsize = (8,8))
ax = fig.add_subplot(111, projection = "3d")
xs, ys, zs = user_embeddings_pca[::150].T
ax.scatter(xs, ys, zs)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Create raw dataset
Step3: Create dataset for WALS
Step4: Creating rows and columns datasets
Step5: To summarize, we created the following data files from collab_raw.csv
Step6: This code is helpful in developing the input function. You don't need it in production.
Step7: Run as a Python module
Step8: Run on Cloud
Step9: This took <b>10 minutes</b> for me.
Step10: You can visualize the embedding vectors using dimensionality-reduction techniques such as PCA.
|
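The id-mapping step above, condensed to pure Python with no file I/O: each unique visitor id gets a dense integer index in first-seen order, which is what WALS needs for its row/column ids. The visitor ids here are made up.

```python
def create_mapping(values):
    # Assign each unique value a dense integer id, in first-seen order.
    value_to_id = {}
    for v in values:
        if v not in value_to_id:
            value_to_id[v] = len(value_to_id)
    return value_to_id

visitors = ["a9", "b7", "a9", "c3"]
user_mapping = create_mapping(visitors)
print(user_mapping)                        # {'a9': 0, 'b7': 1, 'c3': 2}

# Remap the raw ids to the dense integer ids, as in the "userId" column.
mapped = [user_mapping[v] for v in visitors]
print(mapped)                              # [0, 1, 0, 2]
```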
5,548
|
<ASSISTANT_TASK:>
Python Code:
from qrays import Qvector, Vector
a = Qvector((1,0,0,0))
a.length()
b = Qvector((0,1,0,0))
(a-b).length()
from tetvols import make_tet
import unittest
class TestQuadrays(unittest.TestCase):
def test_martian(self):
p = Qvector((2,1,0,1))
q = Qvector((2,1,1,0))
r = Qvector((2,0,1,1))
result = make_tet(5*q, 2*p, 2*r) # three scalars
self.assertAlmostEqual(result[0], 20, 7)
if __name__ == '__main__':
unittest.main(argv=['first-arg-is-ignored'], exit=False)
def martian_volume(s0, s1, s2):
    """p, q, r are unit vectors from the corner of a
    regular unit-volume tetrahedron, expressed using
    Quadray coordinates. Stretch or shrink using any
    three scalars, even negatives, which reverse a
    vector.
    """
p = Qvector((2,1,0,1))
q = Qvector((2,1,1,0))
r = Qvector((2,0,1,1))
result = make_tet(s0*q, s1*p, s2*r)
return result[0]
float("{:5f}".format(martian_volume(.5, .5, .5)))
p = Qvector((2,1,0,1))
p.length()
from IPython.display import YouTubeVideo
YouTubeVideo("F9ZVCSPIUls")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Quadray coordinate system stands on its own without merging with 20th Century Neoplatonist esoterica. Some of the shoptalk which follows is inessential to the system's other applications, insofar as it has any.
Step3: In the demonstration below, we're showing that any three positive scalars s0, s1, s2, used to stretch (or shrink) three vectors p, q, r from the corner of a regular tetrahedron, define a tetrahedron of volume s0 * s1 * s2.
Step4: What's the volume formula we're using? A simple conversion constant will take us from XYZ to IVM volume, but the algorithm below returns the IVM volume natively. It's the XYZ volume we use the constant to derive.
|
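As a cross-check on the scaling claim above, the same property holds for the ordinary XYZ tetrahedron-volume formula |det([p, q, r])| / 6: scaling the three corner vectors by s0, s1, s2 multiplies the volume by s0*s1*s2. This sketch stays in XYZ units and does not attempt the XYZ-to-IVM conversion that `make_tet` handles.

```python
import numpy as np

def xyz_tet_volume(p, q, r):
    # XYZ volume of the tetrahedron spanned by corner vectors p, q, r.
    return abs(np.linalg.det(np.array([p, q, r]))) / 6.0

p, q, r = np.eye(3)                # unit vectors along x, y, z
v1 = xyz_tet_volume(p, q, r)       # corner tetrahedron of the unit cube
v2 = xyz_tet_volume(5 * p, 2 * q, 2 * r)
print(v1)                          # 1/6
print(v2 / v1)                     # scaling by 5, 2, 2 multiplies volume by 20
```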
5,549
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
def V_vdW(p, kT, N, a=0, b=0):
Solve the van der Waals equation for V.
coeffs = [p, - (kT * N + p * N *b), a * N**2, - a * N**3 * b]
V = sorted(np.roots(coeffs))
return np.real(V).tolist()
print(V_vdW(1.0, 1.0, 1000))
import signac
project = signac.get_project('projects/tutorial')
for job in project:
if 'a' not in job.sp:
job.sp.a = 0
if 'b' not in job.sp:
job.sp.b = 0
for job in project:
if 'V' in job.document:
job.document['V_liq'] = 0
job.document['V_gas'] = job.document.pop('V')
with open(job.fn('V.txt'), 'w') as file:
file.write('{},{}\n'.format(0, job.document['V_gas']))
for job in project:
print(job.statepoint(), job.document)
vdW = {
# Source: https://en.wikipedia.org/wiki/Van_der_Waals_constants_(data_page)
'ideal gas': {'a': 0, 'b': 0},
'argon':{'a': 1.355, 'b': 0.03201},
'water': {'a': 5.536, 'b': 0.03049},
}
def calc_volume(job):
    V = V_vdW(**job.statepoint())
job.document['V_liq'] = min(V)
job.document['V_gas'] = max(V)
with open(job.fn('V.txt'), 'w') as file:
file.write('{},{}\n'.format(min(V), max(V)))
for fluid in vdW:
for p in np.linspace(0.1, 10.0, 10):
sp = {'N': 1000, 'p': p, 'kT': 1.0}
sp.update(vdW[fluid])
job = project.open_job(sp)
job.document['fluid'] = fluid
calc_volume(job)
ps = set((job.statepoint()['p'] for job in project))
for fluid in sorted(vdW):
print(fluid)
for p in sorted(ps):
jobs = project.find_jobs({'p': p}, doc_filter={'fluid': fluid})
for job in jobs:
print(round(p, 2), round(job.document['V_liq'], 4), round(job.document['V_gas'], 2))
print()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1.4 Modifying the Data Space
Step2: You will notice that this equation is a cubic polynomial and therefore has 3 possible solutions instead of only one!
Step3: That is because the vdW system has a critical point and up to three possible solutions.
Step4: Please check out the section on State Point Modifications in the reference documentation for a detailed description of how to modify state points.
Step5: Let's verify our modifications!
Step6: Next, we add a few state points with known parameters.
Step7: The fluid label is stored in the job document as a hint about which parameters were used; however, it is deliberately not part of the state point, since our calculation is based only on the parameters N, kT, p, a, and b.
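The up-to-three-solutions behavior described in Steps 2 and 3 can be checked in isolation. The sketch below re-implements the V_vdW helper from the code above, sorting after taking real parts so the comparison also works when a complex-conjugate pair appears; the water constants are the ones from the vdW table in the code, while the low pressure p = 0.01 is an illustrative choice, not a value from the tutorial.

```python
import numpy as np

def V_vdW(p, kT, N, a=0, b=0):
    """Solve the van der Waals equation for V; sorted real parts of the three roots."""
    coeffs = [p, -(kT * N + p * N * b), a * N**2, -a * N**3 * b]
    # Taking real parts before sorting also behaves when a complex pair appears.
    return sorted(np.real(np.roots(coeffs)).tolist())

# Ideal gas (a = b = 0): the cubic degenerates and V = N*kT/p is the only
# non-trivial root.
V_ideal = V_vdW(1.0, 1.0, 1000)
print(max(V_ideal))  # → 1000.0

# Water constants at a low (illustrative) pressure: three distinct real roots;
# the smallest, just above N*b, is the liquid branch, the largest the gas branch.
V_water = V_vdW(0.01, 1.0, 1000, a=5.536, b=0.03049)
print(min(V_water), max(V_water))
```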
|
5,550
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'emac-2-53-aerchem', 'toplevel')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Flux Correction
Step7: 3. Key Properties --> Genealogy
Step8: 3.2. CMIP3 Parent
Step9: 3.3. CMIP5 Parent
Step10: 3.4. Previous Name
Step11: 4. Key Properties --> Software Properties
Step12: 4.2. Code Version
Step13: 4.3. Code Languages
Step14: 4.4. Components Structure
Step15: 4.5. Coupler
Step16: 5. Key Properties --> Coupling
Step17: 5.2. Atmosphere Double Flux
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Step19: 5.4. Atmosphere Relative Winds
Step20: 6. Key Properties --> Tuning Applied
Step21: 6.2. Global Mean Metrics Used
Step22: 6.3. Regional Metrics Used
Step23: 6.4. Trend Metrics Used
Step24: 6.5. Energy Balance
Step25: 6.6. Fresh Water Balance
Step26: 7. Key Properties --> Conservation --> Heat
Step27: 7.2. Atmos Ocean Interface
Step28: 7.3. Atmos Land Interface
Step29: 7.4. Atmos Sea-ice Interface
Step30: 7.5. Ocean Seaice Interface
Step31: 7.6. Land Ocean Interface
Step32: 8. Key Properties --> Conservation --> Fresh Water
Step33: 8.2. Atmos Ocean Interface
Step34: 8.3. Atmos Land Interface
Step35: 8.4. Atmos Sea-ice Interface
Step36: 8.5. Ocean Seaice Interface
Step37: 8.6. Runoff
Step38: 8.7. Iceberg Calving
Step39: 8.8. Endoreic Basins
Step40: 8.9. Snow Accumulation
Step41: 9. Key Properties --> Conservation --> Salt
Step42: 10. Key Properties --> Conservation --> Momentum
Step43: 11. Radiative Forcings
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Step45: 12.2. Additional Information
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Step47: 13.2. Additional Information
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Step49: 14.2. Additional Information
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Step51: 15.2. Additional Information
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Step53: 16.2. Additional Information
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Step55: 17.2. Equivalence Concentration
Step56: 17.3. Additional Information
Step57: 18. Radiative Forcings --> Aerosols --> SO4
Step58: 18.2. Additional Information
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Step60: 19.2. Additional Information
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Step62: 20.2. Additional Information
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Step64: 21.2. Additional Information
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Step66: 22.2. Aerosol Effect On Ice Clouds
Step67: 22.3. Additional Information
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Step69: 23.2. Aerosol Effect On Ice Clouds
Step70: 23.3. RFaci From Sulfate Only
Step71: 23.4. Additional Information
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Step73: 24.2. Additional Information
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Step77: 25.4. Additional Information
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Step81: 26.4. Additional Information
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Step83: 27.2. Additional Information
Step84: 28. Radiative Forcings --> Other --> Land Use
Step85: 28.2. Crop Change Only
Step86: 28.3. Additional Information
Step87: 29. Radiative Forcings --> Other --> Solar
Step88: 29.2. Additional Information
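Every cell in this notebook repeats the same two-call pattern: DOC.set_id(...) followed by DOC.set_value(...). A data-driven loop would remove that duplication; the sketch below uses a StubDoc stand-in rather than the real pyesdoc NotebookOutput (whose internals are not shown here), and the property values are placeholders.

```python
# StubDoc mimics only the set_id/set_value calls used throughout the
# notebook; it is NOT the pyesdoc API, just a stand-in for illustration.
class StubDoc:
    def __init__(self):
        self.records = []        # (property_id, value) pairs, in call order
        self._current_id = None
    def set_id(self, prop_id):
        self._current_id = prop_id
    def set_value(self, value):
        self.records.append((self._current_id, value))

# Property ids taken from the cells above; the values are placeholders.
properties = {
    'cmip6.toplevel.key_properties.model_name': 'EMAC-2-53-AerChem',
    'cmip6.toplevel.key_properties.software_properties.coupler': 'Other: [Please specify]',
}
doc = StubDoc()
for prop_id, value in properties.items():
    doc.set_id(prop_id)
    doc.set_value(value)
print(len(doc.records))  # → 2
```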
|
5,551
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import platform
import utils  # the helpers below are called via the utils. prefix
from utils import *
from tqdm import tqdm_notebook  # used in the feature-building loops below
from mesh import *
from deformation import *
import numpy as np
import os
from sklearn import preprocessing, decomposition, neighbors, cluster
from scipy import sparse
# Mayavi stuff, can be a bit tricky on OSX
# OSX: x3d interactive backend, but still buggy
# Linux: png offscreen rendering, since the x3d backend shrinks when plotted
backend = 'x3d' if platform.system() == 'Darwin' else 'png'
from mayavi import mlab
mlab.init_notebook(backend, local=False)
plt.rcParams['figure.figsize'] = (17, 5)
# Base folder where data are located, user-specific
data_folder = '../data'
folder_path = os.path.join(data_folder, 'FWMesh/')
individuals, individuals_test = utils.load_set(0.75,folder_path)
nb_face = 15
expression_label, features = utils.construct_features_matrix(individuals, nb_face)
expression_label_test, features_test = utils.construct_features_matrix(individuals_test, nb_face)
eigenvalues, eigenvectors = utils.compute_laplacian(features,
nb_neighbours = 10,
distance_metric = 'euclidean',
nb_eigen = 20,
verbose=1)
x = eigenvectors[:,3]
y = eigenvectors[:,13]
z = eigenvectors[:,1]
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x, y, z, c=expression_label, cmap='RdYlBu', alpha=0.5);
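The helper utils.compute_laplacian is not shown in this notebook; a plausible self-contained version of the same idea — a k-nearest-neighbour graph, the normalized graph Laplacian, and its smallest eigenpairs as the spectral embedding — could look like the following. The random matrix is a placeholder for the real feature matrix.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse import csgraph

# Random placeholder features stand in for the real `features` matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))

# Symmetrized k-nearest-neighbour graph, then the normalized graph Laplacian.
W = kneighbors_graph(X, n_neighbors=10, mode='connectivity', metric='euclidean')
W = 0.5 * (W + W.T)
L = csgraph.laplacian(W.toarray(), normed=True)

# A dense eigendecomposition is fine at this size; keep the 20 smallest pairs
# as the embedding (np.linalg.eigh returns eigenvalues in ascending order).
eigenvalues, eigenvectors = np.linalg.eigh(L)
eigenvalues, eigenvectors = eigenvalues[:20], eigenvectors[:, :20]
print(eigenvectors.shape)  # → (120, 20)
```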
# Train classifier
n_neighbors = nb_face
clf = neighbors.KNeighborsClassifier(n_neighbors, weights='distance')
clf.fit(features, expression_label)
# Test
Z = clf.predict(features_test)
# Check performance
error_clf = np.count_nonzero(Z-expression_label_test)
print('Number of miss-classified test samples %d' % error_clf)
nb_cluster = len(eigenvalues[eigenvalues < 0.02])
cluster_features = cluster.KMeans(nb_cluster).fit(eigenvectors[:,:nb_cluster])
eigenvalues_test, eigenvectors_test = utils.compute_laplacian(features_test,
nb_neighbours = 10,
distance_metric = 'euclidean',
nb_eigen = 20,
verbose=1)
prediction = cluster_features.predict(eigenvectors_test[:,:nb_cluster])
plt.hist(prediction, bins=14)
folder_path = os.path.join(data_folder, 'LTS5Mesh/')
caucasian, caucasian_test = utils.load_set(0.75,folder_path)
# Check
totals = list(individuals)
totals.extend(caucasian)
totals_test = list(individuals_test)
totals_test.extend(caucasian_test)
len(totals[113])
nb_face = 0
expression_label, neutrals = utils.construct_features_matrix(totals, nb_face)
expression_label_test, neutrals_test = utils.construct_features_matrix(totals_test, nb_face)
eigenvalues_neutral, eigenvectors_neutral = utils.compute_laplacian(neutrals,
nb_neighbours = 10,
distance_metric = 'euclidean',
nb_eigen = 20,
verbose=1)
nb_cluster_neutral = len(eigenvalues_neutral[eigenvalues_neutral < 0.02])
cluster_features = cluster.KMeans(nb_cluster_neutral).fit(eigenvectors_neutral[:,:nb_cluster_neutral])
eigenvalues_neutral_test, eigenvectors_neutral_test = utils.compute_laplacian(neutrals_test,
nb_neighbours = 10,
distance_metric = 'euclidean',
nb_eigen = 20,
verbose=1)
prediction = cluster_features.predict(eigenvectors_neutral_test[:,:nb_cluster_neutral])
plt.hist(prediction, bins=nb_cluster_neutral)
folder_path = os.path.join(data_folder, 'FWMesh/')
tri = utils.load_triangulation(os.path.join(data_folder, 'FWTri/fw_triangulation.tri'))
neutrals = [ind[0] for ind in individuals]
neutrals = np.array(neutrals)
neutrals_avg = neutrals.mean(0)
mlab.clf()
color = (0.5, 0.5, 0.5)
trimesh_avg = mlab.triangular_mesh(neutrals_avg[:,0], neutrals_avg[:,1], neutrals_avg[:,2], tri, color=color)
mlab.view(0,0)
trimesh_avg
nb_face=15
for j, face in enumerate(individuals[0][1:nb_face]):
    if j == 0:
        features_err = face.reshape(1,-1) - individuals[0][0].reshape(1,-1)
    else:
        err = face.reshape(1,-1) - individuals[0][0].reshape(1,-1)
        features_err = np.vstack((features_err, err))
features_err.sum(0)
index_err = [i for i, j in enumerate(features_err.sum(0)) if j != 0]
features_avg = []
for i, ind in enumerate(tqdm_notebook(individuals, desc='Built Features')):
    for j, face in enumerate(ind[1:nb_face]):
        if (i == 0 and j == 0):
            features_avg = face.reshape(1,-1)[:,index_err] - neutrals_avg.reshape(1,-1)[:,index_err]
        else:
            err = face.reshape(1,-1)[:,index_err] - neutrals_avg.reshape(1,-1)[:,index_err]
            features_avg = np.vstack((features_avg, err))
for i, ind in enumerate(tqdm_notebook(individuals_test, desc='Built Features')):
    for j, face in enumerate(ind[1:nb_face]):
        if (i == 0 and j == 0):
            features_avg_test = face.reshape(1,-1)[:,index_err] - neutrals_avg.reshape(1,-1)[:,index_err]
        else:
            err = face.reshape(1,-1)[:,index_err] - neutrals_avg.reshape(1,-1)[:,index_err]
            features_avg_test = np.vstack((features_avg_test, err))
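The loops above grow the feature matrix with np.vstack on every iteration, which re-copies the whole array each time (quadratic overall). An equivalent pattern — shown here on hypothetical placeholder data, since the real face arrays aren't available — collects the rows in a list and stacks once at the end:

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder stand-ins for the real data: 4 "individuals", 6 "faces" each,
# 30 coordinates per face; index_err keeps all columns in this toy example.
sample_individuals = [[rng.normal(size=30) for _ in range(6)] for _ in range(4)]
sample_avg = rng.normal(size=30)
index_err = np.arange(30)
nb_face = 6

# Quadratic pattern (as in the notebook): np.vstack on every iteration.
features_slow = None
for ind in sample_individuals:
    for face in ind[1:nb_face]:
        err = face.reshape(1, -1)[:, index_err] - sample_avg.reshape(1, -1)[:, index_err]
        features_slow = err if features_slow is None else np.vstack((features_slow, err))

# Linear pattern: collect the rows in a list, stack once at the end.
rows = [face[index_err] - sample_avg[index_err]
        for ind in sample_individuals for face in ind[1:nb_face]]
features_fast = np.vstack(rows)

print(features_slow.shape, np.allclose(features_slow, features_fast))  # → (20, 30) True
```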
eigenvalues_avg, eigenvectors_avg = utils.compute_laplacian(features_avg,
nb_neighbours = 10,
distance_metric = 'euclidean',
nb_eigen = 200,
verbose=1)
expression_label = [j for i in range(0,len(individuals)) for j in range(1,nb_face)]
x = eigenvectors_avg[:, 3]
y = eigenvectors_avg[:, 2]
fig = plt.figure()
plt.scatter(x, y, c=expression_label, cmap='RdYlBu', alpha=0.5);
# Load source + target meshes + anchors
tri = utils.load_triangulation(os.path.join(data_folder, 'FWTri/fw_triangulation.tri'))
meshes = utils.load_meshes(os.path.join(data_folder, 'FWMesh/115_shape.bs'), [0, 22])
anchors = utils.load_anchor_point(os.path.join(data_folder, 'anchors.cfg'))
# Create mesh instances for source and target surface
mesh_xs = Mesh(meshes[0], tri)
mesh_xt = Mesh(meshes[1], tri)
N = meshes[0].shape[0]
# Dump meshes into .obj file
mesh_xs.save('source.obj')
mesh_xt.save('target.obj')
# Define ground-truth
d_true = mesh_xt.vertex - mesh_xs.vertex
d_norm = np.linalg.norm(d_true, axis=1)
# Source surface
mlab.clf()
color = (0.5, 0.5, 0.5)
trimesh_xs = mlab.triangular_mesh(mesh_xs.vertex[:,0], mesh_xs.vertex[:,1], mesh_xs.vertex[:,2], mesh_xs.tri, color=color)
mlab.view(0,0)
trimesh_xs
# Target surface, with displacment color coded (Red = large).
mlab.clf()
trimesh_xt = mlab.triangular_mesh(mesh_xt.vertex[:,0], mesh_xt.vertex[:,1], mesh_xt.vertex[:,2], mesh_xt.tri, scalars=d_norm)
mlab.view(0, 0)
trimesh_xt
# 1) create selection mask
K = len(anchors)
ridx = [k for k in range(K)]
cidx = anchors
data = [1.0] * K
M = sparse.coo_matrix((data, (ridx, cidx)), shape=(K, N), dtype=np.float32)
# 2) Compute laplacian
_,_, Lap = mesh_xs.compute_laplacian('cotan')
# 3) Compute target
Xs = mesh_xs.vertex
Xt = M.dot(mesh_xt.vertex)
# 4) Estimate the regularized deformation field
estm_xt, d = deform_regularized_anchor(Xs, Xt, M, Lap, 0.05)
# Error
err = np.linalg.norm(estm_xt - mesh_xt.vertex, axis=1)
e_def = np.mean(err)
print('Mean error of %f, for %d anchors' % (e_def, K))
m = Mesh(estm_xt, tri)
m.save('deformation_field.obj')
mlab.clf()
trimesh_xt_estm = mlab.triangular_mesh(estm_xt[:,0], estm_xt[:,1], estm_xt[:,2], mesh_xt.tri, scalars=err)
mlab.view(0, 0)
trimesh_xt_estm
# Reuse the same anchors for a fair comparison; comparisons with subsets are conducted below
anchorsIdx = anchors
anchors_pts = mesh_xt.vertex[anchorsIdx, :]
# Deform
estm_xt = deform_mesh(mesh_xs, anchors_pts, anchorsIdx, 1.0)
# Error
err = np.linalg.norm(estm_xt - mesh_xt.vertex, axis=1)
e = np.mean(err)
print('Mean error: %f, for %d anchors' % (e, len(anchorsIdx)))
mlab.clf()
trimesh_xt_estm = mlab.triangular_mesh(estm_xt[:,0], estm_xt[:,1], estm_xt[:,2], mesh_xt.tri, scalars=err)
mlab.view(0, 0)
trimesh_xt_estm
# Analyse the error with respect to the number of anchor points selected. What happens if we take only a subset of
# the original anchor set?
err_step = []
estm_step = []
n_step = 11
for k in range(1, n_step):
idx = np.random.permutation(anchors)
n_idx = k * int(len(anchors) / n_step)
anchorsIdx = idx[0:n_idx]
anchors_pts = mesh_xt.vertex[anchorsIdx, :]
# Deform
estm_xt = deform_mesh(mesh_xs, anchors_pts, anchorsIdx, 1.0)
estm_step.append(estm_xt)
# Error
err = np.linalg.norm(estm_xt - mesh_xt.vertex, axis=1)
e = np.mean(err)
print('Mean error: %f, for %d anchors' % (e, len(anchorsIdx)))
err_step.append(e)
# Error comparisons
plt.plot(err_step)
plt.title('Error evolution vs #anchors')
plt.ylabel('Mean vertex error')
plt.xlabel('Number of anchors selected')
x = [k * int(len(anchors) / n_step) for k in range(1, n_step)]
xi = [k for k in range(0, n_step - 1)]
plt.xticks(xi, x)
# Deformation field
plt.plot([0, xi[-1]],[e_def, e_def], '--')
# Legend
plt.legend(['Constrained reconstruction', 'Deformation field'])
plt.show()
mlab.clf()
trimesh_xt_estm_5 = mlab.triangular_mesh(estm_step[5][:,0], estm_step[5][:,1], estm_step[5][:,2], mesh_xt.tri, color=(0.5, 0.5, 0.5))
mlab.view(0, 0)
trimesh_xt_estm_5
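The helper `deform_regularized_anchor` called above is not defined in this excerpt. As a hedged, dense-matrix sketch (an assumption about its behaviour, not the notebook's actual implementation), it plausibly solves a Laplacian-regularized least-squares problem per coordinate:

```python
import numpy as np

def deform_regularized_anchor_sketch(Xs, Xt_anchor, M, Lap, lam):
    # Hypothetical stand-in for deform_regularized_anchor: solve
    #   min_d ||M (Xs + d) - Xt_anchor||^2 + lam * ||Lap d||^2
    # per coordinate via the normal equations (dense for clarity;
    # the notebook presumably uses sparse matrices).
    A = M.T @ M + lam * (Lap.T @ Lap)
    b = M.T @ (Xt_anchor - M @ Xs)
    d = np.linalg.solve(A, b)
    return Xs + d, d
```

With this formulation, moving a single anchored vertex propagates a smooth displacement to the unconstrained vertices through the Laplacian term.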
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Introduction
Step2: 3.2 Classification approach - Supervised
Step3: 3.2.2 Compute Laplacian, eigenvectors and eigenvalues.
Step4: Calculating the eigenvalues makes it possible to find the number of clusters inherent in the features. Here, we find 14 eigenvalues equal to zero, which matches the 14 different expressive faces.
Step5: In the plot above, we can see that similar expressions form 14 clearly separated clusters, especially when using the eigenvectors as coordinates.
Step6: 3.3 Clustering approach - Unsupervised
Step7: 3.3.1.2 Compute Laplacian for the test set
Step8: Again, we can see that the eigenvalues fit the number of clusters.
Step9: In this plot, we can observe that the model, on average, can correctly aggregate identical facial expressions. Nevertheless, it fails to recognize three expressions and mixes those meshes with another face that might be very close to them.
Step10: The Caucasian dataset contains 120 samples. With the split ratio between the train and test sets, 90 Caucasian samples are assigned to the train set whereas 30 are in the test set.
Step11: 3.3.2.2 Compute Laplacian for both sets
Step12: We can see in the plots above that in both sets, the distances and the eigenvalues indicate that we have two clusters. From this point we can be confident that our model will be able to differentiate between ethnicities.
Step13: Here, we can observe that the model is able to differentiate the two ethnicities, Asian and Caucasian. Indeed, one group contains the 30 Caucasian samples from the test set and the other has a size matching the number of Asians in the test set.
Step14: 3.3.3.1 Create average neutral face
Step15: In order to have a more precise analysis, the features matrix is reduced to only meaningful nodes (ex
Step16: 3.3.3.2 Compute features for train and test sets
Step17: 3.3.3.3 Compute Laplacian
Step18: We see from the graph of the eigenvalues that we have approximately 115 zero eigenvalues, meaning we have 115 connected subgraphs, corresponding, we believe, to the 113 individuals from the train set. Thus, we conclude that the use of the averaged neutral face is not convincing, since it does not allow a clustering of the different kinds of faces.
Step19: On this graph we see clusters of dots with different colors that may be all kinds of faces grouped by individual.
Step20: Figure above shows an example of $\boldsymbol{x}^s$ used as the source for the deformation estimation
Step21: Figure above shows an example of a targeted expression, $\boldsymbol{x}^t$, that needs to be recovered. The colors represent the distance from the source, $\boldsymbol{x}^s$, which is the deformation $\boldsymbol{d}$ that needs to be estimated.
Step22: Figure above shows the estimated target surface. The color represents the point-to-point error between the two surfaces. The sparse blue dots on the surface can be explained by the fact that they are the vertices corresponding to the anchor points, therefore the error at those locations is small compared to the rest of the surface. Regarding the other parts, such as the top of the skull or the ears, the error is minimal since there is no deformation in those areas.
Step23: Figure above shows the result with positional and curvature constraints. Using the same set of anchors, the average per-vertex error is smaller and the deformation is more realistic than previously shown.
Step24: This plot shows the evolution of the average per-vertex error as the number of anchors is increased. The deformation field estimated earlier is used as a baseline for comparison (i.e. estimated with all anchors!).
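Steps 4, 8 and 18 all rely on the fact that the multiplicity of the zero eigenvalue of a graph Laplacian equals the number of connected components (clusters). A minimal numpy sketch of that check (the `W` below is a toy adjacency matrix for illustration, not the notebook's data):

```python
import numpy as np

def count_components(adjacency, tol=1e-8):
    # Number of connected components = number of (near-)zero
    # eigenvalues of the combinatorial Laplacian L = D - W.
    W = np.asarray(adjacency, dtype=float)
    L = np.diag(W.sum(axis=1)) - W
    eigvals = np.linalg.eigvalsh(L)  # L is symmetric -> real spectrum
    return int(np.sum(np.abs(eigvals) < tol))

# Toy graph: two disjoint edges -> two components
W = np.zeros((4, 4))
W[0, 1] = W[1, 0] = 1.0
W[2, 3] = W[3, 2] = 1.0
```

The tolerance matters in practice: with noisy similarity weights the small eigenvalues are only near zero, which is why the notebook inspects the eigenvalue plot rather than an exact count.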
|
5,552
|
<ASSISTANT_TASK:>
Python Code:
import numba
import numexpr as ne
import numpy as np
import matplotlib.pyplot as roberplot
import matplotlib.image as mpimg
%load_ext line_profiler
%load_ext memory_profiler
def rgb2gray(rgb):
return np.dot(rgb[...,:3], [0.299, 0.587, 0.114])
def image_plot(img):
roberplot.figure(figsize=(10,5))
roberplot.imshow(img, cmap='gray')
roberplot.axis('off')
roberplot.show()
def histogram_plot(h):
roberplot.figure(figsize=(5,10))
roberplot.plot(range(len(h)), h)
roberplot.xlim([-5,260])
roberplot.ylim([0,200000])
roberplot.xlabel('Feature index')
roberplot.ylabel('Number of repetitions')
roberplot.grid()
roberplot.show()
def lbp(region):
"""Region is a 3x3 ndarray."""
ret = (region > region[1,1]).astype(int)
return (2**7)*ret[0,0] + (2**6)*ret[0,1] + (2**5)*ret[0,2] + (2**4)*ret[1,2] + (2**3)*ret[2,2] + \
(2**2)*ret[2,1] + (2**1)*ret[2,0] + (2**0)*ret[1,0]
def lbp_representation(image):
"""Compute the LBP representation of an image."""
m,n = image.shape
ret = np.empty((m-2,n-2), dtype=int)  # integer LBP codes so they can index the histogram
for i in range(1,m-1):
for j in range(1,n-1):
ret[i-1,j-1] = lbp(image[i-1:i+2, j-1:j+2])
return ret
def lbp_histogram(image):
"""Compute the LBP histogram representation of an image."""
lbp_image = lbp_representation(image)
m,n = lbp_image.shape
hist = np.zeros(256)
for i in range(m):
for j in range(n):
hist[lbp_image[i,j]] += 1
return hist
img = mpimg.imread('data/doge.jpg')
img = rgb2gray(img)
image_plot(img)
lbp_image = lbp_representation(img)
image_plot(lbp_image)
h = lbp_histogram(img)
histogram_plot(h)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Local Binary Pattern Representation
Step5: Just an example of usage
|
5,553
|
<ASSISTANT_TASK:>
Python Code:
import time
import numpy as np
import visa
rm = visa.ResourceManager() # Create the Resource Manager
rm.list_resources() # This lets you see what pyvisa recognizes as connected to the PC
resource_name = 'USB0::0x0699::0x0346::C033250::INSTR' # Example name by which Pyvisa identifies the instrument
fungen = rm.open_resource(resource_name) # Open communication with the device by calling it by its name
fungen.write('*IDN?') # Among other things, we can ask the function generator who it is or what it calls itself
print(fungen.read())
print(fungen.query('*IDN?'))
# Logarithmic frequency ramp
# The first two numbers (1 and 3) are the exponents of the limits (10^1 and 10^3)
# The next one is the number of steps
for freq in np.logspace(1, 3, 20):
fungen.write('FREQ %f' % freq)
time.sleep(0.1)
# Linear amplitude ramp
# The first two numbers (0 and 1) are the limits.
# The next one is the number of steps
for amplitude in np.linspace(0, 1, 10):
fungen.write('VOLT %f' % amplitude)
time.sleep(0.1)
# Linear offset ramp
# The first two numbers (0 and 1) are the limits.
# The next one is the number of steps
for offset in np.linspace(0, 1, 10):
fungen.write('VOLT:OFFS %f' % offset)
time.sleep(0.1)
fungen.close()
rm = visa.ResourceManager()
rm.list_resources()
resource_name = 'USB0::0x0699::0x0363::C065089::INSTR'
osci = rm.open_resource(resource_name)
osci.query('*IDN?')
osci.write('DAT:ENC RPB') # Remember this may depend on the instrument used and its syntax
osci.write('DAT:WID 1')
xze, xin, yze, ymu, yoff = osci.query_ascii_values('WFMPRE:XZE?;XIN?;YZE?;YMU?;YOFF?;', separator=';')
data = osci.query_binary_values('CURV?', datatype='B', container=np.array)
tiempo = xze + np.arange(len(data)) * xin
import matplotlib.pyplot as plt
plt.plot(tiempo, data)
osci.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the steps above, it is important to recognize when PyVISA detects our instruments and when it does not. If we connect over USB, the resource name should say so (it would look different over a GPIB port or similar), and if it is an instrument, it should say "INSTR" or similar. The remaining numbers in the resource name identify the brand, model and serial number of each device.
Step2: Besides querying it, we can also set conditions on the function generator, such as the voltage, frequency or offset of the signal we want it to generate
Step3: Note that the write function expects a string argument to send to the generator, which matches what we said before. Which commands to use and how to write them will always depend on the instrument we use and its manual.
Step4: Oscilloscope
Step5: For what follows, it is important to recognize that the data the oscilloscope can give us may be written in ASCII or in binary. ASCII is a way of enumerating all known digits and keys; it is a standard for numeric representation. Binary is what it is, a number written in binary. Generally, ASCII is easier (though with some difficulty) for a human to read, but that makes it harder to manipulate. Binary data, on the other hand, is much harder to read but better for calculations.
Step6: Next, we will need to calibrate the oscilloscope data. For that we need certain parameters
Step7: Then we ask it to read out the curve on screen, and the magic happens when we plot it.
Step8: And let's not forget to close communication with the device
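Step6 mentions the preamble parameters (`xze`, `xin`, `yze`, `ymu`, `yoff`) needed to calibrate the raw curve; the code above only builds the time axis. As a sketch, the vertical scaling below follows the usual Tektronix convention `(raw - YOFF) * YMULT + YZERO`, which you should confirm against your scope's programmer manual:

```python
import numpy as np

def scale_waveform(raw, xze, xin, yze, ymu, yoff):
    # Time axis: XZEro + sample index * XINcr
    # Vertical axis: (raw - YOFf) * YMUlt + YZEro
    raw = np.asarray(raw, dtype=float)
    tiempo = xze + np.arange(raw.size) * xin
    volts = (raw - yoff) * ymu + yze
    return tiempo, volts
```

With this, the plotted curve is in volts versus seconds rather than raw 8-bit levels versus sample index.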
|
5,554
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pytz
import matplotlib.pyplot as plt
import pandas as pd
import ulmo
from ulmo.util import convert_datetime
print(ulmo.cuahsi.wof.__doc__)
print([obj for obj in dir(ulmo.cuahsi.wof) if not obj.startswith('__')])
# WaterML/WOF WSDL endpoints
wsdlurl = 'http://54.186.36.247:8080/mysqlodm2timeseries/soap/cuahsi_1_0/.wsdl' # WOF 1.0
# 'network code'
networkcd = 'mysqlodm2timeseries'
sitecd = 'USU-LBR-Mendon'
siteinfo = ulmo.cuahsi.wof.get_site_info(wsdlurl, networkcd+':'+sitecd)
type(siteinfo), siteinfo.keys()
siteinfo['network'], siteinfo['code'], siteinfo['name']
print(siteinfo['location'])
type(siteinfo['series']), len(siteinfo['series']), siteinfo['series'].keys()
siteinfo['series']['mysqlodm2timeseries:USU33'].keys()
siteinfo['series']['mysqlodm2timeseries:USU33']
def site_series_values_to_df(series_values, variable_name):
# Create a clean timeseries list of (dt, val) tuples
tsdt_tuplst = [
(convert_datetime(valdict['datetime']).replace(tzinfo=pytz.utc),
float(valdict['value'])) for valdict in series_values['values']
]
dt, val = zip(*tsdt_tuplst)
ts_df = pd.DataFrame({'time': dt, variable_name: val})
ts_df.set_index('time', inplace=True)
ts_df.sort_index(ascending=True, inplace=True)
return ts_df
print(
ulmo.cuahsi.wof.get_values.__doc__.replace('<', '').replace('>', '')
)
variablecd = 'USU33'
site_values = ulmo.cuahsi.wof.get_values(wsdlurl, networkcd+':'+sitecd, networkcd+':'+variablecd)
site_values.keys()
sitevariable = site_values['variable']
sitevariable
type(site_values['values']), site_values['values'][0].keys()
site_values['values'][0]['datetime'], site_values['values'][-1]['datetime']
variable_name = '%s (%s)' % (sitevariable['name'], sitevariable['value_type'])
variable_name
dtstr_last = site_values['values'][-1]['datetime']
convert_datetime(dtstr_last).replace(tzinfo=pytz.utc)
ts_df = site_series_values_to_df(site_values, variable_name)
ts_df.tail()
type(ts_df), ts_df.columns, ts_df.index.dtype, ts_df.index.min(), ts_df.index.max()
fig, ax = plt.subplots(figsize=(10, 4))
varlabel = ts_df.columns[0]
ts_df[varlabel].plot(style='-', ax=ax)
ax.set_ylabel(varlabel + ', ' + sitevariable['units']['abbreviation']);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: CUAHSI WaterOneFlow
Step2: Get site information
Step3: Get Values
Step4: 'odm2timeseries
Step5: site_values['values'] is a list of individual time series values (timestamp and data value)
Step6: Start and end timestamps (local time with time offset vs utc; iso8601 format)
Step7: Set a nice, user-friendly variable name string.
Step8: Hmm, this failed
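The `convert_datetime(...).replace(tzinfo=pytz.utc)` pattern used above can also be sketched with only the standard library (a stand-in for illustration, not ulmo's implementation; assumes Python 3.7+ `datetime.fromisoformat`):

```python
from datetime import datetime, timezone

def parse_wof_datetime(dtstr):
    # Parse an ISO-8601 WaterML timestamp; treat naive timestamps as
    # UTC and normalise offset-aware ones to UTC.
    dt = datetime.fromisoformat(dtstr)
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    return dt.astimezone(timezone.utc)
```

Keeping every timestamp timezone-aware in UTC avoids mixed naive/aware comparisons when the values are later indexed in a pandas DataFrame.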
|
5,555
|
<ASSISTANT_TASK:>
Python Code:
import os
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
raw.crop(tmax=60).load_data()
raw.plot()
raw.plot_psd(average=True)
midline = ['EEG 002', 'EEG 012', 'EEG 030', 'EEG 048', 'EEG 058', 'EEG 060']
raw.plot_psd(picks=midline)
raw.plot_psd_topo()
raw.copy().pick_types(meg=False, eeg=True).plot_psd_topo()
raw.plot_sensors(ch_type='eeg')
raw.plot_projs_topomap()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We've seen in a previous tutorial <tut-raw-class> how to plot data
Step2: It may not be obvious when viewing this tutorial online, but by default, the
Step3: If the data have been filtered, vertical dashed lines will automatically
Step4: Alternatively, you can plot the PSD for every sensor on its own axes, with
Step5: This plot is also interactive; hovering over each "thumbnail" plot will
Step6: Plotting sensor locations from Raw objects
Step7: Plotting projectors from Raw objects
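Under the hood, `plot_psd` displays a power spectral density estimate per channel. A minimal numpy-only periodogram sketch of the same idea (illustrative only; MNE's actual estimators are Welch/multitaper based):

```python
import numpy as np

def periodogram(x, fs):
    # Windowed periodogram normalised to power per Hz.
    n = len(x)
    window = np.hanning(n)
    spec = np.abs(np.fft.rfft(x * window)) ** 2
    spec /= fs * np.sum(window ** 2)
    spec[1:-1] *= 2  # fold in the negative frequencies
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spec

# A 10 Hz sine sampled at 200 Hz should peak at 10 Hz
fs = 200.0
t = np.arange(0, 2, 1 / fs)
freqs, spec = periodogram(np.sin(2 * np.pi * 10 * t), fs)
```

This is why a power-line artifact shows up as a sharp peak at 50/60 Hz in the PSD plots above.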
|
5,556
|
<ASSISTANT_TASK:>
Python Code:
def DH_simbolico(a, d, α, θ):
from sympy import Matrix, sin, cos
# YOUR CODE HERE
raise NotImplementedError()
from sympy import Matrix, sin, cos, pi
from nose.tools import assert_equal
assert_equal(DH_simbolico(0,0,0,pi/2), Matrix([[0,-1,0,0],[1,0,0,0], [0,0,1,0],[0,0,0,1]]))
assert_equal(DH_simbolico(0,0,pi/2,0), Matrix([[1,0,0,0],[0,0,-1,0], [0,1,0,0],[0,0,0,1]]))
assert_equal(DH_simbolico(0,1,0,0), Matrix([[1,0,0,0],[0,1,0,0], [0,0,1,1],[0,0,0,1]]))
assert_equal(DH_simbolico(1,0,0,0), Matrix([[1,0,0,1],[0,1,0,0], [0,0,1,0],[0,0,0,1]]))
def cinematica_PUMA(q1, q2, q3):
from sympy import pi, var
var("l1:4")
# YOUR CODE HERE
raise NotImplementedError()
return A1, A2, A3
from nose.tools import assert_equal
from sympy import pi, var, Matrix
var("l1:4")
A1, A2, A3 = cinematica_PUMA(0, 0, 0)
assert_equal(A1*A2*A3, Matrix([[1,0,0,l2+l3], [0,0,-1,0], [0,1,0,l1], [0,0,0,1]]))
A1, A2, A3 = cinematica_PUMA(pi/2, 0, 0)
assert_equal(A1*A2*A3, Matrix([[0,0,1,0], [1,0,0,l2+l3], [0,1,0,l1], [0,0,0,1]]))
A1, A2, A3 = cinematica_PUMA(0, pi/2, 0)
assert_equal(A1*A2*A3, Matrix([[0,-1,0,0], [0,0,-1,0], [1,0,0,l1+l2+l3], [0,0,0,1]]))
A1, A2, A3 = cinematica_PUMA(0, 0, pi/2)
assert_equal(A1*A2*A3, Matrix([[0,-1,0,l2], [0,0,-1,0], [1,0,0,l1+l3], [0,0,0,1]]))
def transformacion_PUMA(q1, q2, q3):
from sympy import pi, var
var("l1:4")
# YOUR CODE HERE
raise NotImplementedError()
from nose.tools import assert_equal
from sympy import pi, var, Matrix
var("l1:4")
assert_equal(transformacion_PUMA(0, 0, 0), Matrix([[1,0,0,l2+l3], [0,0,-1,0], [0,1,0,l1], [0,0,0,1]]))
assert_equal(transformacion_PUMA(pi/2, 0, 0), Matrix([[0,0,1,0], [1,0,0,l2+l3], [0,1,0,l1], [0,0,0,1]]))
assert_equal(transformacion_PUMA(0, pi/2, 0), Matrix([[0,-1,0,0], [0,0,-1,0], [1,0,0,l1+l2+l3], [0,0,0,1]]))
assert_equal(transformacion_PUMA(0, 0, pi/2), Matrix([[0,-1,0,l2], [0,0,-1,0], [1,0,0,l1+l3], [0,0,0,1]]))
def DH_numerico(a, d, α, θ):
# YOUR CODE HERE
raise NotImplementedError()
def cinematica_PUMA(q1, q2, q3):
# Assume all the lengths are equal to 1
l1, l2, l3 = 1, 1, 1
from numpy import pi
# YOUR CODE HERE
raise NotImplementedError()
return A1, A2, A3
def grafica_PUMA(q1, q2, q3):
from numpy import matrix
# YOUR CODE HERE
raise NotImplementedError()
fig = figure(figsize=(8, 8))
ax = fig.add_subplot(111, projection='3d')
ax.plot(xs, ys, zs, "-o")
ax.set_xlim(-1.1, 1.1)
ax.set_ylim(-1.1, 1.1)
ax.set_zlim(-0.1, 2.1)
return ax
%matplotlib inline
from matplotlib.pyplot import figure, plot, style
from mpl_toolkits.mplot3d import Axes3D
style.use("ggplot")
from numpy.testing import assert_allclose
from numpy import array
ax = grafica_PUMA(0,0.5,0.5)
ls = ax.get_lines()
assert_allclose(ls[0].get_xdata(), array([0, 0, 0.8775, 1.417885]), rtol=1e-01, atol=1e-01)
assert_allclose(ls[0].get_ydata(), array([-0.0384900179, 0, 0.00915, 0.03809]), rtol=1e-01, atol=1e-01)
%matplotlib inline
from matplotlib.pyplot import figure, plot, style
from mpl_toolkits.mplot3d import Axes3D
style.use("ggplot")
from ipywidgets import interact
from numpy import pi
τ = 2*pi
# YOUR CODE HERE
raise NotImplementedError()
from nose.tools import assert_almost_equal
from numpy import pi
τ = 2*pi
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a function that takes as arguments the degree-of-freedom parameters of a PUMA-type manipulator and returns the homogeneous transformation matrices associated with each link.
Step2: Create a function that, given the manipulator angles, returns the total transformation of the manipulator (use the function created in the second problem).
Step3: Create a function that, given the manipulator angles, plots the positions of the links of the manipulator from the first problem (use the functions created in the first and second problems, slightly modified to accept numeric matrices, as well as the function created in the previous lab for plotting a robotic system).
Step4: Use the interact function to manipulate the position of the manipulator so that its position is approximately $q_1=0.6rad$, $q_2=0.2rad$ and $q_3 = -0.8rad$
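For reference, the numeric DH transform under the standard convention A = Rot_z(theta) Trans_z(d) Trans_x(a) Rot_x(alpha) can be sketched as follows. This is one possible solution consistent with the assertions in the exercise cells above, offered as a hedged sketch rather than the official answer:

```python
import numpy as np

def dh_numeric(a, d, alpha, theta):
    # Standard Denavit-Hartenberg homogeneous transformation.
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])
```

The full manipulator transform is then the ordered product of one such matrix per link.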
|
5,557
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csiro-bom', 'sandbox-3', 'landice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Ice Albedo
Step7: 1.4. Atmospheric Coupling Variables
Step8: 1.5. Oceanic Coupling Variables
Step9: 1.6. Prognostic Variables
Step10: 2. Key Properties --> Software Properties
Step11: 2.2. Code Version
Step12: 2.3. Code Languages
Step13: 3. Grid
Step14: 3.2. Adaptive Grid
Step15: 3.3. Base Resolution
Step16: 3.4. Resolution Limit
Step17: 3.5. Projection
Step18: 4. Glaciers
Step19: 4.2. Description
Step20: 4.3. Dynamic Areal Extent
Step21: 5. Ice
Step22: 5.2. Grounding Line Method
Step23: 5.3. Ice Sheet
Step24: 5.4. Ice Shelf
Step25: 6. Ice --> Mass Balance
Step26: 7. Ice --> Mass Balance --> Basal
Step27: 7.2. Ocean
Step28: 8. Ice --> Mass Balance --> Frontal
Step29: 8.2. Melting
Step30: 9. Ice --> Dynamics
Step31: 9.2. Approximation
Step32: 9.3. Adaptive Timestep
Step33: 9.4. Timestep
|
5,558
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image(filename='images/mdgxs.png', width=350)
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import openmc
import openmc.mgxs as mgxs
# Instantiate some Nuclides
h1 = openmc.Nuclide('H1')
o16 = openmc.Nuclide('O16')
u235 = openmc.Nuclide('U235')
u238 = openmc.Nuclide('U238')
pu239 = openmc.Nuclide('Pu239')
zr90 = openmc.Nuclide('Zr90')
# Instantiate a Material and register the Nuclides
inf_medium = openmc.Material(name='moderator')
inf_medium.set_density('g/cc', 5.)
inf_medium.add_nuclide(h1, 0.03)
inf_medium.add_nuclide(o16, 0.015)
inf_medium.add_nuclide(u235 , 0.0001)
inf_medium.add_nuclide(u238 , 0.007)
inf_medium.add_nuclide(pu239, 0.00003)
inf_medium.add_nuclide(zr90, 0.002)
# Instantiate a Materials collection and export to XML
materials_file = openmc.Materials([inf_medium])
materials_file.default_xs = '71c'
materials_file.export_to_xml()
# Instantiate boundary Planes
min_x = openmc.XPlane(boundary_type='reflective', x0=-0.63)
max_x = openmc.XPlane(boundary_type='reflective', x0=0.63)
min_y = openmc.YPlane(boundary_type='reflective', y0=-0.63)
max_y = openmc.YPlane(boundary_type='reflective', y0=0.63)
# Instantiate a Cell
cell = openmc.Cell(cell_id=1, name='cell')
# Register bounding Surfaces with the Cell
cell.region = +min_x & -max_x & +min_y & -max_y
# Fill the Cell with the Material
cell.fill = inf_medium
# Instantiate Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(cell)
# Create Geometry and set root Universe
openmc_geometry = openmc.Geometry()
openmc_geometry.root_universe = root_universe
# Export to "geometry.xml"
openmc_geometry.export_to_xml()
# OpenMC simulation parameters
batches = 50
inactive = 10
particles = 5000
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': True}
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.Source(space=uniform_dist)
# Export to "settings.xml"
settings_file.export_to_xml()
# Instantiate a 100-group EnergyGroups object
energy_groups = mgxs.EnergyGroups()
energy_groups.group_edges = np.logspace(-3, 7.3, 101)
# Instantiate a 1-group EnergyGroups object
one_group = mgxs.EnergyGroups()
one_group.group_edges = np.array([energy_groups.group_edges[0], energy_groups.group_edges[-1]])
delayed_groups = list(range(1,7))
# Instantiate a few different cross sections
chi_prompt = mgxs.Chi(domain=cell, groups=energy_groups, by_nuclide=True, prompt=True)
prompt_nu_fission = mgxs.FissionXS(domain=cell, groups=energy_groups, by_nuclide=True, nu=True, prompt=True)
chi_delayed = mgxs.ChiDelayed(domain=cell, energy_groups=energy_groups, by_nuclide=True)
delayed_nu_fission = mgxs.DelayedNuFissionXS(domain=cell, energy_groups=energy_groups, delayed_groups=delayed_groups, by_nuclide=True)
beta = mgxs.Beta(domain=cell, energy_groups=energy_groups, delayed_groups=delayed_groups, by_nuclide=True)
decay_rate = mgxs.DecayRate(domain=cell, energy_groups=one_group, delayed_groups=delayed_groups, by_nuclide=True)
chi_prompt.nuclides = ['U235', 'Pu239']
prompt_nu_fission.nuclides = ['U235', 'Pu239']
chi_delayed.nuclides = ['U235', 'Pu239']
delayed_nu_fission.nuclides = ['U235', 'Pu239']
beta.nuclides = ['U235', 'Pu239']
decay_rate.nuclides = ['U235', 'Pu239']
decay_rate.tallies
# Instantiate an empty Tallies object
tallies_file = openmc.Tallies()
# Add chi-prompt tallies to the tallies file
tallies_file += chi_prompt.tallies.values()
# Add prompt-nu-fission tallies to the tallies file
tallies_file += prompt_nu_fission.tallies.values()
# Add chi-delayed tallies to the tallies file
tallies_file += chi_delayed.tallies.values()
# Add delayed-nu-fission tallies to the tallies file
tallies_file += delayed_nu_fission.tallies.values()
# Add beta tallies to the tallies file
tallies_file += beta.tallies.values()
# Add decay rate tallies to the tallies file
tallies_file += decay_rate.tallies.values()
# Export to "tallies.xml"
tallies_file.export_to_xml()
# Run OpenMC
openmc.run()
# Load the last statepoint file
sp = openmc.StatePoint('statepoint.50.h5')
# Load the tallies from the statepoint into each MGXS object
chi_prompt.load_from_statepoint(sp)
prompt_nu_fission.load_from_statepoint(sp)
chi_delayed.load_from_statepoint(sp)
delayed_nu_fission.load_from_statepoint(sp)
beta.load_from_statepoint(sp)
decay_rate.load_from_statepoint(sp)
delayed_nu_fission.get_condensed_xs(one_group).get_xs()
df = delayed_nu_fission.get_pandas_dataframe()
df.head(10)
df = decay_rate.get_pandas_dataframe()
df.head(12)
beta.export_xs_data(filename='beta', format='excel')
chi_prompt.build_hdf5_store(filename='mdgxs', append=True)
chi_delayed.build_hdf5_store(filename='mdgxs', append=True)
# Get the decay rate data
dr_tally = decay_rate.xs_tally
dr_u235 = dr_tally.get_values(nuclides=['U235']).flatten()
dr_pu239 = dr_tally.get_values(nuclides=['Pu239']).flatten()
# Compute the exponential decay of the precursors
time = np.logspace(-3,3)
dr_u235_points = np.exp(-np.outer(dr_u235, time))
dr_pu239_points = np.exp(-np.outer(dr_pu239, time))
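As an aside, the `np.outer` call above is what vectorizes the decay curves: `np.outer(rates, times)[g, t]` equals `rates[g] * times[t]`, so exponentiating the negated outer product yields one row of `exp(-lambda*t)` per precursor group. A minimal sketch with made-up decay rates (illustrative values, not the tallied ones):

```python
import numpy as np

# Hypothetical decay rates (1/s) for three precursor groups -- illustration only
rates = np.array([0.0125, 0.03, 0.1])
times = np.logspace(-3, 3, num=50)  # seconds

# remaining[g, t] = exp(-rates[g] * times[t]): fraction of group g left at times[t]
remaining = np.exp(-np.outer(rates, times))
print(remaining.shape)  # -> (3, 50): one row per group, one column per time point
```

Each row of `remaining` is exactly the kind of curve the `plt.semilogx` loop above draws.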
# Create a plot of the fraction of the precursors remaining as a f(time)
colors = ['b', 'g', 'r', 'c', 'm', 'k']
legend = []
fig = plt.figure(figsize=(8,6))
for g,c in enumerate(colors):
plt.semilogx(time, dr_u235_points[g,:], color=c, linestyle='--', linewidth=3)

plt.semilogx(time, dr_pu239_points[g,:], color=c, linestyle=':' , linewidth=3)
legend.append('U-235 $t_{1/2}$ = ' + '{0:1.2f} seconds'.format(np.log(2) / dr_u235[g]))
legend.append('Pu-239 $t_{1/2}$ = ' + '{0:1.2f} seconds'.format(np.log(2) / dr_pu239[g]))
plt.title('Delayed Neutron Precursor Decay Rates')
plt.xlabel('Time (s)')
plt.ylabel('Fraction Remaining')
plt.legend(legend, loc=1, bbox_to_anchor=(1.55, 0.95))
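The legend above converts each decay rate lambda to a half-life via t_half = ln(2)/lambda; a quick numerical sanity check of that identity with an arbitrary rate (not one of the tallied values):

```python
import numpy as np

lam = 0.08                      # arbitrary decay rate (1/s), illustration only
t_half = np.log(2) / lam
# After one half-life, exp(-lam * t_half) should be 1/2 (up to float rounding)
fraction_left = np.exp(-lam * t_half)
print(fraction_left)            # ~0.5
```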
# Use tally arithmetic to compute the precursor concentrations
precursor_conc = beta.get_condensed_xs(one_group).xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) * \
delayed_nu_fission.get_condensed_xs(one_group).xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) / \
decay_rate.xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True)
# Get the Pandas DataFrames for inspection
precursor_conc.get_pandas_dataframe()
energy_filter = [f for f in beta.xs_tally.filters if type(f) is openmc.EnergyFilter]
beta_integrated = beta.get_condensed_xs(one_group).xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True)
beta_u235 = beta_integrated.get_values(nuclides=['U235'])
beta_pu239 = beta_integrated.get_values(nuclides=['Pu239'])
# Reshape the betas
beta_u235.shape = (beta_u235.shape[0])
beta_pu239.shape = (beta_pu239.shape[0])
df = beta_integrated.summation(filter_type=openmc.DelayedGroupFilter, remove_filter=True).get_pandas_dataframe()
print('Beta (U-235) : {:.6f} +/- {:.6f}'.format(df[df['nuclide'] == 'U235']['mean'][0], df[df['nuclide'] == 'U235']['std. dev.'][0]))
print('Beta (Pu-239): {:.6f} +/- {:.6f}'.format(df[df['nuclide'] == 'Pu239']['mean'][1], df[df['nuclide'] == 'Pu239']['std. dev.'][1]))
beta_u235 = np.append(beta_u235[0], beta_u235)
beta_pu239 = np.append(beta_pu239[0], beta_pu239)
# Create a step plot for the MGXS
plt.plot(np.arange(0.5, 7.5, 1), beta_u235, drawstyle='steps', color='b', linewidth=3)
plt.plot(np.arange(0.5, 7.5, 1), beta_pu239, drawstyle='steps', color='g', linewidth=3)
plt.title('Delayed Neutron Fraction (beta)')
plt.xlabel('Delayed Group')
plt.ylabel('Beta (fraction of total neutrons)')
plt.legend(['U-235', 'Pu-239'])
plt.xlim([0,7])
chi_d_u235 = np.squeeze(chi_delayed.get_xs(nuclides=['U235'], order_groups='decreasing'))
chi_d_pu239 = np.squeeze(chi_delayed.get_xs(nuclides=['Pu239'], order_groups='decreasing'))
chi_p_u235 = np.squeeze(chi_prompt.get_xs(nuclides=['U235'], order_groups='decreasing'))
chi_p_pu239 = np.squeeze(chi_prompt.get_xs(nuclides=['Pu239'], order_groups='decreasing'))
chi_d_u235 = np.append(chi_d_u235 , chi_d_u235[0])
chi_d_pu239 = np.append(chi_d_pu239, chi_d_pu239[0])
chi_p_u235 = np.append(chi_p_u235 , chi_p_u235[0])
chi_p_pu239 = np.append(chi_p_pu239, chi_p_pu239[0])
# Create a step plot for the MGXS
plt.semilogx(energy_groups.group_edges, chi_d_u235 , drawstyle='steps', color='b', linestyle='--', linewidth=3)
plt.semilogx(energy_groups.group_edges, chi_d_pu239, drawstyle='steps', color='g', linestyle='--', linewidth=3)
plt.semilogx(energy_groups.group_edges, chi_p_u235 , drawstyle='steps', color='b', linestyle=':', linewidth=3)
plt.semilogx(energy_groups.group_edges, chi_p_pu239, drawstyle='steps', color='g', linestyle=':', linewidth=3)
plt.title('Energy Spectrum for Fission Neutrons')
plt.xlabel('Energy (eV)')
plt.ylabel('Fraction of emitted neutrons')
plt.legend(['U-235 delayed', 'Pu-239 delayed', 'U-235 prompt', 'Pu-239 prompt'],loc=2)
plt.xlim(1.0e3, 20.0e6)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A variety of tools employing different methodologies have been developed over the years to compute multi-group cross sections for certain applications, including NJOY (LANL), MC$^2$-3 (ANL), and Serpent (VTT). The openmc.mgxs Python module is designed to leverage OpenMC's tally system to calculate multi-group cross sections with arbitrary energy discretizations and different delayed group models (e.g. 6, 7, or 8 delayed group models) for fine-mesh heterogeneous deterministic neutron transport applications.
Step2: First we need to define materials that will be used in the problem. Before defining a material, we must create nuclides that are used in the material.
Step3: With the nuclides we defined, we will now create a material for the homogeneous medium.
Step4: With our material, we can now create a Materials object that can be exported to an actual XML file.
Step5: Now let's move on to the geometry. This problem will be a simple square cell with reflective boundary conditions to simulate an infinite homogeneous medium. The first step is to create the outer bounding surfaces of the problem.
Step6: With the surfaces defined, we can now create a cell that is defined by intersections of half-spaces created by the surfaces.
Step7: OpenMC requires that there is a "root" universe. Let us create a root universe and add our square cell to it.
Step8: We now must create a geometry that is assigned a root universe and export it to XML.
Step9: Next, we must define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 2500 particles.
Step10: Now we are ready to generate multi-group cross sections! First, let's define a 100-energy-group structure and 1-energy-group structure using the built-in EnergyGroups class. We will also create a 6-delayed-group list.
Step11: We can now use the EnergyGroups object and delayed group list, along with our previously created materials and geometry, to instantiate some MGXS objects from the openmc.mgxs module. In particular, the following are subclasses of the generic and abstract MGXS class
Step12: Each multi-group cross section object stores its tallies in a Python dictionary called tallies. We can inspect the tallies in the dictionary for our Decay Rate object as follows.
Step13: The Beta object includes tracklength tallies for the 'nu-fission' and 'delayed-nu-fission' scores in the 100-energy-group and 6-delayed-group structure in cell 1. Now that each MGXS and MDGXS object contains the tallies that it needs, we must add these tallies to a Tallies object to generate the "tallies.xml" input file for OpenMC.
Step14: Now we have a complete set of inputs, so we can go ahead and run our simulation.
Step15: Tally Data Processing
Step16: In addition to the statepoint file, our simulation also created a summary file which encapsulates information about the materials and geometry. By default, a Summary object is automatically linked when a StatePoint is loaded. This is necessary for the openmc.mgxs module to properly process the tally data.
Step17: Voila! Our multi-group cross sections are now ready to rock 'n roll!
Step18: Since the openmc.mgxs module uses tally arithmetic under-the-hood, the cross section is stored as a "derived" Tally object. This means that it can be queried and manipulated using all of the same methods supported for the Tally class in the OpenMC Python API. For example, we can construct a Pandas DataFrame of the multi-group cross section data.
Step19: Each multi-group cross section object can be easily exported to a variety of file formats, including CSV, Excel, and LaTeX for storage or data processing.
Step20: The following code snippet shows how to export the chi-prompt and chi-delayed MGXS to the same HDF5 binary data store.
Step21: Using Tally Arithmetic to Compute the Delayed Neutron Precursor Concentrations
Step22: Now let's compute the initial concentration of the delayed neutron precursors
Step23: We can plot the delayed neutron fractions for each nuclide.
Step24: We can also plot the energy spectrum for fission emission of prompt and delayed neutrons.
|
5,559
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nuist', 'sandbox-2', 'landice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Ice Albedo
Step7: 1.4. Atmospheric Coupling Variables
Step8: 1.5. Oceanic Coupling Variables
Step9: 1.6. Prognostic Variables
Step10: 2. Key Properties --> Software Properties
Step11: 2.2. Code Version
Step12: 2.3. Code Languages
Step13: 3. Grid
Step14: 3.2. Adaptive Grid
Step15: 3.3. Base Resolution
Step16: 3.4. Resolution Limit
Step17: 3.5. Projection
Step18: 4. Glaciers
Step19: 4.2. Description
Step20: 4.3. Dynamic Areal Extent
Step21: 5. Ice
Step22: 5.2. Grounding Line Method
Step23: 5.3. Ice Sheet
Step24: 5.4. Ice Shelf
Step25: 6. Ice --> Mass Balance
Step26: 7. Ice --> Mass Balance --> Basal
Step27: 7.2. Ocean
Step28: 8. Ice --> Mass Balance --> Frontal
Step29: 8.2. Melting
Step30: 9. Ice --> Dynamics
Step31: 9.2. Approximation
Step32: 9.3. Adaptive Timestep
Step33: 9.4. Timestep
|
5,560
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
sales = graphlab.SFrame('kc_house_data_small.gl/')
import numpy as np # note this allows us to refer to numpy as np instead
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along with the others:
features = ['constant'] + features # this is how you combine two lists
# select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):
features_sframe = data_sframe[features]
# the following line will convert the features_SFrame into a numpy matrix:
feature_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the output to the SArray output_sarray
output_sarray = data_sframe[output]
# the following will convert the SArray into a numpy array by first converting it to a list
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
def normalize_features(feature_matrix):
norms = np.linalg.norm(feature_matrix, axis=0)
normalized_features = feature_matrix / norms
return (normalized_features, norms)
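A quick check of the normalization step on a toy matrix (restating the function so the snippet is self-contained, not run against the housing data): after dividing by the column-wise 2-norms, every column should have unit norm.

```python
import numpy as np

def normalize_features(feature_matrix):
    norms = np.linalg.norm(feature_matrix, axis=0)
    return feature_matrix / norms, norms

X = np.array([[3.0, 5.0],
              [4.0, 12.0]])
X_norm, norms = normalize_features(X)
# columns of X have norms 5 and 13; columns of X_norm should both have norm 1
print(norms)
print(np.linalg.norm(X_norm, axis=0))
```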
(train_and_validation, test) = sales.random_split(.8, seed=1) # initial train/test split
(train, validation) = train_and_validation.random_split(.8, seed=1) # split training set into training and validation sets
feature_list = ['bedrooms',
'bathrooms',
'sqft_living',
'sqft_lot',
'floors',
'waterfront',
'view',
'condition',
'grade',
'sqft_above',
'sqft_basement',
'yr_built',
'yr_renovated',
'lat',
'long',
'sqft_living15',
'sqft_lot15']
features_train, output_train = get_numpy_data(train, feature_list, 'price')
features_test, output_test = get_numpy_data(test, feature_list, 'price')
features_valid, output_valid = get_numpy_data(validation, feature_list, 'price')
features_train, norms = normalize_features(features_train) # normalize training set features (columns)
features_test = features_test / norms # normalize test set by training set norms
features_valid = features_valid / norms # normalize validation set by training set norms
print features_train[0]
print features_train[9]
def euclidean_distance(x, y):
return np.sqrt(np.sum((x-y)**2))
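On a toy pair of vectors (a 3-4-5 triangle, chosen for an obvious answer) the function agrees with `np.linalg.norm` of the difference, which is a handy cross-check:

```python
import numpy as np

def euclidean_distance(x, y):
    return np.sqrt(np.sum((x - y) ** 2))

a = np.array([0.0, 3.0, 4.0])
b = np.array([0.0, 0.0, 0.0])
print(euclidean_distance(a, b))   # 5.0
print(np.linalg.norm(a - b))      # 5.0
```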
euclidean_distance(features_test[0], features_train[9])
query_house = features_test[0]
closest_house = None
min_distance = (-1, float('inf'))  # (index, distance) of the closest house seen so far
for i in xrange(10):
house = features_train[i]
distance = euclidean_distance(query_house, house)
if min_distance[1] > distance:  # consider every house, including the first
min_distance = (i, distance)
print distance
print "min distance", min_distance
print min_distance
for i in xrange(3):
print features_train[i]-features_test[0]
# should print 3 vectors of length 18
print features_train[0:3] - features_test[0]
# verify that vectorization works
results = features_train[0:3] - features_test[0]
print results[0] - (features_train[0]-features_test[0])
# should print all 0's if results[0] == (features_train[0]-features_test[0])
print results[1] - (features_train[1]-features_test[0])
# should print all 0's if results[1] == (features_train[1]-features_test[0])
print results[2] - (features_train[2]-features_test[0])
# should print all 0's if results[2] == (features_train[2]-features_test[0])
diff = features_train - features_test[0]
print diff[-1].sum() # sum of the feature differences between the query and last training house
# should print -0.0934339605842
print np.sum(diff**2, axis=1)[15] # take sum of squares across each row, and print the 16th sum
print np.sum(diff[15]**2) # print the sum of squares for the 16th row -- should be same as above
distances = np.sqrt(np.sum(diff**2, axis=1))
print distances[100] # Euclidean distance between the query house and the 101th training house
# should print 0.0237082324496
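The broadcasting pattern used above — subtract one query row from a whole matrix, square, and sum along `axis=1` — can be verified on toy data against an explicit per-row loop (illustrative arrays, not the housing features):

```python
import numpy as np

train = np.array([[1.0, 2.0],
                  [4.0, 6.0],
                  [0.0, 0.0]])
query = np.array([1.0, 2.0])

diff = train - query                                 # broadcast row-wise subtraction
vec_dists = np.sqrt(np.sum(diff ** 2, axis=1))       # one distance per training row
loop_dists = np.array([np.sqrt(np.sum((row - query) ** 2)) for row in train])
# both give distances 0, 5, and sqrt(5)
print(vec_dists)
```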
def distances_for(matrix_features, feature):
return np.sqrt(np.sum((matrix_features - feature)**2, axis=1))
query_house = features_test[2]
distances = distances_for(features_train, query_house)
index = distances.argmin()
print index, distances[index]
print output_train[index]
a = np.array([9, 4, 4, 3, 3, 9, 0, 4, 6, 0])
ind = np.argsort(a)[:4]
print ind
print a[ind]
def fetch_k_nearest_neighbours(k, features_matrix, feature):
d = distances_for(features_matrix, feature)
return np.argsort(d)[:k]
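The `np.argsort(...)[:k]` idiom picks the indices of the k smallest distances; a tiny standalone illustration:

```python
import numpy as np

dists = np.array([0.9, 0.1, 0.5, 0.3])
k = 2
nearest = np.argsort(dists)[:k]   # indices of the k smallest values
print(nearest)                    # [1 3]
```

When k is much smaller than the number of rows, `np.argpartition(dists, k)[:k]` avoids the full sort, though it returns the k nearest in arbitrary order.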
query_house = features_test[2]
print fetch_k_nearest_neighbours(5, features_train, query_house)
def predictions_k_nearest_neighbour(k, features, prices, feature):
nearest_neighbours = fetch_k_nearest_neighbours(k, features, feature)
return np.mean(prices[nearest_neighbours])
query_house = features_test[2]
print predictions_k_nearest_neighbour(4, features_train, output_train, query_house)
def multiple_predictions(k, feature_matrix, prices, query_set):
predictions = list()
for i in xrange(query_set.shape[0]):
query_house = query_set[i]
prediction = predictions_k_nearest_neighbour(k, feature_matrix, prices, query_house)
predictions.append(prediction)
return predictions
houses=features_test[0:10]
predictions = multiple_predictions(10, features_train, output_train, houses)
print "index", predictions.index(min(predictions)), "price", min(predictions)
def get_residual_sum_of_squares(predictions, outcome):
# Then compute the residuals/errors
residuals = predictions - outcome
# Then square and add them up
RSS = (residuals * residuals).sum()
return(RSS)
rss_all = list()
for k in xrange(1,16):
p = multiple_predictions(k, features_train, output_train, features_valid)
rss = get_residual_sum_of_squares(p, output_valid)
rss_all.append(rss)
print rss_all
import matplotlib.pyplot as plt
%matplotlib inline
kvals = range(1, 16)
plt.plot(kvals, rss_all,'bo-')
best_k = rss_all.index(min(rss_all)) + 1 # INDEX FROM 0 !
print best_k
predictions = multiple_predictions(best_k, features_train, output_train, features_test)
print "RSS", get_residual_sum_of_squares(predictions, output_test)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in house sales data
Step2: Import useful functions from previous notebooks
Step3: We will also need the normalize_features() function from Week 5 that normalizes all feature columns to unit norm. Paste this function below.
Step4: Split data into training, test, and validation sets
Step5: Extract features and normalize
Step6: In computing distances, it is crucial to normalize features. Otherwise, for example, the sqft_living feature (typically on the order of thousands) would exert a much larger influence on distance than the bedrooms feature (typically on the order of ones). We divide each column of the training feature matrix by its 2-norm, so that the transformed column has unit norm.
Step7: Compute a single distance
Step8: Now print the 10th row (index 9) of the training feature matrix. Again, you get an 18-dimensional vector with components between 0 and 1.
Step9: QUIZ QUESTION
Step10: Compute multiple distances
Step11: QUIZ QUESTION
Step12: It is computationally inefficient to loop over computing distances to all houses in our training dataset. Fortunately, many of the Numpy functions can be vectorized, applying the same operation over multiple values or vectors. We now walk through this process.
Step13: The subtraction operator (-) in Numpy is vectorized as follows
Step14: Note that the output of this vectorized operation is identical to that of the loop above, which can be verified below
Step15: Aside
Step16: To test the code above, run the following cell, which should output a value -0.0934339605842
Step17: The next step in computing the Euclidean distances is to take these feature-by-feature differences in diff, square each, and take the sum over feature indices. That is, compute the sum of square feature differences for each training house (row in diff).
Step18: To test the code above, run the following cell, which should output a value 0.0237082324496
Step19: Now you are ready to write a function that computes the distances from a query house to all training houses. The function should take two parameters
Step20: QUIZ QUESTIONS
Step21: Perform k-nearest neighbor regression
Step22: QUIZ QUESTION
Step23: Make a single prediction by averaging k nearest neighbor outputs
Step24: QUIZ QUESTION
Step25: Compare this predicted value using 4-nearest neighbors to the predicted value using 1-nearest neighbor computed earlier.
Step26: QUIZ QUESTION
Step27: Choosing the best value of k using a validation set
Step28: To visualize the performance as a function of k, plot the RSS on the VALIDATION set for each considered k value
Step29: QUIZ QUESTION
|
5,561
|
<ASSISTANT_TASK:>
Python Code:
from pyspark.sql import SQLContext
from pyspark.sql.types import *
sqlContext = SQLContext(sc)
schema = StructType([ \
StructField("state", StringType(), True), \
StructField("account_length", DoubleType(), True), \
StructField("area_code", StringType(), True), \
StructField("phone_number", StringType(), True), \
StructField("intl_plan", StringType(), True), \
StructField("voice_mail_plan", StringType(), True), \
StructField("number_vmail_messages", DoubleType(), True), \
StructField("total_day_minutes", DoubleType(), True), \
StructField("total_day_calls", DoubleType(), True), \
StructField("total_day_charge", DoubleType(), True), \
StructField("total_eve_minutes", DoubleType(), True), \
StructField("total_eve_calls", DoubleType(), True), \
StructField("total_eve_charge", DoubleType(), True), \
StructField("total_night_minutes", DoubleType(), True), \
StructField("total_night_calls", DoubleType(), True), \
StructField("total_night_charge", DoubleType(), True), \
StructField("total_intl_minutes", DoubleType(), True), \
StructField("total_intl_calls", DoubleType(), True), \
StructField("total_intl_charge", DoubleType(), True), \
StructField("number_customer_service_calls", DoubleType(), True), \
StructField("churned", StringType(), True)])
churn_data = sqlContext.read \
.format('com.databricks.spark.csv') \
.load('churn.all', schema = schema)
count = churn_data.count()
voice_mail_plans = churn_data.filter(churn_data.voice_mail_plan == " yes").count()
"%d, %d" % (count, voice_mail_plans)
# Your code here
sample_data = churn_data.sample(False, 0.5, 83).toPandas()
sample_data.head()
numeric_cols = ["account_length", "number_vmail_messages", "total_day_minutes",
"total_day_calls", "total_day_charge", "total_eve_minutes",
"total_eve_calls", "total_eve_charge", "total_night_minutes",
"total_night_calls", "total_intl_minutes", "total_intl_calls",
"total_intl_charge"]
categorical_cols = ["state", "intl_plan", "voice_mail_plan", "area_code"]  # names match the schema above
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
sb.distplot(sample_data['number_customer_service_calls'], kde=False)
sb.boxplot(x="churned", y="number_customer_service_calls", data=sample_data)
example_numeric_data = sample_data[["total_intl_minutes", "total_intl_calls",
"total_intl_charge", "churned"]]
sb.pairplot(example_numeric_data, hue="churned")
corr = sample_data[["account_length", "number_vmail_messages", "total_day_minutes",
"total_day_calls", "total_day_charge", "total_eve_minutes",
"total_eve_calls", "total_eve_charge", "total_night_minutes",
"total_night_calls", "total_intl_minutes", "total_intl_calls",
"total_intl_charge"]].corr()
sb.heatmap(corr)
reduced_numeric_cols = ["account_length", "number_vmail_messages", "total_day_calls",
"total_day_charge", "total_eve_calls", "total_eve_charge",
"total_night_calls", "total_intl_calls", "total_intl_charge"]
sb.pairplot(sample_data[reduced_numeric_cols + ['churned']], hue="churned", palette='Paired')
from pyspark.ml.feature import StringIndexer
from pyspark.ml.feature import VectorAssembler
label_indexer = StringIndexer(inputCol = 'churned', outputCol = 'label')
plan_indexer = StringIndexer(inputCol = 'intl_plan', outputCol = 'intl_plan_indexed')
assembler = VectorAssembler(
inputCols = ['intl_plan_indexed'] + reduced_numeric_cols,
outputCol = 'features')
from pyspark.ml import Pipeline
from pyspark.ml.classification import DecisionTreeClassifier
classifier = DecisionTreeClassifier(labelCol = 'label', featuresCol = 'features')
pipeline = Pipeline(stages=[plan_indexer, label_indexer, assembler, classifier])
(train, test) = churn_data.randomSplit([0.7, 0.3])
model = pipeline.fit(train)
from pyspark.ml.evaluation import BinaryClassificationEvaluator
predictions = model.transform(test)
evaluator = BinaryClassificationEvaluator()
auroc = evaluator.evaluate(predictions, {evaluator.metricName: "areaUnderROC"})
aupr = evaluator.evaluate(predictions, {evaluator.metricName: "areaUnderPR"})
"The AUROC is %s and the AUPR is %s." % (auroc, aupr)
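For intuition, `areaUnderROC` equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A minimal rank-based sketch of that statistic in plain NumPy — not the Spark evaluator, just the quantity it estimates:

```python
import numpy as np

def roc_auc(scores, labels):
    # Mann-Whitney formulation of the area under the ROC curve
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # fraction of (positive, negative) pairs where the positive scores higher,
    # counting ties as half a win
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

print(roc_auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # 1.0 -- perfect separation
print(roc_auc([0.9, 0.7, 0.4, 0.2], [1, 0, 1, 0]))  # 0.75
```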
from pyspark.ml.classification import RandomForestClassifier
# Your code here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic DataFrame operations
Step2: Exercise
Step3: Feature Visualization
Step4: DataTypes
Step5: Seaborn
Step6: We can examine feature differences in the distribution of our features when we condition (split) our data on whether or not the customer churned.
Step7: Joint Distributions
Step8: Clearly, there are some strong linear relationships between some variables, let's get a general impression of the correlations between variables by using Seaborn's heatmap functionality.
Step9: Let's generate a pair plot for all numerical variables that we have.
Step10: Build a classification model using MLLib
Step11: Model Training
Step12: Model Evaluation
Step13: Exercise
|
5,562
|
<ASSISTANT_TASK:>
Python Code:
T = 120
Tr = 2.2*120
print [Tr/10, Tr/4]
h = 40.0
pc = -1.0/120
pd = np.exp(pc*h)
print pd
s,z = sy.symbols('s, z')
h = sy.symbols('h', positive=True)
F = (16*s+1)/(100*s+1)
H = sy.simplify(F.subs(s, (z-1)/(z*h)))
print H
p1,p2,p3,p4 = sy.symbols('p1, p2, p3, p4')
sy.expand((z-0.7+sy.I*0.1)*(z-0.7-sy.I*0.1))
B = 0.6*z + 0.5
A = z*(z**2 - 1.9*z + 0.9)
S, R = sy.symbols('S, R')
H_dy = (B/A) / (1 + (B/A)*(S/R))
sy.simplify(H_dy)
H_dys = sy.simplify(H_dy)
tex_str = sy.latex(H_dys)  # avoid reusing 's', which is the Laplace-domain symbol above
print tex_str
H = sy.simplify(z*B/A)
sy.apart(H)
Phi = sy.Matrix([[0.9, 0], [0, 1]])
Gamma = sy.Matrix([[1],[1]])
Cm = sy.Matrix([[-10.4, 11.0]])
Htest = Cm*(z*sy.eye(2)-Phi).inv()*Gamma
Htest
H
l1, l2 = sy.symbols('l1, l2')
L = sy.Matrix([[l1, l2]])
sy.factor((z*sy.eye(2) - (Phi - Gamma*L)).det(), z)
sy.simplify(sy.expand((z-0.6+sy.I*0.3)*(z-0.6-sy.I*0.3)))
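A purely numeric cross-check of the symbolic work above (same Phi and Gamma): matching det(zI - (Phi - Gamma*L)) = z^2 + (l1 + l2 - 1.9)z + (0.9 - l1 - 0.9*l2) against the desired z^2 - 1.2z + 0.45 gives l1 = -1.8, l2 = 2.5 (worked by hand here, so treat the gains as an assumption to verify), and the closed-loop eigenvalues should land at the desired 0.6 +/- 0.3j:

```python
import numpy as np

Phi = np.array([[0.9, 0.0],
                [0.0, 1.0]])
Gamma = np.array([[1.0],
                  [1.0]])

# Gains from matching the characteristic polynomial term by term (hand-derived)
L = np.array([[-1.8, 2.5]])

eigs = np.linalg.eigvals(Phi - Gamma.dot(L))
print(np.sort_complex(eigs))  # complex pair at 0.6 +/- 0.3j (up to float error)
```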
%load_ext pymatbridge
%%matlab --size 800,400
%Mean arteriar pressure, automatic anasthesia model
% Plant
G = tf([1], [120 1 0]);
a = 1/100;
b = 1/160;
F0 = zpk([-b], [-a], a/b);
K = 1e-2;
F = K*F0;
Gc = feedback(G*F, 1);
step(Gc, 1000)
%%matlab --size 800,400 -o L,y,t
h = 40.0;
Phi = [0.9 0; 0 1];
Gamma = [1;1];
C = [-10.4 11.0];
D = 0;
sys = ss(Phi, Gamma, C, D, h);
L = place(Phi, Gamma, [0.6+i*0.3 0.6-i*0.3]);
sys_cl = ss(Phi-Gamma*L, Gamma, C, D, h);
[y, t] = step(sys_cl);
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure()
plt.plot(t, y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: (b)
Step2: Problem 2
Step3: Problem 3
Step4: Set up the state-space model. Make sure it is correct.
Step5: (c) Finding the feedback gain
Step6: Check with matlab
|
5,563
|
<ASSISTANT_TASK:>
Python Code:
from serial import Serial
from Servo import Servo
from IPython.html.widgets import interact
sp = Serial("/dev/ttyUSB0", 19200)
a = Servo(sp, dir = 'a')
w1 = interact(a.set_pos, pos = (-90, 90))
import time
#-- Sequence of angles
seq = [40, 0, 20, -40, -80, 0]
#-- Repeat the sequence n times
for n in range(2):
for ang in seq:
a.pos = ang
time.sleep(0.8)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import the Servo class. It is needed for creating the Servo objects
Step2: Import the IPython 3 interact function. It is needed for creating the Interactive slider that moves the servo
Step3: Open the serial port
Step4: Create a servo object. It is linked to the serial port already opened
Step5: Interactive widget for moving the servo
Step6: Example of a simple servo sequence generation
|
5,564
|
<ASSISTANT_TASK:>
Python Code:
%pylab notebook
fe_25 = 25 # [Hz]
fe_60 = 60 # [Hz]
P = array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0])
n = 120*fe_25 / P
print('''
|-----------------+--------------|
| Number of Poles | n_m |
|-----------------+--------------|''')
# We use a simple for-loop to print a row per result:
for i in range(7):
print('| {:2.0f} | {:6.1f} r/min |'.format(P[i], n[i]))
print('|================================|')
# for-loop is used to find the max value:
n_max_25 = 0
for i in range(7):
if n[i] > n_max_25:
n_max_25 = n[i]
print('''
The highest possible rotational speed was {:.0f} r/min.
=====================================================
'''.format(n_max_25))
max(n)
r = 1.5 # 1.5th power of the speed of rotation
n_max_60 = n_max_25 * (fe_60/fe_25)
ratio = (n_max_25 / n_max_60)**r
print('''
ratio = {:.3f} or {:.1f} %
=======================
'''.format(ratio ,ratio*100))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Description
Step2: (a)
Step3: Alternatively (and much more simply) you can use the built-in "max()" function
Step4: (b)
|
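The relation n = 120 f_e / P used above lends itself to a compact sketch with a list comprehension and the built-in `max()`; the constants mirror the cell above, and the 1.5th-power fan-load scaling is the assumption stated in part (b):

```python
fe_25 = 25  # supply frequency [Hz]
poles = [2, 4, 6, 8, 10, 12, 14]

# Synchronous speed for each pole count: n = 120 * f / P
speeds = [120 * fe_25 / P for P in poles]
n_max_25 = max(speeds)  # fewest poles -> highest speed

# Scale to 60 Hz and compute the windage/friction ratio at the 1.5th power
fe_60 = 60
n_max_60 = n_max_25 * fe_60 / fe_25
ratio = (n_max_25 / n_max_60) ** 1.5
print(n_max_25, n_max_60, round(ratio * 100, 1))
```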
5,565
|
<ASSISTANT_TASK:>
Python Code:
workDir = '/home/nick/notebook/SIPSim/dev/bac_genome3/validation/'
R_dir = '/home/nick/notebook/SIPSim/lib/R/'
figDir = '/home/nick/notebook/SIPSim/figures/'
nprocs = 3
import os
import numpy as np
import dill
import pandas as pd
%load_ext rpy2.ipython
%%R
library(ggplot2)
library(plyr)
library(dplyr)
library(tidyr)
library(gridExtra)
if not os.path.isdir(workDir):
os.makedirs(workDir)
# max 13C shift
max_13C_shift_in_BD = 0.036
# min BD (that we care about)
min_GC = 13.5
min_BD = min_GC/100.0 * 0.098 + 1.66
# max BD (that we care about)
max_GC = 80
max_BD = max_GC / 100.0 * 0.098 + 1.66 # 80.0% G+C
max_BD = max_BD + max_13C_shift_in_BD
## BD range of values
BD_vals = np.arange(min_BD, max_BD, 0.001)
F = os.path.join(workDir, 'ampFrags_real_kde_dif.pkl')
with open(F, 'rb') as inFH:
kde = dill.load(inFH)
kde
# probability at each location in gradient
pdf = {}
for k,v in kde.items():
pdf[k] = v.evaluate(BD_vals)
pdf.keys()
df = pd.DataFrame(pdf)
df['BD'] = BD_vals
df.head(n=3)
%%R -i df -w 800 -h 350
df.g = apply(df, 2, as.numeric) %>% as.data.frame %>%
gather(taxon_name, P, 1:3) %>%
mutate(BD = as.numeric(BD),
P = as.numeric(P),
taxon_name = as.character(taxon_name)) %>%
filter(P > 1e-15)
x.lab = expression(paste('Buoyant density (g ml' ^ '-1', ')'))
p1.skn = ggplot(df.g, aes(BD, P, color=taxon_name)) +
geom_point() +
geom_line() +
labs(x=x.lab, y='Probability density') +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p2.skn = p1.skn + scale_y_log10()
grid.arrange(p1.skn, p2.skn, ncol=2)
F = os.path.join(workDir, 'ampFrags_sm_kde_dif.pkl')
with open(F, 'rb') as inFH:
kde = dill.load(inFH)
kde
# probability at each location in gradient
pdf = {}
for k,v in kde.items():
pdf[k] = v.evaluate(BD_vals)
pdf.keys()
df = pd.DataFrame(pdf)
df['BD'] = BD_vals
df.head(n=3)
%%R -i df -w 800 -h 350
df.g = apply(df, 2, as.numeric) %>% as.data.frame %>%
gather(taxon_name, P, 1:3) %>%
mutate(BD = as.numeric(BD),
P = as.numeric(P),
taxon_name = as.character(taxon_name)) %>%
filter(P > 1e-9)
p1 = ggplot(df.g, aes(BD, P, color=taxon_name)) +
geom_point() +
geom_line() +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p2 = p1 + scale_y_log10()
grid.arrange(p1, p2, ncol=2)
BD_vals = np.arange(min_BD, max_BD, 0.001)
F = os.path.join(workDir, 'ampFrags_real_kde_dif_DBL.pkl')
with open(F, 'rb') as inFH:
kde = dill.load(inFH)
kde
# probability at each location in gradient
pdf = {}
for k,v in kde.items():
for kk,vv in v.items():
pdf[kk] = vv.evaluate(BD_vals)
pdf.keys()
df = pd.DataFrame(pdf)
df['BD'] = BD_vals
df.head(n=3)
%%R -i df -w 800 -h 350
df.g = apply(df, 2, as.numeric) %>% as.data.frame %>%
gather(taxon_name, P, 1:3) %>%
mutate(BD = as.numeric(BD),
P = as.numeric(P),
taxon_name = as.character(taxon_name)) %>%
filter(P > 1e-15)
x.lab = expression(paste('Buoyant density (g ml' ^ '-1', ')'))
p1.skn.dbl = ggplot(df.g, aes(BD, P, color=taxon_name)) +
geom_point() +
geom_line() +
labs(x=x.lab, y='Probability density') +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p2.skn.dbl = p1.skn.dbl + scale_y_log10()
grid.arrange(p1.skn.dbl, p2.skn.dbl, ncol=2)
%%R -w 800 -h 300
# plot formatting
title.size=16
p2.skn.f = p2.skn +
ggtitle('Gaussian BD') +
theme(
plot.title = element_text(size=title.size)
)
p2.skn.dbl.f = p2.skn.dbl +
ggtitle('Gaussian BD + DBL') +
theme(
plot.title = element_text(size=title.size)
)
# combined plot
#p.comb = cowplot::plot_grid(p2.skn.f, p2.skn.dbl.f, labels=c('A)', 'B)'), align='h')
p.comb = cowplot::ggdraw() +
geom_rect(aes(xmin=0, ymin=0, xmax=1, ymax=1), fill='white') +
cowplot::draw_plot(p2.skn.f, 0.01, 0.01, 0.49, 0.99) +
cowplot::draw_plot(p2.skn.dbl.f, 0.5, 0.01, 0.49, 0.99) +
cowplot::draw_plot_label(c('A)', 'B)'), c(0, 0.5), c(0.99, 0.99))
p.comb
%%R -i workDir
# writting plot
outFile = file.path(workDir, 'DBL_example_log10.pdf')
ggsave(outFile, p.comb, width=10, height=3.75)
cat('File written:', outFile, '\n')
%%R -w 800 -h 300
p1.skn.e = p1.skn +
scale_x_continuous(limits=c(1.675, 1.775))
p2.skn.e = p2.skn +
scale_x_continuous(limits=c(1.675, 1.775)) +
scale_y_log10(limits=c(1e-12, 150))
p1.skn.dbl.e = p1.skn.dbl +
scale_x_continuous(limits=c(1.675, 1.775))
p2.skn.dbl.e = p2.skn.dbl +
scale_x_continuous(limits=c(1.675, 1.775)) +
scale_y_log10(limits=c(1e-12, 150))
p.comb = cowplot::ggdraw() +
geom_rect(aes(xmin=0, ymin=0, xmax=1, ymax=1), fill='white') +
cowplot::draw_plot(p2.skn.e, 0.01, 0.01, 0.49, 0.99) +
cowplot::draw_plot(p2.skn.dbl.e, 0.5, 0.01, 0.49, 0.99) +
cowplot::draw_plot_label(c('A)', 'B)'), c(0, 0.5), c(0.99, 0.99))
p.comb
%%R -i workDir
# writting plot
outFile = file.path(workDir, 'DBL_example_log10.pdf')
ggsave(outFile, p.comb, width=10, height=3.75)
cat('File written:', outFile, '\n')
BD_vals = np.arange(min_BD, max_BD, 0.001)
F = os.path.join(workDir, 'ampFrags_sm_kde_dif_DBL.pkl')
with open(F, 'rb') as inFH:
kde = dill.load(inFH)
kde
# probability at each location in gradient
pdf = {}
for k,v in kde.items():
pdf[k] = v.evaluate(BD_vals)
pdf.keys()
df = pd.DataFrame(pdf)
df['BD'] = BD_vals
df.head(n=3)
%%R -i df -w 800 -h 350
df.g = apply(df, 2, as.numeric) %>% as.data.frame %>%
gather(taxon_name, P, 1:3) %>%
mutate(BD = as.numeric(BD),
P = as.numeric(P),
taxon_name = as.character(taxon_name)) %>%
filter(P > 1e-9)
p1 = ggplot(df.g, aes(BD, P, color=taxon_name)) +
geom_point() +
geom_line() +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p2 = p1 + scale_y_log10()
grid.arrange(p1, p2, ncol=2)
BD_vals = np.arange(min_BD, max_BD, 0.001)
F = os.path.join(workDir, 'ampFrags_real_kde_dif_DBL_fa1e-4.pkl')
with open(F, 'rb') as inFH:
kde = dill.load(inFH)
kde
# probability at each location in gradient
pdf = {}
for k,v in kde.items():
pdf[k] = v.evaluate(BD_vals)
pdf.keys()
df = pd.DataFrame(pdf)
df['BD'] = BD_vals
df.head(n=3)
%%R -i df -w 800 -h 350
df.g = apply(df, 2, as.numeric) %>% as.data.frame %>%
gather(taxon_name, P, 1:3) %>%
mutate(BD = as.numeric(BD),
P = as.numeric(P),
taxon_name = as.character(taxon_name)) %>%
filter(P > 1e-9)
p1 = ggplot(df.g, aes(BD, P, color=taxon_name)) +
geom_point() +
geom_line() +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p2 = p1 + scale_y_log10()
grid.arrange(p1, p2, ncol=2)
BD_vals = np.arange(min_BD, max_BD, 0.001)
F = os.path.join(workDir, 'ampFrags_real_kde_dif_DBL-comm.pkl')
with open(F, 'rb') as inFH:
kde = dill.load(inFH)
kde
# probability at each location in gradient
pdf = {}
for libID,v in kde.items():
for taxon,k in v.items():
pdf[taxon] = k.evaluate(BD_vals)
pdf.keys()
df = pd.DataFrame(pdf)
df['BD'] = BD_vals
df.head(n=3)
%%R -i df -w 800 -h 350
df.g = apply(df, 2, as.numeric) %>% as.data.frame %>%
gather(taxon_name, P, 1:3) %>%
mutate(BD = as.numeric(BD),
P = as.numeric(P),
taxon_name = as.character(taxon_name)) %>%
filter(P > 1e-9)
p1 = ggplot(df.g, aes(BD, P, color=taxon_name)) +
geom_point() +
geom_line() +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p2 = p1 + scale_y_log10()
grid.arrange(p1, p2, ncol=2)
%%R
df.g %>%
group_by(taxon_name) %>%
summarize(max_P = max(P),
min_P = min(P)) %>% print
%%R -i workDir
F = file.path(workDir, 'comm.txt')
df.comm = read.delim(F, sep='\t') %>%
mutate(rel_abund = rel_abund_perc / 100)
df.comm %>% print
df.g.s = df.g %>%
filter(BD > 1.75) %>%
group_by(BD) %>%
mutate(P_rel_abund = P / sum(P)) %>%
group_by(taxon_name) %>%
summarize(mean_P = mean(P))
df.g.s = inner_join(df.g.s, df.comm, c('taxon_name' = 'taxon_name'))
df.g.s %>% print
ggplot(df.g.s, aes(rel_abund, mean_P)) +
geom_point() +
geom_line()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Init
Step2: Determining the probability of detecting the taxa across the entire gradient
Step3: skewed normal distribution
Step4: small uniform distribution
Step5: Notes
Step6: Notes
Step7: Combined plot (v2)
Step8: small fragment size distribution
Step9: with DBL 'smearing' (smaller DBL)
Step10: DBL with abundance-weighted smearing
Step11: Plotting pre-frac abundance vs heavy fraction P
|
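The per-taxon KDE evaluation above follows a generic pattern; a minimal, self-contained sketch using `scipy.stats.gaussian_kde` in place of the pickled SIPSim KDEs (the taxon names and fragment BD samples here are made up for illustration):

```python
import numpy as np
import pandas as pd
from scipy.stats import gaussian_kde

rng = np.random.RandomState(0)

# BD grid matching the notebook: min/max G+C converted to buoyant density
min_BD = 13.5 / 100.0 * 0.098 + 1.66
max_BD = 80.0 / 100.0 * 0.098 + 1.66 + 0.036   # plus max 13C shift
BD_vals = np.arange(min_BD, max_BD, 0.001)

# Hypothetical per-taxon fragment BD samples standing in for the pickled KDEs
kde = {
    'taxonA': gaussian_kde(rng.normal(1.70, 0.005, 500)),
    'taxonB': gaussian_kde(rng.normal(1.73, 0.008, 500)),
}

# Probability density at each location in the gradient, one column per taxon
pdf = {k: v.evaluate(BD_vals) for k, v in kde.items()}
df = pd.DataFrame(pdf)
df['BD'] = BD_vals
print(df.head(n=3))
```

The resulting frame has the same shape as the one passed to R above: one density column per taxon plus a `BD` column.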
5,566
|
<ASSISTANT_TASK:>
Python Code:
%%capture
!pip install git+https://github.com/biothings/biothings_explorer#egg=biothings_explorer
# import modules from biothings_explorer
from biothings_explorer.hint import Hint
from biothings_explorer.user_query_dispatcher import FindConnection
import nest_asyncio
nest_asyncio.apply()
ht = Hint()
# find all potential representations of hyperphenylalaninemia
hyperphenylalaninemia_hint = ht.query("hyperphenylalaninemia")
# select the correct representation of hyperphenylalaninemia
hyperphenylalaninemia = hyperphenylalaninemia_hint['Disease'][0]
hyperphenylalaninemia
help(FindConnection.__init__)
fc = FindConnection(input_obj=hyperphenylalaninemia, output_obj='ChemicalSubstance', intermediate_nodes=['Gene'])
# set verbose=True will display all steps which BTE takes to find the connection
fc.connect(verbose=True)
df = fc.display_table_view()
# because UMLS is not currently well-integrated in our ID-to-object translation system, removing UMLS-only outputs here
patternDel = "^UMLS:C\d+"
filter = df.output_id.str.contains(patternDel)
df = df[~filter]
patternDel2 = "^InChI="
filter2 = df.output_name.str.contains(patternDel2)
df = df[~filter2]
patternDel3 = "^umls:C\d+"
filter3 = df.node1_id.str.contains(patternDel3)
df = df[~filter3]
print(df.shape)
df.sample(10)
df.output_name.value_counts().head(20)
df[df.output_name=="NITRIC OXIDE"]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, import the relevant modules
Step2: Step 1
Step3: Step 2
Step4: Here, we formulate a FindConnection query with "hyperphenylalaninemia" as the input_obj, "ChemicalSubstance" as the output_obj (which corresponds to a Biolink Entity type). We further specify with the intermediate_nodes parameter that we are looking for paths joining hyperphenylalaninemia and chemical compounds with one intermediate node that is a Gene. (The ability to search for longer reasoning paths that include additional intermediate nodes will be added shortly.)
Step5: We next execute the connect method, which performs the query path planning and query path execution process. In short, BioThings Explorer is deconstructing the query into individual API calls, executing those API calls, then assembling the results.
Step6: Step 3
Step7: While most results are based on edges from semmed, edges from DGIdb, biolink, disgenet, and mychem.info were also retrieved from their respective APIs.
Step8: Hyperphenylalaninemia is a condition characterized by elevated levels of phenylalanine in the blood. This phenotype is strongly associated with phenylketonuria (PKU), an inherited, genetic disorder that affects the ability to metabolize phenylalanine. Sapropterin is a naturally-occurring cofactor associated with several enzymatic processes, including the metabolism of phenylalanine to tyrosine. It has been FDA-approved as a treatment for PKU patients. Tyrosine is also a precursor to several neurotransmitters, including norepinephrine and dopamine.
|
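The UMLS-filtering idiom above (`str.contains` with a regex, then `~` to negate the mask) is easy to get wrong; a small self-contained sketch on a made-up frame mimicking the BTE result table:

```python
import pandas as pd

# Hypothetical result table standing in for the BTE output above
df = pd.DataFrame({
    'output_id':   ['UMLS:C0027358', 'CHEBI:16474', 'UMLS:C1234567', 'CHEBI:15422'],
    'output_name': ['nitric oxide', 'NADPH', 'some concept', 'ATP'],
})

# Rows whose output_id is a bare UMLS CUI; note the raw string for the regex
is_umls = df.output_id.str.contains(r'^UMLS:C\d+')

# Keep everything that is NOT UMLS-only
df = df[~is_umls].reset_index(drop=True)
print(df)
```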
5,567
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy.io import loadmat, savemat
from numpy import random
from os import path
mat = loadmat('../../../data/multiclass/usps.mat')
Xall = mat['data']
Yall = np.array(mat['label'].squeeze(), dtype=np.double)
# map from 1..10 to 0..9, since shogun
# requires multiclass labels to be
# 0, 1, ..., K-1
Yall = Yall - 1
random.seed(0)
subset = random.permutation(len(Yall))
Xtrain = Xall[:, subset[:5000]]
Ytrain = Yall[subset[:5000]]
Xtest = Xall[:, subset[5000:6000]]
Ytest = Yall[subset[5000:6000]]
Nsplit = 2
all_ks = range(1, 21)
print Xall.shape
print Xtrain.shape
print Xtest.shape
%matplotlib inline
import pylab as P
def plot_example(dat, lab):
for i in xrange(5):
ax=P.subplot(1,5,i+1)
P.title(int(lab[i]))
ax.imshow(dat[:,i].reshape((16,16)), interpolation='nearest')
ax.set_xticks([])
ax.set_yticks([])
_=P.figure(figsize=(17,6))
P.gray()
plot_example(Xtrain, Ytrain)
_=P.figure(figsize=(17,6))
P.gray()
plot_example(Xtest, Ytest)
from modshogun import MulticlassLabels, RealFeatures
from modshogun import KNN, EuclideanDistance
labels = MulticlassLabels(Ytrain)
feats = RealFeatures(Xtrain)
k=3
dist = EuclideanDistance()
knn = KNN(k, dist, labels)
labels_test = MulticlassLabels(Ytest)
feats_test = RealFeatures(Xtest)
knn.train(feats)
pred = knn.apply_multiclass(feats_test)
print "Predictions", pred[:5]
print "Ground Truth", Ytest[:5]
from modshogun import MulticlassAccuracy
evaluator = MulticlassAccuracy()
accuracy = evaluator.evaluate(pred, labels_test)
print "Accuracy = %2.2f%%" % (100*accuracy)
idx=np.where(pred != Ytest)[0]
Xbad=Xtest[:,idx]
Ybad=Ytest[idx]
_=P.figure(figsize=(17,6))
P.gray()
plot_example(Xbad, Ybad)
knn.set_k(13)
multiple_k=knn.classify_for_multiple_k()
print multiple_k.shape
for k in xrange(13):
print "Accuracy for k=%d is %2.2f%%" % (k+1, 100*np.mean(multiple_k[:,k]==Ytest))
from modshogun import Time, KNN_COVER_TREE, KNN_BRUTE
start = Time.get_curtime()
knn.set_k(3)
knn.set_knn_solver_type(KNN_BRUTE)
pred = knn.apply_multiclass(feats_test)
print "Standard KNN took %2.1fs" % (Time.get_curtime() - start)
start = Time.get_curtime()
knn.set_k(3)
knn.set_knn_solver_type(KNN_COVER_TREE)
pred = knn.apply_multiclass(feats_test)
print "Covertree KNN took %2.1fs" % (Time.get_curtime() - start)
def evaluate(labels, feats, use_cover_tree=False):
from modshogun import MulticlassAccuracy, CrossValidationSplitting
import time
split = CrossValidationSplitting(labels, Nsplit)
split.build_subsets()
accuracy = np.zeros((Nsplit, len(all_ks)))
acc_train = np.zeros(accuracy.shape)
time_test = np.zeros(accuracy.shape)
for i in range(Nsplit):
idx_train = split.generate_subset_inverse(i)
idx_test = split.generate_subset_indices(i)
for j, k in enumerate(all_ks):
#print "Round %d for k=%d..." % (i, k)
feats.add_subset(idx_train)
labels.add_subset(idx_train)
dist = EuclideanDistance(feats, feats)
knn = KNN(k, dist, labels)
knn.set_store_model_features(True)
if use_cover_tree:
knn.set_knn_solver_type(KNN_COVER_TREE)
else:
knn.set_knn_solver_type(KNN_BRUTE)
knn.train()
evaluator = MulticlassAccuracy()
pred = knn.apply_multiclass()
acc_train[i, j] = evaluator.evaluate(pred, labels)
feats.remove_subset()
labels.remove_subset()
feats.add_subset(idx_test)
labels.add_subset(idx_test)
t_start = time.clock()
pred = knn.apply_multiclass(feats)
time_test[i, j] = (time.clock() - t_start) / labels.get_num_labels()
accuracy[i, j] = evaluator.evaluate(pred, labels)
feats.remove_subset()
labels.remove_subset()
return {'eout': accuracy, 'ein': acc_train, 'time': time_test}
labels = MulticlassLabels(Ytest)
feats = RealFeatures(Xtest)
print("Evaluating KNN...")
wo_ct = evaluate(labels, feats, use_cover_tree=False)
wi_ct = evaluate(labels, feats, use_cover_tree=True)
print("Done!")
import matplotlib
fig = P.figure(figsize=(8,5))
P.plot(all_ks, wo_ct['eout'].mean(axis=0), 'r-*')
P.plot(all_ks, wo_ct['ein'].mean(axis=0), 'r--*')
P.legend(["Test Accuracy", "Training Accuracy"])
P.xlabel('K')
P.ylabel('Accuracy')
P.title('KNN Accuracy')
P.tight_layout()
fig = P.figure(figsize=(8,5))
P.plot(all_ks, wo_ct['time'].mean(axis=0), 'r-*')
P.plot(all_ks, wi_ct['time'].mean(axis=0), 'b-d')
P.xlabel("K")
P.ylabel("time")
P.title('KNN time')
P.legend(["Plain KNN", "CoverTree KNN"], loc='center right')
P.tight_layout()
from modshogun import GaussianKernel, GMNPSVM
width=80
C=1
gk=GaussianKernel()
gk.set_width(width)
svm=GMNPSVM(C, gk, labels)
_=svm.train(feats)
out=svm.apply(feats_test)
evaluator = MulticlassAccuracy()
accuracy = evaluator.evaluate(out, labels_test)
print "Accuracy = %2.2f%%" % (100*accuracy)
Xrem=Xall[:,subset[6000:]]
Yrem=Yall[subset[6000:]]
feats_rem=RealFeatures(Xrem)
labels_rem=MulticlassLabels(Yrem)
out=svm.apply(feats_rem)
evaluator = MulticlassAccuracy()
accuracy = evaluator.evaluate(out, labels_rem)
print "Accuracy = %2.2f%%" % (100*accuracy)
idx=np.where(out.get_labels() != Yrem)[0]
Xbad=Xrem[:,idx]
Ybad=Yrem[idx]
_=P.figure(figsize=(17,6))
P.gray()
plot_example(Xbad, Ybad)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let us plot the first five examples of the train data (first row) and test data (second row).
Step2: Then we import shogun components and convert the data to shogun objects
Step3: Let's plot a few missclassified examples - I guess we all agree that these are notably harder to detect.
Step4: Now the question is - is 97.30% accuracy the best we can do? While one would usually re-train KNN with different values for k here and likely perform cross-validation, we just use a small trick here that saves us lots of computation time
Step5: We have the prediction for each of the 13 k's now and can quickly compute the accuracies
Step6: So k=3 seems to have been the optimal choice.
Step7: So we can significantly speed it up. Let's do a more systematic comparison. For that a helper function is defined to run the evaluation for KNN
Step8: Evaluate KNN with and without Cover Tree. This takes a few seconds
Step9: Generate plots with the data collected in the evaluation
Step10: Although simple and elegant, KNN is generally very resource-costly. Because all the training samples are to be memorized literally, the memory cost of KNN learning becomes prohibitive when the dataset is huge. Even when the memory is big enough to hold all the data, the prediction will be slow, since the distances between the query point and all the training points need to be computed and ranked. The situation becomes worse if in addition the data samples are all very high-dimensional. Leaving aside computation time issues, k-NN is a very versatile and competitive algorithm. It can be applied to any kind of object (not just numerical data) - as long as one can design a suitable distance function. In practice k-NN used with bagging can create improved and more robust results.
Step11: Let's apply the SVM to the same test data set to compare results
Step12: Since the SVM performs way better on this task - let's apply it to all data we did not use in training.
|
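The `classify_for_multiple_k` trick above — rank the neighbours once, then take a majority vote for every k — can be sketched in plain numpy, independent of Shogun (toy 2-D data here, not the USPS set):

```python
import numpy as np

def knn_multiple_k(X_train, y_train, X_test, max_k):
    """Majority-vote predictions for every k in 1..max_k, ranking neighbours once."""
    # Pairwise Euclidean distances, shape (n_test, n_train)
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    order = np.argsort(d, axis=1)          # neighbours ranked once per test point
    preds = np.empty((len(X_test), max_k), dtype=int)
    for k in range(1, max_k + 1):
        for i in range(len(X_test)):
            votes = np.bincount(y_train[order[i, :k]])
            preds[i, k - 1] = votes.argmax()
    return preds

# Two well-separated 2-D clusters
rng = np.random.RandomState(0)
X_train = np.vstack([rng.randn(20, 2), rng.randn(20, 2) + 6.0])
y_train = np.array([0] * 20 + [1] * 20)
X_test = np.array([[0.0, 0.0], [6.0, 6.0]])

preds = knn_multiple_k(X_train, y_train, X_test, max_k=5)
print(preds)
```

The distance ranking is computed once; only the cheap voting step is repeated per k, which is the same saving the Shogun method exploits.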
5,568
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
# import all Shogun classes
from shogun import *
from matplotlib.patches import Ellipse
# a tool for visualisation
def get_gaussian_ellipse_artist(mean, cov, nstd=1.96, color="red", linewidth=3):
"""Returns an ellipse artist for nstd times the standard deviation of this
Gaussian, specified by mean and covariance."""
# compute eigenvalues (ordered)
vals, vecs = eigh(cov)
order = vals.argsort()[::-1]
vals, vecs = vals[order], vecs[:, order]
theta = numpy.degrees(arctan2(*vecs[:, 0][::-1]))
# width and height are "full" widths, not radius
width, height = 2 * nstd * sqrt(vals)
e = Ellipse(xy=mean, width=width, height=height, angle=theta, \
edgecolor=color, fill=False, linewidth=linewidth)
return e
# create mixture of three Gaussians
num_components=3
num_max_samples=100
gmm=GMM(num_components)
dimension=2
# set means (TODO interface should be to construct mixture from individuals with set parameters)
means=zeros((num_components, dimension))
means[0]=[-5.0, -4.0]
means[1]=[7.0, 3.0]
means[2]=[0, 0.]
[gmm.set_nth_mean(means[i], i) for i in range(num_components)]
# set covariances
covs=zeros((num_components, dimension, dimension))
covs[0]=array([[2, 1.3],[.6, 3]])
covs[1]=array([[1.3, -0.8],[-0.8, 1.3]])
covs[2]=array([[2.5, .8],[0.8, 2.5]])
[gmm.set_nth_cov(covs[i],i) for i in range(num_components)]
# set mixture coefficients, these have to sum to one (TODO these should be initialised automatically)
weights=array([0.5, 0.3, 0.2])
gmm.set_coef(weights)
# now sample from each component seperately first, the from the joint model
hold(True)
colors=["red", "green", "blue"]
for i in range(num_components):
# draw a number of samples from current component and plot
num_samples=int(rand()*num_max_samples)+1
# emulate sampling from one component (TODO fix interface of GMM to handle this)
w=zeros(num_components)
w[i]=1.
gmm.set_coef(w)
# sample and plot (TODO fix interface to have loop within)
X=array([gmm.sample() for _ in range(num_samples)])
plot(X[:,0], X[:,1], "o", color=colors[i])
# draw 95% elipsoid for current component
gca().add_artist(get_gaussian_ellipse_artist(means[i], covs[i], color=colors[i]))
hold(False)
_=title("%dD Gaussian Mixture Model with %d components" % (dimension, num_components))
# since we used a hack to sample from each component
gmm.set_coef(weights)
# generate a grid over the full space and evaluate components PDF
resolution=100
Xs=linspace(-10,10, resolution)
Ys=linspace(-8,6, resolution)
pairs=asarray([(x,y) for x in Xs for y in Ys])
D=asarray([gmm.cluster(pairs[i])[3] for i in range(len(pairs))]).reshape(resolution,resolution)
figure(figsize=(18,5))
subplot(1,2,1)
pcolor(Xs,Ys,D)
xlim([-10,10])
ylim([-8,6])
title("Log-Likelihood of GMM")
subplot(1,2,2)
pcolor(Xs,Ys,exp(D))
xlim([-10,10])
ylim([-8,6])
_=title("Likelihood of GMM")
# sample and plot (TODO fix interface to have loop within)
X=array([gmm.sample() for _ in range(num_max_samples)])
plot(X[:,0], X[:,1], "o")
_=title("Samples from GMM")
def estimate_gmm(X, num_components):
# bring data into shogun representation (note that Shogun data is in column vector form, so transpose)
features=RealFeatures(X.T)
gmm_est=GMM(num_components)
gmm_est.set_features(features)
# learn GMM
gmm_est.train_em()
return gmm_est
component_numbers=[2,3]
# plot true likelihood
D_true=asarray([gmm.cluster(pairs[i])[num_components] for i in range(len(pairs))]).reshape(resolution,resolution)
figure(figsize=(18,5))
subplot(1,len(component_numbers)+1,1)
pcolor(Xs,Ys,exp(D_true))
xlim([-10,10])
ylim([-8,6])
title("True likelihood")
for n in range(len(component_numbers)):
# TODO get rid of these hacks and offer nice interface from Shogun
# learn GMM with EM
gmm_est=estimate_gmm(X, component_numbers[n])
# evaluate at a grid of points
D_est=asarray([gmm_est.cluster(pairs[i])[component_numbers[n]] for i in range(len(pairs))]).reshape(resolution,resolution)
# visualise densities
subplot(1,len(component_numbers)+1,n+2)
pcolor(Xs,Ys,exp(D_est))
xlim([-10,10])
ylim([-8,6])
_=title("Estimated likelihood for EM with %d components"%component_numbers[n])
# function to draw ellipses for all components of a GMM
def visualise_gmm(gmm, color="blue"):
for i in range(gmm.get_num_components()):
component=Gaussian.obtain_from_generic(gmm.get_component(i))
gca().add_artist(get_gaussian_ellipse_artist(component.get_mean(), component.get_cov(), color=color))
# multiple runs to illustrate random initialisation matters
for _ in range(3):
figure(figsize=(18,5))
subplot(1, len(component_numbers)+1, 1)
plot(X[:,0],X[:,1], 'o')
visualise_gmm(gmm, color="blue")
title("True components")
for i in range(len(component_numbers)):
gmm_est=estimate_gmm(X, component_numbers[i])
subplot(1, len(component_numbers)+1, i+2)
plot(X[:,0],X[:,1], 'o')
visualise_gmm(gmm_est, color=colors[i])
# TODO add a method to get likelihood of full model, retraining is inefficient
likelihood=gmm_est.train_em()
_=title("Estimated likelihood: %.2f (%d components)"%(likelihood,component_numbers[i]))
def cluster_and_visualise(gmm_est):
# obtain cluster index for each point of the training data
# TODO another hack here: Shogun should allow to pass multiple points and only return the index
# as the likelihood can be done via the individual components
# In addition, argmax should be computed for us, although log-pdf for all components should also be possible
clusters=asarray([argmax(gmm_est.cluster(x)[:gmm_est.get_num_components()]) for x in X])
# visualise points by cluster
hold(True)
for i in range(gmm_est.get_num_components()):
indices=clusters==i
plot(X[indices,0],X[indices,1], 'o', color=colors[i])
hold(False)
# learn gmm again
gmm_est=estimate_gmm(X, num_components)
figure(figsize=(18,5))
subplot(121)
cluster_and_visualise(gmm)
title("Clustering under true GMM")
subplot(122)
cluster_and_visualise(gmm_est)
_=title("Clustering under estimated GMM")
figure(figsize=(18,5))
for comp_idx in range(num_components):
subplot(1,num_components,comp_idx+1)
# evaluated likelihood under current component
# TODO Shogun should do the loop and allow to specify component indices to evaluate pdf for
# TODO distribution interface should be the same everywhere
component=Gaussian.obtain_from_generic(gmm.get_component(comp_idx))
cluster_likelihoods=asarray([component.compute_PDF(X[i]) for i in range(len(X))])
# normalise
cluster_likelihoods-=cluster_likelihoods.min()
cluster_likelihoods/=cluster_likelihoods.max()
# plot, coloured by likelihood value
cm=get_cmap("jet")
hold(True)
for j in range(len(X)):
color = cm(cluster_likelihoods[j])
plot(X[j,0], X[j,1] ,"o", color=color)
hold(False)
title("Data coloured by likelihood for component %d" % comp_idx)
# compute cluster index for every point in space
D_est=asarray([gmm_est.cluster(pairs[i])[:num_components].argmax() for i in range(len(pairs))]).reshape(resolution,resolution)
# visualise clustering
cluster_and_visualise(gmm_est)
# visualise space partitioning
hold(True)
pcolor(Xs,Ys,D_est)
hold(False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Gaussian Mixture Models and Expectation Maximisation in Shogun
Step2: Set up the model in Shogun
Step3: Sampling from mixture models
Step4: Evaluating densities in mixture Models
Step5: Density estimating with mixture models
Step6: Imagine you did not know the true generating process of this data. What would you think just looking at it? There are clearly at least two components (or clusters) that might have generated this data, but three also looks reasonable. So let us try to learn a Gaussian mixture model on those.
Step7: So far so good, now lets plot the density of this GMM using the code from above
Step8: It is also possible to access the individual components of the mixture distribution. In our case, we can for example draw 95% ellipses for each of the Gaussians using the method from above. We will do this (and more) below.
Step9: Clustering with mixture models
Step10: These are clusterings obtained via the true mixture model and the one learned via EM. There is a slight subtlety here
Step11: Note how the lower left and middle cluster are overlapping in the sense that points at their intersection have similar likelihoods. If you do not care at all about this and are just interested in a partitioning of the space, simply choose the maximum.
|
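The EM estimation Shogun performs in `train_em` can be sketched for a 1-D, two-component mixture in plain numpy; this illustrates the algorithm itself, not Shogun's implementation:

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """Minimal EM for a 1-D, two-component Gaussian mixture."""
    # Deterministic initialisation: spread the means across the data
    mu = np.percentile(x, [25, 75])
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2) .sum(axis=0) / nk
    return w, mu, var

# Sample from a known mixture and check that EM recovers the component means
rng = np.random.RandomState(0)
x = np.concatenate([rng.normal(-3.0, 1.0, 300), rng.normal(3.0, 1.0, 300)])
w, mu, var = em_gmm_1d(x)
print(np.sort(mu))
```

As with the Shogun runs above, EM is sensitive to initialisation; the percentile-based start used here is just one simple, deterministic choice.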
5,569
|
<ASSISTANT_TASK:>
Python Code:
def addFunction(inputNumber):
result = inputNumber + 2
return result
print addFunction(2)
var = 2
print addFunction(var)
def addFunction(inputNumber):
if inputNumber < 0:
return 'Number must be positive!'
result = inputNumber + 2
return result
print addFunction(-2)
print addFunction(2)
def addTwoNumbers(inputNumber1, inputNumber2):
result = inputNumber1 + inputNumber2
return result
print addTwoNumbers(2, 3)
def twoNumbers(inputNumber1, inputNumber2):
addition = inputNumber1 + inputNumber2
multiplication = inputNumber1 * inputNumber2
return [addition, multiplication]
result = twoNumbers(2, 3)
print 'addition: ', result[0]
print 'multiplication: ', result[1]
add, mult = twoNumbers(2, 3)
print 'addition: ', str(add)
print 'multiplication: ', str(mult)
class CounterClass:
count = 0
def addToCounter(self, inputValue):
self.count += inputValue
def getCount(self):
return self.count
myCounter = CounterClass()
myCounter.addToCounter(2)
print myCounter.getCount()
myCounter.count
class CounterClass:
count = 0
def __init__(self, inputValue):
self.count = inputValue
def addToCounter(self, inputValue):
self.count += inputValue
def getCount(self):
return self.count
myNewCounter = CounterClass(10)
myNewCounter.addToCounter(2)
#this should now return 12
print myNewCounter.getCount()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: On its own, this code will only define what the function does, but will not actually run any code. To execute the code inside the function you have to call it somewhere within the script and pass it the proper inputs
Step2: A function's definition begins with the keyword 'def'. After this is the function's name, which follows the same naming conventions as variables. Inside the parenthesis after the function name you can place any number of input variables, which will be passed to the function when it is called, and are available within the body of the function. When you call a function, you can either directly pass values or pass variables that have values stored inside of them. For example, this code will call the function in the same way
Step3: Here the value of the 'var' variable, which in this case is 2, is being passed to the 'addFunction' function, and is then available within that function through the 'inputNumber' variable. Notice that the names of the two variables 'var' and 'inputNumber' don't have to match. When a value gets passed to a function it forms a direct connection between the two sets of parenthesis which carries the data. In this case 'var' is a global variable that stores the value '2' in the main script, while 'inputNumber' is a local variable which stores that value only for the duration of that function. In this way functions 'wrap up' specific tasks and all the data that is necessary to execute that task to limit the number of global variables necessary in the main function.
Step4: You can see that in this case, if the input is less than zero the conditional will be met, which causes the first return statement to run, skipping the rest of the code in the function.
Step5: You can also return multiple values by building them into a list, and then extracting them from the returned list. Let's expand our function to return both the addition and multiplication of two numbers
Step6: If you don't want to use a list you can also ask for the results as an ordered set of new variables separated by a comma
Step7: These kinds of functions are extremely useful for creating efficient and readable code. By wrapping up certain functionalities into custom modules, they allow you (and possibly others) to reuse code in a very efficient way, and also force you to be explicit about the various sets of operations happening in your code. You can see that the basic definition of functions is quite simple; however, you can quickly start to define more advanced logic, where functions call each other and pass around inputs and returns in highly complex ways (you can even pass a function as an input into another function!). This kind of programming, which uses functions to encapsulate discrete pieces of logic in a program, is called functional programming.
Step8: Notice we are again using the '+=' shorthand to increment the value of the object's count variable by the input value. To use this class, we first need to create an instance of it, which we will store in a variable just like any other piece of data
Step9: Once we create an instance of a class (this is called 'instantiation'), we can run that instance's methods and query its variables. Note that the general class definition is only a construct. All variables within the class only apply to a particular instance, and the methods can only be run as they relate to that instance. For example
Step10: Right away, you will notice a few differences between how we define functions and classes. First of all, no variables are passed on the first line of the definition since the 'class' keyword only defines the overall structure of the class. After the first line you will find a list of variables that are the local variables of that class, and keep track of data for individual instances. After this you will have a collection of local methods (remember 'methods' are simply functions that belong to a particular class) that define the class functionality. These methods are defined the same way as before, except you see that the first input is always the keyword 'self'. This represents the object instance, and is always passed as the first input into each method in a class. This allows you to query the local variables of the instance, as you can see us doing with the 'count' variable.
Step11: However, this is discouraged because it reveals the true name of the local variables to the end user. In a production environment this would pose severe security risks, but it is considered bad practice even in private uses. Instead, you are encouraged to create special 'accessor' methods to pull variable values from the instance, as we have done with the 'getCount()' method in our example. Another advantage of this practice (which is called encapsulation) is that the code is easier to maintain. You are free to make any changes within the class definition, including changing the names of the local variables and what they do. As long as you maintain the accessor functions and they return the expected result, you do not have to update anything in the main code.
Step12: Now we can create a new instance of the counter, but this time pass in a starting value for the count.
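A minimal sketch of such a counter class, with a constructor that accepts a starting value and an accessor method as described above (the exact class definition is not shown in this excerpt, so names here are illustrative):

```python
class Counter(object):
    def __init__(self, start=0):
        # local variable tracking this particular instance's count
        self.count = start

    def add(self, value):
        # '+=' shorthand increments the instance's count by the input value
        self.count += value

    def getCount(self):
        # accessor method: callers never touch self.count directly
        return self.count

c = Counter(10)   # instantiate with a starting value
c.add(5)
```

Because callers only use `getCount()`, the internal variable could be renamed or restructured without breaking the main code.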
|
5,570
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import scipy as sp
zz = np.loadtxt('wiggleZ_DR1_z.dat',dtype='float'); # Load WiggleZ redshifts
np.min(zz) # Check bounds
np.max(zz)
nbins = 50; # Is this a good choice?
n, bins, patches = hist(zz, bins=nbins) # With hist, one needs to (spuriously) request the patch objects as well
x = bins[0:nbins] + (bins[2]-bins[1])/2; # Convert bin edges to centres, chopping the last
# Import the function you need
from scipy.interpolate import interp1d
# Build an interpolation function for p(z) that accepts an arbitrary redshift z
z = linspace(0,2,100); plot(z,p(z)) # Test your interpolation function out
# Import the function you need
from scipy import integrate
Pz = lambda : ... # Use integrate inside a lambda function to define P(z)?
total = Pz(5) # Get normalisation constant by evaluating P(z->\infty)
total # Check that this worked
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data from file
Step2: Check bounds
Step3: Construct histogram from data
Step4: Interpolate histogram output -> p(z); n.b. that you can also use numerical quadrature to get $P(z)$ directly.
Step5: Use numerical integration to get $P(z) = \int_0^\infty p(z') dz'$
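One possible completed sketch of the interpolation and normalization steps described above, using a synthetic redshift sample in place of the WiggleZ file (so the specific numbers here are illustrative only):

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy import integrate

rng = np.random.default_rng(0)
zz = rng.normal(0.6, 0.2, 10000)          # stand-in for the WiggleZ redshifts
zz = zz[(zz > 0) & (zz < 2)]

n, edges = np.histogram(zz, bins=50, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])   # convert bin edges to bin centres
p = interp1d(centres, n, bounds_error=False, fill_value=0.0)

# P(z) = \int_0^z p(z') dz'
Pz = lambda z: integrate.quad(p, 0, z)[0]
total = Pz(2.0)   # effectively P(z -> infinity) given the sample range
```

With `density=True` the histogram already integrates to one, so `total` serves as a consistency check that the interpolation and quadrature behave as expected.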
|
5,571
|
<ASSISTANT_TASK:>
Python Code:
### Get Gradient Jacobians (Change in h(x) i.e. ground level/ Change in x/y)
grad_lat = (np.gradient(map_terrain, axis = 0))/75
grad_lon = (np.gradient(map_terrain, axis = 1))/75
grid_points = np.array(list(product(map_lat_range, map_lon_range)))
map_grad_stack_lat = grad_lat.reshape(-1,1)
map_grad_stack_lon = grad_lon.reshape(-1,1)
# lat, lon
func_map_grad_lat = LinearNDInterpolator( \
np.vstack((grid_points[:,0], grid_points[:,1])).T, \
map_grad_stack_lat, \
fill_value=np.nan, \
rescale=False)
func_map_grad_lon = LinearNDInterpolator( \
np.vstack((grid_points[:,0], grid_points[:,1])).T, \
map_grad_stack_lon, \
fill_value=np.nan, \
rescale=False)
# Load Altimeter data
z_alt = # LOAD ALTIMETER DATA
# Filtering utilities
import numpy as np
from scipy.fftpack import fft
from scipy import signal
def median_filter(s, f_size):
return signal.medfilt(s, f_size)
def freq_filter(s, f_size, cutoff):
lpf=signal.firwin(f_size, cutoff, window='hamming')
return signal.convolve(s, lpf, mode='same')
def comb_filter(s, f_size, cutoff, fs):
median_s=median_filter(s, f_size)
return freq_filter(median_s, f_size, cutoff/fs)
fs=200
f_size = 55
cutoff = 10
g = 9.80665
z_alt_lp = comb_filter(z_alt.flatten(), f_size, cutoff, fs)
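Since the altimeter data itself is loaded elsewhere, a quick sanity check of the same filter chain on a synthetic signal (the signal below is made up, purely to exercise the call pattern):

```python
import numpy as np
from scipy import signal

def median_filter(s, f_size):
    return signal.medfilt(s, f_size)

def freq_filter(s, f_size, cutoff):
    lpf = signal.firwin(f_size, cutoff, window='hamming')
    return signal.convolve(s, lpf, mode='same')

def comb_filter(s, f_size, cutoff, fs):
    # median filter to knock out spikes, then FIR low-pass
    return freq_filter(median_filter(s, f_size), f_size, cutoff / fs)

fs, f_size, cutoff = 200, 55, 10
t = np.arange(0, 5, 1.0 / fs)
clean = 100.0 + 5.0 * np.sin(2 * np.pi * 0.2 * t)   # slow altitude variation
noisy = clean + np.random.default_rng(1).normal(0, 2.0, t.size)
filtered = comb_filter(noisy, f_size, cutoff, fs)
```

Away from the edges (where the convolution transient lives), the filtered residual should be well below the 2 m noise that was injected.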
# Predict and Update Functions
def predict(x, P, F, Q):
x = F @ x
P = F @ P @ F.T + Q
return x, P
def update(x, P, z, H, R, debug = False):
dim_x = len(x)
y = z - H @ x
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.pinv(S)
x = x + K @ y
P = (np.eye(dim_x) - K @ H) @ P
y2 = z - H @ x
if debug:
return x, P, y, y2
else:
return x, P
def update_terrain(x, P, z, H, R, func_map):
dim_x = len(x)
# Get current LLA
z_pred = func_map(x[0], x[1])
# Update
y = z - z_pred
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.pinv(S)
x = x + K @ y
P = (np.eye(dim_x) - K @ H) @ P
return x, P
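Because `predict`/`update` above are generic linear-Kalman steps, a tiny 1-D constant-velocity example (toy numbers, not the terrain setup) can verify the expected behavior — the position variance should shrink after a measurement update:

```python
import numpy as np

def predict(x, P, F, Q):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, H, R):
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.pinv(S)
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
Q = np.diag([0.1, 0.1])
H = np.array([[1.0, 0.0]])              # we observe position only
R = np.array([[1.0]])

x = np.array([0.0, 0.0])
P = np.diag([100.0, 100.0])
x, P = predict(x, P, F, Q)
var_before = P[0, 0]
x, P = update(x, P, np.array([2.0]), H, R)
var_after = P[0, 0]
```

With a diffuse prior, the update should pull the position estimate close to the measurement of 2.0 while collapsing the position variance.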
# Process Model
F = np.array([[1, dt, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 1, dt, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 1, dt, 0, 0],
              [0, 0, 0, 0, 0, 1, 0, 0],
              [0, 0, 0, 0, 0, 0, 1, dt],
              [0, 0, 0, 0, 0, 0, 0, 1]])
# Measurement Models
H_vel = np.array([[0, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0, 0]])
H_gps = np.array([[1, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0, 0]])
# Logging arrays
x_mat, P_mat, residual_mat, grad_mat = [], [], [], []
# Initialize filter
ind_sim = 1000
# Initial Conditions
x0 = np.array([0, 0, 0, 0, 0, 0, 0, 0])
P0 = np.diag([100**2, 100**2, 10**2, 10**2, 10**2, 10**2, 10**2, 10**2])
# Measurement noise
R_vel = np.diag([10, 10])
R_gps = np.diag([10, 10])
R_alt = np.diag([100])
# Process Noise
Q = np.diag([10**2, 10**2, 1**2, 1**2, .1**2, .1**2, .01**2, .01**2])
for i in range(ind_sim):
z_vel = # velocity data
z_gps = # gps data
z_alt = # Filtered altimeter data
# Obtain Jacobian from Terrain Map
    dzdx = func_map_grad_lat(x[0], x[1])
    dzdy = func_map_grad_lon(x[0], x[1])
    H_map = np.array([[dzdx[0], 0, dzdy[0], 0, 0, 0, 0, 0]])
## Update
    x, P = update(x, P, z_vel, H_vel, R_vel)
if i % c_rate == 0 and flag_sensor:
if sensor == 'gps':
x, P = update(x, P, z_gps, H_gps, R_gps)
elif sensor == 'terrain':
            x, P = update_terrain(x, P, z_alt, H_map, R_alt, func_map)
## Log
x_mat.append(x), P_mat.append(P)
## Predict
x, P = predict(x, P, F, Q)
x_mat = np.array(x_mat)
plt.figure(figsize = (16,4))
plt.subplot(1,2,1)
plt.plot(x_mat[:,0], x_mat[:,2])
plt.xlabel('x'), plt.ylabel('y')
plt.subplot(1,2,2)
pos_err = np.linalg.norm(x_true - x_mat[:, [0, 2]], axis=1) # x_true: ground-truth positions
plt.plot(abs(pos_err))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Terrain Altimeter Sensor
Step2: Set Up Navigation Filter
Step3: Plot Results
|
5,572
|
<ASSISTANT_TASK:>
Python Code:
# Print platform info of Python exec env.
import sys
sys.version
import warnings
warnings.simplefilter('ignore', FutureWarning)
from pandas import *
show_versions()
data = read_excel('WHO POP TB some.xls')
data.head()
data.tail()
data.info()
data.describe()
tbColumn = data['TB deaths']
tbColumn.sum()
tbColumn.max()
tbColumn.min()
tbColumn.mean()
tbColumn.median()
data.sort_values('TB deaths').head()
populationColumn = data['Population (1000s)']
data['TB deaths (per 100,000)'] = tbColumn * 100 / populationColumn
data.head()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The data
Step2: The range of the problem
Step3: The total number of deaths in 2013 is
Step4: The largest and smallest number of deaths in a single country are
Step5: From less than 20 to almost a quarter of a million deaths is a huge range. The average number of deaths, over all countries in the data, can give a better idea of the seriousness of the problem in each country.
Step6: The median is far lower than the mean. This indicates that some of the countries had a very high number of TB deaths in 2013, pushing the value of the mean up.
Step7: The table raises the possibility that a large number of deaths may be partly due to a large population. To compare the countries on an equal footing, the death rate per 100,000 inhabitants is computed.
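The per-100,000 arithmetic of that last step can be checked on a toy frame (made-up numbers, assuming population is given in thousands as in the WHO table):

```python
import pandas as pd

toy = pd.DataFrame({
    'Country': ['A', 'B'],
    'Population (1000s)': [1000, 50000],   # i.e. 1 million and 50 million people
    'TB deaths': [20, 5000],
})
# deaths per 100,000 = deaths / (pop_1000s * 1000) * 100000 = deaths * 100 / pop_1000s
toy['TB deaths (per 100,000)'] = toy['TB deaths'] * 100 / toy['Population (1000s)']
```

Country B has far more absolute deaths, but on equal footing the rates (2 vs 10 per 100,000) are what allow a fair comparison.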
|
5,573
|
<ASSISTANT_TASK:>
Python Code:
!pip install arviz
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pymc3 as pm
import pandas as pd
import theano
import seaborn as sns
sns.set_style("whitegrid")
np.random.seed(123)
url = "https://github.com/twiecki/WhileMyMCMCGentlySamples/blob/master/content/downloads/notebooks/radon.csv?raw=true"
data = pd.read_csv(url)
# data = pd.read_csv('../data/radon.csv')
data["log_radon"] = data["log_radon"].astype(theano.config.floatX)
county_names = data.county.unique()
county_idx = data.county_code.values
n_counties = len(data.county.unique())
with pm.Model() as hierarchical_model_centered:
# Hyperpriors for group nodes
mu_a = pm.Normal("mu_a", mu=0.0, sd=100**2)
sigma_a = pm.HalfCauchy("sigma_a", 5)
mu_b = pm.Normal("mu_b", mu=0.0, sd=100**2)
sigma_b = pm.HalfCauchy("sigma_b", 5)
# Intercept for each county, distributed around group mean mu_a
# Above we just set mu and sd to a fixed value while here we
# plug in a common group distribution for all a and b (which are
# vectors of length n_counties).
a = pm.Normal("a", mu=mu_a, sd=sigma_a, shape=n_counties)
# Intercept for each county, distributed around group mean mu_a
b = pm.Normal("b", mu=mu_b, sd=sigma_b, shape=n_counties)
# Model error
eps = pm.HalfCauchy("eps", 5)
# Linear regression
radon_est = a[county_idx] + b[county_idx] * data.floor.values
# Data likelihood
radon_like = pm.Normal("radon_like", mu=radon_est, sd=eps, observed=data.log_radon)
# Inference button (TM)!
with hierarchical_model_centered:
hierarchical_centered_trace = pm.sample(draws=5000, tune=1000)[1000:]
pm.traceplot(hierarchical_centered_trace);
print("Rhat(sigma_b) = {}".format(pm.diagnostics.gelman_rubin(hierarchical_centered_trace)["sigma_b"]))
fig, axs = plt.subplots(nrows=2)
axs[0].plot(hierarchical_centered_trace.get_values("sigma_b", chains=1), alpha=0.5)
axs[0].set(ylabel="sigma_b")
axs[1].plot(hierarchical_centered_trace.get_values("b", chains=1), alpha=0.5)
axs[1].set(ylabel="b");
x = pd.Series(hierarchical_centered_trace["b"][:, 75], name="slope b_75")
y = pd.Series(hierarchical_centered_trace["sigma_b"], name="slope group variance sigma_b")
sns.jointplot(x, y, ylim=(0, 0.7));
with pm.Model() as hierarchical_model_non_centered:
# Hyperpriors for group nodes
mu_a = pm.Normal("mu_a", mu=0.0, sd=100**2)
sigma_a = pm.HalfCauchy("sigma_a", 5)
mu_b = pm.Normal("mu_b", mu=0.0, sd=100**2)
sigma_b = pm.HalfCauchy("sigma_b", 5)
# Before:
# a = pm.Normal('a', mu=mu_a, sd=sigma_a, shape=n_counties)
# Transformed:
a_offset = pm.Normal("a_offset", mu=0, sd=1, shape=n_counties)
a = pm.Deterministic("a", mu_a + a_offset * sigma_a)
# Before:
# b = pm.Normal('b', mu=mu_b, sd=sigma_b, shape=n_counties)
# Now:
b_offset = pm.Normal("b_offset", mu=0, sd=1, shape=n_counties)
b = pm.Deterministic("b", mu_b + b_offset * sigma_b)
# Model error
eps = pm.HalfCauchy("eps", 5)
radon_est = a[county_idx] + b[county_idx] * data.floor.values
# Data likelihood
radon_like = pm.Normal("radon_like", mu=radon_est, sd=eps, observed=data.log_radon)
# Inference button (TM)!
with hierarchical_model_non_centered:
hierarchical_non_centered_trace = pm.sample(draws=5000, tune=1000)[1000:]
pm.traceplot(hierarchical_non_centered_trace, varnames=["sigma_b"]);
fig, axs = plt.subplots(ncols=2, sharex=True, sharey=True)
x = pd.Series(hierarchical_centered_trace["b"][:, 75], name="slope b_75")
y = pd.Series(hierarchical_centered_trace["sigma_b"], name="slope group variance sigma_b")
axs[0].plot(x, y, ".")
axs[0].set(title="Centered", ylabel="sigma_b", xlabel="b_75")
x = pd.Series(hierarchical_non_centered_trace["b"][:, 75], name="slope b_75")
y = pd.Series(hierarchical_non_centered_trace["sigma_b"], name="slope group variance sigma_b")
axs[1].plot(x, y, ".")
axs[1].set(title="Non-centered", xlabel="b_75");
pm.kdeplot(
np.stack(
[
hierarchical_centered_trace["sigma_b"],
hierarchical_non_centered_trace["sigma_b"],
]
).T
)
plt.axvline(hierarchical_centered_trace["sigma_b"].mean(), color="b", linestyle="--")
plt.axvline(hierarchical_non_centered_trace["sigma_b"].mean(), color="g", linestyle="--")
plt.legend(["Centered", "Non-centered", "Centered posterior mean", "Non-centered posterior mean"])
plt.xlabel("sigma_b")
plt.ylabel("Probability Density");
x = pd.Series(hierarchical_non_centered_trace["b_offset"][:, 75], name="slope b_offset_75")
y = pd.Series(hierarchical_non_centered_trace["sigma_b"], name="slope group variance sigma_b")
sns.jointplot(x, y, ylim=(0, 0.7))
with hierarchical_model_centered:
mode = pm.find_MAP()
mode["b"]
np.exp(mode["sigma_b_log_"])
hierarchical_non_centered_trace["b"].mean(axis=0)
hierarchical_non_centered_trace["sigma_b"].mean(axis=0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The intuitive specification
Step2: I have seen plenty of traces with terrible convergences but this one might look fine to the unassuming eye. Perhaps sigma_b has some problems, so let's look at the Rhat
Step3: Not too bad -- well below 1.01. I used to think this wasn't a big deal but Michael Betancourt in his StanCon 2017 talk makes a strong point that it is actually very problematic. To understand what's going on, let's take a closer look at the slopes b and their group variance (i.e. how far they are allowed to move from the mean) sigma_b. I'm just plotting a single chain now.
Step4: sigma_b seems to drift into this area of very small values and get stuck there for a while. This is a common pattern and the sampler is trying to tell you that there is a region in space that it can't quite explore efficiently. While stuck down there, the slopes b_i become all squished together. We've entered The Funnel of Hell (it's just called the funnel, I added the last part for dramatic effect).
Step5: This makes sense, as the slope group variance goes to zero (or, said differently, we apply maximum shrinkage), individual slopes are not allowed to deviate from the slope group mean, so they all collapse to the group mean.
Step6: Pay attention to the definitions of a_offset, a, b_offset, and b and compare them to before (commented out). What's going on here? It's pretty neat actually. Instead of saying that our individual slopes b are normally distributed around a group mean (i.e. modeling their absolute values directly), we can say that they are offset from a group mean by a certain value (b_offset; i.e. modeling their values relative to that mean). Now we still have to consider how far from that mean we actually allow things to deviate (i.e. how much shrinkage we apply). This is where sigma_b makes a comeback. We can simply multiply the offset by this scaling factor to get the same effect as before, just under a different parameterization. For a more formal introduction, see e.g. Betancourt & Girolami (2013).
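The equivalence of the two parameterizations is easy to check numerically — sampling b directly versus building it as mu_b + offset * sigma_b yields the same distribution (a quick sketch with arbitrary values):

```python
import numpy as np

rng = np.random.default_rng(42)
mu_b, sigma_b, n = 1.5, 0.4, 200000

# centered: model b's absolute value directly
b_centered = rng.normal(mu_b, sigma_b, n)

# non-centered: model a unit-normal offset, then shift and rescale
b_offset = rng.normal(0.0, 1.0, n)
b_non_centered = mu_b + b_offset * sigma_b
```

Statistically the two are identical; what changes is the geometry the sampler sees, since the offset is decoupled from `sigma_b`.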
Step7: That looks much better as also confirmed by the joint plot
Step8: To really drive this home, let's also compare the sigma_b marginal posteriors of the two models
Step9: That's crazy -- there's a large region of very small sigma_b values that the sampler could not even explore before. In other words, our previous inferences ("Centered") were severely biased towards higher values of sigma_b. Indeed, if you look at the previous blog post the sampler never even got stuck in that low region causing me to believe everything was fine. These issues are hard to detect and very subtle, but they are meaningful as demonstrated by the sizable difference in posterior mean.
Step10: This is the space the sampler sees; you can see how the funnel is flattened out. We can freely change the (relative) slope offset parameters even if the slope group variance is tiny as it just acts as a scaling parameter.
Step11: As you can see, the slopes are all identical and the group slope variance is effectively zero. The reason is again related to the funnel. The MAP only cares about the probability density which is highest at the bottom of the funnel.
|
5,574
|
<ASSISTANT_TASK:>
Python Code:
count,feature_names=text.count_letters('data/languages/E3.txt')
print((count,feature_names))
p=text.letter_freq('English',feature_names)
print(p)
print((sum(count*log10(p))))
C=text.LanguageFileClassifier()
result=C.loglikelihood('data/languages/E*.txt',verbose=True)
C.predict('data/languages/E*',verbose=True)
[C.target_names[i] for i in C.predict('data/languages/E*')]
from classy import text
train=text.load_files('data/films/train',verbose=True)
test=text.load_files('data/films/test',verbose=True)
train,test=text.text_to_vectors('data/films/train','data/films/test',verbose=True)
train.vectors
v=array(train.vectors[0,:].todense()).ravel()
v.max()
v.shape
v=array(train.vectors[0,:].todense()).ravel()
plot(v,'.')
v=array(train.vectors[10,:].todense()).ravel()
plot(v,'.')
xlabel('feature number')
ylabel('frequency of feature')
train.vectors.shape
C=text.Multinomial()
C.fit(train.vectors,train.targets)
C.predict(test.vectors)
C.percent_correct(test.vectors,test.targets)
from classy import *
train_files=text.load_files('data/films/train',verbose=True)
test_files=text.load_files('data/films/test',verbose=True)
train_data,test_data=text.text_to_vectors(train_files,test_files,verbose=True)
train_data.vectors
vectors_to_image(train_data.vectors,binary=True)
vectors_to_image(train_data.vectors,binary=False)
from classy import text
train_files=text.load_files('data/films/train',verbose=True)
test_files=text.load_files('data/films/test',verbose=True)
train_data,test_data=text.text_to_vectors(train_files,test_files,verbose=True)
train_data.vectors
train_data,test_data=text.text_to_vectors(train_files,test_files,ngram_range=(1,2),verbose=True)
train_data.vectors
print((train_data.feature_names[:100]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Text Classification from Folders
Step2: Footnote
Step3: Bigrams/Trigrams
Step4: specify the ngram_range - the smallest ngram to use, and the largest. the default is (1,1), so only 1-grams are used. this example calculates the 1-gram and the 2-gram (bi-gram)
|
5,575
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
xv=[1,2,3,4]; yv=[5,1,4,0]
plt.plot(xv,yv);
plt.plot(xv,yv,'ro');
myplot=plt.plot(xv,yv,'k--');
plt.setp(myplot,linewidth=3.0,marker='+',markersize=30);
myplot=plt.plot(xv,yv,'k--');
plt.setp(myplot,'linewidth',3.0,'marker','+','markersize',30);
plt.setp(myplot)
plt.axis()
plt.axis([0.5,4.5,-0.5,5.5])
%matplotlib
plt.ioff() # deactivates 'interactive' mode
fig = plt.figure()
ti=plt.title('Very important data')
xl=plt.xlabel('time'); yl=plt.ylabel('value')
plt.setp(xl,fontweight='bold');
plt.show()
fig.savefig('foo.ps', dpi=600, format='ps',orientation='landscape')
fig.savefig('foo.png', dpi=600, format='png',orientation='landscape')
fig.savefig('foo.pdf', dpi=600, format='pdf',orientation='landscape')
plt.close(fig)
plt.close('all')
fig = plt.figure()
plt.subplot(2,1,1)
plt.plot(xv,yv,'b-')
plt.subplot(2,1,2)
plt.plot(yv,xv,'ro')
plt.show()
plt.close(fig)
# switch back to inline mode
%matplotlib inline
fig2=plt.figure(figsize=(10,10))
plt.subplot(2,1,1)
plt.plot(xv,yv,'b-')
plt.subplot(2,1,2)
plt.plot(yv,xv,'ro');
xv=np.arange(-10,10.5,0.5); xv
plt.plot(xv,2*xv**3-5*xv**2+7*xv)
plt.plot(xv,2000*np.cos(xv),'r--')
plt.text(-10,-2800,'curve A')
plt.text(3,1500,'curve B');
xv_lin=np.arange(-3,3.01,0.02)
xv=10.**xv_lin
plt.semilogx(xv,np.exp(-xv/0.01)+0.5*np.exp(-xv/10)+0.2*np.exp(-xv/200));
plt.semilogx(xv,np.exp(-xv/0.01)+0.5*np.exp(-xv/10)+0.2*np.exp(-xv/200))
plt.grid(color='k')
plt.semilogy(xv,np.exp(-xv/0.01)+0.5*np.exp(-xv/10)+0.2*np.exp(-xv/200))
xv=[0.5,1.5,2.5,3.5]; yv=[2,5,1,6]
mybar=plt.bar(xv, yv, width=1, yerr=0.5, facecolor='b')
mybar=plt.bar(xv, yv, width=1, yerr=0.5);
plt.xticks(xv, ['A','B','C','D'])
plt.setp(mybar, facecolor='r', edgecolor='w');
plt.figure(figsize=(5,5))
handles=plt.pie([1,2,3,4], explode=[0.2,0,0,0], shadow=True, labels=['A','B','C','D']);
handles
plt.figure(figsize=(5,5))
handles=plt.pie([1,2,3,4], explode=[0.2,0,0,0], shadow=True, labels=['A','B','C','D'])
plt.setp(handles[0][0], color='y')
plt.setp(handles[1][0], text='Blubber');
n=8; X,Y=np.mgrid[0:n,0:n]
T=np.arctan2(Y-n/2.0,X-n/2.0)
R=10+np.sqrt((Y-n/2.0)**2+(X-n/2.0)**2)
U,V=R*np.cos(T),R*np.sin(T)
plt.axes([0.025,0.025,0.95,0.95])
plt.quiver(X,Y,U,V,R,alpha=.5)
plt.quiver(X,Y,U,V, edgecolor='k', facecolor= 'None', linewidth=.5);
ax=plt.axes([0.025,0.025,0.95,0.95],polar=True)
N=20; theta=np.arange(0.0,2*np.pi,2*np.pi/N)
radii=10*np.random.rand(N)
width=np.pi/4*np.random.rand(N)
bars=plt.bar(theta,radii,width=width,bottom=0.0)
for r,bar in zip(radii,bars):
bar.set_facecolor( plt.cm.jet(r/10.))
bar.set_alpha(0.5)
xv=np.linspace(-10,10,100); yv=xv
X,Y=np.meshgrid(xv,yv)
Z=np.exp(-((X-1)**2/2/0.5**2)-((Y+2)**2/2/3**2))
Z=Z+1.5*np.exp(-((X-5)**2/2/4**2)-((Y-6)**2/2/3**2))
plt.contourf(X,Y,Z,10,alpha=0.5,cmap=plt.cm.hot)
C=plt.contour(X,Y,Z,10,colors='black', linewidths=0.5)
plt.clabel(C,inline=1,fontsize=10);
plt.pcolormesh(X,Y,Z,alpha=0.5,cmap=plt.cm.hot)
plt.axis([-5,10,-8,10]);
from mpl_toolkits.mplot3d import Axes3D
%matplotlib
plt.ioff()
plt.close('all'); fig=plt.figure()
ax=Axes3D(fig)
plt.show()
plt.close('all'); fig=plt.figure()
ax=Axes3D(fig)
import random as rn
xv=[]; yv=[]; zv=[]
for c in range(100):
xv.append(rn.random()); yv.append(rn.random())
zv.append(rn.random())
ax.scatter(xv,yv,zv)
plt.show()
plt.close('all'); fig=plt.figure()
ax=Axes3D(fig)
xv=np.linspace(-10,10,100); yv=np.linspace(-10,10,100)
cx,cy=np.meshgrid(xv,yv)
cz=0.5*cx+np.exp(-cy**2)
tilt=ax.plot_surface(cx,cy,cz,linewidth=0, cmap=plt.cm.jet);
plt.show()
plt.close('all'); fig=plt.figure()
ax=Axes3D(fig)
xv=np.linspace(-10,10,100); yv=np.linspace(-10,10,100)
cx,cy=np.meshgrid(xv,yv)
cz=0*cx
def gauss2D(x0,y0,sigx=1,sigy=1,height=1):
z=height*np.exp(-((cx-x0)**2/2/sigx**2)-((cy-y0)**2/2/sigy**2))
return z
cz=cz+gauss2D(-2,3)
cz=cz+gauss2D(2,4,2,3)
ax.plot_surface(cx,cy,cz,linewidth=0,cstride=2, rstride=2,cmap=plt.cm.jet)
plt.show()
plt.close('all'); fig=plt.figure()
ax=Axes3D(fig)
ax.contour(cx,cy,cz,cstride=2,rstride=2, cmap=plt.cm.jet)
plt.show()
%matplotlib inline
plt.close('all')
plt.style.available
plt.style.use('ggplot')
x = np.linspace(0,10,100)
y = np.sin(x)
plt.plot(x,y)
%matplotlib inline
N = 1e4
%matplotlib inline
%matplotlib
plt.ioff()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Above commands enable pylab environment => direct access to numpy, scipy and matplotlib. The option 'inline' results in plot outputs to be directly embedded in the Notebook. If this causes problems, remove the option 'inline'.
Step2: A simple plotting example. Maybe some variations
Step3: Alternatively, Matplotlib understands the MATLAB syntax, e.g., for the above command (does not work with 'inline' enabled)
Step4: Available settings for a plot can be found this way (does not work with 'inline' enabled)
Step5: Some more commands for plot formatting
Step6: Let's switch to the Qt backend for plotting
Step7: Figures can be saved in a number of output formats such as Postscript
Step8: Alternatively, you can also save figures in PNG bitmap and PDF vector formats. (Note that some export formats may not be supported on your platform.)
Step9: You can close an open figure using its handle
Step10: Or in MATLAB style, close all open figures
Step11: Let's do a figure with several subpanels
Step12: Especially for figures with multiple subpanels it may be advisable to increase the figure size somewhat. Do this by using function arguments in the figure() call
Step13: By using Numpy arrays, Matplotlib can conveniently be used as a function ploting program
Step14: Certainly, you can do plots with logarithmic scale
Step15: Let's add grid lines
Step16: Analogously, you can use semilogy() and loglog() for plots with log y-axis and loglog plots.
Step17: Anybody bar charts?
Step18: Let's pimp the plot a little
Step19: For horizontal bar charts, you would use barh().
Step20: As you will have seen, we retrieved handles to the individual pie slices. Let's do something with them
Step21: Matplotlib also offers quiver() plots which are illustrated in the following example (taken from http
Step22: Polar plots are also nicely illustrated on the very same homepage
Step23: Contour plots are well suited for visualization of three-dimensional data sets
Step24: A similar yet distint representation is provided by pcolormesh().
Step25: Compare the two figures, spot the similarities and differences.
Step26: Let's switch to the Qt backend for 3D plotting
Step27: Then, we can play around.
Step28: Try moving and rotating the (so far empty) plot in three dimensions. Once you have calmed down, let's populate the plot with some data
Step29: In addition to the above 3D scatter plot, other plot types are supported, such as 3D surface plots
Step30: Try some other colormaps such as cm.bone, cm.spring or cm.cool (once more, these are the MATLAB color schemes).
Step31: Let's display the same data in contour representation
Step32: Style sheets
Step33: There are several predefinded style sheets for matplotlib. You can show all availible styles by typing
Step34: to pick one of them, type e.g.
Step35: and your plots will look similar to those created with ggplot2 in R
Step36: Exercises for Numpy and Matplotlib
Step37: Exercise 2
Step38: Check whether they are correlated using a scatter plot
Step39: Plot their 2D density as a contour plot (hint
|
5,576
|
<ASSISTANT_TASK:>
Python Code:
from scipy.stats import norm

sexual_mean, sexual_standard_deviation = 1.1, 0.15
asexual_mean, asexual_standard_deviation = 1.2, 0.3
pod_sexual = norm.cdf(0, loc=sexual_mean, scale=sexual_standard_deviation)
pod_asexual = norm.cdf(0, loc=asexual_mean, scale=asexual_standard_deviation)
print("The probability that the sexual lizards die off in a given generation is {}".format(pod_sexual))
print("The probability that the asexual lizards die off in a given generation is {}".format(pod_asexual))
expected_generations_sexual = int(1/pod_sexual)
expected_generations_asexual = int(1/pod_asexual)
print("The sexual lizards are expected to survive {} generations".format(expected_generations_sexual))
print("The asexual lizards are expected to survive {} generations".format(expected_generations_asexual))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: One way to figure out the expected extinction time is to figure out the expected number of generations until the mean population growth rate dips below $0$. We'll call it "Probability of Death", or pod, for short.
Step2: Now that we know the probability of both populations dying out in a given generation, we can easily compute their expected number of generations, which is the mean of a geometric random variable whose probability of "failure" is the variable pod.
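The 1/p identity for the mean of a geometric random variable can be verified with a quick simulation (made-up probability, not the lizard numbers):

```python
import numpy as np

p = 0.05                                  # probability of dying out in a given generation
rng = np.random.default_rng(7)
# generations survived until the first 'death' event, for many simulated populations
lifetimes = rng.geometric(p, size=200000)
empirical_mean = lifetimes.mean()
expected = 1.0 / p                        # = 20 generations
```

With enough simulated populations, the empirical mean lands very close to 1/p, which is exactly the expected-generations formula used for both lizard populations above.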
|
5,577
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
# importamos bibliotecas para plotear
import matplotlib
import matplotlib.pyplot as plt
# para desplegar los plots en el notebook
%matplotlib inline
# para cómputo simbólico
from sympy import *
init_printing()
x, y = symbols('x y')
f = (1-x-y)*x
f
g = (4-7*x-3*y)*y
g
solve(f, x)
solve(g, y)
Y = solve(g, y)[1]
solve(f.subs(y, Y),x)
solve(g.subs(x, -y + 1), y)
J = symbols("J")
J = Matrix([[diff(f, x), diff(f, y)],
[diff(g, x), diff(g, y)]])
J
J = J.subs({x: 1/4, y:3/4})
J
J.det(), J.trace()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Equilibrios
Step2: Jacobiana
Step3: Evaluada en un punto de equilibrio
|
5,578
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
from IPython.display import HTML
HTML('''
<figure>
<img src="Stripe82_gal_projection.png" alt="Swift observations over Stripe82">
<figcaption>Figure 1: Swift observations over Stripe82</figcaption>
</figure>
''')
import pandas
cat = pandas.read_csv('Swift_Master_Stripe82_groups.ascii',
delim_whitespace=True)
print "Table 1: Sample of the catalog"
pandas.concat([cat.head(5),cat.tail(5)])
print "Table 2: Summary of the catalog columns"
cat.describe(include='all')
cat['start_time'] = pandas.to_datetime(cat['start_time'])
cat_grouped_by_target = cat[['Target_Name','start_time']].groupby(['Target_Name'])
cat_descr = cat_grouped_by_target.describe().unstack()
cat_time = cat_descr.sort_values([('start_time','count')],ascending=False)
del cat_descr
title = "Figure 2: Number of sources(Y axis) observed number of times(X axis)"
%matplotlib inline
from matplotlib import pyplot as plt
width = 16
height = 4
plt.figure(figsize=(width, height))
yticks = [2,10,50,100,200,300]
xticks = range(51)
ax = cat_time[('start_time','count')].plot.hist(bins=xticks,xlim=(0,50),title=title,grid=True,xticks=xticks,yticks=yticks,align='left')
ax.set_xlabel('Number of observations (per source)')
print "Table 3: Number counts and dates (first/last) of the observations (per object)"
cat_time
print "Table 4: Observation carried out for source 'V1647ORI' sorted in time"
g = cat_grouped_by_target.get_group('V1647ORI')
g_sorted = g.sort_values('start_time')
g_sorted
def find_clustered_observations(sorted_target_observations,time_range=10):
# Let's select a 'time_range' days window to select valid observations
window_size = time_range
g_sorted = sorted_target_observations
# an ordered dictionary works as a 'set' structure
from collections import OrderedDict
selected_allObs = OrderedDict()
# define en identificator for each cluster of observations, to ease future filtering
group_obs = 1
_last_time = None
_last_id = None
for _row in g_sorted.iterrows():
ind,row = _row
if _last_time is None:
_last_time = row.start_time
_last_id = ind
continue
_delta = row.start_time - _last_time
if _delta.days <= window_size:
selected_allObs[_last_id] = group_obs
selected_allObs[ind] = group_obs
else:
if len(selected_allObs):
group_obs = selected_allObs.values()[-1] + 1
_last_time = row.start_time
_last_id = ind
return selected_allObs
from collections import OrderedDict
obs_indx = OrderedDict()
for name,group in cat_grouped_by_target:
g_sorted = group.sort_values('start_time')
filtered_indxs = find_clustered_observations(g_sorted,time_range=20)
obs_indx.update(filtered_indxs)
import pandas
obsChunks_forFilteringCat = pandas.DataFrame(list(obs_indx.values()),columns=['obs_chunk'],index=list(obs_indx.keys()))
# obsChunks_forFilteringCat.sort_index()
print("Table 5: original catalog with column 'obs_chunk' to flag which rows succeed the filtering (non-NA values).")
cat_with_obsChunksFlag = cat.join(obsChunks_forFilteringCat)
# cols = list(cat_with_obsChunksFlag.columns)
# cols.insert(2,cols.pop(-1))
# cat_with_obsChunksFlag = cat_with_obsChunksFlag.ix[:,cols]
cat_with_obsChunksFlag
cat_filtered = cat_with_obsChunksFlag.dropna(subset=['obs_chunk'])
cat_filtered
cat_filtered = cat_filtered.loc[cat_filtered.xrt_exposure >= 200]
cat_filtered
cat_filtered.describe(include='all')
cat_filtered.to_csv('Swift_Master_Stripe82_groups_filtered.csv')
cat_filtered_minimum = cat_filtered[['obsid','RA_(J2000)','Dec_(J2000)']]
cat_filtered_minimum.index.name = 'index'
cat_filtered_minimum.to_csv('Swift_Master_Stripe82_filtered_positions.csv')
%qtconsole
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The base catalog
Step2: Target_Name is the name of the (central) object of each observation; from it we see that there are 681 unique sources across the 3035 observations. GroupSize is the number of overlapping observations, averaging ~54. Let's see how sparse the observations are in time and how they are distributed for each source.
Step3: Number of observations
Step4: Filtering the data
Step5: If we define each group of observations of interest -- let me call them "chunks" -- as observations separated by no more than "X" days (for example, X=20 days), we see from this example that more than one "chunk" of observations can exist per object. Here, for instance, rows 347,344,343,346 and 338,339,336,335,341 form the clusters of observations of interest, "chunk-1" and "chunk-2", respectively.
Step6: Filtered catalog
|
5,579
|
<ASSISTANT_TASK:>
Python Code:
data_in_shape = (6, 6, 3)
L = AveragePooling2D(pool_size=(2, 2), strides=None, padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(270)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (6, 6, 3)
L = AveragePooling2D(pool_size=(2, 2), strides=(1, 1), padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(271)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (6, 7, 3)
L = AveragePooling2D(pool_size=(2, 2), strides=(2, 1), padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(272)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.2'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (6, 6, 3)
L = AveragePooling2D(pool_size=(3, 3), strides=None, padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(273)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.3'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (6, 6, 3)
L = AveragePooling2D(pool_size=(3, 3), strides=(3, 3), padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(274)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.4'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (6, 6, 3)
L = AveragePooling2D(pool_size=(2, 2), strides=None, padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(275)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.5'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (6, 6, 3)
L = AveragePooling2D(pool_size=(2, 2), strides=(1, 1), padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(276)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.6'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (6, 7, 3)
L = AveragePooling2D(pool_size=(2, 2), strides=(2, 1), padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(277)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.7'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (6, 6, 3)
L = AveragePooling2D(pool_size=(3, 3), strides=None, padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(278)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.8'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (6, 6, 3)
L = AveragePooling2D(pool_size=(3, 3), strides=(3, 3), padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(279)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.9'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (5, 6, 3)
L = AveragePooling2D(pool_size=(3, 3), strides=(2, 2), padding='valid', data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(280)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.10'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (5, 6, 3)
L = AveragePooling2D(pool_size=(3, 3), strides=(1, 1), padding='same', data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(281)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.11'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (4, 6, 4)
L = AveragePooling2D(pool_size=(2, 2), strides=None, padding='valid', data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(282)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.AveragePooling2D.12'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
print(json.dumps(DATA))
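The expected outputs above all come from Keras itself; as a rough sanity check on what 'valid' average pooling with a given pool_size/strides computes, here is a minimal pure-Python, single-channel sketch (not the Keras implementation):

```python
def avg_pool2d_valid(x, pool=(2, 2), strides=(2, 2)):
    # 'valid' average pooling over a 2-D list-of-lists, one channel:
    # slide a pool-sized window by the stride and average its entries.
    ph, pw = pool
    sh, sw = strides
    h, w = len(x), len(x[0])
    out = []
    for i in range(0, h - ph + 1, sh):
        row = []
        for j in range(0, w - pw + 1, sw):
            window = [x[i + di][j + dj] for di in range(ph) for dj in range(pw)]
            row.append(sum(window) / (ph * pw))
        out.append(row)
    return out

x = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12],
     [13, 14, 15, 16]]
print(avg_pool2d_valid(x))                   # [[3.5, 5.5], [11.5, 13.5]]
print(avg_pool2d_valid(x, strides=(1, 1)))   # overlapping windows, 3x3 output
```

With 'valid' padding the output size is floor((in - pool) / stride) + 1 per axis, which matches the shapes printed by the cells above.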
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: [pooling.AveragePooling2D.1] input 6x6x3, pool_size=(2, 2), strides=(1, 1), padding='valid', data_format='channels_last'
Step2: [pooling.AveragePooling2D.2] input 6x7x3, pool_size=(2, 2), strides=(2, 1), padding='valid', data_format='channels_last'
Step3: [pooling.AveragePooling2D.3] input 6x6x3, pool_size=(3, 3), strides=None, padding='valid', data_format='channels_last'
Step4: [pooling.AveragePooling2D.4] input 6x6x3, pool_size=(3, 3), strides=(3, 3), padding='valid', data_format='channels_last'
Step5: [pooling.AveragePooling2D.5] input 6x6x3, pool_size=(2, 2), strides=None, padding='same', data_format='channels_last'
Step6: [pooling.AveragePooling2D.6] input 6x6x3, pool_size=(2, 2), strides=(1, 1), padding='same', data_format='channels_last'
Step7: [pooling.AveragePooling2D.7] input 6x7x3, pool_size=(2, 2), strides=(2, 1), padding='same', data_format='channels_last'
Step8: [pooling.AveragePooling2D.8] input 6x6x3, pool_size=(3, 3), strides=None, padding='same', data_format='channels_last'
Step9: [pooling.AveragePooling2D.9] input 6x6x3, pool_size=(3, 3), strides=(3, 3), padding='same', data_format='channels_last'
Step10: [pooling.AveragePooling2D.10] input 5x6x3, pool_size=(3, 3), strides=(2, 2), padding='valid', data_format='channels_first'
Step11: [pooling.AveragePooling2D.11] input 5x6x3, pool_size=(3, 3), strides=(1, 1), padding='same', data_format='channels_first'
Step12: [pooling.AveragePooling2D.12] input 4x6x4, pool_size=(2, 2), strides=None, padding='valid', data_format='channels_first'
Step13: export for Keras.js tests
|
5,580
|
<ASSISTANT_TASK:>
Python Code:
!pip install lxml
!pip install BeautifulSoup4
import urllib.request
from lxml import html
from bs4 import BeautifulSoup
# Scrape all HTML from webpage.
def scrapewebpage(url):
# Open URL and get HTML.
web = urllib.request.urlopen(url)
# Make sure there wasn't any errors opening the URL.
if (web.getcode() == 200):
html = web.read()
return(html)
else:
print("Error %s reading %s" % (str(web.getcode()), url))
# Helper function that scrape the webpage and turn it into soup.
def makesoup(url):
html = scrapewebpage(url)
return(BeautifulSoup(html, "lxml"))
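The helpers above rely on the third-party lxml and BeautifulSoup packages. As a rough illustration of the same extract-text-by-tag idea using only the standard library (the HTML snippet below is made up), one could write:

```python
from html.parser import HTMLParser

class TitleGrabber(HTMLParser):
    # Collect the text inside the first <h1> tag, similar to soup.find('h1').get_text().
    def __init__(self):
        super().__init__()
        self.in_h1 = False
        self.title = None

    def handle_starttag(self, tag, attrs):
        if tag == 'h1' and self.title is None:
            self.in_h1 = True

    def handle_data(self, data):
        if self.in_h1:
            self.title = data.strip()
            self.in_h1 = False

page = "<html><body><h1> Interstellar </h1><p>A movie.</p></body></html>"
grabber = TitleGrabber()
grabber.feed(page)
print(grabber.title)  # Interstellar
```

BeautifulSoup is far more convenient for real pages, but the stdlib version shows there is no magic involved: parse the tags, track state, pull out the text.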
# Scrape Interstellar (2014) by using our own function "makesoup" we defined above.
movie_soup = makesoup('http://www.imdb.com/title/tt0816692/')
# Get movie title.
title = movie_soup.find(itemprop="name").get_text()
title = title.strip() # Remove whitespace before and after text
# Get movie year.
year = movie_soup.find(id="titleYear").get_text()
year = year[1:5] # Remove parentheses, make (2014) into 2014.
# Get movie duration.
duration = movie_soup.find(itemprop="duration").get_text()
duration = duration.strip() # Remove whitespace before and after text
# Get director.
director = movie_soup.find(itemprop="director").get_text()
director = director.strip() # Remove whitespace before and after text
# Get movie rating.
rating = movie_soup.find(itemprop="ratingValue").get_text()
# Get cast list.
actors = []
for castlist in movie_soup.find_all("table", "cast_list"):
for actor in castlist.find_all(itemprop="actor"):
actors.append(actor.get_text().strip())
# Present the results.
print("Movie: " + title)
print("Year: " + year)
print("Director: " + director)
print("Duration: " + duration)
print("Rating: " + rating)
# Present list of actors.
print()
print("Main actors:")
for actor in actors:
print("- " + actor)
wpost_soup = makesoup("http://www.washingtonpost.com/")
# Get headlines.
headlines = wpost_soup.find_all("div", "headline")
print("Found " + str(len(headlines)) + " headlines")
# Print headlines.
for headline in headlines:
print(headline.get_text().strip())
# Print headlines and links.
for links in headlines:
for link in links.find_all("a"):
print(link.get_text())
print(link.get("href"))
print()
# Get all the links on the page.
for link in wpost_soup.find_all("a"):
href = link.get("href")
if href is not None:
if href[:4] == "http":
print(href)
wiki_soup = makesoup("https://en.wikipedia.org/wiki/Parliamentary_Assembly_of_the_Council_of_Europe")
# Lets find the table "Composition by parliamentary delegation".
# The table doesn't have a unique name, which makes it difficult to scrape.
# However, it's the first table. So we can use find, which returns the first match.
table = wiki_soup.find("table")
# Go through all rows in the table.
for row in table.find_all("tr"):
# Go through all cells in each row.
cell = row.find_all("td")
if len(cell) == 3:
# Extract the text from the three cells.
country = cell[0].get_text()
seats = cell[1].get_text()
accessiondate = cell[2].get_text()
print(country + ": " + seats + " seats (" + accessiondate + ")")
# Modify this to your favorite movie.
soup = makesoup('http://www.imdb.com/title/tt0816692/')
# Get rating count instead of name.
title = soup.find(itemprop="name").get_text()
title = title.strip() # Remove whitespace before and after text
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Define functions for scraping
Step2: 3. Scrape Internet Movie Database
Step3: 4. Scrape Washington Post
Step4: 5. Scrape Wikipedia
Step5: Exercise
|
5,581
|
<ASSISTANT_TASK:>
Python Code:
from neon.backends import gen_backend
be = gen_backend(backend='gpu', batch_size=1)
print(be)
import pickle as pkl
sentence_length = 128
vocab_size = 20000
# we have some special codes
pad_char = 0 # padding character
start = 1 # marker for start of review
oov = 2 # when the word is out of the vocab
index_from = 3 # index of first word in vocab
# load the vocab
vocab, rev_vocab = pkl.load(open('data/imdb.vocab', 'rb'))
from neon.models import Model
model = Model('imdb_lstm.pkl')
# we initialize the model, passing in the size of the input data.
model.initialize(dataset=(sentence_length, 1))
import numpy as np
input_device = be.zeros((sentence_length, 1), dtype=np.int32) # `be` is the backend that we created earlier in the code.
input_numpy = np.zeros((sentence_length, 1), dtype=np.int32)
line = """Beautiful attracts excellent idea, but ruined with a bad selection of the actors. The main character is
a loser and his woman friend and his friend upset viewers. Apart from the first episode all the other become
more boring and boring. First, it considers it illogical behavior. No one normal would not behave the way the
main character behaves. It all represents a typical Halmark way to endear viewers to the reduced amount of
intelligence. Does such a scenario, or the casting director and destroy this question is on Halmark
producers. Cat is the main character is wonderful. The main character behaves according to
his friend selfish."""
from neon.data.text_preprocessing import clean_string
tokens = clean_string(line).strip().split()
sent = [len(vocab) + 1 if t not in vocab else vocab[t] for t in tokens]
sent = [start] + [w + index_from for w in sent]
sent = [oov if w >= vocab_size else w for w in sent]
print(sent)
trunc = sent[-sentence_length:] # take the last sentence_length words
input_numpy[:] = 0 # fill with zeros
input_numpy[-len(trunc):, 0] = trunc # place the input into the numpy array
print(input_numpy.T)
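The tokenize / index / left-pad steps above can be sketched end to end without neon or numpy; this toy version (with a made-up four-word vocab) mirrors the start, oov, index_from and pad_char conventions used here:

```python
# Toy re-implementation of the preprocessing: map words to vocab ids shifted
# by INDEX_FROM, mark unknown words as OOV, prepend the START marker, then
# truncate to the last max_len tokens and left-pad with PAD. Vocab is made up.
PAD, START, OOV, INDEX_FROM = 0, 1, 2, 3

def encode(text, vocab, max_len=8, vocab_size=20000):
    ids = [START]
    for word in text.lower().split():
        idx = vocab.get(word)
        ids.append(OOV if idx is None or idx + INDEX_FROM >= vocab_size
                   else idx + INDEX_FROM)
    ids = ids[-max_len:]                       # keep only the last max_len tokens
    return [PAD] * (max_len - len(ids)) + ids  # left-pad with zeros

toy_vocab = {'the': 0, 'movie': 1, 'was': 2, 'great': 3}
print(encode('the movie was great', toy_vocab))    # [0, 0, 0, 1, 3, 4, 5, 6]
print(encode('utterly unknown words', toy_vocab))  # [0, 0, 0, 0, 1, 2, 2, 2]
```

Left-padding (rather than right-padding) means the most recent words sit at the end of the sequence, which is what the LSTM sees last.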
input_device.set(input_numpy) # copy the numpy array to device
y_pred = model.fprop(input_device, inference=True) # run the forward pass through the model
print("Predicted sentiment: {}".format(y_pred.get()[1])) # print the estimated sentiment
def sentiment(sent, model):
input_device = be.zeros((sentence_length, 1), dtype=np.int32)
input_numpy = np.zeros((sentence_length, 1), dtype=np.int32)
tokens = clean_string(sent).strip().split()
sent = [len(vocab) + 1 if t not in vocab else vocab[t] for t in tokens]
sent = [start] + [w + index_from for w in sent]
sent = [oov if w >= vocab_size else w for w in sent]
trunc = sent[-sentence_length:] # take the last sentence_length words
input_numpy[:] = 0 # fill with zeros
input_numpy[-len(trunc):, 0] = trunc # place the input into the numpy array
input_device.set(input_numpy) # copy the numpy array to device
y_pred = model.fprop(input_device, inference=True) # run the forward pass through the model
return y_pred.get()[1]
line = """The characters voices were very good. I was only really bothered by Kanga. The music, however, was twice
as loud in parts than the dialog, and incongruous to the film. As for the story, it was a bit preachy and
militant in tone. Overall, I was disappointed, but I would go again just to see the same excitement on my
child's face. I liked Lumpy's laugh..."""
result = sentiment(line, model)
print("Sentiment: {}".format(result))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We also define a few parameters, and then load the vocabulary. The vocab is a 1-to-1 mapping between words and integer indices (loaded together with its reverse mapping).
Step2: Load Model
Step3: Inference
Step5: Now we write our new movie review. We've included a sample here, but feel free to write your own and see how well the model responds.
Step6: Before we send the data to the model, we need to convert the string to a sequence of numbers, with each number representing a word, using the vocab that we loaded earlier in the code. If a word is not in our vocab, we use a special out-of-vocab number.
Step7: The text data is now converted to a list of integers
Step8: We truncate the input to sentence_length=128 words. If the text is shorter than 128 words, we pad with zeros. The text is then loaded into the numpy array named input_numpy.
Step9: Experimentation
Step11: Now you can easily enter your own review and get the result. Here we included a more neutral review below
|
5,582
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import GPy
import pods
from IPython.display import display
data = pods.datasets.olympic_sprints()
X = data['X']
y = data['Y']
print data['info'], data['details']
print data['citation']
print data['output_info']
print('First column of X contains the olympic years.')
print(np.unique(data['X'][:, 0]))
print('Second column of X contains the event index.')
print(np.unique(data['X'][:, 1]))
markers = ['bo', 'ro', 'bx', 'rx', 'bs', 'rs']
for i in range(6):
# extract the event
x_event = X[np.nonzero(X[:, 1]==i), 0]
y_event = y[np.nonzero(X[:, 1]==i), 0]
plt.plot(x_event, y_event, markers[i])
plt.title('Olympic Sprint Times')
plt.xlabel('year')
plt.ylabel('time/s')
GPy.kern.Coregionalize?
kern = GPy.kern.RBF(1, lengthscale=80)**GPy.kern.Coregionalize(1,output_dim=6, rank=5)
display(kern)
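The Coregionalize part of this kernel parametrizes B = W W^T + diag(kappa). A tiny pure-Python sketch (random W and a made-up constant kappa, not GPy's internals) of how such a matrix is assembled:

```python
import random

random.seed(0)
output_dim, rank = 6, 2
W = [[random.gauss(0, 1) for _ in range(rank)] for _ in range(output_dim)]
kappa = [0.5] * output_dim  # made-up diagonal term

# B = W W^T + diag(kappa): a low-rank-plus-diagonal coregionalization matrix
B = [[sum(W[i][k] * W[j][k] for k in range(rank)) + (kappa[i] if i == j else 0.0)
     for j in range(output_dim)]
     for i in range(output_dim)]

print([round(B[i][i], 3) for i in range(output_dim)])  # strictly positive diagonal
```

The rank of W controls how much structure is shared between outputs, while kappa lets each output keep an independent variance component.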
model = GPy.models.GPRegression(X, y, kern)
model.optimize()
fig, ax = plt.subplots()
for i in range(6):
model.plot(fignum=1,fixed_inputs=[(1, i)],ax=ax,legend=i==0)
plt.xlabel('years')
plt.ylabel('time/s')
kern1 = GPy.kern.RBF(1, lengthscale=80)**GPy.kern.Coregionalize(1,output_dim=6, rank=1)
kern2 = GPy.kern.Bias(1)**GPy.kern.Coregionalize(1,output_dim=6, rank=1)
kern = kern1 + kern2
model = GPy.models.GPRegression(X, y, kern)
model.optimize()
fig, ax = plt.subplots()
for i in range(6):
model.plot(fignum=1,fixed_inputs=[(1, i)],ax=ax,legend=i==0)
plt.xlabel('years')
plt.ylabel('time/s')
model.plot?
kern1 = GPy.kern.RBF(1, lengthscale=80) + GPy.kern.Bias(1)
kern1.name = 'rbf_plus_bias'
kern2 = GPy.kern.Coregionalize(1,output_dim=6, rank=5)
kern = kern1**kern2
kern.name = 'product'
display(kern)
model = GPy.models.GPRegression(X, y, kern)
model.optimize()
fig, ax = plt.subplots()
for i in range(6):
model.plot(fignum=1,fixed_inputs=[(1, i)],ax=ax,legend=(i==0))
plt.xlabel('years (after first ')
plt.ylabel('time/s')
# Question 1 answer here
# set up the inputs to index sex and event separately.
X2 = np.zeros((X.shape[0], 3))
X2[:, 0] = X[:, 0]
X2[:, 1] = np.remainder(X[:, 1],2) == 1 # Gender
X2[:, 2] = np.floor(X[:, 1]/2) # Event
X2mean = X2[:,0].mean()
# Zero mean X for the linear trend to be feasible
X2m = X2.copy()
X2m[:,0] -= X2mean
kern = ((GPy.kern.RBF(1, variance=1, lengthscale=70)+
GPy.kern.Linear(1, 1, active_dims=[0])+
GPy.kern.White(1)+
GPy.kern.Bias(1, 1))
*GPy.kern.Coregionalize(1, output_dim=2, rank=1, active_dims=1, name='gender')
*GPy.kern.Coregionalize(1, output_dim=3, rank=1, active_dims=2, name='event')
)
#display(kern)
#print "First coregion:", [(k.name, k.active_dims) for k in kern.parts[0].parts]
#print "Second ", [(k.name, k.active_dims) for k in kern.parts[0].parts[1].parts]
model = GPy.models.GPRegression(X2m, y, kern, normalizer=True)
model.optimize(messages=1,max_iters=5e5)
print(X2mean)
fig, ax = plt.subplots()
for i in range(2):
for j in range(3):
model.plot(fignum=1,fixed_inputs=[(1, i), (2, j)],
ax=ax, legend=(i==0)*(j==0)*2,
plot_data=(i==0)*(j==0))
ax.set_xlabel('Year (+ {})'.format(X2mean.round(2)))
ax.set_ylabel('Time [s]')
model.plot?
plotlim = (-100, 100)
fig, ax = plt.subplots()
for sex in range(2):
c = 'magenta' if sex == 1 else 'blue'
for event in range(3):
conflabel = label = None
if event == 0:
label = ['Men', 'Women'][sex]
conflabel = label+" 95%"
model.plot_mean(fixed_inputs=[(1, sex), (2, event)],
color=c, ax=ax, plot_limits=plotlim, label=label)
model.plot_density(fixed_inputs=[(1, sex), (2, event)],
color=c, plot_limits=plotlim,
ax=ax, label=conflabel)
model.plot_data(visible_dims=[0], ax=ax)
ax.set_xlabel('Year (+ {})'.format(X2mean.round(2)))
ax.set_ylabel('Time [s]')
_ = ax.set_xlim(plotlim)
plotlim = (-100, 100)
fig, ax = plt.subplots()
for sex in range(2):
c = 'magenta' if sex == 1 else 'blue'
for event in range(3):
conflabel = label = None
if event == 0:
label = ['Men', 'Women'][sex]
conflabel = label+" 95%"
model.plot_mean(fixed_inputs=[(1, sex), (2, event)],
color=c, ax=ax, plot_limits=plotlim, label=label)
model.plot_confidence(fixed_inputs=[(1, sex), (2, event)],
color=c, plot_limits=plotlim,
ax=ax, label=conflabel)
model.plot_data(visible_dims=[0], ax=ax)
ax.set_xlabel('Year (+ {})'.format(X2mean.round(2)))
ax.set_ylabel('Time [s]')
_ = ax.set_xlim(plotlim)
display(model)
X = np.sort(np.random.rand(50,1)*12,0)
k = GPy.kern.RBF(1)
K = k.K(X)
K+= np.eye(50)*0.01 # add some independence (noise) to K
y = np.random.multivariate_normal(np.zeros(50), K).reshape(50,1)
model = GPy.models.GPRegression(X,y)
model.optimize()
fig = plt.figure()
ax = fig.add_subplot(111)
model.plot_f(ax=ax)
model._raw_predict?
mu, var = model._raw_predict(X) # this fetches the posterior of f
plt.vlines(X[:,0], mu[:,0]-2.*np.sqrt(var[:,0]), mu[:,0]+2.*np.sqrt(var[:,0]),color='r',lw=2)
# Exercise 2 answer here
Z = np.random.rand(3,1)*12
model = GPy.models.SparseGPRegression(X,y,Z=Z)
display(model)
mu, var = model._raw_predict(Z)
plt.vlines(Z[:,0], mu[:,0]-2.*np.sqrt(var[:,0]), mu[:,0]+2.*np.sqrt(var[:,0]),color='r')
# Exercise 3 answer
# Exercise 5 answer
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Running Example
Step2: When using data sets it's good practice to cite the originators of the data, you can get information about the source of the data from data['citation']
Step3: The data consists of all the male and female sprinting data for 100m, 200m and 400m since 1896 (six outputs in total). The ouput information can be found from
Step4: In GPy we deal with multiple output data in a particular way. We specify the output we are interested in for modelling as an additional input. So whilst for this data, normally, the only input would be the year of the event. We additionally have an input giving the index of the output we are modelling. This can be seen from examining data['X'].
Step5: Now let's plot the data
Step6: In the plot above red is women's events, blue is men's. Squares are 400 m, crosses 200m and circles 100m. Not all events were run in all years, for example the women's 400 m only started in 1964.
Step7: The coregionalize matrix, $\mathbf{B}$, is itself constructed from two other matrices, $\mathbf{B} = \mathbf{W}\mathbf{W}^\top + \text{diag}(\boldsymbol{\kappa})$. This allows us to specify a low rank form for the coregionalization matrix. However, for our first example we want to specify that the matrix $\mathbf{B}$ is not constrained to have a low rank form.
Step8: Note here that the rank we specify is that of the $\mathbf{W}\mathbf{W}^\top$ part. When this part is combined with the diagonal matrix from $\mathbf{\kappa}$ the matrix $\mathbf{B}$ is totally general. This covariance function can now be used in a standard Gaussian process regression model. Let's build the model and optimize it.
Step9: We can plot the results using the ability to 'fix inputs' in the model.plot() function. We can specify that column 1 should be fixed to event number 2 by passing fixed_inputs = [(1, 2)] to the model. To plot the results for all events on the same figure we also specify fignum=1 in the loop as below.
Step10: There is a lot we can do with this model. First of all, each of the races is a different length, so the series have a different mean. We can include another coregionalization term to deal with the mean. Below we do this and reduce the rank of the coregionalization matrix to 1.
Step11: This is a simple form of the linear model of coregionalization. Note how confident the model is about what the women's 400 m performance would have been. You might feel that the model is being over confident in this region. Perhaps we are forcing too much sharing of information between the sprints. We could return to the intrinsic coregionalization model and force the two base covariance functions to share the same coregionalization matrix.
Step12: Exercise 1
Step13: Predictions in the multioutput case can be strongly affected by our covariance function design. This reflects the themes we saw on the first day, where the importance of covariance function choice was emphasized at design time.
Step14: Low Rank Approximations
Step15: Build a straightforward GP model of our simulation. We’ll also plot the posterior of $f$.
Step16: Exercise 2
Step17: Building the Low Rank Approximation
Step18: In GPy, the sparse inputs $\mathbf{Z}$ are abbreviated 'iip' , for inducing input. Plot the posterior
Step19: Exercise 3
Step20: Exercise 4
|
5,583
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
from collections import Counter
total_counts = Counter()
for idx, row in reviews.iterrows():
for word in row[0].split(' '):
if len(word) >= 3:
total_counts[word] += 1
# total_counts.update(row[0].split(' '))
print("Total words in data set: ", len(total_counts))
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
print(vocab[-1], ': ', total_counts[vocab[-1]])
word2idx = {word: i for i, word in enumerate(vocab)}
def text_to_vector(text):
word_vector = np.zeros(len(vocab), dtype=np.int_)
for word in text.split(' '):
idx = word2idx.get(word, None)
if idx is not None:
word_vector[idx] += 1
return np.array(word_vector)
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
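As a self-contained check of the counting behaviour, here is the same bag-of-words idea on a tiny hypothetical three-word vocabulary (not the notebook's 10000-word vocab):

```python
import numpy as np

# Hypothetical mini-vocabulary for illustration.
mini_vocab = ['movie', 'good', 'bad']
mini_word2idx = {w: i for i, w in enumerate(mini_vocab)}

def mini_text_to_vector(text):
    # Count how often each vocabulary word occurs; out-of-vocab words are ignored.
    vec = np.zeros(len(mini_vocab), dtype=np.int_)
    for word in text.split(' '):
        idx = mini_word2idx.get(word)
        if idx is not None:
            vec[idx] += 1
    return vec

print(mini_text_to_vector('good movie good'))  # -> [1 2 0]
```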
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
net = tflearn.input_data([None, len(vocab)])
net = tflearn.fully_connected(net, 200, activation='ReLU')
net = tflearn.fully_connected(net, 25, activation='ReLU')
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
model = build_model()
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=64, n_epoch=100)
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
text = "This movie is so bad. It was awful and the worst"
positive_prob = model.predict([text_to_vector(text.lower())])[0][1]
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preparing the data
Step2: Counting word frequency
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Step6: Text to vector function
Step7: If you do this right, the following code should return
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Step10: Building the network
Step11: Intializing the model
Step12: Training the network
Step13: Testing
Step14: Try out your own text!
|
5,584
|
<ASSISTANT_TASK:>
Python Code:
import copy
import numpy as np
from astropy.io import fits
import matplotlib.pyplot as plt
%matplotlib inline
#%matplotlib auto
# Observation
obs = fits.getdata("/home/jneal/.handy_spectra/HD211847-1-mixavg-tellcorr_1.fits")
plt.plot(obs["wavelength"], obs["flux"])
plt.hlines(1, 2111, 2124, linestyle="--")
plt.title("CRIRES spectra")
plt.xlabel("Wavelength (nm)")
plt.show()
# Models
wav_model = fits.getdata("/home/jneal/Phd/data/PHOENIX-ALL/PHOENIX/WAVE_PHOENIX-ACES-AGSS-COND-2011.fits")
wav_model /= 10 # nm
host = "/home/jneal/Phd/data/PHOENIX-ALL/PHOENIX/Z-0.0/lte05700-4.50-0.0.PHOENIX-ACES-AGSS-COND-2011-HiRes.fits"
old_companion = "/home/jneal/Phd/data/PHOENIX-ALL/PHOENIX/Z-0.0/lte02600-4.50-0.0.PHOENIX-ACES-AGSS-COND-2011-HiRes.fits"
companion = "/home/jneal/Phd/data/PHOENIX-ALL/PHOENIX/Z-0.0/lte02300-4.50-0.0.PHOENIX-ACES-AGSS-COND-2011-HiRes.fits"
host_f = fits.getdata(host)
comp_f = fits.getdata(companion)
plt.plot(wav_model, host_f, label="Host")
plt.plot(wav_model, comp_f, label="Companion")
plt.title("Phoenix spectra")
plt.xlabel("Wavelength (nm)")
plt.legend()
plt.show()
mask = (2000 < wav_model) & (wav_model < 2200)
wav_model = wav_model[mask]
host_f = host_f[mask]
comp_f = comp_f[mask]
plt.plot(wav_model, host_f, label="Host")
plt.plot(wav_model, comp_f, label="Companion")
plt.title("Phoenix spectra")
plt.legend()
plt.xlabel("Wavelength (nm)")
plt.show()
def get_continuum_points(wave, flux, splits=50, top=20):
"""Get continuum points along a spectrum.

This splits a spectrum into "splits" number of bins and calculates
the median wavelength and flux of the upper "top" number of flux
values.
"""
# Shorten array until can be evenly split up.
remainder = len(flux) % splits
if remainder:
# Nonzero remainder needs this slicing
wave = wave[:-remainder]
flux = flux[:-remainder]
wave_shaped = wave.reshape((splits, -1))
flux_shaped = flux.reshape((splits, -1))
s = np.argsort(flux_shaped, axis=-1)[:, -top:]
s_flux = np.array([ar1[s1] for ar1, s1 in zip(flux_shaped, s)])
s_wave = np.array([ar1[s1] for ar1, s1 in zip(wave_shaped, s)])
wave_points = np.median(s_wave, axis=-1)
flux_points = np.median(s_flux, axis=-1)
assert len(flux_points) == splits
return wave_points, flux_points
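A synthetic-data sketch of the chunked median-of-top-points idea used above, assuming a flat continuum at 1.0 with a few absorption dips:

```python
import numpy as np

# Synthetic spectrum: continuum at 1.0 with an absorption line every 50 pixels.
wave = np.linspace(2000.0, 2200.0, 1000)
flux = np.ones_like(wave)
flux[::50] = 0.2

splits, top = 10, 20
wave_chunks = wave.reshape(splits, -1)
flux_chunks = flux.reshape(splits, -1)

# For each chunk, take the median of the `top` largest flux values.
order = np.argsort(flux_chunks, axis=-1)[:, -top:]
flux_points = np.median(np.take_along_axis(flux_chunks, order, axis=-1), axis=-1)
wave_points = np.median(np.take_along_axis(wave_chunks, order, axis=-1), axis=-1)

# The recovered continuum points sit at the continuum level, not in the lines.
assert np.allclose(flux_points, 1.0)
```

Taking the median of the top points makes the estimate robust to absorption lines, which only pull flux downward.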
def continuum(wave, flux, splits=50, method='scalar', plot=False, top=20):
"""Fit continuum of flux.

top: number of top points to take the median of for the continuum.
"""
org_wave = wave[:]
org_flux = flux[:]
# Get continuum value in chunked sections of spectrum.
wave_points, flux_points = get_continuum_points(wave, flux, splits=splits, top=top)
poly_num = {"scalar": 0, "linear": 1, "quadratic": 2, "cubic": 3}
if method == "exponential":
z = np.polyfit(wave_points, np.log(flux_points), deg=1, w=np.sqrt(flux_points))
p = np.poly1d(z)
norm_flux = np.exp(p(org_wave)) # Un-log the y values.
else:
z = np.polyfit(wave_points, flux_points, poly_num[method])
p = np.poly1d(z)
norm_flux = p(org_wave)
if plot:
plt.subplot(211)
plt.plot(wave, flux)
plt.plot(wave_points, flux_points, "x-", label="points")
plt.plot(org_wave, norm_flux, label='norm_flux')
plt.legend()
plt.subplot(212)
plt.plot(org_wave, org_flux / norm_flux)
plt.title("Normalization")
plt.xlabel("Wavelength (nm)")
plt.show()
return norm_flux
#host_cont = local_normalization(wav_model, host_f, splits=50, method="exponential", plot=True)
host_continuum = continuum(wav_model, host_f, splits=50, method="exponential", plot=True)
host_cont = host_f / host_continuum
#comp_cont = local_normalization(wav_model, comp_f, splits=50, method="exponential", plot=True)
comp_continuum = continuum(wav_model, comp_f, splits=50, method="exponential", plot=True)
comp_cont = comp_f / comp_continuum
plt.plot(wav_model, comp_cont, label="Companion")
plt.plot(wav_model, host_cont-0.3, label="Host")
plt.title("Continuum Normalized (with -0.3 offset)")
plt.xlabel("Wavelength (nm)")
plt.legend()
plt.show()
plt.plot(wav_model[20:200], comp_cont[20:200], label="Companion")
plt.plot(wav_model[20:200], host_cont[20:200], label="Host")
plt.title("Continuum Normalized - close up")
plt.xlabel("Wavelength (nm)")
ax = plt.gca()
ax.get_xaxis().get_major_formatter().set_useOffset(False)
plt.legend()
plt.show()
def mix(h, c, alpha):
return (h + c * alpha) / (1 + alpha)
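A quick sanity check of the mix() weighting: a companion line of depth d should appear in the combined spectrum with depth alpha*d/(1+alpha). The numbers below are illustrative:

```python
import numpy as np

host = np.ones(5)                            # featureless host continuum
comp = np.array([1.0, 1.0, 0.6, 1.0, 1.0])   # companion line of depth 0.4
alpha = 0.05                                 # 5% flux-ratio companion

mixed = (host + comp * alpha) / (1 + alpha)

# The companion line is diluted to alpha*d/(1+alpha), roughly a 1.9% dip.
expected_depth = alpha * 0.4 / (1 + alpha)
assert np.isclose(1.0 - mixed[2], expected_depth)
```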
mix1 = mix(host_cont, comp_cont, 0.01) # 1% of the companion spectra
mix2 = mix(host_cont, comp_cont, 0.05) # 5% of the companion spectra
# plt.plot(wav_model[20:100], comp_cont[20:100], label="comp")
plt.plot(wav_model[20:100], host_cont[20:100], label="host")
plt.plot(wav_model[20:100], mix1[20:100], label="mix 1%")
plt.plot(wav_model[20:100], mix2[20:100], label="mix 5%")
plt.xlabel("Wavelength (nm)")
plt.legend()
plt.show()
mask = (wav_model > np.min(obs["wavelength"])) & (wav_model < np.max(obs["wavelength"]))
plt.plot(wav_model[mask], mix1[mask], label="mix 1%")
plt.plot(wav_model[mask], mix2[mask], label="mix 5%")
plt.plot(obs["wavelength"], obs["flux"], label="obs")
#plt.xlabel("Wavelength (nm)")
plt.legend()
plt.show()
# Zoomed in
plt.plot(wav_model[mask], mix2[mask], label="mix 5%")
plt.plot(wav_model[mask], mix1[mask], label="mix 1%")
plt.plot(obs["wavelength"], obs["flux"], label="obs")
plt.xlabel("Wavelength (nm)")
plt.legend()
plt.xlim([2112, 2117])
plt.ylim([0.9, 1.1])
plt.title("Zoomed")
plt.show()
from scipy.interpolate import interp1d
# mix1_norm = continuum(wav_model, mix1, splits=50, method="linear", plot=False)
# mix2_norm = local_normalization(wav_model, mix2, splits=50, method="linear", plot=False)
obs_continuum = continuum(obs["wavelength"], obs["flux"], splits=20, method="linear", plot=True)
linear1 = continuum(wav_model, mix1, splits=50, method="linear", plot=True)
linear2 = continuum(wav_model, mix2, splits=50, method="linear", plot=False)
obs_renorm1 = obs["flux"] * (interp1d(wav_model, linear1)(obs["wavelength"]) / obs_continuum)
obs_renorm2 = obs["flux"] * (interp1d(wav_model, linear2)(obs["wavelength"]) / obs_continuum)
# Just a scalar
# mix1_norm = local_normalization(wav_model, mix1, splits=50, method="scalar", plot=False)
# mix2_norm = local_normalization(wav_model, mix2, splits=50, method="scalar", plot=False)
obs_scalar = continuum(obs["wavelength"], obs["flux"], splits=20, method="scalar", plot=False)
scalar1 = continuum(wav_model, mix1, splits=50, method="scalar", plot=True)
scalar2 = continuum(wav_model, mix2, splits=50, method="scalar", plot=False)
print(scalar2)
obs_renorm_scalar1 = obs["flux"] * (interp1d(wav_model, scalar1)(obs["wavelength"]) / obs_scalar)
obs_renorm_scalar2 = obs["flux"] * (interp1d(wav_model, scalar2)(obs["wavelength"]) / obs_scalar)
plt.plot(obs["wavelength"], obs_scalar, label="scalar observed")
plt.plot(obs["wavelength"], obs_continuum, label="linear observed")
plt.plot(obs["wavelength"], interp1d(wav_model, scalar1)(obs["wavelength"]), label="scalar 1%")
plt.plot(obs["wavelength"], interp1d(wav_model, linear1)(obs["wavelength"]), label="linear 1%")
plt.plot(obs["wavelength"], interp1d(wav_model, scalar2)(obs["wavelength"]), label="scalar 5%")
plt.plot(obs["wavelength"], interp1d(wav_model, linear2)(obs["wavelength"]), label="linear 5%")
plt.title("Linear and Scalar continuum renormalizations.")
plt.legend()
plt.show()
plt.plot(obs["wavelength"], obs["flux"], label="obs", alpha =0.6)
plt.plot(obs["wavelength"], obs_renorm1, label="linear norm")
plt.plot(obs["wavelength"], obs_renorm_scalar1, label="scalar norm")
plt.plot(wav_model[mask], mix1[mask], label="mix 1%")
plt.legend()
plt.title("1% model")
plt.hlines(1, 2111, 2124, linestyle="--", alpha=0.2)
plt.show()
plt.plot(obs["wavelength"], obs["flux"], label="obs", alpha =0.6)
plt.plot(obs["wavelength"], obs_renorm1, label="linear norm")
plt.plot(obs["wavelength"], obs_renorm_scalar1, label="scalar norm")
plt.plot(wav_model[mask], mix1[mask], label="mix 1%")
plt.legend()
plt.title("1% model, zoom")
plt.xlim([2120, 2122])
plt.hlines(1, 2111, 2124, linestyle="--", alpha=0.2)
plt.show()
plt.plot(obs["wavelength"], obs["flux"], label="obs", alpha =0.6)
plt.plot(obs["wavelength"], obs_renorm2, label="linear norm")
plt.plot(obs["wavelength"], obs_renorm_scalar2, label="scalar norm")
plt.plot(wav_model[mask], mix2[mask], label="mix 5%")
plt.legend()
plt.title("5% model")
plt.hlines(1, 2111, 2124, linestyle="--", alpha=0.2)
plt.show()
plt.plot(obs["wavelength"], obs["flux"], label="obs", alpha =0.6)
plt.plot(obs["wavelength"], obs_renorm2, label="linear norm")
plt.plot(obs["wavelength"], obs_renorm_scalar2, label="scalar norm")
plt.plot(wav_model[mask], mix2[mask], label="mix 5%")
plt.legend()
plt.title("5% model zoomed")
plt.xlim([2120, 2122])
plt.hlines(1, 2111, 2124, linestyle="--", alpha=0.2)
plt.show()
wav_model = fits.getdata("/home/jneal/Phd/data/PHOENIX-ALL/PHOENIX/WAVE_PHOENIX-ACES-AGSS-COND-2011.fits")
wav_model /= 10 # nm
temps = [2300, 3000, 4000, 5000]
mask1 = (1000 < wav_model) & (wav_model < 3300)
masked_wav1 = wav_model[mask1]
for temp in temps[::-1]:
file = "/home/jneal/Phd/data/PHOENIX-ALL/PHOENIX/Z-0.0/lte0{0}-4.50-0.0.PHOENIX-ACES-AGSS-COND-2011-HiRes.fits".format(temp)
host_f = fits.getdata(file)
plt.plot(masked_wav1, host_f[mask1], label="Teff={}".format(temp))
plt.title("Phoenix spectra")
plt.xlabel("Wavelength (nm)")
plt.legend()
plt.show()
mask = (2000 < wav_model) & (wav_model < 2300)
masked_wav = wav_model[mask]
for temp in temps[::-1]:
file = "/home/jneal/Phd/data/PHOENIX-ALL/PHOENIX/Z-0.0/lte0{0}-4.50-0.0.PHOENIX-ACES-AGSS-COND-2011-HiRes.fits".format(temp)
host_f = fits.getdata(file)
host_f = host_f[mask]
plt.plot(masked_wav, host_f, label="Teff={}".format(temp))
plt.title("Phoenix spectra")
plt.xlabel("Wavelength (nm)")
plt.legend()
plt.show()
# Observations
for chip in range(1,5):
obs = fits.getdata("/home/jneal/.handy_spectra/HD211847-1-mixavg-tellcorr_{}.fits".format(chip))
plt.plot(obs["wavelength"], obs["flux"], label="chip {}".format(chip))
plt.hlines(1, 2111, 2165, linestyle="--")
plt.title("CRIRES spectrum HD211847")
plt.xlabel("Wavelength (nm)")
plt.legend()
plt.show()
# Observations
for chip in range(1,5):
obs = fits.getdata("/home/jneal/.handy_spectra/HD30501-1-mixavg-tellcorr_{}.fits".format(chip))
plt.plot(obs["wavelength"], obs["flux"], label="chip {}".format(chip))
plt.hlines(1, 2111, 2165, linestyle="--")
plt.title("CRIRES spectrum HD30501")
plt.xlabel("Wavelength (nm)")
plt.legend()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The observations were originally continuum normalized automatically in the IRAF extraction pipeline.
Step2: The two PHOENIX ACES spectra here are the first best guess of the two spectral components.
Step5: Current Normalization
Step6: Above, the top panel is the unnormalized spectrum, with the median points in orange and the continuum fit as the green line. The bottom panel is the continuum-normalized result.
Step7: Combining Spectra
Step8: The companion is cooler, so many more deep lines are present in its spectrum.
Step9: As you can see here, my observations are above the continuum most of the time.
Step10: In this example with the 5% companion spectrum there is a noticeable difference between the linear and scalar normalizations, larger at longer wavelengths (more orange visible above the red). The faint blue line is the spectrum before renormalization.
|
5,585
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
pd.options.mode.chained_assignment = None # default='warn', hides SettingWithCopyWarning
file = 'data/evaluations.csv'
conversion_dict = {'research_type': lambda x: int(x == 'E')}
evaluation_data = pd.read_csv(file, sep=',', header=0, index_col=0, converters=conversion_dict)
print('Samples per conference\n{}'.format(evaluation_data.groupby('conference').size()), end='\n')
column_headers = evaluation_data.columns.values
print('\nColumn headers: {}'.format(column_headers))
evaluation_data.drop(['title', 'authors', 'link', 'comments'], axis=1, inplace=True)
column_headers = evaluation_data.columns.values
evaluation_data.head(2)
print('Samples per affiliation\n{}'.format(evaluation_data.groupby('affiliation').size()), end='\n\n')
print('Affiliation by conference\n{}'.format(evaluation_data.groupby(['conference', 'affiliation']).size()), end='\n\n')
print('Samples per research type\n{}'.format(evaluation_data.groupby('research_type').size()), end='\n\n')
print('Research type by conference\n{}'.format(evaluation_data.groupby(['conference', 'research_type']).size()), end='\n\n')
print('Samples per research outcome\n{}'.format(evaluation_data.groupby('result_outcome').size()), end='\n\n')
print('Research outcome by conference\n{}'.format(evaluation_data.groupby(['conference', 'result_outcome']).size()), end='\n\n')
print('Samples with contribution\n{}'.format(evaluation_data.groupby('contribution').size()), end='\n\n')
print('Contribution by conference\n{}'.format(evaluation_data.groupby(['conference', 'contribution']).size()), end='\n\n')
experimental_data = evaluation_data[evaluation_data.research_type == 1]
early_years_index = (experimental_data.conference == 'AAAI 14') | (experimental_data.conference == 'IJCAI 13')
late_years_index = (experimental_data.conference == 'AAAI 16') | (experimental_data.conference == 'IJCAI 16')
method = ['conference','problem_description','goal/objective','research_method',
'research_question','pseudocode']
r3_columns = method
experimental_data.loc[:, 'R3'] = experimental_data[r3_columns].all(axis=1)
print('R3(e)\nTotal = {}'.format(experimental_data['R3'].sum()))
display(experimental_data[['R3', 'conference']].groupby('conference').sum())
experimental_data.loc[:, 'R3D'] = experimental_data[r3_columns].mean(axis=1)
print('\n\nR3D\nTotal: {:.4f}, variance = {:.4f}\nBy conference, followed by variance'
.format(experimental_data['R3D'].mean(), experimental_data['R3D'].var()))
display(experimental_data[['R3D', 'conference']].groupby('conference').mean())
display(experimental_data[['R3D', 'conference']].groupby('conference').var())
print('\n\nYear\tR3D\tVariance\n2013/14\t{:.4f}\t{:.4f}'.format(
experimental_data[early_years_index].R3D.mean(),
experimental_data[early_years_index].R3D.var()))
print('2016\t{:.4f}\t{:.4f}'.format(
experimental_data[late_years_index].R3D.mean(),
experimental_data[late_years_index].R3D.var()))
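The all(axis=1)/mean(axis=1) pattern used for R3 and R3D can be illustrated on a toy DataFrame (hypothetical indicator values, numeric columns only):

```python
import pandas as pd

toy = pd.DataFrame({
    'problem_description': [1, 1, 0],
    'research_method':     [1, 1, 1],
    'research_question':   [1, 0, 1],
})
indicator_cols = ['problem_description', 'research_method', 'research_question']

toy['R3'] = toy[indicator_cols].all(axis=1)    # True only if every variable is documented
toy['R3D'] = toy[indicator_cols].mean(axis=1)  # fraction of variables documented

print(int(toy['R3'].sum()))  # 1 paper satisfies all three variables
```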
data = ['train', 'validation', 'test', 'results']
r2_columns = r3_columns + data
experimental_data.loc[:, 'Data'] = experimental_data[data].all(axis=1)
print('Data(e)\nTotal = {:}'.format(experimental_data['Data'].sum()))
display(experimental_data[['Data', 'conference']].groupby('conference').sum())
experimental_data.loc[:, 'DataD'] = experimental_data[data].mean(axis=1)
print('\n\nDataDegree(e)\nTotal = {:.4f}, variance = {:.4f}\nBy conference, followed by variance'
.format(experimental_data['DataD'].mean(), experimental_data['DataD'].var()))
display(experimental_data[['DataD', 'conference']].groupby('conference').mean())
display(experimental_data[['DataD', 'conference']].groupby('conference').var())
print('\n\nYear\tDataD\tVariance\n2013/14\t{:.4f}\t{:.4f}'.format(
experimental_data[early_years_index].DataD.mean(),
experimental_data[early_years_index].DataD.var()))
print('2016\t{:.4f}\t{:.4f}'.format(
experimental_data[late_years_index].DataD.mean(),
experimental_data[late_years_index].DataD.var()))
experimental_data.loc[:, 'R2'] = experimental_data[r2_columns].all(axis=1)
print('\n\nR2(e)\nTotal = {}'.format(experimental_data['R2'].sum()))
display(experimental_data[['R2', 'conference']].groupby('conference').sum())
experimental_data.loc[:, 'R2D'] = experimental_data[r2_columns].mean(axis=1)
print('\n\nR2D(e)\nTotal = {:.4f}, variance = {:.4f}\nBy conference, followed by variance'
.format(experimental_data['R2D'].mean(), experimental_data['R2D'].var()))
display(experimental_data[['R2D', 'conference']].groupby('conference').mean())
display(experimental_data[['R2D', 'conference']].groupby('conference').var())
print('\n\nYear\tR2D\tVariance\n2013/14\t{:.4f}\t{:.4f}'.format(
experimental_data[early_years_index].R2D.mean(),
experimental_data[early_years_index].R2D.var()))
print('2016\t{:.4f}\t{:.4f}'.format(
experimental_data[late_years_index].R2D.mean(),
experimental_data[late_years_index].R2D.var()))
experiment = ['hypothesis', 'prediction',
'open_source_code', 'open_experiment_code',
'hardware_specification', 'software_dependencies',
'experiment_setup', 'evaluation_criteria']
r1_columns = r2_columns + experiment
experimental_data.loc[:, 'Exp'] = experimental_data[experiment].all(axis=1)
print('Exp(e)\nTotal = {:.4f}'.format(experimental_data['Exp'].sum()))
display(experimental_data[['Exp', 'conference']].groupby('conference').sum())
experimental_data.loc[:, 'ExpD'] = experimental_data[experiment].mean(axis=1)
print('\n\nExpDegree(e)\nTotal = {:.4f}, variance = {:.4f}\nBy conference, followed by variance'
.format(experimental_data['ExpD'].mean(), experimental_data['ExpD'].var()))
display(experimental_data[['ExpD', 'conference']].groupby('conference').mean())
display(experimental_data[['ExpD', 'conference']].groupby('conference').var())
print('\n\nYear\tExpD\tVariance\n2013/14\t{:.4f}\t{:.4f}'.format(
experimental_data[early_years_index].ExpD.mean(),
experimental_data[early_years_index].ExpD.var()))
print('2016\t{:.4f}\t{:.4f}'.format(
experimental_data[late_years_index].ExpD.mean(),
experimental_data[late_years_index].ExpD.var()))
experimental_data.loc[:, 'R1'] = experimental_data[r1_columns].all(axis=1)
print('\n\nR1(e)\nTotal = {:.4f}'.format(experimental_data['R1'].sum()))
display(experimental_data[['R1', 'conference']].groupby('conference').sum())
experimental_data.loc[:, 'R1D'] = experimental_data[r1_columns].mean(axis=1)
print('\n\nR1D(e)\nTotal = {:.4f}, variance = {:.4f}\nBy conference, followed by variance'
.format(experimental_data['R1D'].mean(), experimental_data['R1D'].var()))
display(experimental_data[['R1D', 'conference']].groupby('conference').mean())
display(experimental_data[['R1D', 'conference']].groupby('conference').var())
print('\n\nYear\tR1D\tVariance\n2013/14\t{:.4f}\t{:.4f}'.format(
experimental_data[early_years_index].R1D.mean(),
experimental_data[early_years_index].R1D.var()))
print('2016\t{:.4f}\t{:.4f}'.format(
experimental_data[late_years_index].R1D.mean(),
experimental_data[late_years_index].R1D.var()))
import IPython
import platform
print('Python version: {}'.format(platform.python_version()))
print('IPython version: {}'.format(IPython.__version__))
print('pandas version: {}'.format(pd.__version__))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The dataset has 400 samples with 27 columns. Some of these columns are not necessary for further analysis
Step2: The above two rows exemplify an experimental (top row) and a theoretical (bottom row) paper. Values with a NaN value appear for theoretical papers for all columns that are specific to experimental papers. For the affiliation column, 0 represents academia, 1 represents collaboration and 2 represents industry authors. The remaining columns are boolean, 1 if documented and 0 if not. Note that some experimental papers have no value (NaN) for training and/or validation data if a train/validation/test split is not applicable.
Step3: Extracting experimental papers
Step4: $R3(e) = Method(e)$
Step5: $R2(e) = Method(e) \land Data(e)$
Step6: $R1(e) = Method(e) \land Data(e) \land Exp(e)$
Step7: Versions
|
5,586
|
<ASSISTANT_TASK:>
Python Code:
display('Number of rows: {}'.format(len(df)))
display('Unique SSIDs: {}'.format(len(df['SSID'].unique())))
display('Unique MACs: {}'.format(len(df['MAC'].unique())))
display('Number of Auth Mode types: {}'.format(len(df['AuthMode'].unique())))
def auth_filter(x):
if 'WPA2' in x:
return 'WPA2'
elif 'WPA' in x:
return 'WPA'
elif 'WEP' in x:
return 'WEP'
else:
return 'OPEN'
df['AuthMode'].apply(auth_filter).value_counts().plot(kind='barh')
def wps(x):
if 'WPS' in x:
return 'WPS'
else:
return 'Not WPS'
df['AuthMode'].apply(wps).value_counts().plot(kind='barh')
s = df['AuthMode'].apply(wps)
wps_entries = df.ix[s[s == 'WPS'].index]
wps_entries.head()
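A standalone sketch of the same categorization on a few illustrative AuthMode strings (hypothetical values, not rows from the actual capture):

```python
from collections import Counter

samples = [
    '[WPA2-PSK-CCMP][WPS][ESS]',
    '[WPA-PSK-TKIP][ESS]',
    '[WEP][ESS]',
    '[ESS]',
    '[WPA2-EAP-CCMP][ESS]',
]

def categorize(x):
    # Check strongest-first so 'WPA2' is not miscounted as 'WPA'.
    if 'WPA2' in x:
        return 'WPA2'
    if 'WPA' in x:
        return 'WPA'
    if 'WEP' in x:
        return 'WEP'
    return 'OPEN'

counts = Counter(categorize(s) for s in samples)
wps_count = sum('WPS' in s for s in samples)
print(counts)     # WPA2: 2, WPA: 1, WEP: 1, OPEN: 1
print(wps_count)  # 1 network advertises WPS
```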
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: So there are a significant number of open networks, but the overall majority use WPA2. That's good for the University but not so great for attackers. Of course, there could be a way around that via WPS. How many networks use that?
Step2: Over 500 networks use WPS! Using a tool like Reaver, an attacker could easily breach those networks.
|
5,587
|
<ASSISTANT_TASK:>
Python Code:
import mltoolbox.image.classification as model
from google.datalab.ml import *
import os
bucket = 'gs://' + datalab_project_id() + '-coast'
preprocessed_dir = bucket + '/preprocessed'
staging_dir = bucket + '/staging'
model_dir = bucket + '/model'
train_set = BigQueryDataSet('SELECT image_url, label FROM coast.train')
job = model.preprocess_async(train_set, preprocessed_dir, cloud={'num_workers': 5})
job.wait()
model.train_async(preprocessed_dir, 64, 8000, model_dir,
cloud=CloudTrainingConfig('us-central1', 'BASIC')).wait()
tb_id = TensorBoard.start(model_dir)
summary = Summary(model_dir)
summary.list_events()
summary.plot('accuracy')
summary.plot('loss')
Models().create('coast')
ModelVersions('coast').deploy('v1', model_dir)
ModelVersions('coast').describe('v1')
# gs://tamucc_coastline/esi_images/IMG_2849_SecDE_Spr12.jpg,3B
# gs://tamucc_coastline/esi_images/IMG_0047_SecBC_Spr12.jpg,10A
# gs://tamucc_coastline/esi_images/IMG_0617_SecBC_Spr12.jpg,7
# gs://tamucc_coastline/esi_images/IMG_2034_SecEGH_Sum12_Pt2.jpg,10A
images = [
'gs://tamucc_coastline/esi_images/IMG_2849_SecDE_Spr12.jpg',
'gs://tamucc_coastline/esi_images/IMG_0047_SecBC_Spr12.jpg',
'gs://tamucc_coastline/esi_images/IMG_0617_SecBC_Spr12.jpg',
'gs://tamucc_coastline/esi_images/IMG_2034_SecEGH_Sum12_Pt2.jpg'
]
# resize=True because otherwise the images are too large to send for online prediction.
model.predict('coast.v1', images, resize=True, cloud=True)
eval_set = BigQueryDataSet(sql='select * from coast.eval')
model.batch_predict_async(eval_set, model_dir, output_bq_table='coast.eval_results_full',
cloud={'num_workers': 3, 'temp_location': staging_dir}).wait()
%%bq query --name accuracy
SELECT
target,
SUM(CASE WHEN target=predicted THEN 1 ELSE 0 END) as correct,
COUNT(*) as total,
SUM(CASE WHEN target=predicted THEN 1 ELSE 0 END)/COUNT(*) as accuracy
FROM
coast.eval_results_full
GROUP BY
target
accuracy.execute().result()
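For reference, the same per-class accuracy can be computed in pandas on a toy prediction table (hypothetical labels, not the real coast.eval_results_full):

```python
import pandas as pd

results = pd.DataFrame({
    'target':    ['3B', '3B', '10A', '10A', '7'],
    'predicted': ['3B', '10A', '10A', '10A', '7'],
})
results['correct'] = (results['target'] == results['predicted']).astype(int)

# Mirrors the BigQuery GROUP BY: correct count, total count, and accuracy per class.
per_class = results.groupby('target')['correct'].agg(correct='sum', total='count')
per_class['accuracy'] = per_class['correct'] / per_class['total']
print(per_class)
```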
%%bq query --name logloss
SELECT feature, AVG(-logloss) as logloss, count(*) as count FROM
(
SELECT feature, CASE WHEN correct=1 THEN LOG(prob) ELSE LOG(1-prob) END as logloss
FROM
(
SELECT
target as feature,
CASE WHEN target=predicted THEN 1 ELSE 0 END as correct,
target_prob as prob
FROM coast.eval_results_full
)
)
GROUP BY feature
FeatureSliceView().plot(logloss)
import shutil
import google.datalab.bigquery as bq
TensorBoard.stop(tb_id)
bq.Table('coast.eval_results_full').delete()
ModelVersions('coast').delete('v1')
Models().delete('coast')
!gsutil -m rm -r {preprocessed_dir}
!gsutil -m rm -r {model_dir}
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Training
Step2: Check your job status. You can run
Step3: Evaluation
Step4: Model Deployment
Step5: Online Prediction
Step6: Batch Prediction
Step7: Check evaluation results. They are much better than the local run results --- of course, we used much more data and ran more training steps.
Step8: Clean up
|
5,588
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License v2.0 with LLVM Exceptions.
# See https://llvm.org/LICENSE.txt for license information.
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
#@title General setup
import os
import tempfile
ARTIFACTS_DIR = os.path.join(tempfile.gettempdir(), "iree", "colab_artifacts")
os.makedirs(ARTIFACTS_DIR, exist_ok=True)
print(f"Using artifacts directory '{ARTIFACTS_DIR}'")
#@title Define a sample TensorFlow module using dynamic shapes
import tensorflow as tf
class DynamicShapesModule(tf.Module):
# reduce_sum_1d (dynamic input size, static output size)
# e.g. [1, 2, 3] -> 6
@tf.function(input_signature=[tf.TensorSpec([None], tf.int32)])
def reduce_sum_1d(self, values):
return tf.math.reduce_sum(values)
# reduce_sum_2d (partially dynamic input size, static output size)
# e.g. [[1, 2, 3], [10, 20, 30]] -> [11, 22, 33]
@tf.function(input_signature=[tf.TensorSpec([None, 3], tf.int32)])
def reduce_sum_2d(self, values):
return tf.math.reduce_sum(values, 0)
# add_one (dynamic input size, dynamic output size)
# e.g. [1, 2, 3] -> [2, 3, 4]
@tf.function(input_signature=[tf.TensorSpec([None], tf.int32)])
def add_one(self, values):
return tf.math.add(values, tf.constant(1, dtype=tf.int32))
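As a plain-NumPy reference for the three functions above (handy for checking the IREE outputs later):

```python
import numpy as np

values_1d = np.array([1, 10, 100], dtype=np.int32)
values_2d = np.array([[1, 2, 3], [10, 20, 30]], dtype=np.int32)

print(values_1d.sum())        # reduce_sum_1d -> 111
print(values_2d.sum(axis=0))  # reduce_sum_2d -> [11 22 33]
print(values_1d + 1)          # add_one       -> [  2  11 101]
```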
%%capture
!python -m pip install iree-compiler iree-tools-tf -f https://github.com/google/iree/releases
#@title Import the TensorFlow program into IREE as MLIR
from IPython.display import clear_output
from iree.compiler import tf as tfc
compiler_module = tfc.compile_module(
DynamicShapesModule(), import_only=True,
output_mlir_debuginfo=False)
clear_output() # Skip over TensorFlow's output.
# Print the imported MLIR to see how the compiler views this program.
print("Dynamic Shapes MLIR:\n```\n%s```\n" % compiler_module.decode("utf-8"))
# Save the imported MLIR to disk.
imported_mlir_path = os.path.join(ARTIFACTS_DIR, "dynamic_shapes.mlir")
with open(imported_mlir_path, "wt") as output_file:
output_file.write(compiler_module.decode("utf-8"))
print(f"Wrote MLIR to path '{imported_mlir_path}'")
%%capture
!python -m pip install iree-compiler -f https://github.com/google/iree/releases
#@title Compile the imported MLIR further into an IREE VM bytecode module
from iree.compiler import compile_str
# Note: we'll use the cpu (LLVM) backend since it has the best support
# for dynamic shapes among our compiler targets.
flatbuffer_blob = compile_str(compiler_module, target_backends=["cpu"], input_type="mhlo")
# Save the compiled program to disk.
flatbuffer_path = os.path.join(ARTIFACTS_DIR, "dynamic_shapes_cpu.vmfb")
with open(flatbuffer_path, "wb") as output_file:
output_file.write(flatbuffer_blob)
print(f"Wrote compiled program to path '{flatbuffer_path}'")
%%capture
!python -m pip install iree-runtime -f https://github.com/google/iree/releases
#@title Test running the compiled VM module using IREE's runtime
from iree import runtime as ireert
vm_module = ireert.VmModule.from_flatbuffer(flatbuffer_blob)
config = ireert.Config("local-task")
ctx = ireert.SystemContext(config=config)
ctx.add_vm_module(vm_module)
import numpy as np
# Our @tf.functions are accessible by name on the module named 'module'
dynamic_shapes_program = ctx.modules.module
print(dynamic_shapes_program.reduce_sum_1d(np.array([1, 10, 100], dtype=np.int32)).to_host())
print(dynamic_shapes_program.reduce_sum_2d(np.array([[1, 2, 3], [10, 20, 30]], dtype=np.int32)).to_host())
print(dynamic_shapes_program.reduce_sum_2d(np.array([[1, 2, 3], [10, 20, 30], [100, 200, 300]], dtype=np.int32)).to_host())
print(dynamic_shapes_program.add_one(np.array([1, 10, 100], dtype=np.int32)).to_host())
ARTIFACTS_ZIP = "/tmp/dynamic_shapes_colab_artifacts.zip"
print(f"Zipping '{ARTIFACTS_DIR}' to '{ARTIFACTS_ZIP}' for download...")
!cd {ARTIFACTS_DIR} && zip -r {ARTIFACTS_ZIP} .
# Note: you can also download files using Colab's file explorer
try:
from google.colab import files
print("Downloading the artifacts zip file...")
files.download(ARTIFACTS_ZIP)
except ImportError:
print("Missing google_colab Python package, can't download files")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dynamic Shapes
Step2: Create a program using TensorFlow and import it into IREE
Step3: Test the imported program
Step4: Download compilation artifacts
|
5,589
|
<ASSISTANT_TASK:>
Python Code:
# Imports
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, execute
from qiskit.tools.visualization import matplotlib_circuit_drawer as circuit_drawer
from qiskit.tools.visualization import plot_histogram, qx_color_scheme
from qiskit.wrapper.jupyter import *
from qiskit import IBMQ, Aer
from qiskit.backends.ibmq import least_busy
IBMQ.load_accounts()
# use simulator to learn more about entangled quantum states where possible
sim_backend = Aer.get_backend('qasm_simulator')
sim_shots = 8192
# use device to test entanglement
device_shots = 1024
device_backend = least_busy(IBMQ.backends(operational=True, simulator=False))
device_coupling = device_backend.configuration()['coupling_map']
print("the best backend is " + device_backend.name() + " with coupling " + str(device_coupling))
# Creating registers
q = QuantumRegister(2)
c = ClassicalRegister(2)
# quantum circuit to make an entangled bell state
bell = QuantumCircuit(q, c)
bell.h(q[0])
bell.cx(q[0], q[1])
# quantum circuit to measure q in the standard basis
measureZZ = QuantumCircuit(q, c)
measureZZ.measure(q[0], c[0])
measureZZ.measure(q[1], c[1])
bellZZ = bell+measureZZ
# quantum circuit to measure q in the superposition basis
measureXX = QuantumCircuit(q, c)
measureXX.h(q[0])
measureXX.h(q[1])
measureXX.measure(q[0], c[0])
measureXX.measure(q[1], c[1])
bellXX = bell+measureXX
# quantum circuit to measure ZX
measureZX = QuantumCircuit(q, c)
measureZX.h(q[0])
measureZX.measure(q[0], c[0])
measureZX.measure(q[1], c[1])
bellZX = bell+measureZX
# quantum circuit to measure XZ
measureXZ = QuantumCircuit(q, c)
measureXZ.h(q[1])
measureXZ.measure(q[0], c[0])
measureXZ.measure(q[1], c[1])
bellXZ = bell+measureXZ
circuits = [bellZZ,bellXX,bellZX,bellXZ]
circuit_drawer(bellZZ,style=qx_color_scheme())
circuit_drawer(bellXX,style=qx_color_scheme())
circuit_drawer(bellZX,style=qx_color_scheme())
circuit_drawer(bellXZ,style=qx_color_scheme())
%%qiskit_job_status
HTMLProgressBar()
job = execute(circuits, backend=device_backend, coupling_map=device_coupling, shots=device_shots)
result = job.result()
observable_first ={'00': 1, '01': -1, '10': 1, '11': -1}
observable_second ={'00': 1, '01': 1, '10': -1, '11': -1}
observable_correlated ={'00': 1, '01': -1, '10': -1, '11': 1}
print('IZ = ' + str(result.average_data(bellZZ,observable_first)))
print('ZI = ' + str(result.average_data(bellZZ,observable_second)))
print('ZZ = ' + str(result.average_data(bellZZ,observable_correlated)))
print('IX = ' + str(result.average_data(bellXX,observable_first)))
print('XI = ' + str(result.average_data(bellXX,observable_second)))
print('XX = ' + str(result.average_data(bellXX,observable_correlated)))
print('ZX = ' + str(result.average_data(bellZX,observable_correlated)))
print('XZ = ' + str(result.average_data(bellXZ,observable_correlated)))
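The `average_data` calls above collapse a counts dictionary into an expectation value by weighting each bitstring's eigenvalue (±1) by its observed frequency. A minimal pure-Python sketch of that reduction, using hypothetical noise-free Bell-state counts:

```python
def expectation(counts, observable):
    # weighted average of the observable's eigenvalues over all shots
    shots = sum(counts.values())
    return sum(observable[key] * n for key, n in counts.items()) / shots

# ideal Bell-state counts in the standard basis (hypothetical, noise-free)
counts = {'00': 512, '11': 512}
ZZ = {'00': 1, '01': -1, '10': -1, '11': 1}
print(expectation(counts, ZZ))  # 1.0: the two qubits are perfectly correlated
```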
CHSH = lambda x : x[0]+x[1]+x[2]-x[3]
measure = [measureZZ, measureZX, measureXX, measureXZ]
# Theory
sim_chsh_circuits = []
sim_x = []
sim_steps = 30
for step in range(sim_steps):
theta = 2.0*np.pi*step/30
bell_middle = QuantumCircuit(q,c)
bell_middle.ry(theta,q[0])
for m in measure:
sim_chsh_circuits.append(bell+bell_middle+m)
sim_x.append(theta)
job = execute(sim_chsh_circuits, backend=sim_backend, shots=sim_shots)
result = job.result()
sim_chsh = []
circ = 0
for x in range(len(sim_x)):
temp_chsh = []
for m in range(len(measure)):
temp_chsh.append(result.average_data(sim_chsh_circuits[circ].name,observable_correlated))
circ += 1
sim_chsh.append(CHSH(temp_chsh))
# Experiment
real_chsh_circuits = []
real_x = []
real_steps = 10
for step in range(real_steps):
theta = 2.0*np.pi*step/10
bell_middle = QuantumCircuit(q,c)
bell_middle.ry(theta,q[0])
for m in measure:
real_chsh_circuits.append(bell+bell_middle+m)
real_x.append(theta)
%%qiskit_job_status
HTMLProgressBar()
job = execute(real_chsh_circuits, backend=device_backend, coupling_map=device_coupling, shots=device_shots)
result = job.result()
real_chsh = []
circ = 0
for x in range(len(real_x)):
temp_chsh = []
for m in range(len(measure)):
temp_chsh.append(result.average_data(real_chsh_circuits[circ].name,observable_correlated))
circ += 1
real_chsh.append(CHSH(temp_chsh))
plt.plot(sim_x, sim_chsh, 'r-', real_x, real_chsh, 'bo')
plt.plot([0, 2*np.pi], [2, 2], 'b-')
plt.plot([0, 2*np.pi], [-2, -2], 'b-')
plt.grid()
plt.ylabel('CHSH', fontsize=20)
plt.xlabel(r'$Y(\theta)$', fontsize=20)
plt.show()
print(real_chsh)
# 2 - qubits
# quantum circuit to make GHZ state
q2 = QuantumRegister(2)
c2 = ClassicalRegister(2)
ghz = QuantumCircuit(q2, c2)
ghz.h(q2[0])
ghz.cx(q2[0],q2[1])
# quantum circuit to measure q in standard basis
measureZZ = QuantumCircuit(q2, c2)
measureZZ.measure(q2[0], c2[0])
measureZZ.measure(q2[1], c2[1])
ghzZZ = ghz+measureZZ
measureXX = QuantumCircuit(q2, c2)
measureXX.h(q2[0])
measureXX.h(q2[1])
measureXX.measure(q2[0], c2[0])
measureXX.measure(q2[1], c2[1])
ghzXX = ghz+measureXX
circuits2 = [ghzZZ, ghzXX]
circuit_drawer(ghzZZ,style=qx_color_scheme())
circuit_drawer(ghzXX,style=qx_color_scheme())
job2 = execute(circuits2, backend=sim_backend, shots=sim_shots)
result2 = job2.result()
plot_histogram(result2.get_counts(ghzZZ))
plot_histogram(result2.get_counts(ghzXX))
# 3 - qubits
# quantum circuit to make GHZ state
q3 = QuantumRegister(3)
c3 = ClassicalRegister(3)
ghz3 = QuantumCircuit(q3, c3)
ghz3.h(q3[0])
ghz3.cx(q3[0],q3[1])
ghz3.cx(q3[1],q3[2])
# quantum circuit to measure q in standard basis
measureZZZ = QuantumCircuit(q3, c3)
measureZZZ.measure(q3[0], c3[0])
measureZZZ.measure(q3[1], c3[1])
measureZZZ.measure(q3[2], c3[2])
ghzZZZ = ghz3+measureZZZ
measureXXX = QuantumCircuit(q3, c3)
measureXXX.h(q3[0])
measureXXX.h(q3[1])
measureXXX.h(q3[2])
measureXXX.measure(q3[0], c3[0])
measureXXX.measure(q3[1], c3[1])
measureXXX.measure(q3[2], c3[2])
ghzXXX = ghz3+measureXXX
circuits3 = [ghzZZZ, ghzXXX]
circuit_drawer(ghzZZZ,style=qx_color_scheme())
circuit_drawer(ghzXXX,style=qx_color_scheme())
job3 = execute(circuits3, backend=sim_backend, shots=sim_shots)
result3 = job3.result()
plot_histogram(result3.get_counts(ghzZZZ))
plot_histogram(result3.get_counts(ghzXXX))
# 4 - qubits
# quantum circuit to make GHZ state
q4 = QuantumRegister(4)
c4 = ClassicalRegister(4)
ghz4 = QuantumCircuit(q4, c4)
ghz4.h(q4[0])
ghz4.cx(q4[0],q4[1])
ghz4.cx(q4[1],q4[2])
ghz4.h(q4[3])
ghz4.h(q4[2])
ghz4.cx(q4[3],q4[2])
ghz4.h(q4[3])
ghz4.h(q4[2])
# quantum circuit to measure q in standard basis
measureZZZZ = QuantumCircuit(q4, c4)
measureZZZZ.measure(q4[0], c4[0])
measureZZZZ.measure(q4[1], c4[1])
measureZZZZ.measure(q4[2], c4[2])
measureZZZZ.measure(q4[3], c4[3])
ghzZZZZ = ghz4+measureZZZZ
measureXXXX = QuantumCircuit(q4, c4)
measureXXXX.h(q4[0])
measureXXXX.h(q4[1])
measureXXXX.h(q4[2])
measureXXXX.h(q4[3])
measureXXXX.measure(q4[0], c4[0])
measureXXXX.measure(q4[1], c4[1])
measureXXXX.measure(q4[2], c4[2])
measureXXXX.measure(q4[3], c4[3])
ghzXXXX = ghz4+measureXXXX
circuits4 = [ghzZZZZ, ghzXXXX]
circuit_drawer(ghzZZZZ,style=qx_color_scheme())
circuit_drawer(ghzXXXX,style=qx_color_scheme())
job4 = execute(circuits4, backend=sim_backend, shots=sim_shots)
result4 = job4.result()
plot_histogram(result4.get_counts(ghzZZZZ))
plot_histogram(result4.get_counts(ghzXXXX))
# quantum circuit to make GHZ state
q3 = QuantumRegister(3)
c3 = ClassicalRegister(3)
ghz3 = QuantumCircuit(q3, c3)
ghz3.h(q3[0])
ghz3.cx(q3[0],q3[1])
ghz3.cx(q3[0],q3[2])
# quantum circuit to measure q in standard basis
measureZZZ = QuantumCircuit(q3, c3)
measureZZZ.measure(q3[0], c3[0])
measureZZZ.measure(q3[1], c3[1])
measureZZZ.measure(q3[2], c3[2])
ghzZZZ = ghz3+measureZZZ
circuits5 = [ghzZZZ]
circuit_drawer(ghzZZZ,style=qx_color_scheme())
job5 = execute(circuits5, backend=sim_backend, shots=sim_shots)
result5 = job5.result()
plot_histogram(result5.get_counts(ghzZZZ))
MerminM = lambda x : x[0]*x[1]*x[2]*x[3]
observable ={'000': 1, '001': -1, '010': -1, '011': 1, '100': -1, '101': 1, '110': 1, '111': -1}
# quantum circuit to measure q XXX
measureXXX = QuantumCircuit(q3, c3)
measureXXX.h(q3[0])
measureXXX.h(q3[1])
measureXXX.h(q3[2])
measureXXX.measure(q3[0], c3[0])
measureXXX.measure(q3[1], c3[1])
measureXXX.measure(q3[2], c3[2])
ghzXXX = ghz3+measureXXX
# quantum circuit to measure q XYY
measureXYY = QuantumCircuit(q3, c3)
measureXYY.s(q3[1]).inverse()
measureXYY.s(q3[2]).inverse()
measureXYY.h(q3[0])
measureXYY.h(q3[1])
measureXYY.h(q3[2])
measureXYY.measure(q3[0], c3[0])
measureXYY.measure(q3[1], c3[1])
measureXYY.measure(q3[2], c3[2])
ghzXYY = ghz3+measureXYY
# quantum circuit to measure q YXY
measureYXY = QuantumCircuit(q3, c3)
measureYXY.s(q3[0]).inverse()
measureYXY.s(q3[2]).inverse()
measureYXY.h(q3[0])
measureYXY.h(q3[1])
measureYXY.h(q3[2])
measureYXY.measure(q3[0], c3[0])
measureYXY.measure(q3[1], c3[1])
measureYXY.measure(q3[2], c3[2])
ghzYXY = ghz3+measureYXY
# quantum circuit to measure q YYX
measureYYX = QuantumCircuit(q3, c3)
measureYYX.s(q3[0]).inverse()
measureYYX.s(q3[1]).inverse()
measureYYX.h(q3[0])
measureYYX.h(q3[1])
measureYYX.h(q3[2])
measureYYX.measure(q3[0], c3[0])
measureYYX.measure(q3[1], c3[1])
measureYYX.measure(q3[2], c3[2])
ghzYYX = ghz3+measureYYX
circuits6 = [ghzXXX, ghzYYX, ghzYXY, ghzXYY]
circuit_drawer(ghzXXX,style=qx_color_scheme())
circuit_drawer(ghzYYX,style=qx_color_scheme())
circuit_drawer(ghzYXY,style=qx_color_scheme())
circuit_drawer(ghzXYY,style=qx_color_scheme())
%%qiskit_job_status
HTMLProgressBar()
job6 = execute(circuits6, backend=device_backend, coupling_map=device_coupling, shots=device_shots)
result6 = job6.result()
temp=[]
temp.append(result6.average_data(ghzXXX,observable))
temp.append(result6.average_data(ghzYYX,observable))
temp.append(result6.average_data(ghzYXY,observable))
temp.append(result6.average_data(ghzXYY,observable))
print(MerminM(temp))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Recall that to make the Bell state $|\psi\rangle= (|00\rangle+|11\rangle)/\sqrt{2}$ from the initial state $|00\rangle$, the quantum circuit first applies a Hadamard on $q_0$, followed by a CNOT from $q_0$ to $q_1$. Using Qiskit, this can be done with the script below to measure the above expectation values; we run four different experiments with measurements in the standard basis, superposition basis, and a combination of both.
Step2: Here we see that for the state $|\psi\rangle= (|00\rangle+|11\rangle)/\sqrt{2}$, expectation values (within experimental errors) are
Step3: The resulting graph created by running the previous cell compares the simulated data (sinusoidal line) and the data from the real experiment. The graph also gives lines at $\pm 2$ for reference. Did you violate the hidden variable model?
Step4: Despite the presence of loopholes in our demonstration, we can see that this experiment is compatible with quantum mechanics as a theory with no local hidden variables. See the original experimental demonstrations of this test with superconducting qubits here and here.
Step5: Mermin's Test and the Three Box Game<a id='section4'></a>
Step6: Suppose we have three independent systems, ${A, B, C}$, for which we can query two particular questions (observables) $X$ and $Y$. In each case, either query can give $+1$ or $-1$. Consider whether it is possible to choose some state of the three boxes, such that we can satisfy the following four conditions
|
5,590
|
<ASSISTANT_TASK:>
Python Code:
import time
import numpy as np
from pypot.creatures import PoppyTorso
poppy = PoppyTorso()
for m in poppy.motors:
m.goto_position(0, 2)
# Left arm is compliant, right arm is active
for m in poppy.l_arm:
m.compliant = True
for m in poppy.r_arm:
m.compliant = False
# The torso itself must not be compliant
for m in poppy.torso:
m.compliant = False
def follow_hand(poppy, delta):
"""Tell the right hand to follow the left hand"""
right_arm_position = poppy.l_arm_chain.end_effector + delta
poppy.r_arm_chain.goto(right_arm_position, 0.5, wait=True)
try:
while True:
follow_hand(poppy, target_delta)
time.sleep(delay_time)
# Close properly the object when finished
except KeyboardInterrupt:
poppy.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then, create your Pypot robot
Step2: Initialize your robot positions to 0
Step3: The left arm must be compliant (so you can move it), and the right arm must be active
Step5: Following the left hand
Step6: Now, do this repeatedly in a loop
|
5,591
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.DataFrame({'product': [1179160, 1066490, 1148126, 1069104, 1069105, 1160330, 1069098, 1077784, 1193369, 1179741],
'score': [0.424654, 0.424509, 0.422207, 0.420455, 0.414603, 0.168784, 0.168749, 0.168738, 0.168703, 0.168684]})
products = [1066490, 1077784, 1179741]
Max = df.loc[df['product'].isin(products), 'score'].max()
Min = df.loc[df['product'].isin(products), 'score'].min()
df.loc[df['product'].isin(products), 'score'] = (df.loc[df['product'].isin(products), 'score'] - Min) / (Max - Min)
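The min–max rescaling above maps the selected scores onto [0, 1]; the same transform can be checked in plain Python (hypothetical score values) — the selected maximum maps to 1.0 and the minimum to 0.0:

```python
scores = [0.424509, 0.168738, 0.168684]   # scores of the selected products
lo, hi = min(scores), max(scores)
scaled = [(s - lo) / (hi - lo) for s in scores]
print(scaled)  # first entry is 1.0, last is 0.0
```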
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
5,592
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy
import matplotlib.pyplot as plt
t = numpy.linspace(0.0, 1.6e3, 100)
c_0 = 1.0
decay_constant = numpy.log(2.0) / 1600.0
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, 1.0 * numpy.exp(-decay_constant * t))
axes.set_title("Radioactive Decay with $t_{1/2} = 1600$ years")
axes.set_xlabel('t (years)')
axes.set_ylabel('$c$')
axes.set_ylim((0.5,1.0))
plt.show()
import scipy.integrate as integrate
def f(t, u, mu=5):
return numpy.array([u[1], mu * (1.0 - u[0]**2) * u[1] - u[0]])
t = numpy.linspace(0.0, 100, 1000)
u = numpy.empty((2, t.shape[0]))
u[:, 0] = [0.1, 0.0]
integrator = integrate.ode(f)
integrator.set_integrator("dopri5")
integrator.set_initial_value(u[:, 0])
for (n, t_n) in enumerate(t[1:]):
integrator.integrate(t_n)
if not integrator.successful():
break
u[:, n + 1] = integrator.y
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, u[0,:])
axes.set_title("Solution to Van der Pol Oscillator")
axes.set_xlabel("t")
axes.set_ylabel("y(t)")
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(u[0,:], u[1, :])
axes.set_title("Phase Diagram for Van der Pol Oscillator")
axes.set_xlabel("y(t)")
axes.set_ylabel("y'(t)")
plt.show()
t = numpy.linspace(0.0, 1.6e3, 100)
c_0 = 1.0
decay_constant = numpy.log(2.0) / 1600.0
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, c_0 * numpy.exp(-decay_constant * t), label="True Solution")
# Plot Euler step
dt = 1e3
u_np = c_0 + dt * (-decay_constant * c_0)
axes.plot((0.0, dt), (c_0, u_np), 'k')
axes.plot((dt, dt), (u_np, c_0 * numpy.exp(-decay_constant * dt)), 'k--')
axes.plot((0.0, 0.0), (c_0, u_np), 'k--')
axes.plot((0.0, dt), (u_np, u_np), 'k--')
axes.text(400, u_np - 0.05, '$\Delta t$', fontsize=16)
axes.set_title("Radioactive Decay with $t_{1/2} = 1600$ years")
axes.set_xlabel('t (years)')
axes.set_ylabel('$c$')
axes.set_xlim(-1e2, 1.6e3)
axes.set_ylim((0.5,1.0))
plt.show()
c_0 = 1.0
decay_constant = numpy.log(2.0) / 1600.0
f = lambda t, u: -decay_constant * u
t_exact = numpy.linspace(0.0, 1.6e3, 100)
u_exact = c_0 * numpy.exp(-decay_constant * t_exact)
# Implement Euler
t_euler = numpy.linspace(0.0, 1.6e3, 10)
delta_t = t_euler[1] - t_euler[0]
u_euler = numpy.empty(t_euler.shape)
u_euler[0] = c_0
for (n, t_n) in enumerate(t_euler[:-1]):
u_euler[n + 1] = u_euler[n] + delta_t * f(t_n, u_euler[n])
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t_euler, u_euler, 'or', label="Euler")
axes.plot(t_exact, u_exact, 'k--', label="True Solution")
axes.set_title("Forward Euler")
axes.set_xlabel("t (years)")
axes.set_ylabel("$c(t)$")
axes.set_ylim((0.4,1.1))
axes.legend()
plt.show()
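Forward Euler's first-order convergence can be checked without plotting: halving the step should roughly halve the error. A self-contained sketch (using $u' = -u$ with exact solution $e^{-t}$):

```python
import math

def euler(f, u0, t0, tf, N):
    # fixed-step forward Euler integration of u' = f(t, u)
    dt = (tf - t0) / N
    t, u = t0, u0
    for _ in range(N):
        u += dt * f(t, u)
        t += dt
    return u

f = lambda t, u: -u
e1 = abs(euler(f, 1.0, 0.0, 1.0, 50) - math.exp(-1.0))
e2 = abs(euler(f, 1.0, 0.0, 1.0, 100) - math.exp(-1.0))
print(e1 / e2)  # ~2: the global error is O(dt)
```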
t = numpy.linspace(0.0, 1.6e3, 100)
c_0 = 1.0
decay_constant = numpy.log(2.0) / 1600.0
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, c_0 * numpy.exp(-decay_constant * t), label="True Solution")
# Plot Euler step
dt = 1e3
u_np = c_0 + dt * (-decay_constant * c_0 * numpy.exp(-decay_constant * dt / 2.0))
axes.plot((0.0, dt), (c_0, u_np), 'k')
axes.plot((dt, dt), (u_np, c_0 * numpy.exp(-decay_constant * dt)), 'k--')
axes.plot((0.0, 0.0), (c_0, u_np), 'k--')
axes.plot((0.0, dt), (u_np, u_np), 'k--')
axes.text(400, u_np - 0.05, '$\Delta t$', fontsize=16)
axes.set_title("Radioactive Decay with $t_{1/2} = 1600$ years")
axes.set_xlabel('t (years)')
axes.set_ylabel('$c$')
axes.set_xlim(-1e2, 1.6e3)
axes.set_ylim((0.5,1.0))
plt.show()
c_0 = 1.0
decay_constant = numpy.log(2.0) / 1600.0
f = lambda t, u: -decay_constant * u
t_exact = numpy.linspace(0.0, 1.6e3, 100)
u_exact = c_0 * numpy.exp(-decay_constant * t_exact)
# Implement leap-frog
t_leapfrog = numpy.linspace(0.0, 1.6e3, 10)
delta_t = t_leapfrog[1] - t_leapfrog[0]
u_leapfrog = numpy.empty(t_leapfrog.shape)
u_leapfrog[0] = c_0
# First evaluation use Euler to get us going
u_leapfrog[1] = u_leapfrog[0] + delta_t * f(t_leapfrog[0], u_leapfrog[0])
for n in xrange(1, t_leapfrog.shape[0] - 1):
u_leapfrog[n + 1] = u_leapfrog[n - 1] + 2.0 * delta_t * f(t_leapfrog[n], u_leapfrog[n])
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t_leapfrog, u_leapfrog, 'or', label="Leap-Frog")
axes.plot(t_exact, u_exact, 'k--', label="True Solution")
axes.set_title("Leap-Frog")
axes.set_xlabel("t (years)")
axes.set_ylabel("$c(t)$")
axes.set_ylim((0.4,1.1))
axes.legend()
plt.show()
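Although leap-frog is second-order accurate, it is only marginally stable for decay problems: the two-step recurrence carries a parasitic root of magnitude greater than one, which eventually dominates. A minimal sketch (assuming $u' = -u$, integrated over a longer interval than above):

```python
import math

dt = 0.1
u_prev, u = 1.0, 1.0 - dt                  # Euler bootstrap for the first step
for n in range(1, 300):
    u_prev, u = u, u_prev - 2.0 * dt * u   # leap-frog update for u' = -u
print(abs(u), math.exp(-30.0))  # parasitic mode swamps the ~9e-14 true value
```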
t = numpy.linspace(0.0, 1.6e3, 100)
c_0 = 1.0
decay_constant = numpy.log(2.0) / 1600.0
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, c_0 * numpy.exp(-decay_constant * t), label="True Solution")
# Plot Euler step
dt = 1e3
u_np = c_0 + dt * (-decay_constant * c_0 * numpy.exp(-decay_constant * dt))
axes.plot((0.0, dt), (c_0, u_np), 'k')
axes.plot((dt, dt), (u_np, c_0 * numpy.exp(-decay_constant * dt)), 'k--')
axes.plot((0.0, 0.0), (c_0, c_0 * numpy.exp(-decay_constant * dt)), 'k--')
axes.plot((0.0, dt), (c_0 * numpy.exp(-decay_constant * dt), c_0 * numpy.exp(-decay_constant * dt)), 'k--')
axes.text(400, u_np - 0.05, '$\Delta t$', fontsize=16)
axes.set_title("Radioactive Decay with $t_{1/2} = 1600$ years")
axes.set_xlabel('t (years)')
axes.set_ylabel('$c$')
axes.set_xlim(-1e2, 1.6e3)
axes.set_ylim((0.5,1.0))
plt.show()
c_0 = 1.0
decay_constant = numpy.log(2.0) / 1600.0
f = lambda t, u: -decay_constant * u
t_exact = numpy.linspace(0.0, 1.6e3, 100)
u_exact = c_0 * numpy.exp(-decay_constant * t_exact)
# Implement backwards Euler
t_backwards = numpy.linspace(0.0, 1.6e3, 10)
delta_t = t_backwards[1] - t_backwards[0]
u_backwards = numpy.empty(t_backwards.shape)
u_backwards[0] = c_0
for n in xrange(0, t_backwards.shape[0] - 1):
u_backwards[n + 1] = u_backwards[n] / (1.0 + decay_constant * delta_t)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t_backwards, u_backwards, 'or', label="Backwards Euler")
axes.plot(t_exact, u_exact, 'k--', label="True Solution")
axes.set_title("Backwards Euler")
axes.set_xlabel("t (years)")
axes.set_ylabel("$c(t)$")
axes.set_ylim((0.4,1.1))
axes.legend()
plt.show()
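A key advantage of backwards Euler is unconditional stability: it damps the solution for any step size, whereas forward Euler blows up once $\Delta t > 2/\lambda$. A small sketch comparing the two updates at an intentionally oversized step ($\lambda = 1$ for simplicity):

```python
lam, dt = 1.0, 3.0            # dt > 2/lam: outside forward Euler's stability region
u_fe = u_be = 1.0
for _ in range(20):
    u_fe = u_fe + dt * (-lam * u_fe)   # amplification factor 1 - lam*dt = -2
    u_be = u_be / (1.0 + lam * dt)     # amplification factor 1/(1 + lam*dt) = 1/4
print(u_fe, u_be)  # forward Euler blows up; backwards Euler decays
```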
c_0 = 1.0
decay_constant = numpy.log(2.0) / 1600.0
t_exact = numpy.linspace(0.0, 1.6e3, 100)
u_exact = c_0 * numpy.exp(-decay_constant * t_exact)
# Implement trapezoidal method
t = numpy.linspace(0.0, 1.6e3, 10)
delta_t = t[1] - t[0]
u = numpy.empty(t.shape)
u[0] = c_0
integration_constant = (1.0 - decay_constant * delta_t / 2.0) / (1.0 + decay_constant * delta_t / 2.0)
for n in xrange(t.shape[0] - 1):
u[n + 1] = u[n] * integration_constant
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, u, 'or', label="Trapezoidal")
axes.plot(t_exact, u_exact, 'k--', label="True Solution")
axes.set_title("Trapezoidal")
axes.set_xlabel("t (years)")
axes.set_ylabel("$c(t)$")
axes.set_ylim((0.4,1.1))
axes.legend()
plt.show()
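The update constant used above is the trapezoidal amplification factor $G = (1 - \lambda\Delta t/2)/(1 + \lambda\Delta t/2)$, which matches $e^{-\lambda\Delta t}$ to third order in $\Delta t$ — the hallmark of a second-order one-step method. A quick numerical check with the decay constant and step size from the run above:

```python
import math

decay_constant = math.log(2.0) / 1600.0
delta_t = 1.6e3 / 9.0           # step implied by the 10-point grid above
z = decay_constant * delta_t
G = (1.0 - z / 2.0) / (1.0 + z / 2.0)
print(G - math.exp(-z))         # mismatch is ~ -z**3 / 12, i.e. O(dt**3) per step
```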
# Compare accuracy between Euler and Leap-Frog
f = lambda t, u: -u
u_exact = lambda t: numpy.exp(-t)
u_0 = 1.0
t_f = 10.0
num_steps = [2**n for n in xrange(4,10)]
delta_t = numpy.empty(len(num_steps))
error_euler = numpy.empty(len(num_steps))
error_trap = numpy.empty(len(num_steps))
error_leapfrog = numpy.empty(len(num_steps))
for (i, N) in enumerate(num_steps):
t = numpy.linspace(0, t_f, N)
delta_t[i] = t[1] - t[0]
# Compute Euler solution
u_euler = numpy.empty(t.shape)
u_euler[0] = u_0
for n in xrange(t.shape[0] - 1):
u_euler[n+1] = u_euler[n] + delta_t[i] * f(t[n], u_euler[n])
# Compute trapezoidal
u_trap = numpy.empty(t.shape)
u_trap[0] = u_0
integration_constant = (1.0 - delta_t[i] / 2.0) / (1.0 + delta_t[i] / 2.0)
for n in xrange(t.shape[0] - 1):
u_trap[n + 1] = u_trap[n] * integration_constant
# Compute Leap-Frog
u_leapfrog = numpy.empty(t.shape)
u_leapfrog[0] = 1.0
u_leapfrog[1] = u_euler[1]
for n in xrange(1, t.shape[0] - 1):
u_leapfrog[n+1] = u_leapfrog[n-1] + 2.0 * delta_t[i] * f(t[n], u_leapfrog[n])
# Compute error for each
error_euler[i] = numpy.linalg.norm(delta_t[i] * (u_euler - u_exact(t)), ord=1)
error_trap[i] = numpy.linalg.norm(delta_t[i] * (u_trap - u_exact(t)), ord=1)
error_leapfrog[i] = numpy.linalg.norm(delta_t[i] * (u_leapfrog - u_exact(t)), ord=1)
# Plot error vs. delta_t
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
order_C = lambda delta_x, error, order: numpy.exp(numpy.log(error) - order * numpy.log(delta_x))
axes.loglog(delta_t, error_euler, 'bo', label='Forward Euler')
axes.loglog(delta_t, error_trap, 'go', label='Trapezoidal')
axes.loglog(delta_t, error_leapfrog, 'ro', label="Leap-Frog")
axes.loglog(delta_t, order_C(delta_t[2], error_euler[2], 1.0) * delta_t**1.0, '--b')
axes.loglog(delta_t, order_C(delta_t[2], error_trap[2], 2.0) * delta_t**2.0, '--g')
axes.loglog(delta_t, order_C(delta_t[2], error_leapfrog[2], 2.0) * delta_t**2.0, '--r')
axes.legend(loc=2)
axes.set_title("Comparison of Errors")
axes.set_xlabel("$\Delta t$")
axes.set_ylabel("$|U(t_f) - u(t_f)|$")
plt.show()
# Implement and compare the two-stage and 4-stage Runge-Kutta methods
f = lambda t, u: -u
t_exact = numpy.linspace(0.0, 10.0, 100)
u_exact = numpy.exp(-t_exact)
N = 50
t = numpy.linspace(0, 10.0, N)
delta_t = t[1] - t[0]
U_2 = numpy.empty(t.shape)
U_4 = numpy.empty(t.shape)
U_2[0] = 1.0
U_4[0] = 1.0
for (n, t_n) in enumerate(t[:-1]):
U_2[n+1] = U_2[n] + 0.5 * delta_t * f(t_n, U_2[n])
U_2[n+1] = U_2[n] + delta_t * f(t_n + 0.5 * delta_t, U_2[n+1])
y_1 = U_4[n]
y_2 = U_4[n] + 0.5 * delta_t * f(t_n, y_1)
y_3 = U_4[n] + 0.5 * delta_t * f(t_n + 0.5 * delta_t, y_2)
y_4 = U_4[n] + delta_t * f(t_n + 0.5 * delta_t, y_3)
U_4[n+1] = U_4[n] + delta_t / 6.0 * (f(t_n, y_1) + 2.0 * f(t_n + 0.5 * delta_t, y_2) + 2.0 * f(t_n + 0.5 * delta_t, y_3) + f(t_n + delta_t, y_4))
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t_exact, u_exact, 'k', label="True")
axes.plot(t, U_2, 'ro', label="2-Stage")
axes.plot(t, U_4, 'bo', label="4-Stage")
axes.legend(loc=1)
plt.show()
# Compare accuracy between Euler and RK
f = lambda t, u: -u
u_exact = lambda t: numpy.exp(-t)
t_f = 10.0
num_steps = [2**n for n in xrange(5,12)]
delta_t = numpy.empty(len(num_steps))
error_euler = numpy.empty(len(num_steps))
error_2 = numpy.empty(len(num_steps))
error_4 = numpy.empty(len(num_steps))
for (i, N) in enumerate(num_steps):
t = numpy.linspace(0, t_f, N)
delta_t[i] = t[1] - t[0]
# Compute Euler solution
U_euler = numpy.empty(t.shape)
U_euler[0] = 1.0
for (n, t_n) in enumerate(t[:-1]):
U_euler[n+1] = U_euler[n] + delta_t[i] * f(t_n, U_euler[n])
# Compute 2 and 4-stage
U_2 = numpy.empty(t.shape)
U_4 = numpy.empty(t.shape)
U_2[0] = 1.0
U_4[0] = 1.0
for (n, t_n) in enumerate(t[:-1]):
U_2[n+1] = U_2[n] + 0.5 * delta_t[i] * f(t_n, U_2[n])
U_2[n+1] = U_2[n] + delta_t[i] * f(t_n + 0.5 * delta_t[i], U_2[n+1])
y_1 = U_4[n]
y_2 = U_4[n] + 0.5 * delta_t[i] * f(t_n, y_1)
y_3 = U_4[n] + 0.5 * delta_t[i] * f(t_n + 0.5 * delta_t[i], y_2)
y_4 = U_4[n] + delta_t[i] * f(t_n + 0.5 * delta_t[i], y_3)
U_4[n+1] = U_4[n] + delta_t[i] / 6.0 * (f(t_n, y_1) + 2.0 * f(t_n + 0.5 * delta_t[i], y_2) + 2.0 * f(t_n + 0.5 * delta_t[i], y_3) + f(t_n + delta_t[i], y_4))
# Compute error for each
error_euler[i] = numpy.abs(U_euler[-1] - u_exact(t_f)) / numpy.abs(u_exact(t_f))
error_2[i] = numpy.abs(U_2[-1] - u_exact(t_f)) / numpy.abs(u_exact(t_f))
error_4[i] = numpy.abs(U_4[-1] - u_exact(t_f)) / numpy.abs(u_exact(t_f))
# Plot error vs. delta_t
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.loglog(delta_t, error_euler, 'bo', label='Forward Euler')
axes.loglog(delta_t, error_2, 'ro', label='2-stage')
axes.loglog(delta_t, error_4, 'go', label="4-stage")
order_C = lambda delta_x, error, order: numpy.exp(numpy.log(error) - order * numpy.log(delta_x))
axes.loglog(delta_t, order_C(delta_t[1], error_euler[1], 1.0) * delta_t**1.0, '--b')
axes.loglog(delta_t, order_C(delta_t[1], error_2[1], 2.0) * delta_t**2.0, '--r')
axes.loglog(delta_t, order_C(delta_t[1], error_4[1], 4.0) * delta_t**4.0, '--g')
axes.legend(loc=4)
axes.set_title("Comparison of Errors")
axes.set_xlabel("$\Delta t$")
axes.set_ylabel("$|U(t_f) - u(t_f)|$")
plt.show()
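The fourth-order slope in the plot above can also be confirmed directly: doubling the number of steps should shrink the 4-stage error by roughly $2^4 = 16$. A compact self-contained check on the same test equation $u' = -u$:

```python
import math

def rk4(f, u0, t0, tf, N):
    # classical 4-stage Runge-Kutta with a fixed step
    dt = (tf - t0) / N
    t, u = t0, u0
    for _ in range(N):
        k1 = f(t, u)
        k2 = f(t + dt / 2.0, u + dt / 2.0 * k1)
        k3 = f(t + dt / 2.0, u + dt / 2.0 * k2)
        k4 = f(t + dt, u + dt * k3)
        u += dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        t += dt
    return u

f = lambda t, u: -u
e1 = abs(rk4(f, 1.0, 0.0, 1.0, 10) - math.exp(-1.0))
e2 = abs(rk4(f, 1.0, 0.0, 1.0, 20) - math.exp(-1.0))
print(math.log(e1 / e2, 2))  # ~4: fourth-order convergence
```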
# Use 2-step Adams-Bashforth to compute solution
f = lambda t, u: -u
t_exact = numpy.linspace(0.0, 10.0, 100)
u_exact = numpy.exp(-t_exact)
N = 100
t = numpy.linspace(0, 10.0, N)
delta_t = t[1] - t[0]
U = numpy.empty(t.shape)
# Use RK-2 to start the method
U[0] = 1.0
U[1] = U[0] + 0.5 * delta_t * f(t[0], U[0])
U[1] = U[0] + delta_t * f(t[0] + 0.5 * delta_t, U[1])
for n in xrange(0,len(t)-2):
U[n+2] = U[n + 1] + delta_t / 2.0 * (-f(t[n], U[n]) + 3.0 * f(t[n+1], U[n+1]))
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t_exact, u_exact, 'k', label="True")
axes.plot(t, U, 'ro', label="2-step A-B")
axes.set_title("Adams-Bashforth Method")
axes.set_xlabel("t")
axes.set_ylabel("u(t)")
axes.legend(loc=1)
plt.show()
# Use 2-step Adams-Moulton to compute solution
# u' = - decay u
decay_constant = 1.0
f = lambda t, u: -decay_constant * u
t_exact = numpy.linspace(0.0, 10.0, 100)
u_exact = numpy.exp(-t_exact)
N = 20
t = numpy.linspace(0, 10.0, N)
delta_t = t[1] - t[0]
U = numpy.empty(t.shape)
U[0] = 1.0
U[1] = U[0] + 0.5 * delta_t * f(t[0], U[0])
U[1] = U[0] + delta_t * f(t[0] + 0.5 * delta_t, U[1])
integration_constant = 1.0 / (1.0 + 5.0 * decay_constant * delta_t / 12.0)
for n in xrange(t.shape[0] - 2):
U[n+2] = (U[n+1] + decay_constant * delta_t / 12.0 * (U[n] - 8.0 * U[n+1])) * integration_constant
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t_exact, u_exact, 'k', label="True")
axes.plot(t, U, 'ro', label="2-step A-M")
axes.set_title("Adams-Moulton Method")
axes.set_xlabel("t")
axes.set_ylabel("u(t)")
axes.legend(loc=1)
plt.show()
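The update above comes from solving the implicit 2-step Adams–Moulton formula $U_{n+2} = U_{n+1} + \frac{\Delta t}{12}(5 f_{n+2} + 8 f_{n+1} - f_n)$ analytically for $f(t, u) = -\lambda u$. A quick sketch checking that the closed-form rearrangement satisfies the implicit equation (arbitrary sample values):

```python
lam, dt = 1.0, 0.5
u0, u1 = 1.0, 0.6                 # arbitrary sample values for the two back steps
u2 = (u1 + lam * dt / 12.0 * (u0 - 8.0 * u1)) / (1.0 + 5.0 * lam * dt / 12.0)
f = lambda u: -lam * u
# residual of the implicit AM2 formula at the computed u2
residual = u2 - (u1 + dt / 12.0 * (5.0 * f(u2) + 8.0 * f(u1) - f(u0)))
print(residual)  # ~0: the explicit update solves the implicit formula exactly
```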
# Compare accuracy between RK-2, AB-2 and AM-2
f = lambda t, u: -u
u_exact = lambda t: numpy.exp(-t)
t_f = 10.0
num_steps = [2**n for n in xrange(4,10)]
delta_t = numpy.empty(len(num_steps))
error_rk = numpy.empty(len(num_steps))
error_ab = numpy.empty(len(num_steps))
error_am = numpy.empty(len(num_steps))
for (i, N) in enumerate(num_steps):
t = numpy.linspace(0, t_f, N)
delta_t[i] = t[1] - t[0]
# Compute RK2
U_rk = numpy.empty(t.shape)
U_rk[0] = 1.0
for n in xrange(t.shape[0]-1):
U_rk[n+1] = U_rk[n] + 0.5 * delta_t[i] * f(t[n], U_rk[n])
U_rk[n+1] = U_rk[n] + delta_t[i] * f(t[n] + 0.5 * delta_t[i], U_rk[n+1])
# Compute Adams-Bashforth 2-stage
U_ab = numpy.empty(t.shape)
U_ab[:2] = U_rk[:2]
for n in xrange(t.shape[0] - 2):
U_ab[n+2] = U_ab[n + 1] + delta_t[i] / 2.0 * (-f(t[n], U_ab[n]) + 3.0 * f(t[n+1], U_ab[n+1]))
# Compute Adams-Moulton 2-stage
U_am = numpy.empty(t.shape)
U_am[:2] = U_rk[:2]
decay_constant = 1.0
integration_constant = 1.0 / (1.0 + 5.0 * decay_constant * delta_t[i] / 12.0)
for n in xrange(t.shape[0] - 2):
U_am[n+2] = (U_am[n+1] + decay_constant * delta_t[i] / 12.0 * (U_am[n] - 8.0 * U_am[n+1])) * integration_constant
# Compute error for each
error_rk[i] = numpy.linalg.norm(delta_t[i] * (U_rk - u_exact(t)), ord=1)
error_ab[i] = numpy.linalg.norm(delta_t[i] * (U_ab - u_exact(t)), ord=1)
error_am[i] = numpy.linalg.norm(delta_t[i] * (U_am - u_exact(t)), ord=1)
# Plot error vs. delta_t
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.loglog(delta_t, error_rk, 'bo', label='RK-2')
axes.loglog(delta_t, error_ab, 'ro', label='AB-2')
axes.loglog(delta_t, error_am, 'go', label="AM-2")
order_C = lambda delta_x, error, order: numpy.exp(numpy.log(error) - order * numpy.log(delta_x))
axes.loglog(delta_t, order_C(delta_t[1], error_rk[1], 2.0) * delta_t**2.0, '--b')
axes.loglog(delta_t, order_C(delta_t[1], error_ab[1], 2.0) * delta_t**2.0, '--r')
axes.loglog(delta_t, order_C(delta_t[1], error_am[1], 3.0) * delta_t**3.0, '--g')
axes.legend(loc=4)
axes.set_title("Comparison of Errors")
axes.set_xlabel("$\Delta t$")
axes.set_ylabel("$|U(t) - u(t)|$")
plt.show()
# One-step Adams-Bashforth-Moulton
f = lambda t, u: -u
t_exact = numpy.linspace(0.0, 10.0, 100)
u_exact = numpy.exp(-t_exact)
N = 100
t = numpy.linspace(0, 10.0, N)
delta_t = t[1] - t[0]
U = numpy.empty(t.shape)
U[0] = 1.0
for n in xrange(t.shape[0] - 1):
U[n+1] = U[n] + delta_t * f(t[n], U[n])
U[n+1] = U[n] + 0.5 * delta_t * (f(t[n], U[n]) + f(t[n+1], U[n+1]))
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t_exact, u_exact, 'k', label="True")
axes.plot(t, U, 'ro', label="A-B-M P/C")
axes.set_title("Adams-Bashforth-Moulton P/C Method")
axes.set_xlabel("t")
axes.set_ylabel("u(t)")
axes.legend(loc=1)
plt.show()
# Compare accuracy between RK-2, AB-2 and AM-2
f = lambda t, u: -u
u_exact = lambda t: numpy.exp(-t)
t_f = 10.0
num_steps = [2**n for n in xrange(4,10)]
delta_t = numpy.empty(len(num_steps))
error_ab = numpy.empty(len(num_steps))
error_am = numpy.empty(len(num_steps))
error_pc = numpy.empty(len(num_steps))
for (i, N) in enumerate(num_steps):
t = numpy.linspace(0, t_f, N)
delta_t[i] = t[1] - t[0]
# RK-2 bootstrap for AB and AM
U_rk = numpy.empty(2)
U_rk[0] = 1.0
U_rk[1] = U_rk[0] + 0.5 * delta_t[i] * f(t[0], U_rk[0])
U_rk[1] = U_rk[0] + delta_t[i] * f(t[0] + 0.5 * delta_t[i], U_rk[1])
# Compute Adams-Bashforth 2-stage
U_ab = numpy.empty(t.shape)
U_ab[:2] = U_rk[:2]
for n in xrange(t.shape[0] - 2):
U_ab[n+2] = U_ab[n + 1] + delta_t[i] / 2.0 * (-f(t[n], U_ab[n]) + 3.0 * f(t[n+1], U_ab[n+1]))
# Compute Adams-Moulton 2-stage
U_am = numpy.empty(t.shape)
U_am[:2] = U_ab[:2]
decay_constant = 1.0
integration_constant = 1.0 / (1.0 + 5.0 * decay_constant * delta_t[i] / 12.0)
for n in xrange(t.shape[0] - 2):
U_am[n+2] = (U_am[n+1] + decay_constant * delta_t[i] / 12.0 * (U_am[n] - 8.0 * U_am[n+1])) * integration_constant
# Compute Adams-Bashforth-Moulton
U_pc = numpy.empty(t.shape)
U_pc[0] = 1.0
for n in xrange(t.shape[0] - 1):
U_pc[n+1] = U_pc[n] + delta_t[i] * f(t[n], U_pc[n])
U_pc[n+1] = U_pc[n] + 0.5 * delta_t[i] * (f(t[n], U_pc[n]) + f(t[n+1], U_pc[n+1]))
# Compute error for each
error_ab[i] = numpy.linalg.norm(delta_t[i] * (U_ab - u_exact(t)), ord=1)
error_am[i] = numpy.linalg.norm(delta_t[i] * (U_am - u_exact(t)), ord=1)
error_pc[i] = numpy.linalg.norm(delta_t[i] * (U_pc - u_exact(t)), ord=1)
# Plot error vs. delta_t
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.loglog(delta_t, error_pc, 'bo', label='PC')
axes.loglog(delta_t, error_ab, 'ro', label='AB-2')
axes.loglog(delta_t, error_am, 'go', label="AM-2")
order_C = lambda delta_x, error, order: numpy.exp(numpy.log(error) - order * numpy.log(delta_x))
axes.loglog(delta_t, order_C(delta_t[1], error_pc[1], 2.0) * delta_t**2.0, '--b')
axes.loglog(delta_t, order_C(delta_t[1], error_ab[1], 2.0) * delta_t**2.0, '--r')
axes.loglog(delta_t, order_C(delta_t[1], error_am[1], 3.0) * delta_t**3.0, '--g')
axes.legend(loc=4)
axes.set_title("Comparison of Errors")
axes.set_xlabel("$\Delta t$")
axes.set_ylabel("$|U(t) - u(t)|$")
plt.show()
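The Euler-predict / trapezoid-correct pair used above is Heun's method: a single correction is enough to lift the first-order predictor to second order. A self-contained order check (again on $u' = -u$):

```python
import math

def heun(f, u0, t0, tf, N):
    dt = (tf - t0) / N
    t, u = t0, u0
    for _ in range(N):
        pred = u + dt * f(t, u)                           # predict: forward Euler
        u = u + 0.5 * dt * (f(t, u) + f(t + dt, pred))    # correct: trapezoidal
        t += dt
    return u

f = lambda t, u: -u
e1 = abs(heun(f, 1.0, 0.0, 1.0, 20) - math.exp(-1.0))
e2 = abs(heun(f, 1.0, 0.0, 1.0, 40) - math.exp(-1.0))
print(math.log(e1 / e2, 2))  # ~2: second-order convergence after one correction
```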
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Numerical Methods for Initial Value Problems
Step2: Examples
Step3: Examples
Step4: A similar method can be derived if we consider instead using the second order accurate central difference
Step5: Similar to forward Euler is the backward Euler method which, as you may have guessed, evaluates the function $f$ at the updated time so that
Step6: Another simple implicit method is based on integration using the trapezoidal method. The scheme is
Step7: Error Analysis
Step8: One-Step Errors
Step9: Linear Multi-Step Methods
Step10: Adams-Moulton Methods
Step11: Truncation Error for Multi-Step Methods
Step12: Predictor-Corrector Methods
|
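The forward Euler, backward Euler, and trapezoidal schemes referenced in the step descriptions above are easiest to check on the linear decay test problem u'(t) = -lam*u(t), u(0) = 1, the same model problem the Adams-Moulton cell specializes to via `decay_constant`. For this f the implicit updates can be solved for U[n+1] in closed form. The function name `solve_decay` and the parameter choices below are illustrative, not from the original notebook.

```python
import numpy as np

def solve_decay(lam=1.0, T=1.0, N=100, method="forward_euler"):
    # One-step integrators for u' = -lam*u, u(0) = 1, on [0, T] with N steps.
    dt = T / N
    U = np.empty(N + 1)
    U[0] = 1.0
    for n in range(N):
        if method == "forward_euler":
            U[n + 1] = U[n] + dt * (-lam * U[n])
        elif method == "backward_euler":
            # U[n+1] = U[n] - dt*lam*U[n+1]  =>  solve for U[n+1]
            U[n + 1] = U[n] / (1.0 + lam * dt)
        elif method == "trapezoidal":
            # U[n+1] = U[n] + dt/2 * (-lam*U[n] - lam*U[n+1])
            U[n + 1] = U[n] * (1.0 - lam * dt / 2.0) / (1.0 + lam * dt / 2.0)
        else:
            raise ValueError(method)
    return U

exact = np.exp(-1.0)
for method in ("forward_euler", "backward_euler", "trapezoidal"):
    print(method, abs(solve_decay(method=method)[-1] - exact))
```

The trapezoidal error should be roughly two orders of magnitude smaller than either Euler variant at this step size, consistent with the second-order convergence line plotted in the error-comparison cell.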
5,593
|
<ASSISTANT_TASK:>
Python Code:
baseDir = '/home/nick/notebook/SIPSim/dev/priming_exp/'
workDir = os.path.join(baseDir, 'exp_info')
otuTableFile = '/var/seq_data/priming_exp/data/otu_table.txt'
otuTableSumFile = '/var/seq_data/priming_exp/data/otu_table_summary.txt'
metaDataFile = '/var/seq_data/priming_exp/data/allsample_metadata_nomock.txt'
#otuRepFile = '/var/seq_data/priming_exp/otusn.pick.fasta'
#otuTaxFile = '/var/seq_data/priming_exp/otusn_tax/otusn_tax_assignments.txt'
#genomeDir = '/home/nick/notebook/SIPSim/dev/bac_genome1210/genomes/'
import glob
%load_ext rpy2.ipython
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(gridExtra)
library(fitdistrplus)
if not os.path.isdir(workDir):
os.makedirs(workDir)
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(ends_with('.NA'))
tbl %>% ncol %>% print
tbl[1:4,1:4]
%%R
tbl.h = tbl %>%
gather('sample', 'count', 1:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head
%%R -w 900 -h 400
tbl.h.s = tbl.h %>%
group_by(sample) %>%
summarize(total_count = sum(count)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
ggplot(tbl.h.s, aes(day, total_count, color=rep %>% as.character)) +
geom_point() +
facet_grid(isotope ~ treatment) +
theme(
text = element_text(size=16)
)
%%R
tbl.h.s$sample[grepl('700', tbl.h.s$sample)] %>% as.vector %>% sort
%%R
# bulk soil samples for gradients to simulate
samples.to.use = c(
"X12C.700.14.05.NA",
"X12C.700.28.03.NA",
"X12C.700.45.01.NA",
"X13C.700.14.08.NA",
"X13C.700.28.06.NA",
"X13C.700.45.01.NA"
)
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(ends_with('.NA'))
tbl$OTUId = rownames(tbl)
tbl %>% ncol %>% print
tbl[1:4,1:4]
%%R
tbl.h = tbl %>%
gather('sample', 'count', 1:(ncol(tbl)-1)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head
%%R -w 800
tbl.s = tbl.h %>%
filter(count > 0) %>%
group_by(sample, isotope, treatment, day, rep, fraction) %>%
summarize(n_taxa = n())
ggplot(tbl.s, aes(day, n_taxa, color=rep %>% as.character)) +
geom_point() +
facet_grid(isotope ~ treatment) +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_blank()
)
%%R -w 800 -h 350
# filter to just target samples
tbl.s.f = tbl.s %>% filter(sample %in% samples.to.use)
ggplot(tbl.s.f, aes(day, n_taxa, fill=rep %>% as.character)) +
geom_bar(stat='identity') +
facet_grid(. ~ isotope) +
labs(y = 'Number of taxa') +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_blank()
)
%%R
message('Bulk soil total observed richness: ')
tbl.s.f %>% select(-fraction) %>% as.data.frame %>% print
%%R -i otuTableFile
# loading OTU table
tbl = read.delim(otuTableFile, sep='\t') %>%
select(-ends_with('.NA'))
tbl.h = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head
%%R
# basename of fractions
samples.to.use.base = gsub('\\.[0-9]+\\.NA', '', samples.to.use)
samps = tbl.h$sample %>% unique
fracs = sapply(samples.to.use.base, function(x) grep(x, samps, value=TRUE))
for (n in names(fracs)){
n.frac = length(fracs[[n]])
cat(n, '-->', 'Number of fraction samples: ', n.frac, '\n')
}
%%R
# function for getting all OTUs in a sample
n.OTUs = function(samples, otu.long){
otu.long.f = otu.long %>%
filter(sample %in% samples,
count > 0)
n.OTUs = otu.long.f$OTUId %>% unique %>% length
return(n.OTUs)
}
num.OTUs = lapply(fracs, n.OTUs, otu.long=tbl.h)
num.OTUs = do.call(rbind, num.OTUs) %>% as.data.frame
colnames(num.OTUs) = c('n_taxa')
num.OTUs$sample = rownames(num.OTUs)
num.OTUs
%%R
tbl.s.f %>% as.data.frame
%%R
# joining with bulk soil sample summary table
num.OTUs$data = 'fractions'
tbl.s.f$data = 'bulk_soil'
tbl.j = rbind(num.OTUs,
tbl.s.f %>% ungroup %>% select(sample, n_taxa, data)) %>%
mutate(isotope = gsub('X|\\..+', '', sample),
sample = gsub('\\.[0-9]+\\.NA', '', sample))
tbl.j
%%R -h 300 -w 800
ggplot(tbl.j, aes(sample, n_taxa, fill=data)) +
geom_bar(stat='identity', position='dodge') +
facet_grid(. ~ isotope, scales='free_x') +
labs(y = 'Number of OTUs') +
theme(
text = element_text(size=16)
# axis.text.x = element_text(angle=90)
)
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(-ends_with('.NA'))
tbl %>% ncol %>% print
tbl[1:4,1:4]
%%R
tbl.h = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head
%%R -h 400
tbl.h.s = tbl.h %>%
group_by(sample) %>%
summarize(total_seqs = sum(count))
p = ggplot(tbl.h.s, aes(total_seqs)) +
theme_bw() +
theme(
text = element_text(size=16)
)
p1 = p + geom_histogram(binwidth=200)
p2 = p + geom_density()
grid.arrange(p1,p2,ncol=1)
%%R -w 700 -h 350
plotdist(tbl.h.s$total_seqs)
%%R -w 450 -h 400
descdist(tbl.h.s$total_seqs, boot=1000)
%%R
f.n = fitdist(tbl.h.s$total_seqs, 'norm')
f.ln = fitdist(tbl.h.s$total_seqs, 'lnorm')
f.ll = fitdist(tbl.h.s$total_seqs, 'logis')
#f.c = fitdist(tbl.s$count, 'cauchy')
f.list = list(f.n, f.ln, f.ll)
plot.legend = c('normal', 'log-normal', 'logistic')
par(mfrow = c(2,1))
denscomp(f.list, legendtext=plot.legend)
qqcomp(f.list, legendtext=plot.legend)
%%R
gofstat(list(f.n, f.ln, f.ll), fitnames=plot.legend)
%%R
summary(f.ln)
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(-ends_with('.NA')) %>%
select(-starts_with('X0MC'))
tbl = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
mutate(sample = gsub('^X', '', sample))
tbl %>% head
%%R
# summarize
tbl.s = tbl %>%
group_by(sample) %>%
summarize(total_count = sum(count))
tbl.s %>% head(n=3)
%%R -i metaDataFile
tbl.meta = read.delim(metaDataFile, sep='\t')
tbl.meta %>% head(n=3)
%%R -w 700
tbl.j = inner_join(tbl.s, tbl.meta, c('sample' = 'Sample'))
ggplot(tbl.j, aes(Density, total_count, color=rep)) +
geom_point() +
facet_grid(Treatment ~ Day)
%%R -w 600 -h 350
ggplot(tbl.j, aes(Density, total_count)) +
geom_point(aes(color=Treatment)) +
geom_smooth(method='lm') +
labs(x='Buoyant density', y='Total sequences') +
theme_bw() +
theme(
text = element_text(size=16)
)
%%R
tbl.s = tbl %>%
filter(count > 0) %>%
group_by(sample) %>%
summarize(n_taxa = sum(count > 0))
tbl.j = inner_join(tbl.s, tbl.meta, c('sample' = 'Sample'))
tbl.j %>% head(n=3)
%%R -w 900 -h 600
ggplot(tbl.j, aes(Density, n_taxa, fill=rep, color=rep)) +
#geom_area(stat='identity', alpha=0.5, position='dodge') +
geom_point() +
geom_line() +
labs(x='Buoyant density', y='Number of taxa') +
facet_grid(Treatment ~ Day) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
%%R -i otuTableFile
# loading OTU table
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(matches('OTUId'), ends_with('.NA'))
tbl %>% ncol %>% print
tbl[1:4,1:4]
%%R
# long table format w/ selecting samples of interest
tbl.h = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F) %>%
filter(sample %in% samples.to.use,
count > 0)
tbl.h %>% head
%%R
message('Number of samples: ', tbl.h$sample %>% unique %>% length)
message('Number of OTUs: ', tbl.h$OTUId %>% unique %>% length)
%%R
tbl.hs = tbl.h %>%
group_by(OTUId) %>%
summarize(
total_count = sum(count),
mean_count = mean(count),
median_count = median(count),
sd_count = sd(count)
) %>%
filter(total_count > 0)
tbl.hs %>% head
%%R -i workDir
setwd(workDir)
samps = tbl.h$sample %>% unique %>% as.vector
for(samp in samps){
outFile = paste(c(samp, 'OTU.txt'), collapse='_')
tbl.p = tbl.h %>%
filter(sample == samp, count > 0)
write.table(tbl.p, outFile, sep='\t', quote=F, row.names=F)
message('Table written: ', outFile)
message(' Number of OTUs: ', tbl.p %>% nrow)
}
p = os.path.join(workDir, '*_OTU.txt')
files = glob.glob(p)
baseDir = os.path.split(workDir)[0]
# note: str.rstrip strips a *set* of characters, not a suffix, so use
# replace() to drop the literal '.NA_OTU.txt' ending instead
newDirs = [os.path.split(x)[1].replace('.NA_OTU.txt', '') for x in files]
newDirs = [os.path.join(baseDir, x) for x in newDirs]
for newDir,f in zip(newDirs, files):
if not os.path.isdir(newDir):
print 'Making new directory: {}'.format(newDir)
os.makedirs(newDir)
else:
print 'Directory exists: {}'.format(newDir)
# symlinking file
linkPath = os.path.join(newDir, os.path.split(f)[1])
if not os.path.islink(linkPath):
os.symlink(f, linkPath)
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(matches('OTUId'), ends_with('.NA'))
tbl %>% ncol %>% print
tbl[1:4,1:4]
%%R
# long table format w/ selecting samples of interest
tbl.h = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F) %>%
filter(sample %in% samples.to.use,
count > 0)
tbl.h %>% head
%%R
# ranks of relative abundances
tbl.r = tbl.h %>%
group_by(sample) %>%
mutate(perc_rel_abund = count / sum(count) * 100,
rank = row_number(-perc_rel_abund)) %>%
unite(day_rep, day, rep, sep='-')
tbl.r %>% as.data.frame %>% head(n=3)
%%R -w 900 -h 350
ggplot(tbl.r, aes(rank, perc_rel_abund)) +
geom_point() +
# labs(x='Buoyant density', y='Number of taxa') +
facet_wrap(~ day_rep) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(-ends_with('.NA')) %>%
select(-starts_with('X0MC'))
tbl = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
mutate(sample = gsub('^X', '', sample))
tbl %>% head
%%R
tbl.ar = tbl %>%
#mutate(fraction = gsub('.+\\.', '', sample) %>% as.numeric) %>%
#mutate(treatment = gsub('(.+)\\..+', '\\1', sample)) %>%
group_by(sample) %>%
mutate(rel_abund = count / sum(count)) %>%
summarize(abund_range = max(rel_abund) - min(rel_abund)) %>%
ungroup() %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.ar %>% head(n=3)
%%R -w 800
tbl.ar = tbl.ar %>%
mutate(fraction = as.numeric(fraction))
ggplot(tbl.ar, aes(fraction, abund_range, fill=rep, color=rep)) +
geom_point() +
geom_line() +
labs(x='Buoyant density', y='Range of relative abundance values') +
facet_grid(treatment ~ day) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
%%R -i otuTableFile
# loading OTU table
tbl = read.delim(otuTableFile, sep='\t') %>%
select(-ends_with('.NA'))
tbl.h = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head
%%R
# basename of fractions
samples.to.use.base = gsub('\\.[0-9]+\\.NA', '', samples.to.use)
samps = tbl.h$sample %>% unique
fracs = sapply(samples.to.use.base, function(x) grep(x, samps, value=TRUE))
for (n in names(fracs)){
n.frac = length(fracs[[n]])
cat(n, '-->', 'Number of fraction samples: ', n.frac, '\n')
}
%%R
# function for getting mean OTU abundance from all fractions
OTU.abund = function(samples, otu.long){
otu.rel.abund = otu.long %>%
filter(sample %in% samples,
count > 0) %>%
ungroup() %>%
group_by(sample) %>%
mutate(total_count = sum(count)) %>%
ungroup() %>%
mutate(perc_abund = count / total_count * 100) %>%
group_by(OTUId) %>%
summarize(mean_perc_abund = mean(perc_abund),
median_perc_abund = median(perc_abund),
max_perc_abund = max(perc_abund))
return(otu.rel.abund)
}
## calling function
otu.rel.abund = lapply(fracs, OTU.abund, otu.long=tbl.h)
otu.rel.abund = do.call(rbind, otu.rel.abund) %>% as.data.frame
otu.rel.abund$sample = gsub('\\.[0-9]+$', '', rownames(otu.rel.abund))
otu.rel.abund %>% head
%%R -h 600 -w 900
# plotting
otu.rel.abund.l = otu.rel.abund %>%
gather('abund_stat', 'value', mean_perc_abund, median_perc_abund, max_perc_abund)
otu.rel.abund.l$OTUId = reorder(otu.rel.abund.l$OTUId, -otu.rel.abund.l$value)
ggplot(otu.rel.abund.l, aes(OTUId, value, color=abund_stat)) +
geom_point(shape='O', alpha=0.7) +
scale_y_log10() +
facet_grid(abund_stat ~ sample) +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_blank(),
legend.position = 'none'
)
%%R -i workDir
setwd(workDir)
# each sample is a file
samps = otu.rel.abund.l$sample %>% unique %>% as.vector
for(samp in samps){
outFile = paste(c(samp, 'frac_OTU.txt'), collapse='_')
tbl.p = otu.rel.abund %>%
filter(sample == samp, mean_perc_abund > 0)
write.table(tbl.p, outFile, sep='\t', quote=F, row.names=F)
cat('Table written: ', outFile, '\n')
cat(' Number of OTUs: ', tbl.p %>% nrow, '\n')
}
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Init
Step2: Loading OTU table (filter to just bulk samples)
Step3: Which gradient(s) to simulate?
Step4: Notes
Step5: Total richness of starting (bulk-soil) community
Step6: Number of taxa in all fractions corresponding to each bulk soil sample
Step7: Distribution of total sequences per fraction
Step8: Distribution fitting
Step9: Notes
Step10: Loading metadata
Step11: Determining association
Step12: Number of taxa along the gradient
Step13: Notes
Step14: For each sample, writing a table of OTU_ID and count
Step15: Making directories for simulations
Step16: Rank-abundance distribution for each sample
Step17: Taxon abundance range for each sample-fraction
Step18: Total abundance of each target taxon
Step19: For each sample, writing a table of OTU_ID and count
|
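The `fitdist(..., 'lnorm')` call in the distribution-fitting step has a simple closed form: the maximum-likelihood estimates of the two-parameter log-normal are just the mean and standard deviation of the logged counts, which is essentially what fitdistrplus reports as `meanlog`/`sdlog`. A hedged Python sketch of the same comparison against a normal fit; the counts are synthetic stand-ins for `tbl.h.s$total_seqs`, which is not available outside the notebook.

```python
import numpy as np

# Synthetic per-sample sequence totals, drawn log-normal on purpose
rng = np.random.default_rng(0)
counts = rng.lognormal(mean=9.0, sigma=0.5, size=500)

# Two-parameter log-normal MLE: mean/sd of log(counts)
log_counts = np.log(counts)
meanlog, sdlog = log_counts.mean(), log_counts.std()

def lognorm_loglik(x, mu, sigma):
    # log-density of LogNormal(mu, sigma), summed over the sample
    return np.sum(-np.log(x * sigma * np.sqrt(2 * np.pi))
                  - (np.log(x) - mu) ** 2 / (2 * sigma ** 2))

def norm_loglik(x, mu, sigma):
    return np.sum(-np.log(sigma * np.sqrt(2 * np.pi))
                  - (x - mu) ** 2 / (2 * sigma ** 2))

ll_lnorm = lognorm_loglik(counts, meanlog, sdlog)
ll_norm = norm_loglik(counts, counts.mean(), counts.std())
print(meanlog, sdlog, ll_lnorm > ll_norm)
```

On skewed count data like this, the log-normal log-likelihood beats the normal one, mirroring the `gofstat` comparison in the R cells.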
5,594
|
<ASSISTANT_TASK:>
Python Code:
from openhunt.mordorutils import *
spark = get_spark()
mordor_file = "https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/lateral_movement/host/empire_psexec_dcerpc_tcp_svcctl.zip"
registerMordorSQLTable(spark, mordor_file, "mordorTable")
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, SubjectUserName, ServiceName, ServiceType, ServiceStartType, ServiceAccount
FROM mordorTable
WHERE LOWER(Channel) = "security" AND EventID = 4697
'''
)
df.show(10,False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download & Process Mordor Dataset
Step2: Analytic I
|
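The analytic above boils down to one predicate: Security-channel events with EventID 4697 ("a service was installed in the system"). The same filter expressed as plain Python over dicts can be handy for spot-checking a handful of exported events without spinning up Spark; the sample events below are made up for illustration.

```python
def is_service_install(event):
    # Security channel + EventID 4697 = service installation
    return (event.get("Channel", "").lower() == "security"
            and event.get("EventID") == 4697)

events = [
    {"Channel": "Security", "EventID": 4697, "ServiceName": "PSEXESVC"},
    {"Channel": "Security", "EventID": 4624, "ServiceName": None},
    {"Channel": "Microsoft-Windows-Sysmon/Operational", "EventID": 1},
]
hits = [e for e in events if is_service_install(e)]
print(hits)
```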
5,595
|
<ASSISTANT_TASK:>
Python Code:
!pip install cython https://github.com/SciTools/cartopy/archive/v0.18.0b1.zip
import cartopy.io.img_tiles as cimgt
request = cimgt.OSM()
import cartopy.crs as ccrs
carto_proj = ccrs.GOOGLE_MERCATOR
%matplotlib notebook
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1, projection=carto_proj)
# Add mapbox basemap via cartopy
request = cimgt.OSM()
tile_image = ax.add_image(request, 3)
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1, projection=carto_proj)
ax.set_extent([-123.0, -122.0, 37.5, 38.0])
tile_image = ax.add_image(request, 10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Selecting a Tileset
Step2: Projection
Step3: Rendering Tiles
Step4: To render a specific region, call set_extent with the geodetic (lat/lon) bounds of the region, before calling add_image. You'll also want to update your zoom level to something appropriate for those bounds.
|
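For intuition about the projection step, a sketch of what `ccrs.GOOGLE_MERCATOR` does under the hood: the spherical Web-Mercator forward projection (EPSG:3857), which maps lon/lat degrees to metres on a sphere of radius 6378137 m. This is a simplified stand-in, not cartopy's implementation.

```python
import math

R = 6378137.0  # Web-Mercator sphere radius in metres

def web_mercator(lon_deg, lat_deg):
    # Spherical Mercator forward projection
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

# the San Francisco extent used above: [-123, -122] x [37.5, 38.0]
x0, y0 = web_mercator(-123.0, 37.5)
x1, y1 = web_mercator(-122.0, 38.0)
print(x0, y0, x1, y1)
```

`set_extent` takes the geodetic bounds and performs this conversion internally before requesting the tiles that cover the projected rectangle.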
5,596
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from astropy.table import Table
import numpy as np
import matplotlib.pyplot as plt
data = Table.read('GTR-ADM-QSO-ir-testhighz_findbw_lup_2016_starclean.fits')
# X is in the format need for all of the sklearn tools, it just has the colors
# X = np.vstack([ data['ug'], data['gr'], data['ri'], data['iz'], data['zs1'], data['s1s2'], data['imag'], data['extinctu']]).T
# Don't use imag and extinctu since they don't contribute much to the accuracy and they add a lot to the data volume.
X = np.vstack([ data['ug'], data['gr'], data['ri'], data['iz'], data['zs1'], data['s1s2'] ]).T
y = np.array(data['labels'])
# For algorithms that need scaled data:
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X) # Use the full training set now
XStrain = scaler.transform(X)
# SVM
from sklearn.svm import SVC
svm = SVC(random_state=42)
svm.fit(XStrain,y)
# Bagging
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
bag = BaggingClassifier(KNeighborsClassifier(n_neighbors=7), max_samples=0.5, max_features=1.0, random_state=42)
bag.fit(XStrain, y)
#data2 = Table.read('GTR-ADM-QSO-ir_good_test_2016n.fits')
data2 = Table.read('GTR-ADM-QSO-ir_good_test_2016.fits')
print data2.keys()
# Not sure why I need to do this because there don't appear to be any unfilled columns
# but the code segment below won't run without it.
# Only need to do for the file with imag and extinctu
# data2 = data2.filled()
ramask = ( ( (data2['ra']>=300.0) & (data2['ra']<=360.0) ) | ( (data2['ra']>=0.0) & (data2['ra']<=60.0) ) )
decmask = ((data2['dec']>=-1.5) & (data2['dec']<=1.5))
dataS82 = data2[ramask & decmask]
print len(dataS82)
#Xtest = np.vstack([dataS82['ug'], dataS82['gr'], dataS82['ri'], dataS82['iz'], dataS82['zs1'], dataS82['s1s2'], dataS82['i'], dataS82['extinctu']]).T
Xtest = np.vstack([dataS82['ug'], dataS82['gr'], dataS82['ri'], dataS82['iz'], dataS82['zs1'], dataS82['s1s2'] ]).T
XStest = scaler.transform(Xtest)
from dask import compute, delayed
def processSVM(Xin):
return svm.predict(Xin)
# Create dask objects
# Reshape is necessary because the format of x as drawn from Xtest
# is not what sklearn wants.
dobjsSVM = [delayed(processSVM)(x.reshape(1,-1)) for x in XStest]
import dask.threaded
ypredSVM = compute(*dobjsSVM, get=dask.threaded.get)
ypredSVM = np.array(ypredSVM).reshape(1,-1)[0]
from dask import compute, delayed
def processBAG(Xin):
return bag.predict(Xin)
# Create dask objects
# Reshape is necessary because the format of x as drawn from Xtest
# is not what sklearn wants.
dobjsBAG = [delayed(processBAG)(x.reshape(1,-1)) for x in XStest]
import dask.threaded
ypredBAG = compute(*dobjsBAG, get=dask.threaded.get)
ypredBAG = np.array(ypredBAG).reshape(1,-1)[0]
dataS82['ypredSVM'] = ypredSVM
dataS82['ypredBAG'] = ypredBAG
#dataS82.write('GTR-ADM-QSO-ir_good_test_2016_Stripe82svm.fits', format='fits')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Second, load the test data
Step2: I had some problems with GTR-ADM-QSO-ir_good_test_2016n.fits because it thought that there were blank entries among the attributes. There actually weren't (as far as I could tell), but I found that I could use filled to fix the problem. However, that just caused problems later!
Step3: Taking too long to do all the objects, so just do Stripe 82, which is all that we really care about anyway.
Step4: Quasar Candidates
Step5: Now write results to output file. Didn't do bagging b/c takes too long. See SpIESHighzQuasarsS82all.py which I ran on dirac.
|
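One design note on the dask cells above: creating one delayed task per test row means `predict` is called once per object, paying Python and scheduler overhead N times. sklearn predictors are vectorised, so predicting whole chunks of the test matrix is usually far faster and needs no dask at all. A hedged sketch; `DummyModel` is a stand-in so the example runs without sklearn, and the chunk size is arbitrary.

```python
import numpy as np

class DummyModel(object):
    def predict(self, X):
        # toy classifier: label 1 if the first colour is positive
        return (np.asarray(X)[:, 0] > 0).astype(int)

def predict_in_chunks(model, X, chunk_size=1000):
    # call the vectorised predict() once per chunk, not once per row
    out = [model.predict(X[i:i + chunk_size])
           for i in range(0, len(X), chunk_size)]
    return np.concatenate(out)

X = np.random.RandomState(0).randn(2500, 6)
y_chunked = predict_in_chunks(DummyModel(), X, chunk_size=1000)
print(y_chunked.shape)
```

The same pattern applies to the SVM and bagging classifiers: `svm.predict(XStest)` on the whole scaled matrix (or on large chunks of it) replaces the per-row delayed calls.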
5,597
|
<ASSISTANT_TASK:>
Python Code:
reset_start_time(O.map, title='map') # alias is "select"
# warming up:
d = subs(O.from_((1, 2 , 3)).map(lambda x: x * 2))
rst(O.pluck, title='pluck')
d = subs(O.from_([{'x': 1, 'y': 2}, {'x': 3, 'y': 4}]).pluck('y'))
class Coord:
def __init__(self, x, y):
self.x = x
self.y = y
rst(title='pluck_attr')
d = subs(O.from_([Coord(1, 2), Coord(3, 4)]).pluck_attr('y'))
rst(O.flat_map)
stream = O.range(1, 2)\
.flat_map(lambda x: O.range(x, 2)) # alias: flat_map
d = subs(stream)
rst() # from an array
s1 = O.from_(('a', 'b', 'c'))
d = subs(s1.flat_map(lambda x: x))
d = subs(s1.flat_map(lambda x, i: (x, i)))
#d = subs(O.from_(('a', 'b', 'c')).flat_map(lambda x, i: '%s%s' % (x, i))) # ident, a string is iterable
header('using a result mapper')
def res_sel(*a):
    # in contrast to the RxJS example I get only 3 parameters, see output
return '-'.join([str(s) for s in a])
# for every el of the original stream we get *additional* two elements: the el and its index:
d = subs(s1.flat_map(lambda x, i: (x, i) , res_sel))
# ident, flat_map flattens the inner stream:
d = subs(s1.flat_map(lambda x, i: O.from_((x, i)), res_sel))
rst(O.flat_map_latest) # alias: select_switch
d = subs(O.range(1, 2).flat_map_latest(lambda x: O.range(x, 2)))
# maybe better to understand: A, B, C are emitted always more recent, then the inner streams' elements
d = subs(O.from_(('A', 'B', 'C')).flat_map_latest(
lambda x, i: O.from_(('%s%s-a' % (x, i),
'%s%s-b' % (x, i),
'%s%s-c' % (x, i),
))))
# with emission delays: Now the inner's is faster:
outer = O.from_marbles('A--B--C|').to_blocking()
inner = O.from_marbles('a-b-c|').to_blocking()
# the inner .map is to show also outer's value
d = subs(outer.flat_map_latest(lambda X: inner.map(lambda x: '%s%s' % (X, x))))
rst(O.for_in)
abc = O.from_marbles('a-b|').to_blocking()
# abc times 3, via:
d = subs(O.for_in([1, 2, 3],
lambda i: abc.map(
# just to get the results of array and stream:
lambda letter: '%s%s' % (letter, i) )))
sleep(0.5)
# we can also for_in from an observable.
# TODO: Dont' understand the output though - __doc__ says only arrays.
d = subs(O.for_in(O.from_((1, 2, 3)),
lambda i: abc.map(lambda letter: '%s%s' % (letter, i) )).take(2))
rst(O.many_select)
stream = O.from_marbles('a-b-c|')
# TODO: more use cases
d = subs(stream.many_select(lambda x: x.first()).merge_all())
rst(O.scan)
s = O.from_marbles("1-2-3-4---5").to_blocking()
d = subs(s.scan(lambda x, y: int(x) + int(y), seed=10000))
rst(O.timestamp)
# the timestamps are objects, not dicts:
d = subs(marble_stream('a-b-c|').timestamp().pluck_attr('timestamp'))
rst(O.time_interval)
d = subs(marble_stream('a-b--c|').time_interval().map(lambda x: x.interval))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ...by emitting all of the items emitted by corresponding Observables
Step2: flat_map_latest(select_switch)
Step3: concat_map
Step4: many_select
Step5: ... based on ALL of the items that preceded them scan
Step6: ... by attaching a timestamp to them timestamp
Step7: ... into an indicator of the amount of time that lapsed before the emission of the item time_interval
|
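The `scan` semantics in the cell above (emit every intermediate accumulator, starting from a seed) have a direct stdlib analogue in `itertools.accumulate`, which makes the marble example "1-2-3-4---5" with seed 10000 easy to check offline. Prepending the seed and dropping the first emission reproduces what Rx's `scan` yields.

```python
from itertools import accumulate

values = [1, 2, 3, 4, 5]
# accumulate includes the seed itself as the first element; Rx scan does not
running = list(accumulate([10000] + values))[1:]
print(running)  # [10001, 10003, 10006, 10010, 10015]
```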
5,598
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division
import gym
import numpy as np
import random
import tensorflow as tf
import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow.contrib.slim as slim
env = gym.make('CartPole-v0')
class experience_buffer():
def __init__(self, buffer_size = 10000):
self.buffer = []
self.buffer_size = buffer_size
def add(self,experience):
if len(self.buffer) + len(experience) >= self.buffer_size:
self.buffer[0:(len(experience)+len(self.buffer))-self.buffer_size] = []
self.buffer.extend(experience)
def sample(self,size):
return np.reshape(np.array(random.sample(self.buffer,size)),[size,5])
def updateTargetGraph(tfVars,tau):
total_vars = len(tfVars)
op_holder = []
for idx,var in enumerate(tfVars[0:total_vars//2]):
op_holder.append(tfVars[idx+total_vars//2].assign((var.value()*tau) + ((1-tau)*tfVars[idx+total_vars//2].value())))
return op_holder
def updateTarget(op_holder,sess):
for op in op_holder:
sess.run(op)
class Q_Network():
def __init__(self):
#These lines establish the feed-forward part of the network used to choose actions
self.inputs = tf.placeholder(shape=[None,4],dtype=tf.float32)
self.Temp = tf.placeholder(shape=None,dtype=tf.float32)
self.keep_per = tf.placeholder(shape=None,dtype=tf.float32)
hidden = slim.fully_connected(self.inputs,64,activation_fn=tf.nn.tanh,biases_initializer=None)
hidden = slim.dropout(hidden,self.keep_per)
self.Q_out = slim.fully_connected(hidden,2,activation_fn=None,biases_initializer=None)
self.predict = tf.argmax(self.Q_out,1)
self.Q_dist = tf.nn.softmax(self.Q_out/self.Temp)
#Below we obtain the loss by taking the sum of squares difference between the target and prediction Q values.
self.actions = tf.placeholder(shape=[None],dtype=tf.int32)
self.actions_onehot = tf.one_hot(self.actions,2,dtype=tf.float32)
self.Q = tf.reduce_sum(tf.multiply(self.Q_out, self.actions_onehot), reduction_indices=1)
self.nextQ = tf.placeholder(shape=[None],dtype=tf.float32)
loss = tf.reduce_sum(tf.square(self.nextQ - self.Q))
trainer = tf.train.GradientDescentOptimizer(learning_rate=0.0005)
self.updateModel = trainer.minimize(loss)
# Set learning parameters
exploration = "e-greedy" #Exploration method. Choose between: greedy, random, e-greedy, boltzmann, bayesian.
y = .99 #Discount factor.
num_episodes = 20000 #Total number of episodes to train network for.
tau = 0.001 #Amount to update target network at each step.
batch_size = 32 #Size of training batch
startE = 1 #Starting chance of random action
endE = 0.1 #Final chance of random action
anneling_steps = 200000 #How many steps of training to reduce startE to endE.
pre_train_steps = 50000 #Number of steps used before training updates begin.
tf.reset_default_graph()
q_net = Q_Network()
target_net = Q_Network()
init = tf.initialize_all_variables()
trainables = tf.trainable_variables()
targetOps = updateTargetGraph(trainables,tau)
myBuffer = experience_buffer()
#create lists to contain total rewards and steps per episode
jList = []
jMeans = []
rList = []
rMeans = []
with tf.Session() as sess:
sess.run(init)
updateTarget(targetOps,sess)
e = startE
stepDrop = (startE - endE)/anneling_steps
total_steps = 0
for i in range(num_episodes):
s = env.reset()
rAll = 0
d = False
j = 0
while j < 999:
j+=1
if exploration == "greedy":
#Choose an action with the maximum expected value.
a,allQ = sess.run([q_net.predict,q_net.Q_out],feed_dict={q_net.inputs:[s],q_net.keep_per:1.0})
a = a[0]
if exploration == "random":
#Choose an action randomly.
a = env.action_space.sample()
if exploration == "e-greedy":
#Choose an action by greedily (with e chance of random action) from the Q-network
if np.random.rand(1) < e or total_steps < pre_train_steps:
a = env.action_space.sample()
else:
a,allQ = sess.run([q_net.predict,q_net.Q_out],feed_dict={q_net.inputs:[s],q_net.keep_per:1.0})
a = a[0]
if exploration == "boltzmann":
#Choose an action probabilistically, with weights relative to the Q-values.
Q_d,allQ = sess.run([q_net.Q_dist,q_net.Q_out],feed_dict={q_net.inputs:[s],q_net.Temp:e,q_net.keep_per:1.0})
a = np.random.choice(Q_d[0],p=Q_d[0])
a = np.argmax(Q_d[0] == a)
if exploration == "bayesian":
#Choose an action using a sample from a dropout approximation of a bayesian q-network.
a,allQ = sess.run([q_net.predict,q_net.Q_out],feed_dict={q_net.inputs:[s],q_net.keep_per:(1-e)+0.1})
a = a[0]
#Get new state and reward from environment
s1,r,d,_ = env.step(a)
myBuffer.add(np.reshape(np.array([s,a,r,s1,d]),[1,5]))
if e > endE and total_steps > pre_train_steps:
e -= stepDrop
if total_steps > pre_train_steps and total_steps % 5 == 0:
#We use Double-DQN training algorithm
trainBatch = myBuffer.sample(batch_size)
Q1 = sess.run(q_net.predict,feed_dict={q_net.inputs:np.vstack(trainBatch[:,3]),q_net.keep_per:1.0})
Q2 = sess.run(target_net.Q_out,feed_dict={target_net.inputs:np.vstack(trainBatch[:,3]),target_net.keep_per:1.0})
end_multiplier = -(trainBatch[:,4] - 1)
doubleQ = Q2[range(batch_size),Q1]
targetQ = trainBatch[:,2] + (y*doubleQ * end_multiplier)
_ = sess.run(q_net.updateModel,feed_dict={q_net.inputs:np.vstack(trainBatch[:,0]),q_net.nextQ:targetQ,q_net.keep_per:1.0,q_net.actions:trainBatch[:,1]})
updateTarget(targetOps,sess)
rAll += r
s = s1
total_steps += 1
if d == True:
break
jList.append(j)
rList.append(rAll)
if i % 100 == 0 and i != 0:
r_mean = np.mean(rList[-100:])
j_mean = np.mean(jList[-100:])
if exploration == 'e-greedy':
print("Mean Reward: " + str(r_mean) + " Total Steps: " + str(total_steps) + " e: " + str(e))
if exploration == 'boltzmann':
print("Mean Reward: " + str(r_mean) + " Total Steps: " + str(total_steps) + " t: " + str(e))
if exploration == 'bayesian':
print("Mean Reward: " + str(r_mean) + " Total Steps: " + str(total_steps) + " p: " + str(e))
if exploration == 'random' or exploration == 'greedy':
print("Mean Reward: " + str(r_mean) + " Total Steps: " + str(total_steps))
rMeans.append(r_mean)
jMeans.append(j_mean)
    print("Percent of successful episodes: " + str(sum(rList)/num_episodes) + "%")
plt.plot(rMeans)
plt.plot(jMeans)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the environment
Step2: The Deep Q-Network
Step3: Implementing the network itself
Step4: Training the network
Step5: Some statistics on network performance
|
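The target-network machinery in `updateTargetGraph`/`updateTarget` above reduces to one rule: each target weight moves a fraction tau toward the online weight, theta_target <- tau*theta + (1-tau)*theta_target. A minimal NumPy sketch of that soft update, detached from TensorFlow, with toy weight lists.

```python
import numpy as np

def soft_update(online, target, tau):
    # blend each target tensor a fraction tau toward its online counterpart
    return [tau * w + (1.0 - tau) * t for w, t in zip(online, target)]

online = [np.ones((2, 2))]   # stand-in for the trained network's weights
target = [np.zeros((2, 2))]  # stand-in for the target network's weights
for _ in range(5):
    target = soft_update(online, target, tau=0.5)
print(target[0][0, 0])  # 1 - 0.5**5 = 0.96875
```

With the notebook's tau of 0.001 the target network trails the online network very slowly, which is what stabilises the Double-DQN targets computed in the training loop.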
5,599
|
<ASSISTANT_TASK:>
Python Code:
# location is relative or absolute path to the file
sne = ans.snanaSims.SnanaSims.fromSNANAfileroot('snana_fits', location=ans.example_data, n=1)
sn = ans.snanaSims.SnanaSims.reformat_SNANASN(sne.snList[0])
ans.snanaSims.SnanaSims.matchSNANAbandnamesinregistry()
sn[:5]
sn.meta
model = sncosmo.Model(source='salt2-extended')
model.set(z=sn.meta['REDSHIFT_FINAL'])
fit_results, fit_model = sncosmo.fit_lc(sn, model, vparam_names=['t0', 'x0', 'x1', 'c'])
print fit_model
fig = sncosmo.plot_lc(sn, model=fit_model, band=['u', 'g', 'r', 'i', 'z', 'Y'], zp=25., zpsys='ab')
sncosmo.chisq(sn, fit_model)/ fit_results.ndof
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pick out the very first SN and reformat it to look the way sncosmo expects SN to look
Step2: Make sure SNCosmo can understand band names
Step3: We can look at the information in the light curve, or at the metadata
Step4: Fitting SN to models
|
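The last cell reports `chisq / ndof`, the reduced chi-square of the fit. For calibration, a quick synthetic check of what a "good" value looks like: residuals drawn from the assumed flux errors give a reduced chi-square near 1. The residuals and the 200-point light curve here are made up; only the four varied parameters (t0, x0, x1, c) come from the fit above.

```python
import numpy as np

rng = np.random.RandomState(42)
flux_err = np.full(200, 0.1)
residuals = rng.normal(0.0, flux_err)  # data consistent with the errors

n_params = 4  # t0, x0, x1, c as in the sncosmo fit
chisq = np.sum((residuals / flux_err) ** 2)
ndof = residuals.size - n_params
print(chisq / ndof)
```

Values far above 1 indicate underestimated errors or a poor model; far below 1 suggests overestimated errors.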