| text_prompt (string, 168 to 30.3k characters) | code_prompt (string, 67 to 124k characters) |
|---|---|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Unicode strings
Step2: The tf.string data type
Step3: Because a tf.string tensor treats byte strings as atomic units, it can hold byte strings of varying lengths. The string length is not included in the tensor dimensions.
Step4: Note
Step5: Converting between representations
Step6: Batch dimensions
Step7: You can use the tf.RaggedTensor directly, convert it to a padded tf.Tensor, or convert it to a tf.SparseTensor using the tf.RaggedTensor.to_tensor and tf.RaggedTensor.to_sparse methods.
Step8: When encoding multiple strings of the same length, use a tf.Tensor as the input.
Step9: When encoding multiple strings of varying lengths, use a tf.RaggedTensor as the input.
Step10: If you have a padded or sparse tensor, convert it to a tf.RaggedTensor before calling unicode_encode.
Step11: Unicode operations
Step12: Substrings
Step13: Splitting Unicode strings
Step14: Byte offsets for characters
Step15: Unicode scripts
Step16: The tf.strings.unicode_script operation can also be applied to multidimensional tf.Tensors or tf.RaggedTensors of code points.
Step17: Example
Step18: First, decode the sentences into character code points and find the script identifier for each character.
Step19: Then use the script identifiers to decide where word boundaries should be added. Add a word boundary at the beginning of each sentence and for each character whose script differs from that of the previous character.
Step20: Use these start offsets to build a RaggedTensor containing the list of words from the whole batch.
Step21: Finally, segment the word code-point RaggedTensor back into sentences.
Step22: To make the final result easier to read, encode it back into UTF-8 strings.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
import tensorflow as tf
tf.constant(u"Thanks ๐")
tf.constant([u"You're", u"welcome!"]).shape
# A Unicode string, represented as a UTF-8 encoded string scalar.
text_utf8 = tf.constant(u"语言处理")
text_utf8
# A Unicode string, represented as a UTF-16-BE encoded string scalar.
text_utf16be = tf.constant(u"语言处理".encode("UTF-16-BE"))
text_utf16be
# A Unicode string, represented as a vector of Unicode code points.
text_chars = tf.constant([ord(char) for char in u"语言处理"])
text_chars
tf.strings.unicode_decode(text_utf8,
input_encoding='UTF-8')
tf.strings.unicode_encode(text_chars,
output_encoding='UTF-8')
tf.strings.unicode_transcode(text_utf8,
input_encoding='UTF8',
output_encoding='UTF-16-BE')
# A batch of Unicode strings, each represented as a UTF-8 encoded string.
batch_utf8 = [s.encode('UTF-8') for s in
[u'hÃllo', u'What is the weather tomorrow', u'Göödnight', u'😊']]
batch_chars_ragged = tf.strings.unicode_decode(batch_utf8,
input_encoding='UTF-8')
for sentence_chars in batch_chars_ragged.to_list():
print(sentence_chars)
batch_chars_padded = batch_chars_ragged.to_tensor(default_value=-1)
print(batch_chars_padded.numpy())
batch_chars_sparse = batch_chars_ragged.to_sparse()
tf.strings.unicode_encode([[99, 97, 116], [100, 111, 103], [ 99, 111, 119]],
output_encoding='UTF-8')
tf.strings.unicode_encode(batch_chars_ragged, output_encoding='UTF-8')
tf.strings.unicode_encode(
tf.RaggedTensor.from_sparse(batch_chars_sparse),
output_encoding='UTF-8')
tf.strings.unicode_encode(
tf.RaggedTensor.from_tensor(batch_chars_padded, padding=-1),
output_encoding='UTF-8')
# Note that the final character takes up 4 bytes in UTF-8.
thanks = u'Thanks 😊'.encode('UTF-8')
num_bytes = tf.strings.length(thanks).numpy()
num_chars = tf.strings.length(thanks, unit='UTF8_CHAR').numpy()
print('{} bytes; {} UTF-8 characters'.format(num_bytes, num_chars))
# Here unit='BYTE' (the default), so len=1 returns a single byte.
tf.strings.substr(thanks, pos=7, len=1).numpy()
# Specifying unit='UTF8_CHAR' returns a single character, which here is 4 bytes long.
print(tf.strings.substr(thanks, pos=7, len=1, unit='UTF8_CHAR').numpy())
tf.strings.unicode_split(thanks, 'UTF-8').numpy()
codepoints, offsets = tf.strings.unicode_decode_with_offsets(u"🎈🎉🎊", 'UTF-8')
for (codepoint, offset) in zip(codepoints.numpy(), offsets.numpy()):
print("At byte offset {}: codepoint {}".format(offset, codepoint))
uscript = tf.strings.unicode_script([33464, 1041]) # ['芸', 'Б']
print(uscript.numpy()) # [17, 8] == [USCRIPT_HAN, USCRIPT_CYRILLIC]
print(tf.strings.unicode_script(batch_chars_ragged))
# dtype: string; shape: [num_sentences]
#
# The sentences to process. Edit this line to try out different inputs!
sentence_texts = [u'Hello, world.', u'世界こんにちは']
# dtype: int32; shape: [num_sentences, (num_chars_per_sentence)]
#
# sentence_char_codepoint[i, j] is the code point for the j'th character in
# the i'th sentence.
sentence_char_codepoint = tf.strings.unicode_decode(sentence_texts, 'UTF-8')
print(sentence_char_codepoint)
# dtype: int32; shape: [num_sentences, (num_chars_per_sentence)]
#
# sentence_char_script[i, j] is the Unicode script of the j'th character in
# the i'th sentence.
sentence_char_script = tf.strings.unicode_script(sentence_char_codepoint)
print(sentence_char_script)
# dtype: bool; shape: [num_sentences, (num_chars_per_sentence)]
#
# sentence_char_starts_word[i, j] is True if the j'th character in the i'th
# sentence is the start of a word.
sentence_char_starts_word = tf.concat(
[tf.fill([sentence_char_script.nrows(), 1], True),
tf.not_equal(sentence_char_script[:, 1:], sentence_char_script[:, :-1])],
axis=1)
# dtype: int64; shape: [num_words]
#
# word_starts[i] is the index of the character that starts the i'th word (in
# the flattened list of characters from all sentences).
word_starts = tf.squeeze(tf.where(sentence_char_starts_word.values), axis=1)
print(word_starts)
# dtype: int32; shape: [num_words, (num_chars_per_word)]
#
# word_char_codepoint[i, j] is the code point for the j'th character in the
# i'th word.
word_char_codepoint = tf.RaggedTensor.from_row_starts(
values=sentence_char_codepoint.values,
row_starts=word_starts)
print(word_char_codepoint)
# dtype: int64; shape: [num_sentences]
#
# sentence_num_words[i] is the number of words in the i'th sentence.
sentence_num_words = tf.reduce_sum(
tf.cast(sentence_char_starts_word, tf.int64),
axis=1)
# dtype: int32; shape: [num_sentences, (num_words_per_sentence), (num_chars_per_word)]
#
# sentence_word_char_codepoint[i, j, k] is the code point for the k'th character
# in the j'th word in the i'th sentence.
sentence_word_char_codepoint = tf.RaggedTensor.from_row_lengths(
values=word_char_codepoint,
row_lengths=sentence_num_words)
print(sentence_word_char_codepoint)
tf.strings.unicode_encode(sentence_word_char_codepoint, 'UTF-8').to_list()
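# The full pipeline above can be wrapped into a single helper for reuse. This is an
# illustrative sketch (not part of the original tutorial); it simply re-applies the
# same script-change heuristic to any batch of UTF-8 sentence strings.
def segment_by_script(texts):
    char_codepoint = tf.strings.unicode_decode(texts, 'UTF-8')
    char_script = tf.strings.unicode_script(char_codepoint)
    starts_word = tf.concat(
        [tf.fill([char_script.nrows(), 1], True),
         tf.not_equal(char_script[:, 1:], char_script[:, :-1])],
        axis=1)
    word_starts = tf.squeeze(tf.where(starts_word.values), axis=1)
    word_codepoint = tf.RaggedTensor.from_row_starts(
        values=char_codepoint.values, row_starts=word_starts)
    num_words = tf.reduce_sum(tf.cast(starts_word, tf.int64), axis=1)
    sentence_word_codepoint = tf.RaggedTensor.from_row_lengths(
        values=word_codepoint, row_lengths=num_words)
    return tf.strings.unicode_encode(sentence_word_codepoint, 'UTF-8')
print(segment_by_script([u'Hello, world.', u'世界こんにちは']).to_list())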
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data
Step2: Kernel
Step3: Model fitting
Step4: Using our loss function defined above, we'll run a gradient-based optimization routine from scipy (you could also use a JAX-specific optimizer, but that's not necessary) to fit this model as follows
Step5: Warning
|
<ASSISTANT_TASK:>
Python Code:
try:
import tinygp
except ImportError:
!pip install -q tinygp
from jax.config import config
config.update("jax_enable_x64", True)
import numpy as np
import matplotlib.pyplot as plt
try:
from statsmodels.datasets import co2
except ModuleNotFoundError:
%pip install -qq statsmodels
from statsmodels.datasets import co2
data = co2.load_pandas().data
t = 2000 + (np.array(data.index.to_julian_date()) - 2451545.0) / 365.25
y = np.array(data.co2)
m = np.isfinite(t) & np.isfinite(y) & (t < 1996)
t, y = t[m][::4], y[m][::4]
plt.plot(t, y, ".k")
plt.xlim(t.min(), t.max())
plt.xlabel("year")
_ = plt.ylabel("CO$_2$ in ppm")
plt.savefig("gp-mauna-loa-data.pdf")
import jax
import jax.numpy as jnp
try:
from tinygp import kernels, transforms, GaussianProcess
except ModuleNotFoundError:
%pip install -qq tinygp
from tinygp import kernels, transforms, GaussianProcess
def build_gp(theta, X):
mean = theta[-1]
# We want most of out parameters to be positive so we take the `exp` here
# Note that we're using `jnp` instead of `np`
theta = jnp.exp(theta[:-1])
# Construct the kernel by multiplying and adding `Kernel` objects
k1 = theta[0] ** 2 * kernels.ExpSquared(theta[1])
k2 = (
theta[2] ** 2
* kernels.ExpSquared(theta[3])
* kernels.ExpSineSquared(period=theta[4], gamma=theta[5])
)
k3 = theta[6] ** 2 * kernels.RationalQuadratic(
alpha=theta[7], scale=theta[8]
)
k4 = theta[9] ** 2 * kernels.ExpSquared(theta[10])
kernel = k1 + k2 + k3 + k4
return GaussianProcess(kernel, X, diag=theta[11] ** 2, mean=mean)
def neg_log_likelihood(theta, X, y):
gp = build_gp(theta, X)
return -gp.condition(y)
# Objective
obj = jax.jit(jax.value_and_grad(neg_log_likelihood))
# These are the parameters from R&W
mean_output = 340.0
theta_init = np.append(
np.log([66.0, 67.0, 2.4, 90.0, 1.0, 4.3, 0.66, 1.2, 0.78, 0.18, 1.6, 0.19]),
mean_output,
)
obj(theta_init, t, y)
from scipy.optimize import minimize
soln = minimize(obj, theta_init, jac=True, args=(t, y))
print(f"Final negative log likelihood: {soln.fun}")
x = np.linspace(max(t), 2025, 2000)
gp = build_gp(soln.x, t)
mu, var = gp.predict(y, x, return_var=True)
plt.plot(t, y, ".k")
plt.fill_between(x, mu + np.sqrt(var), mu - np.sqrt(var), color="C0", alpha=0.5)
plt.plot(x, mu, color="C0", lw=2)
plt.xlim(t.min(), 2025)
plt.xlabel("year")
_ = plt.ylabel("CO$_2$ in ppm")
plt.savefig("gp-mauna-loa-pred.pdf")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step32: Create Convolutional Model
Step35: Train the Neural Network
Step37: Show Stats
Step38: Hyperparameters
Step40: Train on a Single CIFAR-10 Batch
Step42: Fully Train the Model
Step45: Checkpoint
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
# TODO: Implement Function
return x / 255
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
from sklearn.preprocessing import LabelBinarizer
lb = LabelBinarizer()
lb.fit([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
# TODO: Implement Function
return lb.transform(x)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
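# Quick sanity check (illustrative, not part of the original project template):
# each label should map to a length-10 indicator row.
print(one_hot_encode([0, 3, 9]))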
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
return tf.placeholder(tf.float32, shape=(None, image_shape[0], image_shape[1], image_shape[2]), name='x')
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
return tf.placeholder(tf.float32, shape=(None, n_classes), name='y')
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
return tf.placeholder(tf.float32, name='keep_prob')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
tf.reset_default_graph()
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
tensor_shape = x_tensor.get_shape().as_list()
num_channels = tensor_shape[3]
weights = tf.get_variable('weights',
shape=[conv_ksize[0], conv_ksize[1], num_channels, conv_num_outputs],
initializer=tf.random_normal_initializer(stddev=0.1))
biases = tf.get_variable('biases',
shape=[conv_num_outputs],
initializer=tf.constant_initializer(0.0))
conv = tf.nn.conv2d(x_tensor, weights, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME')
conv = tf.nn.bias_add(conv, biases)
conv_relu = tf.nn.relu(conv)
pooled = tf.nn.max_pool(conv_relu, ksize=[1, pool_ksize[0], pool_ksize[1], 1],
strides=[1, pool_strides[0], pool_strides[1], 1], padding='SAME')
return pooled
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
tensor_shape = x_tensor.get_shape().as_list()
batch_size = tf.shape(x_tensor)[0]
flat_image_size = np.product(tensor_shape[1:])
return tf.reshape(x_tensor, shape=tf.stack([batch_size, flat_image_size]))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
tf.reset_default_graph()
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
tensor_shape = x_tensor.get_shape().as_list()
batch_size = tensor_shape[0]
num_features = tensor_shape[1]
weights = tf.get_variable('weights',
shape=[num_features, num_outputs],
initializer=tf.random_normal_initializer(stddev=0.1))
biases = tf.get_variable('biases',
shape=[num_outputs],
initializer=tf.constant_initializer(0.0))
fc = tf.matmul(x_tensor, weights)
fc = tf.nn.bias_add(fc, biases)
fc_relu = tf.nn.relu(fc)
return fc_relu
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
tf.reset_default_graph()
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
tensor_shape = x_tensor.get_shape().as_list()
batch_size = tensor_shape[0]
num_features = tensor_shape[1]
weights = tf.get_variable('weights',
shape=[num_features, num_outputs],
initializer=tf.random_normal_initializer(stddev=0.1))
biases = tf.get_variable('biases',
shape=[num_outputs],
initializer=tf.constant_initializer(0.0))
out = tf.matmul(x_tensor, weights)
out = tf.nn.bias_add(out, biases)
return out
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
with tf.variable_scope("conv1"):
conv1_out = conv2d_maxpool(x,
conv_num_outputs=32,
conv_ksize=(5,5),
conv_strides=(1,1),
pool_ksize=(3,3),
pool_strides=(2,2))
with tf.variable_scope("conv2"):
conv2_out = conv2d_maxpool(conv1_out,
conv_num_outputs=64,
conv_ksize=(5,5),
conv_strides=(1,1),
pool_ksize=(3,3),
pool_strides=(2,2))
with tf.variable_scope("conv3"):
conv3_out = conv2d_maxpool(conv2_out,
conv_num_outputs=128,
conv_ksize=(5,5),
conv_strides=(1,1),
pool_ksize=(3,3),
pool_strides=(2,2))
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
conv3_flat = flatten(conv3_out)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
with tf.variable_scope("fc1"):
fc1_out = fully_conn(conv3_flat, num_outputs=512)
fc1_out = tf.nn.dropout(fc1_out, keep_prob)
with tf.variable_scope("fc2"):
fc2_out = fully_conn(fc1_out, num_outputs=64)
fc2_out = tf.nn.dropout(fc2_out, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
with tf.variable_scope("out"):
logits = output(fc2_out, 10)
# TODO: return output
return logits
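# Shape bookkeeping for the network above (illustrative): with 'SAME' padding and
# stride-1 convolutions, each 2x2-stride max pool roughly halves the spatial size,
# so a 32x32 input becomes 16x16 -> 8x8 -> 4x4, i.e. conv3_out is (batch, 4, 4, 128)
# and the flattened feature vector has 4*4*128 = 2048 elements.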
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
#Moved the test so it doesn't interfere with the variable scopes
tf.reset_default_graph()
tests.test_conv_net(conv_net)
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
session.run(optimizer, feed_dict={x: feature_batch,
y: label_batch,
keep_prob: keep_probability})
pass
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
loss = session.run(cost, feed_dict={x: feature_batch,
y: label_batch,
keep_prob: 1.0})
valid_acc = session.run(accuracy, feed_dict={x: valid_features,
y: valid_labels,
keep_prob: 1.0})
print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(loss, valid_acc))
pass
# TODO: Tune Parameters
epochs = 30
batch_size = 512
keep_probability = 0.5
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load CIFAR Dataset
Step2: RegularNet
Step3: ResNet
Step4: HighwayNet
Step5: DenseNet
Step10: Visualize the network graph
Step11: Training
Step12: Results
|
<ASSISTANT_TASK:>
Python Code:
#Load necessary libraries
import tensorflow as tf
import numpy as np
import tensorflow.contrib.slim as slim
import input_data
import matplotlib.pyplot as plt
%matplotlib inline
def unpickle(file):
import cPickle
fo = open(file, 'rb')
dict = cPickle.load(fo)
fo.close()
return dict
currentCifar = 1
cifar = unpickle('./cifar10/data_batch_1')
cifarT = unpickle('./cifar10/test_batch')
total_layers = 25 #Specify how deep we want our network
units_between_stride = total_layers / 5
tf.reset_default_graph()
input_layer = tf.placeholder(shape=[None,32,32,3],dtype=tf.float32,name='input')
label_layer = tf.placeholder(shape=[None],dtype=tf.int32)
label_oh = slim.layers.one_hot_encoding(label_layer,10)
layer1 = slim.conv2d(input_layer,64,[3,3],normalizer_fn=slim.batch_norm,scope='conv_'+str(0))
for i in range(5):
for j in range(units_between_stride):
layer1 = slim.conv2d(layer1,64,[3,3],normalizer_fn=slim.batch_norm,scope='conv_'+str((j+1) + (i*units_between_stride)))
layer1 = slim.conv2d(layer1,64,[3,3],stride=[2,2],normalizer_fn=slim.batch_norm,scope='conv_s_'+str(i))
top = slim.conv2d(layer1,10,[3,3],normalizer_fn=slim.batch_norm,activation_fn=None,scope='conv_top')
output = slim.layers.softmax(slim.layers.flatten(top))
loss = tf.reduce_mean(-tf.reduce_sum(label_oh * tf.log(output + 1e-10), axis=[1]))
trainer = tf.train.AdamOptimizer(learning_rate=0.001)
update = trainer.minimize(loss)
def resUnit(input_layer,i):
with tf.variable_scope("res_unit"+str(i)):
part1 = slim.batch_norm(input_layer,activation_fn=None)
part2 = tf.nn.relu(part1)
part3 = slim.conv2d(part2,64,[3,3],activation_fn=None)
part4 = slim.batch_norm(part3,activation_fn=None)
part5 = tf.nn.relu(part4)
part6 = slim.conv2d(part5,64,[3,3],activation_fn=None)
output = input_layer + part6
return output
tf.reset_default_graph()
input_layer = tf.placeholder(shape=[None,32,32,3],dtype=tf.float32,name='input')
label_layer = tf.placeholder(shape=[None],dtype=tf.int32)
label_oh = slim.layers.one_hot_encoding(label_layer,10)
layer1 = slim.conv2d(input_layer,64,[3,3],normalizer_fn=slim.batch_norm,scope='conv_'+str(0))
for i in range(5):
for j in range(units_between_stride):
layer1 = resUnit(layer1,j + (i*units_between_stride))
layer1 = slim.conv2d(layer1,64,[3,3],stride=[2,2],normalizer_fn=slim.batch_norm,scope='conv_s_'+str(i))
top = slim.conv2d(layer1,10,[3,3],normalizer_fn=slim.batch_norm,activation_fn=None,scope='conv_top')
output = slim.layers.softmax(slim.layers.flatten(top))
loss = tf.reduce_mean(-tf.reduce_sum(label_oh * tf.log(output + 1e-10), axis=[1]))
trainer = tf.train.AdamOptimizer(learning_rate=0.001)
update = trainer.minimize(loss)
def highwayUnit(input_layer,i):
with tf.variable_scope("highway_unit"+str(i)):
H = slim.conv2d(input_layer,64,[3,3])
T = slim.conv2d(input_layer,64,[3,3], #We initialize with a negative bias to push the network to use the skip connection
biases_initializer=tf.constant_initializer(-1.0),activation_fn=tf.nn.sigmoid)
output = H*T + input_layer*(1.0-T)
return output
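# Gating sketch (illustrative): T is a learned "transform gate" in [0, 1]. When T -> 0
# the unit passes its input straight through (the carry/skip path); when T -> 1 it
# behaves like a plain convolutional layer H. The negative bias on T biases the
# network toward the carry path early in training.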
tf.reset_default_graph()
input_layer = tf.placeholder(shape=[None,32,32,3],dtype=tf.float32,name='input')
label_layer = tf.placeholder(shape=[None],dtype=tf.int32)
label_oh = slim.layers.one_hot_encoding(label_layer,10)
layer1 = slim.conv2d(input_layer,64,[3,3],normalizer_fn=slim.batch_norm,scope='conv_'+str(0))
for i in range(5):
for j in range(units_between_stride):
layer1 = highwayUnit(layer1,j + (i*units_between_stride))
layer1 = slim.conv2d(layer1,64,[3,3],stride=[2,2],normalizer_fn=slim.batch_norm,scope='conv_s_'+str(i))
top = slim.conv2d(layer1,10,[3,3],normalizer_fn=slim.batch_norm,activation_fn=None,scope='conv_top')
output = slim.layers.softmax(slim.layers.flatten(top))
loss = tf.reduce_mean(-tf.reduce_sum(label_oh * tf.log(output + 1e-10), axis=[1]))
trainer = tf.train.AdamOptimizer(learning_rate=0.001)
update = trainer.minimize(loss)
def denseBlock(input_layer,i,j):
with tf.variable_scope("dense_unit"+str(i)):
nodes = []
a = slim.conv2d(input_layer,64,[3,3],normalizer_fn=slim.batch_norm)
nodes.append(a)
for z in range(j):
b = slim.conv2d(tf.concat(nodes,3),64,[3,3],normalizer_fn=slim.batch_norm)
nodes.append(b)
return b
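# Connectivity sketch (illustrative): every convolution in this block sees the
# channel-wise concatenation of all previous outputs in the block (tf.concat(nodes, 3)),
# which is the "dense" connectivity pattern that gives DenseNet its name.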
tf.reset_default_graph()
input_layer = tf.placeholder(shape=[None,32,32,3],dtype=tf.float32,name='input')
label_layer = tf.placeholder(shape=[None],dtype=tf.int32)
label_oh = slim.layers.one_hot_encoding(label_layer,10)
layer1 = slim.conv2d(input_layer,64,[3,3],normalizer_fn=slim.batch_norm,scope='conv_'+str(0))
for i in range(5):
layer1 = denseBlock(layer1,i,units_between_stride)
layer1 = slim.conv2d(layer1,64,[3,3],stride=[2,2],normalizer_fn=slim.batch_norm,scope='conv_s_'+str(i))
top = slim.conv2d(layer1,10,[3,3],normalizer_fn=slim.batch_norm,activation_fn=None,scope='conv_top')
output = slim.layers.softmax(slim.layers.flatten(top))
loss = tf.reduce_mean(-tf.reduce_sum(label_oh * tf.log(output + 1e-10), axis=[1]))
trainer = tf.train.AdamOptimizer(learning_rate=0.001)
update = trainer.minimize(loss)
from IPython.display import clear_output, Image, display, HTML
def strip_consts(graph_def, max_const_size=32):
Strip large constant values from graph_def.
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = "<stripped %d bytes>"%size
return strip_def
def show_graph(graph_def, max_const_size=32):
Visualize TensorFlow graph.
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code =
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
.format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe =
<iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe>
.format(code.replace('"', '&quot;'))
display(HTML(iframe))
show_graph(tf.get_default_graph().as_graph_def())
init = tf.global_variables_initializer()
batch_size = 64
currentCifar = 1
total_steps = 20000
l = []
a = []
aT = []
with tf.Session() as sess:
sess.run(init)
i = 0
draw = range(10000)
while i < total_steps:
if i % (10000/batch_size) != 0:
batch_index = np.random.choice(draw,size=batch_size,replace=False)
else:
draw = range(10000)
if currentCifar == 5:
currentCifar = 1
print "Switched CIFAR set to " + str(currentCifar)
else:
currentCifar = currentCifar + 1
print "Switched CIFAR set to " + str(currentCifar)
cifar = unpickle('./cifar10/data_batch_'+str(currentCifar))
batch_index = np.random.choice(draw,size=batch_size,replace=False)
x = cifar['data'][batch_index]
x = np.reshape(x,[batch_size,32,32,3],order='F')
x = (x/256.0)
x = (x - np.mean(x,axis=0)) / np.std(x,axis=0)
y = np.reshape(np.array(cifar['labels'])[batch_index],[batch_size,1])
_,lossA,yP,LO = sess.run([update,loss,output,label_oh],feed_dict={input_layer:x,label_layer:np.hstack(y)})
accuracy = np.sum(np.equal(np.hstack(y),np.argmax(yP,1)))/float(len(y))
l.append(lossA)
a.append(accuracy)
if i % 10 == 0: print "Step: " + str(i) + " Loss: " + str(lossA) + " Accuracy: " + str(accuracy)
if i % 100 == 0:
point = np.random.randint(0,10000-500)
xT = cifarT['data'][point:point+500]
xT = np.reshape(xT,[500,32,32,3],order='F')
xT = (xT/256.0)
xT = (xT - np.mean(xT,axis=0)) / np.std(xT,axis=0)
yT = np.reshape(np.array(cifarT['labels'])[point:point+500],[500])
lossT,yP = sess.run([loss,output],feed_dict={input_layer:xT,label_layer:yT})
accuracy = np.sum(np.equal(yT,np.argmax(yP,1)))/float(len(yT))
aT.append(accuracy)
print "Test set accuracy: " + str(accuracy)
i+= 1
plt.plot(l) #Plot training loss
plt.plot(a) #Plot training accuracy
plt.plot(aT) #Plot test accuracy
np.max(aT) #Best test accuracy
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 3.1 Data And Interpolating Functions
Step2: Example
Step3: $$
Step4: Example
Step5: 3.4 Cubic Splines
Step6: Example
Step7: Example
Step9: 3.5 Bรฉzier Curves
Step10: Example
|
<ASSISTANT_TASK:>
Python Code:
# Import modules
import sys
import math
import numpy as np
from matplotlib import pyplot as plt
from scipy import linalg
def newton_divided_diff(X, Y, n):
V = {}
for j in range(n):
V[j, 0] = Y[j]
for i in range(1,n):
for j in range(n-i):
V[j, i] = (V[j+1, i-1] - V[j, i-1]) / (X[j+i] - X[j])
C = np.zeros(n)
for i in range(n):
C[i] = V[0,i]
return C
X = [0, 1, 2, 3]
Y = [2, 1, 0, -1]
C = newton_divided_diff(X, Y, 4)
print(C)
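# Quick check (illustrative): evaluating the Newton form
# c0 + c1*(x - x0) + c2*(x - x0)(x - x1) + ... at the original nodes should
# reproduce Y exactly, since the polynomial interpolates the data.
def newton_eval(coeffs, nodes, x):
    total, mul = 0.0, 1.0
    for i in range(len(coeffs)):
        total += coeffs[i] * mul
        mul *= (x - nodes[i])
    return total
print([newton_eval(C, X, x) for x in X])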
def chebyshev_interpolation_nodes(a, b, n):
X = np.zeros(n)
for i in range(1,n + 1):
X[i - 1] = (b + a) / 2 + (b - a) / 2 * math.cos( (2 * i - 1) * math.pi / (2 * n) )
return X
def interpolating_polynomial(X, Y, x):
while x >= math.pi * 2:
x -= (2 * math.pi)
C = newton_divided_diff(X, Y, X.size)
poly = 0
for i in range(C.size):
mul = 1
for j in range(0, i):
mul *= (x - X[j])
poly += (C[i] * mul)
return poly
X = chebyshev_interpolation_nodes(0, 2 * math.pi, 8)
Y = np.sin(X)
x = 1
poly = interpolating_polynomial(X, Y, x)
print('chebyshev : %.15f' %poly)
print('real sin : %.15f' %math.sin(x))
print('error : %.15f' %abs(math.sin(x) - poly))
pX = np.linspace(0, 2 * math.pi, 1024)
pY = np.zeros(1024)
for i in range(pX.size):
pY[i] = interpolating_polynomial(X, Y, pX[i])
real_sin_y = np.sin(pX)
error_y = real_sin_y - pY
plt.subplot(211)
plt.title('real vs chebyshev interpolation $\sin{(x)}$')
plt.plot(pX, pY, color='b',linewidth=2)
plt.plot(pX, real_sin_y, color='r', linestyle='--',linewidth=2)
plt.grid(True)
plt.subplot(212)
plt.plot(pX, error_y, color='g', linewidth=2)
plt.grid(True)
plt.show()
def natural_cubic_spline(X, Y):
A = np.zeros(X.size - 1, dtype=float)
for i in range(X.size - 1):
A[i] = Y[i]
delta = np.zeros(X.size - 1, dtype=float)
Delta = np.zeros(X.size - 1, dtype=float)
for i in range(1, X.size):
delta[i - 1] = X[i] - X[i - 1]
Delta[i - 1] = Y[i] - Y[i - 1]
Mat = np.zeros(X.size * X.size, dtype=float).reshape(X.size, X.size)
Mat[0][0] = Mat[X.size - 1][X.size - 1] = 1.0
for i in range(1, X.size - 1):
Mat[i][i] = 2 * (delta[i - 1] + delta[i]) #diagonal
Mat[i][i - 1] = delta[i - 1]
Mat[i][i + 1] = delta[i]
b = np.zeros(X.size, dtype=float)
for i in range(1,X.size - 1):
b[i] = 3 * (Delta[i]/delta[i] - Delta[i - 1]/delta[i - 1])
# Solve the equations systems
lu, piv = linalg.lu_factor(Mat)
C = linalg.lu_solve([lu, piv], b)
D = np.zeros(X.size - 1, dtype=float)
B = np.zeros(X.size - 1, dtype=float)
for i in range(1, X.size):
D[i - 1] = (C[i] - C[i - 1]) / (3 * delta[i - 1])
B[i - 1] = Delta[i - 1] / delta[i - 1] - delta[i - 1] * (2 * C[i - 1] + C[i]) / 3
return A, B, C, D
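# The returned coefficients define, on each interval [X[i], X[i+1]], the cubic piece
# S_i(x) = A[i] + B[i]*(x - X[i]) + C[i]*(x - X[i])**2 + D[i]*(x - X[i])**3,
# with the natural boundary conditions S''(X[0]) = S''(X[-1]) = 0 enforced by the
# first and last rows of the tridiagonal system above.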
X = np.array([0, 1, 2])
Y = np.array([3, -2, 1])
A, B, C, D = natural_cubic_spline(X, Y)
S = lambda x, i : A[i] + B[i] * (x - X[i]) + C[i] * pow(x - X[i], 2) + D[i] * pow(x - X[i], 3)
total = 200
pX = np.linspace(0,2,total)
pY = np.zeros(total)
for i in range(pX.size):
x = pX[i]
for j in range(X.size - 1):
if x <= X[j + 1]:
pY[i] = S(x, j)
break
else:
continue
plt.plot(pX,pY,color='b')
plt.plot(X,Y,'ro')
plt.show()
X = np.array([0, 1, 2, 3, 4, 5])
Y = np.array([3, 1, 4, 1, 2, 0])
A, B, C, D = natural_cubic_spline(X, Y)
S = lambda x, i : A[i] + B[i] * (x - X[i]) + C[i] * pow(x - X[i], 2) + D[i] * pow(x - X[i], 3)
total = 1000
pX = np.linspace(0,5,total)
pY = np.zeros(total)
for i in range(pX.size):
x = pX[i]
for j in range(X.size - 1):
if x <= X[j + 1]:
pY[i] = S(x, j)
break
else:
continue
plt.plot(pX,pY,color='b')
plt.plot(X,Y,'ro')
plt.grid(True)
plt.show()
def bezier_curve(X, Y):
Make the function handle of the bezier curve formula
Args:
X - x-axis points (include endpoints and control points)
Y - y-axis points (include endpoints and control points)
Returns:
fx - bezier curve for x-axis points
fy - bezier curve for y-axis points
x1 = X[0]
x2 = X[1]
x3 = X[2]
x4 = X[3]
y1 = Y[0]
y2 = Y[1]
y3 = Y[2]
y4 = Y[3]
bx = 3 * (x2 - x1)
cx = 3 * (x3 - x2) - bx
dx = x4 - x1 - bx - cx
by = 3 * (y2 - y1)
cy = 3 * (y3 - y2) - by
dy = y4 - y1 - by - cy
fx = lambda t : x1 + bx * t + cx * pow(t, 2) + dx * pow(t, 3)
fy = lambda t : y1 + by * t + cy * pow(t, 2) + dy * pow(t, 3)
return fx, fy
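# This coefficient form is just the expanded cubic Bezier (Bernstein) polynomial:
# x(t) = (1-t)^3*x1 + 3(1-t)^2*t*x2 + 3(1-t)*t^2*x3 + t^3*x4 expands to
# x1 + bx*t + cx*t^2 + dx*t^3 with bx = 3(x2 - x1), cx = 3(x3 - x2) - bx and
# dx = x4 - x1 - bx - cx (likewise for y), which is what the code computes.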
X = np.array([1, 1, 3, 2])
Y = np.array([1, 3, 3, 2])
fx, fy = bezier_curve(X, Y)
t = np.linspace(0, 1, 100)
px = fx(t)
py = fy(t)
plt.plot(px, py, color='b')
plt.plot(X[0:2], Y[0:2], color='r', marker='o', linestyle='--')
plt.plot(X[2:4], Y[2:4], color='r', marker='o', linestyle='--')
plt.grid(True)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercises
Step2: Modify the function from (1) so that it returns a DataFrame whose columns hold the x values and the corresponding values of y = sin(2πx) + Gaussian noise (mean 0, standard deviation 0.3). (Name the columns 'x' and 'y', respectively.)
Step3: Using the function from (2), create a DataFrame with num=1000 and check the summary statistics of the data with the describe() method.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from pandas import Series, DataFrame
from numpy.random import normal
def create_dataset(num):
data_x = np.linspace(0,1,num)
data_y = np.sin(2*np.pi*data_x) + normal(loc=0, scale=0.3, size=num)
return data_y
create_dataset(10)
def create_dataset1(num):
data_x = np.linspace(0,1,num)
data_y = np.sin(2*np.pi*data_x) + normal(loc=0, scale=0.3, size=num)
df = DataFrame()
df['x'] = data_x
df['y'] = data_y
return df
create_dataset1(10)
def create_dataset2(num):
data_x = np.linspace(0,1,num)
data_y = np.sin(2*np.pi*data_x) + normal(loc=0, scale=0.3, size=num)
df = DataFrame({'x': data_x, 'y': data_y})
return df
create_dataset2(10)
def create_dataset3(num):
data_x = np.linspace(0,1,num)
data_y = np.sin(2*np.pi*data_x) + normal(loc=0, scale=0.3, size=num)
col_x = Series(data_x, name='x')
col_y = Series(data_y, name='y')
df = pd.concat([col_x, col_y], axis=1)
return df
create_dataset3(10)
data = create_dataset1(1000)
data.describe()
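# Optional visual check (not part of the exercise): scatter the noisy samples against
# the underlying sin(2*pi*x) curve they were generated from.
plt.scatter(data['x'], data['y'], s=5, alpha=0.5, label='samples')
plt.plot(data['x'], np.sin(2*np.pi*data['x']), color='red', label='sin(2*pi*x)')
plt.legend()
plt.show()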
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load cube.
Step2: Construct a button and display it. Then define what should happen when the button is pressed, and attach that function to the button's click event.
|
<ASSISTANT_TASK:>
Python Code:
import ipywidgets
import IPython.display
import iris
import iris.plot as iplt
import matplotlib.pyplot as plt
from cube_browser import Contour, Browser, Contourf, Pcolormesh
cube = iris.load_cube(iris.sample_data_path('GloSea4/ensemble_001.pp'))
print cube
go_button = ipywidgets.Button(
description='Press Me!')
IPython.display.display(go_button)
def on_button_clicked(b):
projection = iplt.default_projection(cube)
ax = plt.subplot(111, projection=projection)
plot = Contourf(cube, ax)
Browser([plot]).display()
go_button.on_click(on_button_clicked)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Containers
Step2: The elements of a list do not need to be of the same type
Step3: Elements can be added or removed from a list
Step4: Elements of a list can be changed
Step5: Lists Indexing
Step6: List slicing
Step7: Tuples
Step8: Tuples can be 'unpacked' to assign variables. They are often used with functions that return multiple items.
Step9: Dictionaries
Step10: Entries can be added to or removed from dictionaries
Step11: Note
Step12: for loops
Step13: Functions
Step14: Functions can have multiple arguments, no arguments, and even default arguments
Step15: Functions can return multiple values
Step17: Classes
Step18: Libraries
|
<ASSISTANT_TASK:>
Python Code:
var1 = 1 # integer
var2 = 2.34 # floating point numbers
var3 = 5.6 + 7.8j # complex numbers
var4 = "Hello World" # strings
var5 = True # booleans
var6 = None # special value to indicate the absence of a value
print("var1 value:", var1, "type:", type(var1))
print("var2 value:", var2, "type:", type(var2))
print("var3 value:", var3, "type:", type(var3))
print("var4 value:", var4, "type:", type(var4))
print("var5 value:", var5, "type:", type(var5))
print("var6 value:", var6, "type:", type(var6))
hydrometeors = ['rain', 'snow', 'hail'] # create a list holding three elements
print(hydrometeors)
print('length:', len(hydrometeors))
mixed_type_list = ['rain', 4.5, 99, None]
print(mixed_type_list)
hydrometeors = ['rain', 'snow', 'hail']
hydrometeors.append('drizzle') # add 'drizzle' to the end of the list
print(hydrometeors)
hydrometeors = ['rain', 'snow', 'hail']
hydrometeors.insert(1, 'graupel') # insert graupel before position 1
print(hydrometeors)
hydrometeors = ['rain', 'snow', 'hail']
del hydrometeors[0] # remove the first element from the list
print(hydrometeors)
hydrometeors = ['rain', 'snow', 'hail']
observation = hydrometeors.pop() # remove the last item from the list and store it in hydrometeor
print("observation:", observation)
print("hydrometeors:", hydrometeors)
hydrometeors = ['rain', 'snow', 'hail']
print("Before change:", hydrometeors)
hydrometeors[0] = 'virga'
print("After change:", hydrometeors)
hydrometeors = ['rain', 'snow', 'hail']
print('index 0:', hydrometeors[0]) # indexing begins at 0
print('index 1:', hydrometeors[1])
print('index 2:', hydrometeors[2])
hydrometeors[3] # Trying to access elements which do not exist raises a IndexError
hydrometeors = ['rain', 'snow', 'hail']
print('index -1:', hydrometeors[-1])
print('index -2:', hydrometeors[-2])
print('index -3:', hydrometeors[-3])
hydrometeors = ['rain', 'snow', 'hail', 'drizzle', 'graupel', 'virga']
print(hydrometeors[2:4]) # select elements from index 2 to index 4
hydrometeors[:3] # start from beginning
hydrometeors[3:] # until the end
hydrometeors[3:-1] # negative indices
hydrometeors[1::2] # every 2nd element
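hydrometeors[::-1] # a negative step walks backwards, giving a reversed copy of the list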
t = ('rain', 'snow', 'hail')
print(t)
print(len(t))
t[0] = 'virga' # tuples cannot be changed
observations = ('rain', 'snow', 'hail') # tuple with three elements
obs1, obs2, obs3 = observations # unpack tuple into obs1, obs2, obs3 variables
print("observations:", observations)
print("obs1:", obs1)
print("obs2:", obs2)
print("obs3:", obs3)
d = {'site': 'KLOT', 'amount': 20, 'wind': 'east'}
print(d.keys())
print(d.values())
print('site:', d['wind'])
print('amount:', d['amount'])
print('wind:', d['wind'])
print("wind before change:", d['wind'])
d['wind'] = 'west'
print("wind after change:", d['wind'])
d = {'site': 'KLOT', 'amount': 20, 'wind': 'east'}
print(d)
del d['wind']
print(d)
d['wind_speed'] = 'east'
d['wind_direction'] = '10 m/s'
print(d)
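# Looking up a missing key with [] raises a KeyError; dict.get returns a default instead.
print(d.get('precipitation', 'not recorded'))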
hydrometeor = 'rain'
if hydrometeor == 'rain':
print("You saw rain")
hydrometeor = 'hail'
if hydrometeor == 'rain':
print("You saw rain")
else:
print("You did NOT see rain")
hydrometeor = 'snow'
if hydrometeor == 'rain':
print("You saw rain")
elif hydrometeor == 'snow':
print("You saw snow")
else:
print("I do not know what you saw")
hydrometeors = ['rain', 'snow', 'hail']
for hydrometeor in hydrometeors: # loop over elements in a list
print(hydrometeor)
for i in range(5): # loop over the number 0 to 4
print(i)
d = {'site': 'KLOT', 'amount': 20, 'wind': 'east'}
for key, value in d.items():
print(key, ':', value)
# simple
def func(arg1):
print(arg1)
return 42
# call a function
return_value = func("Hello World")
print("ret_value:", return_value)
def add_numbers(number1, number2):
return number1 + number2
def say_hello():
print("Hello AMS")
def favorite_hydrometeor(name, hydrometeor='snow'):
print("Hello", name)
print("Your favorite hydrometeor is", hydrometeor)
print(add_numbers(1, 2))
say_hello()
favorite_hydrometeor("Jonathan")
favorite_hydrometeor("Jonathan", hydrometeor="hail")
def sum_and_product(a, b):
return a+b, a*b
sum_ab, product_ab = sum_and_product(2, 3)
print("sum", sum_ab)
print("product", product_ab)
class Point(object):
A class to store the coordinate in a plane
def __init__(self, x, y):
self.x = x # an attribute
self.y = y # an attribute
def sum_of_coordinates(self): # a class method
return self.x + self.y
home = Point(2, 3)
print(home.x)
print(home.y)
home.sum_of_coordinates()
import math # import the entire math module
math.sqrt(2)
from random import randrange # import just the randrange function from the random module
for i in range(5):
print(randrange(1, 10))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Since $p > 0.05$, we cannot reject the hypothesis that the series has a unit root in any of these cases.
|
<ASSISTANT_TASK:>
Python Code:
from statsmodels.tsa.stattools import coint, adfuller
import pandas as pd
fundamentals = init_fundamentals()
data = get_fundamentals(query(fundamentals.income_statement.total_revenue)
.filter((fundamentals.company_reference.primary_symbol == 'MCD') |
(fundamentals.company_reference.primary_symbol == 'MSFT') |
(fundamentals.company_reference.primary_symbol == 'KO')),
'2015-01-01', '30q')
# Get time series for each security individually
x0 = data.values[0].T[1]
x1 = data.values[0].T[2]
x2 = data.values[0].T[0]
print 'p-values of Dickey-Fuller statistic on total revenue data:'
print 'MCD:', adfuller(x0)[1]
print 'KO:', adfuller(x1)[1]
print 'MSFT:', adfuller(x2)[1]
# Compute the p-value for the cointegration of the two series
print 'p-values of cointegration statistic on total revenue data:'
print 'MCD and MSFT:', coint(x0, x1)[1]
print 'MCD and KO:', coint(x0, x2)[1]
print 'MSFT and KO:', coint(x1, x2)[1]
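# The get_fundamentals query above only runs on the Quantopian research platform.
# As a self-contained illustration of the same statistics, here is a synthetic pair of
# series constructed to be cointegrated (x_follow tracks x_base plus stationary noise).
import numpy as np
np.random.seed(0)
x_base = np.cumsum(np.random.normal(size=500)) # a random walk, i.e. it has a unit root
x_follow = x_base + np.random.normal(size=500) # cointegrated with x_base by construction
print 'ADF p-value (random walk):', adfuller(x_base)[1]
print 'cointegration p-value:', coint(x_base, x_follow)[1]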
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using TT-Matrices we can compactly represent densely connected layers in neural networks, which allows us to greatly reduce number of parameters. Matrix multiplication can be handled by the t3f.matmul method which allows for multiplying dense (ordinary) matrices and TT-Matrices. Very simple neural network could look as following (for initialization several options such as t3f.glorot_initializer, t3f.he_initializer or t3f.random_matrix are available)
Step2: For convenience we have implemented a layer analogous to Keras Dense layer but with a TT-Matrix instead of an ordinary matrix. An example of fully trainable net is provided below.
Step3: Some preprocessing...
Step4: Note that in the dense layer we only have $1725$ parameters instead of $784 * 625 = 490000$.
Step5: Compression of Dense layers
Step6: Let us convert the matrix used in the Dense layer to the TT-Matrix with tt-ranks equal to 16 (since we trained the network without the low-rank structure assumption we may wish start with high rank values).
Step7: We need to evaluate the tt-cores of Wtt. We also need to store other parameters for later (biases and the second dense layer).
Step8: Now we can construct a tensor network with the first Dense layer replaced by Wtt
Step9: We see that even though we now have about 5% of the original number of parameters we still achieve a relatively high accuracy.
|
<ASSISTANT_TASK:>
Python Code:
# Import TF 2.
%tensorflow_version 2.x
import tensorflow as tf
import numpy as np
import tensorflow.keras.backend as K
# Fix seed so that the results are reproducable.
tf.random.set_seed(0)
np.random.seed(0)
try:
import t3f
except ImportError:
# Install T3F if it's not already installed.
!git clone https://github.com/Bihaqo/t3f.git
!cd t3f; pip install .
import t3f
W = t3f.random_matrix([[4, 7, 4, 7], [5, 5, 5, 5]], tt_rank=2)
print(W)
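# To sanity-check what this TT-format object represents, it can be converted back to an
# ordinary dense matrix (this assumes the t3f.full helper); the result here is a
# 4*7*4*7 x 5*5*5*5 = 784 x 625 tf.Tensor.
W_dense = t3f.full(W)
print(W_dense.shape)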
class Learner:
def __init__(self):
initializer = t3f.glorot_initializer([[4, 7, 4, 7], [5, 5, 5, 5]], tt_rank=2)
self.W1 = t3f.get_variable('W1', initializer=initializer)
self.W2 = tf.Variable(tf.random.normal([625, 10]))
self.b2 = tf.Variable(tf.random.normal([10]))
def predict(self, x):
b1 = tf.Variable(tf.zeros([625]))
h1 = t3f.matmul(x, self.W1) + b1
h1 = tf.nn.relu(h1)
return tf.matmul(h1, self.W2) + self.b2
def loss(self, x, y):
y_ = tf.one_hot(y, 10)
logits = self.predict(x)
return tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout, Flatten
from tensorflow.keras.utils import to_categorical
from tensorflow.keras import optimizers
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train / 127.5 - 1.0
x_test = x_test / 127.5 - 1.0
y_train = to_categorical(y_train, num_classes=10)
y_test = to_categorical(y_test, num_classes=10)
model = Sequential()
model.add(Flatten(input_shape=(28, 28)))
tt_layer = t3f.nn.KerasDense(input_dims=[7, 4, 7, 4], output_dims=[5, 5, 5, 5],
tt_rank=4, activation='relu',
bias_initializer=1e-3)
model.add(tt_layer)
model.add(Dense(10))
model.add(Activation('softmax'))
model.summary()
optimizer = optimizers.Adam(lr=1e-2)
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=3, batch_size=64, validation_data=(x_test, y_test))
model = Sequential()
model.add(Flatten(input_shape=(28, 28)))
model.add(Dense(625, activation='relu'))
model.add(Dense(10))
model.add(Activation('softmax'))
model.summary()
optimizer = optimizers.Adam(lr=1e-3)
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, batch_size=64, validation_data=(x_test, y_test))
W = model.trainable_weights[0]
print(W)
Wtt = t3f.to_tt_matrix(W, shape=[[7, 4, 7, 4], [5, 5, 5, 5]], max_tt_rank=16)
print(Wtt)
cores = Wtt.tt_cores
other_params = model.get_weights()[1:]
model = Sequential()
model.add(Flatten(input_shape=(28, 28)))
tt_layer = t3f.nn.KerasDense(input_dims=[7, 4, 7, 4], output_dims=[5, 5, 5, 5],
tt_rank=16, activation='relu')
model.add(tt_layer)
model.add(Dense(10))
model.add(Activation('softmax'))
optimizer = optimizers.Adam(lr=1e-3)
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
model.set_weights(list(cores) + other_params)
print("new accuracy: ", model.evaluate(x_test, y_test)[1])
model.summary()
model.fit(x_train, y_train, epochs=2, batch_size=64, validation_data=(x_test, y_test))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Seed
Step2: Combining Simulations
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import tellurium as te
te.setDefaultPlottingEngine('matplotlib')
%matplotlib inline
import numpy as np
r = te.loada('S1 -> S2; k1*S1; k1 = 0.1; S1 = 40')
r.integrator = 'gillespie'
r.integrator.seed = 1234
results = []
for k in range(1, 50):
r.reset()
s = r.simulate(0, 40)
results.append(s)
r.plot(s, show=False, alpha=0.7)
te.show()
results = []
for k in range(1, 20):
r.reset()
r.setSeed(123456)
s = r.simulate(0, 40)
results.append(s)
r.plot(s, show=False, loc=None, color='black', alpha=0.7)
te.show()
import tellurium as te
import numpy as np
r = te.loada('S1 -> S2; k1*S1; k1 = 0.02; S1 = 100')
r.setSeed(1234)
for k in range(1, 20):
r.resetToOrigin()
res1 = r.gillespie(0, 10)
# change in parameter after the first half of the simulation
r.k1 = r.k1*20
res2 = r.gillespie (10, 20)
sim = np.vstack([res1, res2])
te.plot(sim[:,0], sim[:,1:], alpha=0.7, names=['S1', 'S2'], tags=['S1', 'S2'], show=False)
te.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Task 3
Step2: Discussion area
|
<ASSISTANT_TASK:>
Python Code:
1+1
?print
import time
tekst = "Matematika in Python"
for znak in tekst:
time.sleep(0.2)
print(znak,end='')
import disqus
%reload_ext disqus
%disqus matpy
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: *Scenario 2
|
<ASSISTANT_TASK:>
Python Code:
palPhrase = ['r', 'i', 's', 'e', 't', 'o', 'v', 'o', 't', 'e', 's', 'i', 'r']
newLoopCnt = 0
print "==" * 2 + "direct (in-place modification) operations on the list" + "==" * 2
print "Length of the list : %d" %(len(palPhrase))
for ee in palPhrase:
print "Counter {0}".format(newLoopCnt)
ff = palPhrase.pop()
print "Popped element : ", ff
newLoopCnt = newLoopCnt + 1
print "Elements in the original list : ", palPhrase
palPhrase = ['r', 'i', 's', 'e', 't', 'o', 'v', 'o', 't', 'e', 's', 'i', 'r']
loopCnt = 0
print "==" * 2 + "operations using a copy of the list" + "==" * 2
print "Length of the list : %d" %(len(palPhrase))
for cc in palPhrase[:]:
print "Counter {0}".format(loopCnt)
dd = palPhrase.pop()
print "Popped element : %s" %(dd)
loopCnt = loopCnt + 1
print "Elements in the original list : ", palPhrase
print "Find out why the original list is empty"
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If the above cell ran without error, we're ready to go!
Step2: Download public 1000 Genomes data
Step3: Importing data from VCF
Step4: Next we read the written file, assigning the variable mt (for matrix table).
Step5: Getting to know our data
Step6: Alternatively
Step7: Here is how to peek at the first few sample IDs
Step8: To look at the first few genotype calls, we can use entries along with select and take. The take method collects the first n rows into a list. Alternatively, we can use the show method, which prints the first n rows to the console in a table format.
Step9: Adding column fields
Step10: A good way to peek at the structure of a Table is to look at its schema.
Step11: To peek at the first few values, use the show method
Step12: Now we'll use this table to add sample annotations to our dataset, storing the annotations in column fields in our MatrixTable. First, we'll print the existing column schema
Step13: We use the annotate_cols method to join the table with the MatrixTable containing our dataset.
Step14: Query functions and the Hail Expression Language
Step15: stats is an aggregation function that produces some useful statistics about numeric collections. We can use this to see the distribution of the CaffeineConsumption phenotype.
Step16: However, these metrics aren't perfectly representative of the samples in our dataset. Here's why
Step17: Since there are fewer samples in our dataset than in the full thousand genomes cohort, we need to look at annotations on the dataset. We can use aggregate_cols to get the metrics for only the samples in our dataset.
Step18: The functionality demonstrated in the last few cells isn't anything especially new
Step19: We can list the counts in descending order using Python's Counter class.
Step20: It's nice to see that we can actually uncover something biological from this small dataset
Step21: Quality Control
Step22: Plotting the QC metrics is a good place to start.
Step23: Often, these metrics are correlated.
Step24: Removing outliers from the dataset will generally improve association results. We can make arbitrary cutoffs and use them to filter
Step25: Next is genotype QC. It's a good idea to filter out genotypes where the reads aren't where they should be
Step26: Variant QC is a bit more of the same
Step27: These statistics actually look pretty good
Step28: These filters removed about 15% of sites (we started with a bit over 10,000). This is NOT representative of most sequencing datasets! We have already downsampled the full thousand genomes dataset to include more common variants than we'd expect by chance.
Step29: Looking at the bottom of the above printout, you can see the linear regression adds new row fields for the beta, standard error, t-statistic, and p-value.
Step30: This doesn't look like much of a skyline. Let's check whether our GWAS was well controlled using a Q-Q (quantile-quantile) plot.
Step31: Confounded!
Step32: Now that we've got principal components per sample, we may as well plot them! Human history exerts a strong effect in genetic datasets. Even with a 50MB sequencing dataset, we can recover the major human populations.
Step33: Now we can rerun our linear regression, controlling for sample sex and the first few principal components. We'll do this with input variable the number of alternate alleles as before, and again with input variable the genotype dosage derived from the PL field.
Step34: We'll first make a Q-Q plot to assess inflation...
Step35: That's more like it! This shape is indicative of a well-controlled (but not especially well-powered) study. And now for the Manhattan plot
Step36: We have found a caffeine consumption locus! Now simply apply Hail's Nature paper function to publish the result.
Step37: We use the MatrixTable.entries method to convert our matrix table to a table (with one row for each sample for each variant). In this representation, it is easy to aggregate over any fields we like, which is often the first step of rare variant analysis.
Step38: We've shown that it's easy to aggregate by a couple of arbitrary statistics. These specific examples may not provide especially useful pieces of information, but the same pattern can be used to detect effects of rare variation
|
<ASSISTANT_TASK:>
Python Code:
import hail as hl
hl.init()
from hail.plot import show
from pprint import pprint
hl.plot.output_notebook()
hl.utils.get_1kg('data/')
hl.import_vcf('data/1kg.vcf.bgz').write('data/1kg.mt', overwrite=True)
mt = hl.read_matrix_table('data/1kg.mt')
mt.rows().select().show(5)
mt.row_key.show(5)
mt.s.show(5)
mt.entry.take(5)
table = (hl.import_table('data/1kg_annotations.txt', impute=True)
.key_by('Sample'))
table.describe()
table.show(width=100)
print(mt.col.dtype)
mt = mt.annotate_cols(pheno = table[mt.s])
mt.col.describe()
pprint(table.aggregate(hl.agg.counter(table.SuperPopulation)))
pprint(table.aggregate(hl.agg.stats(table.CaffeineConsumption)))
table.count()
mt.count_cols()
mt.aggregate_cols(hl.agg.counter(mt.pheno.SuperPopulation))
pprint(mt.aggregate_cols(hl.agg.stats(mt.pheno.CaffeineConsumption)))
snp_counts = mt.aggregate_rows(hl.agg.counter(hl.Struct(ref=mt.alleles[0], alt=mt.alleles[1])))
pprint(snp_counts)
from collections import Counter
counts = Counter(snp_counts)
counts.most_common()
p = hl.plot.histogram(mt.DP, range=(0,30), bins=30, title='DP Histogram', legend='DP')
show(p)
mt.col.describe()
mt = hl.sample_qc(mt)
mt.col.describe()
p = hl.plot.histogram(mt.sample_qc.call_rate, range=(.88,1), legend='Call Rate')
show(p)
p = hl.plot.histogram(mt.sample_qc.gq_stats.mean, range=(10,70), legend='Mean Sample GQ')
show(p)
p = hl.plot.scatter(mt.sample_qc.dp_stats.mean, mt.sample_qc.call_rate, xlabel='Mean DP', ylabel='Call Rate')
show(p)
mt = mt.filter_cols((mt.sample_qc.dp_stats.mean >= 4) & (mt.sample_qc.call_rate >= 0.97))
print('After filter, %d/284 samples remain.' % mt.count_cols())
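# Allele balance: the fraction of reads supporting the alternate allele at each genotype call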
ab = mt.AD[1] / hl.sum(mt.AD)
filter_condition_ab = ((mt.GT.is_hom_ref() & (ab <= 0.1)) |
(mt.GT.is_het() & (ab >= 0.25) & (ab <= 0.75)) |
(mt.GT.is_hom_var() & (ab >= 0.9)))
fraction_filtered = mt.aggregate_entries(hl.agg.fraction(~filter_condition_ab))
print(f'Filtering {fraction_filtered * 100:.2f}% entries out of downstream analysis.')
mt = mt.filter_entries(filter_condition_ab)
mt = hl.variant_qc(mt)
mt.row.describe()
mt = mt.filter_rows(mt.variant_qc.AF[1] > 0.01)
mt = mt.filter_rows(mt.variant_qc.p_value_hwe > 1e-6)
print('Samples: %d Variants: %d' % (mt.count_cols(), mt.count_rows()))
gwas = hl.linear_regression_rows(y=mt.pheno.CaffeineConsumption,
x=mt.GT.n_alt_alleles(),
covariates=[1.0])
gwas.row.describe()
p = hl.plot.manhattan(gwas.p_value)
show(p)
p = hl.plot.qq(gwas.p_value)
show(p)
eigenvalues, pcs, _ = hl.hwe_normalized_pca(mt.GT)
pprint(eigenvalues)
pcs.show(5, width=100)
mt = mt.annotate_cols(scores = pcs[mt.s].scores)
p = hl.plot.scatter(mt.scores[0],
mt.scores[1],
label=mt.pheno.SuperPopulation,
title='PCA', xlabel='PC1', ylabel='PC2')
show(p)
gwas = hl.linear_regression_rows(
y=mt.pheno.CaffeineConsumption,
x=mt.GT.n_alt_alleles(),
covariates=[1.0, mt.pheno.isFemale, mt.scores[0], mt.scores[1], mt.scores[2]])
p = hl.plot.qq(gwas.p_value)
show(p)
p = hl.plot.manhattan(gwas.p_value)
show(p)
entries = mt.entries()
results = (entries.group_by(pop = entries.pheno.SuperPopulation, chromosome = entries.locus.contig)
.aggregate(n_het = hl.agg.count_where(entries.GT.is_het())))
results.show()
entries = entries.annotate(maf_bin = hl.if_else(entries.info.AF[0]<0.01, "< 1%",
hl.if_else(entries.info.AF[0]<0.05, "1%-5%", ">5%")))
results2 = (entries.group_by(af_bin = entries.maf_bin, purple_hair = entries.pheno.PurpleHair)
.aggregate(mean_gq = hl.agg.stats(entries.GQ).mean,
mean_dp = hl.agg.stats(entries.DP).mean))
results2.show()
table = hl.import_table('data/1kg_annotations.txt', impute=True).key_by('Sample')
mt = hl.read_matrix_table('data/1kg.mt')
mt = mt.annotate_cols(pheno = table[mt.s])
mt = hl.sample_qc(mt)
mt = mt.filter_cols((mt.sample_qc.dp_stats.mean >= 4) & (mt.sample_qc.call_rate >= 0.97))
ab = mt.AD[1] / hl.sum(mt.AD)
filter_condition_ab = ((mt.GT.is_hom_ref() & (ab <= 0.1)) |
(mt.GT.is_het() & (ab >= 0.25) & (ab <= 0.75)) |
(mt.GT.is_hom_var() & (ab >= 0.9)))
mt = mt.filter_entries(filter_condition_ab)
mt = hl.variant_qc(mt)
mt = mt.filter_rows(mt.variant_qc.AF[1] > 0.01)
eigenvalues, pcs, _ = hl.hwe_normalized_pca(mt.GT)
mt = mt.annotate_cols(scores = pcs[mt.s].scores)
gwas = hl.linear_regression_rows(
y=mt.pheno.CaffeineConsumption,
x=mt.GT.n_alt_alleles(),
covariates=[1.0, mt.pheno.isFemale, mt.scores[0], mt.scores[1], mt.scores[2]])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will make use of the following commands
Step2: We will retrieve a lot of data, so we can benefit from IPython's parallel computation tool.
|
<ASSISTANT_TASK:>
Python Code:
import pandas
from datetime import date
github = pandas.DataFrame.from_csv('../data/RPackage-Repositories-150101-150601.csv')
repositories = github[['owner.login', 'name']].rename(columns={'owner.login': 'owner', 'name': 'repositories'})
FILENAME = '../data/github-raw-150601.csv'
# Root of the directory where the repositories were collected
GIT_DIR = '/data/github/'
from IPython import parallel
clients = parallel.Client()
clients.block = False # asynchronous computations
print 'Clients:', str(clients.ids)
def get_data_from((owner, repository)):
# Move to target directory
try:
os.chdir(os.path.join(GIT_DIR, owner, repository))
except OSError as e:
# Should happen when directory does not exist
return []
data_list = []
# Get commits for DESCRIPTION
try:
commits = subprocess.check_output(['git', 'log', '--format=%H/%ci', '--', 'DESCRIPTION'])
except subprocess.CalledProcessError as e:
# Should not happen!?
raise Exception(owner + ' ' + repository + '/ log : ' + e.output)
for commit in [x for x in commits.split('\n') if len(x.strip())!=0]:
commit_sha, date = map(lambda x: x.strip(), commit.split('/'))
# Get file content
try:
content = subprocess.check_output(['git', 'show', '{id}:{path}'.format(id=commit_sha, path='DESCRIPTION')])
except subprocess.CalledProcessError as e:
# Could happen when DESCRIPTION was added in this commit. Silently ignore
continue
try:
metadata = deb822.Deb822(content.split('\n'))
except Exception as e:
# I don't known which are the exceptions that Deb822 may throw!
continue # Go further
data = {}
for md in ['Package', 'Version', 'License', 'Imports', 'Suggests', 'Depends', 'Author', 'Authors', 'Maintainer']:
data[md] = metadata.get(md, '')
data['CommitDate'] = date
data['Owner'] = owner
data['Repository'] = repository
data_list.append(data)
# Return to root directory
os.chdir(GIT_DIR)
return data_list
data = []
clients[:].execute('import subprocess, os')
clients[:].execute('from debian import deb822')
clients[:]['GIT_DIR'] = GIT_DIR
balanced = clients.load_balanced_view()
items = [(owner, repo) for idx, (owner, repo) in repositories.iterrows()]
print len(items), 'items'
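# Dispatch one task per repository to the engines; ordered=False lets results stream back as soon as each one finishes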
res = balanced.map(get_data_from, items, ordered=False, timeout=15)
import time
while not res.ready():
time.sleep(5)
print res.progress, ' ',
for result in res.result:
data.extend(result)
df = pandas.DataFrame.from_records(data)
df.to_csv(FILENAME, encoding='utf-8')
print len(df), 'items'
print len(df.drop_duplicates(['Package'])), 'packages'
print len(df.drop_duplicates(['Owner', 'Repository'])), 'repositories'
print len(df.drop_duplicates(['Package', 'Version'])), 'pairs (package, version)'
df
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dataset options
Step2: Generate synthetic data
Step3: twPCA options
Step4: Construct and fit twPCA model
Step5: Visualize learned warping functions
Step6: twPCA accurately recovers latent dynamics
Step7: twPCA learns better reconstructions of the data
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
np.random.seed(1234)
from scipy.ndimage.filters import convolve1d, gaussian_filter1d
from twpca import TWPCA
from twpca.regularizers import curvature
import itertools as itr
import seaborn as sns
sns.set_style("ticks")
%load_ext autoreload
%autoreload 2
n_time = 130
n_trial = 100
n_neuron = 50
tau = 10 # time constant for exponential decay of latent events
event_gap = 25 # average gap between events
n_events = 3 # number of events
max_jitter = 15 # maximum jitter of each event
# Randomly generate jitters
jitters = np.random.randint(-max_jitter, max_jitter, size=(n_trial, n_events))
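# Order trials by the jitter of their first event, so the trial index varies smoothly with event timing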
ordering = np.argsort(jitters[:, 0])
jitters = jitters[ordering]
# Create one-hot matrix that encodes the location of latent events
events = np.zeros((n_trial, n_time))
for trial_idx, jitter in enumerate(jitters):
trial_event_times = np.cumsum(event_gap + jitter)
events[trial_idx, trial_event_times] = 1.0
avg_event = np.zeros(n_time)
avg_event[np.cumsum([event_gap] * n_events)] = 1.0
# Convolve latent events with an exponential filter
impulse_response = np.exp(-np.arange(n_time)/float(tau))
impulse_response /= impulse_response.sum()
latents = np.array([np.convolve(e, impulse_response, mode='full')[:n_time] for e in events])
avg_latent = np.convolve(avg_event, impulse_response, mode='full')[:n_time]
# Coupling from one dimensional latent state to each neuron
readout_weights = np.random.rand(n_neuron) + 0.1
# Probability of firing for each neuron
rates = np.exp(np.array([np.outer(latent, readout_weights) for latent in latents]))
rates -= rates.min()#(0, 1), keepdims=True)
rates /= rates.max()#0,1), keepdims=True)
# Sample spike trains
spikes = np.random.binomial(1, rates).astype(np.float32)
# Mark end of last few trials as missing data to demonstrate that twPCA works with variable-length data
spikes[80:, 60:75] = np.nan
figure(figsize=(10,3))
subplot(121)
imshow(rates[..., 0], aspect='auto', cmap=cm.viridis); colorbar()
title('Firing rate for neuron 1')
xlabel('Time')
ylabel('Trial')
subplot(122)
imshow(spikes[..., 0], aspect='auto', cmap=cm.viridis); colorbar()
title('Spikes for neuron 1')
n_components = 1
smooth_std = 1.0
warp_penalty_strength = 0.01
time_penalty_strength = 1.0
# Smooth spike trains
smoothed_spikes = gaussian_filter1d(spikes, smooth_std, axis=1)
# Add an L1 penalty on the second order finite difference of the warping functions
# This encourages the warping functions to be piecewise linear.
warp_regularizer = curvature(scale=warp_penalty_strength, power=1)
# Adds an L2 penatly on the second order finite difference of the temporal factors.
# Encourages the temporal factors to be smooth in time.
time_regularizer = curvature(scale=time_penalty_strength, power=2, axis=0)
model = TWPCA(smoothed_spikes,
n_components,
warp_regularizer=warp_regularizer,
time_regularizer=time_regularizer,
fit_trial_factors=False,
warpinit='linear')
# Fit model with gradient descent, starting with a learning rate of 1e-1 for 250 iterations,
# and then a learning rate of 1e-2 for 500 iterations
model.fit(lr=(1e-1, 1e-2), niter=(250, 500))
# Extract model parameters and reconstruction of the data (X_pred)
params = model.params
X_pred = model.predict()
plot(params['warp'].T, alpha=1);
axis('square')
xlabel('Clock time')
ylabel('Aligned time')
xlim(0, 100);
ylim(0, 100)
title('Learned warping functions')
u,s,v = np.linalg.svd(np.nanmean(spikes, axis=0), full_matrices=False)
pca_trial_recon = s[0]*np.outer(u[:,0], v[0])
s = np.sign(np.sum(u[:,0]))
s *= np.linalg.norm(avg_latent)/np.linalg.norm(u[:,0])
pca_time_factor = s*u[:,0]
plot(avg_latent, '-k', lw=5, alpha=0.7, label='True')
plot(pca_time_factor, '-b', lw=4, alpha=0.7, label='PCA')
time_fctr = params['time']
s = np.sign(np.sum(time_fctr))
s *= np.linalg.norm(avg_latent.ravel())/np.linalg.norm(time_fctr.ravel())
plot(s*time_fctr, '-r', alpha=0.7, lw=4, label='twPCA')
#ax.set_title('warp={0:.2f}, time={1:.2f}'.format(warp_reg, time_reg))
plt.tight_layout()
legend(loc='upper right',fontsize=24, bbox_to_anchor=(1.2, 0.8))#, fontsize=24)
xticks([])
yticks([])
xlabel('Time', fontsize=24)
ylabel('a.u.', fontsize=24)
sns.despine()
xlim(10, 110)
u,s,v = np.linalg.svd(np.nanmean(smoothed_spikes, axis=0), full_matrices=False)
figure(figsize=(16, 7))
gs = GridSpec(3, 4)
im_kwargs = dict(cmap='hot', aspect='auto', interpolation='none')
for i, tidx in enumerate([0, n_trial//2, -1]):
for j, x in enumerate((rates, smoothed_spikes, X_pred)):
subplot(gs[i, j])
if i == 0:
title(['Latent dynamics', 'Spikes', 'twPCA recon'][j], fontsize=28)
imshow(x[tidx].T, **im_kwargs)
if not (i == 2 and j == 0):
gca().set_xticklabels([])
gca().set_yticklabels([])
else:
xlabel('Time')
ylabel('Neuron')
subplot(gs[i, -1]);
imshow(pca_trial_recon.T, **im_kwargs)
gca().set_xticklabels([]); gca().set_yticklabels([]);
if i == 0:
title('PCA recon', fontsize=28)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Mmr
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Step59: 14.2. Shortwave Bands
Step60: 14.3. Longwave Bands
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Step62: 15.2. Twomey
Step63: 15.3. Twomey Minimum Ccn
Step64: 15.4. Drizzle
Step65: 15.5. Cloud Lifetime
Step66: 15.6. Longwave Bands
Step67: 16. Model
Step68: 16.2. Processes
Step69: 16.3. Coupling
Step70: 16.4. Gas Phase Precursors
Step71: 16.5. Scheme Type
Step72: 16.6. Bulk Scheme Species
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-1', 'sandbox-2', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Saving and loading a model with a distribution strategy
Step2: Prepare the data and the model using tf.distribute.Strategy.
Step3: Train the model.
Step4: Save and load the model
Step5: Restore the model without using tf.distribute.Strategy.
Step6: Once the model is restored, you can keep training it without calling compile() again, because it was already compiled before saving. The model is saved in TensorFlow's standard SavedModel proto format. For more details, see the saved_model format guide.
Step7: As you can see, it loads as expected with tf.distribute.Strategy. The strategy used here does not have to be the same as the one used before saving.
Step8: Loading is done with tf.saved_model.load(). Since this is a lower-level API (and therefore one with a wider range of use cases), it does not return a Keras model. Instead, it returns an object containing functions that can be used for inference, as in the following example.
Step9: The loaded object may contain several functions, each associated with a key. "serving_default" is the default key for the inference function that uses the saved Keras model. To run inference with this function, do the following.
Step10: You can also load the model and run inference in a distributed manner.
Step11: Calling the restored function is just a forward pass on the saved model (predict). What if you want to continue training the loaded function, or embed it into a larger model? A common pattern is to wrap this loaded object into a Keras layer. Luckily, TF Hub provides hub.KerasLayer for exactly this purpose, as shown next.
Step12: As you can see, hub.KerasLayer wraps the result loaded from tf.saved_model.load() into a Keras layer that can be used to build another model. This is very useful for transfer learning.
Step13: Saving or loading on a local device
Step14: Caveats
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
mirrored_strategy = tf.distribute.MirroredStrategy()
def get_data():
datasets, ds_info = tfds.load(name='mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']
BUFFER_SIZE = 10000
BATCH_SIZE_PER_REPLICA = 64
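  # The global batch size scales with the number of replicas in sync, so each replica still processes BATCH_SIZE_PER_REPLICA examples per step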
BATCH_SIZE = BATCH_SIZE_PER_REPLICA * mirrored_strategy.num_replicas_in_sync
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
train_dataset = mnist_train.map(scale).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
eval_dataset = mnist_test.map(scale).batch(BATCH_SIZE)
return train_dataset, eval_dataset
def get_model():
with mirrored_strategy.scope():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
return model
model = get_model()
train_dataset, eval_dataset = get_data()
model.fit(train_dataset, epochs=2)
keras_model_path = "/tmp/keras_save"
model.save(keras_model_path)
restored_keras_model = tf.keras.models.load_model(keras_model_path)
restored_keras_model.fit(train_dataset, epochs=2)
another_strategy = tf.distribute.OneDeviceStrategy("/cpu:0")
with another_strategy.scope():
restored_keras_model_ds = tf.keras.models.load_model(keras_model_path)
restored_keras_model_ds.fit(train_dataset, epochs=2)
model = get_model() # get a fresh model
saved_model_path = "/tmp/tf_save"
tf.saved_model.save(model, saved_model_path)
DEFAULT_FUNCTION_KEY = "serving_default"
loaded = tf.saved_model.load(saved_model_path)
inference_func = loaded.signatures[DEFAULT_FUNCTION_KEY]
predict_dataset = eval_dataset.map(lambda image, label: image)
for batch in predict_dataset.take(1):
print(inference_func(batch))
another_strategy = tf.distribute.MirroredStrategy()
with another_strategy.scope():
loaded = tf.saved_model.load(saved_model_path)
inference_func = loaded.signatures[DEFAULT_FUNCTION_KEY]
dist_predict_dataset = another_strategy.experimental_distribute_dataset(
predict_dataset)
# Calling the function in a distributed manner
for batch in dist_predict_dataset:
another_strategy.run(inference_func,args=(batch,))
import tensorflow_hub as hub
def build_model(loaded):
x = tf.keras.layers.Input(shape=(28, 28, 1), name='input_x')
# Wrap what's loaded to a KerasLayer
keras_layer = hub.KerasLayer(loaded, trainable=True)(x)
model = tf.keras.Model(x, keras_layer)
return model
another_strategy = tf.distribute.MirroredStrategy()
with another_strategy.scope():
loaded = tf.saved_model.load(saved_model_path)
model = build_model(loaded)
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
model.fit(train_dataset, epochs=2)
model = get_model()
# Saving the model using Keras's save() API
model.save(keras_model_path)
another_strategy = tf.distribute.MirroredStrategy()
# Loading the model using lower level API
with another_strategy.scope():
loaded = tf.saved_model.load(keras_model_path)
model = get_model()
# Saving the model to a path on localhost.
saved_model_path = "/tmp/tf_save"
save_options = tf.saved_model.SaveOptions(experimental_io_device='/job:localhost')
model.save(saved_model_path, options=save_options)
# Loading the model from a path on localhost.
another_strategy = tf.distribute.MirroredStrategy()
with another_strategy.scope():
load_options = tf.saved_model.LoadOptions(experimental_io_device='/job:localhost')
loaded = tf.keras.models.load_model(saved_model_path, options=load_options)
class SubclassedModel(tf.keras.Model):
output_name = 'output_layer'
def __init__(self):
super(SubclassedModel, self).__init__()
self._dense_layer = tf.keras.layers.Dense(
5, dtype=tf.dtypes.float32, name=self.output_name)
def call(self, inputs):
return self._dense_layer(inputs)
my_model = SubclassedModel()
# my_model.save(keras_model_path) # ERROR!
tf.saved_model.save(my_model, saved_model_path)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The differential equation reads as follows (see the note below).
Step2: vstack stacks two arrays on top of each other. The mouse always has 0 as its x-coordinate (1st row), and its y-coordinate (2nd row) runs through the interval [0, 1-step] with step size step.
Step3: Explicit Euler method - iteration
Step4: Where the cat caught the mouse. The y-coordinate is also the time.
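Judging from the Euler update in the code below, the pursuit ODE referred to in Step1 is presumably $K'(t) = 2\,\frac{M(t)-K(t)}{\lVert M(t)-K(t)\rVert}$, where the mouse moves along $M(t) = (0, t)$, the cat starts at $K(0) = (1, 0)$, and the cat runs at twice the mouse's speed.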
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
step = 10**(-3)
tol = 10**(-6)
length = int(1/step)  # cast to int so it can be used as an array shape
mouse = np.vstack((np.zeros(length-1), np.arange(0, 1-step, step)))
cat = np.zeros((2, length))
cat[0, 0] = 1 # x-coordinate of the cat at time 0, initial value
for i, m in enumerate(mouse.transpose()):
direction = m - cat[:,i]
distance = np.linalg.norm(direction)
    # Stop when the cat catches the mouse
if distance <= tol:
caught = i
break
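    # Explicit Euler step: the cat moves with speed 2 towards the mouse's current position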
cat[:,i+1] = cat[:,i] + step * 2 * direction / distance
plt.plot(cat[0,:caught], cat[1,:caught], 'r--', mouse[0,:caught], mouse[1,:caught], 'b')
plt.axis([-0.1, 1.1, -0.1, 1.1])
plt.grid()
plt.show()
print(mouse[:,caught])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Set Configuration
Step2: 3. Enter BigQuery Query Run Recipe Parameters
Step3: 4. Execute BigQuery Query Run
|
<ASSISTANT_TASK:>
Python Code:
!pip install git+https://github.com/google/starthinker
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
FIELDS = {
'auth_write':'service', # Credentials used for writing data.
'query':'', # SQL with newlines and all.
'legacy':True, # Query type must match table and query format.
}
print("Parameters Set To: %s" % FIELDS)
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'bigquery':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'run':{
'query':{'field':{'name':'query','kind':'text','order':1,'default':'','description':'SQL with newlines and all.'}},
'legacy':{'field':{'name':'legacy','kind':'boolean','order':2,'default':True,'description':'Query type must match table and query format.'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step-1 Imports and graph setup
Step1: Step-2 Import the data and extract the part you are interested in
Step2: Step-3 Plot the data from the dataframe
Step3: Step-4 Zoom in on a time period of interest (optional)
Step4: Step-5 See what is attached to this meter (optional)
|
<ASSISTANT_TASK:>
Python Code:
from matplotlib import rcParams
import matplotlib.pyplot as plt
%matplotlib inline
rcParams['figure.figsize'] = (13, 6)
plt.style.use('ggplot')
from nilmtk import DataSet
gjw = DataSet('/Users/GJWood/nilm_gjw_data/HDF5/nilm_gjw_data.hdf5') #load the data from HDF5 file
gjw.set_window(start='2013-11-13 00:00:00', end='2013-11-14 00:00:00') #select a portion of the data
elec = gjw.buildings[1].elec #Select the relevant meter group
house = elec['fridge'] #only one meter so any selection will do
df = house.load().next() #load the first chunk of data into a dataframe
df.info() #check that the data is what we want (optional)
#note the data has two columns and a time index
#The period is one day, containing 86400 entries, one per second
df.plot()
df.ix['2013-11-13 06:30:00':'2013-11-13 07:30:00'].plot()# select a time range and plot it
house
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read raw data
|
<ASSISTANT_TASK:>
Python Code:
# Author: Roman Goj <roman.goj@gmail.com>
# Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
from mne.time_frequency import csd_epochs
from mne.beamformer import dics_source_power
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
subjects_dir = data_path + '/subjects'
raw = mne.io.read_raw_fif(raw_fname)
raw.info['bads'] = ['MEG 2443'] # 1 bad MEG channel
# Set picks
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=False,
stim=False, exclude='bads')
# Read epochs
event_id, tmin, tmax = 1, -0.2, 0.5
events = mne.read_events(event_fname)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks, baseline=(None, 0), preload=True,
reject=dict(grad=4000e-13, mag=4e-12))
evoked = epochs.average()
# Read forward operator
forward = mne.read_forward_solution(fname_fwd)
# Computing the data and noise cross-spectral density matrices
# The time-frequency window was chosen on the basis of spectrograms from
# example time_frequency/plot_time_frequency.py
# As fsum is False csd_epochs returns a list of CrossSpectralDensity
# instances than can then be passed to dics_source_power
data_csds = csd_epochs(epochs, mode='multitaper', tmin=0.04, tmax=0.15,
fmin=15, fmax=30, fsum=False)
noise_csds = csd_epochs(epochs, mode='multitaper', tmin=-0.11,
tmax=-0.001, fmin=15, fmax=30, fsum=False)
# Compute DICS spatial filter and estimate source power
stc = dics_source_power(epochs.info, forward, noise_csds, data_csds)
for i, csd in enumerate(data_csds):
message = 'DICS source power at %0.1f Hz' % csd.freqs[0]
brain = stc.plot(surface='inflated', hemi='rh', subjects_dir=subjects_dir,
time_label=message, figure=i)
brain.set_data_time_index(i)
brain.show_view('lateral')
# Uncomment line below to save images
# brain.save_image('DICS_source_power_freq_%d.png' % csd.freqs[0])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. After running this code, the array of texts is stored in the newsgroups.data field, and the class labels in the newsgroups.target field.
Step2: 3. One difficulty of working with text data is that a numerical representation has to be built for it. One way of obtaining such a representation is to compute TF-IDF. In Scikit-Learn this is implemented in the class sklearn.feature_extraction.text.TfidfVectorizer. The training sample must be transformed with the fit_transform function, and the test sample with transform.
Step3: 4. The SVM classifier is implemented in the class sklearn.svm.SVC. The weights of each feature of a trained classifier are stored in the coef_ field. To find out which word the i-th feature corresponds to, you can use the get_feature_names() method of TfidfVectorizer.
Step4: 5. Parameter tuning is conveniently done with the class sklearn.grid_search.GridSearchCV. The first argument to GridSearchCV is the classifier whose parameter values will be tuned, the second is a dictionary (dict) defining the grid of parameters to search over.
Step5: 6. Once the search is finished, you can analyse the quality scores for all parameter values and pick the best option.
Step6: 7. Train the SVM on the whole sample with the optimal parameter C found in the previous step.
Step7: 8. Find the 10 words with the largest absolute weights (the weights are stored in the coef_ field of svm.SVC). They are the answer to this assignment. List these words separated by commas or spaces, in lower case, in lexicographic order.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from sklearn import datasets
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import KFold, GridSearchCV
from sklearn.svm import SVC
newsgroups = datasets.fetch_20newsgroups(
subset='all',
categories=['alt.atheism', 'sci.space']
)
X = newsgroups.data
y = newsgroups.target
print("targets: ", y)
print("target_names: ", newsgroups.target_names)
print("Extracting features from the training data using vectorizer")
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(X)
print("n_samples: %d, n_features: %d" % X_train.shape)
# mapping from integer feature name to original token string
feature_names = np.asarray(vectorizer.get_feature_names())
feature_names
grid = {'C': np.power(10.0, np.arange(-5, 6))}
cv = KFold(n_splits=5, shuffle=True, random_state=241)
clf = SVC(kernel='linear', random_state=241)
gs = GridSearchCV(clf, grid, scoring='accuracy', cv=cv)
gs.fit(X_train, y)
df = pd.DataFrame.from_dict(gs.cv_results_)
df.head()
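# Keep the row(s) with the highest mean cross-validated accuracy and read off the corresponding value of C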
df_with_best_score = df[df['mean_test_score'] == (df['mean_test_score'].max())]
best_C = df_with_best_score['param_C'].max()
best_C
clf.set_params(C = best_C).fit(X_train, y)
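# Rank the TF-IDF features by the absolute value of their SVM weights and take the 10 largest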
absolute_data = abs(clf.coef_.toarray().reshape(-1)) # by absolute
absolute_data_sorted_desc = sorted(absolute_data, reverse=True)
weight_indexes = []
for weight in absolute_data_sorted_desc[:10]:
weight_indexes.append(absolute_data.tolist().index(weight))
words = [vectorizer.get_feature_names()[index] for index in weight_indexes]
print('%s' % (" ".join(sorted(words))))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Saliency
Step2: Time for saliency visualization.
Step3: To use guided saliency, we need to set backprop_modifier='guided'. For rectified saliency or deconv saliency, use backprop_modifier='relu'. Let's try these options quickly and see how they compare to vanilla saliency.
Step4: Both of them look a lot better than vanilla saliency! This is in line with the observation in the paper.
Step5: Let's try all the classes and show the original inputs and their heatmaps side by side. We cannot overlay the heatmap on the original image since it's grayscale.
Step6: Guided saliency seems to give the best results.
Step7: In this case it appears that saliency is better than grad-CAM, as the penultimate MaxPooling2D layer has a (12, 12) spatial resolution, which is relatively large compared to the (28, 28) input. It is likely that the conv layer hasn't captured enough high-level information, and most of that likely resides within the dense_4 layer.
Step8: Visualization without swapping softmax
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import numpy as np
import keras
from keras.datasets import mnist
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Flatten, Activation, Input
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
batch_size = 128
num_classes = 10
epochs = 5
# input image dimensions
img_rows, img_cols = 28, 28
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if K.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax', name='preds'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adam(),
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
class_idx = 0
indices = np.where(y_test[:, class_idx] == 1.)[0]
# pick some random input from here.
idx = indices[0]
# Lets sanity check the picked image.
from matplotlib import pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (18, 6)
plt.imshow(x_test[idx][..., 0])
from vis.visualization import visualize_saliency
from vis.utils import utils
from keras import activations
# Utility to search for layer index by name.
# Alternatively we can specify this as -1 since it corresponds to the last layer.
layer_idx = utils.find_layer_idx(model, 'preds')
# Swap softmax with linear
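# (gradients through softmax depend on all class scores and tend to saturate, so a linear output gives cleaner per-class gradients)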
model.layers[layer_idx].activation = activations.linear
model = utils.apply_modifications(model)
grads = visualize_saliency(model, layer_idx, filter_indices=class_idx, seed_input=x_test[idx])
# Plot with 'jet' colormap to visualize as a heatmap.
plt.imshow(grads, cmap='jet')
for modifier in ['guided', 'relu']:
grads = visualize_saliency(model, layer_idx, filter_indices=class_idx,
seed_input=x_test[idx], backprop_modifier=modifier)
plt.figure()
plt.title(modifier)
plt.imshow(grads, cmap='jet')
grads = visualize_saliency(model, layer_idx, filter_indices=class_idx, seed_input=x_test[idx],
backprop_modifier='guided', grad_modifier='negate')
plt.imshow(grads, cmap='jet')
# This corresponds to the Dense linear layer.
for class_idx in np.arange(10):
indices = np.where(y_test[:, class_idx] == 1.)[0]
idx = indices[0]
f, ax = plt.subplots(1, 4)
ax[0].imshow(x_test[idx][..., 0])
for i, modifier in enumerate([None, 'guided', 'relu']):
grads = visualize_saliency(model, layer_idx, filter_indices=class_idx,
seed_input=x_test[idx], backprop_modifier=modifier)
if modifier is None:
modifier = 'vanilla'
ax[i+1].set_title(modifier)
ax[i+1].imshow(grads, cmap='jet')
from vis.visualization import visualize_cam
# This corresponds to the Dense linear layer.
for class_idx in np.arange(10):
indices = np.where(y_test[:, class_idx] == 1.)[0]
idx = indices[0]
f, ax = plt.subplots(1, 4)
ax[0].imshow(x_test[idx][..., 0])
for i, modifier in enumerate([None, 'guided', 'relu']):
grads = visualize_cam(model, layer_idx, filter_indices=class_idx,
seed_input=x_test[idx], backprop_modifier=modifier)
if modifier is None:
modifier = 'vanilla'
ax[i+1].set_title(modifier)
ax[i+1].imshow(grads, cmap='jet')
model.summary()
# Swap linear back with softmax
model.layers[layer_idx].activation = activations.softmax
model = utils.apply_modifications(model)
for class_idx in np.arange(10):
indices = np.where(y_test[:, class_idx] == 1.)[0]
idx = indices[0]
grads = visualize_saliency(model, layer_idx, filter_indices=class_idx,
seed_input=x_test[idx], backprop_modifier='guided')
f, ax = plt.subplots(1, 2)
ax[0].imshow(x_test[idx][..., 0])
ax[1].imshow(grads, cmap='jet')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting the data
Step2: Cleaning the data
Step3: Transforming the data
Step4: The unnecessary columns can be dropped.
Step5: The World Bank reports the life expectancy with several decimal places. After rounding, the original column is discarded.
Step6: Combining the data
Step7: Calculating the correlation
Step8: The value shows a direct correlation, i.e. richer countries tend to have longer life expectancy, but it is not very strong.
Step9: The plot shows the correlation is weak and not clearly visible
Step10: Conclusions
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.version
import warnings
warnings.simplefilter('ignore', FutureWarning)
from pandas import *
show_versions()
YEAR = 2013
GDP_INDICATOR = 'NY.GDP.MKTP.CD'
LIFE_INDICATOR = 'SP.DYN.LE00.IN'
gdpReset = read_csv('WB GDP 2013.csv')
gdpReset.head()
lifeReset = read_csv('WB LE 2013.csv')
lifeReset.head()
len(gdpReset)
len(lifeReset)
gdpCountries = gdpReset[34:].dropna()
lifeCountries = lifeReset[34:].dropna()
len(gdpCountries)
len(lifeCountries)
gdpCountries.head()
lifeCountries.head()
def roundToMillions (value):
return round(value / 1000000)
def usdToGBP (usd):
return usd / 1.564768
GDP = 'GDP (ยฃm)'
gdpCountries[GDP] = gdpCountries[GDP_INDICATOR].apply(usdToGBP).apply(roundToMillions)
gdpCountries.head()
COUNTRY = 'country'
headings = [COUNTRY, GDP]
gdpClean = gdpCountries[headings]
gdpClean.head()
LIFE = 'Life expectancy (years)'
lifeCountries[LIFE] = lifeCountries[LIFE_INDICATOR].apply(round)
headings = [COUNTRY, LIFE]
lifeClean = lifeCountries[headings]
lifeClean.head()
gdpVsLife = merge(gdpClean, lifeClean, on=COUNTRY, how='inner')
gdpVsLife.head()
from scipy.stats import spearmanr
gdpColumn = gdpVsLife[GDP]
lifeColumn = gdpVsLife[LIFE]
(correlation, pValue) = spearmanr(gdpColumn, lifeColumn)
print('The correlation is: {}\nAnd p-value is: {}\n'.format(correlation, format(pValue, ".15f")))
if pValue < 0.05:
print('It is statistically significant.')
else:
print('It is not statistically significant.')
%matplotlib inline
gdpVsLife.plot(x=GDP, y=LIFE, kind='scatter', grid=True, logx=True, figsize=(16, 10));
# the 10 countries with highest GDP
gdpVsLife.sort_values(GDP).tail(10)
# the 10 countries with lowest GDP
gdpVsLife.sort_values(GDP).head(10)
# the 10 countries with highest life expectancy
gdpVsLife.sort_values(LIFE).tail(10)
# the 10 countries with lowest life expectancy
gdpVsLife.sort_values(LIFE).head(10)
# Some experiments with the data
BRICS_LABEL = ['Brazil', 'Russian Federation', 'India', 'China', 'South Africa']
bricsGdpVsLife = gdpVsLife.loc[gdpVsLife['country'].isin(BRICS_LABEL)]
bricsGdpVsLife
bricsGdpVsLife.plot(x=GDP, y=LIFE, kind='scatter', grid=True, figsize=(16, 10));
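# Out of curiosity, the same Spearman test run on just the five BRICS rows.
# With only five observations the p-value carries little weight, so this is a
# rough look at the subset rather than a result.
(bricsCorrelation, bricsPValue) = spearmanr(bricsGdpVsLife[GDP], bricsGdpVsLife[LIFE])
print('BRICS correlation: {}\nAnd p-value is: {}'.format(bricsCorrelation, bricsPValue))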
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We want a way to grab the messages with the appropriate time, put them in a list, also print them out as they arrive, except for the clock ones, which we don't want because they would spam the whole thing. Also maybe include a nice signal to terminate, such as the foot pedal or something.
Step2: Now let's try it with the initial send after power on.
Step3: Hmmm, what about if we have a song selected instead of a style?
Step4: That's with song 001 selected.
Step5: Huh, it looks like it actually sends a pedal sustain for that.
Step6: With a blank user song, it looks like there are more of the settings set (to defaults?)
Step7: And how about song and style out OFF?
Step8: I guess with voice out OFF it wouldn't send that either
Step9: Whoops on that one, I needed to turn keyboard OUT back on to send the note. Ah well.
Step10: Yep, that does nothing.
Step11: Okay, i've stuffed something up here.
Step12: Ok, so that's fixed.
Step13: Now we do GM_SYSTEM ON
Step14: It looks like the GM_ON reset only affects the reverb and chorus panel settings. nice to know.
Step15: Huh, it looks like Reset Controllers does nothing to volume either.
Step16: But it works with Modulation for some reason. hmmmmmm.
Step17: Let's start from the top
Step18: No sound.
Step19: Still no sound.
Step20: We can conclude that RESET_CONTROLS does nothing to voice volume, and I have no idea what GM_ON resets it to.
Step21: All sound on the left.
Step22: All sound still on the left.
Step23: Nope, the reset doesn't work for pan either.
Step24: Still reverby.
Step25: Neither does the chorus
Step26: Pedal
Step27: Hey, it does work for pedal! Turns it OFF.
Step28: Also Panel Sustain, that is, release time
Step29: I think the default is 0x40, probably.
Step30: The modulation wheel, I guess, is reset to zero.
Step31: Expression can be reset as well. Dunno what the default is.
Step32: I don't even know what portamento does.
Step33: Later
Step34: Both notes turn on, and both notes have to be turned off.
Step35: I hear no difference, but PTC should have one extra effect
Step36: Now the first note switches off as soon as the second comes on, and doesn't need to be switched off separately.
Step37: Okay, I think I have a grip on what Portamento Control does. Also, the channel mode messages act on each channel separately, it seems
Step38: Now, let's try pitch bend.
Step39: Let's set RPN
Step40: and decrement
Step41: Hmm, no change.
Step42: That's definitely incrementing by semitones, there.
Step43: Okay, that's some weird stuff.
Step44: Hypothesis
Step45: I've annotated the above with the results of the experiment and what I think is going on. Internally, INC/DEC seem to alter the value of a single MSB variable, then set the MSB of the currently selected RPN to whatever that value now is. It's the same variable no matter which RPN is active, and it doesn't get reset by GM_ON, XG ON, or XG reset (which otherwise seem to work). The lesson: don't rely on INC/DEC alone; set the Data Entry MSB explicitly before using them.
Step46: Okay, so now it looks like if you go below 0 to "-1", you reach a state where further DECs don't actually change the value any more, and the next INC jumps it to 1. So it's something like a value that just gets clamped at the bottom.
Step47: No, it does not.
Step48: No.
Step49: No.
Step50: Seems like the only one that works is the XG Resets.
Step51: Nope.
Step52: What about when we just power on?
Step53: I guess MSB starts at 0.
Step54: Testing Continues
Step55: Pitch bend
Step56: Release time
Step57: Attack time, check
Step58: Harmonic, check
Step59: Now for Portamento Control
Step60: Hmmm, it looks like XG reset also resets the MIDI Master Tuning!
Step61: Seems to be the only difference
Step62: What About The Bank
Step63: Let's review
Step64: There are a lot of different organ voices for program 16, MSB 0.
Step65: Let's set Channel 0 to the rock kit, program 16
Step66: Now, Bank LSB should be totally ignored for drum kits.
Step67: The Bank LSB is treated as 0 (or at least not recognised for the drum kits), but it is still remembered so that if we change the MSB, the LSB value is used then. Setting the MSB does not reset the LSB to zero.
Step68: It seems like when the LSB is not supported, it falls back to zero, but I'm willing to bet it still remembers it.
Step69: Yep, it seems like it.
Step70: Yep, and the bank doesn't get reset either.
Step71: They're silent.
Step72: MSB 0 always has a fallback voice for any program.
Step73: If the program is not supported for drum kit, it just won't change the program.
Step74: If the program is not supported for SFX kit, it just won't play at all.
Step75: And it seems like that for the SFX voices as well.
Step76: The LSB fallback works for SFX voices.
Step77: But not for SFX kits.
Step78: Does the INC/DEC weirdness leak across channels?
Step79: No, each channel has its own weirdness variable completely separate.
Step80: LOCAL is not reset by XG on, GM on, or XG Reset
Step81: How does portamento control work with changed voices or drum kits?
Step82: Sending the same note seems to require turning the note off twice.
Step83: Portamento does not work with drum kits.
Step84: Or SFX kits.
Step85: SFX voices seem fine, though.
|
<ASSISTANT_TASK:>
Python Code:
%cd -q '..'
import mido
import copy
from commons import mido_util
from commons import timers
from commons.messages import controlstate, controls
def collector(state=None):
if state is None:
newstate = controlstate.MidiControlState()
else:
newstate = copy.deepcopy(state)
collection_list = []
with mido.open_input("DGX-505 MIDI 1") as inport:
timer = timers.offsetTimer()
for msg in inport:
msg.time = timer()
wrapped = newstate.feed_message(msg)
collection_list.append(wrapped)
if msg.type != "clock":
print(wrapped)
if wrapped.wrap_type == controls.Control.PEDAL:
break
return newstate, collection_list
def writeout(outlist, outname):
with open(outname, 'w') as outfile:
for wrapped in outlist:
outfile.write(str(wrapped.message)+'\n')
s, c = collector()
len(c)
writeout(c, 'documents/data/initial_send.txt')
s._dump()
s1, c1 = collector(s)
writeout(c1, 'documents/data/initial_send_song1.txt')
s2, c2 = collector(s1)
writeout(c2, 'documents/data/initial_send_user.txt')
def collector(state=None):
if state is None:
newstate = controlstate.MidiControlState()
else:
newstate = copy.deepcopy(state)
collection_list = []
with mido.open_input("DGX-505 MIDI 1") as inport:
timer = timers.offsetTimer()
for msg in inport:
msg.time = timer()
wrapped = newstate.feed_message(msg)
collection_list.append(wrapped)
if msg.type != "clock":
print(wrapped)
if msg.type == "note_on":
break
return newstate, collection_list
s3, c2 = collector(s2)
writeout(c2, 'documents/data/initial_send_user.txt')
s3._dump()
s4, c4 = collector(s3)
writeout(c4, 'documents/data/initial_send_songoff.txt')
s5, c5 = collector(s4)
writeout(c5, 'documents/data/initial_send_stylesongoff.txt')
s6, c6 = collector(s5)
writeout(c6, 'documents/data/initial_send_alloff.txt')
s6
s6._channels[0].control_value(controls.Control.VOLUME)
with mido.open_output('DGX-505 MIDI 1') as outport:
for i in range(16):
outport.send(controls.cc(controls.Control.VOLUME, channel=i, value=1))
s7, c7 = collector(s6)
s8, c8 = collector(s7)
cc8 = [x for x in c8 if x.message.type != "clock"]
cc8
" ".join(format(x, "02X") for x in (cc8[1].message.data))
%load_ext autoreload
%autoreload
from commons.messages import wrappers
from commons import enums
wrappers.wrap(cc8[1].message)
ns, nsc = collector(s8)
with mido.open_output('DGX-505 MIDI 1') as outport:
outport.send(controls.gm_on())
rs, rsc = collector(ns)
def nonclock(l):
return (x for x in l if x.message.type != 'clock')
for n, r in zip(nonclock(nsc), nonclock(rsc)):
if str(n) != str(r):
print(n, r)
with mido.open_output('DGX-505 MIDI 1') as outport:
for i in range(16):
outport.send(controls.cc(controls.Control.VOLUME, channel=i, value=1))
with mido.open_output('DGX-505 MIDI 1') as outport:
for i in range(16):
outport.send(controls.cc(controls.Control.RESET_CONTROLS, channel=i, value=1))
with mido.open_output('DGX-505 MIDI 1') as outport:
for i in range(16):
outport.send(controls.cc(controls.Control.VOLUME, channel=i, value=127))
with mido.open_output('DGX-505 MIDI 1') as outport:
for i in range(16):
outport.send(controls.cc(controls.Control.MODULATION, channel=i, value=127))
with mido.open_output('DGX-505 MIDI 1') as outport:
for i in range(16):
outport.send(controls.cc(controls.Control.RESET_CONTROLS, channel=i, value=1))
outport = mido.open_output('DGX-505 MIDI 1')
outport.send(controls.gm_on())
outport.send(controls.local(False))
cc = controls.cc
C = controls.Control
outport.send(cc(C.VOLUME, 0))
outport.send(cc(C.RESET_CONTROLS, 64))
outport.send(cc(C.VOLUME, 64))
outport.send(controls.gm_on())
outport.send(cc(C.VOLUME, 110))
outport.send(cc(C.PAN, 0))
outport.send(cc(C.RESET_CONTROLS, 42))
for i in range(128):
outport.send(cc(C.RESET_CONTROLS, i))
outport.send(controls.gm_on())
outport.send(cc(C.REVERB, 127))
outport.send(cc(C.RESET_CONTROLS, 42))
outport.send(controls.gm_on())
outport.send(cc(C.CHORUS, 127))
outport.send(cc(C.RESET_CONTROLS, 3))
outport.send(controls.gm_on())
outport.send(cc(C.PEDAL, 127))
outport.send(cc(C.RESET_CONTROLS, 3))
outport.send(cc(C.RELEASE, 127))
outport.send(cc(C.RESET_CONTROLS, 3))
outport.send(cc(C.RELEASE, 0))
outport.send(cc(C.RESET_CONTROLS, 3))
outport.send(controls.gm_on())
outport.send(cc(C.MODULATION, 127))
outport.send(cc(C.RESET_CONTROLS, 3))
outport.send(cc(C.MODULATION, 0))
outport.send(cc(C.EXPRESSION, 127))
outport.send(cc(C.EXPRESSION, 64))
outport.send(cc(C.RESET_CONTROLS, 3))
outport.send(cc(C.PORTAMENTO_CTRL, 12))
outport.send(cc(0x05, 127))
import time
outport = mido.open_output('DGX-505 MIDI 1')
# Let's use a square lead on channel 5, everyone's favourite channel
controls.multisend(outport, controls.set_voice_numbers(101, 5))
outport.send(mido.Message('note_on', channel=5, note=64))
time.sleep(0.5)
outport.send(mido.Message('note_on', channel=5, note=68))
time.sleep(0.5)
outport.send(mido.Message('note_off', channel=5, note=68))
time.sleep(0.5)
outport.send(mido.Message('note_off', channel=5, note=64))
time.sleep(0.5)
outport.send(controls.cc(controls.Control.PORTAMENTO_CTRL, channel=5, value=64))
outport.send(mido.Message('note_on', channel=5, note=68))
time.sleep(0.5)
outport.send(mido.Message('note_off', channel=5, note=68))
outport.send(mido.Message('note_on', channel=5, note=64))
time.sleep(0.5)
outport.send(controls.cc(controls.Control.PORTAMENTO_CTRL, channel=5, value=64))
outport.send(mido.Message('note_on', channel=5, note=68))
time.sleep(0.5)
outport.send(mido.Message('note_off', channel=5, note=68))
outport.send(mido.Message('note_on', channel=5, note=64))
outport.send(mido.Message('note_on', channel=5, note=68))
outport.send(mido.Message('note_on', channel=5, note=71))
time.sleep(0.5)
outport.send(controls.cc(controls.Control.PORTAMENTO_CTRL, channel=5, value=68))
outport.send(mido.Message('note_on', channel=5, note=56))
time.sleep(0.5)
outport.send(controls.cc(controls.Control.PORTAMENTO_CTRL, channel=5, value=64))
outport.send(mido.Message('note_on', channel=5, note=56))
time.sleep(0.5)
outport.send(controls.cc(controls.Control.PORTAMENTO_CTRL, channel=5, value=71))
outport.send(mido.Message('note_on', channel=5, note=56))
time.sleep(0.5)
outport.send(controls.cc(controls.Control.SOUND_OFF_XPOLY, channel=5, value=0))
outport.send(mido.Message('note_on', channel=5, note=64, velocity=64))
time.sleep(0.5)
outport.send(controls.cc(controls.Control.PORTAMENTO_CTRL, channel=5, value=64))
outport.send(mido.Message('note_on', channel=5, note=68, velocity=1))
time.sleep(0.5)
outport.send(mido.Message('note_on', channel=5, note=68, velocity=0))
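# A small helper capturing the pattern that seems to work above (my summary, not an
# official recipe): send Portamento Control with the *source* note number, then the
# target note-on; the new note starts at the source pitch and glides to the target.
# If the source note is still sounding it gets taken over, so it needs no separate
# note-off of its own.
def glide(port, source, target, channel=5, duration=0.5, velocity=100):
    port.send(controls.cc(controls.Control.PORTAMENTO_CTRL, channel=channel, value=source))
    port.send(mido.Message('note_on', channel=channel, note=target, velocity=velocity))
    time.sleep(duration)
    port.send(mido.Message('note_off', channel=channel, note=target))
# e.g. glide(outport, 60, 72) sweeps up an octave on channel 5.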
def panic():
for i in range(16):
outport.send(controls.cc(controls.Control.SOUND_OFF, channel=i, value=0))
def pb(iv=0.5):
outport.send(mido.Message('pitchwheel', channel=5, pitch=0))
outport.send(mido.Message('note_on', channel=5, note=64))
time.sleep(iv)
outport.send(mido.Message('pitchwheel', channel=5, pitch=+8191))
time.sleep(iv)
outport.send(mido.Message('note_off', channel=5, note=64))
time.sleep(iv)
C = controls.Control
cc = controls.cc
outport.send(cc(C.RPN_MSB, channel=5, value=0))
outport.send(cc(C.RPN_LSB, channel=5, value=0))
outport.send(cc(C.DATA_DEC, channel=5, value=0))
pb()
outport.send(cc(C.DATA_MSB, channel=5, value=12))
pb()
outport.send(cc(C.DATA_DEC, channel=5, value=0))
outport.send(cc(C.DATA_MSB, channel=5, value=0))
for i in range(12):
outport.send(cc(C.DATA_INC, channel=5, value=0))
pb(0.15)
outport.send(cc(C.RPN_LSB, channel=5, value=1))
outport.send(cc(C.RPN_LSB, channel=5, value=0))
for i in range(12):
outport.send(cc(C.DATA_INC, channel=5, value=0))
pb(0.15)
outport.send(cc(C.DATA_MSB, channel=5, value=0))
pb()
outport.send(cc(C.DATA_MSB, channel=5, value=0))
for i in range(1):
outport.send(cc(C.DATA_INC, channel=5, value=0))
pb(0.15)
outport.send(cc(C.RPN_LSB, channel=5, value=1))
for i in range(1):
pb(0.15)
outport.send(cc(C.RPN_LSB, channel=5, value=0))
for i in range(1):
outport.send(cc(C.DATA_INC, channel=5, value=0))
pb(0.15)
outport.send(cc(C.RPN_MSB, channel=5, value=1))
for i in range(1):
outport.send(cc(C.DATA_INC, channel=5, value=0))
pb(0.15)
outport.send(cc(C.RPN_MSB, channel=5, value=0))
outport.send(cc(C.RPN_MSB, channel=5, value=0x7F))
outport.send(cc(C.RPN_LSB, channel=5, value=0x7F))
for i in range(4):
outport.send(cc(C.DATA_INC, channel=5, value=0))
pb(0.15)
outport.send(cc(C.RPN_MSB, channel=5, value=0))
outport.send(cc(C.RPN_LSB, channel=5, value=0))
for i in range(4):
outport.send(cc(C.DATA_INC, channel=5, value=0))
pb(0.15)
outport.send(controls.gm_on())
def pb(iv=0.1):
outport.send(mido.Message('pitchwheel', pitch=0))
outport.send(mido.Message('note_on', note=64, velocity=90))
time.sleep(iv)
outport.send(mido.Message('pitchwheel', pitch=+8191))
time.sleep(iv)
outport.send(mido.Message('note_off', note=64))
time.sleep(iv)
XGRESET = mido.Message.from_hex("F0 43 10 4C 00 00 7E 00 F7")
outport.send(controls.gm_on()) # reset all
outport.send(cc(C.DATA_MSB, value=0)) # MSB = 0
pb() # PBS == 2
for i in range(2):
outport.send(cc(C.DATA_INC, value=0)) # MSB = 2
pb() # PBS == 2
outport.send(cc(C.RPN_MSB, value=0))
outport.send(cc(C.RPN_LSB, value=0)) # RPN -> PBS
pb() # PBS == 2
#outport.send(cc(C.DATA_MSB, value=0))
outport.send(cc(C.DATA_INC, value=0)) # MSB = 3, PBS = MSB
pb() # PBS == 3
outport.send(controls.gm_on())
# reset, RPN -> NULL, PBS = 2, but MSB stays 3 (?!)
pb() # PBS == 2
outport.send(cc(C.DATA_INC, value=0)) # MSB = 4
outport.send(cc(C.RPN_MSB, value=0)) # RPN -> PBS
outport.send(cc(C.RPN_LSB, value=0))
pb() # PBS == 2
outport.send(cc(C.DATA_INC, value=0)) # MSB = 5
pb() # PBS == 5
outport.send(cc(C.RPN_MSB, value=127))
outport.send(cc(C.RPN_LSB, value=127)) # RPN -> NULL
for i in range(4):
outport.send(cc(C.DATA_DEC, value=0)) # MSB = 1
outport.send(cc(C.RPN_MSB, value=0))
outport.send(cc(C.RPN_LSB, value=0)) # RPN -> PBS
pb() # PBS == 5
outport.send(cc(C.DATA_DEC, value=0)) # MSB = 0
pb() # PBS == 0
outport.send(mido.Message.from_hex("F0 43 10 4C 00 00 7E 00 F7"))
# XG ON
pb() # PBS == 2
outport.send(cc(C.RPN_MSB, value=0))
outport.send(cc(C.RPN_LSB, value=0)) # RPN -> PBS
outport.send(cc(C.DATA_INC, value=0)) # MSB = 1
pb() # PBS == 1
outport.send(mido.Message.from_hex("F0 43 10 4C 00 00 7F 00 F7"))
# XG Reset
pb() # PBS == 2
outport.send(cc(C.DATA_INC, value=0)) # MSB = 2
outport.send(cc(C.RPN_MSB, value=0))
outport.send(cc(C.RPN_LSB, value=0)) # RPN -> PBS
outport.send(cc(C.DATA_INC, value=0)) # MSB = 3
pb() # PBS == 3
outport.send(mido.Message.from_hex("F0 43 10 4C 00 00 7F 00 F7"))
# XG Reset, RPN -> NULL
pb() # PBS == 2
outport.send(cc(C.DATA_MSB, value=12)) # MSB = 12
pb() # PBS == 2
outport.send(cc(C.RPN_MSB, value=0))
outport.send(cc(C.RPN_LSB, value=0)) # RPN -> PBS
outport.send(cc(C.DATA_DEC, value=0)) # MSB = 11
pb() # PBS == 11
outport.send(controls.gm_on()) # reset all
pb() # PBS == 2
outport.send(cc(C.RPN_MSB, value=0))
outport.send(cc(C.RPN_LSB, value=0)) # RPN -> PBS
outport.send(cc(C.DATA_MSB, value=0)) # MSB = 0
pb() # PBS == 0
outport.send(cc(C.DATA_DEC, value=0)) # MSB = 0? -1?
pb() # PBS == 0
outport.send(cc(C.DATA_DEC, value=0)) # MSB = 0? -1?
pb() # PBS == 0
outport.send(cc(C.DATA_INC, value=0)) # MSB = 1
pb() # PBS == 1
outport.send(cc(C.RPN_LSB, value=1)) # RPN -> Fine Tune
pb() # PBS == 1
outport.send(cc(C.DATA_DEC, value=0)) # MSB = 0
pb() # PBS == 1, but we've jumped down a semitone
outport.send(cc(C.RPN_LSB, value=0)) # RPN -> PBS
pb() # PBS == 1
outport.send(cc(C.DATA_DEC, value=0)) # MSB = 0? -1 ?
pb() # PBS == 1
outport.send(cc(C.DATA_DEC, value=0)) # MSB = 0? -1 ?
pb() # PBS == 1
outport.send(cc(C.DATA_DEC, value=0)) # MSB = 0? -1 ?
pb() # PBS == 1
outport.send(cc(C.DATA_INC, value=0)) # MSB = 1 ?
pb() # PBS == 1
outport.send(cc(C.DATA_INC, value=0)) # MSB = 2 ?
pb() # PBS == 2
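# A rough model of the hypothesis above (a guess from the listening tests, not
# documented behaviour): each channel seems to keep one hidden "data entry MSB"
# shared by every RPN. INC/DEC move that hidden value and then stamp it onto
# whichever RPN is currently selected; it survives GM ON / XG ON / XG reset, and
# it bottoms out at 0, which is why a DEC past zero does nothing and the next INC
# lands on 1.
class HypotheticalDataEntry:
    def __init__(self):
        self.hidden_msb = 0   # shared across all RPNs on the channel
        self.rpn = {}         # value actually written per RPN, e.g. self.rpn[(0, 0)] for PBS
    def set_msb(self, current_rpn, value):
        # explicit Data Entry MSB: writes the selected RPN and (apparently) the hidden variable
        self.hidden_msb = value
        if current_rpn is not None:
            self.rpn[current_rpn] = value
    def inc(self, current_rpn):
        self.hidden_msb = min(self.hidden_msb + 1, 127)
        if current_rpn is not None:
            self.rpn[current_rpn] = self.hidden_msb
    def dec(self, current_rpn):
        self.hidden_msb = max(self.hidden_msb - 1, 0)   # clamped at the bottom
        if current_rpn is not None:
            self.rpn[current_rpn] = self.hidden_msb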
outport.send(controls.gm_on()) # reset all
pb() # PBS == 2
outport.send(cc(C.RPN_MSB, value=0))
outport.send(cc(C.RPN_LSB, value=0)) # RPN -> PBS
outport.send(cc(C.DATA_MSB, value=4)) # MSB = 4
pb() # PBS == 4
outport.send(cc(C.RPN_LSB, value=1)) # RPN -> Fine Tune
pb() # PBS == 4
outport.send(cc(C.DATA_DEC, value=0)) # MSB = 4
pb() # PBS == 4, but we've jumped down almost a semitone
outport.send(cc(C.DATA_DEC, value=0)) # MSB = 3
outport.send(cc(C.DATA_DEC, value=0)) # MSB = 2
outport.send(cc(C.DATA_DEC, value=0)) # MSB = 1
outport.send(cc(C.DATA_DEC, value=0)) # MSB = 0
pb() # PBS == 4,
outport.send(cc(C.RPN_LSB, value=0)) # RPN -> PBS
pb() # PBS == 4
outport.send(cc(C.DATA_DEC, value=0)) # MSB = 0
pb() # PBS == 4
outport.send(cc(C.DATA_DEC, value=0)) # MSB = 0
pb() # PBS == 4
outport.send(cc(C.DATA_DEC, value=0)) # MSB = 0
pb() # PBS == 4
outport.send(cc(C.DATA_INC, value=0)) # MSB = 1
pb() # PBS == 1
outport.send(cc(C.DATA_INC, value=0)) # MSB = 2
pb() # PBS == 2
outport.send(controls.gm_on()) # reset all
pb() # PBS == 2
outport.send(cc(C.RPN_MSB, value=0))
outport.send(cc(C.RPN_LSB, value=1)) # RPN -> Fine Tune
outport.send(cc(C.DATA_MSB, value=127)) # MSB = 127
pb() # PBS == 2, but we've jumped up a semitone
outport.send(cc(C.RPN_LSB, value=0)) # RPN -> PBS
pb() # PBS == 2
outport.send(cc(C.DATA_INC, value=0)) # MSB = 127
pb() # PBS == 2
outport.send(cc(C.DATA_DEC, value=0)) # MSB = 126
pb() # PBS == 126 (it gets clamped to 24 in the tone generator)
outport.send(controls.gm_on()) # reset all
pb() # PBS == 2
outport.send(cc(C.RPN_MSB, value=0))
outport.send(cc(C.RPN_LSB, value=2)) # RPN -> Coarse Tune
outport.send(cc(C.DATA_MSB, value=62)) # MSB = 62
pb() # PBS == 2, down 2 semitones
outport.send(cc(C.RPN_MSB, value=1))
outport.send(cc(C.DATA_DEC, value=0)) # MSB = 61
pb() # PBS == 2, down 2 semitones
outport.send(cc(C.RPN_MSB, value=0)) # RPN -> Coarse Tune again
pb() # PBS == 2, down 2 semitones
outport.send(cc(C.DATA_DEC, value=0)) # MSB = 60
pb() # PBS == 2, down 4 semitones
outport.send(cc(C.DATA_DEC, value=0)) # MSB = 59
pb() # PBS == 2, down 5 semitones
outport.send(controls.gm_on()) # reset all
pb() # PBS == 2
outport.send(cc(C.RPN_MSB, value=0))
outport.send(cc(C.RPN_LSB, value=2)) # RPN -> Coarse Tune
outport.send(cc(C.DATA_MSB, value=62)) # MSB = 62
pb() # PBS == 2, down 2 semitones
outport.send(cc(C.DATA_LSB, value=32))
pb() # PBS == 2, down 2 semitones
outport.send(cc(C.DATA_DEC, value=0)) # MSB = 60
pb() # PBS == 2, down 4 semitones
outport.send(cc(C.DATA_DEC, value=0)) # MSB = 59
pb() # PBS == 2, down 5 semitones
outport.send(controls.gm_on()) # reset all
pb() # PBS == 2
outport.send(cc(C.RPN_MSB, value=0))
outport.send(cc(C.RPN_LSB, value=2)) # RPN -> Coarse Tune
outport.send(cc(C.DATA_MSB, value=65)) # MSB = 62
pb() # PBS == 2, up 1 semitone
outport.send(cc(C.RPN_LSB, value=0)) # RPN -> PBS
pb() # PBS ==2, up 1 semitone
outport.send(cc(C.DATA_LSB, value=42)) # LSB = 42
pb() # PBS == 2, still
outport.send(cc(C.DATA_DEC, value=0)) # MSB = 41
pb() # PBS == 41 (clamped to 24)
outport.send(controls.gm_on()) # reset all
pb()
outport.send(mido.Message.from_hex(
"F0 43 10 4C 00 00 04 03 F7"))
pb() # XG volume
outport.send(mido.Message.from_hex(
"F0 7F 7F 04 01 00 48 F7"))
pb() # USE volume
outport.send(mido.Message.from_hex(
"F0 43 11 4C 00 00 7F 00 F7"))
pb() # XG Reset
inport = mido.open_input('DGX-505 MIDI 1')
list(inport.iter_pending())
outport.send(mido.Message.from_hex(
"F0 43 40 4C 00 00 00 F7")) # request
outport.send(mido.Message.from_hex(
"F0 43 20 4C 00 00 01 F7")) # dump request
list(inport.iter_pending())
inport.close()
outport.close()
outport = mido.open_output('DGX-505 MIDI 1')
pb() # MSB = 0, probably
outport.send(cc(C.DATA_INC, value=0)) # MSB = 1
pb() # PBS == 2
outport.send(cc(C.RPN_MSB, value=0))
outport.send(cc(C.RPN_LSB, value=0)) # RPN -> PBS
outport.send(cc(C.DATA_INC, value=0)) # MSB = 2
pb() # PBS == 2
outport.send(cc(C.DATA_INC, value=0)) # MSB = 3
pb() # PBS == 3
outport.close()
# power on
outport = mido.open_output('DGX-505 MIDI 1')
pb()
outport.send(cc(C.RPN_MSB, value=0))
outport.send(cc(C.RPN_LSB, value=2)) # RPN -> Coarse Tuning
outport.send(cc(C.DATA_DEC, value=0)) # MSB = 0
pb() # Coarse Tuning = 64
outport.send(cc(C.DATA_INC, value=0)) # MSB
o = mido.open_output('DGX-505 MIDI 1')
def pulse(port, sleep=0.5, note=60, velocity=100, channel=0):
port.send(mido.Message('note_on', note=note, velocity=velocity, channel=channel))
time.sleep(sleep)
    port.send(mido.Message('note_on', note=note, velocity=0, channel=channel))
time.sleep(sleep)
pulse(o)
# Pitch Bend
o.send(controls.gm_on())
pulse(o)
o.send(mido.Message('pitchwheel', pitch=-8192))
pulse(o)
o.send(cc(C.RESET_CONTROLS, value=0))
pulse(o)
def pulsetest(port, m, channel=0, reset=True, voice=None, *args, **kwargs):
if reset:
port.send(controls.gm_on())
if voice is not None:
controls.multisend(port, controls.set_voice_numbers(voice, channel=channel))
pulse(port, channel=channel, *args, **kwargs)
port.send(m)
pulse(port, channel=channel, *args, **kwargs)
port.send(cc(C.RESET_CONTROLS, value=0, channel=channel))
pulse(port, channel=channel, *args, **kwargs)
pulsetest(o, cc(C.RELEASE, value=0), voice=103)
pulsetest(o, cc(C.ATTACK, value=80), voice=103)
pulsetest(o, cc(C.ATTACK, value=80), voice=120)
pulsetest(o, cc(C.HARMONIC, value=127), voice=103)
pulsetest(o, cc(C.BRIGHTNESS, value=0), voice=1)
def porta(port, sleep=0.5, note1=60, note2=64, velocity=100, channel=0, reset=True, voice=None):
if reset:
port.send(controls.gm_on())
if voice is not None:
controls.multisend(port, controls.set_voice_numbers(voice, channel=channel))
port.send(mido.Message('note_on', note=note1, velocity=velocity, channel=channel))
time.sleep(sleep)
    port.send(cc(C.PORTAMENTO_CTRL, value=note1, channel=channel))
port.send(mido.Message('note_on', note=note2, velocity=1, channel=channel))
time.sleep(sleep)
port.send(mido.Message('note_on', note=note2, velocity=0, channel=channel))
time.sleep(sleep)
port.send(mido.Message('note_on', note=note1, velocity=0, channel=channel))
time.sleep(sleep)
port.send(mido.Message('note_on', note=note1, velocity=velocity, channel=channel))
time.sleep(sleep)
    port.send(cc(C.PORTAMENTO_CTRL, value=note1, channel=channel))
port.send(cc(C.RESET_CONTROLS, value=0, channel=channel))
port.send(mido.Message('note_on', note=note2, velocity=1, channel=channel))
time.sleep(sleep)
port.send(mido.Message('note_on', note=note2, velocity=0, channel=channel))
time.sleep(sleep)
port.send(mido.Message('note_on', note=note1, velocity=0, channel=channel))
time.sleep(sleep)
port.send(mido.Message('note_on', note=note1, velocity=velocity, channel=channel))
time.sleep(sleep)
    port.send(cc(C.PORTAMENTO_CTRL, value=note1, channel=channel))
port.send(mido.Message('note_on', note=note2, velocity=1, channel=channel))
port.send(cc(C.RESET_CONTROLS, value=0, channel=channel))
time.sleep(sleep)
port.send(mido.Message('note_on', note=note2, velocity=0, channel=channel))
time.sleep(sleep)
port.send(mido.Message('note_on', note=note1, velocity=0, channel=channel))
time.sleep(sleep)
porta(o, voice=24)
def pbp(port, sleep=0.1, channel=0, note=64, velocity=100):
port.send(mido.Message('pitchwheel', pitch=0, channel=channel))
port.send(mido.Message('note_on', note=note, velocity=velocity, channel=channel))
time.sleep(sleep)
port.send(mido.Message('pitchwheel', pitch=+8191, channel=channel))
time.sleep(sleep)
    port.send(mido.Message('note_off', note=note, channel=channel))
time.sleep(sleep)
o.send(controls.gm_on())
controls.multisend(o, controls.set_voice_numbers(1))
o.send(cc(C.RPN_MSB, value=0))
o.send(cc(C.RPN_LSB, value=2))
pbp(o)
o.send(cc(C.DATA_MSB, value=65))
pbp(o)
o.send(cc(C.RESET_CONTROLS, value=0))
pbp(o)
pbp(o)
o.send(cc(C.RPN_MSB, value=0))
o.send(cc(C.DATA_INC, value=0))
pbp(o)
o.send(cc(C.RPN_LSB, value=2))
o.send(cc(C.DATA_INC, value=0))
pbp(o)
i = mido.open_input('DGX-505 MIDI 1')
list(i.iter_pending())
d1 = list(i.iter_pending())
cs = controlstate.MidiControlState()
def grab():
return list(cs.feed_message(m) for m in i.iter_pending())
w1 = [cs.feed_message(m) for m in d1]
o.send(controls.master_vol(2))
w2 = grab()
all(a.message == b.message for a, b in zip(w1, w2))
grab()
w3 = grab()
[(a, b) for a, b in zip(w2, w3) if a.message != b.message]
o.send(mido.Message.from_hex("F0 43 10 4C 00 00 7E 00 F7")) # XG ON
w4 = grab()
[(a, b) for a, b in zip(w3, w4) if a.message != b.message]
grab()
w5 = grab()
[(a, b) for a, b in zip(w4, w5) if a.message != b.message]
o.send(mido.Message.from_hex("F0 43 10 4C 00 00 7F 00 F7")) # XG RESET
w6 = grab()
[(a, b) for a, b in zip(w5, w6) if a.message != b.message]
o.send(controls.master_tune_val(100))
o.send(controls.gm_on())
o.send(mido.Message.from_hex("F0 43 10 4C 00 00 7E 00 F7")) # XG ON
o.send(mido.Message.from_hex("F0 43 10 4C 00 00 7F 00 F7")) # XG RESET
grab()
i.close()
i = mido.open_input('DGX-505 MIDI 1')
grab()
progs = (0,1,8,16,24,25,27,32,40,48)
o.send(cc(C.BANK_MSB, value=127))
o.send(mido.Message('program_change', program=0))
for p in progs:
o.send(mido.Message('program_change', program=p))
o.send(mido.Message('note_on', note=40, velocity=33))
time.sleep(0.2)
o.send(cc(C.BANK_LSB, value=127))
from commons.messages import voices
def voice_list(prog):
v = []
for l in range(128):
try:
v.append(voices.from_bank_program(0, l, prog))
except KeyError:
pass
return v
[voice_list(x) for x in progs]
organvoices = voice_list(16)
print('\n'.join(x.voice_string_extended() for x in organvoices))
controls.multisend(o, controls.set_voice_numbers(176, channel=1))
controls.multisend(o, controls.set_voice_numbers(19, channel=2))
pulse(o, channel=1, note=40)
pulse(o, channel=2, note=40)
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, channel=0, note=40)
o.send(cc(C.BANK_MSB, value=127, channel=0))
o.send(cc(C.BANK_LSB, value=0, channel=0))
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, channel=0, note=40)
o.send(cc(C.BANK_LSB, value=112, channel=0))
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, channel=0, note=40)
o.send(cc(C.BANK_MSB, value=0, channel=0))
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, channel=0, note=40)
o.send(cc(C.BANK_MSB, value=127, channel=0))
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, channel=0, note=40)
o.send(cc(C.BANK_MSB, value=0, channel=0))
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, channel=0, note=40)
olsbs = [v.lsb for v in organvoices]
olsbs
o.send(cc(C.BANK_LSB, value=114, channel=0))
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, sleep=0.2, channel=0, note=40)
o.send(cc(C.BANK_LSB, value=115, channel=0))
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, sleep=0.2, channel=0, note=40)
o.send(cc(C.BANK_MSB, value=0, channel=0))
for l in range(120):
print(l, end=" ")
o.send(cc(C.BANK_LSB, value=l, channel=0))
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, sleep=0.1, channel=0, note=40)
o.send(cc(C.BANK_LSB, value=0, channel=0))
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, sleep=0.2, channel=0, note=40)
o.send(cc(C.BANK_LSB, value=8, channel=0))
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, sleep=0.2, channel=0, note=40)
o.send(cc(C.BANK_LSB, value=32, channel=0))
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, sleep=0.2, channel=0, note=40)
o.send(cc(C.BANK_LSB, value=8, channel=0))
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, sleep=0.2, channel=0, note=40)
# change to violins
o.send(mido.Message('program_change', program=40, channel=0))
pulse(o, sleep=0.2, channel=0, note=40)
o.send(cc(C.BANK_LSB, value=0, channel=0))
o.send(mido.Message('program_change', program=40, channel=0))
pulse(o, sleep=0.2, channel=0, note=40)
o.send(cc(C.BANK_LSB, value=8, channel=0))
o.send(mido.Message('program_change', program=40, channel=0))
pulse(o, sleep=0.2, channel=0, note=40)
o.send(cc(C.BANK_LSB, value=114, channel=0))
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, sleep=0.2, channel=0, note=40)
o.send(cc(C.RESET_CONTROLS, value=0, channel=0))
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, sleep=0.2, channel=0, note=40)
o.send(cc(C.BANK_MSB, value=0, channel=0))
o.send(cc(C.BANK_LSB, value=114, channel=0))
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, sleep=0.2, channel=0, note=40)
o.send(cc(C.BANK_MSB, value=62, channel=0))
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, sleep=0.2, channel=0, note=40)
o.send(cc(C.BANK_LSB, value=116, channel=0))
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, sleep=0.2, channel=0, note=40)
o.send(cc(C.BANK_MSB, value=0, channel=0))
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, sleep=0.2, channel=0, note=40)
o.send(cc(C.BANK_LSB, value=114, channel=0))
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, sleep=0.2, channel=0, note=40)
allvoices = [voices.from_number(x) for x in range(1, 495)]
zerovoices = sorted((v for v in allvoices if v.lsb == 0), key=lambda v: (v.msb, v.prog))
zerovoices
for v in allvoices:
try:
voices.from_bank_program(v.msb, 0, v.prog)
except KeyError:
print(v)
sorted((v for v in allvoices if v.msb != 0 and v.lsb != 0), key=lambda v: (v.msb, v.lsb, v.prog))
o.send(cc(C.BANK_MSB, value=127, channel=0))
for p in range(128):
print(p, end=" ")
o.send(mido.Message('program_change', program=p, channel=0))
pulse(o, sleep=0.1, channel=0, note=40)
o.send(cc(C.BANK_MSB, value=126, channel=0))
o.send(cc(C.BANK_LSB, value=0, channel=0))
for p in range(128):
print(p, end=" ")
o.send(mido.Message('program_change', program=p, channel=0))
pulse(o, sleep=0.05, channel=0, note=84)
progs
o.send(cc(C.BANK_MSB, value=127, channel=0))
o.send(mido.Message('program_change', program=0, channel=0))
pulse(o, sleep=0.2, channel=0, note=40)
o.send(mido.Message('program_change', program=3, channel=0))
pulse(o, sleep=0.2, channel=0, note=40)
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, sleep=0.2, channel=0, note=40)
o.send(mido.Message('program_change', program=3, channel=0))
pulse(o, sleep=0.2, channel=0, note=40)
o.send(cc(C.BANK_MSB, value=126, channel=0))
o.send(cc(C.BANK_LSB, value=0, channel=0))
o.send(mido.Message('program_change', program=0, channel=0))
pulse(o, sleep=0.2, channel=0, note=84)
o.send(mido.Message('program_change', program=6, channel=0))
pulse(o, sleep=0.2, channel=0, note=84)
o.send(mido.Message('program_change', program=1, channel=0))
pulse(o, sleep=0.2, channel=0, note=84)
o.send(mido.Message('program_change', program=3, channel=0))
pulse(o, sleep=0.2, channel=0, note=84 )
voice_list(40)
o.send(cc(C.BANK_MSB, value=64, channel=0))
for p in range(128):
print(p, end=" ")
o.send(mido.Message('program_change', program=p, channel=0))
pulse(o, sleep=0.05, channel=0, note=40)
o.send(cc(C.BANK_MSB, value=64, channel=0))
o.send(cc(C.BANK_LSB, value=0, channel=0))
o.send(mido.Message('program_change', program=70, channel=0))
pulse(o, sleep=0.2, channel=0, note=84)
o.send(mido.Message('program_change', program=71, channel=0))
pulse(o, sleep=0.2, channel=0, note=84)
o.send(mido.Message('program_change', program=64, channel=0))
pulse(o, sleep=0.2, channel=0, note=84)
o.send(mido.Message('program_change', program=71, channel=0))
pulse(o, sleep=0.2, channel=0, note=84 )
o.send(cc(C.BANK_MSB, value=64, channel=0))
for l in range(128):
print(l, end=" ")
o.send(cc(C.BANK_LSB, value=l, channel=0))
o.send(mido.Message('program_change', program=70, channel=0))
pulse(o, sleep=0.1, channel=0, note=40)
o.send(cc(C.BANK_MSB, value=126, channel=0))
for l in range(10):
print(l, end=" ")
o.send(cc(C.BANK_LSB, value=l, channel=0))
o.send(mido.Message('program_change', program=0, channel=0))
pulse(o, sleep=0.1, channel=0, note=84)
o.send(cc(C.BANK_MSB, value=126, channel=0))
o.send(cc(C.BANK_LSB, value=1, channel=0))
o.send(mido.Message('program_change', program=0, channel=0))
pulse(o, sleep=0.2, channel=0, note=84)
o.send(mido.Message('program_change', program=6, channel=0))
pulse(o, sleep=0.2, channel=0, note=84)
o.send(mido.Message('program_change', program=1, channel=0))
pulse(o, sleep=0.2, channel=0, note=84)
o.send(mido.Message('program_change', program=3, channel=0))
pulse(o, sleep=0.2, channel=0, note=84 )
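# A guess at how the tone generator resolves (bank MSB, bank LSB, program), based on
# the listening tests above; my reading, not an official rule. The received MSB/LSB
# values are always remembered, but at voice-selection time the LSB seems to fall
# back to 0 when the exact combination doesn't exist (MSB 0 has a voice for every
# program). The kit banks look different: an unsupported program on the drum bank
# (MSB 127) keeps the previous kit, and on the SFX kit bank (MSB 126) nothing plays.
def hypothetical_resolve(msb, lsb, prog):
    try:
        return voices.from_bank_program(msb, lsb, prog)   # exact match first
    except KeyError:
        pass
    if msb in (126, 127):
        return None   # kit banks don't seem to fall back; keep old kit / stay silent
    try:
        return voices.from_bank_program(msb, 0, prog)     # LSB falls back to 0
    except KeyError:
        return None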
o.send(controls.gm_on())
pulse(o, sleep=0.2, channel=0, note=60)
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, sleep=0.2, channel=0, note=60)
o.send(cc(C.BANK_LSB, value=113, channel=0))
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, sleep=0.2, channel=0, note=60)
o.send(cc(C.BANK_LSB, value=0, channel=0))
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, sleep=0.2, channel=0, note=60)
# drum channel
pulse(o, sleep=0.2, channel=9, note=40)
o.send(mido.Message('program_change', program=16, channel=9))
pulse(o, sleep=0.2, channel=9, note=40)
o.send(mido.Message('program_change', program=0, channel=9))
pulse(o, sleep=0.2, channel=9, note=40)
o.send(mido.Message.from_hex("F0 43 10 4C 00 00 7E 00 F7")) # XG ON
pulse(o, sleep=0.2, channel=0, note=60)
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, sleep=0.2, channel=0, note=60)
o.send(cc(C.BANK_LSB, value=113, channel=0))
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, sleep=0.2, channel=0, note=60)
o.send(cc(C.BANK_LSB, value=0, channel=0))
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, sleep=0.2, channel=0, note=60)
# drum channel
pulse(o, sleep=0.2, channel=9, note=40)
o.send(mido.Message('program_change', program=16, channel=9))
pulse(o, sleep=0.2, channel=9, note=40)
o.send(mido.Message('program_change', program=0, channel=9))
pulse(o, sleep=0.2, channel=9, note=40)
o.send(mido.Message.from_hex("F0 43 10 4C 00 00 7F 00 F7")) # XG RESET
pulse(o, sleep=0.2, channel=0, note=60)
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, sleep=0.2, channel=0, note=60)
o.send(cc(C.BANK_LSB, value=113, channel=0))
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, sleep=0.2, channel=0, note=60)
o.send(cc(C.BANK_LSB, value=0, channel=0))
o.send(mido.Message('program_change', program=16, channel=0))
pulse(o, sleep=0.2, channel=0, note=60)
# drum channel
pulse(o, sleep=0.2, channel=9, note=40)
o.send(mido.Message('program_change', program=16, channel=9))
pulse(o, sleep=0.2, channel=9, note=40)
o.send(mido.Message('program_change', program=0, channel=9))
pulse(o, sleep=0.2, channel=9, note=40)
o.send(controls.gm_on())
controls.multisend(o, controls.set_voice_numbers(1))
pbp(o)
o.send(cc(C.DATA_MSB, value=59))
o.send(cc(C.RPN_MSB, value=0))
o.send(cc(C.RPN_LSB, value=2))
pbp(o)
o.send(cc(C.DATA_INC, value=0))
pbp(o)
controls.multisend(o, controls.set_voice_numbers(1, channel=2))
pbp(o, channel=2)
#o.send(cc(C.DATA_MSB, value=59, channel=1))
o.send(cc(C.RPN_MSB, value=0, channel=2))
o.send(cc(C.RPN_LSB, value=2, channel=2))
pbp(o, channel=2)
o.send(cc(C.DATA_INC, value=0, channel=2))
pbp(o, channel=2)
controls.xg_on()
ii = mido.open_input('DGX-505 MIDI 1')
def grabm(p):
return list(p.iter_pending())
o = mido.open_output('DGX-505 MIDI 1')
for i in range(16):
o.send(controls.master_tune_val(-100))
controls.multisend(o, controls.set_voice_numbers(60))
pulse(o, 0.1)
o.send(controls.xg_parameter_change(0, 0, 0x7E, 0, n=i))
pulse(o, 0.1)
o.send(controls.xg_parameter_change(0, 0, 0x7F, 0, n=i))
pulse(o, 0.1)
grabm(ii)
o.send(controls.cc(controls.Control.LOCAL, value=0x70, channel=15))
o.send(controls.gm_on())
o.send(controls.master_vol(100))
pulse(o, 0.1)
o.send(controls.xg_parameter_change(0, 0, 0x7E, 0, n=i))
pulse(o, 0.1)
o.send(controls.master_vol(127))
pulse(o, 0.1)
o.send(controls.xg_parameter_change(0, 0, 0x7E, 0, n=i))
pulse(o, 0.1)
o.send(controls.xg_reset())
controls.multisend(o, controls.set_voice_numbers(60))
o.send(mido.Message('note_on', note=60))
time.sleep(0.5)
o.send(controls.cc(controls.Control.PORTAMENTO_CTRL, value=60))
o.send(mido.Message('note_on', note=60))
#o.send(mido.Message('note_on', note=61))
time.sleep(0.5)
o.send(mido.Message('note_off', note=60))
o.send(mido.Message('note_off', note=60))
#o.send(mido.Message('note_off', note=61))
o.send(controls.xg_reset())
controls.multisend(o, controls.set_voice_numbers(60))
o.send(mido.Message('note_on', note=60))
time.sleep(0.5)
controls.multisend(o, controls.set_voice_numbers(101))
o.send(mido.Message('note_on', note=60))
time.sleep(0.5)
o.send(controls.cc(controls.Control.PORTAMENTO_CTRL, value=60))
o.send(mido.Message('note_on', note=61))
time.sleep(0.5)
o.send(controls.cc(controls.Control.PORTAMENTO_CTRL, value=60))
o.send(mido.Message('note_on', note=61))
time.sleep(0.5)
o.send(mido.Message('note_off', note=61))
time.sleep(0.5)
o.send(mido.Message('note_off', note=61))
o.send(controls.xg_reset())
controls.multisend(o, controls.set_voice_numbers(122))
o.send(mido.Message('note_on', note=29))
time.sleep(0.5)
o.send(controls.cc(controls.Control.PORTAMENTO_CTRL, value=29))
o.send(mido.Message('note_on', note=28))
time.sleep(0.5)
o.send(mido.Message('note_off', note=28))
o.send(controls.xg_reset())
controls.multisend(o, controls.set_voice_numbers(133))
o.send(mido.Message('note_on', note=36))
time.sleep(0.5)
o.send(controls.cc(controls.Control.PORTAMENTO_CTRL, value=36))
o.send(mido.Message('note_on', note=37))
time.sleep(0.5)
o.send(mido.Message('note_off', note=37))
o.send(controls.xg_reset())
controls.multisend(o, controls.set_voice_numbers(474))
o.send(mido.Message('note_on', note=36))
time.sleep(0.1)
o.send(controls.cc(controls.Control.PORTAMENTO_CTRL, value=36))
o.send(mido.Message('note_on', note=37))
time.sleep(0.5)
o.send(mido.Message('note_off', note=37))
o.send(controls.xg_reset())
controls.multisend(o, controls.set_voice_numbers(434))
o.send(mido.Message('note_on', note=60, velocity=127))
time.sleep(0.2)
o.send(controls.cc(controls.Control.PORTAMENTO_CTRL, value=60))
o.send(mido.Message('note_on', note=61, velocity=127))
time.sleep(0.5)
o.send(mido.Message('note_off', note=61))
time.sleep(1)
o.send(cc(C.ATTACK, value=50))
o.send(cc(C.BRIGHTNESS, value=127))
o.send(cc(C.RELEASE, value=127))
o.send(mido.Message('note_on', note=60, velocity=127))
time.sleep(0.2)
o.send(controls.cc(controls.Control.PORTAMENTO_CTRL, value=60))
o.send(mido.Message('note_on', note=61, velocity=127))
time.sleep(0.5)
o.send(mido.Message('note_off', note=61))
voices.from_number(434)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create "chicago_taxi.train" and "chicago_taxi.eval" BQ tables to store results.
Step2: Sanity check on the data.
Step3: Explore Data
Step4: Step 1
Step5: Step 2
Step6: Transform takes about 6 hours in the cloud. Data is fairly big (33GB) and processing locally on a single VM would take much longer.
Step7: Step 3
Step8: Training takes about 30 min with the "STANDARD_1" scale_tier. Note that we will perform 1M steps. This will take much longer if we run it locally on Datalab's VM. With CloudML Engine, it runs training in a distributed way with multiple VMs, so it runs much faster.
Step9: Step 4
Step10: Currently, batch_prediction service does not work with BigQuery data. So we export eval data to csv file.
Step11: Run batch prediction. Note that we use evaluation_model because it takes input data with target (truth) column.
Step12: Once batch prediction is done, check results files. Batch prediction service outputs to JSON files.
Step13: We can load the results back to BigQuery.
Step14: With the data in BigQuery we can do some query analysis. For example, RMSE.
Step15: From above, the results are better than the local run with sampled data. RMSE reduced by 2.5%, MAE reduced by around 20%. Average absolute error reduced by around 30%.
Step16: There is also a feature slice visualization component designed for viewing evaluation results. It shows correlation between features and prediction results.
Step17: What we can see from the above charts is that the model performs worst in hours 5 and 6 (why?), and best on Sundays (less traffic?).
|
<ASSISTANT_TASK:>
Python Code:
%%bq query --name texi_query_eval
SELECT
unique_key,
fare,
CAST(EXTRACT(DAYOFWEEK FROM trip_start_timestamp) AS STRING) as weekday,
CAST(EXTRACT(DAYOFYEAR FROM trip_start_timestamp) AS STRING) as day,
CAST(EXTRACT(HOUR FROM trip_start_timestamp) AS STRING) as hour,
pickup_latitude,
pickup_longitude,
dropoff_latitude,
dropoff_longitude,
company
FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`
WHERE
fare > 2.0 AND fare < 200.0 AND
pickup_latitude IS NOT NULL AND
pickup_longitude IS NOT NULL AND
dropoff_latitude IS NOT NULL AND
dropoff_longitude IS NOT NULL AND
MOD(ABS(FARM_FINGERPRINT(unique_key)), 100) < 5
%%bq query --name texi_query_train
SELECT
unique_key,
fare,
CAST(EXTRACT(DAYOFWEEK FROM trip_start_timestamp) AS STRING) as weekday,
CAST(EXTRACT(DAYOFYEAR FROM trip_start_timestamp) AS STRING) as day,
CAST(EXTRACT(HOUR FROM trip_start_timestamp) AS STRING) as hour,
pickup_latitude,
pickup_longitude,
dropoff_latitude,
dropoff_longitude,
company
FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`
WHERE
fare > 2.0 AND fare < 200.0 AND
pickup_latitude IS NOT NULL AND
pickup_longitude IS NOT NULL AND
dropoff_latitude IS NOT NULL AND
dropoff_longitude IS NOT NULL AND
MOD(ABS(FARM_FINGERPRINT(unique_key)), 100) >= 5
%%bq datasets create --name chicago_taxi
%%bq execute
query: texi_query_eval
table: chicago_taxi.eval
mode: overwrite
%%bq execute
query: texi_query_train
table: chicago_taxi.train
mode: overwrite
%%bq query
SELECT count(*) FROM chicago_taxi.train
%%bq query
SELECT count(*) FROM chicago_taxi.eval
import google.datalab.contrib.mlworkbench.commands # this loads the %%ml commands
%%ml dataset create
name: taxi_data_full
format: bigquery
train: chicago_taxi.train
eval: chicago_taxi.eval
!gsutil mb gs://datalab-chicago-taxi-demo # Create a Storage Bucket to store results.
!gsutil rm -r -f gs://datalab-chicago-taxi-demo/analysis # Remove previous analysis results if any
%%ml analyze --cloud
output: gs://datalab-chicago-taxi-demo/analysis
data: taxi_data_full
features:
unique_key:
transform: key
fare:
transform: target
company:
transform: embedding
embedding_dim: 10
weekday:
transform: one_hot
day:
transform: one_hot
hour:
transform: one_hot
pickup_latitude:
transform: scale
pickup_longitude:
transform: scale
dropoff_latitude:
transform: scale
dropoff_longitude:
transform: scale
!gsutil -m rm -r -f gs://datalab-chicago-taxi-demo/transform # Remove previous transform results if any.
%%ml transform --cloud
output: gs://datalab-chicago-taxi-demo/transform
analysis: gs://datalab-chicago-taxi-demo/analysis
data: taxi_data_full
!gsutil list gs://datalab-chicago-taxi-demo/transform/eval-*
%%ml dataset create
name: taxi_data_transformed
format: transformed
train: gs://datalab-chicago-taxi-demo/transform/train-*
eval: gs://datalab-chicago-taxi-demo/transform/eval-*
!gsutil -m rm -r -f gs://datalab-chicago-taxi-demo/train # Remove previous training results.
%%ml train --cloud
output: gs://datalab-chicago-taxi-demo/train
analysis: gs://datalab-chicago-taxi-demo/analysis
data: taxi_data_transformed
model_args:
model: dnn_regression
hidden-layer-size1: 400
hidden-layer-size2: 200
train-batch-size: 1000
max-steps: 1000000
cloud_config:
region: us-east1
scale_tier: STANDARD_1
# Delete previous results
!gsutil -m rm -r gs://datalab-chicago-taxi-demo/batch_prediction
%%bq extract
table: chicago_taxi.eval
format: csv
path: gs://datalab-chicago-taxi-demo/eval.csv
%%ml batch_predict --cloud
model: gs://datalab-chicago-taxi-demo/train/evaluation_model
output: gs://datalab-chicago-taxi-demo/batch_prediction
format: csv
data:
csv: gs://datalab-chicago-taxi-demo/eval.csv
cloud_config:
region: us-east1
!gsutil list -l -h gs://datalab-chicago-taxi-demo/batch_prediction
%%bq load
format: json
mode: overwrite
table: chicago_taxi.eval_results
path: gs://datalab-chicago-taxi-demo/batch_prediction/prediction.results*
schema:
- name: unique_key
type: STRING
- name: predicted
type: FLOAT
- name: target
type: FLOAT
%%ml evaluate regression
bigquery: chicago_taxi.eval_results
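# As a sanity check, the same headline numbers can also be computed directly in
# BigQuery against the eval_results table loaded above:
%%bq query
SELECT
  SQRT(AVG(POW(predicted - target, 2))) AS rmse,
  AVG(ABS(predicted - target)) AS mae
FROM `chicago_taxi.eval_results`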
%%bq query
SELECT
predicted,
target,
ABS(predicted-target) as error,
s.*
FROM `chicago_taxi.eval_results` as r
JOIN `chicago_taxi.eval` as s
ON r.unique_key = s.unique_key
ORDER BY error DESC
LIMIT 10
%%bq query --name error_by_hour
SELECT
COUNT(*) as count,
hour as feature,
AVG(ABS(predicted - target)) as avg_error,
STDDEV(ABS(predicted - target)) as stddev_error
FROM `chicago_taxi.eval_results` as r
JOIN `chicago_taxi.eval` as s
ON r.unique_key = s.unique_key
GROUP BY hour
# Note: the interactive output is replaced with a static image so it displays well in github.
# Please execute this cell to see the interactive component.
from google.datalab.ml import FeatureSliceView
FeatureSliceView().plot(error_by_hour)
%%bq query --name error_by_weekday
SELECT
COUNT(*) as count,
weekday as feature,
AVG(ABS(predicted - target)) as avg_error,
STDDEV(ABS(predicted - target)) as stddev_error
FROM `chicago_taxi.eval_results` as r
JOIN `chicago_taxi.eval` as s
ON r.unique_key = s.unique_key
GROUP BY weekday
# Note: the interactive output is replaced with a static image so it displays well in github.
# Please execute this cell to see the interactive component.
from google.datalab.ml import FeatureSliceView
FeatureSliceView().plot(error_by_weekday)
!gsutil -m rm -rf gs://datalab-chicago-taxi-demo
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Step3: Before you begin
Step4: Region
Step5: Timestamp
Step6: Authenticate your Google Cloud account
Step7: Create a Cloud Storage bucket
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Step11: Vertex constants
Step12: AutoML constants
Step13: Hardware Accelerators
Step14: Container (Docker) image
Step15: Tutorial
Step16: Dataset
Step17: Quick peek at your data
Step18: Dataset
Step19: Now save the unique dataset identifier for the Dataset resource instance you created.
Step20: Train the model
Step21: Construct the task requirements
Step22: Now save the unique identifier of the training pipeline you created.
Step23: Get information on a training pipeline
Step24: Deployment
Step25: Model information
Step26: Model deployment for batch prediction
Step27: Make the batch input file
Step28: Compute instance scaling
Step29: Make batch prediction request
Step30: Now get the unique identifier for the batch prediction job you created.
Step31: Get information on a batch prediction job
Step33: Get Predictions
Step34: Cleaning up
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
! pip3 install -U google-cloud-storage $USER_FLAG
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
REGION = "us-central1" # @param {type: "string"}
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
! gsutil mb -l $REGION $BUCKET_NAME
! gsutil ls -al $BUCKET_NAME
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Struct, Value
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
# Tabular Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/tables_1.0.0.yaml"
# Tabular Labeling type
LABEL_SCHEMA = (
"gs://google-cloud-aiplatform/schema/dataset/ioformat/table_io_format_1.0.0.yaml"
)
# Tabular Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_tables_1.0.0.yaml"
if os.getenv("IS_TESTING_DEPOLY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPOLY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
client = aip.DatasetServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_pipeline_client():
client = aip.PipelineServiceClient(client_options=client_options)
return client
def create_job_client():
client = aip.JobServiceClient(client_options=client_options)
return client
clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["job"] = create_job_client()
for client in clients.items():
print(client)
IMPORT_FILE = "gs://cloud-ml-tables-data/bank-marketing.csv"
count = ! gsutil cat $IMPORT_FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $IMPORT_FILE | head
heading = ! gsutil cat $IMPORT_FILE | head -n1
label_column = str(heading).split(",")[-1].split("'")[0]
print("Label Column Name", label_column)
if label_column is None:
raise Exception("label column missing")
TIMEOUT = 90
def create_dataset(name, schema, src_uri=None, labels=None, timeout=TIMEOUT):
start_time = time.time()
try:
if src_uri.startswith("gs://"):
metadata = {"input_config": {"gcs_source": {"uri": [src_uri]}}}
elif src_uri.startswith("bq://"):
metadata = {"input_config": {"bigquery_source": {"uri": [src_uri]}}}
dataset = aip.Dataset(
display_name=name,
metadata_schema_uri=schema,
labels=labels,
metadata=json_format.ParseDict(metadata, Value()),
)
operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
print("Long running operation:", operation.operation.name)
        result = operation.result(timeout=timeout)
print("time:", time.time() - start_time)
print("response")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" metadata_schema_uri:", result.metadata_schema_uri)
print(" metadata:", dict(result.metadata))
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
print(" etag:", result.etag)
print(" labels:", dict(result.labels))
return result
except Exception as e:
print("exception:", e)
return None
result = create_dataset("bank-" + TIMESTAMP, DATA_SCHEMA, src_uri=IMPORT_FILE)
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
def create_pipeline(pipeline_name, model_name, dataset, schema, task):
dataset_id = dataset.split("/")[-1]
input_config = {
"dataset_id": dataset_id,
"fraction_split": {
"training_fraction": 0.8,
"validation_fraction": 0.1,
"test_fraction": 0.1,
},
}
training_pipeline = {
"display_name": pipeline_name,
"training_task_definition": schema,
"training_task_inputs": task,
"input_data_config": input_config,
"model_to_upload": {"display_name": model_name},
}
try:
pipeline = clients["pipeline"].create_training_pipeline(
parent=PARENT, training_pipeline=training_pipeline
)
print(pipeline)
except Exception as e:
print("exception:", e)
return None
return pipeline
TRANSFORMATIONS = [
{"auto": {"column_name": "Age"}},
{"auto": {"column_name": "Job"}},
{"auto": {"column_name": "MaritalStatus"}},
{"auto": {"column_name": "Education"}},
{"auto": {"column_name": "Default"}},
{"auto": {"column_name": "Balance"}},
{"auto": {"column_name": "Housing"}},
{"auto": {"column_name": "Loan"}},
{"auto": {"column_name": "Contact"}},
{"auto": {"column_name": "Day"}},
{"auto": {"column_name": "Month"}},
{"auto": {"column_name": "Duration"}},
{"auto": {"column_name": "Campaign"}},
{"auto": {"column_name": "PDays"}},
{"auto": {"column_name": "POutcome"}},
]
PIPE_NAME = "bank_pipe-" + TIMESTAMP
MODEL_NAME = "bank_model-" + TIMESTAMP
task = Value(
struct_value=Struct(
fields={
"target_column": Value(string_value=label_column),
"prediction_type": Value(string_value="classification"),
"train_budget_milli_node_hours": Value(number_value=1000),
"disable_early_stopping": Value(bool_value=False),
"transformations": json_format.ParseDict(TRANSFORMATIONS, Value()),
}
)
)
response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]
print(pipeline_id)
def get_training_pipeline(name, silent=False):
response = clients["pipeline"].get_training_pipeline(name=name)
if silent:
return response
print("pipeline")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" state:", response.state)
print(" training_task_definition:", response.training_task_definition)
print(" training_task_inputs:", dict(response.training_task_inputs))
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", dict(response.labels))
return response
response = get_training_pipeline(pipeline_id)
while True:
response = get_training_pipeline(pipeline_id, True)
if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_to_deploy_id = None
if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
raise Exception("Training Job Failed")
else:
model_to_deploy = response.model_to_upload
model_to_deploy_id = model_to_deploy.name
print("Training Time:", response.end_time - response.start_time)
break
time.sleep(60)
print("model to deploy:", model_to_deploy_id)
def list_model_evaluations(name):
response = clients["model"].list_model_evaluations(parent=name)
for evaluation in response:
print("model_evaluation")
print(" name:", evaluation.name)
print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
metrics = json_format.MessageToDict(evaluation._pb.metrics)
for metric in metrics.keys():
print(metric)
print("logloss", metrics["logLoss"])
print("auPrc", metrics["auPrc"])
return evaluation.name
last_evaluation = list_model_evaluations(model_to_deploy_id)
HEADING = "Age,Job,MaritalStatus,Education,Default,Balance,Housing,Loan,Contact,Day,Month,Duration,Campaign,PDays,Previous,POutcome,Deposit"
INSTANCE_1 = (
"58,managment,married,teritary,no,2143,yes,no,unknown,5,may,261,1,-1,0, unknown"
)
INSTANCE_2 = (
"44,technician,single,secondary,no,39,yes,no,unknown,5,may,151,1,-1,0,unknown"
)
import tensorflow as tf
gcs_input_uri = BUCKET_NAME + "/test.csv"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
f.write(HEADING + "\n")
f.write(str(INSTANCE_1) + "\n")
f.write(str(INSTANCE_2) + "\n")
print(gcs_input_uri)
! gsutil cat $gcs_input_uri
MIN_NODES = 1
MAX_NODES = 1
BATCH_MODEL = "bank_batch-" + TIMESTAMP
def create_batch_prediction_job(
display_name,
model_name,
gcs_source_uri,
gcs_destination_output_uri_prefix,
parameters=None,
):
if DEPLOY_GPU:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_type": DEPLOY_GPU,
"accelerator_count": DEPLOY_NGPU,
}
else:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_count": 0,
}
batch_prediction_job = {
"display_name": display_name,
# Format: 'projects/{project}/locations/{location}/models/{model_id}'
"model": model_name,
"model_parameters": json_format.ParseDict(parameters, Value()),
"input_config": {
"instances_format": IN_FORMAT,
"gcs_source": {"uris": [gcs_source_uri]},
},
"output_config": {
"predictions_format": OUT_FORMAT,
"gcs_destination": {"output_uri_prefix": gcs_destination_output_uri_prefix},
},
"dedicated_resources": {
"machine_spec": machine_spec,
"starting_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
},
}
response = clients["job"].create_batch_prediction_job(
parent=PARENT, batch_prediction_job=batch_prediction_job
)
print("response")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" model:", response.model)
try:
print(" generate_explanation:", response.generate_explanation)
except:
pass
print(" state:", response.state)
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", response.labels)
return response
IN_FORMAT = "csv"
OUT_FORMAT = "csv" # [csv]
response = create_batch_prediction_job(
BATCH_MODEL, model_to_deploy_id, gcs_input_uri, BUCKET_NAME, None
)
# The full unique ID for the batch job
batch_job_id = response.name
# The short numeric ID for the batch job
batch_job_short_id = batch_job_id.split("/")[-1]
print(batch_job_id)
def get_batch_prediction_job(job_name, silent=False):
response = clients["job"].get_batch_prediction_job(name=job_name)
if silent:
return response.output_config.gcs_destination.output_uri_prefix, response.state
print("response")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" model:", response.model)
try: # not all data types support explanations
print(" generate_explanation:", response.generate_explanation)
except:
pass
print(" state:", response.state)
print(" error:", response.error)
gcs_destination = response.output_config.gcs_destination
print(" gcs_destination")
print(" output_uri_prefix:", gcs_destination.output_uri_prefix)
return gcs_destination.output_uri_prefix, response.state
predictions, state = get_batch_prediction_job(batch_job_id)
def get_latest_predictions(gcs_out_dir):
    """Get the latest prediction subfolder using the timestamp in the subfolder name"""
folders = !gsutil ls $gcs_out_dir
latest = ""
for folder in folders:
subfolder = folder.split("/")[-2]
if subfolder.startswith("prediction-"):
if subfolder > latest:
latest = folder[:-1]
return latest
while True:
predictions, state = get_batch_prediction_job(batch_job_id, True)
if state != aip.JobState.JOB_STATE_SUCCEEDED:
print("The job has not completed:", state)
if state == aip.JobState.JOB_STATE_FAILED:
raise Exception("Batch Job Failed")
else:
folder = get_latest_predictions(predictions)
! gsutil ls $folder/prediction*.csv
! gsutil cat $folder/prediction*.csv
break
time.sleep(60)
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get a database connection object:
Step2: Note: commonly used parameters accepted by MySQLdb.connect():
Step3: Set the current database to info:
Step4: Note: creating tables by operating the database from within Python is not recommended.
Step5: At this point, enter the info database in MySQL and query: select * from person; the result is as follows:
Step6: At this point, enter the info database in MySQL and query: select * from person; the result is as follows:
Step7: bash
Step8: bash
|
<ASSISTANT_TASK:>
Python Code:
import MySQLdb
conn = MySQLdb.connect(user = 'root',passwd = 'root',host = '127.0.0.1')
cur = conn.cursor()
conn.select_db('info')
sql = 'insert into person(name,age) values("Zhangsan",34)' # compose the SQL statement
sql
cur.execute(sql) # execute the SQL statement
conn.commit() # commit; without this the changes will not take effect
cur.close()
conn.close() # it is best to close the cursor and connection objects once you are done with them
sql = 'insert into person(name,age) values(%s,%s)'
cur.execute(sql,('Lisi',23)) # insert a single row
persons = [('Wangwu',32),('Zhaoliu',12),('Tianqi',45)]
cur.executemany(sql,persons) # insert multiple rows
conn.commit() # commit
sql = 'delete from person where name = "Tianqi"'
cur.execute(sql)
conn.commit()
sql = 'update person set age = 88 where name = "Zhaoliu"'
cur.execute(sql)
conn.commit()
sql = 'select * from person'
cur.execute(sql)
# fetch all results and move the cursor to the end
print 'ใOutput 1ใ'
print cur.fetchall()
# move the cursor back to the beginning
cur.scroll(0,'absolute')
# fetch the first n rows and move the cursor to position n+1
print 'ใOutput 2ใ'
print cur.fetchmany(3)
# move the cursor back to the beginning
cur.scroll(0,'absolute')
# fetch one row and advance the cursor by one position
print 'ใOutput 3ใ'
print cur.fetchone()
print cur.fetchone()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps
Step2: Inline Question #1
Step3: You should expect to see approximately 27% accuracy. Now let's try out a larger k, say k = 5
Step5: You should expect to see a slightly better performance than with k = 1.
Step6: Cross-validation
|
<ASSISTANT_TASK:>
Python Code:
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print('Training data shape: ', X_train.shape)
print('Training labels shape: ', y_train.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print(X_train.shape, X_test.shape)
from cs231n.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print(dists.shape)
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
# Let's compare how fast the implementations are
def time_function(f, *args):
    """Call a function f with args and return the time (in seconds) that it took to execute."""
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print('Two loop version took %f seconds' % two_loop_time)
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print('One loop version took %f seconds' % one_loop_time)
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print('No loop version took %f seconds' % no_loop_time)
# you should see significantly faster performance with the fully vectorized implementation
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
X_train_folds = np.array_split(X_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
for k in k_choices:
print("running {}".format(k))
k_to_accuracies[k] = []
for train_id in range(0, num_folds):
classifier = KNearestNeighbor()
classifier.train(X_train_folds[train_id], y_train_folds[train_id])
accuracy = 0
for test_id in range(0, num_folds):
if(test_id == train_id):
continue
y_test_pred = classifier.predict(X_train_folds[test_id], k)
num_correct = np.sum(y_test_pred == y_train_folds[test_id])
accuracy += float(num_correct) / len(y_train_folds[test_id])
accuracy /= (num_folds - 1)
k_to_accuracies[k].append(accuracy)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print('k = %d, accuracy = %f' % (k, accuracy))
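# Optional helper (not part of the assignment skeleton): pick k programmatically by
# the highest mean cross-validation accuracy instead of reading it off the plot below.
mean_accuracies = {k: np.mean(v) for k, v in k_to_accuracies.items()}
print('best k by mean CV accuracy:', max(mean_accuracies, key=mean_accuracies.get))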
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 1
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting filter data ready to use
Step2: This will cache the filter data on your disk so that everytime you call any wsynphot function like list_filters() that requires data, the data will be accessed from cache.
Step3: Listing available filters
Step4: Filter Curve
Step5: Plot the curve by plot() method
Step6: Do any required calculations on the filter curve object
|
<ASSISTANT_TASK:>
Python Code:
import wsynphot
# wsynphot.download_filter_data()
# wsynphot.update_filter_data()
wsynphot.list_filters()
filter = wsynphot.FilterCurve.load_filter('Keck/NIRC2/Kp')
filter
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (9,6)
filter.plot(plt.gca())
filter.zp_vega_f_lambda
filter.convert_vega_magnitude_to_f_lambda(0)
filter.convert_vega_magnitude_to_f_lambda(14.5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the first (and only) scheme we used in the process. We want to have a look at what it does.
Step2: Create the builder using a factory function. You can do it directly but this way it is faster and easier.
Step3: Options
Step4: Show the mover. You see a tree depicting the dependency on the left side. The higher levels (more to the left) run independently of lower levels. This is the way the moves are constructed and allows for better analysis and reuse of submoves. On the right you see a list of ensembles and which ensembles are affected by a specific mover. Green indicates that a sample from this ensemble might be used as input for the move. "Might" means that, depending on the context (see below) and the specific mover, a sample from the ensemble may be requested and is thus necessary to avoid a RuntimeError. Input usually means that a sample is also removed from this ensemble (although the final ensembles can be the same). A red color similarly indicates that a sample will be placed in this ensemble (usually taken from the input ensembles). In almost all cases the samples from the input are removed, optionally altered, and placed in the output ensembles.
Step5: In the minus moves you see that a sample is shifted to a hidden ensemble, then swapped and moved back.
Step6: And check whether the scheme actually matches the requirements
Step7: Checking for equality (neglecting multiplicity since we only have a single sample per ensemble in our case)
|
<ASSISTANT_TASK:>
Python Code:
# Imports assumed from earlier cells of the original notebook (not shown in this snippet);
# on Python 3 you would also need: from importlib import reload
import openpathsampling as p
from openpathsampling import visualize as vis
from IPython.display import SVG

st = p.Storage('mstis.nc', mode='r')
scheme = st.schemes.first
reload(vis)
builder = vis.MoveTreeBuilder.from_scheme(scheme)
builder.options.analysis['only_canonical'] = False
SVG(builder.svg())
minimal = dict(scheme.root_mover.in_out.ins_minimal)
minimal
all(ens in minimal for ens in scheme.list_initial_ensembles())
set(minimal) == set(scheme.list_initial_ensembles())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Double-slit correlation model
Step5: Define a single function to explore the FFT
Step6: Replace with Gaussian LO
Step7: Adding different phase drifts to individual modes
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as sp
from numpy import pi, sin, cos, linspace, exp, real, imag, abs, conj, meshgrid, log, log10, angle, zeros, complex128, random
from numpy.fft import fft, fftshift, ifft
from mpl_toolkits.mplot3d import axes3d
import BeamOptics as bopt
%matplotlib inline
b=.08*1e-3 # the slit width
a=.5*1e-3 # the slit spacing
k=2*pi/(795*1e-9) # longitudinal wavenumber
wt=0 # let time be zero
C=1 # unit amplitude
L=1.8 # distance from slits to CCD
d=.016 # distance from signal to LO at upstream end (used to calculate k_perp)
ccdwidth = 1300 # number of pixels
pixwidth = 20e-6 # pixel width (in meters)
y = linspace(-pixwidth*ccdwidth/2,pixwidth*ccdwidth/2,ccdwidth)
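# Quick sanity check (illustrative, not from the original analysis): the classic
# double-slit fringe spacing is lambda*L/a, i.e. (2*pi/k)*L/a here, which gives
# roughly 2.9 mm, or about 143 of the 20-micron CCD pixels per fringe.
fringe_spacing = (2*pi/k)*L/a
print("expected fringe spacing [m]:", fringe_spacing)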
# define the various double slit fields and LO:
def alpha(y,a):
return k*a*y/(2*L)
def beta(y,b):
return k*b*y/(2*L)
def E_ds(y,a,b):
    """Double-slit field"""
# From Hecht p 458:
#return b*C*(sin(beta(y)) / beta(y)) * (sin(wt-k*L) + sin(wt-k*L+2*alpha(y)))
# drop the time-dep term as it will average away:
return 2*b*C*(sin(beta(y,b)) / beta(y,b)) * cos(alpha(y,a)) #* sin(wt - k*L + alpha(y))
def E_dg(y,a,b):
    """Double gaussian field"""
# The width needs to be small enough to see interference
# otherwise the beam doesn't diffract and shows no interference.
# We're using b for the gaussian width (i.e. equal to the slit width)
w=b
#return C*exp(1j*k*0.1*d*y/L)
return 5e-3*(bopt.gaussian_beam(0,y-a/2,L,E0=1,wavelambda=795e-9,w0=w,k=[0,0,k]) +
bopt.gaussian_beam(0,y+a/2,L,E0=1,wavelambda=795e-9,w0=w,k=[0,0,k]))
def E_lo(y,d):
    """Plane-wave LO beam incident at small angle, transverse wavenumber k*d*y/L"""
return C*exp(-1j*k*d*y/L)
def plotFFT(d,a,b):
    """Single function version of generating the FFT output"""
TotalField = E_dg(y,a,b)+E_lo(y,d)
TotalIntensity=TotalField*TotalField.conj()
plt.plot(abs(fft(TotalIntensity)),".-")
plt.ylim([0,1e-2])
plt.xlim([0,650])
plt.title("FFT output")
plotFFT(d=0.046,a=0.5e-3,b=0.08e-3)
# bopt.gaussian_beam(x, y, z, E0, wavelambda, w0, k)
# set to evaluate gaussian at L (full distance to CCD) with waist width of 2 cm
# using d=0.046 for agreement with experiment
d=0.046
E_lo_gauss = bopt.gaussian_beam(0,y,L,E0=1,wavelambda=795e-9,w0=0.02,k=[0,k*d/L,k])
frames = 59
rounds = 20
drift_type= 3
# SG I made a few drift modes to model the phase drift that would be present in the lab
# drift mode two appears to be the most similar to the phase shifts we observe in the lab
time=linspace(0,2*pi,rounds*frames)
phase=[]
if drift_type == 0:
phase= [sin(t) for t in time]
#mode 0 is just a sine wave in time
elif drift_type == 1:
phase= [sin(t+random.randn()/2) for t in time]
#phase= [sin(t)+random.randn()/2 for t in time]
#mode 1 is a sine wave with some randomness added to each data point
elif drift_type == 2:
phase=[0]
for i in range(len(time)-1):
phase.append(phase[-1]+random.randn()/4*sin(time[i]))
#mode 2 is a sine wave with some randomness added to each data point, and also considering
#the location of the previous data point
elif drift_type == 3:
phase=[0]
for i in range(len(time)-1):
phase.append(phase[-1]+0.1*(random.randn()))
    #mode 3 is a pure random walk in phase: each point adds a small Gaussian step
    #to the previous value
raw_intensity_data = zeros([1300,frames,rounds],dtype=complex128)
scaled = zeros([1300,frames,rounds],dtype=complex128)
i=0
for r in range(rounds):
for f in range(frames):
TotalField = E_dg(y,a,b)*exp(-1j*phase[i]) + E_lo_gauss #adds the appropriate phase
#TotalField = E_dg(y,a,b) + E_lo_gauss
TotalIntensity = TotalField * TotalField.conj()
raw_intensity_data[:,f,r] = TotalIntensity
scaled[:,f,r]=fft(TotalIntensity)
i=i+1 #increases index
#checking how phase moves around
plt.polar(phase,time,'-')
plt.title("phase shift with (simulated) time")
plt.plot((np.unwrap(angle(scaled[461,:,:].flatten("F")))))
plt.plot((np.unwrap(angle(scaled[470,:,:].flatten("F")))))
#plt.ylim(0,1e-2)
#TODO -unwrapping the phase (numpy)
plt.plot(abs(fft(TotalIntensity)),".-")
print(TotalIntensity.shape)
plt.ylim([0,0.01]) # Had to lower the LO power quite a bit, and then zoom way in.
plt.xlim([430,500])
mode_of_interest = 440
mode_offset = 300
range_to_analyze = 300
# Calculate the correlation matrix between phase of each mode.
modes = range(0,range_to_analyze)
PearsonPhase = np.zeros((range_to_analyze,range_to_analyze))
for m in modes:
output = scaled[m+mode_offset,:,:].flatten('F') # Choose the mode to analyze
x = np.angle(output)
for l in modes:
#SG added np.unwrap call to the angle
Pearson, p = sp.pearsonr(np.unwrap(np.angle(scaled[l+mode_offset].flatten('F'))), x)
if (m==l):
PearsonPhase[m,l] = 0 #AMCD Null the 1.0 auto-correlation
else:
PearsonPhase[m,l] = Pearson
plt.imshow(PearsonPhase,interpolation='none')
plt.title("Phase")
print(type(PearsonPhase))
print("max value =",np.amax(PearsonPhase))
plt.imshow(PearsonPhase,interpolation='none')
plt.title("Phase")
print("max value =",np.amax(PearsonPhase))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: With the <tt>git_bin</tt>, we can execute almost any Git command we like directly. In our hypothetical use case, we want to retrieve some information about the change frequency of files. For this, we need the complete history of the Git repo including statistics for the changed files (via <tt>--numstat</tt>).
Step2: Reading the Git log
Step3: Now we have two different kinds of content for the rows
Step4: This gives use the commit meta data for each file change!
Step5: And that's it! We are finished!
|
<ASSISTANT_TASK:>
Python Code:
import git
GIT_REPO_PATH = r'../../spring-petclinic/'
repo = git.Repo(GIT_REPO_PATH)
git_bin = repo.git
git_bin
git_log = git_bin.execute('git log --numstat --pretty=format:"\t\t\t%h\t%at\t%aN"')
git_log[:80]
import pandas as pd
from io import StringIO
commits_raw = pd.read_csv(StringIO(git_log),
sep="\t",
header=None,
names=['additions', 'deletions', 'filename', 'sha', 'timestamp', 'author']
)
commits_raw.head()
commits = commits_raw[['additions', 'deletions', 'filename']]\
.join(commits_raw[['sha', 'timestamp', 'author']].fillna(method='ffill'))
commits.head()
commits = commits.dropna()
commits.head()
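# The stated goal was file change frequency; a minimal sketch of that using only the
# columns built above: count how often each file appears in the history.
commits['filename'].value_counts().head(10)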
# reading
git_log = pd.read_csv(
"../../spring-petclinic/git.log",
sep="\t",
header=None,
names=[
'additions',
'deletions',
'filename',
'sha',
'timestamp',
'author'])
# converting in "one line"
git_log[['additions', 'deletions', 'filename']]\
.join(git_log[['sha', 'timestamp', 'author']]\
.fillna(method='ffill'))\
.dropna().head()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The base catalog
Step2: Target_Name is the name of the (central) object of each observation; from it we see that there are 681 unique sources among the 3035 observations. GroupSize is the number of overlapping observations, with an average of ~54. Let's see how sparse the observations are in time and how they are distributed for each source.
Step3: Number of observations
Step4: Filtering the data
Step5: Consider each group of observations of interest (call it a "chunk"): observations separated from one another by no more than "X" days (for example, X=20 days). From this example we see that there can be more than one "chunk" of observations per object. Here, for instance, rows 347,344,343,346 and 338,339,336,335,341 form two such clusters, "chunk-1" and "chunk-2", respectively.
Step6: Filtered catalog
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
from IPython.display import HTML
HTML('''
<figure>
<img src="Stripe82_gal_projection.png" alt="Swift observations over Stripe82">
<figcaption>Figure 1: Swift observations over Stripe82</figcaption>
</figure>
''')
import pandas
cat = pandas.read_csv('Swift_Master_Stripe82_groups.ascii',
delim_whitespace=True)
print "Table 1: Sample of the catalog"
pandas.concat([cat.head(5),cat.tail(5)])
print "Table 2: Summary of the catalog columns"
cat.describe(include='all')
cat['start_time'] = pandas.to_datetime(cat['start_time'])
cat_grouped_by_target = cat[['Target_Name','start_time']].groupby(['Target_Name'])
cat_descr = cat_grouped_by_target.describe().unstack()
cat_time = cat_descr.sort_values([('start_time','count')],ascending=False)
del cat_descr
title = "Figure 2: Number of sources(Y axis) observed number of times(X axis)"
%matplotlib inline
from matplotlib import pyplot as plt
width = 16
height = 4
plt.figure(figsize=(width, height))
yticks = [2,10,50,100,200,300]
xticks = range(51)
ax = cat_time[('start_time','count')].plot.hist(bins=xticks,xlim=(0,50),title=title,grid=True,xticks=xticks,yticks=yticks,align='left')
ax.set_xlabel('Number of observations (per source)')
print "Table 3: Number counts and dates (first/last) of the observations (per object)"
cat_time
print "Table 4: Observation carried out for source 'V1647ORI' sorted in time"
g = cat_grouped_by_target.get_group('V1647ORI')
g_sorted = g.sort_values('start_time')
g_sorted
def find_clustered_observations(sorted_target_observations,time_range=10):
# Let's select a 'time_range' days window to select valid observations
window_size = time_range
g_sorted = sorted_target_observations
# an ordered dictionary works as a 'set' structure
from collections import OrderedDict
selected_allObs = OrderedDict()
# define en identificator for each cluster of observations, to ease future filtering
group_obs = 1
_last_time = None
_last_id = None
for _row in g_sorted.iterrows():
ind,row = _row
if _last_time is None:
_last_time = row.start_time
_last_id = ind
continue
_delta = row.start_time - _last_time
if _delta.days <= window_size:
selected_allObs[_last_id] = group_obs
selected_allObs[ind] = group_obs
else:
if len(selected_allObs):
group_obs = selected_allObs.values()[-1] + 1
_last_time = row.start_time
_last_id = ind
return selected_allObs
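# Illustrative check (reusing the 'V1647ORI' group shown in Table 4): the function
# returns an OrderedDict mapping observation index -> chunk id within the time window.
example_chunks = find_clustered_observations(g_sorted, time_range=20)
print "Chunks found for V1647ORI:", dict(example_chunks)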
from collections import OrderedDict
obs_indx = OrderedDict()
for name,group in cat_grouped_by_target:
g_sorted = group.sort_values('start_time')
filtered_indxs = find_clustered_observations(g_sorted,time_range=20)
obs_indx.update(filtered_indxs)
import pandas
obsChunks_forFilteringCat = pandas.DataFrame(obs_indx.values(),columns=['obs_chunk'],index=obs_indx.keys())
# obsChunks_forFilteringCat.sort_index()
print "Table 5: original catalog with column 'obs_chunk' to flag which rows succeed the filtering (non-NA values)."
cat_with_obsChunksFlag = cat.join(obsChunks_forFilteringCat)
cols = list(cat_with_obsChunksFlag.columns)
cols.insert(2,cols.pop(-1))
cat_with_obsChunksFlag = cat_with_obsChunksFlag.ix[:,cols]
cat_with_obsChunksFlag
cat_filtered = cat_with_obsChunksFlag.dropna(subset=['obs_chunk'])
cat_filtered
cat_filtered.describe(include='all')
cat_filtered.to_csv('Swift_Master_Stripe82_groups_filtered.csv')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate the FMU
Step2: The mandatory information to pass to the builder is the Python script file through the -f option.
Step3: Use the FMU
Step4: The code specifies a parameter, which you could change
Step5: Install FMU dependencies
Step6: Include the requirement file
Step7: Installation by the end user
Step8: To install the FMU dependencies, the end user will need to execute
|
<ASSISTANT_TASK:>
Python Code:
%pycat demoslave.py
!pythonfmu build -f ./demoslave.py --no-external-tool
!pythonfmu build -h
%matplotlib inline
from pyfmi import load_fmu
import matplotlib.pyplot as p
import numpy
model = load_fmu('Resistor.fmu')
inputs = ('positive_pin_v', lambda t: 20 + 5. * numpy.cos(t))
res = model.simulate(final_time=30, input=inputs, options={'ncp': 300})
fig = p.figure()
ax1 = p.subplot(2, 1, 1)
ax1.plot(res['time'], res['i'])
ax1.set_ylabel('Intensity [A]')
ax2 = p.subplot(2, 1, 2)
ax2.plot(res['time'], res['delta_v'])
ax2.set_xlabel('time')
ax2.set_ylabel('Voltage [V]')
p.show();
R_reference = model.get_model_variables()['R'].value_reference
model.set_real([R_reference, ], [1000., ])
res2 = model.simulate(final_time=30, input=inputs, options={'ncp': 300})
fig = p.figure()
p.plot(res['time'], res['i'], res2['time'], res2['i'])
p.xlabel('time')
p.ylabel('Voltage [V]')
p.show();
model.free_instance() # It is currently not possible to load two Python FMUs simultaneously
!pythonfmu deploy -h
!pythonfmu build -f sympyslave.py environment.yml --no-external-tool
%matplotlib inline
from pyfmi import load_fmu
import matplotlib.pyplot as p
import numpy
sympy_model = load_fmu('SympySlave.fmu')
inputs = ('realIn', lambda t: 5.)
sympy_model.simulate(final_time=30, input=inputs, options={'ncp': 300});
!pythonfmu deploy -f SympySlave.fmu
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Downloading data
Step2: Data Munging
Step3: Some of these columns are duplicates, or contain information we don't want to use in our model (like college names and URLs). CivisML can take a list of columns to exclude and do this part of the data munging for us, so let's make that list here.
Step4: Basic CivisML Usage
Step5: Next, we want to train and validate the model by calling .train on the ModelPipeline object. CivisML uses 4-fold cross-validation on the training set. You can train on local data or query data from Redshift. In this case, we have our data locally, so we just pass the data frame.
Step6: This returns a ModelFuture object, which is non-blocking-- this means that you can keep doing things in your notebook while the model runs on Civis Platform in the background. If you want to make a blocking call (one that doesn't complete until your model is finished), you can use .result().
Step7: Parallel Model Tuning and Validation
Step8: Impressive!
Step9: This creates a list of columns to categorically expand, identified using the data dictionary available here.
Step10: Model Stacking
Step11: Let's plot diagnostics for each of the models. In the Civis Platform, these plots will automatically be built and displayed in the "Models" tab. But for the sake of example, let's also explicitly plot ROC curves and AUCs in the notebook.
Step12: All of the models perform quite well, so it's difficult to compare based on the ROC curves. Let's plot the AUCs themselves.
Step13: Here we can see that all models but sparse logistic perform quite well, but stacking appears to perform marginally better than the others. For more challenging modeling tasks, the difference between stacking and other models will often be more pronounced.
Step14: Looks like the probabilities here aren't exactly the same, but are directionally identical-- so, if you chose the class that had the highest probability for each row, you'd end up with the same predictions for all models. This makes sense, because all of the models performed well.
Step15: This Pipeline contains all of the steps CivisML used to train the model, from ETL to the model itself. We can print each step individually to get a better sense of what is going on.
Step16: Now we can see that there are three steps
Step17: Hyperparameter optimization with Hyperband and Neural Networks
Step18: Let's dig into the hyperband model a little bit. Like the stacking model, the model below starts with ETL and null imputation, but contains some additional steps
Step19: HyperbandSearchCV essentially works like GridSearchCV. If you want to get the best estimator without all of the extra CV information, you can access it using the best_estimator_ attribute.
Step20: To see how well the best model performed, you can look at the best_score_.
Step21: And to look at information about the different hyperparameter configurations that were tried, you can look at the cv_results_.
Step22: Just like any other model in CivisML, we can use hyperband-tuned models to make predictions using .predict() on the ModelPipeline.
|
<ASSISTANT_TASK:>
Python Code:
# first, let's import the packages we need
import requests
from io import StringIO
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import model_selection
# import the Civis Python API client
import civis
# ModelPipeline is the class used to build CivisML models
from civis.ml import ModelPipeline
# Suppress warnings for demo purposes. This is not recommended as a general practice.
import warnings
warnings.filterwarnings('ignore')
# Downloading data; this may take a minute
# Two kind of nulls
df = pd.read_csv("https://ed-public-download.app.cloud.gov/downloads/Most-Recent-Cohorts-All-Data-Elements.csv", sep=",", na_values=['NULL', 'PrivacySuppressed'], low_memory=False)
# How many rows and columns?
df.shape
# What are some of the column names?
df.columns
# Make sure to remove any rows with nulls in the dependent variable
df = df[np.isfinite(df['CONTROL'])]
# split into training and test sets
train_data, test_data = model_selection.train_test_split(df, test_size=0.2)
# print a few sample columns
train_data.head()
to_exclude = ['ADM_RATE_ALL', 'OPEID', 'OPEID6', 'ZIP', 'INSTNM',
'INSTURL', 'NPCURL', 'ACCREDAGENCY', 'T4APPROVALDATE',
'STABBR', 'ALIAS', 'REPAY_DT_MDN', 'SEPAR_DT_MDN']
# Use a push-button workflow to fit a model with reasonable default parameters
sl_model = ModelPipeline(model='sparse_logistic',
model_name='Example sparse logistic',
primary_key='UNITID',
dependent_variable=['CONTROL'],
excluded_columns=to_exclude)
sl_train = sl_model.train(train_data)
# non-blocking
sl_train
# blocking
sl_train.result()
# loop through the metric names and print to screen
metrics = [print(key) for key in sl_train.metrics.keys()]
# ROC AUC for each of the three categories in our dependent variable
sl_train.metrics['roc_auc']
# The ETL transformer used in CivisML can be found in the civismlext module
from civismlext.preprocessing import DataFrameETL
# column indices for columns to expand
to_expand = list(df.columns[:21]) + list(df.columns[23:36]) + list(df.columns[99:290]) + \
list(df.columns[[1738, 1773, 1776]])
# create ETL estimator to pass to CivisML
etl = DataFrameETL(cols_to_drop=to_exclude,
cols_to_expand=to_expand, # we made this column list during data munging
check_null_cols='warn')
workflows = ['stacking_classifier',
'sparse_logistic',
'random_forest_classifier',
'gradient_boosting_classifier']
models = []
# create a model object for each of the four model types
for wf in workflows:
model = ModelPipeline(model=wf,
model_name=wf + ' v2 example',
primary_key='UNITID',
dependent_variable=['CONTROL'],
etl=etl # use the custom ETL we created
)
models.append(model)
# iterate over the model objects and run a CivisML training job for each
trains = []
for model in models:
train = model.train(train_data)
trains.append(train)
%matplotlib inline
# Let's look at how the model performed during validation
def extract_roc(fut_job, model_name):
'''Build a data frame of ROC curve data from the completed training job `fut_job`
with model name `model_name`. Note that this function will only work for a classification
model where the dependent variable has more than two classes.'''
aucs = fut_job.metrics['roc_auc']
roc_curve = fut_job.metrics['roc_curve_by_class']
n_classes = len(roc_curve)
fpr = []
tpr = []
class_num = []
auc = []
for i, curve in enumerate(roc_curve):
fpr.extend(curve['fpr'])
tpr.extend(curve['tpr'])
class_num.extend([i] * len(curve['fpr']))
auc.extend([aucs[i]] * len(curve['fpr']))
model_vec = [model_name] * len(fpr)
df = pd.DataFrame({
'model': model_vec,
'class': class_num,
'fpr': fpr,
'tpr': tpr,
'auc': auc
})
return df
# extract ROC curve information for all of the trained models
workflows_abbrev = ['stacking', 'logistic', 'RF', 'GBT']
roc_dfs = [extract_roc(train, w) for train, w in zip(trains, workflows_abbrev)]
roc_df = pd.concat(roc_dfs)
# create faceted ROC curve plots. Each row of plots is a different model type, and each
# column of plots is a different class of the dependent variable.
g = sns.FacetGrid(roc_df, col="class", row="model")
g = g.map(plt.plot, "fpr", "tpr", color='blue')
# Plot AUCs for each model
%matplotlib inline
auc_df = roc_df[['model', 'class', 'auc']]
auc_df.drop_duplicates(inplace=True)
plt.show(sns.swarmplot(x=auc_df['model'], y=auc_df['auc']))
# kick off a prediction job for each of the four models
preds = [model.predict(test_data) for model in models]
# This will run on Civis Platform cloud resources
[pred.result() for pred in preds]
# print the top few rows for each of the models
pred_df = [pred.table.head() for pred in preds]
import pprint
pprint.pprint(pred_df)
train_stack = trains[0] # Get the ModelFuture for the stacking model
trained_model = train_stack.estimator
# print each of the estimators in the pipeline, separated by newlines for readability
for step in train_stack.estimator.steps:
print(step[1])
print('\n')
# drop the dependent variable so we don't use it to predict itself!
predictions = trained_model.predict(test_data.drop(labels=['CONTROL'], axis=1))
# print out the class predictions. These will be integers representing the predicted
# class rather than probabilities.
predictions
# build a model specifying the MLP model with hyperband
model_mlp = ModelPipeline(model='multilayer_perceptron_classifier',
model_name='MLP example',
primary_key='UNITID',
dependent_variable=['CONTROL'],
cross_validation_parameters='hyperband',
etl=etl
)
train_mlp = model_mlp.train(train_data,
n_jobs=10) # parallel hyperparameter optimization and validation!
# block until the job finishes
train_mlp.result()
for step in train_mlp.estimator.steps:
print(step[1])
print('\n')
train_mlp.estimator.steps[3][1].best_estimator_
train_mlp.estimator.steps[3][1].best_score_
train_mlp.estimator.steps[3][1].cv_results_
predict_mlp = model_mlp.predict(test_data)
predict_mlp.table.head()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Unit Test
|
<ASSISTANT_TASK:>
Python Code:
## Constants used by this program
CONSONANTS = "bcdfghjklmnpqrstvwyz"
VOWELS = "aeiou"

def convert_pin(pin):
    # Mapping inferred from the expected test outputs (4327 -> 'lohi', 1298 -> 'dizo'):
    # each two-digit pair n becomes the syllable CONSONANTS[n // 5] + VOWELS[n % 5].
    if pin is None or isinstance(pin, bool):
        raise ValueError('pin must be a positive integer')
    digits = str(pin)
    if not digits.isdigit() or int(digits) == 0 or len(digits) % 2 != 0:
        raise ValueError('pin must be a non-zero number with an even number of digits')
    word = ''
    for i in range(0, len(digits), 2):
        pair = int(digits[i:i + 2])
        word += CONSONANTS[pair // 5] + VOWELS[pair % 5]
    return word

convert_pin(2363)  # the scratch value from the original draft -> 'goqo'
# %load test_foo.py
from nose.tools import assert_equal
class Testconvert_pin(object):
def test_convert_pin(self):
assert_equal(convert_pin(None), ValueError)
assert_equal(convert_pin('absd'), ValueError)
assert_equal(convert_pin(0), ValueError)
assert_equal(convert_pin(4327), lohi)
assert_equal(convert_pin(1298), dizo)
print('Success: test_convert_pin')
def main():
test = Testconvert_pin()
test.test_convert_pin()
if __name__ == '__main__':
main()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This function creates two test images composed of blobs, with a given fraction of the dots overlapping.
Step2: Let's see what these images look like
Step3: Now we'll run through various values of overlap, generate images, and compute colocalization
Step4: To apply Costes' significance test, we need to estimate the width of the autocorrelation function
Step5: This width we use as the blockSizes parameter to dip.CostesSignificanceTest. We explore the range of overlap values that is interesting
|
<ASSISTANT_TASK:>
Python Code:
import diplib as dip
import random
import numpy as np
def generate_images(overlap):
sd = 0.001 # noise std. dev.
sz = 5.0 # size of dot (sigma of Gaussian)
scale = sz*sz*2*3.14159
channel1 = dip.Image([256,256], 1, 'SFLOAT')
channel2 = dip.Image([256,256], 1, 'SFLOAT')
channel1.Fill(0)
channel2.Fill(0)
for jj in range(8):
for ii in range(8):
x = 4*sz + ii * 6*sz
y = 4*sz + jj * 6*sz
dip.DrawBandlimitedPoint(channel1, [x, y], scale, sz)
if ii < 6:
# If larger, the 2nd channel doesn't have a dot
if jj * 8 + ii > 8 * 8 * overlap:
# Not overlapping points, move them!
x += 3*sz
y += 1*sz
dip.DrawBandlimitedPoint(channel2, [x, y], [scale], [sz])
channel1 = dip.ClipLow(dip.GaussianNoise(channel1, sd**2), 0)
channel2 = dip.ClipLow(dip.GaussianNoise(channel2, sd**2), 0)
return channel1, channel2
channel1, channel2 = generate_images(0.5)
dip.JoinChannels([channel1, channel2]).Show()
for overlap in [0.1, 0.3, 0.5, 0.7, 0.9]:
channel1, channel2 = generate_images(overlap)
print()
print(overlap)
print('PearsonCorrelation: ', round(dip.PearsonCorrelation(channel1, channel2), 3))
print('MandersOverlapCoefficient: ', round(dip.MandersOverlapCoefficient(channel1, channel2), 3))
print('IntensityCorrelationQuotient: ', round(dip.IntensityCorrelationQuotient(channel1, channel2), 3))
coef = dip.MandersColocalizationCoefficients(channel1, channel2, None, 0.2, 0.2)
print('MandersColocalizationCoefficients: ', round(coef[0], 3), round(coef[1], 3))
coef = dip.CostesColocalizationCoefficients(channel1, channel2)
print('CostesColocalizationCoefficients: ', round(coef[0], 3), round(coef[1], 3))
ac = dip.AutoCorrelationFT(channel1)
ac = ac > dip.Maximum(ac)[0][0] / 2 # half maximum
ac = dip.Label(ac)
cc = dip.GetImageChainCodes(ac, ac[ac.Size(0)//2, ac.Size(1)//2]) # find central blob
bb = cc[0].BoundingBox()
blockSizes = [bb[1][0] - bb[0][0], bb[1][1] - bb[0][1]] # width and height, corresponds to full width at half maximum
print(blockSizes)
for overlap in [0.00, 0.14, 0.16, 0.18, 0.20, 0.22]:
channel1, channel2 = generate_images(overlap)
print(overlap, round(dip.CostesSignificanceTest(channel1, channel2, None, blockSizes, 500), 4))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set image and catalogue filenames
Step2: Load in images, noise maps, header info and WCS information
Step3: Load in catalogue you want to fit (and make any cuts)
Step4: XID+ uses Multi Order Coverage (MOC) maps for cutting down maps and catalogues so they cover the same area. It can also take in MOCs as selection functions to carry out additional cuts. Let's use the python module pymoc to create a MOC, centered on a specific position we are interested in. We will use a HEALPix order of 15 (at this order the pixel scale is roughly 6.4 arcseconds).
Step5: XID+ is built around two python classes. A prior and posterior class. There should be a prior class for each map being fitted. It is initiated with a map, noise map, primary header and map header and can be set with a MOC. It also requires an input prior catalogue and point spread function.
Step6: Set PRF. For SPIRE, the PRF can be assumed to be Gaussian with a FWHM of 18.15, 25.15, 36.3 '' for 250, 350 and 500 $\mathrm{\mu m}$ respectively. Lets use the astropy module to construct a Gaussian PRF and assign it to the three XID+ prior classes.
|
<ASSISTANT_TASK:>
Python Code:
from astropy.io import ascii, fits
import pylab as plt
%matplotlib inline
from astropy import wcs
import numpy as np
import xidplus
from xidplus import moc_routines
import pickle
xidplus.__path__[0]
#Folder containing maps
imfolder=xidplus.__path__[0]+'/../test_files/'
pswfits=imfolder+'cosmos_itermap_lacey_07012015_simulated_observation_w_noise_PSW_hipe.fits.gz'#SPIRE 250 map
pmwfits=imfolder+'cosmos_itermap_lacey_07012015_simulated_observation_w_noise_PMW_hipe.fits.gz'#SPIRE 350 map
plwfits=imfolder+'cosmos_itermap_lacey_07012015_simulated_observation_w_noise_PLW_hipe.fits.gz'#SPIRE 500 map
#Folder containing prior input catalogue
catfolder=xidplus.__path__[0]+'/../test_files/'
#prior catalogue
prior_cat='lacey_07012015_MillGas.ALLVOLS_cat_PSW_COSMOS_test.fits'
#output folder
output_folder='./'
#-----250-------------
hdulist = fits.open(pswfits)
im250phdu=hdulist[0].header
im250hdu=hdulist[1].header
im250=hdulist[1].data*1.0E3 #convert to mJy
nim250=hdulist[2].data*1.0E3 #convert to mJy
w_250 = wcs.WCS(hdulist[1].header)
pixsize250=3600.0*w_250.wcs.cd[1,1] #pixel size (in arcseconds)
hdulist.close()
#-----350-------------
hdulist = fits.open(pmwfits)
im350phdu=hdulist[0].header
im350hdu=hdulist[1].header
im350=hdulist[1].data*1.0E3 #convert to mJy
nim350=hdulist[2].data*1.0E3 #convert to mJy
w_350 = wcs.WCS(hdulist[1].header)
pixsize350=3600.0*w_350.wcs.cd[1,1] #pixel size (in arcseconds)
hdulist.close()
#-----500-------------
hdulist = fits.open(plwfits)
im500phdu=hdulist[0].header
im500hdu=hdulist[1].header
im500=hdulist[1].data*1.0E3 #convert to mJy
nim500=hdulist[2].data*1.0E3 #convert to mJy
w_500 = wcs.WCS(hdulist[1].header)
pixsize500=3600.0*w_500.wcs.cd[1,1] #pixel size (in arcseconds)
hdulist.close()
hdulist = fits.open(catfolder+prior_cat)
fcat=hdulist[1].data
hdulist.close()
inra=fcat['RA']
indec=fcat['DEC']
# select only sources with 100micron flux greater than 50 microJy
sgood=fcat['S100']>0.050
inra=inra[sgood]
indec=indec[sgood]
from astropy.coordinates import SkyCoord
from astropy import units as u
#c = SkyCoord(ra=[150.74]*u.degree, dec=[2.03]*u.degree)
c = SkyCoord(ra=inra*u.degree, dec=indec*u.degree)
import pymoc
moc=pymoc.util.catalog.catalog_to_moc(c,100,15)
#---prior250--------
prior250=xidplus.prior(im250,nim250,im250phdu,im250hdu, moc=moc)#Initialise with map, uncertianty map, wcs info and primary header
prior250.prior_cat(inra,indec,prior_cat)#Set input catalogue
prior250.prior_bkg(-5.0,5)#Set prior on background (assumes Gaussian pdf with mu and sigma)
#---prior350--------
prior350=xidplus.prior(im350,nim350,im350phdu,im350hdu, moc=moc)
prior350.prior_cat(inra,indec,prior_cat)
prior350.prior_bkg(-5.0,5)
#---prior500--------
prior500=xidplus.prior(im500,nim500,im500phdu,im500hdu, moc=moc)
prior500.prior_cat(inra,indec,prior_cat)
prior500.prior_bkg(-5.0,5)
#pixsize array (size of pixels in arcseconds)
pixsize=np.array([pixsize250,pixsize350,pixsize500])
#point response function for the three bands
prfsize=np.array([18.15,25.15,36.3])
#use Gaussian2DKernel to create prf (requires stddev rather than fwhm hence pfwhm/2.355)
from astropy.convolution import Gaussian2DKernel
##---------fit using Gaussian beam-----------------------
prf250=Gaussian2DKernel(prfsize[0]/2.355,x_size=101,y_size=101)
prf250.normalize(mode='peak')
prf350=Gaussian2DKernel(prfsize[1]/2.355,x_size=101,y_size=101)
prf350.normalize(mode='peak')
prf500=Gaussian2DKernel(prfsize[2]/2.355,x_size=101,y_size=101)
prf500.normalize(mode='peak')
pind250=np.arange(0,101,1)*1.0/pixsize[0] #get 250 scale in terms of pixel scale of map
pind350=np.arange(0,101,1)*1.0/pixsize[1] #get 350 scale in terms of pixel scale of map
pind500=np.arange(0,101,1)*1.0/pixsize[2] #get 500 scale in terms of pixel scale of map
prior250.set_prf(prf250.array,pind250,pind250)#requires PRF as 2d grid, and x and y bins for grid (in pixel scale)
prior350.set_prf(prf350.array,pind350,pind350)
prior500.set_prf(prf500.array,pind500,pind500)
print('fitting '+ str(prior250.nsrc)+' sources \n')
print('using ' + str(prior250.snpix)+', '+ str(prior250.snpix)+' and '+ str(prior500.snpix)+' pixels')
import pickle
#from moc, get healpix pixels at a given order
from xidplus import moc_routines
order=9
tiles=moc_routines.get_HEALPix_pixels(order,prior250.sra,prior250.sdec,unique=True)
order_large=6
tiles_large=moc_routines.get_HEALPix_pixels(order_large,prior250.sra,prior250.sdec,unique=True)
print('----- There are '+str(len(tiles))+' tiles required for input catalogue and '+str(len(tiles_large))+' large tiles')
output_folder='./'
outfile=output_folder+'Master_prior_test.pkl'
with open(outfile, 'wb') as f:
pickle.dump({'priors':[prior250,prior350,prior500],'tiles':tiles,'order':order,'version':xidplus.io.git_version()},f)
from xidplus.stan_fit import SPIRE as SPIRE_stan
from xidplus.numpyro_fit import SPIRE as SPIRE_numpyro
n_sources=[]
n_area=[]
time_stan=[]
time_numpyro=[]
def run_xidplus_numpyro(priors):
fit=SPIRE_numpyro.all_bands(priors)
return xidplus.posterior_numpyro(fit,priors)
for i,order in enumerate(range(7,13)):
with open(outfile, 'rb') as f:
obj=pickle.load(f)
priors=obj['priors']
moc=moc=moc_routines.get_fitting_region(order,
moc_routines.get_HEALPix_pixels(order,150.2,2.3,unique=True))
for p in priors:
p.moc=moc
p.cut_down_prior()
p.prior_bkg(0.0,5)
p.get_pointing_matrix()
p.upper_lim_map()
print('fitting '+ str(priors[0].nsrc)+' sources \n')
print('there are '+ str(priors[0].snpix)+' pixels')
print(' order: '+str(order))
n_sources.append(priors[0].nsrc)
n_area.append(priors[0].moc.area_sq_deg)
#t_s= %timeit -n1 -r1 -o SPIRE_stan.all_bands(priors[0],priors[1],priors[2],iter=1000)
t_np = %timeit -n1 -r1 -o run_xidplus_numpyro(priors)
#time_stan.append(t_s)
time_numpyro.append(t_np)
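# Note: the `-o` flag makes %timeit return a TimeitResult object; its `.best`
# and `.average` attributes (in seconds) are what the summary and plotting
# cells below rely on.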
#[t.average for t in time_stan]  # only meaningful if the Stan fits above are re-enabled
# Stan run times in seconds, recorded from an earlier run of the (now commented-out) Stan fits
time_stan=[2889.8496906470973, 287.8023751830915, 32.09536925505381, 11.379522859933786]
fig, ax_f = plt.subplots()
ax_c = ax_f.twiny()
# secondary x-axis along the top shows the corresponding number of sources
#ax_f.plot(n_area,time_stan,label='Stan')
ax_f.plot(n_area,[t.best for t in time_numpyro],label='NumPyro')
ax_f.set_ylabel('Time (s)')
ax_c.set_xlim(n_sources[-1],n_sources[0])
ax_f.set_title('Profiling')
ax_f.set_xlabel('Area (sq. deg)')
ax_c.set_xlabel('No. Sources')
ax_f.legend()
plt.show()
fig, ax_f = plt.subplots()
ax_c = ax_f.twiny()
# secondary x-axis along the top shows the corresponding number of sources
ax_f.plot(n_area,np.asarray(time_stan)/np.asarray([t.best for t in time_numpyro]),)
ax_c.set_xlim(n_sources[-1],n_sources[0])
ax_f.set_title('Profiling')
ax_f.set_xlabel('Area (sq. deg)')
ax_c.set_xlabel('No. Sources')
ax_f.set_ylabel('Stan/NumPyro')
ax_f.legend()
plt.show()
order=9
with open(outfile, 'rb') as f:
obj=pickle.load(f)
priors=obj['priors']
moc=moc_routines.get_fitting_region(order,
moc_routines.get_HEALPix_pixels(order,150.2,2.3,unique=True))
for p in priors:
p.moc=moc
p.cut_down_prior()
p.prior_bkg(0.0,5)
p.get_pointing_matrix()
p.upper_lim_map()
print('fitting '+ str(priors[0].nsrc)+' sources \n')
print('there are '+ str(priors[0].snpix)+' pixels')
print(' order: '+str(order))
#fit_stan=SPIRE_stan.all_bands(priors[0],priors[1],priors[2],iter=1000)
fit_pyro=SPIRE_numpyro.all_bands(priors)
posterior_numpyro=xidplus.posterior_numpyro(fit_pyro,priors)
xidplus.save(priors,posterior_numpyro,'test_numpyro_order_'+str(order))
#posterior=xidplus.posterior_stan(fit_stan,priors)
#xidplus.save(priors,posterior,'test_stan_order_'+str(order))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fetch the page and get the JSON
Step2: Filter the data
|
<ASSISTANT_TASK:>
Python Code:
import json
import requests
# URL
URL = 'https://information.stpaul.gov/resource/rfbb-x7za.json'
# use the json() method, which converts the json into Python objects
vb_data = requests.get(URL).json()
# print to see what we're working with
print(vb_data)
sfr_vb = [x for x in vb_data if x['dwelling_type'] == 'Single Family Residential']
print(len(sfr_vb), 'SFR of', len(vb_data), 'total')
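# Optional sanity check (a small sketch, assuming every record carries the
# 'dwelling_type' field used above): tally all dwelling types so the share of
# Single Family Residential can be seen at a glance.
from collections import Counter
Counter(x['dwelling_type'] for x in vb_data)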
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Task 2
Step2: Task 3
Step3: Task 4
Step4: Task 5
|
<ASSISTANT_TASK:>
Python Code:
a = ['abc', 'hdok', 'banc', 'xkm', 'pckdax']
#b = a[::-1]
b= []
for i in a:
if (len(i) > 3):
b.append(i)
b = b[::-1]
b
a = [[1,2,3], [4,5], [6]]
import numpy
numpy.hstack((a[0],a[1],a[2]))
d = {'a': 'b', 'c': 'd', 'e': 'f'}
{v: k for k, v in d.items()}
a = numpy.array([[1,2,3], [4,5,6]])
a
(a - numpy.array([[1,0,0], [0,1,0]])).transpose()
N = 10
a = numpy.array([range(1, N**2 + 1)]).reshape(N, -1)
a
F = lambda matrix: numpy.sum(numpy.diag(matrix))
F(a)
MyF = lambda K: numpy.sum(numpy.diag(numpy.array([range(1, K**2 + 1)]).reshape(K, -1)))
MyF(N)
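# Optional verification sketch (not part of the task): the diagonal entries of
# this matrix are i*(N+1) + 1 for i = 0..N-1, so their sum has the closed form
# N*(N**2 + 1)/2 (505 for N = 10). Check it against MyF for a range of sizes:
all(MyF(K) == K * (K**2 + 1) // 2 for K in range(1, 20))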
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Put the function in a python module
|
<ASSISTANT_TASK:>
Python Code:
5 / 3
5 % 3
import numpy as np
value = 4
is_prime = True
upper = int(np.ceil(np.sqrt(value)))
for divisor in range(2, upper+1):
if value % divisor == 0:
is_prime = False
break
print(is_prime)
value = 4
def isPrime(value):
is_prime = True
upper = int(np.ceil(np.sqrt(value)))
if value == 2: # special-case 2: otherwise the loop below tests 2 % 2 == 0 and wrongly reports 2 as composite
return is_prime
for divisor in range(2, upper+1):
if value % divisor == 0:
is_prime = False
break
return is_prime
for val in range(2, 20):
print("%d: %d" % (val, isPrime(val)))
import is_prime
is_prime.isPrime(4)
for val in range(2, 20):
print("%d: %d" % (val, is_prime.isPrime(val)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Flourinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Flourinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Reprenstation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bcc', 'sandbox-1', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 0.2 View graph in TensorBoard
Step2: 1. inference
Step3: 1.2 Benchmark end-to-end latency
Step4: 1.3 Inference images.
Step5: 1.4 Inference video
Step6: 2. TFlite
Step7: 2.2 TFlite export INT8 model
Step8: 2.3 Compile EdgeTPU model (Optional)
Step9: The EdgeTPU has 8MB of SRAM for caching model paramaters (more info). This means that for models that are larger than 8MB, inference time will be increased in order to transfer over model paramaters. One way to avoid this is Model Pipelining - splitting the model into segments that can have a dedicated EdgeTPU. This can significantly improve latency.
Step10: 3. COCO evaluation
Step11: 4. Training EfficientDets on PASCAL.
Step12: 4.2 Train Pascal VOC 2007 from ImageNet checkpoint for Backbone.
Step13: 4.3 Train Pascal VOC 2007 from COCO checkpoint for the whole net.
Step14: 4.4 View tensorboard for loss and accuracy.
Step15: 5. Export to onnx
|
<ASSISTANT_TASK:>
Python Code:
%%capture
#@title
import os
import sys
import tensorflow as tf
# Download source code.
if "efficientdet" not in os.getcwd():
!git clone --depth 1 https://github.com/google/automl
os.chdir('automl/efficientdet')
sys.path.append('.')
!pip install -r requirements.txt
!pip install -U 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
else:
!git pull
MODEL = 'efficientdet-lite0' #@param
def download(m):
if m not in os.listdir():
if 'lite' in m:
!wget https://storage.googleapis.com/cloud-tpu-checkpoints/efficientdet/coco/{m}.tgz
!tar zxf {m}.tgz
else:
!wget https://storage.googleapis.com/cloud-tpu-checkpoints/efficientdet/coco/{m}.tar.gz
!tar zxf {m}.tar.gz
ckpt_path = os.path.join(os.getcwd(), m)
return ckpt_path
# Download checkpoint.
ckpt_path = download(MODEL)
print('Use model in {}'.format(ckpt_path))
# Prepare image and visualization settings.
image_url = 'https://user-images.githubusercontent.com/11736571/77320690-099af300-6d37-11ea-9d86-24f14dc2d540.png'#@param
image_name = 'img.png' #@param
!wget {image_url} -O img.png
import os
img_path = os.path.join(os.getcwd(), 'img.png')
min_score_thresh = 0.35 #@param
max_boxes_to_draw = 200 #@param
line_thickness = 2#@param
import PIL
# Use the largest of the image's height/width as the model input size.
image_size = max(PIL.Image.open(img_path).size)
!python -m model_inspect --model_name={MODEL} --logdir=logs &> /dev/null
%load_ext tensorboard
%tensorboard --logdir logs
# benchmark network latency
!python -m tf2.inspector --mode=benchmark --model_name={MODEL} --hparams="mixed_precision=true" --only_network
# With colab + Tesla T4 GPU, here are the batch size 1 latency summary:
# D0 (AP=33.5): 14.9ms, FPS = 67.2 (batch size 8 FPS=)
# D1 (AP=39.6): 22.7ms, FPS = 44.1 (batch size 8 FPS=)
# D2 (AP=43.0): 27.9ms, FPS = 35.8 (batch size 8 FPS=)
# D3 (AP=45.8): 48.1ms, FPS = 20.8 (batch size 8 FPS=)
# D4 (AP=49.4): 81.9ms, FPS = 12.2 (batch size 8 FPS=)
# Benchmark end-to-end latency (preprocess + network + postprocess).
#
# With colab + Tesla T4 GPU, here are the batch size 1 latency summary:
# D0 (AP=33.5): 22.7ms, FPS = 43.1 (batch size 4, FPS=)
# D1 (AP=39.6): 34.3ms, FPS = 29.2 (batch size 4, FPS=)
# D2 (AP=43.0): 42.5ms, FPS = 23.5 (batch size 4, FPS=)
# D3 (AP=45.8): 64.8ms, FPS = 15.4 (batch size 4, FPS=)
# D4 (AP=49.4): 93.7ms, FPS = 10.7 (batch size 4, FPS=)
batch_size = 1 # @param
saved_model_dir = 'savedmodel'
!rm -rf {saved_model_dir}
!python -m tf2.inspector --mode=export --model_name={MODEL} \
--model_dir={ckpt_path} --saved_model_dir={saved_model_dir} \
--batch_size={batch_size} --hparams="mixed_precision=true"
!python -m tf2.inspector --mode=benchmark --model_name={MODEL} \
--saved_model_dir={saved_model_dir} \
--batch_size=1 --hparams="mixed_precision=true" --input_image=testdata/img1.jpg
# first export a saved model.
saved_model_dir = 'savedmodel'
!rm -rf {saved_model_dir}
!python -m tf2.inspector --mode=export --model_name={MODEL} \
--model_dir={ckpt_path} --saved_model_dir={saved_model_dir}
# Then run saved_model_infer to do inference.
# Notably: batch_size, image_size must be the same as when it is exported.
serve_image_out = 'serve_image_out'
!mkdir {serve_image_out}
!python -m tf2.inspector --mode=infer \
--saved_model_dir={saved_model_dir} \
--model_name={MODEL} --input_image=testdata/img1.jpg \
--output_image_dir={serve_image_out}
from IPython import display
display.display(display.Image(os.path.join(serve_image_out, '0.jpg')))
# In case you need to specify different image size or batch size or #boxes, then
# you need to export a new saved model and run the inference.
serve_image_out = 'serve_image_out'
!mkdir {serve_image_out}
saved_model_dir = 'savedmodel'
!rm -rf {saved_model_dir}
# Step 1: export model
!python -m tf2.inspector --mode=export \
--model_name={MODEL} --model_dir={MODEL} \
--hparams="image_size=1920x1280" --saved_model_dir={saved_model_dir}
# Step 2: do inference with saved model.
!python -m tf2.inspector --mode=infer \
--model_name={MODEL} --saved_model_dir={saved_model_dir} \
  --input_image=img.png --output_image_dir={serve_image_out}
from IPython import display
display.display(display.Image(os.path.join(serve_image_out, '0.jpg')))
# step 0: download video
video_url = 'https://storage.googleapis.com/cloud-tpu-checkpoints/efficientdet/data/video480p.mov' # @param
!wget {video_url} -O input.mov
# Step 1: export model
saved_model_dir = 'savedmodel'
!rm -rf {saved_model_dir}
!python -m tf2.inspector --mode=export \
--model_name={MODEL} --model_dir={MODEL} \
--saved_model_dir={saved_model_dir} --hparams="mixed_precision=true"
# Step 2: do inference with saved model using saved_model_video
!python -m tf2.inspector --mode=video \
--model_name={MODEL} \
--saved_model_dir={saved_model_dir} --hparams="mixed_precision=true" \
--input_video=input.mov --output_video=output.mov
# Then you can view the output.mov
if 'val2017' not in os.listdir():
!wget http://images.cocodataset.org/zips/val2017.zip
!wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
!unzip -q val2017.zip
!unzip annotations_trainval2017.zip
!mkdir tfrecord
!PYTHONPATH=".:$PYTHONPATH" python dataset/create_coco_tfrecord.py \
--image_dir=val2017 \
--caption_annotations_file=annotations/captions_val2017.json \
--output_file_prefix=tfrecord/val \
--num_shards=32
# In case you need to specify different image size or batch size or #boxes, then
# you need to export a new saved model and run the inference.
serve_image_out = 'serve_image_out'
!mkdir {serve_image_out}
saved_model_dir = 'savedmodel'
!rm -rf {saved_model_dir}
# # Step 1: export model
!python -m tf2.inspector --mode=export --file_pattern=tfrecord/*.tfrecord \
--model_name={MODEL} --model_dir={MODEL} --num_calibration_steps=100 \
--saved_model_dir={saved_model_dir} --use_xla --tflite=INT8
# Step 2: do inference with saved model.
!python -m tf2.inspector --mode=infer --use_xla \
--model_name={MODEL} --saved_model_dir={saved_model_dir}/int8.tflite \
--input_image=testdata/img1.jpg --output_image_dir={serve_image_out}
from IPython import display
display.display(display.Image(os.path.join(serve_image_out, '0.jpg')))
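# Optional sketch (not part of the original notebook): the exported INT8 model can also be
# loaded directly with the TFLite interpreter to inspect its input/output tensor details.
interpreter = tf.lite.Interpreter(model_path=os.path.join(saved_model_dir, 'int8.tflite'))
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]['shape'])
print(interpreter.get_output_details()[0]['dtype'])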
# Evaluate on validation set (takes about 10 mins for efficientdet-d0)
!python -m tf2.eval_tflite \
--model_name={MODEL} --tflite_path={saved_model_dir}/int8.tflite \
--val_file_pattern=tfrecord/val* \
--val_json_file=annotations/instances_val2017.json --eval_samples=100
# install edgetpu compiler
!curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
!echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
!sudo apt-get update
!sudo apt-get install edgetpu-compiler
NUMBER_OF_TPUS = 1
!edgetpu_compiler {saved_model_dir}/int8.tflite --num_segments=$NUMBER_OF_TPUS
if 'val2017' not in os.listdir():
!wget http://images.cocodataset.org/zips/val2017.zip
!wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
!unzip -q val2017.zip
!unzip annotations_trainval2017.zip
!mkdir tfrecord
!python -m dataset.create_coco_tfrecord \
--image_dir=val2017 \
--caption_annotations_file=annotations/captions_val2017.json \
--output_file_prefix=tfrecord/val \
--num_shards=32
# Evaluate on validation set (takes about 10 mins for efficientdet-d0)
!python -m tf2.eval \
--model_name={MODEL} --model_dir={ckpt_path} \
--val_file_pattern=tfrecord/val* \
--val_json_file=annotations/instances_val2017.json
# Get pascal voc 2012 trainval data
import os
if 'VOCdevkit' not in os.listdir():
!wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
!tar xf VOCtrainval_06-Nov-2007.tar
!mkdir tfrecord
!python -m dataset.create_pascal_tfrecord \
--data_dir=VOCdevkit --year=VOC2007 --output_path=tfrecord/pascal
# Pascal has 5717 train images split into 100 shards; here we use a single shard
# for demo, but users should use all shards pascal-*-of-00100.tfrecord.
file_pattern = 'pascal-00000-of-00100.tfrecord' # @param
images_per_epoch = 57 * len(tf.io.gfile.glob('tfrecord/' + file_pattern))
images_per_epoch = images_per_epoch // 8 * 8  # round down to a multiple of 8.
print('images_per_epoch = {}'.format(images_per_epoch))
# Train efficientdet from scratch with backbone checkpoint.
backbone_name = {
'efficientdet-d0': 'efficientnet-b0',
'efficientdet-d1': 'efficientnet-b1',
'efficientdet-d2': 'efficientnet-b2',
'efficientdet-d3': 'efficientnet-b3',
'efficientdet-d4': 'efficientnet-b4',
'efficientdet-d5': 'efficientnet-b5',
'efficientdet-d6': 'efficientnet-b6',
'efficientdet-d7': 'efficientnet-b6',
'efficientdet-lite0': 'efficientnet-lite0',
'efficientdet-lite1': 'efficientnet-lite1',
'efficientdet-lite2': 'efficientnet-lite2',
'efficientdet-lite3': 'efficientnet-lite3',
'efficientdet-lite3x': 'efficientnet-lite3',
'efficientdet-lite4': 'efficientnet-lite4',
}[MODEL]
# The full training tfrecord is large to generate, so we skip that step here.
import os
if backbone_name not in os.listdir():
  if 'lite' in backbone_name:  # membership test; str.find returns -1 (truthy) when 'lite' is absent
!wget https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet/lite/{backbone_name}.tar.gz
else:
!wget https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet/ckptsaug/{backbone_name}.tar.gz
!tar xf {backbone_name}.tar.gz
!mkdir /tmp/model_dir
# key option: use --backbone_ckpt rather than --ckpt.
# Don't use ema since we only train a few steps.
!python -m tf2.train --mode=traineval \
--train_file_pattern=tfrecord/{file_pattern} \
--val_file_pattern=tfrecord/{file_pattern} \
--model_name={MODEL} \
--model_dir=/tmp/model_dir/{MODEL}-scratch \
--pretrained_ckpt={backbone_name} \
--batch_size=16 \
--eval_samples={images_per_epoch} \
--num_examples_per_epoch={images_per_epoch} --num_epochs=1 \
--hparams="num_classes=20,moving_average_decay=0,mixed_precision=true"
# The full training tfrecord is large to generate, so we skip that step here.
import os
if MODEL not in os.listdir():
download(MODEL)
!mkdir /tmp/model_dir/
# key option: use --ckpt rather than --backbone_ckpt.
!python -m tf2.train --mode=traineval \
--train_file_pattern=tfrecord/{file_pattern} \
--val_file_pattern=tfrecord/{file_pattern} \
--model_name={MODEL} \
--model_dir=/tmp/model_dir/{MODEL}-finetune \
--pretrained_ckpt={MODEL} \
--batch_size=16 \
--eval_samples={images_per_epoch} \
--num_examples_per_epoch={images_per_epoch} --num_epochs=1 \
--hparams="num_classes=20,moving_average_decay=0,mixed_precision=true"
%load_ext tensorboard
%tensorboard --logdir /tmp/model_dir/
# Notably, this is just a demo with almost zero accuracy due to very limited
# training steps, but we can see finetuning has smaller loss than training
# from scratch at the beginning.
!pip install tf2onnx
!python -m tf2.inspector --mode=export --model_name={MODEL} --model_dir={MODEL} --saved_model_dir={saved_model_dir} --hparams="nms_configs.method='hard', nms_configs.iou_thresh=0.5, nms_configs.sigma=0.0"
!python -m tf2onnx.convert --saved-model={saved_model_dir} --output={saved_model_dir}/model.onnx --opset=11
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Init
Step2: Simulating fragments
Step3: Number of amplicons per taxon
Step4: Converting fragments to kde object
Step5: Checking ampfrag info
Step6: Making an incorp config file
Step7: Selecting incorporator taxa
Step8: Creating a community file
Step9: Plotting community rank abundances
Step10: Simulating gradient fractions
Step11: Plotting fractions
Step12: Adding diffusion
Step13: Adding DBL 'smearing'
Step14: Comparing DBL+diffusion to diffusion
Step15: Adding isotope incorporation to BD distribution
Step16: Plotting stats on BD shift from isotope incorporation
Step17: Simulating an OTU table
Step18: Plotting taxon abundances
Step19: Simulating PCR bias
Step20: Plotting change in relative abundances
Step21: Subsampling from the OTU table
Step22: Plotting seq count distribution
Step23: Plotting abundance distributions
Step24: Making a wide OTU table
Step25: Making metadata (phyloseq
Step26: Community analysis
Step27: DESeq2
Step28: Checking results of confusion matrix
Step29: Notes
Step30: qSIP
Step31: Assessing qSIP atom % excess accuracy
Step32: regression
Step33: Calculating a confusion matrix
Step34: delta BD
|
<ASSISTANT_TASK:>
Python Code:
workDir = '/home/nick/notebook/SIPSim/dev/bac_genome1147/validation_rep3/'
genomeDir = '/var/seq_data/ncbi_db/genome/Jan2016/bac_complete_spec-rep1_rn/'
R_dir = '/home/nick/notebook/SIPSim/lib/R/'
figureDir = '/home/nick/notebook/SIPSim/figures/bac_genome_n1147/'
bandwidth = 0.8
DBL_scaling = 0.5
subsample_dist = 'lognormal'
subsample_mean = 9.432
subsample_scale = 0.5
subsample_min = 10000
subsample_max = 30000
import glob
from os.path import abspath
import nestly
from IPython.display import Image
import os
%load_ext rpy2.ipython
%load_ext pushnote
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(gridExtra)
if not os.path.isdir(workDir):
os.makedirs(workDir)
if not os.path.isdir(figureDir):
os.makedirs(figureDir)
%cd $workDir
# Determining min/max BD that
## min G+C cutoff
min_GC = 13.5
## max G+C cutoff
max_GC = 80
## max G+C shift
max_13C_shift_in_BD = 0.036
min_range_BD = min_GC/100.0 * 0.098 + 1.66
max_range_BD = max_GC/100.0 * 0.098 + 1.66
max_range_BD = max_range_BD + max_13C_shift_in_BD
print 'Min BD: {}'.format(min_range_BD)
print 'Max BD: {}'.format(max_range_BD)
# estimated coverage
mean_frag_size = 9000.0
mean_amp_len = 300.0
n_frags = 10000
coverage = round(n_frags * mean_amp_len / mean_frag_size, 1)
msg = 'Average coverage from simulating {} fragments: {}X'
print msg.format(n_frags, coverage)
!SIPSim fragments \
$genomeDir/genome_index.txt \
--fp $genomeDir \
--fr ../../515F-806R.fna \
--fld skewed-normal,9000,2500,-5 \
--flr None,None \
--nf 10000 \
--np 24 \
2> ampFrags.log \
> ampFrags.pkl
!printf "Number of taxa with >=1 amplicon: "
!grep "Number of amplicons: " ampFrags.log | \
perl -ne "s/^.+ +//; print unless /^0$/" | wc -l
!grep "Number of amplicons: " ampFrags.log | \
perl -pe 's/.+ +//' | hist
!SIPSim fragment_KDE \
ampFrags.pkl \
> ampFrags_kde.pkl
!SIPSim KDE_info \
-s ampFrags_kde.pkl \
> ampFrags_kde_info.txt
%%R
# loading
df = read.delim('ampFrags_kde_info.txt', sep='\t')
df.kde1 = df %>%
filter(KDE_ID == 1)
df.kde1 %>% head(n=3)
BD_GC50 = 0.098 * 0.5 + 1.66
%%R -w 500 -h 250
# plotting
p.amp = ggplot(df.kde1, aes(median)) +
geom_histogram(binwidth=0.001) +
geom_vline(xintercept=BD_GC50, linetype='dashed', color='red', alpha=0.7) +
labs(x='Median buoyant density') +
theme_bw() +
theme(
text = element_text(size=16)
)
p.amp
!SIPSim incorpConfigExample \
--percTaxa 10 \
--percIncorpUnif 100 \
--n_reps 3 \
> PT10_PI100.config
# checking output
!cat PT10_PI100.config
!SIPSim KDE_selectTaxa \
-f 0.1 \
ampFrags_kde.pkl \
> incorporators.txt
!SIPSim communities \
--config PT10_PI100.config \
$genomeDir/genome_index.txt \
> comm.txt
%%R -w 750 -h 300
tbl = read.delim('comm.txt', sep='\t') %>%
mutate(library = library %>% as.character %>% as.numeric,
condition = ifelse(library %% 2 == 0, 'Control', 'Treatment'))
ggplot(tbl, aes(rank, rel_abund_perc, color=condition, group=library)) +
geom_line() +
scale_y_log10() +
scale_color_discrete('Community') +
labs(x='Rank', y='Relative abundance (%)') +
theme_bw() +
theme(
text=element_text(size=16)
)
!SIPSim gradient_fractions \
--BD_min $min_range_BD \
--BD_max $max_range_BD \
comm.txt \
> fracs.txt
%%R -w 600 -h 500
tbl = read.delim('fracs.txt', sep='\t')
ggplot(tbl, aes(fraction, fraction_size)) +
geom_bar(stat='identity') +
facet_grid(library ~ .) +
labs(y='fraction size') +
theme_bw() +
theme(
text=element_text(size=16)
)
%%R -w 450 -h 250
tbl$library = as.character(tbl$library)
ggplot(tbl, aes(library, fraction_size)) +
geom_boxplot() +
labs(y='fraction size') +
theme_bw() +
theme(
text=element_text(size=16)
)
!SIPSim diffusion \
--bw $bandwidth \
--np 20 \
ampFrags_kde.pkl \
> ampFrags_kde_dif.pkl \
2> ampFrags_kde_dif.log
!SIPSim DBL \
--comm comm.txt \
--commx $DBL_scaling \
--np 20 \
-o ampFrags_kde_dif_DBL.pkl \
ampFrags_kde_dif.pkl \
2> ampFrags_kde_dif_DBL.log
# checking output
!tail -n 5 ampFrags_kde_dif_DBL.log
# none (raw KDEs: no diffusion or DBL applied)
!SIPSim KDE_info \
-s ampFrags_kde.pkl \
> ampFrags_kde_info.txt
# diffusion
!SIPSim KDE_info \
-s ampFrags_kde_dif.pkl \
> ampFrags_kde_dif_info.txt
# diffusion + DBL
!SIPSim KDE_info \
-s ampFrags_kde_dif_DBL.pkl \
> ampFrags_kde_dif_DBL_info.txt
%%R
inFile = 'ampFrags_kde_info.txt'
df.raw = read.delim(inFile, sep='\t') %>%
filter(KDE_ID == 1)
df.raw$stage = 'raw'
inFile = 'ampFrags_kde_dif_info.txt'
df.dif = read.delim(inFile, sep='\t')
df.dif$stage = 'diffusion'
inFile = 'ampFrags_kde_dif_DBL_info.txt'
df.DBL = read.delim(inFile, sep='\t')
df.DBL$stage = 'diffusion +\nDBL'
df = rbind(df.raw, df.dif, df.DBL)
df.dif = ''
df.DBL = ''
df %>% head(n=3)
%%R -w 350 -h 300
df$stage = factor(df$stage, levels=c('raw', 'diffusion', 'diffusion +\nDBL'))
ggplot(df, aes(stage)) +
geom_boxplot(aes(y=min), color='red') +
geom_boxplot(aes(y=median), color='darkgreen') +
geom_boxplot(aes(y=max), color='blue') +
labs(y = 'Buoyant density (g ml^-1)') +
theme_bw() +
theme(
text = element_text(size=16),
axis.title.x = element_blank()
)
!SIPSim isotope_incorp \
--comm comm.txt \
--shift ampFrags_BD-shift.txt \
--taxa incorporators.txt \
--np 20 \
-o ampFrags_kde_dif_DBL_incorp.pkl \
ampFrags_kde_dif_DBL.pkl \
PT10_PI100.config \
2> ampFrags_kde_dif_DBL_incorp.log
# checking log
!tail -n 5 ampFrags_kde_dif_DBL_incorp.log
%%R
inFile = 'ampFrags_BD-shift.txt'
df = read.delim(inFile, sep='\t') %>%
mutate(library = library %>% as.character)
%%R -h 275 -w 375
inFile = 'ampFrags_BD-shift.txt'
df = read.delim(inFile, sep='\t') %>%
mutate(library = library %>% as.character %>% as.numeric)
df.s = df %>%
mutate(incorporator = ifelse(min > 0.001, TRUE, FALSE),
incorporator = ifelse(is.na(incorporator), 'NA', incorporator),
condition = ifelse(library %% 2 == 0, 'control', 'treatment')) %>%
group_by(library, incorporator, condition) %>%
summarize(n_incorps = n())
# plotting
ggplot(df.s, aes(library %>% as.character, n_incorps, fill=incorporator)) +
geom_bar(stat='identity') +
labs(x='Community', y = 'Count', title='Number of incorporators\n(according to BD shift)') +
theme_bw() +
theme(
text = element_text(size=16)
)
!SIPSim OTU_table \
--abs 1e9 \
--np 20 \
ampFrags_kde_dif_DBL_incorp.pkl \
comm.txt \
fracs.txt \
> OTU_n2_abs1e9.txt \
2> OTU_n2_abs1e9.log
# checking log
!tail -n 5 OTU_n2_abs1e9.log
%%R
## BD for G+C of 0 or 100
BD.GCp0 = 0 * 0.098 + 1.66
BD.GCp50 = 0.5 * 0.098 + 1.66
BD.GCp100 = 1 * 0.098 + 1.66
%%R -w 700 -h 450
# plotting absolute abundances
# loading file
df = read.delim('OTU_n2_abs1e9.txt', sep='\t')
df.s = df %>%
group_by(library, BD_mid) %>%
summarize(total_count = sum(count))
## plot
p = ggplot(df.s, aes(BD_mid, total_count)) +
#geom_point() +
geom_area(stat='identity', alpha=0.3, position='dodge') +
geom_vline(xintercept=c(BD.GCp50), linetype='dashed', alpha=0.5) +
labs(x='Buoyant density', y='Total abundance') +
facet_grid(library ~ .) +
theme_bw() +
theme(
text = element_text(size=16)
)
p
%%R -w 700 -h 450
# plotting number of taxa at each BD
df.nt = df %>%
filter(count > 0) %>%
group_by(library, BD_mid) %>%
summarize(n_taxa = n())
## plot
p = ggplot(df.nt, aes(BD_mid, n_taxa)) +
#geom_point() +
geom_area(stat='identity', alpha=0.3, position='dodge') +
#geom_histogram(stat='identity') +
geom_vline(xintercept=c(BD.GCp50), linetype='dashed', alpha=0.5) +
labs(x='Buoyant density', y='Number of taxa') +
facet_grid(library ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p
%%R -w 700 -h 450
# plotting relative abundances
## plot
p = ggplot(df, aes(BD_mid, count, fill=taxon)) +
geom_vline(xintercept=c(BD.GCp50), linetype='dashed', alpha=0.5) +
labs(x='Buoyant density', y='Absolute abundance') +
facet_grid(library ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p + geom_area(stat='identity', position='dodge', alpha=0.5)
%%R -w 700 -h 450
p +
geom_area(stat='identity', position='fill') +
labs(x='Buoyant density', y='Relative abundance')
!SIPSim OTU_PCR \
OTU_n2_abs1e9.txt \
--debug \
> OTU_n2_abs1e9_PCR.txt
%%R -w 800 -h 300
# loading file
F = 'OTU_n2_abs1e9_PCR.txt'
df.SIM = read.delim(F, sep='\t') %>%
mutate(molarity_increase = final_molarity / init_molarity * 100,
library = library %>% as.character)
p1 = ggplot(df.SIM, aes(init_molarity, final_molarity, color=library)) +
geom_point(shape='O', alpha=0.5) +
labs(x='Initial molarity', y='Final molarity') +
theme_bw() +
theme(
text = element_text(size=16)
)
p2 = ggplot(df.SIM, aes(init_molarity, molarity_increase, color=library)) +
geom_point(shape='O', alpha=0.5) +
scale_y_log10() +
labs(x='Initial molarity', y='% increase in molarity') +
theme_bw() +
theme(
text = element_text(size=16)
)
grid.arrange(p1, p2, ncol=2)
# PCR w/out --debug (no extra output)
!SIPSim OTU_PCR \
OTU_n2_abs1e9.txt \
> OTU_n2_abs1e9_PCR.txt
!SIPSim OTU_subsample \
--dist $subsample_dist \
--dist_params mean:$subsample_mean,sigma:$subsample_scale \
--min_size $subsample_min \
--max_size $subsample_max \
OTU_n2_abs1e9_PCR.txt \
> OTU_n2_abs1e9_PCR_subNorm.txt
%%R -w 350 -h 250
df = read.csv('OTU_n2_abs1e9_PCR_subNorm.txt', sep='\t')
df.s = df %>%
group_by(library, fraction) %>%
summarize(total_count = sum(count)) %>%
ungroup() %>%
mutate(library = as.character(library))
ggplot(df.s, aes(library, total_count)) +
geom_boxplot() +
labs(y='Number of sequences\nper fraction') +
theme_bw() +
theme(
text = element_text(size=16)
)
%%R
# loading file
df.abs = read.delim('OTU_n2_abs1e9.txt', sep='\t')
df.sub = read.delim('OTU_n2_abs1e9_PCR_subNorm.txt', sep='\t')
#lib.reval = c('1' = 'control',
# '2' = 'treatment',
# '3' = 'control',
# '4' = 'treatment',
# '5' = 'control',
# '6' = 'treatment')
#df.abs = mutate(df.abs, library = plyr::revalue(as.character(library), lib.reval))
#df.sub = mutate(df.sub, library = plyr::revalue(as.character(library), lib.reval))
%%R -w 700 -h 1000
# plotting absolute abundances
## plot
p = ggplot(df.abs, aes(BD_mid, count, fill=taxon)) +
geom_vline(xintercept=c(BD.GCp50), linetype='dashed', alpha=0.5) +
labs(x='Buoyant density') +
facet_grid(library ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
axis.title.y = element_text(vjust=1),
axis.title.x = element_blank(),
legend.position = 'none',
plot.margin=unit(c(1,1,0.1,1), "cm")
)
p1 = p + geom_area(stat='identity', position='dodge', alpha=0.5) +
labs(y='Total community\n(absolute abundance)')
# plotting absolute abundances of subsampled
## plot
p = ggplot(df.sub, aes(BD_mid, count, fill=taxon)) +
geom_vline(xintercept=c(BD.GCp50), linetype='dashed', alpha=0.5) +
labs(x='Buoyant density') +
facet_grid(library ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p2 = p + geom_area(stat='identity', position='dodge', alpha=0.5) +
labs(y='Subsampled community\n(absolute abundance)') +
theme(
axis.title.y = element_text(vjust=1),
axis.title.x = element_blank(),
plot.margin=unit(c(0.1,1,0.1,1), "cm")
)
# plotting relative abundances of subsampled
p3 = p + geom_area(stat='identity', position='fill') +
geom_vline(xintercept=c(BD.GCp50), linetype='dashed', alpha=0.5) +
labs(y='Subsampled community\n(relative abundance)') +
theme(
axis.title.y = element_text(vjust=1),
plot.margin=unit(c(0.1,1,1,1.35), "cm")
)
# combining plots
grid.arrange(p1, p2, p3, ncol=1)
!SIPSim OTU_wideLong -w \
OTU_n2_abs1e9_PCR_subNorm.txt \
> OTU_n2_abs1e9_PCR_subNorm_w.txt
!SIPSim OTU_sampleData \
OTU_n2_abs1e9_PCR_subNorm.txt \
> OTU_n2_abs1e9_PCR_subNorm_meta.txt
# making phyloseq object from OTU table
!SIPSimR phyloseq_make \
OTU_n2_abs1e9_PCR_subNorm_w.txt \
-s OTU_n2_abs1e9_PCR_subNorm_meta.txt \
> OTU_n2_abs1e9_PCR_subNorm.physeq
## making ordination
!SIPSimR phyloseq_ordination \
OTU_n2_abs1e9_PCR_subNorm.physeq \
OTU_n2_abs1e9_PCR_subNorm_bray-NMDS.pdf
## filtering phyloseq object to just taxa/samples of interest (eg., BD-min/max)
!SIPSimR phyloseq_edit \
--BD_min 1.71 \
--BD_max 1.75 \
OTU_n2_abs1e9_PCR_subNorm.physeq \
> OTU_n2_abs1e9_PCR_subNorm_filt.physeq
## making ordination
!SIPSimR phyloseq_ordination \
OTU_n2_abs1e9_PCR_subNorm_filt.physeq \
OTU_n2_abs1e9_PCR_subNorm_filt_bray-NMDS.pdf
# making png figures
!convert OTU_n2_abs1e9_PCR_subNorm_bray-NMDS.pdf OTU_n2_abs1e9_PCR_subNorm_bray-NMDS.png
!convert OTU_n2_abs1e9_PCR_subNorm_filt_bray-NMDS.pdf OTU_n2_abs1e9_PCR_subNorm_filt_bray-NMDS.png
Image(filename='OTU_n2_abs1e9_PCR_subNorm_bray-NMDS.png')
Image(filename='OTU_n2_abs1e9_PCR_subNorm_filt_bray-NMDS.png')
## DESeq2
!SIPSimR phyloseq_DESeq2 \
--log2 0.25 \
--hypo greater \
--cont 1,3,5 \
--treat 2,4,6 \
--occur_all 0.25 \
OTU_n2_abs1e9_PCR_subNorm_filt.physeq \
> OTU_n2_abs1e9_PCR_subNorm_DS2.txt
## Confusion matrix
!SIPSimR DESeq2_confuseMtx \
--libs 2,4,6 \
--padj 0.1 \
ampFrags_BD-shift.txt \
OTU_n2_abs1e9_PCR_subNorm_DS2.txt
%%R -w 500 -h 350
byClass = read.delim('DESeq2-cMtx_byClass.txt', sep='\t')
byClass %>% filter(variables=='Balanced Accuracy') %>% print
ggplot(byClass, aes(variables, values)) +
geom_bar(stat='identity') +
labs(y='Value') +
facet_grid(library ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
axis.title.x = element_blank(),
axis.text.x = element_text(angle=45, hjust=1)
)
%%R -w 550 -h 350
df_cMtx = read.delim('DESeq2-cMtx_data.txt', sep='\t') %>%
gather(clsfy, clsfy_value, incorp.pred, incorp.known) %>%
filter(! is.na(clsfy_value))
ggplot(df_cMtx, aes(log2FoldChange, padj)) +
geom_point(size=3, shape='O') +
facet_grid(clsfy ~ clsfy_value) +
labs(x='log2 fold change', y='Adjusted P-value') +
theme_bw() +
theme(
text = element_text(size=16)
)
%%R
clsfy = function(guess,known){
if(is.na(guess) | is.na(known)){
return(NA)
}
if(guess == TRUE){
if(guess == known){
return('True positive')
} else {
return('False positive')
}
} else
if(guess == FALSE){
if(guess == known){
return('True negative')
} else {
return('False negative')
}
} else {
stop('Error: true or false needed')
}
}
%%R
df = read.delim('DESeq2-cMtx_data.txt', sep='\t')
df = df %>%
filter(! is.na(log2FoldChange), library %in% c(2,4,6)) %>%
mutate(taxon = reorder(taxon, -log2FoldChange),
cls = mapply(clsfy, incorp.pred, incorp.known))
df %>% head(n=3)
%%R -w 800 -h 350
df.TN = df %>% filter(cls == 'True negative')
df.TP = df %>% filter(cls == 'True positive')
df.FN = df %>% filter(cls == 'False negative')
ggplot(df, aes(taxon, log2FoldChange, color=cls,
ymin=log2FoldChange - lfcSE, ymax=log2FoldChange + lfcSE)) +
geom_pointrange(size=0.4, alpha=0.5) +
geom_pointrange(data=df.TP, size=0.4, alpha=0.3) +
    geom_pointrange(data=df.FN, size=0.4, alpha=0.3) +
labs(x = 'Taxon', y = 'Log2 fold change') +
facet_grid(library ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
panel.grid.major.x = element_blank(),
panel.grid.minor.x = element_blank(),
legend.title=element_blank(),
axis.text.x = element_blank(),
legend.position = 'bottom'
)
!SIPSim qSIP \
OTU_n2_abs1e9.txt \
OTU_n2_abs1e9_PCR_subNorm.txt \
> OTU_n2_abs1e9_PCR_subNorm_qSIP.txt
# making an experimental design file for qSIP
import itertools
x = range(1,7)
y = ['control', 'treatment']
expDesignFile = os.path.join(workDir, 'qSIP_exp_design.txt')
with open(expDesignFile, 'wb') as outFH:
for i,z in itertools.izip(x,itertools.cycle(y)):
line = '\t'.join([str(i),z])
outFH.write(line + '\n')
!head $expDesignFile
!SIPSim qSIP_atomExcess \
--np 10 \
OTU_n2_abs1e9_PCR_subNorm_qSIP.txt \
qSIP_exp_design.txt \
> OTU_n2_abs1e9_PCR_subNorm_qSIP_atom.txt
%%R
df_qSIP = read.delim('OTU_n2_abs1e9_PCR_subNorm_qSIP_atom.txt', sep='\t')
df_shift = read.delim('ampFrags_BD-shift.txt', sep='\t') %>%
filter(library %in% c(2,4,6)) %>%
group_by(taxon) %>%
summarize(median = median(median)) %>%
ungroup() %>%
rename('median_true_BD_shift' = median)
df_qSIP %>% head(n=3) %>% print
print('------------------------')
df_shift %>% head(n=3) %>% print
%%R
df.j = inner_join(df_qSIP, df_shift, c('taxon' = 'taxon')) %>%
filter(!is.na(BD_diff)) %>%
mutate(true_incorporator = ifelse(median_true_BD_shift > 0.03, TRUE, FALSE),
true_atom_fraction_excess = median_true_BD_shift / 0.036,
atom_fraction_excess = ifelse(is.na(atom_CI_low), 0, atom_fraction_excess))
df.j %>% head(n=3)
%%R -w 650 -h 300
ggplot(df.j, aes(BD_diff, fill=true_incorporator)) +
geom_histogram(binwidth=0.005, alpha=0.7, position='identity') +
scale_color_discrete('Incorporator?') +
labs(x='qSIP: BD shift (g ml^-1)') +
theme_bw() +
theme(
text = element_text(size=16)
)
%%R -w 800 -h 300
df.j$taxon = reorder(df.j$taxon, -df.j$atom_fraction_excess)
ggplot(df.j, aes(taxon, true_atom_fraction_excess,
ymin=atom_CI_low, ymax=atom_CI_high)) +
geom_linerange(alpha=0.75) +
geom_point(color='red', size=0.25) +
geom_point(aes(y=atom_fraction_excess), color='green', size=0.2) +
labs(y='13C atom fraction excess') +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_blank()
)
%%R -w 500 -h 250
# true incorporator error
ggplot(df.j, aes(atom_fraction_excess - true_atom_fraction_excess,
fill=true_incorporator)) +
geom_histogram(binwidth=0.05, alpha=0.7, position='identity') +
scale_fill_discrete('Incorporator?') +
labs(x='distance from true value') +
theme_bw() +
theme(
text = element_text(size=16)
)
%%R
zip.res = pscl::zeroinfl(true_atom_fraction_excess ~ atom_fraction_excess, data=df.j)
zip.res %>% summary %>% print
%%R
update(zip.res, . ~ 1) %>% logLik
%%R
lm.res = lm(true_atom_fraction_excess ~ atom_fraction_excess, data=df.j)
lm.res %>% summary
!SIPSimR qSIP_confuseMtx \
--libs 2,4,6 \
ampFrags_BD-shift.txt \
OTU_n2_abs1e9_PCR_subNorm_qSIP_atom.txt
%%R -h 250
df = read.delim('qSIP-cMtx_byClass.txt', sep='\t') %>%
filter(library == 2)
ggplot(df, aes(variables, values)) +
geom_bar(stat='identity') +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_text(angle=45, hjust=1),
axis.title.x = element_blank()
)
%%R
df
!SIPSim deltaBD \
OTU_n2_abs1e9_PCR_subNorm.txt \
qSIP_exp_design.txt \
> OTU_n2_abs1e9_PCR_subNorm_dBD.txt
%%R
df_dBD = read.delim('OTU_n2_abs1e9_PCR_subNorm_dBD.txt', sep='\t')
df_shift = read.delim('ampFrags_BD-shift.txt', sep='\t') %>%
filter(library %in% c(2,4,6)) %>%
group_by(taxon) %>%
summarize(median = median(median)) %>%
ungroup() %>%
rename('median_true_BD_shift' = median)
df_dBD %>% head(n=3) %>% print
print('------------------------')
df_shift %>% head(n=3) %>% print
%%R
df.j = inner_join(df_dBD, df_shift, c('taxon' = 'taxon')) %>%
mutate(true_incorporator = ifelse(median_true_BD_shift > 0.03, TRUE, FALSE))
df.j %>% head(n=3)
%%R -w 650 -h 300
ggplot(df.j, aes(delta_BD, fill=true_incorporator)) +
geom_histogram(binwidth=0.005, alpha=0.7, position='identity') +
scale_color_discrete('Incorporator?') +
labs(x='deltaBD: BD shift (g ml^-1)') +
theme_bw() +
theme(
text = element_text(size=16)
)
%%R -w 800 -h 300
df.j$taxon = reorder(df.j$taxon, -df.j$delta_BD)
ggplot(df.j, aes(taxon, median_true_BD_shift)) +
geom_point(color='red', size=0.25) +
geom_point(aes(y=delta_BD), color='green', size=0.2) +
labs(y='BD shift') +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_blank()
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Key Phrase Extraction
Step4: Clustering Documents
Step5: LDA
|
<ASSISTANT_TASK:>
Python Code:
DATA = "data/ieee-xplore.csv"
def load_data(path=DATA):
with open(path, 'r') as f:
reader = csv.DictReader(f)
for row in reader:
row['Tokenized Abstract'] = tokenize(row['Abstract'])
yield row
def tokenize(text):
return [
list(nltk.wordpunct_tokenize(sent))
for sent in nltk.sent_tokenize(text)
]
corpus = list(load_data())
corpus[0].keys()
PUNCT = set(string.punctuation)
STOPWORDS = set(nltk.corpus.stopwords.words('english'))
GRAMMAR = r'KT: {(<JJ>* <NN.*>+ <IN>)? <JJ>* <NN.*>+}'
GOODTAGS = set(['JJ','JJR','JJS','NN','NNP','NNS','NNPS'])
def candidates(abstract, chunks=True, grammar=GRAMMAR, tags=GOODTAGS):
    """Extracts the candidate terms (chunks or individual words)."""
tagged = nltk.pos_tag_sents(abstract)
if chunks:
# Extract candidate chunks
chunker = nltk.chunk.regexp.RegexpParser(grammar)
chunks = list(itertools.chain.from_iterable(
nltk.chunk.tree2conlltags(chunker.parse(sent))
for sent in tagged
))
def object_filter(group):
word, pos, chunk = group
return chunk != 'O'
# Rejoin candidates as text
candidates = [
" ".join(word for word, pos, chunk in group).lower()
for key, group in itertools.groupby(chunks, object_filter) if key
]
# Filter stopwords and puntuation
return [
cand for cand in candidates
if (
cand not in STOPWORDS and
not all(char in PUNCT for char in cand))
]
else:
tagged = itertools.chain.from_iterable(tagged)
# Return lower case and filtered words
return [
word.lower() for word, tag in tagged
if (
tag in tags and word.lower() not in STOPWORDS
and not all(char in PUNCT for char in word)
)
]
def score_keyphrases_by_tfidf(texts, fileids, N=20):
    """Computes the TF-IDF scoring of the corpus given a list of lists of
    candidate terms, where each list represents a single document."""
id2word = gensim.corpora.Dictionary(texts)
corpus = [id2word.doc2bow(text) for text in texts]
tfidf = gensim.models.TfidfModel(corpus)
scored = tfidf[corpus]
output = []
for idx, doc in enumerate(scored):
output.append(u"Document '{}' key phrases:".format(fileids[idx]))
# Get the top N terms by TF-IDF score
for wid, score in heapq.nlargest(N, doc, key=itemgetter(1)):
output.append(u"{:0.3f}: {}".format(score, id2word[wid]))
output.extend([u'',u''])
return u"\n".join(output)
fileids = [doc['Document Title'] for doc in corpus]
with open('data/keyphrases.txt', 'w') as f:
f.write(
score_keyphrases_by_tfidf([candidates(doc['Tokenized Abstract']) for doc in corpus], fileids, 5)
)
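# Illustrative check (not in the original notebook): look at the candidate phrases
# extracted from the first abstract before any TF-IDF scoring is applied.
candidates(corpus[0]['Tokenized Abstract'])[:10]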
corpus[0].keys()
len(corpus)
from sklearn.pipeline import Pipeline
from sklearn.cluster import MiniBatchKMeans
from sklearn.feature_extraction.text import TfidfVectorizer
model = Pipeline([
('tfidf', TfidfVectorizer()),
('kmeans', MiniBatchKMeans()),
])
model.fit([doc['Abstract'] for doc in corpus])
model.get_params()
# model.steps[-1]
centers = model.named_steps['kmeans'].cluster_centers_
from heapq import nlargest
from operator import itemgetter
def rank_center_terms(k=0, n=10):
words = model.named_steps['tfidf'].inverse_transform([centers[k]])[0]
return nlargest(n, list(zip(words, centers[k])), key=itemgetter(1))
rank_center_terms()
rank_center_terms(1)
model.named_steps['kmeans'].labels_
def titles_for_cluster(k=0):
for idx, label in enumerate(model.named_steps['kmeans'].labels_):
if label == k:
print(corpus[idx]['Document Title'])
#titles_for_cluster()
%matplotlib notebook
import seaborn as sns
import matplotlib.pyplot as plt
sns.distplot(model.named_steps['kmeans'].labels_)
import numpy as np
from sklearn.cluster import SpectralClustering
from yellowbrick.cluster import SilhouetteVisualizer
X = TfidfVectorizer().fit_transform([doc['Abstract'] for doc in corpus])
oz = SilhouetteVisualizer(MiniBatchKMeans())
oz.fit(X)
oz.poof()
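# A rough sketch for choosing the number of clusters, assuming yellowbrick's
# KElbowVisualizer is available in the installed version; it fits MiniBatchKMeans
# over a range of cluster counts and reports a distortion score for each.
from yellowbrick.cluster import KElbowVisualizer

elbow = KElbowVisualizer(MiniBatchKMeans(), k=(2, 12))
elbow.fit(X)
elbow.poof()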
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import FeatureUnion
from sklearn.base import BaseEstimator, TransformerMixin
class ColumnSelector(BaseEstimator, TransformerMixin):
def __init__(self, column):
self.column = column
def fit(self, X):
return self
def transform(self, X):
return [
doc[self.column] for doc in X
]
class Tokenizer(BaseEstimator, TransformerMixin):
def __init__(self, sep=";"):
self.sep = sep
def fit(self, X):
return self
def transform(self, X):
return [
{key: 1 for key in doc.split(self.sep)}
for doc in X
]
from sklearn.decomposition import TruncatedSVD
model = Pipeline([
('cols', FeatureUnion([
('abstract', Pipeline([
('select', ColumnSelector('Abstract')),
('tfidf', TfidfVectorizer()),
])),
('key terms', Pipeline([
('select', ColumnSelector('Author Keywords')),
('tokenize', Tokenizer()),
('vect', DictVectorizer()),
]))
])),
('svd', TruncatedSVD(1000))
])
docs = model.fit_transform(corpus)
docs[0].shape
import pyLDAvis
import pyLDAvis.sklearn
pyLDAvis.enable_notebook()
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation
tfvec = CountVectorizer(strip_accents = 'unicode', stop_words = 'english', lowercase = True, token_pattern = r'\b[a-zA-Z]{3,}\b', max_df = 0.5, min_df = 10)
dtm_tf = tfvec.fit_transform(ColumnSelector('Abstract').fit_transform(corpus))
model = Pipeline([
('select', ColumnSelector('Abstract')),
('tfidf', TfidfVectorizer(**tfvec.get_params())),
('lda', LatentDirichletAllocation())
])
model.fit(corpus)
pyLDAvis.sklearn.prepare(model.named_steps['lda'], dtm_tf, tfvec)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This chapter presents a simple model of a bike share system and
Step2: The expressions in parentheses are keyword arguments.
Step3: And this
Step4: Or, to display the state variables and their values, you can just type the name of the object
Step5: These values make up the state of the system.
Step6: You don't have to use show, but I think it looks better.
Step7: Or we can use update operators, -= and +=, to subtract 1 from
Step8: The result is the same either way.
Step9: Rather than repeat them every time a bike moves, we can define a new
Step10: def is a special word in Python that indicates we are defining a new
Step11: When you call the function, it runs the statements in the body, which
Step12: When you call a function, you have to include the parentheses. If you
Step13: This result indicates that bike_to_wellesley is a function. You don't
Step14: Jupyter runs both lines, but it only displays the value of the
Step15: When you call the print function, you can put a variable name in
Step16: Python looks up the values of the variables and displays them; in this
Step17: Each time we call this version of the function, it displays a message,
Step18: And call it like this
Step19: One benefit of defining functions is that you avoid repeating chunks of
Step20: The result is one of two values
Step21: If the result from flip is True, the program displays the string
Step22: Now we can use flip to simulate the arrival of students who want to
Step23: If students arrive at the Wellesley station every 3 minutes, on average,
Step24: We can combine these snippets into a function that simulates a time
Step25: Then we can simulate a time step like this
Step26: Even though there are no values in parentheses, we have to include them.
Step27: The values of p1 and p2 are not set inside this function; instead,
Step28: The values you provide when you call the function are called
Step29: The advantage of using parameters is that you can call the same function many times, providing different arguments each time.
Step30: The syntax here should look familiar; the first line ends with a
Step31: We can create a new, empty TimeSeries like this
Step32: And we can add a quantity like this
Step33: The number in brackets is the time stamp, also called a label.
Step34: Each time through the loop, we print the value of i and call step, which updates bikeshare.
Step35: The left column is the time stamps; the right column is the quantities (which might be negative, depending on the state of the system).
Step36: Plotting
Step37: Summary
Step38: Exercise
Step39: Exercise
Step40: Under the Hood
|
<ASSISTANT_TASK:>
Python Code:
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
bikeshare = State(olin=10, wellesley=2)
bikeshare.olin
bikeshare.wellesley
bikeshare
show(bikeshare)
bikeshare.olin = 9
bikeshare.wellesley = 3
bikeshare.olin -= 1
bikeshare.wellesley += 1
bikeshare.olin -= 1
bikeshare.wellesley += 1
def bike_to_wellesley():
bikeshare.olin -= 1
bikeshare.wellesley += 1
bike_to_wellesley()
bikeshare
bike_to_wellesley
bikeshare.olin
bikeshare.wellesley
print(bikeshare.olin)
print(bikeshare.wellesley)
print(bikeshare.olin, bikeshare.wellesley)
def bike_to_wellesley():
print('Moving a bike to Wellesley')
bikeshare.olin -= 1
bikeshare.wellesley += 1
def bike_to_olin():
print('Moving a bike to Olin')
bikeshare.wellesley -= 1
bikeshare.olin += 1
bike_to_olin()
flip(0.7)
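# flip comes from the modsim library; conceptually it returns True with probability p.
# A rough stand-in (an illustrative assumption, not modsim's actual source):
import random

def flip_sketch(p=0.5):
    return random.random() < p

sum(flip_sketch(0.7) for _ in range(1000)) / 1000  # should be close to 0.7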
if flip(0.5):
print('heads')
if flip(0.5):
print('heads')
else:
print('tails')
if flip(0.5):
bike_to_wellesley()
if flip(0.33):
bike_to_olin()
def step():
if flip(0.5):
bike_to_wellesley()
if flip(0.33):
bike_to_olin()
step()
def step(p1, p2):
if flip(p1):
bike_to_wellesley()
if flip(p2):
bike_to_olin()
step(0.5, 0.33)
p1 = 0.5
p2 = 0.33
if flip(p1):
bike_to_wellesley()
if flip(p2):
bike_to_olin()
for i in range(3):
print(i)
bike_to_wellesley()
bikeshare = State(olin=10, wellesley=2)
results = TimeSeries()
results[0] = bikeshare.olin
for i in range(3):
print(i)
step(0.6, 0.6)
results[i+1] = bikeshare.olin
results
show(results)
results.plot()
decorate(title='Olin-Wellesley Bikeshare',
xlabel='Time step (min)',
ylabel='Number of bikes')
bikeshare = State(olin=10, wellesley=2)
bikeshare.wellesley
# Solution
bikeshare = State(olin=10, wellesley=2, babson=0)
show(bikeshare)
# Solution
def run_simulation(p1, p2, num_steps):
results = TimeSeries()
results[0] = bikeshare.olin
for i in range(num_steps):
step(p1, p2)
results[i+1] = bikeshare.olin
results.plot()
decorate(title='Olin-Wellesley Bikeshare',
xlabel='Time step (min)',
ylabel='Number of bikes')
# Solution
bikeshare = State(olin=10, wellesley=2)
run_simulation(0.3, 0.2, 60)
source_code(flip)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: File input/output
Step2: That's it — we have just "exported" the values of the variable 'a' to a file in *.mat format.
Step3: Now imagine the reverse situation. How do we import MATLAB data files into Python
Step4: We can list the imported variables
Step5: There are other ways to load data
Step6: Well, it looks like for these samples the histograms do not give us clear information either. Assuming the samples are representative, we can run Student's t-test to see whether there are significant differences between them.
Step7: We can see that $\frac{p}{2} > 0.05 $, so we cannot reject the null hypothesis (there are no significant differences between them).
Step8: Now that we have run our simulations, we can compute our $p$ value, which is simply the proportion of simulations that resulted in a difference greater than or equal to 6.6 (the original difference)
Step9: $$ p = \frac{N_{>6.6}}{N_{total}} = \frac{1512}{10000} = 0.15 $$
Step10: This scipy.stats tutorial shows more examples we could work through. For now, let's keep exploring SciPy; we will come back to more statistical work when we get to pandas,
Step11: For interpolation we will use SciPy's interpolate package
Step12: To create an interpolating function we will use the InterpolatedUnivariateSpline object from the interpolate package. We only need to pass it the interpolation points and the degree, and it will build a spline.
Step13: How do I get the points from here? The result we obtained is a function, and it takes $x$ as its argument.
Step14: Let's plot this function together with the interpolation points. Note that, now that we have an interpolating function, we can evaluate it over a whole domain
Step15: Now go back and check what happens if you change the degree of the spline. Take a look at all the options SciPy offers for interpolating data.
Step16: We will generate some data to see how it would work, of the form
Step17: Let's now use the polynomial.polyfit function, which receives the interpolation points and the degree of the polynomial. The result will be its coefficients, in order of increasing powers.
Step18: Very close to what we expected! To evaluate a polynomial with these coefficients, we either build the function ourselves or use the polynomial.polyval function
Step19: If the function we want to fit is more complex, we will need to fit the data to a curve using an optimization algorithm.
Step20: Let's generate the data once more, adding a bit of noise. Can you already read functions written with NumPy?
Step21: Since in this synthetic example we know the exact values, we can see that the fitted parameters deviate from the original values because of the added noise.
Step22: Let's look at the results in a plot
Step23: Another way to fit a function to experimental data is by minimizing the least-squares error. To make this example more interesting, let's also add some outliers. This example is taken from the SciPy Cookbook, robust regression
Step24: Once the model is created, we are ready to generate the data.
Step25: The function that computes the residuals can be defined as
Step26: We now have everything we need to perform the least-squares fit
Step27: Let's look at the results
Step28: The optimize package includes a multitude of methods for optimization, curve fitting and root finding. The help for this package is quite long (you can also consult it at http
Step29: The simplest approach in these cases is to apply a window filter
Step30: As we can see, if the sampling frequency is not very high or the size of our window is not adequate, the result may be unsatisfactory.
Step31: If we are interested in recovering a sinusoidal signal, and given that the noise occurs at a higher frequency, another option is to build a low-pass filter.
Step32: Finally, if the signal has a drift, we can easily correct it with
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from scipy import io as spio
a = np.ones((3, 3)) # creamos una matriz de 3x3
spio.savemat('archivo.mat', # nombre del archivo
{'a': a}) # asignamos y referenciamos el nombre con un diccionario
%ls *.mat
data = spio.loadmat('archivo.mat')
data['a']
spio.whosmat('archivo.mat')
a = np.array([84, 72, 57, 46, 63, 76, 99, 91])
b = np.array([81, 69, 74, 61, 56, 87, 69, 65, 66, 44, 62, 69])
plt.hist(b, bins=5, alpha=0.5)
plt.hist(a, bins=5, alpha=0.5)
plt.plot(a,np.zeros(len(a)),'^')
plt.plot(b,np.zeros(len(b)),'^')
plt.title("Histogram")
plt.show()
print("The mean of 'a' is {0:.1f}, and its standard deviation is {1:.1f}".format(a.mean(), a.std()))
print("The mean of 'b' is {0:.1f}, and its standard deviation is {1:.1f}".format(b.mean(), b.std()))
print("The difference between the means is {0:.1f}".format(a.mean() - b.mean()))
from scipy import stats
stats.ttest_ind(a,b)
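# ttest_ind returns a two-sided p-value; the discussion below uses the one-sided value,
# which is simply half of it (valid here because we test a single direction of difference).
t_stat, p_two_sided = stats.ttest_ind(a, b)
print("one-sided p-value: {0:.3f}".format(p_two_sided / 2))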
samples = np.concatenate([a,b])
num_simulations = 10000
differences = np.zeros(num_simulations)
for i in range(num_simulations):
np.random.shuffle(samples)
a_new = samples[0:len(a)]
b_new = samples[len(a):len(samples)]
a_mean = a_new.mean()
b_mean = b_new.mean()
differences[i]= (a_mean-b_mean)
p = np.sum(differences>(a.mean()-b.mean()))/num_simulations
p
plt.hist(differences, bins=50)
plt.axvline((a.mean()-b.mean()),color='r')
plt.xlabel('mean difference')
plt.ylabel('number')
x_i = [0.0, 0.9, 1.8, 2.7, 3.6, 4.4, 5.3, 6.2, 7.1, 8.0]
y_i = [0.0, 0.8, 1.0, 0.5, -0.4, -1.0, -0.8, -0.1, 0.7, 1.0]
plt.plot(x_i, y_i, 'x', mew=2)
from scipy import interpolate
f_interp = interpolate.InterpolatedUnivariateSpline(x_i, y_i, k=1)
f_interp
f_interp(np.pi / 2)
x = np.linspace(0, 8)
y_interp = f_interp(x)
plt.plot(x_i, y_i, 'x', mew=2)
plt.plot(x, y_interp)
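# Illustrative comparison: the same data interpolated with a cubic spline (k=3)
# versus the linear spline (k=1) built above.
f_interp3 = interpolate.InterpolatedUnivariateSpline(x_i, y_i, k=3)
plt.plot(x_i, y_i, 'x', mew=2)
plt.plot(x, f_interp(x), label='k=1')
plt.plot(x, f_interp3(x), label='k=3')
plt.legend()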
from numpy.polynomial import polynomial
x_i = np.linspace(-2, 3, num=10)
y_i = x_i ** 2 - x_i + 1 + 0.5 * np.random.randn(10)
plt.plot(x_i, y_i, 'x', mew=2)
a, b, c = polynomial.polyfit(x_i, y_i, deg=2)
a, b, c
x = np.linspace(-2, 3)
#y_fit = a + b * x + c * x ** 2
y_fit = polynomial.polyval(x, (a, b, c))
l, = plt.plot(x, y_fit)
plt.plot(x_i, y_i, 'x', mew=2, c=l.get_color())
from scipy.optimize import curve_fit
def func(x, a, b, c):
return a * np.exp(-b * x) + c
a, b, c = 2.5, 1.3, 0.5
xdata = np.linspace(0, 4, 50)
y = func(xdata, a, b, c)
y_noise = 1.5 * np.random.normal(size=xdata.size)
ydata = y + y_noise
plt.plot(xdata, ydata, 'x',mew=2, label='exp. data')
plt.plot(xdata, func(xdata, a, b, c), '-', label='true function')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()
popt, pcov = curve_fit(func, xdata, ydata)
popt
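# The diagonal of the covariance matrix gives the variance of each fitted parameter;
# its square root is the one-standard-deviation error of a, b and c.
perr = np.sqrt(np.diag(pcov))
perr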
popt_bounds, pcov_bounds = curve_fit(func, xdata, ydata,
bounds=([1, 1, 0], [3., 2., 1.]))
popt_bounds
plt.plot(xdata, ydata, 'x',mew=2, label='exp. data')
plt.plot(xdata, func(xdata, a, b, c), '-', label='true function')
plt.plot(xdata, func(xdata, *popt), 'r-', label='fit')
plt.plot(xdata, func(xdata, *popt_bounds), 'g--', label='fit-with-bounds')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()
def generate_data(t, A, sigma, omega, noise=0, n_outliers=0, random_state=0):
y = A * np.exp(-sigma*t) * np.sin(omega*t)
rnd = np.random.RandomState(random_state)
error = noise * rnd.randn(t.size)
outliers = rnd.randint(0, t.size, n_outliers)
error[outliers] = error[outliers] * 35
return y + error
# Model parameters
A = 2
sigma = 0.1
omega = 0.1 * 2 * np.pi
x_true = np.array([A, sigma, omega])
noise = 0.1
t_min = 0
t_max = 30
t= np.linspace(t_min, t_max, 30)
y_exp = generate_data(t, A, sigma, omega, noise=noise, n_outliers=4)
y_true = generate_data(t, A, sigma, omega)  # noise and n_outliers default to 0, so nothing else is needed
plt.plot(t, y_exp, 'o', label='exp data')
plt.plot(t, y_true, label='true')
plt.xlabel('$t$')
plt.ylabel('$y$')
plt.legend()
def fun_res(x, t, y):
    A, sigma, omega = x  # model parameters
return (A * np.exp(-sigma * t) * np.sin(omega * t)) - y
x0 = np.ones(3)  # initial guesses for A, sigma and omega
from scipy.optimize import least_squares
res_lsq = least_squares(fun_res, x0, args=(t, y_exp))
res_lsq
res_robust = least_squares(fun_res, x0,
                           loss='soft_l1',  # L1-like loss (more robust to outliers)
                           f_scale=0.1,  # limits the influence of large residuals
args=(t, y_exp))
res_robust
y_lsq = generate_data(t, *res_lsq.x)
y_robust = generate_data(t, *res_robust.x)
plt.plot(t, y_exp, 'o', label='exp data')
plt.plot(t, y_true, label='true')
plt.plot(t, y_lsq, label='lsq')
plt.plot(t, y_robust, label='robust lsq')
plt.xlabel('$t$')
plt.ylabel('$y$')
plt.legend()
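# Illustrative comparison: RMSE of each fit against the noise-free curve.
# The robust fit should be noticeably closer when outliers are present.
print("lsq RMSE: {0:.3f}".format(np.sqrt(np.mean((y_lsq - y_true)**2))))
print("robust RMSE: {0:.3f}".format(np.sqrt(np.mean((y_robust - y_true)**2))))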
N = 100 # number of samples
T = 1./N # sample spacing
t = np.linspace(-1, N*T, N)
y = (np.sin(
2*np.pi*0.75*t*(1-t) + 2.1) +
0.1*np.sin(2*np.pi*1.25*t + 1) +
0.18*np.cos(2*np.pi*3.85*t)
)
t_exp = (t + 1)
y_exp = y + np.random.randn(len(t)) * 0.30  # noise
plt.plot(t_exp, y_exp, label='exp data', alpha=0.75)
plt.xlabel('$t$')
plt.ylabel('$y$')
plt.legend()
from scipy.signal import medfilt
n_elements = 11  # number of samples in the filter window
y_exp_filt = medfilt(y_exp, n_elements)
plt.plot(t_exp, y_exp, label='exp data', alpha=0.55)
plt.plot(t_exp, y_exp_filt, label='filt. (median)')
plt.plot(t_exp, y, '-k', label='true', )
plt.xlabel('$t$')
plt.ylabel('$y$')
plt.legend()
from scipy.signal import savgol_filter
n_elements = 11  # number of samples in the filter window
polyorder = 3
y_exp_filt = savgol_filter(y_exp, n_elements, polyorder)
plt.plot(t_exp, y_exp, label='exp data', alpha=0.55)
plt.plot(t_exp, y_exp_filt, label='filt. (Savitzky-Golay)')
plt.plot(t_exp, y, '-k', label='true', )
plt.xlabel('$t$')
plt.ylabel('$y$')
plt.legend()
from scipy import signal
filt_order = 3 # Filter order
Wn = 0.2  # Normalized cutoff frequency (fraction of the Nyquist frequency)
# Create the filter
b, a = signal.butter(filt_order, Wn, btype='low')
w, h = signal.freqz(b, a)  # digital frequency response (freqz, not the analog freqs)
# Second, apply the filter
y_exp_lowpass = signal.filtfilt(b,a, y_exp)
plt.plot(t_exp, y_exp, label='exp data', alpha=0.55)
plt.plot(t_exp, y_exp_lowpass, label='filt. (low pass)')
plt.plot(t_exp, y, '-k', label='true', )
plt.xlabel('$t$')
plt.ylabel('$y$')
plt.legend()
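# Optional: plot the frequency response computed above (w is in rad/sample, so w/pi
# is the frequency as a fraction of Nyquist).
plt.figure()
plt.plot(w / np.pi, np.abs(h))
plt.xlabel('normalized frequency (fraction of Nyquist)')
plt.ylabel('|H|')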
from scipy import signal
N = 100
t0 = 0
tf = 100
m = 1
b = 0
t = np.linspace(t0, tf, N)
y = m*t+b
y_exp = y + 100* np.random.rand(N)
plt.plot(t, y_exp, linewidth=2, label='exp data')
plt.plot(t, signal.detrend(y_exp), linewidth=2, label='drift correction')
plt.xlabel('$t$')
plt.ylabel('$y$')
plt.legend()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Training the models
Step2: XGBoost
Step3: LightGBM
Step4: Vowpal Wabbit
Step5: Lasso
Step6: Submission
Step7: XGBoost
Step8: LightGBM
Step9: Lasso
Step10: Ensemble
Step11: Lasso
Step12: LightGBM
Step13: Results of the experiments
Step14: Getting the transformed train/test sets
Step15: Fix from Sberbank
Step16: Auto ML
Step17: A first look at the data
|
<ASSISTANT_TASK:>
Python Code:
def align_to_lb_score(df):
# https://www.kaggle.com/c/sberbank-russian-housing-market/discussion/32717
df = df.copy()
trainsub = df[df.timestamp < '2015-01-01']
trainsub = trainsub[trainsub.product_type=="Investment"]
ind_1m = trainsub[trainsub.price_doc <= 1000000].index
ind_2m = trainsub[trainsub.price_doc == 2000000].index
ind_3m = trainsub[trainsub.price_doc == 3000000].index
train_index = set(df.index.copy())
for ind, gap in zip([ind_1m, ind_2m, ind_3m], [10, 3, 2]):
ind_set = set(ind)
ind_set_cut = ind.difference(set(ind[::gap]))
train_index = train_index.difference(ind_set_cut)
df = df.loc[train_index]
df["price_doc"] = np.log1p(df["price_doc"].values)
return df
def preprocess_anomaly(df):
    # drop rows with missing values in these fields from the training set; in the test set they are all filled in
df = df.dropna(subset=["preschool_education_centers_raion", "num_room",
"max_floor", "material", "kitch_sq", "floor"])
df["product_type"].fillna("Investment", inplace=True)
df["full_sq"] = map(lambda x: x if x > 10 else float("NaN"), df["full_sq"])
df["life_sq"] = map(lambda x: x if x > 5 else float("NaN"), df["life_sq"])
df["kitch_sq"] = map(lambda x: x if x > 2 else float("NaN"), df["kitch_sq"])
# superclean
# https://www.kaggle.com/keremt/very-extensive-cleaning-by-sberbank-discussions
df.ix[df[df.life_sq > df.full_sq].index, "life_sq"] = np.NaN
df.ix[df[df.kitch_sq >= df.life_sq].index, "kitch_sq"] = np.NaN
df.ix[df[df.kitch_sq == 0].index, "kitch_sq"] = np.NaN
df.ix[df[df.kitch_sq == 1].index, "kitch_sq"] = np.NaN
df.ix[df[df.num_room == 0].index, "num_room"] = np.NaN
df.ix[df[df.floor == 0].index, "floor"] = np.NaN
df.ix[df[df.max_floor == 0].index, "max_floor"] = np.NaN
df.ix[df[df.floor > df.max_floor].index, "max_floor"] = np.NaN
df.ix[df[df.state == 33].index, "state"] = np.NaN
df.ix[df[df.build_year == 20052009].index, "build_year"] = 2005
df.ix[df[df.build_year == 20].index, "build_year"] = 2000
df.ix[df[df.build_year == 215].index, "build_year"] = 2015
df.ix[df[df.build_year < 1500].index, "build_year"] = np.NaN
df.ix[df[df.build_year > 2022].index, "build_year"] = np.NaN
return df
def smoothed_likelihood(targ_mean, nrows, globalmean, alpha=10):
try:
return (targ_mean * nrows + globalmean * alpha) / (nrows + alpha)
except Exception:
return float("NaN")
def mess_y_categorial(df, nfolds=3, alpha=10):
from copy import copy
folds = np.array_split(df, nfolds)
newfolds = []
for i in range(nfolds):
fold = folds[i]
other_folds = copy(folds)
other_folds.pop(i)
other_fold = pd.concat(other_folds)
newfolds.append(mess_y_categorial_fold(fold, other_fold, alpha=10))
return pd.concat(newfolds)
def mess_y_categorial_fold(fold_raw, other_fold, cols=None, y_col="price_doc", alpha=10):
fold = fold_raw.copy()
if not cols:
cols = list(fold.select_dtypes(include=["object"]).columns)
globalmean = other_fold[y_col].mean()
for c in cols:
target_mean = other_fold[[c, y_col]].fillna("").groupby(c).mean().to_dict()[y_col]
nrows = other_fold[c].fillna("").value_counts().to_dict()
fold[c + "_sll"] = fold[c].fillna("").apply(
lambda x: smoothed_likelihood(target_mean.get(x), nrows.get(x), globalmean, alpha)
)
return fold
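# Quick sanity check (illustrative, not part of the original pipeline): with few rows the
# encoding shrinks toward the global mean, with many rows it approaches the category mean.
print(smoothed_likelihood(targ_mean=15.0, nrows=2, globalmean=10.0, alpha=10))    # ~10.8
print(smoothed_likelihood(targ_mean=15.0, nrows=500, globalmean=10.0, alpha=10))  # ~14.9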
def feature_exclude(df):
    # Drop build_year; age_of_building remains in its place
    # (overfitting is probably caused by build_year)
feats = ["build_year", "build_year_cat_le"]
with open("greedy_search.tsv") as gs:
for line in gs:
row = line.strip().split("\t")
if len(row) < 6:
continue
if row[5] == "remove":
feats.append(row[0])
df = df.drop(feats, axis=1, errors="ignore")
return df
ALPHA = 50
lbl = sk.preprocessing.LabelEncoder()
def preprocess_categorial(df):
for c in list(df.columns):
if df[c].dtype == 'object':
try:
try:
lbl.fit(list(train_raw[c].values) + list(test[c].values) + list(df[c].values))
except KeyError as e:
lbl.fit(df[c].values)
df[c + "_le"] = lbl.transform(list(df[c].values))
except ValueError as e:
print c, e
raise
df = mess_y_categorial(df, 5, alpha=ALPHA)
df = df.select_dtypes(exclude=['object'])
return df
def apply_categorial(test, train):
for c in list(test.columns):
if test[c].dtype == 'object':
try:
lbl.fit(list(train_raw[c].values) + list(test[c].values) + list(train[c].values))
except KeyError:
lbl.fit(test[c].values)
test[c + "_le"] = lbl.transform(list(test[c].values))
test = mess_y_categorial_fold(test, train, alpha=ALPHA)
test = test.select_dtypes(exclude=['object'])
return test
def apply_macro(df):
macro_cols = [
'timestamp', "balance_trade", "balance_trade_growth", "eurrub", "average_provision_of_build_contract",
"micex_rgbi_tr", "micex_cbi_tr", "deposits_rate", "mortgage_value", "mortgage_rate",
"income_per_cap", "rent_price_4+room_bus", "museum_visitis_per_100_cap", "apartment_build"
]
return df.merge(macro[macro_cols], on='timestamp', how='left')
def preprocess(df):
df = df.copy()
ecology = ["no data", "poor", "satisfactory", "good", "excellent"]
df["ecology_index"] = map(ecology.index, df["ecology"].values)
df["age_of_building"] = df["timestamp"].apply(lambda x: x.split("-")[0]).astype(int) - df["build_year"]
df["is_build_in_progress"] = df["age_of_building"].apply(lambda x: "yes" if x < 0 else "no")
bool_feats = [
"thermal_power_plant_raion",
"incineration_raion",
"oil_chemistry_raion",
"radiation_raion",
"railroad_terminal_raion",
"big_market_raion",
"nuclear_reactor_raion",
"detention_facility_raion",
"water_1line",
"big_road1_1line",
"railroad_1line",
"culture_objects_top_25"
]
for bf in bool_feats:
try:
df[bf + "_bool"] = map(lambda x: x == "yes", df[bf].values)
except:
pass
df = preprocess_anomaly(df)
df['rel_floor'] = df['floor'] / df['max_floor'].astype(float)
df['rel_kitch_sq'] = df['kitch_sq'] / df['full_sq'].astype(float)
df['rel_life_sq'] = df['life_sq'] / df['full_sq'].astype(float)
df["material_cat"] = df.material.fillna(0).astype(int).astype(str).replace("0", "")
df["state_cat"] = df.state.fillna(0).astype(int).astype(str).replace("0", "")
# df["num_room_cat"] = df.num_room.fillna(0).astype(int).astype(str).replace("0", "")
# df["build_year_cat"] = df.build_year.fillna(0).astype(int).astype(str).replace("0", "")
df["build_year_ten"] = (df.build_year / 10).round()
df["ID_metro"] = df.ID_metro.fillna(-10).astype(int).astype(str).replace("-10", "")
df["ID_railroad_station_walk"] = df.ID_railroad_station_walk.replace("", "-10").fillna(-10).astype(int).astype(str).replace("-10", "")
df["ID_railroad_station_avto"] = df.ID_railroad_station_avto.fillna(-10).astype(int).astype(str).replace("-10", "")
df["ID_big_road1"] = df.ID_big_road1.fillna(-10).astype(int).astype(str).replace("-10", "")
df["ID_big_road2"] = df.ID_big_road2.fillna(-10).astype(int).astype(str).replace("-10", "")
df["ID_bus_terminal"] = df.ID_bus_terminal.fillna(-10).astype(int).astype(str).replace("-10", "")
# # ratio of living area to full area #
df["ratio_life_sq_full_sq"] = df["life_sq"] / np.maximum(df["full_sq"].astype("float"),1)
df["ratio_life_sq_full_sq"].ix[df["ratio_life_sq_full_sq"]<0] = 0
df["ratio_life_sq_full_sq"].ix[df["ratio_life_sq_full_sq"]>1] = 1
# # ratio of kitchen area to living area #
df["ratio_kitch_sq_life_sq"] = df["kitch_sq"] / np.maximum(df["life_sq"].astype("float"),1)
df["ratio_kitch_sq_life_sq"].ix[df["ratio_kitch_sq_life_sq"]<0] = 0
df["ratio_kitch_sq_life_sq"].ix[df["ratio_kitch_sq_life_sq"]>1] = 1
# # ratio of kitchen area to full area #
df["ratio_kitch_sq_full_sq"] = df["kitch_sq"] / np.maximum(df["full_sq"].astype("float"),1)
df["ratio_kitch_sq_full_sq"].ix[df["ratio_kitch_sq_full_sq"]<0] = 0
df["ratio_kitch_sq_full_sq"].ix[df["ratio_kitch_sq_full_sq"]>1] = 1
df = df.drop(["timestamp"], axis=1, errors="ignore")
return df
# train_raw = pd.read_csv("data/train.csv")
train_raw = pd.read_csv("data/train_without_noise.csv", index_col="id")
test = pd.read_csv("data/test.csv", index_col="id")
macro = pd.read_csv("data/macro.csv")
train_pr = align_to_lb_score(train_raw)
train_pr = preprocess(train_pr)
train_pr = preprocess_categorial(train_pr)
train = feature_exclude(train_pr)
train.head()
important_feats = ["full_sq", "life_sq", "kitch_sq", "max_floor"]
# important_feats = ["full_sq", "life_sq"]
# Train models to fill NA in the important fields; the order matters
feats_to_remove = ["price_doc", "rel_kitch_sq", "rel_life_sq", "id", "build_year_cat_le",
"age_of_building", "rel_floor", "num_room_cat_le", "build_year_ten",
"ratio_life_sq_full_sq", "ratio_kitch_sq_full_sq", "ratio_kitch_sq_life_sq"]
%%cache na_models.pkl na_models
na_models = {}
xgb_params = {
'max_depth': 5,
'n_estimators': 200,
'learning_rate': 0.05,
'objective': 'reg:linear',
'eval_metric': 'rmse',
'silent': 1
}
for f in important_feats:
t = train[train[f].notnull()]
fX = t.drop([f] + feats_to_remove, axis=1, errors="ignore")
fy = t[f].values
dtrain_all = xgb.DMatrix(fX.values, fy, feature_names=fX.columns)
model = xgb.train(xgb_params, dtrain_all, num_boost_round=400, verbose_eval=40)
na_models[f] = model
print f
print feat_imp(model).head(10)
def fill_na_xgb(df_orig):
df = df_orig.copy()
for f in important_feats:
X_pr = df[df[f].isnull()].drop([f] + feats_to_remove, axis=1, errors="ignore")
if not len(X_pr):
continue
X_pr = xgb.DMatrix(X_pr.values, feature_names=X_pr.columns)
df.loc[df[f].isnull(), f] = na_models[f].predict(X_pr).round()
df[f] = df[f].astype(int)
return df
train = fill_na_xgb(train)
from sklearn.model_selection import train_test_split
X = train.drop(["price_doc"], axis=1)
y = train["price_doc"].values
bound = int(len(X) * 0.7)
X_train, X_val, y_train, y_val = X[:bound].copy(), X[bound+1:].copy(), y[:bound].copy(), y[bound+1:].copy()
dtrain_all = xgb.DMatrix(X.values, y, feature_names=X.columns)
dtrain = xgb.DMatrix(X_train.values, y_train, feature_names=X.columns)
dval = xgb.DMatrix(X_val.values, y_val, feature_names=X.columns)
xgb_params = {
'eta': 0.01,
'max_depth': 5,
'subsample': 0.7,
'colsample_bytree': 0.7,
'objective': 'reg:linear',
'eval_metric': 'rmse',
'silent': 1
}
# Uncomment to tune XGB `num_boost_rounds`
model = xgb.train(xgb_params, dtrain, num_boost_round=4000, evals=[(dval, 'val')],
early_stopping_rounds=20, verbose_eval=40)
num_boost_round = model.best_iteration
cv_output = xgb.cv(xgb_params, dtrain_all, num_boost_round=4000,
verbose_eval=100, early_stopping_rounds=100, nfold=5)
xgbmodel = xgb.train(xgb_params, dtrain, num_boost_round=num_boost_round, verbose_eval=40)
y_pred = xgbmodel.predict(dtrain)
print "predict-train:", rmse(y_pred, y_train)
submdf = pd.DataFrame({"id": X_train.index, "price_doc": unlog(y_pred)})
submdf.to_csv("xgb_train_preds.csv", index=False)
y_pred = xgbmodel.predict(dval)
print "predict-val:", rmse(y_pred, y_val)
submdf = pd.DataFrame({"id": X_val.index, "price_doc": unlog(y_pred)})
submdf.to_csv("xgb_val_preds.csv", index=False)
feat_imp(model).head(10)
RS = 20170501
np.random.seed(RS)
FACT_ROUNDS=0
ROUNDS = 2000
lgb_params = {
'objective': 'regression',
'metric': 'rmse',
'boosting': 'gbdt',
'learning_rate': 0.01,
# 'verbose': 1,
# 'num_leaves': 2 ** 5,
'bagging_fraction': 0.95,
'bagging_freq': 1,
'bagging_seed': RS,
# 'feature_fraction': 0.7,
# 'feature_fraction_seed': RS,
'subsample': 0.7,
'colsample_bytree': 0.7,
# 'max_bin': 100,
'max_depth': 10,
'num_rounds': ROUNDS
}
lgb_train_all = lgb.Dataset(X, y)
lgb_train = lgb.Dataset(X_train, y_train)
cvres = pd.DataFrame(lgb.cv(params=lgb_params, train_set=lgb_train, nfold=5, shuffle=False,
early_stopping_rounds=100, verbose_eval=100, num_boost_round=ROUNDS))
FACT_ROUNDS = len(cvres)
lgbmodel = lgb.train(lgb_params, lgb_train, num_boost_round=FACT_ROUNDS or ROUNDS)
pd.DataFrame({
"name": lgbmodel.feature_name(),
"imp": lgbmodel.feature_importance()}
).sort_values("imp", ascending=False).head(20)
y_pred = lgbmodel.predict(X_train)
print "predict-train:", rmse(y_pred, y_train)
submdf = pd.DataFrame({"id": X_train.index, "price_doc": unlog(y_pred)})
submdf.to_csv("lgb_train_preds.csv", index=False)
y_pred = lgbmodel.predict(X_val)
print "predict-val:", rmse(y_pred, y_val)
submdf = pd.DataFrame({"id": X_val.index, "price_doc": unlog(y_pred)})
submdf.to_csv("lgb_val_preds.csv", index=False)
from vowpalwabbit.sklearn_vw import VWRegressor
from sklearn.base import TransformerMixin
from scipy.stats import skew
class SkewLogAlign(TransformerMixin):
skewed_feats = None
skew_treshold = 0.75
def __init__(self, skew_treshold=0.75):
self.skew_treshold = skew_treshold
def fit(self, X, y=None):
#log transform skewed numeric features:
df = pd.DataFrame(X, dtype=np.float64)
skewed_feats = df.apply(lambda x: skew(x.dropna())) #compute skewness
skewed_feats = skewed_feats[skewed_feats > 0.75]
self.skewed_feats = skewed_feats.index
return self
def transform(self, X):
df = pd.DataFrame(X, dtype=np.float64)
df[self.skewed_feats] = np.log1p(df[self.skewed_feats].values)
return df.values
import sys
class FillNaWithConstant(TransformerMixin):
nan_value = 0
inf_value = None
minf_value = None
def __init__(self, nan_value=0, inf_value=sys.maxint - 1, minf_value=-sys.maxint - 1):
self.nan_value = nan_value
self.inf_value = inf_value
self.minf_value = minf_value
def fit(self, X, y=None):
return self
def transform(self, X):
df = pd.DataFrame(X).fillna(self.nan_value)
df = df.replace(np.inf, self.inf_value)
df = df.replace(-np.inf, self.minf_value)
return df.values
from sklearn.pipeline import Pipeline
lasso_feat_pipeline = Pipeline([
("skew", SkewLogAlign()),
("fillna", FillNaWithConstant()),
])
from sklearn.linear_model import LassoCV
LASSO_alphas = [1, 0.1, 0.001, 0.0005]
lasso_cv_model = LassoCV(alphas = [1, 0.1, 0.001, 0.0005], cv=5, max_iter=50000, verbose=True, n_jobs=-1)
lasso_cv_model.fit(lasso_feat_pipeline.transform(X.values), y)
print "alpha:", lasso_cv_model.alpha_
print "MSE:"
print zip(LASSO_alphas, np.sqrt(lasso_cv_model.mse_path_))
print pd.Series(lasso_cv_model.coef_, index=X.columns).sort_values(ascending=False)[:20]
from sklearn.linear_model import Lasso
best_alpha = 0.001
lasso_model = Pipeline([
("feat", lasso_feat_pipeline),
("clf", Lasso(alpha=best_alpha, max_iter=50000))
])
lasso_model.fit(X_train.values, y_train)
y_pred = lasso_model.predict(X_train.values)
print "predict-train:", rmse(y_pred, y_train)
submdf = pd.DataFrame({"id": X_train.index, "price_doc": unlog(y_pred)})
submdf.to_csv("lasso_train_preds.csv", index=False)
y_pred = lasso_model.predict(X_val.values)
print "predict-validation:", rmse(y_pred, y_val)
submdf = pd.DataFrame({"id": X_val.index, "price_doc": unlog(y_pred)})
submdf.to_csv("lasso_val_preds.csv", index=False)
test_pr = preprocess(test)
train_pr = preprocess(train_raw)
test_pr = apply_categorial(test_pr, train_pr)
test_pr = feature_exclude(test_pr)
test_pr = fill_na_xgb(test_pr)
# XGB
dtest = xgb.DMatrix(test_pr.values, feature_names=test_pr.columns)
y_pred = xgbmodel.predict(dtest)
submdf = pd.DataFrame({"id": test_pr.index, "price_doc": unlog(y_pred)})
submdf.to_csv("xgb_sub.csv", index=False)
!head xgb_sub.csv
y_pred = lgbmodel.predict(test_pr)
submdf = pd.DataFrame({"id": test_pr.index, "price_doc": unlog(y_pred)})
submdf.to_csv("lgb_sub.csv", index=False)
!head lgb_sub.csv
y_pred = lasso_model.predict(test_pr.values)
submdf = pd.DataFrame({"id": test_pr.index, "price_doc": unlog(y_pred)})
submdf.to_csv("lasso_sub.csv", index=False)
!head lasso_sub.csv
models = ["lgb", "xgb"]
etrain = pd.DataFrame(index=X_val.index)
etrain = etrain.join(train[["price_doc"]])
for i, p in enumerate(models):
pred = pd.read_csv("%s_val_preds.csv" % p, index_col="id", names=["id", "p_%s" % i], header=0)
etrain = etrain.join(pred)
eX = etrain.drop("price_doc", axis=1)
ey = etrain["price_doc"].values
etrain.head()
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LassoCV
emodel = Pipeline([
("skew", SkewLogAlign()),
("fillna", FillNaWithConstant()),
("clf", LassoCV(alphas=None, cv=5, max_iter=50000, verbose=True, n_jobs=-1))
])
emodel.fit(eX.values, ey)
lmodel = emodel.named_steps["clf"]
print "alpha:", lmodel.alpha_
print "MSE:"
print np.sqrt(lmodel.mse_path_)
print pd.Series(lmodel.coef_, index=eX.columns).sort_values(ascending=False)[:20]
eFACT_ROUNDS = 0
elgb_train = lgb.Dataset(eX, ey)
cvres = pd.DataFrame(lgb.cv(params=lgb_params, train_set=elgb_train, nfold=7, shuffle=False,
early_stopping_rounds=100, verbose_eval=100, num_boost_round=ROUNDS))
eFACT_ROUNDS = len(cvres)
emodel = lgb.train(lgb_params, elgb_train, num_boost_round=eFACT_ROUNDS or ROUNDS)
etest = test_pr[[]].copy()
for i, p in enumerate(models):
pred = pd.read_csv("%s_sub.csv" % p, index_col="id", names=["id", "p_%s" % i], header=0)
etest = etest.join(pred)
y_pred = emodel.predict(etest.values)
df = pd.DataFrame({"id": etest.index, "price_doc": unlog(y_pred)})
df.to_csv("ensemble_sub.csv", index=False)
!head ensemble_sub.csv
from tqdm import tqdm
def get_best_score(train):
xgb_params = {
'max_depth': 5,
'n_estimators': 200,
'learning_rate': 0.01,
'objective': 'reg:linear',
'eval_metric': 'rmse',
'silent': 1
}
cvres = xgb.cv(xgb_params, train, num_boost_round=4000, early_stopping_rounds=40)
return cvres["test-rmse-mean"].min(), cvres["test-rmse-mean"].argmin()
def df2DMatrix(df):
return xgb.DMatrix(data=df.drop("price_doc", axis=1).values, label=df["price_doc"].values)
def greedy_remove_features(df, feature_importances):
train = df
with open("greedy_search.tsv", "a") as f:
best_score, iterno = get_best_score(df2DMatrix(df))
f.write("\t".join(["INITIAL", str(best_score), str(iterno)]) + "\n")
to_analyze = sorted(feature_importances.items(), key=lambda x: x[1])
for feat, feat_importance in tqdm(to_analyze):
f.flush()
candidate_train = train.drop(feat, axis=1)
cand_best_score, iterno = get_best_score(df2DMatrix(candidate_train))
if cand_best_score > best_score:
# Score got worse without the feature; keep it (skip removal)
f.write("\t".join([feat, str(cand_best_score), str(best_score), str(feat_importance), str(iterno), "skip"]) + "\n")
f.flush()
continue
f.write("\t".join([feat, str(cand_best_score), str(best_score), str(feat_importance), str(iterno), "remove"]) + "\n")
best_score = cand_best_score
train = candidate_train
feature_importances = imp_features.set_index("feature").to_dict()["importance"]
train_gs = train
with open("greedy_search.tsv") as gs:
for line in gs:
row = line.strip().split("\t")
if len(row) < 6:
continue
if row[5] == "remove":
try:
train_gs = train_gs.drop(row[0], axis=1)
except ValueError:
pass
print "drop", row[0]
feature_importances.pop(row[0], None)
greedy_remove_features(train_gs, feature_importances)
# train_raw = pd.read_csv("data/train.csv")
train_raw = pd.read_csv("data/train_without_noise.csv")
test = pd.read_csv("data/test.csv")
macro = pd.read_csv("data/macro.csv")
train_raw.head()
train_new_pr = feature_exclude(preprocess_categorial(preprocess(train_raw, dropid=False)))
test_new_pr = feature_exclude(preprocess_categorial(preprocess(test, dropid=False)))
# Run fillna first to get model-based fillers for the NA values
filled_train = fill_na_xgb(train_new_pr)
filled_test = fill_na_xgb(test_new_pr)
filled_train = filled_train.set_index("id")
filled_test = filled_test.set_index("id")
# train_raw = pd.read_csv("data/train.csv")
train_raw = pd.read_csv("data/train_without_noise.csv")
test = pd.read_csv("data/test.csv")
macro = pd.read_csv("data/macro.csv")
train_raw.head()
train_new = preprocess_anomaly(train_raw)
test_new = preprocess_anomaly(test)
train_new = train_new.set_index("id")
test_new = test_new.set_index("id")
train_new = train_new.join(filled_train[important_feats], rsuffix="_filled")
test_new = test_new.join(filled_test[important_feats], rsuffix="_filled")
for impf in important_feats:
train_new[impf] = train_new[impf].fillna(train_new["%s_filled" % impf])
train_new = train_new.drop(["%s_filled" % impf], axis=1)
test_new[impf] = test_new[impf].fillna(test_new["%s_filled" % impf])
test_new = test_new.drop(["%s_filled" % impf], axis=1)
# train_new = feature_exclude(train_new)
# test_new = feature_exclude(test_new)
train_new.to_csv("data/train_cleaned.csv", encoding="utf_8")
test_new.to_csv("data/test_cleaned.csv", encoding="utf_8")
# train_raw = pd.read_csv("data/train.csv")
train_raw = pd.read_csv("data/train_without_noise.csv")
test = pd.read_csv("data/test.csv")
macro = pd.read_csv("data/macro.csv")
train_raw.head()
def update(source, patch):
dtypes = source.dtypes
source.update(patch, overwrite=True)
for c, t in dtypes.iteritems():
source[c] = source[c].astype(t)
return source
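# Minimal sketch of why the dtypes are re-applied (hypothetical frames, not from the
# competition data): DataFrame.update() can upcast integer columns to float, so
# update() restores the original dtypes afterwards, e.g.
# src = pd.DataFrame({"a": [1, 2]}); fix = pd.DataFrame({"a": [5]}, index=[0])
# update(src, fix)["a"].dtype  # stays int64 instead of becoming float64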
train_raw.set_index("id")
test.set_index("id")
fx = pd.read_excel('data/BAD_ADDRESS_FIX.xlsx').drop_duplicates('id').set_index('id')
train_raw = update(train_raw, fx)
test = update(test, fx)
train_raw.reset_index()
test.reset_index()
print('Fix in train: ', train_raw.index.intersection(fx.index).shape[0])
print('Fix in test : ', test.index.intersection(fx.index).shape[0])
train_raw.to_csv("data/train_fix.csv", index=False, encoding="utf-8")
test.to_csv("data/test_fix.csv", index=False, encoding="utf-8")
from auto_ml import Predictor
# train_raw = pd.read_csv("data/train.csv")
train_raw = pd.read_csv("data/train_without_noise.csv")
test = pd.read_csv("data/test.csv")
macro = pd.read_csv("data/macro.csv")
train_raw.head()
train_pr = preprocess(train_raw)
train_pr = preprocess_categorial(train_pr)
train = feature_exclude(train_pr)
# Tell auto_ml which column is 'output'
# Also note columns that aren't purely numerical
# Examples include ['nlp', 'date', 'categorical', 'ignore']
column_descriptions = {
'price_doc': 'output'
}
ml_predictor = Predictor(type_of_estimator='regressor', column_descriptions=column_descriptions)
ml_predictor.train(train)
file_name = ml_predictor.save()
print file_name
# Score the model on test data
# (df_test / MEDV below follow the auto_ml docs example; neither is defined in this notebook)
test_score = ml_predictor.score(df_test, df_test.MEDV)
#Checking for missing data
NAs = pd.concat([
train.isnull().sum(),
test_pr.isnull().sum()
], axis=1, keys=['Train', 'Test'])
NAs[NAs.sum(axis=1) > 0]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this tutorial we focus on plotting functions of
Step2: Notice that evoked is a list of evoked instances. You can read only one
Step3: Let's start with a simple one. We plot event related potentials / fields
Step4: All plotting functions of MNE-python returns a handle to the figure instance.
Step5: Now let's make it a bit fancier and only use MEG channels. Many of the
Step6: Notice the legend on the left. The colors would suggest that there may be two
Step7: By default the topomaps are drawn from evenly spread out points of time over
Step8: Or we can automatically select the peaks.
Step9: You can take a look at the documentation of
Step10: Notice that we created five axes, but had only four categories. The fifth
Step11: We can also plot the activations as images. The time runs along the x-axis
Step12: Finally we plot the sensor data as a topographical view. In the simple case
Step13: Visualizing field lines in 3D
|
<ASSISTANT_TASK:>
Python Code:
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
data_path = mne.datasets.sample.data_path()
fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')
evoked = mne.read_evokeds(fname, baseline=(None, 0), proj=True)
print(evoked)
evoked_l_aud = evoked[0]
evoked_r_aud = evoked[1]
evoked_l_vis = evoked[2]
evoked_r_vis = evoked[3]
fig = evoked_l_aud.plot(exclude=())
fig.tight_layout()
picks = mne.pick_types(evoked_l_aud.info, meg=True, eeg=False, eog=False)
evoked_l_aud.plot(spatial_colors=True, gfp=True, picks=picks)
evoked_l_aud.plot_topomap()
times = np.arange(0.05, 0.151, 0.05)
evoked_r_aud.plot_topomap(times=times, ch_type='mag')
evoked_r_aud.plot_topomap(times='peaks', ch_type='mag')
fig, ax = plt.subplots(1, 5)
evoked_l_aud.plot_topomap(times=0.1, axes=ax[0], show=False)
evoked_r_aud.plot_topomap(times=0.1, axes=ax[1], show=False)
evoked_l_vis.plot_topomap(times=0.1, axes=ax[2], show=False)
evoked_r_vis.plot_topomap(times=0.1, axes=ax[3], show=True)
ts_args = dict(gfp=True)
topomap_args = dict(sensors=False)
evoked_r_aud.plot_joint(title='right auditory', times=[.07, .105],
ts_args=ts_args, topomap_args=topomap_args)
evoked_r_aud.plot_image(picks=picks)
title = 'MNE sample data (condition : %s)'
evoked_l_aud.plot_topo(title=title % evoked_l_aud.comment)
colors = 'yellow', 'green', 'red', 'blue'
mne.viz.plot_evoked_topo(evoked, color=colors,
title=title % 'Left/Right Auditory/Visual')
subjects_dir = data_path + '/subjects'
trans_fname = data_path + '/MEG/sample/sample_audvis_raw-trans.fif'
maps = mne.make_field_map(evoked_l_aud, trans=trans_fname, subject='sample',
subjects_dir=subjects_dir, n_jobs=1)
# explore several points in time
field_map = evoked_l_aud.plot_field(maps, time=.1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To access the 3D structure maps use
Step2: Individual voxels in this 3D volumetric data cube can be accessed as follows
Step3: where i,j and k index voxel positions along the x,y and z axes respectively. All indices run from 0 to 255.
Step4: Units are Mpc/h.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
diva=np.load('borg_sdss_diva.npz')
#3D probabilistic maps for DIVA structures
voids=diva['voids']
sheets=diva['sheets']
filaments=diva['filaments']
clusters=diva['clusters']
k=10;j=127;i=243
voids_ijk=voids[k,j,i]
#Minimum and maximum position along the x-axis in Mpc/h
xmin=diva['ranges'][0]
xmax=diva['ranges'][1]
#Minimum and maximum position along the y-axis in Mpc/h
ymin=diva['ranges'][2]
ymax=diva['ranges'][3]
#Minimum and maximum position along the z-axis in Mpc/h
zmin=diva['ranges'][4]
zmax=diva['ranges'][5]
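# Derived note (not stored in the file): each axis covers 256 voxels, so the voxel
# side length is roughly (xmax - xmin) / 256 Mpc/h, and similarly along y and z.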
from matplotlib import pyplot as plt
%matplotlib inline
f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, sharex='row', sharey='col', figsize=(12,12))
ax1.imshow(voids[:,:,128], origin='lower', extent=[ymin,ymax,zmin,zmax], vmin=0., vmax=1., cmap="viridis")
ax1.set_title("voids")
ax2.imshow(sheets[:,:,128], origin='lower', extent=[ymin,ymax,zmin,zmax], vmin=0., vmax=1., cmap="viridis")
ax2.set_title("sheets")
ax3.imshow(filaments[:,:,128], origin='lower', extent=[ymin,ymax,zmin,zmax], vmin=0., vmax=1., cmap="viridis")
ax3.set_title("filaments")
ax4.imshow(clusters[:,:,128], origin='lower', extent=[ymin,ymax,zmin,zmax], vmin=0., vmax=1., cmap="viridis")
ax4.set_title("clusters")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plot simple 2d data
Step2: Plot blobs dataset
Step3: Oral presentation
Step4: $H_0/H_1$ distributions
Step5: Test power highlight
Step6: Type-1 error
Step7: Noncentral chi-square / power
Step9: Visualize mean embeddings
Step10: Densities in the frequency domain
Step13: ME test
Step14: ME test vs MMD witness function
Step16: In the above figure, $\sigma_n^2 = \mathbf{S}_n$ because this is a one-dimensional example. It is easily seen from the independence of $X$ and $Y$ that $\sigma_n^2 = \sigma^2_x + \sigma^2_y$, where $\sigma^2_x(v) = \frac{1}{n}\sum_{i=1}^n \left( k(x_i, v) - \frac{1}{n} \sum_{j=1}^n k(x_j, v)\right)^2$ and $\sigma^2_y(v) = \frac{1}{n}\sum_{i=1}^n \left( k(y_i, v) - \frac{1}{n} \sum_{j=1}^n k(y_j, v)\right)^2$.
Step17: Variances for the two-uniform problem
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
#%config InlineBackend.figure_format = 'svg'
#%config InlineBackend.figure_format = 'pdf'
import freqopttest.util as util
import freqopttest.data as data
import freqopttest.ex.exglobal as exglo
import freqopttest.kernel as kernel
import freqopttest.tst as tst
import freqopttest.glo as glo
import freqopttest.plot as plot
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
import sys
ss = data.SSGaussVarDiff(d=2)
sam = ss.sample(n=200, seed=7)
x, y = sam.xy()
plt.plot(x[:, 0], x[:, 1], 'ob', label='$\mathsf{X}$', alpha=0.9, markeredgecolor='b')
plt.plot(y[:, 0], y[:, 1], 'or', label='$\mathsf{Y}$', alpha=0.7, markeredgecolor='r')
plt.gca().get_xaxis().set_visible(False)
plt.gca().get_yaxis().set_visible(False)
plt.box(False)
plt.legend(loc='best')
plt.savefig('gvd_demo.pdf')
# font options
font = {
#'family' : 'normal',
#'weight' : 'bold',
'size' : 32
}
plt.rc('font', **font)
plt.rc('lines', linewidth=3)
matplotlib.rcParams['pdf.fonttype'] = 42
matplotlib.rcParams['ps.fonttype'] = 42
matplotlib.rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
# matplotlib.use('cairo')
matplotlib.rc('text', usetex=True)
matplotlib.rcParams['image.cmap'] = 'jet'
matplotlib.rcParams['text.usetex'] = True
plt.rc('font', **font)
# params = {'legend.fontsize': 'large',
# #'figure.figsize': (15, 5),
# 'axes.labelsize': 'x-large',
# 'axes.titlesize':'x-large',
# 'xtick.labelsize':'x-large',
# 'ytick.labelsize':'x-large'}
# plt.rcParams.update(params)
import os
def oral_save_path(rel_path):
return os.path.join('..', 'text', 'nips2016', 'oral_img', rel_path)
def prefix_path(rel_path):
return oral_save_path(rel_path)
ss_blobs = data.SSBlobs()
sam = ss_blobs.sample(n=2000, seed=2)
bx, by = sam.xy()
plt.figure()
plt.plot(by[:, 0], by[:, 1], 'ob', markersize=3, markeredgecolor='b')
plt.xlim([-12, 12])
plt.ylim([-10, 10])
plt.title('Blobs data. Sample from P.')
plt.savefig('blobs_p.pdf')
plt.figure()
plt.plot(bx[:, 0], bx[:, 1], 'or', markersize=3, markeredgecolor='r')
plt.xlim([-12, 12])
plt.ylim([-10, 10])
plt.title('Blobs data. Sample from Q.')
plt.savefig('blobs_q.pdf')
# # font options
# font = {
# #'family' : 'normal',
# #'weight' : 'bold',
# 'size' : 18
# }
# plt.rc('font', **font)
# plt.rc('lines', linewidth=3)
# matplotlib.rcParams['pdf.fonttype'] = 42
# matplotlib.rcParams['ps.fonttype'] = 42
def plot_data_2d(x, y, title, dest_fname):
plt.plot(x[:, 0], x[:, 1], 'ob', label='$\mathsf{X}$', alpha=0.8, markeredgecolor='b')
plt.plot(y[:, 0], y[:, 1], 'or', label='$\mathsf{Y}$', alpha=0.7, markeredgecolor='r')
plt.gca().get_xaxis().set_visible(False)
plt.gca().get_yaxis().set_visible(False)
plt.box(False)
plt.legend()
plt.title(title)
plt.savefig(dest_fname, bbox_inches='tight')
with util.NumpySeedContext(seed=6):
n = 200
x = np.random.rand(n, 2)
y = np.random.rand(n, 2)
plot_data_2d(x, y, '$P=Q$', oral_save_path('uniform_2d_data.pdf'))
with util.NumpySeedContext(seed=9):
n = 400
x = np.random.randn(n, 2)
y = np.random.randn(n, 2).dot(np.diag([1, 4]))
plot_data_2d(x, y, r'$P \neq Q$', oral_save_path('h1_true_data.pdf'))
J = 5
alpha = 0.03
# plot the null distribution
h0_xmin = stats.chi2.ppf(0.0001, J)
h0_xmax = stats.chi2.ppf(0.9999, J)
domain = np.linspace(h0_xmin, h0_xmax, 300)
# noncentrality parameter
nc = 20
h1_dom = np.linspace(stats.ncx2.ppf(0.0001, J, nc), stats.ncx2.ppf(0.9999, J, nc), 200)
dom = np.sort(np.hstack((domain, h1_dom)))
plt.figure(figsize=(10,4))
plt.plot(dom, stats.chi2.pdf(dom, J), 'b-',
label='$\mathbb{P}_{H_0}(\hat{\lambda}_n)$'
# label='$\chi^2(J)$'
)
# rejection threshold
thresh = stats.chi2.isf(alpha, J)
plt.plot([thresh, thresh], [0, stats.chi2.pdf(J, J)/2], '*g-', label=r'$T_\alpha$')
# plot H1 distribution. Say a noncentral Chi2
# plt.plot(dom, stats.ncx2.pdf(dom, J, nc), 'r-', label='$\mathbb{P}_{H_1}(\hat{\lambda}_n)$')
# actual test statistic
stat = 36
#plt.plot([stat, stat], [0, stats.chi2.pdf(J, J)/3], '*m-', label='$\hat{\lambda}_n$', linewidth=2)
# p-value
pval_x = np.linspace(stat, h0_xmax, 400)
pval_y = stats.chi2.pdf(pval_x, J)
#plt.fill_between(pval_x, np.zeros(len(pval_x)), pval_y, color='gray', alpha='0.5', label='a')
p1 = plt.Rectangle((0, 0), 1, 1, fc='gray')
#shade_leg = plt.legend([p1], ['p-val.'], loc='upper right',
# bbox_to_anchor=(0.96, 0.53), frameon=False)
#plt.gca().add_artist(shade_leg)
plt.legend(loc='best', frameon=True)
plt.box(False)
plt.axhline(0, color='black')
plt.gca().get_yaxis().set_visible(False)
plt.gca().xaxis.set_ticks_position('bottom')
plt.xlabel(r'$\hat{\lambda}_n$')
#plt.savefig('h0_dist.pdf', bbox_inches='tight')
plt.savefig('h0_dist_thresh.pdf', bbox_inches='tight')
# plt.savefig('h0_h1_dists.pdf', bbox_inches='tight')
plt.figure(figsize=(10,4))
plt.plot(dom, stats.chi2.pdf(dom, J), 'b-', label=r'$\chi^2(J)$')
plt.plot([thresh, thresh], [0, stats.chi2.pdf(J, J)/2], '*g-', label=r'$T_\alpha$')
plt.plot(dom, stats.ncx2.pdf(dom, J, nc), 'r-', label='$\mathbb{P}_{H_1}(\hat{\lambda}_n)$')
#plt.stem([stat], [stats.chi2.pdf(J, J)/3], 'om-', label='$\hat{\lambda}$')
# test power
pow_x = np.linspace(thresh, max(h1_dom), 400)
pow_y = stats.ncx2.pdf(pow_x, J, nc)
plt.fill_between(pow_x, np.zeros(len(pow_x)), pow_y, color='magenta', alpha=0.2)
p2 = plt.Rectangle((0, 0), 1, 1, fc='magenta', alpha=0.2)
shade_pow = plt.legend([p2], ['power'], loc='upper right',
bbox_to_anchor=(0.97, 0.53), frameon=False)
#plt.gca().add_artist(shade_pow)
plt.box(False)
plt.legend(frameon=True)
plt.gca().get_yaxis().set_visible(False)
#plt.gca().xaxis.set_visible(True)
plt.axhline(0, color='black')
plt.gca().xaxis.set_ticks_position('bottom')
plt.xlabel(r'$\hat{\lambda}_n$')
plt.savefig('test_power_demo.pdf', bbox_inches='tight')
#plt.savefig('test_power_demo2.pdf', bbox_inches='tight')
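# Quick numeric check (added, not part of the original figure code): the shaded
# "power" region is P(ncx2(J, nc) > T_alpha), i.e. the survival function of the
# noncentral chi-square at the rejection threshold.
print('power ~ %.3f' % stats.ncx2.sf(thresh, J, nc))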
plt.figure(figsize=(10,4))
plt.plot(domain, stats.chi2.pdf(domain, J), 'b-', label=r'$\mathbb{P}_{H_0}(\hat{\lambda}_n)$')
plt.plot(h1_dom, stats.ncx2.pdf(h1_dom, J, nc), 'r-', label=r'$\mathbb{P}_{H_1}(\hat{\lambda}_n)$')
plt.plot([thresh, thresh], [0, stats.chi2.pdf(J, J)/2], '*g-', label=r'$T_\alpha$', linewidth=2)
#plt.stem([stat], [stats.chi2.pdf(J, J)/3], 'om-', label='$\hat{\lambda}$')
# type-1 error
t1_x = np.linspace(thresh, max(domain), 400)
t1_y = stats.chi2.pdf(t1_x, J)
plt.fill_between(t1_x, np.zeros(len(t1_x)), t1_y, color='brown', alpha=0.4)
p2 = plt.Rectangle((0, 0), 1, 1, fc='brown', alpha=0.4)
shade_pow = plt.legend([p2], ['type-I'], loc='upper right',
bbox_to_anchor=(0.96, 0.53), frameon=False)
plt.box(False)
plt.gca().add_artist(shade_pow)
plt.legend(loc='best', frameon=False)
plt.gca().get_yaxis().set_visible(False)
plt.xlabel(r'$\hat{\lambda}_n$')
plt.savefig('type1_error_demo.pdf')
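# Sanity check (added): by construction of T_alpha, the shaded type-I area under the
# null density equals alpha.
print('type-I error ~ %.3f' % stats.chi2.sf(thresh, J))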
J = 5
dom = np.linspace(1e-5, 95, 500)
thresh = 27
ncs = [10, 30, 50]
pow_colors = ['blue', 'green', 'red']
plt.figure(figsize=(10,4))
for i, nc in enumerate(ncs):
plt.plot(dom, stats.ncx2.pdf(dom, J, nc), label=r'$\chi^2(J, \, %.2g)$'%( nc),
linewidth=2)
# plot the test powers
pow_dom = np.linspace(thresh, max(dom), 500)
plt.fill_between(pow_dom, np.zeros(len(dom)),
stats.ncx2.pdf(pow_dom, J, nc), color=pow_colors[i], alpha=0.2)
plt.plot([thresh, thresh], [0, stats.ncx2.pdf(ncs[0]+J, J, ncs[0])/3],
'*m-', label='$T_\\alpha$')
plt.legend(loc='best', frameon=False)
plt.title('$J = %d$'%J)
plt.savefig('ncx2_pow_inc.pdf')
sigma2 = 1
def kgauss_me(Xte, X):
Inputs are 2d arrays
kg = kernel.KGauss(sigma2=sigma2)
K_dom_x = kg.eval(Xte, X)
me_xte = np.mean(K_dom_x, 1)
return me_xte
X = np.array([[2, 3.2, 4]]).T
Y = np.array([[4.5, 4.9, 6]]).T
xy = np.vstack((X, Y))
dom = np.linspace(min(xy)-3*sigma2, max(xy)+3*sigma2, 300)
dom_mat = dom[:, np.newaxis]
me_x = kgauss_me(dom_mat, X)
me_y = kgauss_me(dom_mat, Y)
me_diff = 1*(me_x - me_y)
# plot mean embedding as a function
# data X
markersize = 9
plt.figure(figsize=(10, 5))
plt.plot(X[:, 0], np.zeros(X.shape[0]) + 5e-3, 'ob',
markersize=markersize
# , label='$\mathsf{X}$'
)
# mu_x
plt.plot(dom, me_x, 'b-', label='$\hat{\mu}_P(\mathbf{v})$')
# data Y
plt.plot(Y[:, 0], np.zeros(Y.shape[0]) + 5e-3, 'or',
markersize=markersize
# , label='$\mathsf{Y}$'
)
# mu_y
plt.plot(dom, me_y, 'r-', label=r'$\hat{\mu}_Q(\mathbf{v})$')
# diff
#plt.plot(dom, me_diff, '-g', label='$\hat{\mu}_P(\mathbf{v}) - \hat{\mu}_Q(\mathbf{v})$')
# diff^2
maxheight = np.max(np.maximum(me_x, me_y))
max_diff2 = np.max(me_diff**2)
plt.plot(dom, me_diff**2/max_diff2*maxheight*1.2, '-g',
label='$(\hat{\mu}_P(\mathbf{v}) - \hat{\mu}_Q(\mathbf{v}))^2$')
self_me_x = kgauss_me(X, X)
for i in range(X.shape[0]):
plt.plot([X[i, 0], X[i, 0]], [0, self_me_x[i]], '--', color='blue', alpha=0.4)
self_me_y = kgauss_me(Y, Y)
for i in range(Y.shape[0]):
plt.plot([Y[i, 0], Y[i, 0]], [0, self_me_y[i]], '--', color='red', alpha=0.4)
plt.xlabel('$\mathbf{v}$')
#plt.title('$\mathsf{X}$ = blue points. $\mathsf{Y}$ = red points.')
plt.legend(loc='best')
plt.gca().get_xaxis().set_visible(False)
plt.gca().get_yaxis().set_visible(False)
plt.box(False)
#plt.title('ME Test')
plt.savefig('mean_embeddings_diff.pdf')
import scipy as sp
import scipy.signal as sig
def tri_fun(x, w=1.0):
return sig.bspline(x/w, 1)
def smooth_ker(x, w=1.0):
return stats.norm.pdf(x, loc=0, scale=w)
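# Note (added): convolving these toy "characteristic functions" with a Gaussian
# window below illustrates the smoothed characteristic functions that the SCF
# variant of the test compares at the chosen frequencies.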
v1 = 3
dom = np.linspace(-4, 4, 1000)
x = tri_fun(dom, w=1)
y = tri_fun(dom, w=2)
k = smooth_ker(dom, w=1.3)
x_k = np.convolve(x, k, 'same')
y_k = np.convolve(y, k, 'same')
plt_scale = 1.0/20
plt.figure(figsize=(7, 4))
# plot v
plt.plot([v1, v1], [0, np.max(x)/3.0], '--og', label=r'$\mathbf{v}_1$')
plt.plot(dom, x, 'b-', label=r'$\hat{p}(\mathbf{\omega})$')
plt.plot(dom, y, 'r-', label=r'$\hat{q}(\mathbf{\omega})$')
plt.legend(loc='best')
plt.title('Characteristic functions $\hat{p}(\mathbf{\omega}), \hat{q}(\mathbf{\omega})$')
#plt.gca().get_xaxis().set_visible(False)
plt.gca().get_yaxis().set_visible(False)
plt.box(False)
plt.savefig('characteristic_funcs.pdf')
#plt.plot(dom, plt_scale*x_k, '-r')
plt.figure(figsize=(7, 4))
# plot v
plt.plot([v1, v1], [0, np.max(y_k)/3.0], '--og', label=r'$\mathbf{v}_1$')
plt.plot(dom, x_k, 'b-', label=r'$(l \ast \hat{p})(\mathbf{\omega})$')
plt.plot(dom, y_k, 'r-', label=r'$(l \ast \hat{p})(\mathbf{\omega})$')
plt.legend()
plt.title('Smoothed characteristic functions')
plt.gca().get_yaxis().set_visible(False)
plt.box(False)
plt.savefig('smooth_cfs.pdf')
from mpl_toolkits.mplot3d import axes3d
from matplotlib import cm
def best_loc2_testpower(X, Y, gamma, loc1):
Show a heatmap of Lambda(T) on many locations of the test points.
J=2 (two locations). Assume loc1 is given. Vary loc2 (2d).
* loc1 can be None, in which case, use only one test location.
# For simplicity, we will assume that J=2 (two frequencies)
# and that one (loc1) is fixed. We will optimize loc2 (2-dimensional).
XY = np.vstack((X,Y))
max1, max2 = np.max(XY, 0)
min1, min2 = np.min(XY, 0)
#sd1, sd2 = np.std(XY, 0)
sd1, sd2 = (0, 0)
# form a frequency grid to try
nd1 = 60
nd2 = 60
loc1_cands = np.linspace(min1-sd1/2, max1+sd1/2, nd1)
loc2_cands = np.linspace(min2-sd2/2, max2+sd2/2, nd2)
lloc1, lloc2 = np.meshgrid(loc1_cands, loc2_cands)
# nd2 x nd1 x 2
loc3d = np.dstack((lloc1, lloc2))
# #candidates x 2
all_loc2s = np.reshape(loc3d, (-1, 2) )
# all_locs = #candidates x J x 2
if loc1 is not None:
all_locs = np.array( [np.vstack((c, loc1)) for c in all_loc2s] )
else:
all_locs = np.array( [ c[np.newaxis, :] for c in all_loc2s] )
# evaluate Lambda(T) on each candidate T on the grid. Size = (#candidates, )
stat_grid = np.array([t2_stat(X, Y, T, gamma) for T in all_locs])
stat_grid = np.reshape(stat_grid, (nd2, nd1) )
#ax = fig.gca(projection='3d')
#ax.plot_surface(lloc1, lloc2, stat_grid, rstride=8, cstride=8, alpha=0.3)
#cset = ax.contourf(lloc1, lloc2, stat_grid, zdir='z', offset=0, cmap=cm.coolwarm)
#plt.figure(figsize=(7, 4))
plt.contourf(lloc1, lloc2, stat_grid, alpha=0.28)
# plt.gca().get_xaxis().set_visible(False)
# plt.gca().get_yaxis().set_visible(False)
plt.colorbar()
max_stat = np.max(stat_grid)
#plt.xlabel('')
#plt.ylabel('')
plt.title('$\mathbf{v}_2 \mapsto \hat{\lambda}_{n}^{tr}(\mathbf{v}_1, \mathbf{v}_2)$')
#ax.view_init(elev=max_stat*2, azim=90)
# plot the data
#plt.figure(figsize=(12, 4))
plt.plot(X[:, 0], X[:, 1], 'ob', label='$\mathsf{X}$',
markeredgecolor='b', markersize=4, alpha=0.9)
plt.plot(Y[:, 0], Y[:, 1], 'or', label='$\mathsf{Y}$',
markeredgecolor='r', markersize=4, alpha=0.9)
if loc1 is not None:
loc1x, loc1y = loc1
plt.plot(loc1x, loc1y, '^k', markersize=20, label='$\mathbf{v}_1$')
#plt.legend(loc='best')
plt.savefig('lambda_t2_surface.pdf')
def t2_stat(X, Y, locs, gamma):
"""locs: J x d"""
tst_data = data.TSTData(X, Y)
me = tst.MeanEmbeddingTest(locs, gamma, alpha=alpha)
result = me.perform_test(tst_data)
return result['test_stat']
alpha = 0.01
# locations to test
ss = data.SSGaussMeanDiff(d=2, my=1.0)
#ss = data.SSGaussVarDiff(d=2)
#ss = data.SSBlobs()
n = 500
tst_data = ss.sample(n=n, seed=6)
#locs = tst.MeanEmbeddingTest.init_locs_2randn(tst_data, n_test_locs=2, seed=39)
X, Y = tst_data.xy()
from __future__ import print_function
from ipywidgets import interact, interactive, fixed
from IPython.display import display
import ipywidgets as widgets
# interactively select test locations
def me_test_plot_interact(X, Y, loc1x=0, loc1y=0, gamma=1):
#locs = np.array([[loc1x, loc1y], [loc2x, loc2y]])
# compute the test statistic
loc1 = np.array([loc1x, loc1y])
best_loc2_testpower(X, Y, gamma, loc1)
loc1_bnd = (-5, 5, 0.1)
# loc1_bnd = (np.min(counts), np.max(counts), 1)
loc2_bnd = loc1_bnd
vs = interactive(me_test_plot_interact, X=fixed(X), Y=fixed(Y), loc1x=loc1_bnd,
loc1y=loc2_bnd, gamma=(0.1, 10, 0.1));
display(vs)
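# Non-interactive fallback (added; for environments without ipywidgets) -- the same
# plot can be produced by calling the function directly, e.g.
# me_test_plot_interact(X, Y, loc1x=0.0, loc1y=0.0, gamma=1.0)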
# Another dataset
with util.NumpySeedContext(seed=74):
# match count of each of the n players
#counts = stats.poisson.rvs(mu, size=n)
# counts = stats.uniform.rvs(loc=2, scale=30, size=n).astype(int)
counts = 1+stats.expon.rvs(scale=15, size=n).astype(int)
A = stats.uniform.rvs(loc=10*counts, scale=4, size=n)
B = stats.uniform.rvs(loc=7*counts**0.6, scale=4, size=n)
pobs = stats.beta.rvs(a=A, b=B)
rand_signs = stats.bernoulli.rvs(0.5, size=n)*2-1
pmodel = pobs + 0.5*np.random.randn(n)/counts**1.5 + rand_signs*np.random.rand(n)*0.03
pmodel = np.maximum(np.minimum(pmodel, 1), 0)
X = np.vstack((counts, pobs)).T
Y = np.vstack((counts, pmodel)).T
plt.plot(pobs, pmodel, 'ko')
plt.xlabel('P obs')
plt.ylabel('P model')
gw2 = util.meddistance(np.vstack((X, Y)), subsample=1000)
print('Gaussian width^2: {0}'.format(gw2))
plt.figure(figsize=(10, 5))
best_loc2_testpower(X, Y, gw2, loc1=None)
plt.title('Test criterion')
plt.xlabel('Match count')
plt.ylabel('P(win)')
plt.xlim([1, 40])
plt.savefig('tomminka_game_problem.pdf', bbox_inches='tight')
# sample source
m = 3000
#dim = 2
n = m
#ss = data.SSGaussMeanDiff(dim, my=0.5)
#ss = data.SSUnif(plb=[-2, -2], pub=[0, 2], qlb=[-2, -2], qub=[2, 2])
#ss = data.SSUnif(plb=[-2], pub=[0], qlb=[-2], qub=[2])
pbs = np.array([[-2, 0]])
slack = 0.0
qbs = np.array([[-2+slack, 0-slack], [2, 4]])
ss = data.SSMixUnif1D(pbs, qbs)
#ss = data.SSGaussVarDiff(d=dim)
#ss = data.SSBlobs()
tst_data = ss.sample(m, seed=9)
tr, te = tst_data.split_tr_te(tr_proportion=0.5, seed=11)
alpha = 0.01
#gwidth = util.meddistance(tr.stack_xy(), subsample=1000)**2
gwidth = 0.3
reg = 0.0
dom = np.linspace(-6, 6, 400)
# add an x very close to 0
#dom = np.append(dom, [1e-9])
dom.sort()
xtr, ytr = tr.xy()
test_stats = np.zeros(len(dom))
# unnormalized stats
sigs = np.zeros(len(dom))
un_stats = np.zeros(len(dom))
witness = np.zeros(len(dom))
varx = np.zeros(len(dom))
vary = np.zeros(len(dom))
for i, t1x in enumerate(dom):
t1 = np.array([t1x])
T = t1[np.newaxis, :]
#met_i = tst.MeanEmbeddingTest(T, gwidth, alpha)
#test_i = met_i.perform_test(te)
#stats[i] = test_i['test_stat']
g = tst.MeanEmbeddingTest.gauss_kernel(xtr, T, gwidth)
h = tst.MeanEmbeddingTest.gauss_kernel(ytr, T, gwidth)
varx[i] = np.cov(g.T)
vary[i] = np.cov(h.T)
Z = g-h
#Sig = np.cov(Z.T)
Sig = varx[i] + vary[i]
W = np.mean(Z, 0)
stat = n*(W[0]**2)/(Sig + reg)
test_stats[i] = stat
sigs[i] = Sig
un_stats[i] = n*W[0]**2
witness[i] = W[0]
print('gwidth**2: %.3f'%gwidth)
# plot location shift vs. test stat
plt.figure(figsize=(10, 5))
plt.plot(dom, test_stats, 'g-', label=r'$\hat{\lambda}_n(\mathbf{v})$')
#plt.plot(t1x_list, sigs, label=r'$\sigma_n^2 $')
# renormalized variance to match the scaling of the statistics
norm_sigs = sigs/np.max(sigs)*np.max(test_stats)/2
plt.plot(dom, norm_sigs, 'm-', label=r'$\propto \mathbf{S}_n(\mathbf{v})$')
norm_un_stats = un_stats/np.max(un_stats)*np.max(test_stats)
plt.plot(dom, norm_un_stats, 'k-',
label=r'$(\hat{\mu}_P(\mathbf{v}) - \hat{\mu}_Q(\mathbf{v}))^2$')
#plt.title(r'$\mathbf{v}_1 = [v, 0]$, $\gamma_n = %.2f$, $n=%d$'%(reg, xte.shape[0]))
plt.legend(bbox_to_anchor=(1.2, 1))
#plt.xlabel('$v$ in $1^{st}$ dim. of $\mathbf{v}_1$')
plt.xlabel(r'$\mathbf{v}$', fontsize=40)
#plt.xlim([-10, 6])
plt.gca().get_yaxis().set_visible(False)
plt.gca().xaxis.set_ticks_position('bottom')
plt.box(False)
plt.savefig('witness_vs_normalized_stat.pdf')
#plt.ylabel('Test statistic')
# sigs = varx + vary
plt.figure(figsize=(10, 4))
plt.plot(dom, varx, 'b--', label=r'$\hat{s}_\mathbf{x}(\mathbf{v})$')
plt.plot(dom, vary, 'r--', label=r'$\hat{s}_\mathbf{y}(\mathbf{v})$')
plt.plot(dom, sigs, 'm-', label=r'$\hat{s}(\mathbf{v})$', alpha=0.5)
# r'$\hat{s}(\mathbf{v}) = \hat{s}_\mathbf{x}(\mathbf{v}) + \hat{s}_\mathbf{y}(\mathbf{v})$'
#plt.title('(unnormalized) variances')
plt.xlabel('$\mathbf{v}$', fontsize=40)
plt.ylim([0, np.max(sigs)+5e-3])
plt.gca().get_yaxis().set_visible(False)
plt.gca().xaxis.set_ticks_position('bottom')
plt.box(False)
plt.legend(bbox_to_anchor=(0.3, 1))
#plt.legend()
#plt.savefig('me_var_x.pdf', bbox_inches='tight')
#plt.savefig('me_var_xy.pdf', bbox_inches='tight')
plt.savefig('me_var.pdf', bbox_inches='tight')
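# Note (added): here sigs equals varx + vary exactly (see the loop above), which is
# the identity sigma_n^2 = sigma_x^2 + sigma_y^2 used for independent samples.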
plt.figure(figsize=(10, 4))
plt.plot(dom, witness, 'k-')
plt.title('MMD witness function')
plt.grid(True)
def plot_witness( px_label=r'$P$', py_label=r'$Q$'):
"""plot_var: A list of 'x', 'y', or 'total'. Not plotting if []."""
# plot as 1d. Take the first dimension
plt.figure(figsize=(8, 4))
zthresh = 1e-2
pden = ss.density_p(dom)
qden = ss.density_q(dom)
pden[pden<=zthresh] = zthresh
plt.plot(dom, pden, 'b-', alpha=1,label=px_label,)
plt.plot(dom, qden, 'r-', alpha=0.99, label=py_label,)
max_den = np.max(np.maximum(pden, qden))
max_witness = np.max(witness)*1
#toplot_witness = witness/max_witness*max_den
toplot_witness = witness
plt.plot(dom, toplot_witness, 'k-',
# label=r'$(\hat{\mu}_P(\mathbf{v}) - \hat{\mu}_Q(\mathbf{v}))^2$'
label=r'$\mathrm{witness}$'
)
#plt.xlabel(r'$\mathbf{v}$', fontsize=40)
#plt.legend(loc='upper left')
plt.legend(
fontsize=18,
bbox_to_anchor=(0.32, 1.00),
#loc='lower left'
)
plt.xlim([np.min(dom), np.max(dom)])
plt.ylim([-0.4, 0.55])
plt.yticks([ -0.25, 0, 0.25, 0.5], fontsize=18)
plt.xticks(fontsize=18)
#plt.gca().yaxis.set_visible(False)
plt.gca().xaxis.set_ticks_position('bottom')
plt.grid()
#plt.box(False)
#plt.axis('off')
######
plot_witness()
plt.savefig(prefix_path('unif_overlap_unsquared_witness.pdf'),
bbox_inches='tight'
)
alpha = 0.01
#gwidth = util.meddistance(tr.stack_xy(), subsample=1000)**2
gwidth = 1.0
reg = 0.0
t1x_list = np.linspace(-6, 6, 200)
# add an x very close to 0
#t1x_list = np.append(t1x_list, [1e-9])
#t1x_list.sort()
test_stats = np.zeros(len(t1x_list))
# unnormalized stats
sigs = np.zeros(len(t1x_list))
un_stats = np.zeros(len(t1x_list))
witness = np.zeros(len(t1x_list))
varx = np.zeros(len(t1x_list))
vary = np.zeros(len(t1x_list))
for i, t1x in enumerate(t1x_list):
t1 = np.array([t1x])
T = t1[np.newaxis, :]
#met_i = tst.MeanEmbeddingTest(T, gwidth, alpha)
#test_i = met_i.perform_test(te)
#stats[i] = test_i['test_stat']
g = tst.MeanEmbeddingTest.gauss_kernel(xtr, T, gwidth)
h = tst.MeanEmbeddingTest.gauss_kernel(ytr, T, gwidth)
Z = g-h
Sig = np.cov(Z.T)
W = np.mean(Z, 0)
stat = n*(W[0]**2)/(Sig + reg)
test_stats[i] = stat
sigs[i] = Sig
un_stats[i] = n*W[0]**2
witness[i] = W[0]
varx[i] = np.cov(g.T)
vary[i] = np.cov(h.T)
print('gwidth**2: %.3f'%gwidth)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Save frame and display JPG
Step2: 3. Gray Scale filter
Step3: 4. Sobel filter
Step4: 5
|
<ASSISTANT_TASK:>
Python Code:
from pynq import Overlay
from pynq.drivers import Frame, HDMI
from IPython.display import Image
Overlay('base.bit').download()
hdmi=HDMI('in')
hdmi.start()
frame = hdmi.frame()
orig_img_path = '/home/xilinx/jupyter_notebooks/examples/data/orig.jpg'
frame.save_as_jpeg(orig_img_path)
Image(filename=orig_img_path)
from pynq.drivers.video import MAX_FRAME_WIDTH
frame_i = frame.frame
height = hdmi.frame_height()
width = hdmi.frame_width()
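# The weights below are the standard ITU-R BT.601 luma coefficients (0.299, 0.587,
# 0.114); pairing them with offsets +2 / +0 / +1 reflects the channel ordering of
# this frame buffer (an assumption inferred from the original code, not verified).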
for y in range(0, height):
for x in range(0, width):
offset = 3 * (y * MAX_FRAME_WIDTH + x)
gray = round((0.299*frame_i[offset+2]) +
(0.587*frame_i[offset+0]) +
(0.114*frame_i[offset+1]))
frame_i[offset:offset+3] = gray,gray,gray
gray_img_path = '/home/xilinx/jupyter_notebooks/examples/data/gray.jpg'
frame.save_as_jpeg(gray_img_path)
Image(filename=gray_img_path)
height = 1080
width = 1920
sobel = Frame(1920, 1080)
frame_i = frame.frame
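# For reference (added comment): the unrolled arithmetic below is a 3x3 Sobel
# operator; up to the sign removed by abs(), the kernels are
# Gx = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]]
# Gy = [[-1, -2, -1], [ 0,  0,  0], [+1, +2, +1]]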
for y in range(1,height-1):
for x in range(1,width-1):
offset = 3 * (y * MAX_FRAME_WIDTH + x)
upper_row_offset = offset - MAX_FRAME_WIDTH*3
lower_row_offset = offset + MAX_FRAME_WIDTH*3
gx = abs(-frame_i[lower_row_offset-3] + frame_i[lower_row_offset+3] -
2*frame_i[offset-3] + 2*frame_i[offset+3] -
frame_i[upper_row_offset-3] + frame_i[upper_row_offset+3])
gy = abs(frame_i[lower_row_offset-3] + 2*frame_i[lower_row_offset] +
frame_i[lower_row_offset+3] - frame_i[upper_row_offset-3] -
2*frame_i[upper_row_offset] - frame_i[upper_row_offset+3])
grad = min(gx + gy,255)
sobel.frame[offset:offset+3] = grad,grad,grad
sobel_img_path = '/home/xilinx/jupyter_notebooks/examples/data/sobel.jpg'
sobel.save_as_jpeg(sobel_img_path)
Image(filename=sobel_img_path)
hdmi.stop()
del sobel
del hdmi
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Type
Step7: 1.4. Elemental Stoichiometry
Step8: 1.5. Elemental Stoichiometry Details
Step9: 1.6. Prognostic Variables
Step10: 1.7. Diagnostic Variables
Step11: 1.8. Damping
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Step13: 2.2. Timestep If Not From Ocean
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Step15: 3.2. Timestep If Not From Ocean
Step16: 4. Key Properties --> Transport Scheme
Step17: 4.2. Scheme
Step18: 4.3. Use Different Scheme
Step19: 5. Key Properties --> Boundary Forcing
Step20: 5.2. River Input
Step21: 5.3. Sediments From Boundary Conditions
Step22: 5.4. Sediments From Explicit Model
Step23: 6. Key Properties --> Gas Exchange
Step24: 6.2. CO2 Exchange Type
Step25: 6.3. O2 Exchange Present
Step26: 6.4. O2 Exchange Type
Step27: 6.5. DMS Exchange Present
Step28: 6.6. DMS Exchange Type
Step29: 6.7. N2 Exchange Present
Step30: 6.8. N2 Exchange Type
Step31: 6.9. N2O Exchange Present
Step32: 6.10. N2O Exchange Type
Step33: 6.11. CFC11 Exchange Present
Step34: 6.12. CFC11 Exchange Type
Step35: 6.13. CFC12 Exchange Present
Step36: 6.14. CFC12 Exchange Type
Step37: 6.15. SF6 Exchange Present
Step38: 6.16. SF6 Exchange Type
Step39: 6.17. 13CO2 Exchange Present
Step40: 6.18. 13CO2 Exchange Type
Step41: 6.19. 14CO2 Exchange Present
Step42: 6.20. 14CO2 Exchange Type
Step43: 6.21. Other Gases
Step44: 7. Key Properties --> Carbon Chemistry
Step45: 7.2. PH Scale
Step46: 7.3. Constants If Not OMIP
Step47: 8. Tracers
Step48: 8.2. Sulfur Cycle Present
Step49: 8.3. Nutrients Present
Step50: 8.4. Nitrous Species If N
Step51: 8.5. Nitrous Processes If N
Step52: 9. Tracers --> Ecosystem
Step53: 9.2. Upper Trophic Levels Treatment
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Step55: 10.2. Pft
Step56: 10.3. Size Classes
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Step58: 11.2. Size Classes
Step59: 12. Tracers --> Disolved Organic Matter
Step60: 12.2. Lability
Step61: 13. Tracers --> Particules
Step62: 13.2. Types If Prognostic
Step63: 13.3. Size If Prognostic
Step64: 13.4. Size If Discrete
Step65: 13.5. Sinking Speed If Prognostic
Step66: 14. Tracers --> Dic Alkalinity
Step67: 14.2. Abiotic Carbon
Step68: 14.3. Alkalinity
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mpi-m', 'sandbox-3', 'ocnbgchem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The setup
Step2: This sample data has way too many stations to plot all of them. Instead, we just select
Step3: Next grab the simple variables out of the data we have (attaching correct units), and
Step4: Notice that the names (the keys) in the dictionary are the same as those that the
Step5: All the data wrangling is finished, just need to set up plotting and go
Step6: The payoff
Step7: or instead, a custom layout can be used
|
<ASSISTANT_TASK:>
Python Code:
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
import pandas as pd
from metpy.calc import wind_components
from metpy.cbook import get_test_data
from metpy.plots import (add_metpy_logo, simple_layout, StationPlot, StationPlotLayout,
wx_code_map)
from metpy.units import units
with get_test_data('station_data.txt') as f:
data_arr = pd.read_csv(f, header=0, usecols=(1, 2, 3, 4, 5, 6, 7, 17, 18, 19),
names=['stid', 'lat', 'lon', 'slp', 'air_temperature',
'cloud_fraction', 'dew_point_temperature', 'weather',
'wind_dir', 'wind_speed'],
na_values=-99999)
data_arr.set_index('stid', inplace=True)
# Pull out these specific stations
selected = ['OKC', 'ICT', 'GLD', 'MEM', 'BOS', 'MIA', 'MOB', 'ABQ', 'PHX', 'TTF',
'ORD', 'BIL', 'BIS', 'CPR', 'LAX', 'ATL', 'MSP', 'SLC', 'DFW', 'NYC', 'PHL',
'PIT', 'IND', 'OLY', 'SYR', 'LEX', 'CHS', 'TLH', 'HOU', 'GJT', 'LBB', 'LSV',
'GRB', 'CLT', 'LNK', 'DSM', 'BOI', 'FSD', 'RAP', 'RIC', 'JAN', 'HSV', 'CRW',
'SAT', 'BUY', '0CO', 'ZPC', 'VIH']
# Loop over all the whitelisted sites, grab the first data, and concatenate them
data_arr = data_arr.loc[selected]
# Drop rows with missing winds
data_arr = data_arr.dropna(how='any', subset=['wind_dir', 'wind_speed'])
# First, look at the names of variables that the layout is expecting:
simple_layout.names()
# This is our container for the data
data = {}
# Copy out to stage everything together. In an ideal world, this would happen on
# the data reading side of things, but we're not there yet.
data['longitude'] = data_arr['lon'].values
data['latitude'] = data_arr['lat'].values
data['air_temperature'] = data_arr['air_temperature'].values * units.degC
data['dew_point_temperature'] = data_arr['dew_point_temperature'].values * units.degC
data['air_pressure_at_sea_level'] = data_arr['slp'].values * units('mbar')
# Get the wind components, converting from m/s to knots as will be appropriate
# for the station plot
u, v = wind_components(data_arr['wind_speed'].values * units('m/s'),
data_arr['wind_dir'].values * units.degree)
data['eastward_wind'], data['northward_wind'] = u, v
# Convert the fraction value into a code of 0-8, which can be used to pull out
# the appropriate symbol
data['cloud_coverage'] = (8 * data_arr['cloud_fraction']).fillna(10).values.astype(int)
# Map weather strings to WMO codes, which we can use to convert to symbols
# Only use the first symbol if there are multiple
wx_text = data_arr['weather'].fillna('')
data['current_wx1_symbol'] = [wx_code_map[s.split()[0] if ' ' in s else s] for s in wx_text]
proj = ccrs.LambertConformal(central_longitude=-95, central_latitude=35,
standard_parallels=[35])
# Change the DPI of the resulting figure. Higher DPI drastically improves the
# look of the text rendering
plt.rcParams['savefig.dpi'] = 255
# Create the figure and an axes set to the projection
fig = plt.figure(figsize=(20, 10))
add_metpy_logo(fig, 1080, 290, size='large')
ax = fig.add_subplot(1, 1, 1, projection=proj)
# Add some various map elements to the plot to make it recognizable
ax.add_feature(cfeature.LAND)
ax.add_feature(cfeature.OCEAN)
ax.add_feature(cfeature.LAKES)
ax.add_feature(cfeature.COASTLINE)
ax.add_feature(cfeature.STATES)
ax.add_feature(cfeature.BORDERS, linewidth=2)
# Set plot bounds
ax.set_extent((-118, -73, 23, 50))
#
# Here's the actual station plot
#
# Start the station plot by specifying the axes to draw on, as well as the
# lon/lat of the stations (with transform). We also set the fontsize to 12 pt.
stationplot = StationPlot(ax, data['longitude'], data['latitude'],
transform=ccrs.PlateCarree(), fontsize=12)
# The layout knows where everything should go, and things are standardized using
# the names of variables. So the layout pulls arrays out of `data` and plots them
# using `stationplot`.
simple_layout.plot(stationplot, data)
plt.show()
# Just winds, temps, and dewpoint, with colors. Dewpoint and temp will be plotted
# out to Fahrenheit tenths. Extra data will be ignored
custom_layout = StationPlotLayout()
custom_layout.add_barb('eastward_wind', 'northward_wind', units='knots')
custom_layout.add_value('NW', 'air_temperature', fmt='.1f', units='degF', color='darkred')
custom_layout.add_value('SW', 'dew_point_temperature', fmt='.1f', units='degF',
color='darkgreen')
# Also, we'll add a field that we don't have in our dataset. This will be ignored
custom_layout.add_value('E', 'precipitation', fmt='0.2f', units='inch', color='blue')
# Create the figure and an axes set to the projection
fig = plt.figure(figsize=(20, 10))
add_metpy_logo(fig, 1080, 290, size='large')
ax = fig.add_subplot(1, 1, 1, projection=proj)
# Add some various map elements to the plot to make it recognizable
ax.add_feature(cfeature.LAND)
ax.add_feature(cfeature.OCEAN)
ax.add_feature(cfeature.LAKES)
ax.add_feature(cfeature.COASTLINE)
ax.add_feature(cfeature.STATES)
ax.add_feature(cfeature.BORDERS, linewidth=2)
# Set plot bounds
ax.set_extent((-118, -73, 23, 50))
#
# Here's the actual station plot
#
# Start the station plot by specifying the axes to draw on, as well as the
# lon/lat of the stations (with transform). We also set the fontsize to 12 pt.
stationplot = StationPlot(ax, data['longitude'], data['latitude'],
transform=ccrs.PlateCarree(), fontsize=12)
# The layout knows where everything should go, and things are standardized using
# the names of variables. So the layout pulls arrays out of `data` and plots them
# using `stationplot`.
custom_layout.plot(stationplot, data)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Select one of those files to use as an example, and take a look at its HTML content.
Step2: Next, use Beautiful Soup to extract text out of the HTML. Following the DOM structure of the HTML document, select the <div/> that encloses the article text, then iterate through the <p/> paragraphs to extract the text from each.
|
<ASSISTANT_TASK:>
Python Code:
%sx ls html/
file = "html/article1.html"
print(open(file, "r").readlines())
from bs4 import BeautifulSoup
with open(file, "r") as f:
soup = BeautifulSoup(f, "html.parser")
for div in soup.find_all("div", id="article-body"):
for p in div.find_all("p"):
print(p.get_text(), "\n")
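# A small extension (assumed, not part of the original article): collect the same
# paragraphs into one string, which is handier for downstream text processing.
paragraphs = [p.get_text() for div in soup.find_all("div", id="article-body") for p in div.find_all("p")]
article_text = "\n\n".join(paragraphs)
print(len(article_text), "characters")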
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Gradient descent
Step3: simple Gradient descent exercise
Step4: Implementing gradient descent
Step6: the actual implmentation
Step8: Multilayer perceptrons
Step10: Back Propogate
Step12: implement backprop.py
|
<ASSISTANT_TASK:>
Python Code:
x = [i for i in range(-10,10)]
#print(x)
def sigmoid(num):
return 1.0 / (1.0 + np.exp(-num))
plt.plot(range(-10,10), [sigmoid(i) for i in x])
plt.show()
from IPython.display import YouTubeVideo
YouTubeVideo('29PmNG7fuuM', width="560")
# Defining the sigmoid function for activations
def sigmoid(x):
return 1/(1+np.exp(-x))
# Derivative of the sigmoid function
def sigmoid_prime(x):
return sigmoid(x) * (1 - sigmoid(x))
x = np.array([0.1, 0.3])
y = 0.2
weights = np.array([-0.8, 0.5])
# The learning rate, eta in the weight step equation
learnrate = 0.5
# The neural network output
nn_output = sigmoid(x[0]*weights[0] + x[1]*weights[1])
# or nn_output = sigmoid(np.dot(x, w))
# output error
error = y - nn_output
# error gradient
error_grad = error * sigmoid_prime(np.dot(x,weights))
# Gradient descent step
del_w = [ learnrate * error_grad * x[0],
learnrate * error_grad * x[1]]
# or del_w = learnrate * error_grad * x
def sigmoid(x):
Calculate sigmoid
return 1/(1+np.exp(-x))
learnrate = 0.5
x = np.array([1, 2])
y = np.array(0.5)
# Initial weights
w = np.array([0.5, -0.5])
# Calculate one gradient descent step for each weight
# TODO: Calculate output of neural network
nn_output = sigmoid(np.dot(x,w))
# TODO: Calculate error of neural network
error = y - nn_output
# TODO: Calculate change in weights
del_w = learnrate * error * nn_output * (1 - nn_output) * x
print('Neural Network output:')
print(nn_output)
print('Amount of Error:')
print(error)
print('Change in Weights:')
print(del_w)
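# Follow-up illustration (not part of the original exercise): applying the
# computed step once and re-evaluating shows the prediction moving toward y.
w_new = w + del_w
print('output before step:', nn_output)
print('output after step: ', sigmoid(np.dot(x, w_new)))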
admissions = pd.read_csv('data/binary.csv')
admissions.head()
data = pd.concat([admissions, pd.get_dummies(admissions['rank'], prefix='rank')], axis=1)
data = data.drop('rank', axis=1)
data.head()
# Standarize features
for field in ['gre', 'gpa']:
mean, std = data[field].mean(), data[field].std()
data.loc[:,field] = (data[field]-mean)/std
data.head()
# Split off random 10% of the data for testing
np.random.seed(42)
sample = np.random.choice(data.index, size=int(len(data)*0.9), replace=False)
data, test_data = data.ix[sample], data.drop(sample)
data.shape, test_data.shape
# Split into features and targets
features, targets = data.drop('admit', axis=1), data['admit']
features_test, targets_test = test_data.drop('admit', axis=1), test_data['admit']
def sigmoid(x):
Calculate sigmoid
return 1 / (1 + np.exp(-x))
# Use to same seed to make debugging easier
np.random.seed(42)
n_records, n_features = features.shape
last_loss = None
# Initialize weights
weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
# Neural Network hyperparameters
epochs = 10000
learnrate = -0.015
for e in range(epochs):
del_w = np.zeros(weights.shape)
for x, y in zip(features.values, targets):
# Loop through all records, x is the input, y is the target
# TODO: Calculate the output
output = sigmoid(np.dot(x,weights))
# TODO: Calculate the error
error = output - y
# TODO: Calculate change in weights
del_w += error * output * (1 - output) * x
# TODO: Update weights
weights += (del_w * learnrate) / n_records
# Printing out the mean square error on the training set
if e % (epochs / 10) == 0:
out = sigmoid(np.dot(features, weights))
loss = np.mean((out - targets) ** 2)
if last_loss and last_loss < loss:
print("Train loss: ", loss, " WARNING - Loss Increasing")
else:
print("Train loss: ", loss)
last_loss = loss
# Calculate accuracy on test data
tes_out = sigmoid(np.dot(features_test, weights))
predictions = tes_out > 0.5
accuracy = np.mean(predictions == targets_test)
print("Prediction accuracy: {:.3f}".format(accuracy))
# Number of records and input units
n_records, n_inputs = features.shape
# Number of hidden units
n_hidden = 2
weights = np.random.normal(0, 1/n_inputs**-2, size=(n_inputs, n_hidden))
weights
import numpy as np
def sigmoid(x):
Calculate sigmoid
return 1/(1+np.exp(-x))
# Network size
N_input = 4
N_hidden = 3
N_output = 2
np.random.seed(42)
# Make some fake data
X = np.random.randn(4)
weights_in_hidden = np.random.normal(0, scale=0.1, size=(N_input, N_hidden))
weights_hidden_out = np.random.normal(0, scale=0.1, size=(N_hidden, N_output))
# TODO: Make a forward pass through the network
hidden_layer_in = np.dot(X, weights_in_hidden)
hidden_layer_out = sigmoid(hidden_layer_in)
print('Hidden-layer Output:')
print(hidden_layer_out)
output_layer_in = np.dot(hidden_layer_out, weights_hidden_out)
output_layer_out = sigmoid(output_layer_in)
print('Output-layer Output:')
print(output_layer_out)
def sigmoid(x):
Calculate sigmoid
return 1 / (1 + np.exp(-x))
x = np.array([0.5, 0.1, -0.2])
target = 0.6
learnrate = 0.5
weights_input_hidden = np.array([[0.5, -0.6],
[0.1, -0.2],
[0.1, 0.7]])
weights_hidden_output = np.array([0.1, -0.3])
## Forward pass
hidden_layer_input = np.dot(x, weights_input_hidden)
hidden_layer_output = sigmoid(hidden_layer_input)
output_layer_in = np.dot(hidden_layer_output, weights_hidden_output)
output = sigmoid(output_layer_in)
## Backwards pass
## TODO: Calculate error
error = target - output
# TODO: Calculate error gradient for output layer
del_err_output = error * output * (1 - output)
# TODO: Calculate error gradient for hidden layer
del_err_hidden = np.dot(del_err_output, weights_hidden_output) * \
hidden_layer_output * (1 - hidden_layer_output)
# TODO: Calculate change in weights for hidden layer to output layer
delta_w_h_o = learnrate * del_err_output * hidden_layer_output
# TODO: Calculate change in weights for input layer to hidden layer
delta_w_i_o = learnrate * del_err_hidden * x[:, None]
print('Change in weights for hidden layer to output layer:')
print(delta_w_h_o)
print('Change in weights for input layer to hidden layer:')
print(delta_w_i_o)
import numpy as np
#from data_prep import features, targets, features_test, targets_test
np.random.seed(42)
def sigmoid(x):
Calculate sigmoid
return 1 / (1 + np.exp(-x))
# Hyperparameters
n_hidden = 3 # number of hidden units
epochs = 500
learnrate = 0.02
n_records, n_features = features.shape
last_loss = None
# Initialize weights
weights_input_hidden = np.random.normal(scale=1 / n_features ** .5,
size=(n_features, n_hidden))
weights_hidden_output = np.random.normal(scale=1 / n_features ** .5,
size=n_hidden)
for e in range(epochs):
del_w_input_hidden = np.zeros(weights_input_hidden.shape)
del_w_hidden_output = np.zeros(weights_hidden_output.shape)
for x, y in zip(features.values, targets):
## Forward pass ##
# TODO: Calculate the output
hidden_input = np.dot(x, weights_input_hidden)
hidden_activations = sigmoid(hidden_input)
output = sigmoid(np.dot(hidden_activations, weights_hidden_output))
## Backward pass ##
# TODO: Calculate the error
error = y - output
# TODO: Calculate error gradient in output unit
output_error = error * output * (1 - output)
# TODO: propagate errors to hidden layer
hidden_error = np.dot(output_error, weights_hidden_output) \
* hidden_activations * (1 - hidden_activations)
# TODO: Update the change in weights
del_w_hidden_output += output_error * hidden_activations
del_w_input_hidden += hidden_error * x[:, None]
# TODO: Update weights
weights_input_hidden += learnrate * del_w_input_hidden / n_records
weights_hidden_output += learnrate * del_w_hidden_output / n_records
# Printing out the mean square error on the training set
if e % (epochs / 10) == 0:
hidden_activations = sigmoid(np.dot(features, weights_input_hidden))
out = sigmoid(np.dot(hidden_activations, weights_hidden_output))
loss = np.mean((out - targets) ** 2)
if last_loss and last_loss < loss:
print("Train loss: ", loss, " WARNING - Loss Increasing")
else:
print("Train loss: ", loss)
last_loss = loss
# Calculate accuracy on test data
hidden = sigmoid(np.dot(features_test, weights_input_hidden))
out = sigmoid(np.dot(hidden, weights_hidden_output))
predictions = out > 0.5
accuracy = np.mean(predictions == targets_test)
print("Prediction accuracy: {:.3f}".format(accuracy))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Work with a Cloudant database
Step2: 3. Work with a Dataframe
Step3: 4. Generate visualizations
|
<ASSISTANT_TASK:>
Python Code:
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
cloudantdata = spark.read.format("org.apache.bahir.cloudant")\
.option("cloudant.host","xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx-bluemix.cloudant.com")\
.option("cloudant.username", "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx-bluemix")\
.option("cloudant.password","xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")\
.load("crimes")
# This code prints the schema and a record count
cloudantdata.printSchema()
cloudantdata.count()
# This code displays the values of the naturecode field
cloudantdata.select("properties.naturecode").show()
# This code filters the data to just those records with a naturecode of "DISTRB", and then displays that data
disturbDF = cloudantdata.filter("properties.naturecode = 'DISTRB'")
disturbDF.show()
# This code writes the filtered data to a Cloudant database called crimes_filtered. If the Cloudant database exists, the documents will be added to the database.
# If the database does not exist, set the createDBOnSave option to 'true'.
disturbDF.select("properties").write.format("org.apache.bahir.cloudant")\
.option("cloudant.host","xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx-bluemix.cloudant.com")\
.option("cloudant.username", "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx-bluemix")\
.option("cloudant.password","xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")\
.option("createDBOnSave", "true")\
.save("crimes_filtered")
# Next, you'll see how to create a visualization of the crimes data.
# First, this line creates a DataFrame containing all of the naturecodes and a count of the crime incidents for each code.
reducedValue = cloudantdata.groupBy("properties.naturecode").count()
reducedValue.printSchema()
# This line imports two Python modules. The pprint module helps to produce pretty representations of data structures,
# and the counter subclass from the collections module helps to count hashable objects.
import pprint
from collections import Counter
# This line imports PySpark classes for Spark SQL and DataFrames.
from pyspark.sql import *
from pyspark.sql.functions import udf, asc, desc
from pyspark import SparkContext, SparkConf
from pyspark.sql.types import IntegerType
# This line converts an Apache Spark DataFrame to a Panda DataFrame, and also sorts the DataFrame by count first,
# and then by naturecode second in order to produce a sorted graph later.
import pandas as pd
pandaDF = reducedValue.orderBy(desc("count"), asc("naturecode")).toPandas()
print(pandaDF)
# This is needed to actually see the plots
%matplotlib inline
# This line imports matplotlib.pyplot which is a collection of command style functions that make matplotlib work like MATLAB
import matplotlib.pyplot as plt
# These lines assign the data to the values and labels objects.
values = pandaDF['count']
labels = pandaDF['naturecode']
# These lines provide the format for the plot.
plt.gcf().set_size_inches(16, 12, forward=True)
plt.title('Number of crimes by type')
# These lines specify that the plot should display as a horizontal bar chart with values being for the x axis
# and labels for the y axis
plt.barh(range(len(values)), values)
plt.yticks(range(len(values)), labels)
# This last line displays the plot
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship
Step3: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcome[i].
Step5: Tip
Step6: Question 1
Step7: Answer
Step9: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction
Step10: Question 2
Step11: Answer
Step13: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction
Step14: Question 3
Step15: Answer
Step17: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Step18: Question 4
Step19: Answer
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
# RMS Titanic data visualization code
from titanic_visualizations import survival_stats
from IPython.display import display
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
def accuracy_score(truth, pred):
Returns accuracy score for input truth and predictions.
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print accuracy_score(outcomes[:5], predictions)
def predictions_0(data):
Model with no features. Always predicts a passenger did not survive.
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
print accuracy_score(outcomes, predictions)
survival_stats(data, outcomes, 'Sex')
def predictions_1(data):
Model with one feature:
- Predict a passenger survived if they are female.
predictions = []
for _, passenger in data.iterrows():
if passenger['Sex'] == 'female':
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
print accuracy_score(outcomes, predictions)
survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
def predictions_2(data):
Model with two features:
- Predict a passenger survived if they are female.
- Predict a passenger survived if they are male and younger than 10.
predictions = []
for _, passenger in data.iterrows():
if passenger['Sex'] == 'female':
predictions.append(1)
elif passenger['Age'] < 10:
predictions.append(1)
else:
predictions.append(0)
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
print accuracy_score(outcomes, predictions)
survival_stats(data, outcomes, 'Pclass')
survival_stats(data, outcomes, 'SibSp',["Sex == 'male'"])
survival_stats(data, outcomes, 'Parch')
def predictions_3(data):
Model with multiple features. Makes a prediction with an accuracy of at least 80%.
predictions = []
for _, passenger in data.iterrows():
age_threshold = 12
if passenger['Sex'] == 'female':
if passenger['SibSp'] < 3:
predictions.append(1)
else:
predictions.append(0)
elif passenger['Pclass'] == 3:
predictions.append(0)
else:
if passenger['Age'] < age_threshold:
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
print accuracy_score(outcomes, predictions)
def information_content(data, feat):
classes = data[feat].unique()
MAX = 0
MIN = 0
for cl in classes:
survived = len(data[data[feat] == cl][data['Survived'] == 1])
dead = len(data[data[feat] == cl][data['Survived'] == 0])
MAX += max(survived, dead)
MIN += min(survived, dead)
print "Information content for" , feat, " = ", float(MAX)/(MAX + MIN)
information_content(full_data, 'Sex')
information_content(full_data, 'Pclass')
information_content(full_data, 'Parch')
information_content(full_data, 'SibSp')
def information_content_conditioned(data, feat, cond_feat):
classes = data[feat].unique()
cond_class = data[cond_feat].unique()
for cond_cl in cond_class:
MAX = 0
MIN = 0
for cl in classes:
survived = len(data[data[feat] == cl][data[cond_feat] == cond_cl][data['Survived'] == 1])
dead = len(data[data[feat] == cl][data[cond_feat] == cond_cl][data['Survived'] == 0])
MAX += max(survived, dead)
MIN += min(survived, dead)
print "Condition", cond_feat, "=", cond_cl, "Information content for" , feat, " = ", float(MAX)/(MAX + MIN)
information_content_conditioned(full_data, 'Pclass', 'Sex')
information_content_conditioned(full_data, 'Parch', 'Sex')
information_content_conditioned(full_data, 'SibSp', 'Sex')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate data
Step2: Solve EMD
Step3: Solve EMD with Frobenius norm regularization
Step4: Solve EMD with entropic regularization
Step5: Solve EMD with Frobenius norm + entropic regularization
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pylab as pl
import ot
import ot.plot
#%% parameters
n = 100 # nb bins
# bin positions
x = np.arange(n, dtype=np.float64)
# Gaussian distributions
a = ot.datasets.make_1D_gauss(n, m=20, s=5) # m= mean, s= std
b = ot.datasets.make_1D_gauss(n, m=60, s=10)
# loss matrix
M = ot.dist(x.reshape((n, 1)), x.reshape((n, 1)))
M /= M.max()
#%% EMD
G0 = ot.emd(a, b, M)
pl.figure(3, figsize=(5, 5))
ot.plot.plot1D_mat(a, b, G0, 'OT matrix G0')
#%% Example with Frobenius norm regularization
def f(G):
return 0.5 * np.sum(G**2)
def df(G):
return G
reg = 1e-1
Gl2 = ot.optim.cg(a, b, M, reg, f, df, verbose=True)
pl.figure(3)
ot.plot.plot1D_mat(a, b, Gl2, 'OT matrix Frob. reg')
#%% Example with entropic regularization
def f(G):
return np.sum(G * np.log(G))
def df(G):
return np.log(G) + 1.
reg = 1e-3
Ge = ot.optim.cg(a, b, M, reg, f, df, verbose=True)
pl.figure(4, figsize=(5, 5))
ot.plot.plot1D_mat(a, b, Ge, 'OT matrix Entrop. reg')
#%% Example with Frobenius norm + entropic regularization with gcg
def f(G):
return 0.5 * np.sum(G**2)
def df(G):
return G
reg1 = 1e-3
reg2 = 1e-1
Gel2 = ot.optim.gcg(a, b, M, reg1, reg2, f, df, verbose=True)
pl.figure(5, figsize=(5, 5))
ot.plot.plot1D_mat(a, b, Gel2, 'OT entropic + matrix Frob. reg')
pl.show()
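# Follow-up comparison (added for illustration, not in the original example):
# the transport cost <G, M> of each plan quantifies what the different
# regularizers trade away for smoother couplings.
print('EMD cost: ', np.sum(G0 * M))
print('Frobenius-reg cost: ', np.sum(Gl2 * M))
print('entropic-reg cost: ', np.sum(Ge * M))
print('Frob+entropic cost: ', np.sum(Gel2 * M))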
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: HNU Dataset
Step2: DC1 Dataset
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import re
import matplotlib.pyplot as plt
def memory_function(infile, dataset):
with open(infile, 'r') as mem:
lines = mem.readlines()
testar = np.asarray([line.strip() for line in lines]).astype(float)/1000
fig=plt.figure()
ax = fig.add_subplot(111)
ax.plot(range(0, testar.shape[0]), testar - min(testar))
ax.set_ylabel('memory usage in GB')
ax.set_xlabel('Time (s)')
ax.set_title(dataset + ' single subject Memory Usage; max = %.2f GB; mean = %.2f GB' % (max(testar), np.mean(testar)))
return fig
def cpu_function(infile, dataset):
with open(infile, 'r') as mem:
lines = mem.readlines()
testar = np.asarray([line.strip() for line in lines]).astype(float)
fig=plt.figure()
ax = fig.add_subplot(111)
ax.plot(range(0, testar.shape[0]), testar)
ax.set_ylabel('CPU usage (%)')
ax.set_xlabel('Time (s)')
ax.set_title(dataset + ' single subject Processor Usage; max = %.1f per; mean = %.1f per' % (max(testar), np.mean(testar)))
return fig
def disk_function(infile, dataset):
with open(infile, 'r') as mem:
lines = mem.readlines()
testar = np.asarray([line.strip() for line in lines]).astype(float)/1000000
fig=plt.figure()
ax = fig.add_subplot(111)
ax.plot(range(0, testar.shape[0]), testar - min(testar))
ax.set_ylabel('Disk usage GB')
ax.set_xlabel('Time (s)')
ax.set_title(dataset + ' single subject Disk Usage; max = %.1f GB; mean = %.1f GB' % (max(testar), np.mean(testar)))
return fig
memfig = memory_function('BNU_sub/output_single/membnu_single.txt', 'BNU 1')
diskfig = disk_function('BNU_sub/output_single/diskbnu_single.txt', 'BNU 1')
cpufig = cpu_function('BNU_sub/output_single/cpubnu_single.txt', 'BNU 1')
memfig.show()
diskfig.show()
cpufig.show()
memfig = memory_function('HNU_sub/output_single/memhnu_single.txt', 'HNU 1')
diskfig = disk_function('HNU_sub/output_single/diskhnu_single.txt', 'HNU 1')
cpufig = cpu_function('HNU_sub/output_single/cpuhnu_single.txt', 'HNU 1')
memfig.show()
diskfig.show()
cpufig.show()
memfig = memory_function('DC_sub/output_single/memsingle.txt', 'DC 1')
diskfig = disk_function('DC_sub/output_single/disksingle.txt', 'DC 1')
cpufig = cpu_function('DC_sub/output_single/cpusingle.txt', 'DC 1')
memfig.show()
diskfig.show()
cpufig.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Quitting <a id="lipdquit"></a>
Step2: Reading Files<a id="lipdread"></a>
Step3: Excel Spreadsheet Converter <a id="excel"></a>
Step4: NOAA Converter <a id="noaa"></a>
Step5: DOI Updater <a id="doi"></a>
Step6: Writing Files <a id="writelipds"></a>
Step7: Pickling Data <a id="pickle"></a>
Step8: Other Functions<a id="other"></a>
Step9: Library Data <a id="lipd_lib"></a>
Step10: Library Data (example) <a id=""></a>
Step11: TimeSeries <a id="timeseries"></a>
Step12: Pandas Dataframes<a id="pandas"></a>
Step13: <div class="alert alert-warning" role="alert" style="margin
Step14: Removing LiPDs <a id="removelipds"></a>
|
<ASSISTANT_TASK:>
Python Code:
import lipd
lipd.quit()
# Read File - GUI
lipd.readLipd()
lipd.readExcel()
lipd.readNoaa()
# Read File - with path argument - no GUI
lipd.readLipd("/path/to/file.lpd")
lipd.readExcel("/path/to/file.xls")
lipd.readNoaa("/path/to/file.txt")
# Read Directory - GUI
lipd.readLipds()
lipd.readExcels()
lipd.readNoaas()
# Read Directory - with path argument - no GUI
lipd.readLipds("/path/to/dir/")
lipd.readExcels("/path/to/dir/")
lipd.readNoaas("/path/to/dir/")
# Read Directory - all file types - GUI
lipd.readAll()
# Read Directory - all file types - no GUI
lipd.readAll("/path/to/dir/")
lipd.excel()
# Run the function
lipd.noaa()
# Choose a conversion
# Which conversion?
# 1. LPD to NOAA
# 2. NOAA to LPD
lipd.doi()
# Write Files - GUI
lipd.writeLipds()
# Write with path argument - No GUI
lipd.writeLipds("/path/to/dir/")
import pickle
import gzip
# Read a pickle file
f = gzip.open('filename.pklz','rb')
newData = pickle.load(f)
f.close()
# Write a pickle file
yourData = {'a':'blah','b':range(10)}
f = gzip.open('filename.pklz','wb')
pickle.dump(yourData,f)
f.close()
odp_csv = lipd.getCsv("ODP1098B12.lpd")
odp_metadata = lipd.getMetadata("ODP1098B12.lpd")
D = lipd.getLibrary()
%%html
<img src="./d.png" />
%%html
<img src="./tso1.png" />
<img src="./tso2.png" />
time_series = lipd.extractTs()
new_time_series = lipd.find("archiveType is marine sediment", time_series)
new_time_series = lipd.find("geo_meanElev <= -1000 && geo_meanElev > -1100", time_series)
lipd.collapseTs(time_series)
dfs_lipd = lipd.lipdToDf("ODP1098B12.lpd")
lipd.showDfs(dfs_lipd)
dfs_lipd["metadata"]
dfs_lipd["paleoData"]["ODP1098B12.Paleo1.measurementTable1.csv"]
dfs_lipd["chronData"]["ODP1098B12.Chron1.measurementTable1.csv"]
dfs_ts = lipd.tsToDf(time_series, "ODP1098B12_data_SST")
lipd.showDfs(dfs_ts)
dfs_ts["metadata"]
dfs_ts["paleoData"]
dfs_ts["chronData"]["ODP1098B12"]
lipd.removeLipds()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Function for determining the impulse response of an RRC filter
Step2: Parameters
Step3: Define channel characteristics and get channel impulse response
Step4: Simulation loop
Step5: Plotting
|
<ASSISTANT_TASK:>
Python Code:
# importing
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import matplotlib
# showing figures inline
%matplotlib inline
# plotting options
font = {'size' : 22}
plt.rc('font', **font)
plt.rc('text', usetex=matplotlib.checkdep_usetex(True))
matplotlib.rc('figure', figsize=(18, 6) )
########################
# find impulse response of an RRC filter
########################
def get_rrc_ir(K, n_up, t_symb, r):
'''
Determines coefficients of an RRC filter
Formula out of: J. Huber, Trelliscodierung, Springer, 1992, S. 15
At poles, values of wikipedia.de were used (without cross-checking)
NOTE: Length of the IR has to be an odd number
IN: length of IR, upsampling factor, symbol time, roll-off factor
OUT: filter ceofficients
'''
assert K % 2 != 0, "Filter length needs to be odd"
if r == 0:
r = 1e-32
# init
rrc = np.zeros(K)
t_sample = t_symb/n_up
i_steps = np.arange( 0, K)
k_steps = np.arange( -(K-1)/2.0, (K-1)/2.0 + 1 )
t_steps = k_steps*t_sample
for i in i_steps:
if t_steps[i] == 0:
rrc[i] = 1.0/np.sqrt(t_symb) * (1.0 - r + 4.0 * r / np.pi )
elif np.abs( t_steps[i] ) == t_symb/4.0/r:
rrc[i] = r/np.sqrt(2.0*t_symb)*((1+2/np.pi)*np.sin(np.pi/4.0/r)+ \
( 1.0 - 2.0/np.pi ) * np.cos(np.pi/4.0/r) )
else:
rrc[i] = 1.0/np.sqrt(t_symb)*( np.sin( np.pi*t_steps[i]/t_symb*(1-r) ) + \
4.0*r*t_steps[i]/t_symb * np.cos( np.pi*t_steps[i]/t_symb*(1+r) ) ) \
/ (np.pi*t_steps[i]/t_symb*(1.0-(4.0*r*t_steps[i]/t_symb)**2.0))
return rrc
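# Quick sanity check (illustrative values, not part of the original script):
# build a short RRC impulse response and normalize it to unit energy, which is
# what the matched filter in the simulation below assumes.
rrc_demo = get_rrc_ir(K=65, n_up=8, t_symb=1.0, r=0.33)
rrc_demo = rrc_demo / np.linalg.norm(rrc_demo)
print('demo RRC energy:', np.sum(rrc_demo**2)) # should be ~1.0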
# number of symbols per sequence/packet
n_symb = 32
# modulation scheme and constellation points
M = 2
constellation = [ -1, 1 ]
# EbN0 range for simulation
EbN0_dB = np.arange( 0, 30, 2)
EbN0 = 10**(EbN0_dB/10)
# maximum number of errors and symbols to be simulated
max_errors = 1000
max_syms = 1e7
# parameters of the filter
r = 0.33
n_up = 8 # samples per symbol
syms_per_filt = 4 # symbols per filter (plus minus in both directions)
K_filt = 2 * syms_per_filt * n_up + 1 # length of the fir filter
# set symbol time
t_symb = 1.0
# define channel by characterizing delays and according attenuation
channel_delays_syms = range( 7 )
channel_factors = [ 1, .4, -.3, .1, .1, .05, .05 ]
assert( len(channel_delays_syms) == len(channel_factors) ), 'Length of delays and factors has to be the same!'
# get channel
h_channel = np.zeros( (np.max(channel_delays_syms)+1) * n_up )
for k in np.arange(len(channel_delays_syms)):
h_channel[ n_up*channel_delays_syms[k] ] = channel_factors[k]
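# Optional check (added for illustration): sampling h_channel at the symbol
# spacing recovers the delay/attenuation pairs defined above.
print('channel taps at symbol spacing:', h_channel[::n_up])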
# find rrc response and normalize to energy 1
rrc = get_rrc_ir( K_filt, n_up, t_symb, r)
rrc = rrc / np.linalg.norm(rrc)
# initialize BER
ber = np.zeros_like( EbN0, dtype=float )
# theoretical values
ber_bpsk = 1 - stats.norm.cdf( np.sqrt( 2 * EbN0 ) )
# loop for snrs
for ind_ebn0, val_ebn0 in enumerate( EbN0 ):
# get noise variance for simulation
sigma2 = 1 / (np.log2(M) * val_ebn0)
# initialize counter
num_errors = 0
num_syms = 0
# loop for errors
while (num_errors<max_errors and num_syms<max_syms):
# generate random binary vector and modulate
data = np.random.randint( 2, size=n_symb)
s = [ constellation[d] for d in data ]
# prepare sequence to be filtered by upsampling
s_up = np.zeros( n_symb * n_up, dtype=complex)
s_up[ : : n_up ] = s
# apply RRC filtering for Tx pulse shaping
s_Tx = np.convolve( s_up, rrc)
# apply channel and add noise
s_Rx = np.convolve( s_Tx, h_channel)
n = np.sqrt(sigma2/2) * ( np.random.randn(len(s_Rx)) + 1j* np.random.randn(len(s_Rx)) )
r_Rx = s_Rx + n
# apply MF at the Rx
y_mf_rrc = np.convolve(r_Rx, rrc)
# down-sampling from "high rate" (n_up samples per symbol) to symbol rate
y_down = y_mf_rrc[ K_filt-1 : K_filt-1 + n_symb*n_up : n_up ]
# demodulate
data_est = [ int( np.real( rec ) > 0 ) for rec in y_down ]
# count errors and symbols
num_errors += sum( [ int( data_est[ _n ] != data[ _n ] ) for _n in range(len(data)) ] )
num_syms += n_symb
# get estimate of BER
ber[ ind_ebn0 ] = num_errors/(num_syms*1.0*np.log2(M))
print('EbN0 simulated:', 10*np.log10( val_ebn0) )
print('\n\nError floor at: \t\t\t{:0.4f}'.format(ber[-1]))
print('For comparison: 2^-(length of h) = \t{:0.4f}'.format(2**(-len(channel_delays_syms))) )
# activate switches to plot whatever you like to see
show_signal = 1
show_ber = 1
show_symbols = 1
# plot signals
if show_signal:
plt.figure()
plt.plot( np.real(s_Tx), label='$s(t)$')
plt.plot( np.real(s_Rx), label='$s(t)* h(t)$')
plt.plot( np.real(r_Rx), label='$r(t)= s(t)*h(t)+n(t)$')
plt.plot( np.real(y_mf_rrc), label='$y(t)=r(t)*g^\star(-t)$')
plt.grid(True)
plt.xlabel('$t$ (a.u.)')
plt.autoscale(enable=True, tight=True)
plt.legend(loc='upper right')
#plt.savefig('waveform_isi.pdf',bbox_inches='tight')
# show data symbols after processing
if show_symbols:
plt.figure()
markerline, stemlines, baseline = plt.stem( np.arange(len(s)), np.real(s), label='syms Tx')
plt.setp(markerline, 'markersize', 8, 'markerfacecolor', 'b')
markerline, stemlines, baseline = plt.stem( np.arange(len(y_down)), np.real(y_down), '+', label='syms Rx')
plt.setp(markerline, 'markersize', 12, 'markerfacecolor', 'r',)
plt.legend(loc='upper right')
plt.grid(True)
plt.xlabel('$n$')
# show BER
if show_ber:
plt.figure()
plt.plot(EbN0_dB, ber_bpsk, label="BPSK, nur AWGN")
plt.plot(EbN0_dB, ber, 'o', ms=14, label="Multipath, sim.")
plt.yscale('log')
plt.grid(True)
plt.legend(loc='lower left')
plt.xlabel('$E_b/N_0$ (dB)')
plt.ylabel('BER')
plt.autoscale(enable=True, tight=True)
plt.ylim( (1e-6,1))
#plt.savefig('ber_isi.pdf',bbox_inches='tight')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dice Simulaiton
Step2: Check
Step4: Multiple Dice Roll
Step5: Check
Step7: Capture the outcome of multiple rolls
Step9: Plot Result
Step10: Aside
Step11: Slow?
Step12: Seems like a long time... Can we make it faster? Yes!
Step13: The result is a np.array object with is like a list, but better. The most notable difference is that we can to element-wise math operations on numpy arrays easily.
Step15: Let's use this knowledge to rewrite dice_rolls
Step17: Histogram and timeit
Step18: That is way faster!
Step20: The improvement in the core function call was two orders of magnitude, but when we timed it initially, we were also waiting for the plot to render which consumed the majority of the time.
Step21: Histogram
Step22: Expected Margin
|
<ASSISTANT_TASK:>
Python Code:
import random
def single_die():
Outcome of a single die roll
return random.randint(1,6)
for _ in range(50):
print(single_die(),end=' ')
def dice_roll(dice_count):
Outcome of a rolling dice_count dice
Args:
dice_count (int): number of dice to roll
Returns:
int: sum of face values of dice
out = 0
for _ in range(dice_count):
out += single_die()
return out
for _ in range(100):
print(dice_roll(2), end=' ')
def dice_rolls(dice_count, rolls_count):
Return list of many dice rolls
Args:
dice_count (int): number of dice to roll
rolls_count (int): number of rolls to do
Returns:
list: list of dice roll values.
out = []
for _ in range(rolls_count):
out.append(dice_roll(dice_count))
return out
print(dice_rolls(2,100))
import pylab as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10, 4)
def dice_histogram(dice_count, rolls_count, bins):
Plots outcome of many dice rolls
Args:
dice_count (int): number of dice to roll
rolls_count (int): number of rolls to do
bins (int): number of histogram bins
plt.hist(dice_rolls(dice_count, rolls_count),bins)
plt.show()
dice_histogram(2, 10000, 200)
dice_histogram(100, 10000, 200)
import time
start = time.time()
dice_histogram(100, 10000, 200)
print(time.time()-start, 'seconds')
import numpy as np
np.random.randint(1,7,(2,10))
np.sum(np.random.randint(1,7,(2,10)),axis=0)
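# Small aside (added for illustration, not in the original notebook): arithmetic
# on a NumPy array is element-wise, which is exactly what lets us sum many
# simulated dice at once without writing Python loops.
demo = np.array([1, 2, 3, 4])
print(demo * 2) # [2 4 6 8]
print(demo + demo) # [2 4 6 8]
print(demo > 2) # [False False True True]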
def dice_rolls_np(dice_count, rolls_count):
Return list of many dice rolls
Args:
dice_count (int): number of dice to roll
rolls_count (int): number of rolls to do
Returns:
np.array: list of dice roll values.
return np.sum(
np.random.randint(1,7,(dice_count,rolls_count)),
axis=0)
print(dice_rolls_np(2,100))
def dice_histogram_np(dice_count, rolls_count, bins):
Plots outcome of many dice rolls
Args:
dice_count (int): number of dice to roll
rolls_count (int): number of rolls to do
bins (int): number of histogram bins
plt.hist(dice_rolls_np(dice_count, rolls_count),bins)
plt.show()
start = time.time()
dice_histogram_np(100, 10000, 200)
print(time.time()-start, 'seconds')
%timeit dice_rolls_np(100, 1000)
%timeit dice_rolls(100, 1000)
def risk_battle():
Risk battle simulation
# get array of three dice values
attacking_dice = np.random.randint(1,7,3)
# get array of two dice values
defending_dice = np.random.randint(1,7,2)
# sort both sets and take top two values
attacking_dice_sorted = np.sort(attacking_dice)[::-1]
defending_dice_sorted = np.sort(defending_dice)[::-1]
# are the attacking values greater?
attack_wins = attacking_dice_sorted[:2] > defending_dice_sorted[:2]
# convert boolean values to -1, +1
attack_wins_pm = attack_wins*2 - 1
# sum up these outcomes
return np.sum(attack_wins_pm)
for _ in range(50):
print(risk_battle(), end=' ')
outcomes = [risk_battle() for _ in range(10000)]
plt.hist(outcomes)
plt.show()
np.mean([risk_battle() for _ in range(10000)])
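# Extra summary (added for illustration, not in the original): besides the mean
# margin, we can estimate how often the attacker comes out ahead in one battle.
outcomes_arr = np.array([risk_battle() for _ in range(10000)])
print('attacker ahead: ', np.mean(outcomes_arr > 0))
print('tie (1-1 split): ', np.mean(outcomes_arr == 0))
print('defender ahead: ', np.mean(outcomes_arr < 0))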
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Code from previous chapters
Step5: In the previous chapters I presented an SIR model of infectious disease, specifically the Kermack-McKendrick model. We extended the model to include vaccination and the effect of a hand-washing campaign, and used the extended model to allocate a limited budget optimally, that is, to minimize the number of infections.
Step6: Then run the simulation for each value and print the results.
Step7: We can wrap that code in a function and store the results in a
Step8: Now we can run sweep_beta like this
Step9: And plot the results
Step10: The first line uses string operations to assemble a label for the
Step11: Remember that this figure
Step12: And run sweep_beta for each value of gamma
Step13: The following figure shows the results. When gamma is low, the
Step14: sweep_parameters takes as parameters an array of values for beta and
Step15: With the results in a SweepFrame, we can plot each column like this
Step16: Alternatively, we can plot each row like this
Step17: This example demonstrates one use of a SweepFrame
Step18: Infection rates are lowest in the lower right, where the contact rate is and the recovery rate is high. They increase as we move to the upper left, where the contact rate is high and the recovery rate is low.
|
<ASSISTANT_TASK:>
Python Code:
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
from modsim import State, System
def make_system(beta, gamma):
Make a system object for the SIR model.
beta: contact rate in days
gamma: recovery rate in days
returns: System object
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
def update_func(state, t, system):
Update the SIR model.
state: State with variables S, I, R
t: time step
system: System with beta and gamma
returns: State object
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
from numpy import arange
from modsim import TimeFrame
def run_simulation(system, update_func):
Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
frame = TimeFrame(columns=system.init.index)
frame.loc[system.t0] = system.init
for t in arange(system.t0, system.t_end):
frame.loc[t+1] = update_func(frame.loc[t], t, system)
return frame
def calc_total_infected(results, system):
s_0 = results.S[system.t0]
s_end = results.S[system.t_end]
return s_0 - s_end
from numpy import linspace
beta_array = linspace(0.1, 1.1, 11)
gamma = 0.25
for beta in beta_array:
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
print(beta, calc_total_infected(results, system))
def sweep_beta(beta_array, gamma):
sweep = SweepSeries()
for beta in beta_array:
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
sweep[beta] = calc_total_infected(results, system)
return sweep
infected_sweep = sweep_beta(beta_array, gamma)
label = f'gamma = {gamma}'
label
infected_sweep.plot(label=label)
decorate(xlabel='Contact rate (beta)',
ylabel='Fraction infected')
gamma_array = linspace(0.1, 0.7, 4)
for gamma in gamma_array:
infected_sweep = sweep_beta(beta_array, gamma)
label = 'gamma = ' + str(gamma)
plot(infected_sweep, label=label)
decorate()
def sweep_parameters(beta_array, gamma_array):
frame = SweepFrame(columns=gamma_array)
for gamma in gamma_array:
frame[gamma] = sweep_beta(beta_array, gamma)
return frame
frame = sweep_parameters(beta_array, gamma_array)
for gamma in gamma_array:
label = f'gamma = {gamma}'
plot(frame[gamma], label=label)
decorate(xlabel='Contact rate (beta)',
ylabel='Fraction infected')
plt.legend(bbox_to_anchor=(1.02, 1.02))
plt.tight_layout()
for beta in beta_array:
label = f'beta = {beta}'
plot(frame.loc[beta], label=label)
decorate(xlabel='Recovery rate (gamma)',
ylabel='Fraction infected')
plt.legend(bbox_to_anchor=(1.02, 1.02))
plt.tight_layout()
from modsim import contour
contour(frame)
decorate(xlabel='Recovery rate (gamma)',
ylabel='Contact rate (beta)',
title='Fraction infected, contour plot')
# Solution
# Sweep beta with fixed gamma
gamma = 1/2
infected_sweep = sweep_beta(beta_array, gamma)
# Solution
# Interpolating by eye, we can see that the infection rate passes through 0.4
# when beta is between 0.6 and 0.7
# We can use the `crossings` function to interpolate more precisely
# (although we don't know about it yet :)
beta_estimate = crossings(infected_sweep, 0.4)
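# Alternative sketch (not part of the original notebook): the same estimate can
# be made without `crossings` by linear interpolation, treating beta as a
# function of the infected fraction (this assumes the sweep increases
# monotonically with beta, which it does for this model).
from numpy import interp
beta_by_hand = interp(0.4, infected_sweep.values, infected_sweep.index)
beta_by_hand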
# Solution
# Time between contacts is 1/beta
time_between_contacts = 1/beta_estimate
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.pipeline import Pipeline
import pandas as pd
data = load_data()
pipe = Pipeline([
("tf_idf", TfidfVectorizer()),
("nmf", NMF())
])
pipe.fit_transform(data.test)
tf_idf_out = pipe.named_steps['tf_idf'].transform(data.test)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's load our Iris data set
Step2: Perform k-means analysis on iris data
Step3: Let's initialize Bokeh
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.cluster import KMeans
import numpy as np
import bokeh.plotting
from bokeh.plotting import figure
from sklearn import datasets
# the iris dataset is 150 samples, each with four features
# we only want petal length and petal width
iris = datasets.load_iris()
# get only petal features, which are the third and fourth values in each sample
petal_data = iris.data[:,2:]
# there are only 3 iris flower groups: 'setosa', 'versicolor', 'virginica'
kmean = KMeans(n_clusters=3) # n_clusters asks for only 3 groupings
kmean.fit(petal_data)
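# Quick inspection (illustrative, not in the original notebook): the fitted
# model exposes the group means in cluster_centers_ and a cluster assignment
# for every sample in labels_.
print(kmean.cluster_centers_)
print(kmean.labels_[:20])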
bokeh.plotting.output_notebook()
# initialize our bokeh plot
plot = figure(width=500, height=500, title='Iris Petals', x_axis_label = "Petal Length", y_axis_label = "Petal Width")
# plot centroid / cluster center / group mean for each group
clus_xs = []
clus_ys = []
#we get the cluster x / y values from the k-means algorithm
for entry in kmean.cluster_centers_:
clus_xs.append(entry[0])
clus_ys.append(entry[1])
# the cluster center is marked by a circle, with a cross in it
plot.circle_cross(x=clus_xs, y=clus_ys, size=40, fill_alpha=0, line_width=2, color=['red', 'blue', 'purple'])
plot.text(text = ['setosa', 'versicolor', 'virginica'], x=clus_xs, y=clus_ys, text_font_size='30pt')
# begin plotting each petal length / width
# We get our x / y values from the original plot data.
# The k-means algorithm tells us which 'color' each plot point is
# and therefore which group it is a member of.
i = 0
for sample in petal_data:
# "labels_" tells us which cluster each plot point is a member of
if kmean.labels_[i] == 0:
plot.circle(x=sample[0], y=sample[1], size=15, color="red")
if kmean.labels_[i] == 1:
plot.circle(x=sample[0], y=sample[1], size=15, color="blue")
if kmean.labels_[i] == 2:
plot.circle(x=sample[0], y=sample[1], size=15, color="purple")
i += 1
bokeh.plotting.show(plot)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simulated reference sequence mapping
Step2: Do ipyrad simulated reference mapping
Step3: Do ipyrad denovo+reference
Step4: Do ipyrad denovo-reference
Step5: Stacks simulated reference sequence assembly
Step6: dDocent simulated reference assembly
Step7: Empirical reference sequence mapping
Step8: Fetch the bottlenose dolphin genome
Step9: Trim reads w/ cutadapt
Step10: Import reads into ipyrad and do QC
Step11: Do ipyrad refmap empirical
Steps 1 & 2 run in ~15 minutes.
Step12: Do ipyrad denovo+reference
Step13: Do Stacks refmap empirical
Step14: Map ipyrad trimmed reads to the reference sequence
Step15: Stacks prereqs - install ngsutils
Step16: Now do the filtering
Step17: Now run stacks ref_map pipeline
Step18: Do dDocent refmap empirical
Step20: Housekeeping
Step22: Create a population map
|
<ASSISTANT_TASK:>
Python Code:
import subprocess
import ipyrad as ip
import shutil
import glob
import sys
import os
## Set the default directories for exec and data.
WORK_DIR="/home/iovercast/manuscript-analysis/"
REFMAP_EMPIRICAL_DIR=os.path.join(WORK_DIR, "Phocoena_empirical/")
REFMAP_FASTQS=os.path.join(REFMAP_EMPIRICAL_DIR, "Final_Files_forDryad/Bbif_ddRADseq/fastq/")
IPYRAD_DIR=os.path.join(WORK_DIR, "ipyrad/")
STACKS_DIR=os.path.join(WORK_DIR, "stacks/")
for dir in [WORK_DIR, REFMAP_EMPIRICAL_DIR, IPYRAD_DIR, STACKS_DIR]:
if not os.path.exists(dir):
os.makedirs(dir)
REFMAP_SIM_DIR = os.path.join(WORK_DIR, "REFMAP_SIM/")
REFMAP_DAT_DIR = os.path.join(REFMAP_SIM_DIR, "ipsimdata/")
IPYRAD_SIM_DIR = os.path.join(REFMAP_SIM_DIR, "ipyrad/")
STACKS_SIM_DIR = os.path.join(REFMAP_SIM_DIR, "stacks/")
DDOCENT_SIM_DIR = os.path.join(REFMAP_SIM_DIR, "ddocent/")
## REFMAP_DAT_DIR will be created when we untar ipsimdata.tar.gz
for d in [REFMAP_SIM_DIR, IPYRAD_SIM_DIR, STACKS_SIM_DIR, DDOCENT_SIM_DIR]:
if not os.path.exists(d):
os.makedirs(d)
os.chdir(REFMAP_SIM_DIR)
!wget https://github.com/dereneaton/ipyrad/raw/master/tests/ipsimdata.tar.gz
!tar -xzf ipsimdata.tar.gz
os.chdir(IPYRAD_SIM_DIR)
## Make a new assembly and set some assembly parameters
data = ip.Assembly("refmap-sim")
data.set_params("raw_fastq_path", REFMAP_DAT_DIR + "pairddrad_wmerge_example_R*_.fastq.gz")
data.set_params("barcodes_path", REFMAP_DAT_DIR + "pairddrad_wmerge_example_barcodes.txt")
data.set_params("project_dir", "reference-assembly")
data.set_params("assembly_method", "reference")
data.set_params("reference_sequence", REFMAP_DAT_DIR + "pairddrad_wmerge_example_genome.fa")
data.set_params("datatype", "pairddrad")
data.set_params("restriction_overhang", ("TGCAG", "CGG"))
data.write_params(force=True)
cmd = "ipyrad -p params-refmap-sim.txt -s 1234567 -c 40".format(dir)
print(cmd)
!time $cmd
data2 = data.branch("denovo_plus_reference-sim")
data2.set_params("assembly_method", "denovo+reference")
data2.write_params(force=True)
cmd = "ipyrad -p params-denovo_plus_reference-sim.txt -s 1234567 -c 40".format(dir)
print(cmd)
!time $cmd
data2 = data.branch("denovo_minus_reference-sim")
data2.set_params("assembly_method", "denovo-reference")
data2.write_params(force=True)
cmd = "ipyrad -p params-denovo_minus_reference-sim.txt -s 1234567 -c 40".format(dir)
print(cmd)
!time $cmd
IPYRAD_SIMEDITS_DIR = IPYRAD_SIM_DIR + "reference-assembly/refmap-sim_edits/"
REF_SEQ = REFMAP_DAT_DIR + "pairddrad_wmerge_example_genome.fa"
## Sim sample names
pop1 = ["1A_0", "1B_0", "1C_0", "1D_0"]
pop2 = ["2E_0", "2F_0", "2G_0", "2H_0"]
pop3 = ["3I_0", "3J_0", "3K_0", "3L_0"]
sim_sample_names = pop1 + pop2 + pop3
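## Note (hedged, not part of the original workflow): bwa mem below assumes the reference
## sequence has already been indexed. If it has not, an index could be built first, e.g.:
# cmd = "export PATH=~/manuscript-analysis/dDocent:$PATH; bwa index " + REF_SEQ
# !$cmd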
for samp in sim_sample_names:
R1 = IPYRAD_SIMEDITS_DIR + samp + ".trimmed_R1_.fastq.gz"
R2 = IPYRAD_SIMEDITS_DIR + samp + ".trimmed_R2_.fastq.gz"
samout = STACKS_SIM_DIR + samp + ".sam"
bamout = STACKS_SIM_DIR + samp + ".bam"
export_cmd = "export PATH=~/manuscript-analysis/dDocent:$PATH"
bwa_cmd = "bwa mem -t 40 -v 1 " + REF_SEQ\
+ " " + R1\
+ " " + R2\
+ " > " + samout
samtools_cmd = "samtools view -b -F 0x804 " + samout\
+ " | samtools sort -T /tmp/{}.sam -O bam -o {}".format(samp, bamout)
cleanup_cmd = "rm {}".format(samout)
cmd = ";".join([export_cmd, bwa_cmd, samtools_cmd, cleanup_cmd])
!$cmd
## This is how we'd do it since we weren't using a popmap file
infiles = ["-s "+ff+" " for ff in glob.glob(STACKS_SIM_DIR + "*.bam")]
## Toggle the dryrun flag for testing
DRYRUN=""
DRYRUN="-d"
## Options
## -T The number of threads to use
## -O The popmap file specifying individuals and populations
## -S Disable database business
## -o Output directory. Just write to the empirical stacks directory
## -X Tell populations to create the output formats specified
## -X and use `-m 6` which sets min depth per locus
OUTPUT_FORMATS = "--vcf --genepop --structure --phylip "
cmd = "ref_map.pl -T 40 -b 1 -S " + DRYRUN\
+ " -X \'populations:" + OUTPUT_FORMATS + "\'"\
+ " -X \'populations:-m 6\'"\
+ " -o " + STACKS_SIM_DIR + " "\
+ " ".join(infiles)
print("\nCommand to run - {}".format(cmd))
%%bash -s "$WORK_DIR" "$STACKS_SIM_DIR" "$cmd"
export PATH="$1/miniconda/bin:$PATH"; export "STACKS_SIM_DIR=$2"; export "cmd=$3"
## We have to play a little cat and mouse game here because of quoting in some of the args
## and how weird bash is we have to write the cmd to a file and then exec it.
## If you try to just run $cmd it truncates the command at the first single tic. Hassle.
cd $STACKS_SIM_DIR
echo $cmd > stacks.sh; chmod 777 stacks.sh
time ./stacks.sh
IPYRAD_SIMEDITS_DIR = IPYRAD_SIM_DIR + "reference-assembly/refmap-sim_edits/"
REF_SEQ = REFMAP_DAT_DIR + "pairddrad_wmerge_example_genome.fa"
DDOCENT_DIR = "/home/iovercast/manuscript-analysis/dDocent/"
os.chdir(DDOCENT_SIM_DIR)
## Create a simlink to the reference sequence in the current directory
cmd = "ln -sf {} reference.fasta".format(REF_SEQ)
!$cmd
## Sim sample names
pop1 = ["1A_0", "1B_0", "1C_0", "1D_0"]
pop2 = ["2E_0", "2F_0", "2G_0", "2H_0"]
pop3 = ["3I_0", "3J_0", "3K_0", "3L_0"]
sim_sample_names = pop1 + pop2 + pop3
sim_mapping_dict = {}
for pop_num, samps in enumerate([pop1, pop2, pop3]):
for samp_num, samp_name in enumerate(samps):
sim_mapping_dict[samp_name] = "Pop{}_{:03d}".format(pop_num+1, samp_num+1)
## Now we have to rename all the files in the way dDocent expects them:
## 1A_0_R1_.fastq.gz -> Pop1_001.F.fq.gz
for k, v in sim_mapping_dict.items():
## Symlink R1 and R2
for i in ["1", "2"]:
source = os.path.join(IPYRAD_SIMEDITS_DIR, k + ".trimmed_R{}_.fastq.gz".format(i))
## This is the way the current documentation says to name imported trimmed
## files, but it doesn't work.
## dest = os.path.join(DDOCENT_SIM_DIR, v + ".R{}.fq.gz".format(i))
if i == "1":
dest = os.path.join(DDOCENT_SIM_DIR, v + ".F.fq.gz".format(i))
else:
dest = os.path.join(DDOCENT_SIM_DIR, v + ".R.fq.gz".format(i))
cmd = "ln -sf {} {}".format(source, dest)
!$cmd
config_file = "{}/sim-config.txt".format(DDOCENT_SIM_DIR)
with open(config_file, 'w') as outfile:
outfile.write('Number of Processors\n40\nMaximum Memory\n0\nTrimming\nno\nAssembly?\nno\nType_of_Assembly\nPE\nClustering_Similarity%\n0.85\nMapping_Reads?\nyes\nMapping_Match_Value\n1\nMapping_MisMatch_Value\n3\nMapping_GapOpen_Penalty\n5\nCalling_SNPs?\nyes\nEmail\nwatdo@mailinator.com\n')
cmd = "export LD_LIBRARY_PATH={}/freebayes-src/vcflib/tabixpp/htslib/:$LD_LIBRARY_PATH; ".format(DDOCENT_DIR)
cmd += "export PATH={}:$PATH; time dDocent {}".format(DDOCENT_DIR, config_file)
print(cmd)
with open("ddocent.sh", 'w') as outfile:
outfile.write("#!/bin/bash\n")
outfile.write(cmd)
!chmod 777 ddocent.sh
## You have to post-process the vcf files to decompose complex genotypes and remove indels
os.chdir(DDOCENT_SIM_DIR)
exports = "export LD_LIBRARY_PATH=/home/iovercast/manuscript-analysis/dDocent//freebayes-src/vcflib/tabixpp/htslib/:$LD_LIBRARY_PATH; export PATH=/home/iovercast/manuscript-analysis/dDocent/:$PATH"
fullvcf = os.path.join(DDOCENT_SIM_DIR, "TotalRawSNPs.vcf")
filtvcf = os.path.join(DDOCENT_SIM_DIR, "Final.recode.vcf")
for f in [fullvcf, filtvcf]:
print("Finalizing - {}".format(f))
## Rename the samples to make them agree with the ipyrad/stacks names so
## the results analysis will work.
vcffile = os.path.join(DDOCENT_SIM_DIR, f)
infile = open(vcffile,'r')
filedata = infile.readlines()
infile.close()
outfile = open(vcffile,'w')
for line in filedata:
if "CHROM" in line:
for ipname, ddname in sim_mapping_dict.items():
line = line.replace(ddname, ipname)
outfile.write(line)
outfile.close()
## Naming the new outfiles as <curname>.snps.vcf
## Decompose complex genotypes and remove indels
outfile = os.path.join(DDOCENT_SIM_DIR, f.split("/")[-1].split(".vcf")[0] + ".snps.vcf")
cmd = "{}; vcfallelicprimitives {} > ddoc-tmp.vcf".format(exports, f)
print(cmd)
!$cmd
cmd = "{}; vcftools --vcf ddoc-tmp.vcf --remove-indels --recode --recode-INFO-all --out {}".format(exports, outfile)
print(cmd)
!$cmd
!rm ddoc-tmp.vcf
os.chdir(REFMAP_EMPIRICAL_DIR)
!mkdir raws
!cd raws
## Grab the sra-toolkit pre-built binaries to download from SRA
## This works, but commented for now so it doesn't keep redownloading
!wget http://ftp-trace.ncbi.nlm.nih.gov/sra/sdk/2.8.0/sratoolkit.2.8.0-ubuntu64.tar.gz
!tar -xvzf sratoolkit*
FQ_DUMP = os.path.join(REFMAP_EMPIRICAL_DIR, "sratoolkit.2.8.0-ubuntu64/bin/fastq-dump")
res = subprocess.check_output(FQ_DUMP + " -version", shell=True)
## The SRR numbers for the samples from this bioproject range from SRR4291662 to SRR4291705
## so go fetch them one by one
for samp in range(662, 706):
print("Doing {}\t".format(samp)),
res = subprocess.check_output(FQ_DUMP + " --split-files SRR4291" + str(samp), shell=True)
## The SRA download files have wonky names, like SRR1234_R1.fastq.gz, but ipyrad expects SRR1234_R1_.fastq.gz,
## so we have to fix the filenames. Filename hax...
import glob
for f in glob.glob(REFMAP_EMPIRICAL_DIR + "raws/*.fastq.gz"):
splits = f.split("/")[-1].split("_")
newf = REFMAP_EMPIRICAL_DIR + "raws/" + splits[0] + "_R" + splits[1].split(".")[0] + "_.fastq.gz"
os.rename(f, newf)
os.chdir(REFMAP_EMPIRICAL_DIR)
!mkdir TurtrunRef
os.chdir("TurtrunRef")  # '!cd' runs in a subshell and does not persist, so change directory in Python
!wget ftp://ftp.ensembl.org/pub/release-87/fasta/tursiops_truncatus/dna/Tursiops_truncatus.turTru1.dna_rm.toplevel.fa.gz
## Ensembl distributes gzip'd reference sequence files, but samtools really wants it to be bgzipped or uncompressed
!gunzip Tursiops_truncatus.turTru1.dna_rm.toplevel.fa.gz
%%bash -s "$REFMAP_EMPIRICAL_DIR"
cd $1
mkdir trimmed
for i in `ls raws`; do echo $i; cutadapt -l 85 raws/$i | gzip > trimmed/$i; done
## Housekeeping
rm -rf raws
mv trimmed raws
IPYRAD_REFMAP_DIR = os.path.join(REFMAP_EMPIRICAL_DIR, "ipyrad/")
if not os.path.exists(IPYRAD_REFMAP_DIR):
os.makedirs(IPYRAD_REFMAP_DIR)
os.chdir(IPYRAD_REFMAP_DIR)
## Make a new assembly and set some assembly parameters
data = ip.Assembly("refmap-empirical")
data.set_params("sorted_fastq_path", REFMAP_EMPIRICAL_DIR + "raws/*.fastq.gz")
data.set_params("project_dir", "reference-assembly")
data.set_params("assembly_method", "reference")
data.set_params("reference_sequence", REFMAP_EMPIRICAL_DIR + "TurtrunRef/Tursiops_truncatus.turTru1.dna_rm.toplevel.fa")
data.set_params("datatype", "pairddrad")
data.set_params("restriction_overhang", ("TGCAG", "CGG"))
data.set_params('max_low_qual_bases', 5)
data.set_params('filter_adapters', 2)
data.write_params(force=True)
cmd = "ipyrad -p params-refmap-empirical.txt -s 1 --force".format(dir)
print(cmd)
!time $cmd
cmd = "ipyrad -p params-refmap-empirical.txt -s 2 --force".format(dir)
print(cmd)
!time $cmd
## Oops. If you run some other cell while this is running it steals stdout, so you lose track of progress.
cmd = "ipyrad -p params-refmap-empirical.txt -s 34567".format(dir)
print(cmd)
!time $cmd
data2 = data.branch("denovo_ref-empirical")
data2.set_params("assembly_method", "denovo+reference")
data2.write_params(force=True)
cmd = "ipyrad -p params-denovo_ref-empirical.txt -s 34567 -c 40".format(dir)
print(cmd)
!time $cmd
## Set directories and make the popmap file
STACKS_REFMAP_DIR = os.path.join(REFMAP_EMPIRICAL_DIR, "stacks/")
if not os.path.exists(STACKS_REFMAP_DIR):
os.makedirs(STACKS_REFMAP_DIR)
os.chdir(STACKS_REFMAP_DIR)
make_stacks_popmap(STACKS_REFMAP_DIR)
IPYRAD_EDITS_DIR = os.path.join(IPYRAD_REFMAP_DIR, "reference-assembly/refmap-empirical_edits/")
REF_SEQ = REFMAP_EMPIRICAL_DIR + "TurtrunRef/Tursiops_truncatus.turTru1.dna_rm.toplevel.fa"
## Just get the sample names from the trimmed files in the ipyrad edits directory
sample_names = glob.glob(IPYRAD_EDITS_DIR + "*.trimmed_R1_.fastq.gz")
sample_names = [x.split(".")[0].split("/")[-1] for x in sample_names]
for samp in sample_names:
R1 = IPYRAD_EDITS_DIR + samp + ".trimmed_R1_.fastq.gz"
R2 = IPYRAD_EDITS_DIR + samp + ".trimmed_R2_.fastq.gz"
samout = STACKS_REFMAP_DIR + samp + ".sam"
bamout = STACKS_REFMAP_DIR + samp + ".bam"
export_cmd = "export PATH=~/manuscript-analysis/dDocent:$PATH"
bwa_cmd = "bwa mem -t 40 -v 0 " + REF_SEQ\
+ " " + R1\
+ " " + R2\
+ " > " + samout
samtools_cmd = "samtools view -b -F 0x804 " + samout\
+ " | samtools sort -T /tmp/{}.sam -O bam -o {}".format(samp, bamout)
cleanup_cmd = "rm {}".format(samout)
cmd = "; ".join([export_cmd, bwa_cmd, samtools_cmd, cleanup_cmd])
!time $cmd
%%bash -s "$REFMAP_EMPIRICAL_DIR"
cd $1/stacks
git clone https://github.com/ngsutils/ngsutils.git
cd ngsutils
make
infiles = glob.glob(STACKS_REFMAP_DIR + "SRR*.bam")
for f in infiles:
outfile = f + ".tmp"
print(f, outfile)
subprocess.call("bamutils removeclipping {} {}".format(f, outfile), shell=True)
subprocess.call("rm {}".format(f), shell=True)
subprocess.call("mv {} {}".format(outfile, f), shell=True)
## This is how we'd do it if we weren't using a popmap file
#infiles = ["-s "+ff+" " for ff in glob.glob(IPYRAD_REFMAP_DIR+"*-mapped-sorted.bam")]
## Toggle the dryrun flag for testing
DRYRUN=""
DRYRUN="-d"
## Options
## -T The number of threads to use
## -O The popmap file specifying individuals and populations
## -S Disable database business
## -o Output directory. Just write to the empirical stacks directory
## -X The first -X tells populations to create the output formats specified
## -X The second one passes `-m 6` which sets min depth per locus
OUTPUT_FORMATS = "--vcf --genepop --structure --phylip "
cmd = "ref_map.pl -T 40 -b 1 -S " + DRYRUN\
+ " -O {}/popmap.txt".format(STACKS_REFMAP_DIR)\
+ " --samples {}".format(STACKS_REFMAP_DIR)\
+ " -X \'populations:" + OUTPUT_FORMATS + "\'"\
+ " -X \'populations:-m 6\'"\
+ " -o " + STACKS_REFMAP_DIR
print("\nCommand to run - {}".format(cmd))
%%bash -s "$WORK_DIR" "$STACKS_REFMAP_DIR" "$cmd"
export PATH="$1/miniconda/bin:$PATH"; export "STACKS_REFMAP_DIR=$2"; export "cmd=$3"
## We have to play a little cat and mouse game here because of quoting in some of the args
## and how weird bash is we have to write the cmd to a file and then exec it.
## If you try to just run $cmd it truncates the command at the first single tic. Hassle.
cd $STACKS_REFMAP_DIR
echo $cmd > stacks.sh; chmod 777 stacks.sh
time ./stacks.sh
## A housekeeping function for getting a dictionary to map SRR* filenames in the ipyrad edits directory
## to ddocent style.
##
## Gotcha: Nice 1-based indexing for the dDocent format.
##
## For raw reads the format (for R1) is pop1_sample1.F.fq.gz format a la:
## 1A_0_R1_.fastq.gz -> Pop1_Sample1.F.fq.gz
##
## For trimmed reads the format is pop1_001.R1.fq.gz a la:
## 1A_0_R1_.fastq.gz -> Pop1_001.R1.fq.gz
## So annoying because we have to translate across a bunch of different mappings. ugh.
def get_ddocent_filename_mapping():
mapping_dict = {}
## Maps sample name to population
pop_dict = get_popdict()
pops = set(pop_dict.values())
## For each population go through and add items to the dict per sample
## So we have to map the sample name to the SRR and then make an entry
## mapping SRR file name to ddocent format
for i, pop in enumerate(pops):
## Get a list of all the samples in this population. This is probably a stupid way but it works.
samps = [item[0] for item in pop_dict.items() if item[1] == pop]
for j, samp in enumerate(samps):
mapping_dict[samp] = "Pop{}_{:03d}".format(i+1, j+1)
## For the untrimmed format, if you want dDocent to do the trimming
## mapping_dict[samp] = "Pop{}_Sample{}".format(i, j)
return mapping_dict
print(get_ddocent_filename_mapping())
## Set up directory structures. change the force flag if you want to
## blow everything away and restart
# force = True
force = ""
DDOCENT_DIR = "/home/iovercast/manuscript-analysis/dDocent/"
DDOCENT_REFMAP_DIR = os.path.join(REFMAP_EMPIRICAL_DIR, "ddocent/")
if force and os.path.exists(DDOCENT_REFMAP_DIR):
shutil.rmtree(DDOCENT_REFMAP_DIR)
if not os.path.exists(DDOCENT_REFMAP_DIR):
os.makedirs(DDOCENT_REFMAP_DIR)
os.chdir(DDOCENT_REFMAP_DIR)
## Create a simlink to the reference sequence in the current directory
REF_SEQ = REFMAP_EMPIRICAL_DIR + "TurtrunRef/Tursiops_truncatus.turTru1.dna_rm.toplevel.fa"
cmd = "ln -s {} reference.fasta".format(REF_SEQ)
!$cmd
## Now we have to rename all the files in the way dDocent expects them:
## 1A_0_R1_.fastq.gz -> Pop1_Sample1.F.fq.gz
## Make symlinks to the trimmed data files in the ipyrad directory. It _should_ work.
## Trimmed reads in the ipyrad directory are of the format: SRR4291681.trimmed_R1_.fastq.gz
IPYRAD_EDITS_DIR = os.path.join(IPYRAD_REFMAP_DIR, "reference-assembly/refmap-empirical_edits/")
name_mapping = get_ddocent_filename_mapping()
for k,v in name_mapping.items():
## Symlink R1 and R2
for i in ["1", "2"]:
source = os.path.join(IPYRAD_EDITS_DIR, k + ".trimmed_R{}_.fastq.gz".format(i))
##dest = os.path.join(DDOCENT_REFMAP_DIR, v + ".R{}.fq.gz".format(i))
if i == "1":
dest = os.path.join(DDOCENT_REFMAP_DIR, v + ".R1.fq.gz".format(i))
else:
dest = os.path.join(DDOCENT_REFMAP_DIR, v + ".R2.fq.gz".format(i))
cmd = "ln -sf {} {}".format(source, dest)
!$cmd
## Write out the config file for this run.
## Compacted the config file into one long line here to make it not take up so much room
## Trimming = no because we trimmed in ipyrad
## Assembly = no because we are providing a reference sequence
## Type of Assembly = PE for paired-end
config_file = "{}/empirical-config.txt".format(DDOCENT_REFMAP_DIR)
with open(config_file, 'w') as outfile:
outfile.write('Number of Processors\n40\nMaximum Memory\n0\nTrimming\nno\nAssembly?\nno\nType_of_Assembly\nPE\nClustering_Similarity%\n0.85\nMapping_Reads?\nyes\nMapping_Match_Value\n1\nMapping_MisMatch_Value\n3\nMapping_GapOpen_Penalty\n5\nCalling_SNPs?\nyes\nEmail\nwatdo@mailinator.com\n')
cmd = "export LD_LIBRARY_PATH={}/freebayes-src/vcflib/tabixpp/htslib/:$LD_LIBRARY_PATH; ".format(DDOCENT_DIR)
cmd += "export PATH={}:$PATH; time dDocent {}".format(DDOCENT_DIR, config_file)
print(cmd)
with open("ddocent.sh", 'w') as outfile:
outfile.write("#!/bin/bash\n")
outfile.write(cmd)
!chmod 777 ddocent.sh
## Have to run the printed command by hand from the ddocent REALDATA dir bcz it doesn't like running in the notebook
#!$cmd
## NB: Must rename all the samples in the output vcf and then use vcf-shuffle-cols
## perl script in the vcf/perl directory to reorder the vcf file to match
## the output of stacks and ipyrad for pca/heatmaps to work.
## You have to post-process the vcf files to decompose complex genotypes and remove indels
os.chdir(DDOCENT_REFMAP_DIR)
exports = "export LD_LIBRARY_PATH=/home/iovercast/manuscript-analysis/dDocent//freebayes-src/vcflib/tabixpp/htslib/:$LD_LIBRARY_PATH; export PATH=/home/iovercast/manuscript-analysis/dDocent/:$PATH"
fullvcf = os.path.join(DDOCENT_REFMAP_DIR, "TotalRawSNPs.vcf")
filtvcf = os.path.join(DDOCENT_REFMAP_DIR, "Final.recode.vcf")
for f in [fullvcf, filtvcf]:
print("Finalizing - {}".format(f))
## Rename the samples to make them agree with the ipyrad/stacks names so
## the results analysis will work.
vcffile = f
infile = open(vcffile,'r')
filedata = infile.readlines()
infile.close()
outfile = open(vcffile,'w')
for line in filedata:
if "CHROM" in line:
for ipname, ddname in name_mapping.items():
line = line.replace(ddname, ipname)
outfile.write(line)
outfile.close()
## Rename columns to match ipyrad and then resort columns to be in same order
IPYRAD_VCF = os.path.join(IPYRAD_REFMAP_DIR, "refmap-empirical_outfiles/refmap-empirical.vcf")
os.chdir(os.path.join(DDOCENT_DIR, "vcftools_0.1.11/perl"))
tmpvcf = os.path.join(DDOCENT_REFMAP_DIR, "ddocent-tmp.vcf")
cmd = "perl vcf-shuffle-cols -t {} {} > {}".format(IPYRAD_VCF, vcffile, tmpvcf)
print(cmd)
#!$cmd
os.chdir(DDOCENT_REFMAP_DIR)
## Naming the new outfiles as <curname>.snps.vcf
## Decompose complex genotypes and remove indels
outfile = os.path.join(DDOCENT_REFMAP_DIR, f.split("/")[-1].split(".vcf")[0] + ".snps.vcf")
cmd = "{}; vcfallelicprimitives {} > ddoc-tmp.vcf".format(exports, f)
print(cmd)
!$cmd
cmd = "{}; vcftools --vcf ddoc-tmp.vcf --remove-indels --recode --recode-INFO-all --out {}".format(exports, outfile)
print(cmd)
!$cmd
!rm ddoc-tmp.vcf
def get_sampsdict():
info_header = "BioSample_s Experiment_s Library_Name_s MBases_l MBytes_l Run_s SRA_Sample_s Sample_Name_s dev_stage_s ecotype_s lat_lon_s sex_s tissue_s Assay_Type_s AssemblyName_s BioProject_s BioSampleModel_s Center_Name_s Consent_s InsertSize_l LibraryLayout_s LibrarySelection_s LibrarySource_s LoadDate_s Organism_s Platform_s ReleaseDate_s SRA_Study_s g1k_analysis_group_s g1k_pop_code_s source_s"
    info = """SAMN05806468	SRX2187156	Pp01	595	395	SRR4291662	SRS1709994	Pp01	<not provided>	relicta	44.09 N 29.81 E	female	muscle	OTHER	<not provided>	PRJNA343959	Model organism or animal	<not provided>	public	0	PAIRED	Restriction Digest	GENOMIC	2016-09-22	Phocoena phocoena	ILLUMINA	2016-09-27	SRP090334	<not provided>	<not provided>	<not provided>
SAMN05806469 SRX2187157 Pp02 478 318 SRR4291663 SRS1709995 Pp02 <not provided> relicta 41.42 N 28.92 E female muscle OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806478 SRX2187158 Pp11 242 162 SRR4291664 SRS1709996 Pp11 adult phocoena 54.96 N 8.32 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806479 SRX2187159 Pp12 261 174 SRR4291665 SRS1709997 Pp12 adult phocoena 54.95 N 8.32 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806480 SRX2187160 Pp13 595 397 SRR4291666 SRS1709998 Pp13 juvenile phocoena 54.16 N 8.82 E male skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806481 SRX2187161 Pp14 769 511 SRR4291667 SRS1709999 Pp14 <not provided> phocoena 57.00 N 11.00 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806482 SRX2187162 Pp15 624 414 SRR4291668 SRS1710000 Pp15 <not provided> phocoena 56.89 N 12.50 E male skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806483 SRX2187163 Pp16 665 446 SRR4291669 SRS1710001 Pp16 <not provided> phocoena 57.37 N 9.68 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806484 SRX2187164 Pp17 264 177 SRR4291670 SRS1710002 Pp17 <not provided> phocoena 57.59 N 10.10 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806485 SRX2187165 Pp18 684 453 SRR4291671 SRS1710003 Pp18 <not provided> phocoena 58.93 N 11.15 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806486 SRX2187166 Pp19 601 398 SRR4291672 SRS1710004 Pp19 <not provided> phocoena 55.43 N 107.0 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806487 SRX2187167 Pp20 392 261 SRR4291673 SRS1710005 Pp20 <not provided> phocoena 55.97 N 11.18 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806470 SRX2187168 Pp03 471 316 SRR4291674 SRS1710006 Pp03 <not provided> relicta 41.48 N 28.31 E female muscle OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806488 SRX2187169 Pp21 592 397 SRR4291675 SRS1710007 Pp21 <not provided> phocoena 55.43 N 10.70 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806489 SRX2187170 Pp22 446 300 SRR4291676 SRS1710008 Pp22 <not provided> phocoena 56.25 N 12.82 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806490 SRX2187171 Pp23 617 409 SRR4291677 SRS1710009 Pp23 <not provided> phocoena 56.65 N 12.85 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806491 SRX2187172 Pp24 554 367 SRR4291678 SRS1710010 Pp24 <not provided> phocoena 56.00 N 12.00 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806492 SRX2187173 Pp25 753 500 SRR4291679 SRS1710011 Pp25 juvenile phocoena 55.00 N 10.23 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806493 SRX2187174 Pp26 530 353 SRR4291680 SRS1710012 Pp26 <not provided> phocoena 54.38 N 10.99 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806494 SRX2187175 Pp27 639 426 SRR4291681 SRS1710013 Pp27 juvenile phocoena 54.83 N 9.62 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806495 SRX2187176 Pp28 646 430 SRR4291682 SRS1710014 Pp28 juvenile phocoena 54.59 N 10.03 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806496 SRX2187177 Pp29 374 247 SRR4291683 SRS1710015 Pp29 juvenile phocoena 54.42 N 11.55 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806497 SRX2187178 Pp30 569 376 SRR4291684 SRS1710016 Pp30 juvenile phocoena 54.53 N 11.12 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806471 SRX2187179 Pp04 451 303 SRR4291685 SRS1710017 Pp04 <not provided> relicta 41.65 N 28.27 E female muscle OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806498 SRX2187180 Pp31 578 384 SRR4291686 SRS1710018 Pp31 adult phocoena 54.53 N 11.11 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806499 SRX2187181 Pp32 586 392 SRR4291687 SRS1710019 Pp32 juvenile phocoena 54.32 N 13.09 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806500 SRX2187182 Pp33 288 189 SRR4291688 SRS1710020 Pp33 juvenile phocoena 54.46 N 12.54 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806501 SRX2187183 Pp34 587 389 SRR4291689 SRS1710021 Pp34 <not provided> phocoena 54.32 N 13.09 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806502 SRX2187184 Pp35 496 330 SRR4291690 SRS1710022 Pp35 <not provided> phocoena 55.00 N 14.00 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806503 SRX2187185 Pp36 1085 720 SRR4291691 SRS1710023 Pp36 juvenile phocoena 56.00 N 15.00 E male skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806504 SRX2187186 Pp37 214 141 SRR4291692 SRS1710024 Pp37 <not provided> phocoena 55.56 N 17.63 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806505 SRX2187187 Pp38 397 263 SRR4291693 SRS1710025 Pp38 <not provided> phocoena 55.50 N 17.00 E female muscle OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806506 SRX2187188 Pp39 670 447 SRR4291694 SRS1710026 Pp39 juvenile phocoena 56.00 N 16.00 E male muscle OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806507 SRX2187189 Pp40 342 226 SRR4291695 SRS1710027 Pp40 <not provided> phocoena 54.73 N 18.58 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806472 SRX2187190 Pp05 611 406 SRR4291696 SRS1710028 Pp05 <not provided> phocoena 64.78 N 13.22 E male skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806508 SRX2187191 Pp41 586 389 SRR4291697 SRS1710029 Pp41 <not provided> phocoena 54.80 N 18.44 E female muscle OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806509 SRX2187192 Pp42 329 219 SRR4291698 SRS1710030 Pp42 <not provided> phocoena 54.67 N 18.59 E male muscle OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806510 SRX2187193 Pp43 517 343 SRR4291699 SRS1710031 Pp43 juvenile phocoena 57.00 N 20.00 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806511 SRX2187194 Pp44 491 326 SRR4291700 SRS1710032 Pp44 adult phocoena 57.01 N 20.00 E male skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806473 SRX2187195 Pp06 632 423 SRR4291701 SRS1710033 Pp06 <not provided> phocoena 64.58 N 13.58 E male skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806474 SRX2187196 Pp07 905 602 SRR4291702 SRS1710034 Pp07 <not provided> phocoena 64.31 N 14.00 E male skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806475 SRX2187197 Pp08 585 390 SRR4291703 SRS1710035 Pp08 adult phocoena 54.70 N 8.33 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806476 SRX2187198 Pp09 590 392 SRR4291704 SRS1710036 Pp09 <not provided> phocoena 54.30 N 8.93 E female skin OTHER <not provided> PRJNA343959 Model organism or animal <not provided> public 0 PAIRED Restriction Digest GENOMIC 2016-09-22 Phocoena phocoena ILLUMINA 2016-09-27 SRP090334 <not provided> <not provided> <not provided>
SAMN05806477	SRX2187199	Pp10	625	414	SRR4291705	SRS1710037	Pp10	adult	phocoena	55.47 N 8.38 E	female	skin	OTHER	<not provided>	PRJNA343959	Model organism or animal	<not provided>	public	0	PAIRED	Restriction Digest	GENOMIC	2016-09-22	Phocoena phocoena	ILLUMINA	2016-09-27	SRP090334	<not provided>	<not provided>	<not provided>""".split("\n")
samps_dict = {}
for i in info:
line = i.split("\t")
samps_dict[line[2]] = line[5]
return(samps_dict)
def get_popdict():
samps_dict = get_sampsdict()
popmap = \
    """01	WBS
02 WBS
03 WBS
04 WBS
05 IS
06 IS
07 IS
08 NOS
09 NOS
10 NOS
11 NOS
12 NOS
13 NOS
14 SK1
15 SK1
16 SK1
17 SK1
18 SK1
19 KB1
20 KB1
21 KB1
22 KB1
23 KB1
24 KB1
25 BES2
26 BES2
27 BES2
28 BES2
29 BES2
30 BES2
31 BES2
32 BES2
33 BES2
34 BES2
35 IBS
36 IBS
37 IBS
38 IBS
39 IBS
40 IBS
41 IBS
42 IBS
43 IBS
44	IBS""".split("\n")
pop_dict = {}
for i in popmap:
line = i.split("\t")
pop_dict[samps_dict["Pp"+line[0]]] = line[1]
return(pop_dict)
## Adding "-mapped-sorted" to each individual name to avoid having to rename the .bam files created by ipyrad
def make_stacks_popmap(OUTDIR):
pop_dict = get_popdict()
out = os.path.join(OUTDIR, "popmap.txt")
print("Writing popmap file to {}".format(out))
with open(out, 'w') as outfile:
for k,v in pop_dict.items():
outfile.write(k + "\t" + v + "\n")
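## Optional sanity check (illustrative only, assumes the SRA metadata above parsed cleanly):
## print a few SRR -> population assignments produced by the helpers above.
# pop_dict = get_popdict()
# for srr, pop in sorted(pop_dict.items())[:5]:
#     print(srr, pop)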
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Algebraic expressions and the inner workings of Wick&d
Step2: Orbital indices
Step3: Indices have two attributes, the orbital space and position
Step4: When we render this to LaTeX using the (member) function latex(), wick&d uses pretty indices instead
Step5: In this notebook we also defined a function called latex() that can render any object that has a member function called latex(). Here is what happens if we print some indices
Step6: Second quantized operators
Step7: Creation operators are indicated with a+, while annihilation operators with a-. A second quantized operator has functions to get its properties
Step8: Tensors
Step9: We can grab properties of tensors with the following functions
Step10: Here is a more elaborate example that builds the antisymmetric four-index tensor $V_{v_0 v_1}^{o_0 o_1}$
|
<ASSISTANT_TASK:>
Python Code:
import wicked as w
from IPython.display import display, Math, Latex
def latex(expr):
    """Function to render any object that has a member latex() function."""
display(Math(expr.latex()))
w.reset_space()
w.add_space("o", "fermion", "occupied", ['i','j','k','l','m'])
w.add_space("v", "fermion", "unoccupied", ['a','b','c','d','e','f'])
idx = w.index('o_0')
idx2 = w.index('o0')
print(idx)
print(idx2)
idx.pos(), idx.space()
print(idx)
idx.latex()
latex(w.index('o_0'))
latex(w.index('o_2'))
latex(w.index('v_0'))
cre = w.cre('o0')
ann = w.ann('o1')
cre, ann
ann.field_type(), ann.type(), ann.index()
t = w.tensor(label="t", lower=['v0'], upper=['o0'], symmetry=w.sym.none)
latex(t)
t.label(), t.upper(), t.lower(), t.symmetry()
t = w.tensor("V",['v0','v1'],['o0','o1'],w.sym.anti); latex(t)
t.label(), t.upper(), t.lower(), t.symmetry()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this tutorial, I will show how to transform documents from one vector representation
Step2: Creating a transformation
Step3: We used our old corpus from tutorial 1 to initialize (train) the transformation model. Different
Step4: Or to apply a transformation to a whole corpus
Step5: In this particular case, we are transforming the same corpus that we used
Step6: Here we transformed our Tf-Idf corpus via Latent Semantic Indexing <http
Step7: (the topics are printed to log -- see the note at the top of this page about activating
Step8: Model persistency is achieved with the
Step9: The next question might be
|
<ASSISTANT_TASK:>
Python Code:
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
from collections import defaultdict
from gensim import corpora
documents = [
"Human machine interface for lab abc computer applications",
"A survey of user opinion of computer system response time",
"The EPS user interface management system",
"System and human system engineering testing of EPS",
"Relation of user perceived response time to error measurement",
"The generation of random binary unordered trees",
"The intersection graph of paths in trees",
"Graph minors IV Widths of trees and well quasi ordering",
"Graph minors A survey",
]
# remove common words and tokenize
stoplist = set('for a of the and to in'.split())
texts = [
[word for word in document.lower().split() if word not in stoplist]
for document in documents
]
# remove words that appear only once
frequency = defaultdict(int)
for text in texts:
for token in text:
frequency[token] += 1
texts = [
[token for token in text if frequency[token] > 1]
for text in texts
]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
from gensim import models
tfidf = models.TfidfModel(corpus) # step 1 -- initialize a model
doc_bow = [(0, 1), (1, 1)]
print(tfidf[doc_bow]) # step 2 -- use the model to transform vectors
corpus_tfidf = tfidf[corpus]
for doc in corpus_tfidf:
print(doc)
lsi_model = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=2) # initialize an LSI transformation
corpus_lsi = lsi_model[corpus_tfidf] # create a double wrapper over the original corpus: bow->tfidf->fold-in-lsi
lsi_model.print_topics(2)
# both bow->tfidf and tfidf->lsi transformations are actually executed here, on the fly
for doc, as_text in zip(corpus_lsi, documents):
print(doc, as_text)
import os
import tempfile
with tempfile.NamedTemporaryFile(prefix='model-', suffix='.lsi', delete=False) as tmp:
lsi_model.save(tmp.name) # same for tfidf, lda, ...
loaded_lsi_model = models.LsiModel.load(tmp.name)
os.unlink(tmp.name)
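# Quick check (a sketch, not part of the original tutorial): the reloaded model can be
# applied to new bag-of-words vectors exactly like the model it was saved from.
new_doc = "human computer interaction".lower().split()
print(loaded_lsi_model[tfidf[dictionary.doc2bow(new_doc)]])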
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
img = mpimg.imread('run_topics_and_transformations.png')
imgplot = plt.imshow(img)
_ = plt.axis('off')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Job Queues
Step2: The requirements section tells job queues to install a version of the library on the cluster if it does not exist yet. You can add other libraries, or add them for specific jobs. By default, virtual_env is set to None, meaning that everything runs and requirements are installed in the root python interpreter. If you provide a <name> value for the virtual_env attribute, it will search for a virtualenv in ~/virtualenvs/<name>.
Step3: Jobs
Step4: The last step is to update the queue. One update checks the current status of each job attached to jq and processes its next step, whether that is sending it to the cluster, retrieving it, unpacking it, etc.
Step5: You can tell jq to automatically do updates until all jobs are done or in error status
|
<ASSISTANT_TASK:>
Python Code:
pip_arg_xp_man = '-e git+https://github.com/wschuell/experiment_manager.git@origin/master#egg=experiment_manager'
#ssh: pip_arg_xp_man = '-e git+ssh://git@github.com/wschuell/experiment_manager.git@master#egg=experiment_manager'
try:
import experiment_manager as xp_man
except ImportError:
print('experiment_manager is not installed, you can install it with command: \n pip install '+pip_arg_xp_man)
jq_cfg_local = {'jq_type':'local'}
virtualenv = 'test_py3' # by default root python. ex: virtualenv = 'test_xp_man' for venv in ~/virtualenvs/test_xp_man
jq_cfg_plafrim = {'jq_type':'plafrim',
'modules':['slurm','language/python/3.5.2'],
'virtual_env': virtualenv,
'requirements': [pip_arg_xp_man],
#'username':'schuelle',
}
jq_cfg_avakas = {'jq_type':'avakas',
'modules':['torque','maui','python3/3.6.0'],
'without_epilogue':True,
#'username':'wschueller',
'virtual_env':virtualenv,#virtualenv,
#'requirements': [pip_arg_xp_man], IMPORTANT: install on avakas through github and https is broken due to the git version being too old. You have to install manually and via SSH...
}
jq_cfg_anyone = {'jq_type':'anyone',
'modules':[],
'virtual_env':'test_279',
#'requirements': [pip_arg_xp_man],
"hostname":"cluster_roma"
}
jq_cfg_docker = {'jq_type':'slurm',
'modules':[],
#'virtual_env':virtualenv,
#'requirements': [pip_arg_xp_man],
'ssh_cfg':{
'username':'root',
'hostname':'172.19.0.2',
'password':'dockerslurm',}
}
jq_cfg_local_multiprocess = {'jq_type':'local_multiprocess',
#'nb_process':4, #default value: number of CPUs on the local machine
}
jq_cfg = jq_cfg_local_multiprocess
jq = xp_man.job_queue.get_jobqueue(**jq_cfg)
print(jq.get_status_string())
job_cfg = {
'estimated_time':120,#in seconds
#'virtual_env':'test',
#'requirements':[],
#...,
}
job = xp_man.job.ExampleJob(**job_cfg)
jq.add_job(job) # of course, you can add as many jobs as you want, like in next cell
print(jq.get_status_string())
for i in range(20):
job_cfg_2 = { 'descr' : str(i), 'estimated_time':120,#a description for the example job
}
job = xp_man.job.ExampleJob(**job_cfg_2)
jq.add_job(job)
print(jq.get_status_string())
#jq.ssh_session.reconnect()
jq.update_queue()
jq.auto_finish_queue()
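# Once the queue has finished (auto_finish_queue above updates until all jobs are done or
# in error), the same status helper used earlier gives a final summary.
print(jq.get_status_string())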
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Re-train our model with trips_last_5min feature
Step2: Next, we create a table called traffic_realtime and set up the schema.
Step3: Launch Streaming Dataflow Pipeline
Step5: Make predictions from the new data
Step6: The traffic_realtime table is updated in realtime using Cloud Pub/Sub and Dataflow so, if you run the cell below periodically, you should see the traffic_last_5min feature added to the instance and change over time.
Step7: Finally, we'll use the python api to call predictions on an instance, using the realtime traffic information in our prediction. Just as above, you should notice that our resulting predicitons change with time as our realtime traffic information changes as well.
|
<ASSISTANT_TASK:>
Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
import os
import googleapiclient.discovery
import shutil
from google.cloud import bigquery
from google.api_core.client_options import ClientOptions
from matplotlib import pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.models import Sequential
print(tf.__version__)
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# For Bash Code
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
bq = bigquery.Client()
dataset = bigquery.Dataset(bq.dataset("taxifare"))
try:
bq.create_dataset(dataset) # will fail if dataset already exists
print("Dataset created.")
except:
print("Dataset already exists.")
dataset = bigquery.Dataset(bq.dataset("taxifare"))
table_ref = dataset.table("traffic_realtime")
SCHEMA = [
bigquery.SchemaField("trips_last_5min", "INTEGER", mode="REQUIRED"),
bigquery.SchemaField("time", "TIMESTAMP", mode="REQUIRED"),
]
table = bigquery.Table(table_ref, schema=SCHEMA)
try:
bq.create_table(table)
print("Table created.")
except:
print("Table already exists.")
%load_ext google.cloud.bigquery
%%bigquery
SELECT
*
FROM
`taxifare.traffic_realtime`
ORDER BY
time DESC
LIMIT 10
# TODO 2a. Write a function to take most recent entry in `traffic_realtime` table and add it to instance.
def add_traffic_last_5min(instance):
bq = bigquery.Client()
    query_string = """
    TODO: Your code goes here
    """
trips = bq.query(query_string).to_dataframe()['trips_last_5min'][0]
instance['traffic_last_5min'] = # TODO: Your code goes here.
return instance
add_traffic_last_5min(instance={'dayofweek': 4,
'hourofday': 13,
'pickup_longitude': -73.99,
'pickup_latitude': 40.758,
'dropoff_latitude': 41.742,
'dropoff_longitude': -73.07})
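# Hedged sketch (not part of the lab): one way the 2a TODOs above could be completed,
# reusing the query pattern from the earlier %%bigquery cell. Field names follow the
# traffic_realtime schema defined above; treat this as an illustration, not the answer key.
def add_traffic_last_5min_sketch(instance):
    bq = bigquery.Client()
    query_string = """
    SELECT
      *
    FROM
      `taxifare.traffic_realtime`
    ORDER BY
      time DESC
    LIMIT 1
    """
    trips = bq.query(query_string).to_dataframe()['trips_last_5min'][0]
    instance['traffic_last_5min'] = trips
    return instance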
# TODO 2b. Write code to call prediction on instance using realtime traffic info.
#Hint: Look at the "Serving online predictions" section of this page https://cloud.google.com/ml-engine/docs/tensorflow/custom-prediction-routine-keras
MODEL_NAME = 'taxifare'
VERSION_NAME = 'traffic'
endpoint = f'https://{REGION}-ml.googleapis.com'
client_options = ClientOptions(api_endpoint=endpoint)
service = googleapiclient.discovery.build('ml', 'v1', cache_discovery=False, client_options=client_options)
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT,
MODEL_NAME,
VERSION_NAME)
instance = {'dayofweek': 4,
'hourofday': 13,
'pickup_longitude': -73.99,
'pickup_latitude': 40.758,
'dropoff_latitude': 41.742,
'dropoff_longitude': -73.07}
instance = # TODO: Your code goes here.
response = # TODO: Your code goes here.
if 'error' in response:
raise RuntimeError(response['error'])
else:
print( # TODO: Your code goes here
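# Hedged sketch (not part of the lab): the 2b TODOs above could be completed roughly as
# follows; the exact key available in the response depends on the deployed model's output
# signature, so that part is an assumption.
# instance = add_traffic_last_5min(instance)
# response = service.projects().predict(name=name, body={'instances': [instance]}).execute()
# ...
# print(response['predictions'])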
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Alanine in Vacuum and run it using OPS.
Step2: Let's have a look at the content
Step3: An OpenMM simulation in OPS needs 3 ingredients to function
Step4: 2. The system
Step5: 3. The integrator
Step6: build the engine
Step7: Let's run a simulation of 10 steps.
Step8: The engine is stored using OpenMM's XML serialization, and the result is put into the storage.
Step9: try to load the stored engine
Step10: Run another 10 steps to test
Step11: And save the trajectory for future use (other tests)
|
<ASSISTANT_TASK:>
Python Code:
import openpathsampling as paths
import openmmtools as omt
import simtk.openmm as omm
import simtk.unit as u
import mdtraj as md
import openpathsampling.engines.openmm as eng
from __future__ import print_function
testsystem = omt.testsystems.AlanineDipeptideVacuum()
#! skip
{ key: type(value) for key, value in testsystem.__dict__.items()}
template = eng.snapshot_from_testsystem(testsystem)
system = testsystem.system
integrator = omm.VerletIntegrator(
0.002 * u.picoseconds
)
engine = eng.Engine(
template.topology,
system,
integrator)
traj = engine.generate(template, [paths.LengthEnsemble(10).can_append])
print(traj)
#! skip
print(traj[5].coordinates)
psi = md.compute_psi(traj.to_mdtraj())
#! ignore
psi[1][3:8]
st = paths.Storage('engine_store_test.nc', mode='w')
st.engines.save(engine);
st.tag['template'] = template
#! skip
st.variables['engines_json'][0][0:256] + '...'
st.save(traj);
st.close()
st = paths.AnalysisStorage('engine_store_test.nc')
engine = st.engines[0]
template = st.tag['template']
traj = engine.generate(template, [paths.LengthEnsemble(10).can_append])
print(traj)
st.close()
st = paths.Storage('engine_store_test.nc', "a")
st.save(traj)
st.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will also load the other packages we will use in this demo. This could be done before the above import.
Step2: Generating Synthetic Data
Step3: To generate the synthetic data for this demo, we use the following simple probabilistic model. For the input $z$, we will use a Bernoulli-Gaussian (BG) distribution, a simple model in sparse signal processing. In the BG model, the components $z_i$ are i.i.d., where each component $z_i=0$ with probability $1-\rho$ and $z_i \sim {\mathcal N}(0,1)$ with probability $\rho$. The parameter $\rho$ is called the sparsity ratio and represents the average fraction of non-zero components. When $\rho$ is small, the vector $z$ is sparse. The components on which $z_i$ is non-zero are called the active components. We set the parameters below. We also set the SNR for the measurements.
Step4: Using these parameters, we can generate random sparse z following this distribution with the following simple code.
Step5: To illustrate the sparsity, we plot the vector z. We can see from this plot that the majority of the components of z are zero.
Step6: Now, we create a random transform $A$ and output $y_0 = Az$.
Step7: Finally, we add noise at the desired SNR
Step8: Creating the Vampyre estimators
Step9: We next use the vampyre class, MixEst, to describe a mixture of the two distributions. This is done by creating a list, est_list, of the estimators and an array pz with the probability of each component. The resulting estimator, est_in, is the estimator for the prior $z$, which is also the input to the transform $A$. We give this a name Input since it corresponds to the input. But, any naming is fine. Or, you can let vampyre give it a generic name.
Step10: Next, we describe the likelihood function, $p(y|z)$. Since $y=Az+w$, we can first use the MatrixLT class to define a linear transform operator Aop corresponding to the matrix A. Then, we use the LinEstim class to describe the likelihood $y=Az+w$.
Step11: Finally, the VAMP method needs a message handler to describe how to perform the Gaussian message passing. This is a more advanced feature. For most applications, you can just use the simple message handler as follows.
Step12: Running the VAMP Solver
Step13: We can print a summary of the model which indicates the dimensions and the estimators.
Step14: We now run the solver by calling the solve() method. For a small problem like this, this should be close to instantaneous.
Step15: The VAMP solver estimate is the field zhat. We plot one column of this (icol=0) and compare it to the corresponding column of the true matrix z. You should see a very good match.
Step16: We can measure the normalized mean squared error as follows. The VAMP solver also produces an estimate of the MSE in the variable zhatvar. We can extract this variable to compute the predicted MSE. We see that the normalized MSE is indeed low and closely matches the predicted value from VAMP.
Step17: Finally, we can plot the actual and predicted MSE as a function of the iteration number. When the solver was constructed, we passed an argument hist_list=['zhat', 'zhatvar']. This indicated to store the value of the estimate zhat and predicted error variance zhatvar with each iteration. We can recover these values from solver.hist_dict, the history dictionary. Using the values we can compute and plot the normalized MSE on each iteration. We see that VAMP gets a low MSE in very few iterations, about 10.
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
vp_path = os.path.abspath('../../')
if not vp_path in sys.path:
sys.path.append(vp_path)
import vampyre as vp
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
# Parameters
nz = 1000 # number of components of z
ny = 500 # number of measurements y
# Compute the shapes
zshape = (nz,) # Shape of z matrix
yshape = (ny,) # Shape of y matrix
Ashape = (ny,nz) # Shape of A matrix
sparse_rat = 0.1 # sparsity ratio
zmean1 = 0 # mean for the active components
zvar1 = 1 # variance for the active components
snr = 30 # SNR in dB
# Generate the random input
z1 = np.random.normal(zmean1, np.sqrt(zvar1), zshape)
u = np.random.uniform(0, 1, zshape) < sparse_rat
z = z1*u
ind = np.array(range(nz))
plt.plot(ind,z)
A = np.random.normal(0, 1/np.sqrt(nz), Ashape)
y0 = A.dot(z)
yvar = np.mean(np.abs(y0)**2)
wvar = yvar*np.power(10, -0.1*snr)
y = y0 + np.random.normal(0,np.sqrt(wvar), yshape)
est0 = vp.estim.DiscreteEst(0,1,zshape)
est1 = vp.estim.GaussEst(zmean1,zvar1,zshape)
est_list = [est0, est1]
pz = np.array([1-sparse_rat, sparse_rat])
est_in = vp.estim.MixEst(est_list, w=pz, name='Input')
Aop = vp.trans.MatrixLT(A,zshape)
est_out = vp.estim.LinEst(Aop,y,wvar,map_est=False, name='Output')
msg_hdl = vp.estim.MsgHdlSimp(map_est=False, shape=zshape)
nit = 20 # number of iterations
solver = vp.solver.Vamp(est_in,est_out,msg_hdl,\
hist_list=['zhat', 'zhatvar'],nit=nit)
solver.summary()
solver.solve()
zhat = solver.zhat
ind = np.array(range(nz))
plt.plot(ind,z)
plt.plot(ind,zhat)
plt.legend(['True', 'Estimate'])
zerr = np.mean(np.abs(zhat-z)**2)
zhatvar = solver.zhatvar
zpow = np.mean(np.abs(z)**2)
mse_act = 10*np.log10(zerr/zpow)
mse_pred = 10*np.log10(zhatvar/zpow)
print("Normalized MSE (dB): actual {0:f} pred {1:f}".format(mse_act, mse_pred))
# Compute the MSE as a function of the iteration
zhat_hist = solver.hist_dict['zhat']
zhatvar_hist = solver.hist_dict['zhatvar']
nit = len(zhat_hist)
mse_act = np.zeros(nit)
mse_pred = np.zeros(nit)
for it in range(nit):
zerr = np.mean(np.abs(zhat_hist[it]-z)**2)
mse_act[it] = 10*np.log10(zerr/zpow)
mse_pred[it] = 10*np.log10(zhatvar_hist[it]/zpow)
plt.plot(range(nit), mse_act, 'o-', linewidth=2)
plt.plot(range(nit), mse_pred, 's', linewidth=1)
plt.xlabel('Iteration')
plt.ylabel('Normalized MSE (dB)')
plt.legend(['Actual', 'Predicted'])
plt.grid()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <!--Empty Space for separating topics-->
Step2: We create two objects: one that contains training data and a second that contains validation data. We will assume the training data contains the outliers.
Step3: We overlay the training points in red over the function that generated the data. Notice the outliers are at x=-3 and around x=2
Step4: <!--Empty Space for separating topics-->
Step5: Create the model object
Step6: We create the optimizer, the criterion function and a Data Loader object.
Step7: <!--Empty Space for separating topics-->
Step8: <!--Empty Space for separating topics-->
Step9: We will create a new linear regression object; we will use the parameters saved in the early stopping. The model must be the same input dimension and output dimension as the original model.
Step10: Load the model parameters with <code>torch.load()</code>, then assign them to the object <code>model_best</code> using the method <code>load_state_dict</code>.
Step11: Let's compare the prediction from the model obtained using early stopping and the model derived from using the maximum number of iterations.
|
<ASSISTANT_TASK:>
Python Code:
# Import the libraries and set random seed
from torch import nn
import torch
import numpy as np
import matplotlib.pyplot as plt
from torch import nn,optim
from torch.utils.data import Dataset, DataLoader
torch.manual_seed(1)
# Create Data Class
class Data(Dataset):
# Constructor
def __init__(self, train = True):
if train == True:
self.x = torch.arange(-3, 3, 0.1).view(-1, 1)
self.f = -3 * self.x + 1
self.y = self.f + 0.1 * torch.randn(self.x.size())
self.len = self.x.shape[0]
if train == True:
self.y[50:] = 20
else:
self.x = torch.arange(-3, 3, 0.1).view(-1, 1)
self.y = -3 * self.x + 1
self.len = self.x.shape[0]
# Getter
def __getitem__(self, index):
return self.x[index], self.y[index]
# Get Length
def __len__(self):
return self.len
#Create train_data object and val_data object
train_data = Data()
val_data = Data(train = False)
# Plot the training data points
plt.plot(train_data.x.numpy(), train_data.y.numpy(), 'xr')
plt.plot(train_data.x.numpy(), train_data.f.numpy())
plt.show()
# Create linear regression model class
from torch import nn
class linear_regression(nn.Module):
# Constructor
def __init__(self, input_size, output_size):
super(linear_regression, self).__init__()
self.linear = nn.Linear(input_size, output_size)
# Predition
def forward(self, x):
yhat = self.linear(x)
return yhat
# Create the model object
model = linear_regression(1, 1)
# Create optimizer, cost function and data loader object
optimizer = optim.SGD(model.parameters(), lr = 0.1)
criterion = nn.MSELoss()
trainloader = DataLoader(dataset = train_data, batch_size = 1)
# Train the model
LOSS_TRAIN = []
LOSS_VAL = []
min_loss = 1000
def train_model_early_stopping(epochs, min_loss):
for epoch in range(epochs):
for x, y in trainloader:
yhat = model(x)
loss = criterion(yhat, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
loss_train = criterion(model(train_data.x), train_data.y).data
loss_val = criterion(model(val_data.x), val_data.y).data
LOSS_TRAIN.append(loss_train)
LOSS_VAL.append(loss_val)
if loss_val < min_loss:
value = epoch
min_loss = loss_val
torch.save(model.state_dict(), 'best_model.pt')
train_model_early_stopping(20, min_loss)
# Plot the loss
plt.plot(LOSS_TRAIN, label = 'training loss')
plt.plot(LOSS_VAL, label = 'validation loss')
plt.xlabel("epochs")
plt.ylabel("Loss")
plt.legend(loc = 'upper right')
plt.show()
# Create a new linear regression model object
model_best = linear_regression(1, 1)
# Assign the best model to model_best
model_best.load_state_dict(torch.load('best_model.pt'))
plt.plot(model_best(val_data.x).data.numpy(), label = 'best model')
plt.plot(model(val_data.x).data.numpy(), label = 'maximum iterations')
plt.plot(val_data.y.numpy(), 'rx', label = 'true line')
plt.legend()
plt.show()
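# To quantify the comparison above: validation loss of the early-stopped model
# versus the model trained for the full number of epochs (illustrative check,
# reusing the criterion defined earlier).
print("early stopping validation loss: ", criterion(model_best(val_data.x), val_data.y).item())
print("maximum iterations validation loss: ", criterion(model(val_data.x), val_data.y).item())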
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Normalization and Filtering
Step4: Simple frequency distributions
Step5: In the figure above, we can see that the top ~40 words occur in around 630 texts. We can see precise values using the most_common() function
Step6: It can be useful to examine the number of texts in which a word occurs, to get a better picture of its distribution over the corpus. We can use a FreqDist for this, too.
Step7: Metadata
Step8: Since we indexed our metadata using the "link" field, we can look up metadata for each text using its fileid.
Step9: Conditional Frequencies
Step10: Words over time
|
<ASSISTANT_TASK:>
Python Code:
text_root = '../../data/EmbryoProjectTexts/files'
try:
assert os.path.exists(text_root)
except AssertionError:
print "That directory doesn't exist!"
documents = nltk.corpus.PlaintextCorpusReader(text_root, 'https.+')
documents.words()
wordnet = nltk.WordNetLemmatizer()
from nltk.corpus import stopwords
stoplist = stopwords.words('english')
def normalize_token(token):
    """
    Convert token to lowercase and lemmatize it with WordNet.

    Parameters
    ----------
    token : str

    Returns
    -------
    token : str
    """
    return wordnet.lemmatize(token.lower())
def filter_token(token):
    """
    Evaluate whether or not to retain ``token``.

    Parameters
    ----------
    token : str

    Returns
    -------
    keep : bool
    """
    token = token.lower()
    return token not in stoplist and token.isalpha() and len(token) > 2
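# Quick sanity check of the normalisation/filtering pipeline on a made-up sentence
# (illustrative only).
sample_sentence = "The embryos were fertilized and the cells divided"
print [normalize_token(t) for t in sample_sentence.split() if filter_token(t)]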
word_counts = nltk.FreqDist([normalize_token(token)
for token in documents.words()
if filter_token(token)])
word_counts.plot(20)
document_counts = nltk.FreqDist([
token # Each token will be counted a maximum of 1 time per text.
for fileid in documents.fileids()
for token in set( # There can be no duplicates in a set.
[normalize_token(token) # Normalize first!
for token
in documents.words(fileids=[fileid])
if filter_token(token)]
)
])
document_counts.plot(70)
document_counts.most_common(10) # Get the 10 most common words.
len(documents.fileids())
from tethne.readers import zotero
zotero_export_path = '../../data/EmbryoProjectTexts'
metadata = zotero.read(zotero_export_path, index_by='link', follow_links=False)
example_fileid = documents.fileids()[0]
print 'This is the fileid:', example_fileid, '\n'
print 'This is the metadata for this fileid:', '\n'
pprint(metadata[example_fileid].__dict__) # pprint means "pretty print".
focal_tokens = ['organism', 'ivf', 'pluripotent', 'supreme']
authorDist = nltk.ConditionalFreqDist([
(str(author[0]), normalize_token(token)) # (condition, sample)
    for fileid in documents.fileids()
    for token in documents.words(fileids=[fileid])
for author in metadata[fileid].authors
if filter_token(token)
and normalize_token(token) in focal_tokens
])
authorDist.tabulate()
focal_tokens = ['organism', 'ivf', 'pluripotent', 'supreme']
timeDist = nltk.ConditionalFreqDist([
(normalize_token(token), metadata[fileid].date)
    for fileid in documents.fileids()
    for token in documents.words(fileids=[fileid])
if filter_token(token)
and normalize_token(token) in focal_tokens
])
timeDist.plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can then create variables to represent each term and then calculate their value
Step2: We can write this out mathematically
Step3: You can play with the values yourself here or use the presets to see how values change.
Step4: (Hint
|
<ASSISTANT_TASK:>
Python Code:
from minimal_example_interface import *
def multiplicative_term(kappa_val,r_array,cue_array):
assert(type(kappa_val)==float or type(kappa_val)==int)
assert(type(r_array)==np.ndarray)
assert(type(cue_array)==np.ndarray)
return({"kappa": kappa_val, "r": r_array,"cue": cue_array})
set_term_function(multiplicative_term)
def SaltSugarModelWithCue_SingleTerm(term):
return term['kappa']*term['r']*term['cue']
set_singleterm_calculation_function(SaltSugarModelWithCue_SingleTerm)
def SaltSugarModelWithCue(term_list):
return sum([SaltSugarModelWithCue_SingleTerm(term) for term in term_list])
set_model_calculation_function(SaltSugarModelWithCue)
cue=np.array([1.0,1.0,0.0])
term_Na=multiplicative_term(1.5,np.array([1.0,1.0,0.0]),cue)
term_h=multiplicative_term(1,np.array([0.0,-1.0,0.0]),cue)
term_Glc=multiplicative_term(1,np.array([0.0,0.0,1.0]),cue)
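# Evaluate the full model for the three terms defined above (quick illustrative check):
# the result is the element-wise sum of kappa * r * cue across the terms.
print(SaltSugarModelWithCue([term_Na, term_h, term_Glc]))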
Markdown(get_and_format_main_equation_text(term_Na,term_h,term_Glc))
myb= widgets.Button(description="the fun button")
def add_text(b):
txtArea.value = txtArea.value + txtArea.value
myb.on_click(add_text)
display(myb)
math_widget.value=get_and_format_main_equation_text(term_Na,term_h,term_Glc)
bDeprivationPreset
box_bordered=widgets.Layout(border='solid 1pt black')
# def widget_header(value):
# return widgets.Label(value='### ' + value)
reward_box=widgets.VBox([widgets.Label(value='Expected reward ($ \\tilde r$)'),
widgets.HBox(ftRewardVals)],layout=box_bordered)
cue_box=widgets.VBox([widgets.Label('Cue Accessibility ($\mathbf{c}$)')]+ftCueVals, layout=box_bordered)
widgets.HBox(ftKappaVals)
widgets.VBox([
widgets.HBox([bDeprivationPreset,bSatiationPreset,tbSugarPresence,bResetAll]),
widgets.VBox([widgets.Label('Interoception ($\kappa$)'),
widgets.HBox(ftKappaVals)],layout=box_bordered),
widgets.HBox([reward_box,cue_box]),
bRecalculate,
math_widget])
reward_box
display(reward_box)
widgets.HBox([bDeprivationPreset,bSatiationPreset,tbSugarPresence,bResetAll])
display(widgets.HBox([bDeprivationPreset,bSatiationPreset,tbSugarPresence,bResetAll]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup a New Directory and Change Paths
Step2: Define the Model Extent, Grid Resolution, and Characteristics
Step3: Create the MODFLOW Model Object
Step4: Discretization Package
Step5: Basic Package
Step6: Layer Property Flow Package
Step7: Output Control
Step8: Preconditioned Conjugate Gradient Solver
Step9: Writing the MODFLOW Input Files
Step10: Yup. It's that simple, the model datasets are written using a single command (mf.write_input).
Step11: Running the Model
Step12: Post Processing the Results
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import sys
import os
import shutil
import numpy as np
from subprocess import check_output
# Import flopy
import flopy
# Set the name of the path to the model working directory
dirname = "P4-1_Toth"
datapath = os.getcwd()
modelpath = os.path.join(datapath, dirname)
print 'Name of model path: ', modelpath
# Now let's check if this directory exists. If not, then we will create it.
if os.path.exists(modelpath):
print 'Model working directory already exists.'
else:
print 'Creating model working directory.'
os.mkdir(modelpath)
# model domain and grid definition
# for clarity, user entered variables are all caps; python syntax are lower case or mixed case
# we will use a layer orientation profile for easy plotting (see Box 4.2 on page 126)
LX = 200.
LY = 100.
ZTOP = 1. # the "thickness" of the profile will be 1 m (= ZTOP - ZBOT)
ZBOT = 0.
NLAY = 1
NROW = 5
NCOL = 10
DELR = LX / NCOL # recall that MODFLOW convention is DELR is along a row, thus has items = NCOL; see page XXX in AW&H (2015)
DELC = LY / NROW # recall that MODFLOW convention is DELC is along a column, thus has items = NROW; see page XXX in AW&H (2015)
DELV = (ZTOP - ZBOT) / NLAY
BOTM = np.linspace(ZTOP, ZBOT, NLAY + 1)
HK = 10.
VKA = 1.
print "DELR =", DELR, " DELC =", DELC, ' DELV =', DELV
print "BOTM =", BOTM
# Assign name and create modflow model object
modelname = 'P4-1'
exe_name = os.path.join(datapath, 'mf2005')
print 'Model executable: ', exe_name
MF = flopy.modflow.Modflow(modelname, exe_name=exe_name, model_ws=modelpath)
# Create the discretization object
TOP = np.ones((NROW, NCOL),dtype=np.float)
DIS_PACKAGE = flopy.modflow.ModflowDis(MF, NLAY, NROW, NCOL, delr=DELR, delc=DELC,
top=TOP, botm=BOTM[1:], laycbd=0)
# print DIS_PACKAGE uncomment this to see information about the flopy object
# Variables for the BAS package
IBOUND = np.ones((NLAY, NROW, NCOL), dtype=np.int32) # all nodes are active (IBOUND = 1)
# make the top of the profile specified head by setting the IBOUND = -1
IBOUND[:, 0, :] = -1 #don't forget arrays are zero-based!
print IBOUND
STRT = 100 * np.ones((NLAY, NROW, NCOL), dtype=np.float32) # set starting head to 100 through out model domain
STRT[:, 0, 0] = 100. # the function from Toth is h = 0.05x + 100, so
STRT[:, 0, 1] = 0.05*20+100
STRT[:, 0, 2] = 0.05*40+100
STRT[:, 0, 3] = 0.05*60+100
STRT[:, 0, 4] = 0.05*80+100
STRT[:, 0, 5] = 0.05*100+100
STRT[:, 0, 6] = 0.05*120+100
STRT[:, 0, 7] = 0.05*140+100
STRT[:, 0, 8] = 0.05*160+100
STRT[:, 0, 9] = 0.05*180+100
print STRT
BAS_PACKAGE = flopy.modflow.ModflowBas(MF, ibound=IBOUND, strt=STRT)
# print BAS_PACKAGE # uncomment this at far left to see the information about the flopy BAS object
LPF_PACKAGE = flopy.modflow.ModflowLpf(MF, hk=HK, vka=VKA) # we defined the K and anisotropy at top of file
# print LPF_PACKAGE # uncomment this at far left to see the information about the flopy LPF object
OC_PACKAGE = flopy.modflow.ModflowOc(MF) # we'll use the defaults for the model output
# print OC_PACKAGE # uncomment this at far left to see the information about the flopy OC object
PCG_PACKAGE = flopy.modflow.ModflowPcg(MF) # we'll use the defaults for the PCG solver
# print PCG_PACKAGE # uncomment this at far left to see the information about the flopy PCG object
#Before writing input, destroy all files in folder
#This will prevent us from reading old results
modelfiles = os.listdir(modelpath)
for filename in modelfiles:
f = os.path.join(modelpath, filename)
if modelname in f:
try:
os.remove(f)
print 'Deleted: ', filename
except:
print 'Unable to delete: ', filename
#Now write the model input files
MF.write_input()
# return current working directory
modelpath
silent = False #Print model output to screen?
pause = False #Require user to hit enter? Doesn't mean much in Ipython notebook
report = True #Store the output from the model in buff
success, buff = MF.run_model(silent=silent, pause=pause, report=report)
#imports for plotting and reading the MODFLOW binary output file
import matplotlib.pyplot as plt
import flopy.utils.binaryfile as bf
#Create the headfile object and grab the results for last time.
headfile = os.path.join(modelpath, modelname + '.hds')
headfileobj = bf.HeadFile(headfile)
#Get a list of times that are contained in the model
times = headfileobj.get_times()
print 'Headfile (' + modelname + '.hds' + ') contains the following list of times: ', times
#Get a numpy array of heads for totim = 1.0
#The get_data method will extract head data from the binary file.
HEAD = headfileobj.get_data(totim=1.0)
#Print statistics on the head
print 'Head statistics'
print ' min: ', HEAD.min()
print ' max: ', HEAD.max()
print ' std: ', HEAD.std()
#Create a contour plot of heads
FIG = plt.figure(figsize=(15,15))
#setup contour levels and plot extent
LEVELS = np.arange(100, 109, 0.5)
EXTENT = (DELR/2., LX - DELR/2., DELC/2., LY - DELC/2.)
print 'Contour Levels: ', LEVELS
print 'Extent of domain: ', EXTENT
#Make a contour plot on the first axis
AX1 = FIG.add_subplot(1, 2, 1, aspect='equal')
AX1.contour(np.flipud(HEAD[0, :, :]), levels=LEVELS, extent=EXTENT)
#Make a color flood on the second axis
AX2 = FIG.add_subplot(1, 2, 2, aspect='equal')
cax = AX2.imshow(HEAD[0, :, :], extent=EXTENT, interpolation='nearest')
cbar = FIG.colorbar(cax, orientation='vertical', shrink=0.25)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <div id='intro' />
Step2: We clearly observe that the least square solutions are different!
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as spla
%matplotlib inline
# https://scikit-learn.org/stable/modules/classes.html#module-sklearn.datasets
from sklearn import datasets
import ipywidgets as widgets
from ipywidgets import interact, interact_manual
import matplotlib as mpl
mpl.rcParams['font.size'] = 14
mpl.rcParams['axes.labelsize'] = 20
mpl.rcParams['xtick.labelsize'] = 14
mpl.rcParams['ytick.labelsize'] = 14
M=8
x1,x2,x3 = 1,2,3
y1,y2,y3 = 5,-1,3
A1 = np.ones((3,2))
A1[:,1]=[x1,x2,x3]
b1=np.array([y1,y2,y3])
A1[0,:]*=4
b1[0]*=4
x1_bar=np.linalg.solve(A1.T @ A1,A1.T @ b1)
print('A1: ', A1)
print('b1: ', b1)
print('x1_bar: ', x1_bar)
A2 = np.ones((3,2))
A2[:,1]=[x1,x2,x3]
b2=np.array([y1,y2,y3])
x2_bar=np.linalg.solve(A2.T @ A2,A2.T @ b2)
print('A2: ', A2)
print('b2: ', b2)
print('x2_bar: ', x2_bar)
def showWeightedOutput(w=1):
# Number of points to be used
m = 10
# Relationship considered
fv = np.vectorize(lambda x, a0, a1: a0+a1*x)
# Coefficients considered
a0, a1 = 1, 4
np.random.seed(0)
# Standard deviation for the error
sigma = 5e-1
# Error to be added
e = np.random.normal(0,sigma,m)
# Generating data points
x = np.linspace(0,1,m)
y = fv(x,a0,a1)+e
# Build the data matrix
A = np.ones((m,2))
A[:,1] = x
# Setting up the right hand side
b = np.copy(y)
A[:2,:]*=w
b[:2]*=w
# Building and solving the normal equations
# A^T A x_bar = A^T b
x_bar = np.linalg.solve(A.T @ A, A.T @ b)
# Showing the comparison between the "original function" and the "least-squared reconstructed approximation".
# We added in red a "sample" of possible functions.
# Notice that the colors used follow the description included in the classnotes.
# This means to consider the following analogy:
# blue: data points, this correspond to the right-hand-side vector "b".
# red: this correspond to the sub-space generated by Ax, i.e. the span of the columns of A.
# violet: This correspond to the least-square solution found.
plt.figure(figsize=(10,10))
for i in range(100):
plt.plot(x,fv(x,x_bar[0]+np.random.normal(0,1),x_bar[1]+np.random.normal(0,1)),'r-',linewidth=1,alpha=0.2)
plt.plot(x,fv(x,a0,a1),'k-',linewidth=8,alpha=0.8)
plt.plot(x,fv(x,x_bar[0],x_bar[1]),'--',color='darkviolet',linewidth=4)
plt.plot(x,fv(x,x_bar[0],x_bar[1]),'r.',markersize=20)
plt.plot(x,y,'b.',markersize=10)
plt.grid(True)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.show()
interact_manual(showWeightedOutput,w=(0.01,101,0.01))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step11: This function splits a string on anything that is not a letter and converts the result to lowercase.
Step12: Let's check the helper methods.
Step13: Note that the word 'the' appears only once, in a good document
Step14: likewise, 'quick' appears only once in a document classified as bad
Step15: Now we can create a placeholder function for an e-mail corpus, and try to detect SPAM.
Step16: We added counting functions to our class; now let's extract probabilities
Step18: This is called conditional probability
Step19: Choosing a category
Step20: Now let's test our classifier
|
<ASSISTANT_TASK:>
Python Code:
# coding: utf-8
import re
def getwords(doc):
splitter = re.compile('\\W*')
# Split the words by non-alpha characters
words = [s.lower() for s in splitter.split(doc) if len(s) > 2 and len(s) < 20]
    # print words  # we use this to inspect the split afterwards
    # return the set of UNIQUE words
    res = dict([(w, 1) for w in words])
    # print 'res:', res
return res
class classifier:
    """A generic classifier that can be used to train a Naive Bayes model"""
    def __init__(self, getfeatures, filename=None):
        # counts of feature/category combinations
        # e.g. {'python': {'bad': 0, 'good': 6}, 'the': {'bad': 3, 'good': 3}}
        self.featCatCombinations = {}
        # counts of documents in each category
        # e.g. {'good': 385, 'bad': 266}
        self.docCountPerCat = {}
        # feature-extraction function (received as input)
        # in our case this is getwords
        self.getfeatures = getfeatures
        # we will see below what this is used for!
        self.thresholds = {}
# #################################################################
    # these are helper methods that keep our class generic
# #################################################################
    def incrFeatCount(self, f, cat):
        """Increment the count of feature f in category cat"""
        self.featCatCombinations.setdefault(f, {})
        self.featCatCombinations[f].setdefault(cat, 0)
        self.featCatCombinations[f][cat] += 1
    def incrCatCount(self, cat):
        """Increment the count of a category"""
        self.docCountPerCat.setdefault(cat, 0)
        self.docCountPerCat[cat] += 1
    def fcount(self, f, cat):
        """Number of times a feature appears in a category"""
        if f in self.featCatCombinations and cat in self.featCatCombinations[f]:
            return float(self.featCatCombinations[f][cat])
        return 0.0
    def catcount(self, cat):
        """Number of items in a category"""
        if cat in self.docCountPerCat:
            return float(self.docCountPerCat[cat])
        return 0
    def totalcount(self):
        """Total number of items"""
        return sum(self.docCountPerCat.values())
    def categories(self):
        """List of categories"""
        return self.docCountPerCat.keys()
# #################################################################
    def train(self, item, cat):
        """
        Take a classified document, split it into features and
        add this document's counts to the totals
        """
        features = self.getfeatures(item)
        # increment the count for each feature in the category
        for f in features:
            self.incrFeatCount(f, cat)
        # increment the category count
        self.incrCatCount(cat)
# #################################################################
    def fprob(self, f, cat):
        """
        Probability that a feature F occurs in category C:
        i.e. the number of times F appears in C over the number of items in C
        """
        if self.catcount(cat) == 0: return 0  # if empty, return 0
return self.fcount(f, cat)/self.catcount(cat)
    def weightedprob(self, f, cat, prf, weight=1.0, ap=0.5):
        """
        Weighted probability (see below)
        weight is the weight (in number of words) given to the assumed probability
        ap is the assumed probability
        """
        # compute the basic probability
        basicprob = prf(f, cat)
        # count the number of times the feature appears across ALL categories
        totalOcc = sum([self.fcount(f, c) for c in self.categories()])
        # compute the weighted probability
        bp = ((weight * ap) + (totalOcc * basicprob)) / (weight + totalOcc)
        # = ( (1 * 0.5) + (total * assumed prob) ) / (1 + total)
return bp
# #################################################################
    # setter and getter for the thresholds of each category, see below
# #################################################################
def setthreshold(self, cat, t):
self.thresholds[cat] = t
def getthreshold(self, cat):
if cat not in self.thresholds: return 1.0
return self.thresholds[cat]
# #################################################################
    # and here, finally: the classification itself
def classify(self, item, default=None):
probs = {}
        # Find the class with the highest probability
max = 0.0
for cat in self.categories():
probs[cat] = self.prob(item, cat)
if probs[cat] > max:
max = probs[cat]
best = cat
        # make sure the probability exceeds threshold * next best
for cat in probs:
if cat == best: continue
if probs[cat] * self.getthreshold(best) > probs[best]: return default
return best
cl=classifier(getwords)
cl.train('the quick brown quick fox jumps over the lazy dog','good')
cl.train('make quick money in the online casino','bad')
cl.fcount('the','good')
cl.fcount('quick','bad')
def sampletrain(cl):
cl.train('Nobody owns the water.', 'good')
cl.train('the quick rabbit jumps fences', 'good')
cl.train('buy pharmaceuticals now', 'bad')
cl.train('make quick money at the online casino', 'bad')
cl.train('Mike is quick to store his money in bonds','good')
cl=classifier(getwords)
sampletrain(cl)
cl.fprob('quick','good')
class NaiveBayes(classifier):
    def docprob(self, item, cat):
        """Multiply together the probabilities of every feature (word) in the document"""
        # extract the features
        features = self.getfeatures(item)
        # multiply the probabilities of each feature together
p = 1
for f in features:
p *= self.weightedprob(f, cat, self.fprob)
return p
    def prob(self, item, cat):
        # compute Pr(Class)
        catprob = self.catcount(cat) / self.totalcount()
        # compute Pr(Doc | Class)
        docprob = self.docprob(item, cat)
        # return Pr(Class | Doc) = Pr(Doc | Class) * Pr(Class)
        return docprob*catprob
if __name__ == '__main__':
cl = classifier(getwords)
sampletrain(cl)
print 'fprob for "money" being good', cl.fprob('money', 'good')
print 'fprob for "money" being bad', cl.fprob('money', 'bad')
print '\nweightedprob for "money" being good', cl.weightedprob('money', 'good', cl.fprob)
print 'weightedprob for "money" being bad', cl.weightedprob('money', 'bad', cl.fprob)
s = "money, that's what I want"
print '\nretraining on new text:\t\t','"'+s+'"\n'
cl.train(s, "bad")
print 'weightedprob for "money" being good', cl.weightedprob('money', 'good', cl.fprob)
print 'weightedprob for "money" being bad', cl.weightedprob('money', 'bad', cl.fprob)
nbClassifier = NaiveBayes(getwords)
sampletrain(nbClassifier)
print "\nis 'quick' good?", nbClassifier.prob('quick', 'good')
print "is 'quick' bad?",nbClassifier.prob('quick', 'bad')
print "it is classified as:",nbClassifier.classify('quick', default='unknown')
s2 = "Take this other quick quiz and make money!"
s3 = "click here quick! Woman want to talk to you!"
    print '\nretraining on two new texts...'
print '\t\t"'+s2+'"'
print '\t\t"'+s3+'"'
nbClassifier.train(s2, "bad")
nbClassifier.train(s3, "bad")
print "\nis 'quick' good?", nbClassifier.prob('quick', 'good')
print "is 'quick' bad?",nbClassifier.prob('quick', 'bad')
print 'now it is:',nbClassifier.classify('quick', default='unknown')
print '\n\n'
sNew = 'my dog, while quick, cannot jump'
#sNew = 'my rabbit, while quick, cannot jump'
    print 'now let us classify a new text we just received:'
print '\t\t"'+sNew+'"\n'
print nbClassifier.classify(sNew, default='unknown')
    print 'and with threshold:',
nbClassifier.setthreshold('bad', 3)
print nbClassifier.classify(sNew, default='unknown')
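    # As an extra illustration, inspect the raw per-category probabilities for the
    # new sentence, to see how close the decision is before the threshold applies.
    for cat in nbClassifier.categories():
        print cat, nbClassifier.prob(sNew, cat)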
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The algorithm to delete an element $x$ from a list $L$ is formulated recursively. There are three cases to distinguish.
|
<ASSISTANT_TASK:>
Python Code:
def sort(L):
if L == []:
return []
x = min(L)
return [x] + sort(delete(x, L))
def delete(x, L):
assert L != [], f'delete({x}, [])'
y, *R = L
if y == x:
return R
return [y] + delete(x, R)
L = [3, 5, 7, 4, 8, 1, 2, 3, 11, 13, 2]
sort(L)
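# delete removes only the first occurrence of x, for example:
delete(3, [3, 5, 7, 3])  # -> [5, 7, 3]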
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TPOT uses a genetic algorithm (implemented with DEAP library) to pick an optimal pipeline for a regression task.
Step2: Sklearn model
Step3: TPOT Classifier
Step4: Export Pseudo Pipeline Code
|
<ASSISTANT_TASK:>
Python Code:
!sudo pip install deap update_checker tqdm xgboost tpot
import pandas as pd
import numpy as np
import psycopg2
import os
import json
from tpot import TPOTClassifier
from sklearn.metrics import classification_report
conn = psycopg2.connect(
user = os.environ['REDSHIFT_USER']
,password = os.environ['REDSHIFT_PASS']
,port = os.environ['REDSHIFT_PORT']
,host = os.environ['REDSHIFT_HOST']
,database = 'tradesy'
)
query = """
select
purchase_dummy
,shipping_price_ratio
,asking_price
,price_level
,brand_score
,brand_size
,a_over_b
,favorite_count
,has_blurb
,has_image
,seasonal_component
,description_length
,product_category_accessories
,product_category_shoes
,product_category_bags
,product_category_tops
,product_category_dresses
,product_category_weddings
,product_category_bottoms
,product_category_outerwear
,product_category_jeans
,product_category_activewear
,product_category_suiting
,product_category_swim
from saleability_model_v2
limit 50000
"""
df = pd.read_sql(query, conn)
target = 'purchase_dummy'
domain = filter(lambda x: x != target, df.columns.values)
df = df.astype(float)
y_all = df[target].values
X_all = df[domain].values
idx_all = np.random.RandomState(1).permutation(len(y_all))
idx_train = idx_all[:int(.8 * len(y_all))]
idx_test = idx_all[int(.8 * len(y_all)):]
# TRAIN AND TEST DATA
X_train = X_all[idx_train]
y_train = y_all[idx_train]
X_test = X_all[idx_test]
y_test = y_all[idx_test]
from sklearn.ensemble import RandomForestClassifier
sklearn_model = RandomForestClassifier()
sklearn_model.fit(X_train, y_train)
sklearn_predictions = sklearn_model.predict(X_test)
print classification_report(y_test, sklearn_predictions)
tpot_model = TPOTClassifier(generations=3, population_size=10, verbosity=2, max_time_mins=10)
tpot_model.fit(X_train, y_train)
tpot_predictions = tpot_model.predict(X_test)
print classification_report(y_test, tpot_predictions)
tpot_model.export('optimal-saleability-model.py')
!cat optimal-saleability-model.py
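# The fitted TPOT object also behaves like a sklearn estimator, so the best pipeline
# it found can be scored directly on the held-out data (illustrative check).
print tpot_model.score(X_test, y_test)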
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Ion
Step2: Once an ion species is initialized, you can call the properties of the ion, typically as a function of pH, ionic strength, and temperature, in that order.
Step3: Note the difference between ionic_strength parameters here. If ionic_strength is 0, the numerical value of 0 is used in each calculation. However, it is impossible to have a solution of pH 0 with ionic_strength of 0.
Step4: search_ion()
Step5: Other db functions
Step6: Solution
Step7: Solutions can be initialized with ion names instead of ions. If so, the Solution calls load_ion to determine the ion identities.
Step8: We can iterate through solutions to quickly calculate the pH of a titration between two ions
Step9: A Solution can also be initialized without ions, e.g. as water.
Step10: A Solution can also be added and multiplied through operator overloading. This can be useful when calculating the results of diltuions, as below.
Step11: Solutions can be titrated to a specified pH. To do so, make a solution, and then specify a titrant, a property, and a target.
Step12: Temperature Effects
Step13: Solution objects send their temperature correction parameters to the object that they contain. In addition, they use the temperature input to correct their ionic strength correction parameters.
Step14: Conservation Functions
Step15: The Alberty Conservation Function
Step16: The Jovin Conservation Function
Step17: The Gas Conservation Functions
Step18: Serialization, Saving, and Loading
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, absolute_import, division
import ionize
# We'll also import numpy to set up some of our inputs.
# And pprint to prettily print some lists.
import numpy
import pprint
# And set up inline plotting.
from matplotlib.pyplot import *
%matplotlib inline
# Prettify numpy printing
numpy.set_printoptions(precision=3)
# Initialize an ion and print it.
acid = ionize.Ion('myAcid', [-1], [5], [-25e-9])
base = ionize.Ion('myBase', [1], [8], [20e-9])
print(acid) # The string includes only the class and name.
print(repr(base)) # The representation contains enough information to reconstruct the ion.
print('myAcid Ka at (I=0 M) =', acid.acidity())
print('myAcid Ka at (I=0.5 M) =', acid.acidity(ionic_strength=0.5))
pH = numpy.linspace(0,14)
for I in [None, 0., 0.001, 0.01, 0.1]:
mu = [base.mobility(p, I) for p in pH]
if I is not None:
label = 'I={} M'.format(I)
else:
label = 'I=None'
plot(pH, mu, label=label)
xlabel('pH'); xlim(0, 14)
ylabel('effective mobility (m^2/v/s)'); ylim(-.1e-8, 2.1e-8)
legend()
show()
db = ionize.Database()
histidine = db['histidine']
print(repr(histidine))
for ionic_strength in (None, 0):
mu_histidine = [histidine.mobility(p, ionic_strength=ionic_strength) for p in pH]
plot(pH, mu_histidine, label="I={}".format(ionic_strength))
xlabel('pH'); xlim([0, 14])
ylabel('effective mobility (m^2/v/s)')
legend()
show()
print("Search results for 'amino'\n--------------------------")
pprint.pprint(db.search('amino'))
print("\nSearch results for 'chloric'\n----------------------------")
pprint.pprint(db.search('chloric'))
print("\nSearch results for 'per'\n------------------------")
pprint.pprint(db.search('per'))
print('\nOh, copper is what I was looking for.')
print(db.load('copper'))
print(len(db.data), 'ions in database.')
database = db  # alias for the Database object created above
hcl = database.load('hydrochloric acid')
tris = database.load('tris')
buffer=ionize.Solution([tris, hcl], [0.1, 0.085])
print 'pH =', buffer.pH
print 'I =', buffer.ionic_strength, 'M'
print 'conductivity =', buffer.conductivity(), 'S/m'
print 'buffering capacity =', buffer.buffering_capacity(), 'M'
print 'debye length =', buffer.debye(), 'm'
print [ion.name for ion in ionize.Solution(['bis-tris', 'acetic acid'], [0.1, 0.03]).ions]
print ionize.Solution(['bis-tris', 'acetic acid'], [0.1, 0.03]).concentration(database.load('acetic acid'))
c_tris = 0.1
c_hcl = numpy.linspace(0.0, 0.2, 50)
t_pH = [ionize.Solution(['tris', 'hydrochloric acid'], [c_tris, c_h], temperature=25).pH for c_h in c_hcl]
plot(c_hcl/c_tris, t_pH)
xlabel('[HCl]/[Tris]')
ylabel('pH')
show()
water = ionize.Solution()
print 'I =', water.ionic_strength, 'M'
print 'pH =', water.pH
print 'conductivity =', water.conductivity(), 'S/m'
print 'Stock:', buffer
dilution = 0.5 * buffer + 0.5 * water
print 'Dilution:', dilution
buff = ionize.Solution([tris], 0.1)
print buff.titrate('hydrochloric acid', 8.2)
print buff.titrate('hydrochloric acid', 3)
print buff.conductivity()
print repr(buff.titrate('hydrochloric acid', 3, titration_property = 'conductivity'))
print repr(buff.titrate('hydrochloric acid', 8))
silver = database.load('silver')
tris = database.load('tris')
T = numpy.linspace(20.0, 80.0)
mu_silver = [silver.absolute_mobility(Tp) for Tp in T]
mu_tris = [tris.absolute_mobility(Tp) for Tp in T]
pKa_silver = [silver.pKa(0, Tp) for Tp in T]
pKa_tris = [tris.pKa(0, Tp) for Tp in T]
figure()
plot(T, mu_silver, label = 'Silver')
plot(T, mu_tris, label = 'Tris')
legend(loc = 'upper left')
xlabel('Temperature ($^{\circ}$C)'); ylabel('Absolute mobility ($m^2V^{-1}s^{-1}$)')
show()
figure()
plot(T, pKa_silver, label = 'Silver')
plot(T, pKa_tris, label = 'Tris')
legend(loc = 'lower left')
xlabel('Temperature ($^{\circ}$C)'); ylabel('pKa')
show()
buffer_ref = ionize.Solution(['tris', 'hydrochloric acid'], [.200, .100], temperature=25.)
mu_ref = buffer_ref.ions[1].mobility()
mup = []
pH = []
I = []
mu=[]
cond = []
for Tp in T:
buffer = ionize.Solution([tris, hcl], [.200, .100], temperature=Tp)
mu.append(buffer.ions[1].mobility())
mup.append(buffer.ions[1].mobility()/mu_ref)
pH.append(buffer.pH)
I.append(buffer.ionic_strength)
cond.append(buffer.conductivity())
# mup.append(hcl.nightingale_function(Tp))
cond_norm = [c / cond[0] for c in cond]
figure()
plot(T, pH); xlabel('Temperature ($^{\circ}$C)'); ylabel('pH')
show()
figure()
plot(T, mup, label='chloride'); xlabel('Temperature ($^{\circ}$C)'); ylabel('$\mu$(T)/$\mu$(T$_o$)'); legend(loc='upper left')
show()
saltwater = ionize.Solution(['sodium', 'hydrochloric acid'], [0.1, 0.1])
print saltwater.kohlrausch()
print buffer_ref.ions
print buffer_ref.kohlrausch()
tcap = ionize.Solution(['tris', 'caproic acid'], [0.1, 0.05])
print tcap.alberty()
tcit = ionize.Solution(['tris', 'citric acid'], [0.1, 0.05])
print tcit.alberty()
print tcap.jovin()
print tcit.jovin()
print tcap.gas()
print tcit.gas()
# %load_ext snakeviz
# %%snakeviz
# database = ionize.Database()
# pH = np.linspace(0, 14)
# for ion in database:
# for p in pH:
# ion.mobility(p)
database
import itertools
concentrations = np.linspace(0, 0.14)
ref_mob = 50.e-9
z = [1, 2]
for zp, zm in itertools.product(z, repeat=2):
positive_ion = ionize.Ion('positive', [zp], [14], [ref_mob])
negative_ion = ionize.Ion('negative', [-zm], [0], [-ref_mob])
mob = []
i = []
for c in concentrations:
sol = ionize.Solution([positive_ion, negative_ion], [c/zp, c/zm])
mob.append(sol.ions[0].actual_mobility() / ref_mob )
i.append(sol.ionic_strength)
plot(i, mob, label='-{}:{}'.format(zm, zp))
ylim(0, 1)
# xlim(0, .14)
legend(loc='lower left')
xlabel('Concentration (M)')
ylabel('$\mu$/$\mu_o$')
show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Case study
Step2: <i><b>Caption
Step3: Observations
Step4: Emcee hammer
Step7: Applying emcee Hammer
Step8: Likelihood function (the probability distribution over datasets, conditioned on the model parameters)
Step9: Full log-likelihood
Step10: Comparison: frequentist vs. Bayesian
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("whitegrid")
from IPython.display import Image
Image(filename='batcheactor.png', width=300)
#The model consist of differential equations, which needs integration (solver-based)
from scipy.integrate import odeint
def deriv_works(u, t, parameters, constants): #Derivative used in the general model function
'''
Differential equations of the respirometric model in code
'''
#Define the parameters
mumax = np.float64(parameters[0])
Y = np.float64(parameters[1])
Ks = np.float64(parameters[2])
tau = np.float64(parameters[3])
b = np.float64(constants[0])
kla = np.float64(constants[1])
SOeq = np.float64(constants[2])
monod = mumax*(u[1])/(u[1]+Ks) #Monod Kinetic
expo = 1.0 - np.exp(-t/tau)
#The model equations
dXdt = (expo*monod - b)*u[0] #Biomassa
dSsdt = -(1.0/Y)*expo*monod*u[0] #Substraat
dOdt = kla*(SOeq-u[2])-((1-Y)/Y)*expo*monod*u[0] #Oxygen
return np.array([dXdt, dSsdt, dOdt])
def respirometer_model(parameters, initial_cond, time):
'''
Run the respirometric model
'''
#Define the constants - experiment specific
b = 0.62
kla = 369.7334962
SOeq = 8.4
constant_values = np.array([b, kla, SOeq])
#Define the initial conditions (Constants)Ss0
Ss0 = 58.4899
#Define the initial conditions (Uncertain) -> X0
X0 = initial_cond[0]
yinit = np.array([X0, Ss0, SOeq])
#Define the necessary parameters
mumax = np.float64(parameters[0])
Y = np.float64(parameters[1])
Ks = np.float64(parameters[2])
tau = np.float64(parameters[3])
#Solve with LSODA scheme
y, infodic = odeint(deriv_works, yinit, time, full_output=True,
printmessg=False, args=(parameters, constant_values))
#Get outputs
X = y[:, 0]
Ss = y[:, 1]
O = y[:, 2]
OUR_ex=((1 - np.exp(-time/tau))*mumax*(1-Y)/Y*Ss/(Ss+Ks)*X)/(24*60)
return [time, X, Ss, O, OUR_ex, infodic]
#SET TIME
modeltime = np.arange(0.,0.05,0.0005)
#Since everything is in day, this reprecents more or less 1 hour and 12 minutes of time.
#set X0 as initial condition
X0 = 6.75632395e+02
#set the parameter values
mumax=4.
Y=0.78
Ks=0.4
tau=2.25e-04
parameters = [mumax, Y, Ks, tau]
uncertain_initial_condition = np.array([X0])
modeloutput = respirometer_model(parameters, uncertain_initial_condition,
modeltime)
#check if the integration succesful:
print modeloutput[-1]['message']
fig = plt.figure(figsize=(10,10))
fig.subplots_adjust(hspace=0.2, wspace = 0.3)
ax1 = fig.add_subplot(221)
ax1.plot(modeloutput[0], modeloutput[1],'k',label = 'X')
ax1.set_xticklabels([])
ax1.legend(loc=4)
ax2 = fig.add_subplot(222)
ax2.plot(modeloutput[0], modeloutput[2],'k',label = 'S')
ax2.set_xticklabels([])
ax2.legend(loc=4)
ax3 = fig.add_subplot(223)
ax3.plot(modeloutput[0], modeloutput[3],'k',label = '0')
ax3.legend(loc=4)
ax3.set_xlabel('Time')
ax4 = fig.add_subplot(224)
ax4.plot(modeloutput[0], modeloutput[4],'k',label = 'OUR')
ax4.legend(loc=4)
ax4.set_xlabel('Time')
observations = pd.read_csv("respirometer_data.txt", sep="\t", index_col=0,
names=["DO", "OURex"], skiprows=2)
observations.head()
observations.plot(subplots=True)
from IPython.display import IFrame
IFrame('http://dan.iel.fm/emcee/current/', 800, 300)
def run_respiro(parameters):
modeltime = np.arange(0.,0.05,0.0005)
X0 = 6.75632395e+02
mumax, Y, Ks, tau = parameters
modeloutput = respirometer_model(parameters, uncertain_initial_condition,
modeltime)
return modeloutput
def get_modelerror(parameters, observed):
    # Run the model and return the residuals of its OUR output against the observed
    # values. This sketch assumes `observed` is aligned with the model time grid.
    model_our = run_respiro(parameters)[4]
    return model_our - observed
def lnprior(parameters):
mumax, Y, Ks, tau = parameters
if 3.5 < mumax < 4.5 and 0.77 < Y < 0.81 and 0.3 < Ks < 0.5:
return 0.0
return -np.inf
# Gaussian likelihood, no measurement-error term, ...
def lnlike(parameters, observed):
mumax, Y, Ks, tau = parameters
    model = run_respiro(parameters)[4]  # modelled OUR, assumed aligned with `observed`
    N = len(observed)
    sigma2 = np.std(observed)**2.
    loglike = -N/2. - N*np.log(sigma2)/2. - ((model - observed)**2).sum()/(2.*sigma2)
return loglike
def lnprob(parameters, observed):
lp = lnprior(parameters)
if not np.isfinite(lp):
return -np.inf
return lp + lnlike(parameters, observed)
import emcee
ndim, nwalkers = 4, 100
# Start the walkers in a small ball around the point estimate used for the forward run
# (illustrative choice); lnprob expects the observed OUR series as its second argument.
p0 = np.array([4., 0.78, 0.4, 2.25e-04])
pos = [p0 * (1 + 1e-3*np.random.randn(ndim)) for i in range(nwalkers)]
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=(observations["OURex"].values,))
sampler.run_mcmc(pos, 500)
import numpy as np
import corner
ndim, nsamples = 5, 10000
samples = np.random.randn(ndim * nsamples).reshape([nsamples, ndim])
figure = corner.corner(samples)
samples
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Set Configuration
Step2: 3. Enter CM360 Report To Storage Recipe Parameters
Step3: 4. Execute CM360 Report To Storage
|
<ASSISTANT_TASK:>
Python Code:
!pip install git+https://github.com/google/starthinker
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
FIELDS = {
'auth_read':'user', # Credentials used for reading data.
'auth_write':'service', # Credentials used for writing data.
'account':'',
'report_id':'',
'report_name':'',
'bucket':'',
'path':'CM_Report',
}
print("Parameters Set To: %s" % FIELDS)
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'dcm':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'user','description':'Credentials used for reading data.'}},
'report':{
'account':{'field':{'name':'account','kind':'integer','order':2,'default':''}},
'report_id':{'field':{'name':'report_id','kind':'integer','order':3,'default':''}},
'name':{'field':{'name':'report_name','kind':'string','order':4,'default':''}}
},
'out':{
'storage':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'bucket':{'field':{'name':'bucket','kind':'string','order':5,'default':''}},
'path':{'field':{'name':'path','kind':'string','order':6,'default':'CM_Report'}}
}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can also sample independently from this toy LogPDF, and add that to the visualisation
Step2: We now try to sample from the distribution with MCMC
Step3: Now check how close the result is to the expected result, using the Kullback-Leibler divergence, and compare this to the result from sampling directly.
Step4: Hamiltonian Monte Carlo fares much better on this curved density.
Step5: Visualising the path of a single HMC chain, we see that it moves naturally along contours although does occassionally suffer from divergent iterations (red dots) in the neck of the banana due to the varying posterior curvature throughout the domain.
|
<ASSISTANT_TASK:>
Python Code:
import pints
import pints.toy
import numpy as np
import matplotlib.pyplot as plt
# Create log pdf
log_pdf = pints.toy.TwistedGaussianLogPDF(dimension=2)
# Contour plot of pdf
levels = np.linspace(-50, -1, 20)
x = np.linspace(-50, 50, 250)
y = np.linspace(-100, 20, 250)
X, Y = np.meshgrid(x, y)
Z = [[log_pdf([i, j]) for i in x] for j in y]
plt.contour(X, Y, Z, levels = levels)
plt.show()
direct = log_pdf.sample(15000)
plt.contour(X, Y, Z, levels=levels, colors='k', alpha=0.2)
plt.scatter(direct[:, 0], direct[:, 1], alpha=0.2)
plt.xlim(-50, 50)
plt.ylim(-100, 20)
plt.show()
# Create an adaptive covariance MCMC routine
x0 = np.random.uniform(-25, 25, size=(3, 2))
mcmc = pints.MCMCController(log_pdf, 3, x0, method=pints.HaarioBardenetACMC)
# Stop after 10000 iterations
mcmc.set_max_iterations(3000)
# Disable logging
mcmc.set_log_to_screen(False)
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# Discard warm-up
chains = [chain[1000:] for chain in chains]
stacked = np.vstack(chains)
plt.contour(X, Y, Z, levels=levels, colors='k', alpha=0.2)
plt.scatter(stacked[:, 0], stacked[:, 1], alpha=0.2)
plt.xlim(-50, 50)
plt.ylim(-100, 20)
plt.show()
print(log_pdf.kl_divergence(stacked))
print(log_pdf.kl_divergence(direct))
# Create an adaptive covariance MCMC routine
x0 = np.random.uniform(-25, 25, size=(3, 2))
sigma0 = [5, 5]
mcmc = pints.MCMCController(log_pdf, 3, x0, method=pints.HamiltonianMCMC, sigma0=sigma0)
# Stop after 10000 iterations
mcmc.set_max_iterations(3000)
# Disable logging
mcmc.set_log_to_screen(False)
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
chains1 = [chain[1000:] for chain in chains]
stacked = np.vstack(chains1)
print(log_pdf.kl_divergence(stacked))
print(log_pdf.kl_divergence(direct))
divergent_transitions = mcmc.samplers()[0].divergent_iterations()
plt.contour(X, Y, Z, levels=levels, colors='k', alpha=0.2)
plt.plot(chains[0][:, 0], chains[0][:, 1], alpha=0.5)
plt.scatter(chains[0][divergent_transitions, 0], chains[0][divergent_transitions, 1], color='red')
plt.xlim(-50, 50)
plt.ylim(-100, 20)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Skill 1
Step2: Skill 2
Step3: Skill 3
Step4: Skill 4
Step5: Skill 4
Step6: Save scores
Step7: Normalized Taylor diagrams
|
<ASSISTANT_TASK:>
Python Code:
import os
try:
import cPickle as pickle
except ImportError:
import pickle
run_name = '2014-07-07'
fname = os.path.join(run_name, 'config.pkl')
with open(fname, 'rb') as f:
config = pickle.load(f)
import numpy as np
from pandas import DataFrame, read_csv
from utilities import (load_secoora_ncs, to_html,
save_html, apply_skill)
fname = '{}-all_obs.csv'.format(run_name)
all_obs = read_csv(os.path.join(run_name, fname), index_col='name')
def rename_cols(df):
columns = dict()
for station in df.columns:
mask = all_obs['station'] == station
name = all_obs['station'][mask].index[0]
columns.update({station: name})
return df.rename(columns=columns)
from utilities import mean_bias
dfs = load_secoora_ncs(run_name)
df = apply_skill(dfs, mean_bias, remove_mean=False, filter_tides=False)
df = rename_cols(df)
skill_score = dict(mean_bias=df.copy())
# Filter out stations with no valid comparison.
df.dropna(how='all', axis=1, inplace=True)
df = df.applymap('{:.2f}'.format).replace('nan', '--')
html = to_html(df.T)
fname = os.path.join(run_name, 'mean_bias.html'.format(run_name))
save_html(fname, html)
html
from utilities import rmse
dfs = load_secoora_ncs(run_name)
df = apply_skill(dfs, rmse, remove_mean=True, filter_tides=False)
df = rename_cols(df)
skill_score['rmse'] = df.copy()
# Filter out stations with no valid comparison.
df.dropna(how='all', axis=1, inplace=True)
df = df.applymap('{:.2f}'.format).replace('nan', '--')
html = to_html(df.T)
fname = os.path.join(run_name, 'rmse.html'.format(run_name))
save_html(fname, html)
html
from utilities import r2
dfs = load_secoora_ncs(run_name)
df = apply_skill(dfs, r2, remove_mean=True, filter_tides=False)
df = rename_cols(df)
skill_score['r2'] = df.copy()
# Filter out stations with no valid comparison.
df.dropna(how='all', axis=1, inplace=True)
df = df.applymap('{:.2f}'.format).replace('nan', '--')
html = to_html(df.T)
fname = os.path.join(run_name, 'r2.html'.format(run_name))
save_html(fname, html)
html
from utilities import r2
dfs = load_secoora_ncs(run_name)
df = apply_skill(dfs, r2, remove_mean=True, filter_tides=True)
df = rename_cols(df)
skill_score['low_pass_r2'] = df.copy()
# Filter out stations with no valid comparison.
df.dropna(how='all', axis=1, inplace=True)
df = df.applymap('{:.2f}'.format).replace('nan', '--')
html = to_html(df.T)
fname = os.path.join(run_name, 'low_pass_r2.html'.format(run_name))
save_html(fname, html)
html
from utilities import r2
dfs = load_secoora_ncs(run_name)
# SABGOM dt = 3 hours.
dfs = dfs.swapaxes('items', 'major').resample('3H').swapaxes('items', 'major')
df = apply_skill(dfs, r2, remove_mean=True, filter_tides=False)
df = rename_cols(df)
skill_score['low_pass_resampled_3H_r2'] = df.copy()
# Filter out stations with no valid comparison.
df.dropna(how='all', axis=1, inplace=True)
df = df.applymap('{:.2f}'.format).replace('nan', '--')
html = to_html(df.T)
fname = os.path.join(run_name, 'low_pass_resampled_3H_r2.html'.format(run_name))
save_html(fname, html)
html
fname = os.path.join(run_name, 'skill_score.pkl')
with open(fname,'wb') as f:
pickle.dump(skill_score, f)
%matplotlib inline
import matplotlib.pyplot as plt
from utilities.taylor_diagram import TaylorDiagram
def make_taylor(samples):
fig = plt.figure(figsize=(9, 9))
dia = TaylorDiagram(samples['std']['OBS_DATA'],
fig=fig,
label="Observation")
colors = plt.matplotlib.cm.jet(np.linspace(0, 1, len(samples)))
# Add samples to Taylor diagram.
samples.drop('OBS_DATA', inplace=True)
for model, row in samples.iterrows():
dia.add_sample(row['std'], row['corr'], marker='s', ls='',
label=model)
# Add RMS contours, and label them.
contours = dia.add_contours(colors='0.5')
plt.clabel(contours, inline=1, fontsize=10)
# Add a figure legend.
kw = dict(prop=dict(size='small'), loc='upper right')
leg = fig.legend(dia.samplePoints,
[p.get_label() for p in dia.samplePoints],
numpoints=1, **kw)
return fig
dfs = load_secoora_ncs(run_name)
# Bin and interpolate all series to 1 hour.
freq = '1H'
for station, df in list(dfs.iteritems()):
df = df.resample(freq).interpolate().dropna(axis=1)
if 'OBS_DATA' in df:
samples = DataFrame.from_dict(dict(std=df.std(),
corr=df.corr()['OBS_DATA']))
else:
continue
samples[samples < 0] = np.NaN
samples.dropna(inplace=True)
if len(samples) <= 2: # 1 obs 1 model.
continue
fig = make_taylor(samples)
fig.savefig(os.path.join(run_name, '{}.png'.format(station)))
plt.close(fig)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The data
Step2: It is impossible to load the file entirely into memory. We look at the first few lines.
Step3: We count the number of lines
Step4: We can also use pandas.read_csv to look at the first few lines.
Step5: Breakdown of expenses by age
Step6: We need to compute the age distribution for each dataframe.
Step7: pandas versions differ slightly. Here is the one used for this notebook.
Step8: This chart is hard to interpret without comparing it to the age pyramid, but that is not the goal here. Each statistic takes a little while. What if we drew a 1% random sample?
Step9: pickle
Step10: SQLite3
Step11: It takes a little while, but computing statistics is faster afterwards.
Step12: We can also program magic commands to lighten the syntax.
Step13: dask
Step14: The next three lines are very fast.
Step15: The next one, much less so.
Step16: That is slower than with SQLite3, but in this example dask re-reads the file for every computation because it does not store the dataframe in a more efficient intermediate format such as parquet. dask offers ideas for optimizing computations: Dask DataFrame Performance Tips.
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
import os
filename = "bigdata/A201612_small.csv"
# to work with a bigger file (4 GB) instead
# filename = "bigdata/A201612.csv"
xlsfile = "bigdata/Lexique_open-DAMIR.xls"
if not os.path.exists(filename):
from pyensae.datasource import download_data
if not os.path.exists("bigdata"):
os.mkdir("bigdata")
filenamegz = filename.replace(".csv", ".csv.gz")
download_data(os.path.split(filenamegz)[-1], whereTo=".")
download_data(os.path.split(xlsfile)[-1], whereTo="bigdata")
if not os.path.exists(filename):
filename = os.path.split(filename)[-1]
os.stat(filename).st_size / 2**30
with open(filename, "r") as f:
for i, line in enumerate(f):
print(line.strip("\r\n"))
if i >= 5:
break
with open(filename, "r") as f:
nb = 0
for line in f:
nb += 1
nb
import time
import pandas
begin = time.perf_counter()
df = pandas.read_csv(filename, sep=";", nrows=100)
print(time.perf_counter() - begin)
df.head()
df.head().T
variables = pandas.read_excel(xlsfile, skiprows=2)
cols = variables.columns[:2]
variables[cols].dropna()
dfs = []
for i, df in enumerate(pandas.read_csv(filename,
sep=";", iterator=True, chunksize=10000)):
dfs.append(df)
if i >= 5:
break
dfs[0].head()
from datetime import datetime
ages = []
for i, df in enumerate(pandas.read_csv(filename,
sep=";", iterator=True, chunksize=10000)):
sub = df[["AGE_BEN_SNDS", "PRS_REM_MNT"]].copy()
gr = sub.groupby("AGE_BEN_SNDS").sum()
ages.append(gr)
if len(ages) % 100 == 0:
print(datetime.now(), len(ages), "lignes", 10000 * len(ages))
pandas.__version__
age = pandas.concat(ages)
agg = age.groupby("AGE_BEN_SNDS").sum()
agg.plot(kind="bar");
ech = []
for i, df in enumerate(pandas.read_csv(filename,
sep=";", iterator=True, chunksize=10000)):
sub = df.sample(frac=0.01)
ech.append(sub)
if len(ech) % 100 == 0:
print(datetime.now(), len(ech), "lignes", 10000 * len(ech))
echantillon = pandas.concat(ech)
echantillon.shape
echantillon[["AGE_BEN_SNDS", "PRS_REM_MNT"]] \
.groupby("AGE_BEN_SNDS").sum().plot(kind="bar");
echantillon.to_csv("echantillon.csv", sep=";", index=False)
import os
os.stat("echantillon.csv").st_size / 2**20
begin = time.perf_counter()
ech2 = pandas.read_csv("echantillon.csv", sep=";")
print(time.perf_counter()-begin)
ech2.shape
echantillon.to_pickle("echantillon.pickle")
begin = time.perf_counter()
ech3 = pandas.read_pickle("echantillon.pickle")
print(time.perf_counter()-begin)
import sqlite3
cnx = sqlite3.connect('damir2.db3')
for i, df in enumerate(pandas.read_csv(filename,
sep=";", iterator=True, chunksize=100000)):
df.to_sql(con=cnx, name="damir", if_exists="replace" if i == 0 else "append")
if (i+1) % 10 == 0:
print(datetime.now(), (i+1) * 100000, "lignes")
print("done")
import time
begin = time.perf_counter()
dfage = pandas.read_sql(
'select AGE_BEN_SNDS, count(*) from damir group by AGE_BEN_SNDS', cnx)
print("time", time.perf_counter() - begin)
dfage
%load_ext pyensae
%SQL_connect damir.db3
%SQL_tables
%%SQL
SELECT * FROM damir LIMIT 5
%lsmagic
import dask.dataframe
dfdk = dask.dataframe.read_csv(filename, sep=";")
dfdk.head(n=2)
import dask
dask.__version__
dfdk2 = dfdk[["AGE_BEN_SNDS", "PRS_REM_MNT"]]
grdk = dfdk2.groupby("AGE_BEN_SNDS").sum()
begin = time.perf_counter()
res = grdk.compute()
print(time.perf_counter() - begin)
ech4 = dfdk.sample(frac=0.01)
dfech = ech4.compute()
type(dfech)
res
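# A way around the repeated CSV parsing is to convert the file once to a columnar
# format such as parquet and point dask at that instead (illustrative sketch; it
# assumes pyarrow or fastparquet is installed, and the output path is arbitrary).
dfdk.to_parquet("bigdata/A201612_parquet")
dfp = dask.dataframe.read_parquet("bigdata/A201612_parquet")
dfp[["AGE_BEN_SNDS", "PRS_REM_MNT"]].groupby("AGE_BEN_SNDS").sum().compute()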
import zarr
z = zarr.zeros((10000, 10000), chunks=(1000, 1000), dtype='i4')
z
z[4,5]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Launching the graph in a session
Step2: Sessions should be closed to release resources. You can also enter a Session with a "with" block. The Session closes automatically at the end of the with block.
Step3: If you have more than one GPU available on your machine, to use a GPU beyond the first you must assign ops to it explicitly. Use with...Device statements to specify which CPU or GPU to use for operations
Step4: Devices are specified with strings. The currently supported devices are
Step5: Interactive Usage
Step6: Tensors
Step7: Fetches
Step8: Feeds
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
# Create a Constant op that produces a 1x2 matrix. The op is
# added as a node to the default graph.
#
# The value returned by the constructor represents the output
# of the Constant op.
matrix1 = tf.constant([[3., 3.]])
# Create another Constant that produces a 2x1 matrix.
matrix2 = tf.constant([[2.],[2.]])
# Create a Matmul op that takes 'matrix1' and 'matrix2' as inputs.
# The returned value, 'product', represents the result of the matrix
# multiplication.
product = tf.matmul(matrix1, matrix2)
print(product)
# Launch the default graph.
sess = tf.Session()
# To run the matmul op we call the session 'run()' method, passing 'product'
# which represents the output of the matmul op. This indicates to the call
# that we want to get the output of the matmul op back.
#
# All inputs needed by the op are run automatically by the session. They
# typically are run in parallel.
#
# The call 'run(product)' thus causes the execution of three ops in the
# graph: the two constants and matmul.
#
# The output of the op is returned in 'result' as a numpy `ndarray` object.
result = sess.run(product)
print(result)
# ==> [[ 12.]]
# Close the Session when we're done.
sess.close()
with tf.Session() as sess:
result = sess.run([product])
print(result)
sess.close()
with tf.Session() as sess:
with tf.device("/gpu:1"):
matrix1 = tf.constant([[3., 3.]])
matrix2 = tf.constant([[2.],[2.]])
product = tf.matmul(matrix1, matrix2)
print(result)
sess.close()
with tf.Session("grpc://example:2222") as sess:
# Calls to sess.run(...) will be executed on the cluster.
with tf.device("/gpu:1"):
matrix1 = tf.constant([[3., 3.]])
matrix2 = tf.constant([[2.],[2.]])
product = tf.matmul(matrix1, matrix2)
#result = sess.run([product])
print(result)
sess.close()
# Enter an interactive TensorFlow Session.
import tensorflow as tf
sess = tf.InteractiveSession()
x = tf.Variable([1.0, 2.0])
a = tf.constant([3.0, 3.0])
# Initialize 'x' using the run() method of its initializer op.
x.initializer.run()
# Add an op to subtract 'a' from 'x'. Run it and print the result
sub = tf.sub(x, a)
print(sub.eval())
# ==> [-2. -1.]
# Close the Session
sess.close()
# Reset the computation graph
tf.reset_default_graph()
# Create a Variable, that will be initialized to the scalar value 0.
state = tf.Variable(0, name="counter")
# Create an Op to add one to `state`.
one = tf.constant(1)
new_value = tf.add(state, one)
update = tf.assign(state, new_value)
# Launch the graph and run the ops.
with tf.Session() as sess:
tf.global_variables_initializer().run()
print(sess.run(state))
for _ in range(3):
sess.run(update)
print(sess.run(state))
# Reset the computation graph
tf.reset_default_graph()
#
input1 = tf.constant([3.0])
input2 = tf.constant([2.0])
input3 = tf.constant([5.0])
intermed = tf.add(input2, input3)
mul = tf.mul(input1, intermed)
with tf.Session() as sess:
result = sess.run([mul, intermed])
print(result)
# Reset the computation graph
tf.reset_default_graph()
#
input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)
output = tf.mul(input1, input2)
with tf.Session() as sess:
print(sess.run([output], feed_dict={input1:[7.], input2:[2.]}))
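# Added sketch (not from the original notebook): fetches and feeds can be
# combined in a single run() call -- here two ops are fetched at once while
# both placeholders are fed.
input_a = tf.placeholder(tf.float32, shape=[1])
input_b = tf.placeholder(tf.float32, shape=[1])
summed = tf.add(input_a, input_b)
doubled = tf.add(summed, summed)
with tf.Session() as sess:
    print(sess.run([summed, doubled], feed_dict={input_a: [3.], input_b: [4.]}))
# ==> [array([ 7.], dtype=float32), array([ 14.], dtype=float32)]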
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can count annotated mitochondria by referencing the mitochondria channel
Step2: We can now use the built-in connected-components to count mitochondria.
|
<ASSISTANT_TASK:>
Python Code:
import ndio.remote.OCP as OCP
oo = OCP()
token = "kasthuri2015_ramon_v1"
mito_cutout = oo.get_cutout(token, 'mitochondria', 694, 1794, 1750, 2460, 1004, 1379, resolution=3)
import ndio.utils.stats as ndstats
c, f = ndstats.connected_components(mito_cutout)
print "There are {} mitochondria total in the annotated volume.".format(f)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For now, use just the numerical datatypes. They are below as num_columns
Step2: The shot_made_flag is the result (0 or 1) of the shot that Kobe took. Some of the values are missing (e.g. NaN). Drop them.
Step3: Use num_columns and the kobe dataframe to fit() the models. Choose one or more of the entries in num_columns as features. These models are used to predict whether Kobe will make or miss a shot given the input parameters provided.
Step4: The following is a reminder of how the SciKit-Learn Models can be interfaced
|
<ASSISTANT_TASK:>
Python Code:
kobe = pd.read_csv('../data/kobe.csv')
[(col, dtype) for col, dtype in zip(kobe.columns, kobe.dtypes) if dtype != 'object']
num_columns = [col for col, dtype in zip(kobe.columns, kobe.dtypes) if dtype != 'object']
num_columns
kobe = kobe[kobe.shot_made_flag.notnull()]  # drop shots with a missing result
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
sns.set(font_scale=1.5)
# fit a linear regression model and store the predictions
example = pd.DataFrame({'a':[1,2,3,4,5,6], 'b':[1,1,0,0,0,1]})
feature_cols = ['a']
X = example[feature_cols]
y = example.b
from sklearn.linear_model import LinearRegression
linreg = LinearRegression()
linreg.fit(X, y)
example['pred'] = linreg.predict(X)
# scatter plot that includes the regression line
plt.scatter(example.a, example.b)
plt.plot(example.a, example.pred, color='red')
plt.xlabel('a')
plt.ylabel('b')
from sklearn.metrics import accuracy_score
accuracy_score(example.b, example.pred.astype(int))
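# Illustrative sketch (added): fit a classifier on the kobe dataframe itself.
# The feature names below are assumptions about the Kaggle Kobe dataset --
# swap in whichever entries of num_columns you actually chose.
from sklearn.linear_model import LogisticRegression
feature_cols = ['shot_distance', 'minutes_remaining', 'period']  # hypothetical choice
X_kobe = kobe[feature_cols]
y_kobe = kobe.shot_made_flag
logreg = LogisticRegression()
logreg.fit(X_kobe, y_kobe)
accuracy_score(y_kobe, logreg.predict(X_kobe))  # training accuracy, as in the toy example above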
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load capacity curves
Step2: Idealise pushover curves
Step3: Load damage state thresholds
Step4: Calculate fragility functions
Step5: Plot fragility functions
Step6: Save fragility functions
Step7: Obtain vulnerability function
Step8: Plot vulnerability function
Step9: Save vulnerability function
|
<ASSISTANT_TASK:>
Python Code:
from rmtk.vulnerability.derivation_fragility.R_mu_T_no_dispersion.dolsek_fajfar import DF2004
from rmtk.vulnerability.common import utils
%matplotlib inline
capacity_curves_file = "../../../../../../rmtk_data/capacity_curves_Vb-dfloor.csv"
input_spectrum = "../../../../../../rmtk_data/FEMAP965spectrum.txt"
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
Sa_ratios = utils.get_spectral_ratios(capacity_curves, input_spectrum)
utils.plot_capacity_curves(capacity_curves)
idealised_type = "quadrilinear"
idealised_capacity = utils.idealisation(idealised_type, capacity_curves)
utils.plot_idealised_capacity(idealised_capacity, capacity_curves, idealised_type)
damage_model_file = "../../../../../../rmtk_data/damage_model_ISD.csv"
damage_model = utils.read_damage_model(damage_model_file)
montecarlo_samples = 50
corner_periods = [0.5, 1.8]
fragility_model = DF2004.calculate_fragility(capacity_curves, idealised_capacity, damage_model, montecarlo_samples, Sa_ratios, corner_periods)
minIML, maxIML = 0.01, 2.00
utils.plot_fragility_model(fragility_model, minIML, maxIML)
taxonomy = "RC"
minIML, maxIML = 0.01, 2.00
output_type = "csv"
output_path = "../../../../../../rmtk_data/output/"
utils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path)
cons_model_file = "../../../../../../rmtk_data/cons_model.csv"
imls = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50,
0.60, 0.70, 0.80, 0.90, 1.00, 1.20, 1.40, 1.60, 1.80, 2.00]
distribution_type = "lognormal"
cons_model = utils.read_consequence_model(cons_model_file)
vulnerability_model = utils.convert_fragility_vulnerability(fragility_model, cons_model,
imls, distribution_type)
utils.plot_vulnerability_model(vulnerability_model)
taxonomy = "RC"
output_type = "csv"
output_path = "../../../../../../rmtk_data/output/"
utils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Use this to automate the process. Be careful: it can overwrite current results.
Step2: Now we will obtain the data from the calculated empirical variogram.
Step3: restricted w/ all data spatial correlation parameters
Step4: Instantiating the variogram object
Step5: Instantiating theoretical variogram model
|
<ASSISTANT_TASK:>
Python Code:
# Load Biospytial modules and etc.
%matplotlib inline
import sys
sys.path.append('/apps')
sys.path.append('..')
sys.path.append('../spystats')
import django
django.setup()
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
## Use the ggplot style
plt.style.use('ggplot')
import tools
from HEC_runs.fit_fia_logbiomass_logspp_GLS import prepareDataFrame,loadVariogramFromData,buildSpatialStructure, calculateGLS, initAnalysis, fitGLSRobust
section = initAnalysis("/RawDataCSV/idiv_share/FIA_Plots_Biomass_11092017.csv",
"/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv",
-130,-60,30,40)
#section = initAnalysis("/RawDataCSV/idiv_share/plotsClimateData_11092017.csv",
# "/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv",
# -85,-80,30,35)
# IN HEC
#section = initAnalysis("/home/hpc/28/escamill/csv_data/idiv/FIA_Plots_Biomass_11092017.csv","/home/hpc/28/escamill/spystats/HEC_runs/results/variogram/data_envelope.csv",-85,-80,30,35)
section.shape
gvg,tt = loadVariogramFromData("/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv",section)
gvg.plot(refresh=False,with_envelope=True)
resum,gvgn,resultspd,results = fitGLSRobust(section,gvg,num_iterations=1,distance_threshold=1000000)
resum.as_text()
section
plt.plot(resultspd.rsq)
plt.title("GLS feedback algorithm")
plt.xlabel("Number of iterations")
plt.ylabel("R-sq fitness estimator")
resultspd.columns
a = map(lambda x : x.to_dict(), resultspd['params'])
paramsd = pd.DataFrame(a)
paramsd
plt.plot(paramsd.Intercept.loc[1:])
plt.gca().get_yaxis().get_major_formatter().set_useOffset(False)
fig = plt.figure(figsize=(10,10))
plt.plot(paramsd.logSppN.iloc[1:])
variogram_data_path = "/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv"
thrs_dist = 100000
emp_var_log_log = pd.read_csv(variogram_data_path)
gvg = tools.Variogram(section,'logBiomass',using_distance_threshold=thrs_dist)
gvg.envelope = emp_var_log_log
gvg.empirical = emp_var_log_log.variogram
gvg.lags = emp_var_log_log.lags
#emp_var_log_log = emp_var_log_log.dropna()
#vdata = gvg.envelope.dropna()
matern_model = tools.MaternVariogram(sill=0.34,range_a=100000,nugget=0.33,kappa=4)
whittle_model = tools.WhittleVariogram(sill=0.34,range_a=100000,nugget=0.0,alpha=3)
exp_model = tools.ExponentialVariogram(sill=0.34,range_a=100000,nugget=0.33)
gaussian_model = tools.GaussianVariogram(sill=0.34,range_a=100000,nugget=0.33)
spherical_model = tools.SphericalVariogram(sill=0.34,range_a=100000,nugget=0.33)
gvg.model = whittle_model
#gvg.model = matern_model
#models = map(lambda model : gvg.fitVariogramModel(model),[matern_model,whittle_model,exp_model,gaussian_model,spherical_model])
gvg.fitVariogramModel(whittle_model)
import numpy as np
xx = np.linspace(0,1000000,1000)
gvg.plot(refresh=False,with_envelope=True)
plt.plot(xx,whittle_model.f(xx),lw=2.0,c='k')
plt.title("Empirical Variogram with fitted Whittle Model")
def randomSelection(n,p):
idxs = np.random.choice(n,p,replace=False)
random_sample = new_data.iloc[idxs]
return random_sample
#################
n = len(new_data)
p = 3000 # The amount of samples taken (let's do it without replacement)
random_sample = randomSelection(n,p)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A linear regression for datapoint matrix $X$ ($D \times N$, D datapoints and N input dimensions) and target matrix $Y$ ($D \times M$, D datapoints and M output dimensions) is defined as $Y \approx X W$, where $W$ is an $N \times M$ weight matrix; the least-squares solution is $W = (X^\top X)^{-1} X^\top Y$.
Step2: A common way to integrate a bias value for many machine learning methods is to add a dimension which is constant one for all datapoints!
Step3: By using a polynomial expansion of x we can fit a polynomial to the data.
Step4: Now perform the same using the linear regression function np.polyfit(x,y,5) of numpy. Notice that x,y are 1D arrays here!
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# your code here
# our solution
from solutions import *
decrypt_solution(solution_regression_1, 'foo')
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# your code here
# our solution
from solutions import *
decrypt_solution(solution_regression_2, 'foo')
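# Hedged sketch (added, not one of the hidden solutions): the "bias trick" --
# append a constant-one column to X and solve ordinary least squares.
# X_demo / Y_demo are made-up data just for illustration.
X_demo = np.random.randn(100, 3)                              # D=100 datapoints, N=3 inputs
Y_demo = X_demo.dot(np.array([[1.0], [2.0], [-0.5]])) + 0.3   # known weights plus a bias of 0.3
X_bias = np.hstack([X_demo, np.ones((X_demo.shape[0], 1))])   # add the constant-one dimension
W = np.linalg.lstsq(X_bias, Y_demo)[0]
print(W)  # the last row recovers the bias 0.3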
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# your code here
# our solution
from solutions import *
decrypt_solution(solution_regression_3, 'foo')
y = np.cos(x[:,0])+np.random.randn(100)*0.5
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# your code here
# our solution
from solutions import *
decrypt_solution(solution_regression_4, 'foo')
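# Hedged sketch (added): the polynomial-expansion idea -- build a Vandermonde
# design matrix from x and reuse ordinary least squares. x and y are assumed
# to come from the cosine exercise above (x being a 2-D array).
X_poly = np.vander(x[:, 0], 6)              # columns x^5 ... x^0, bias included
w_poly = np.linalg.lstsq(X_poly, y)[0]
order = np.argsort(x[:, 0])
plt.scatter(x[:, 0], y)
plt.plot(x[order, 0], X_poly[order].dot(w_poly), 'r')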
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# your code here
# our solution
from solutions import *
decrypt_solution(solution_regression_5, 'foo')
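# Hedged sketch (added): the same degree-5 fit via numpy's polyfit/polyval,
# again assuming x and y from the cosine exercise above.
coeffs = np.polyfit(x[:, 0], y, 5)
xs = np.linspace(x[:, 0].min(), x[:, 0].max(), 200)
plt.scatter(x[:, 0], y)
plt.plot(xs, np.polyval(coeffs, xs), 'r')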
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup Batches
Step2: Rather than using batches, we could just import all the data into an array to save some processing time. (In most examples, I'm using the batches, however - just because that's how I happened to start out.)
Step3: Re-run sample experiments on full dataset
Step4: Interestingly, with no regularization or augmentation, we're getting some reasonable results from our simple convolutional model. So with augmentation, we hopefully will see some very good results.
Step5: I'm shocked by how good these results are! We're regularly seeing 75-80% accuracy on the validation set, which puts us into the top third or better of the competition. With such a simple model and no dropout or semi-supervised learning, this really speaks to the power of this approach to data augmentation. Noted. I'm seeing the same numbers
Step6: This is looking quite a bit better - the accuracy is similar, but the stability is higher. There's still some way to go however...
Step7: (Working on getting conv_test_feat. For some reason I'm getting a nameless "MemoryError".)
Step8: BatchNorm Dense layers on pretrained Conv layers
Step9: NOTE
Step10: Looking good! Let's try pre-computing 5 epochs worth of augmented data, so we can experiment with combining dropout and augmentation on the pre-trained model.
Step11: We'll use those to create a dataset of convolutional features 5x bigger than the training set.
Step12: Let's include the real training data as well in its non-augmented form.
Step13: Since we've now got a dataset 6x bigger than before, we'll need to copy our labels 6 times too.
Step14: Based on some experiments the previous model works well, with bigger dense layers.
Step15: Now we can train the model as usual, with pre-computed augmented data.
Step16: Looks good - let's save those weights.
Step17: Pseudo-Labeling
Step18: ...concatenate them with our training labels...
Step19: ...and fine-tune our model using that data.
Step20: That's a distinct improvement - even though the validation set isn't very big. This looks encouraging for when we try this on the test set.
Step21: Submit
|
<ASSISTANT_TASK:>
Python Code:
import theano
import os, sys
sys.path.insert(1, os.path.join('utils'))
%matplotlib inline
from __future__ import print_function, division
path = "data/statefarm/"
import utils; reload(utils)
from utils import *
from IPython.display import FileLink
# batch_size=32
batch_size=16
batches = get_batches(path + 'train', batch_size=batch_size)
val_batches = get_batches(path + 'valid', batch_size=batch_size*2, shuffle=False)
# test_batches = get_batches(path + 'test', batch_size=batch_size, shuffle=False)
(val_classes, trn_classes, val_labels, trn_labels,
val_filenames, trn_filenames, test_filenames) = get_classes(path)
# trn = get_data(path + 'train')
# val = get_data(path + 'valid')
# save_array(path + 'results/val.dat', val)
# save_array(path + 'results/trn.dat', trn)
# val = load_array(path + 'results/val.dat')
# trn = load_array(path + 'results/trn.dat')
def conv1(batches):
model = Sequential([
BatchNormalization(axis=1, input_shape=(3,224,224)),
Convolution2D(32, 3, 3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D((3,3)),
Convolution2D(64, 3, 3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D((3,3)),
Flatten(),
Dense(200, activation='relu'),
BatchNormalization(),
Dense(10, activation='softmax')
])
model.compile(Adam(lr=1e-4), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
model.optimizer.lr = 1e-3
model.fit_generator(batches, batches.nb_sample, nb_epoch=4, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
return model
model = conv1(batches)
gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05,
shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)
batches = get_batches(path + 'train', gen_t, batch_size=batch_size)
model = conv1(batches)
model.optimizer.lr = 1e-4
model.fit_generator(batches, batches.nb_sample, nb_epoch=15, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05,
shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)
batches = get_batches(path + 'train', gen_t, batch_size=batch_size)
model = Sequential([
BatchNormalization(axis=1, input_shape=(3, 224, 224)),
Convolution2D(32, 3, 3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D(),
Convolution2D(64, 3, 3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D(),
Convolution2D(128, 3, 3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D(),
Flatten(),
Dense(200, activation='relu'),
BatchNormalization(),
Dropout(0.5),
Dense(200, activation='relu'),
BatchNormalization(),
Dropout(0.5),
Dense(10, activation='softmax')
])
model.compile(Adam(lr=1e-5), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
model.optimizer.lr=1e-3
model.fit_generator(batches, batches.nb_sample, nb_epoch=10, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
model.optimizer.lr=1e-5
model.fit_generator(batches, batches.nb_sample, nb_epoch=10, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
# os.mkdir(path + 'models')
model.save_weights(path + 'models/conv8_prelim.h5')
vgg = Vgg16()
model = vgg.model
last_conv_idx = [i for i, l in enumerate(model.layers) if type(l) is Convolution2D][-1]
conv_layers = model.layers[:last_conv_idx + 1]
conv_model = Sequential(conv_layers)
# ยก batches shuffle must be set to False when pre-computing features !
batches = get_batches(path + 'train', batch_size=batch_size, shuffle=False)
(val_classes, trn_classes, val_labels, trn_labels,
val_filenames, filenames, test_filenames) = get_classes(path)
conv_feat = conv_model.predict_generator(batches, batches.nb_sample)
conv_val_feat = conv_model.predict_generator(val_batches, val_batches.nb_sample)
# conv_test_feat = conv_model.predict_generator(test_batches, test_batches.nb_sample)
save_array(path + 'results/conv_feat.dat', conv_feat)
save_array(path + 'results/conv_val_feat.dat', conv_val_feat)
# save_array(path + 'results/conv_test_feat.dat', conv_test_feat)
conv_feat = load_array(path + 'results/conv_feat.dat')
conv_val_feat = load_array(path + 'results/conv_val_feat.dat')
# conv_test_feat = load_array(path + 'results/conv_test_feat.dat')
conv_val_feat.shape
test_batches = get_batches(path + 'test', batch_size=1, shuffle=False, class_mode=None)
conv_test_feat = conv_model.predict_generator(test_batches, test_batches.nb_sample)
save_array(path + 'results/conv_test_feat.dat', conv_test_feat)
def get_bn_layers(p):
return [
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dropout(p/2),
Dense(128, activation='relu'),
BatchNormalization(),
Dropout(p/2),
Dense(128, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(10, activation='softmax')
]
p = 0.8
bn_model = Sequential(get_bn_layers(p))
bn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=1,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr=0.01
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=2,
validation_data=(conv_val_feat, val_labels))
bn_model.save_weights(path + 'models/conv8.h5')
# bn_model.load_weights(path + 'models/conv8.h5')
bn_model.optimizer.lr=0.001
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=4,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr=0.0001
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=4,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr=0.00001
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=8,
validation_data=(conv_val_feat, val_labels))
gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05,
                                   shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)
da_batches = get_batches(path + 'train', gen_t, batch_size=batch_size, shuffle=False)
da_conv_feat = conv_model.predict_generator(da_batches, da_batches.nb_sample*5)
save_array(path + 'results/da_conv_feat.dat', da_conv_feat)
da_conv_feat = load_array(path + 'results/da_conv_feat.dat')
da_conv_feat = np.concatenate([da_conv_feat, conv_feat])
da_trn_labels = np.concatenate([trn_labels]*6)
def get_bn_da_layers(p):
return [
MaxPooling2D(input_shape = conv_layers[-1].output_shape[1:]),
Flatten(),
Dropout(p),
Dense(256, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(256, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(10, activation='softmax')
]
p=0.8
bn_model = Sequential(get_bn_da_layers(p))
bn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, nb_epoch=1,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr=0.01
bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, nb_epoch=4,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr=1e-4
bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, nb_epoch=4,
validation_data=(conv_val_feat, val_labels))
bn_model.save_weights(path + 'models/da_conv8_1.h5')
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=1,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr=0.01
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=4,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr=1e-4
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=4,
validation_data=(conv_val_feat, val_labels))
bn_model.save_weights(path + 'models/conv8_bn_1.h5')
val_pseudo = bn_model.predict(conv_val_feat, batch_size=batch_size)
comb_pseudo = np.concatenate([trn_labels, val_pseudo])
comb_feat = np.concatenate([conv_feat, conv_val_feat])
comb_pseudo = np.concatenate([da_trn_labels, val_pseudo])
comb_feat = np.concatenate([da_conv_feat, conv_val_feat])
bn_model.load_weights(path + 'models/da_conv8_1.h5')
bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, nb_epoch=1,
validation_data=(conv_val_feat, val_labels))
bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, nb_epoch=4,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr=1e-5
bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, nb_epoch=4,
validation_data=(conv_val_feat, val_labels))
bn_model.save_weights(path + 'models/bn-ps8.h5')
def do_clip(arr, mx): return np.clip(arr, (1 - mx)/9, mx)
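# Added illustration: do_clip bounds every probability away from 0 and 1 so a
# single confidently-wrong prediction cannot blow up the log-loss.
example_row = np.eye(10)[[3]]      # a 10-class one-hot prediction
print(do_clip(example_row, 0.93))  # 0.93 for the hot class, (1-0.93)/9 elsewhere; row still sums to 1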
val_preds = bn_model.predict(conv_val_feat, batch_size=batch_size)
keras.metrics.categorical_crossentropy(val_labels, do_clip(val_preds, 0.93)).eval()
conv_test_feat = conv_model.predict_generator(test_batches, test_batches.nb_sample)
conv_test_feat = load_array(path + 'results/conv_test_feat.dat')
preds = bn_model.predict(conv_test_feat, batch_size=batch_size*2)
subm = do_clip(preds, 0.93)
subm_name = path + 'results/subm.gz'
classes = sorted(batches.class_indices, key=batches.class_indices.get)
submission = pd.DataFrame(subm, columns=classes)
submission.insert(0, 'img', [a[4:] for a in test_filenames]) # <-- why a[4:]?
# submission.insert(0, 'img', [f[8:] for f in test_filenames])
submission.head()
submission.to_csv(subm_name, index=False, compression='gzip')
FileLink(subm_name)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Our network will be composed of a list of numpy arrays, with each array containing the weights and biases for that layer of perceptrons.
Step2: Credit to Neural Networks and Deep Learning by Michael Nielsen for the image.
Step3: This is our code from the Making Perceptrons notebook that we use for our network.
Step4: Here we define functions to train the network based on a set of training data. The first step is to run our training data through our network to find how much error the network currently has. Since digits.target is a list of integers, we need a function to convert those integers into 10 dimensional vectors
Step5: Another important function we will need is one that computes the output error and multiplies it by the derivative of our sigmoid function to find our output layer's deltas. These deltas will be crucial for backpropagating our error to our hidden layers.
Step6: Once we have the deltas of our output layer, we move on to getting the hidden layers' deltas. To compute these, we take the Hadamard product of the dot product of the weight array with the succeeding layer's deltas, and the derivative of that hidden layer's output.
Step7: Now that we can find the deltas for each layer in the network, we just need a function to edit our weights based on a list of examples. For that, we use stochastic gradient descent.
Step8: To edit the weights of the network, we take the 2D array in each layer and subtract from it the 2D array that results from averaging the dot products of the deltas and the inputs of that layer over the samples in the training data. This average is multiplied by a learning rate, $\eta$, to give us control over how much the network will change.
Step9: So, we have everything we need to train a network. All we are missing is a network to train. Let's make one and let's call him Donnel.
Step10: So as you can see, the network "Donnel" is simply a list of 2D numpy arrays with one array for each layer of the network. His hidden layer's shape is 40 x 65, with each row being a perceptron with 64 weights and 1 bias. Since Donnel's output layer has 10 neurons in it, we need to be able to convert Donnel's output to numbers, and numbers (0-9) into a list of perceptron outputs.
Step11: Now, let's train the network with 80% of the digits data set. To do this, we will use stochastic gradient descent on batch-sized iterations of the total training data set. Essentially, we're going to change our weights 15 examples at a time until we complete 80% of the dataset. Let's run this through a couple of cycles as well to get our accuracy as high as possible.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact
from sklearn.datasets import load_digits
from IPython.display import Image, display
digits = load_digits()
print(digits.data.shape)
def show_examples(i):
plt.matshow(digits.images[i].reshape((8,8)), cmap='Greys_r')
display(digits.target[i])
interact(show_examples, i=[1,1797-1])
Image(url="http://neuralnetworksanddeeplearning.com/images/tikz35.png")
def gen_network(size):
weights= [np.array([[np.random.randn() for _ in range(size[n-1]+1)]
for _ in range(size[n])]) for n in range(len(size))[1:]]
return weights
a = gen_network([2,2,1,3])
a
sigmoid = lambda x: 1/(1 +np.exp(-x))
def perceptron_sigmoid(weights, inputvect):
return sigmoid(np.dot(np.append(inputvect,[1]), weights))
def propforward(network, inputvect):
outputs = []
for layer in network:
neural_input = inputvect
output = [perceptron_sigmoid(weights, neural_input) for weights in layer]
outputs.append(output)
inputvect = output
outputs = np.array(outputs)
return [outputs[:-1], outputs[-1]]
def target_convert(n):
assert n <= 9 and n >= 0
n = round(n)
result = np.zeros((10,))
result[n]=1
return result
target_convert(4)
def find_deltas_sigmoid(outputs, targets):
return [output*(1-output)*(output-target) for output, target in zip(outputs, targets)]
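# Added note: the output*(1-output) factor above is the sigmoid derivative,
# sigma'(z) = sigma(z)*(1 - sigma(z)). A quick numerical sanity check:
z, h = 0.5, 1e-6
analytic = sigmoid(z) * (1 - sigmoid(z))
numeric = (sigmoid(z + h) - sigmoid(z - h)) / (2 * h)
print(abs(analytic - numeric))  # should be vanishingly small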
def backprob(network, inputvect, targets):
hidden_outputs, outputs = propforward(network, inputvect)
change_in_outputs = find_deltas_sigmoid(outputs, targets)
list_deltas = [[] for _ in range(len(network))]
list_deltas[-1] = change_in_outputs
for n in range(len(network))[-1:0:-1]:
delta = change_in_outputs
change_in_hidden_outputs= [hidden_output*(1-hidden_output)*
np.dot(delta, np.array([n[i] for n in network[n]]).transpose())
for i, hidden_output in enumerate(hidden_outputs[n-1])]
list_deltas[n-1] = change_in_hidden_outputs
change_in_outputs = change_in_hidden_outputs
return list_deltas
def stoc_descent(network, input_list, target_list, learning_rate):
mega_delta = []
hidden_output = [propforward(network, inpt)[0] for inpt in input_list]
for inpt, target in zip(input_list, target_list):
mega_delta.append(backprob(network, inpt, target))
inputs=[]
inputs.append(input_list)
for n in range(len(network)):
inputs.append(hidden_output[n])
assert len(inputs) == len(network) + 1
deltas = []
for n in range(len(network)):
deltas.append([np.array(delta[n]) for delta in mega_delta])
assert len(deltas)==len(network)
for n in range(len(network)):
edit_weights(network[n], inputs[n], deltas[n], learning_rate)
def edit_weights(layer, input_list, deltas, learning_rate):
for a, inpt in enumerate(input_list):
layer-=learning_rate/len(input_list)*np.dot(deltas[a].reshape(len(deltas[a]),1),
np.append(inpt,[1]).reshape(1,len(inpt)+1))
inputs=64
hidden_neurons=40
outputs=10
donnel = gen_network([inputs,hidden_neurons,outputs])
# Here's what Donnel looks like.
donnel
def output_reader(output):
assert len(output)==10
result=[]
for i, t in enumerate(output):
if t == max(output) and abs(t-1)<=0.5:
result.append(i)
if len(result)==1:
return result[0]
else:
return 0
output_reader([0,0,0,0,0,1,0,0,0,0])
%%timeit -r1 -n1
training_cycles = 20
numbers_per_cycle = 1438
batch_size = 15
learning_rate = 1
train_data_index = np.linspace(0,numbers_per_cycle, numbers_per_cycle + 1)
target_list = [target_convert(n) for n in digits.target[0:numbers_per_cycle]]
np.random.seed(1)
np.random.shuffle(train_data_index)
for _ in range(training_cycles):
for n in train_data_index:
if n+batch_size <= numbers_per_cycle:
training_data = digits.data[int(n):int(n+batch_size)]
target_data = target_list[int(n):int(n+batch_size)]
else:
training_data = digits.data[int(n-batch_size):numbers_per_cycle]
assert len(training_data)!=0
target_data = target_list[int(n-batch_size):numbers_per_cycle]
stoc_descent(donnel, training_data, target_data, learning_rate)
# And let's check how accurate it is by testing it with the remaining 20% of the data set.
def check_net(rnge = 1438, check_number=202):
guesses = []
targets = []
number_correct = 0
rnge = range(rnge,rnge + 359)
for n in rnge:
guesses.append(output_reader(propforward(donnel, digits.data[n])[-1]))
targets.append(digits.target[n])
for guess, target in zip(guesses, targets):
if guess == target:
number_correct+=1
number_total = len(rnge)
print(number_correct/number_total*100)
print("%d/%d" %(number_correct, number_total))
print()
print(propforward(donnel, digits.data[check_number])[-1])
print()
print(output_reader(propforward(donnel, digits.data[check_number])[-1]))
show_examples(check_number)
interact(check_net, rnge=True, check_number = [1,1796])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: biethnic has no frequency in NYT
Step2: polyracial and polyethnic have no frequencies in the NYT
Step3: mixed ethnicity barely has a frequency in the present and other ethnicity has no frequency in the NYT
Step4: polyculturalism has no frequency in NYT
|
<ASSISTANT_TASK:>
Python Code:
plot_both(['bicultural', 'biracial', 'biethnic', 'interracial'])
plt.xlim(1910, 2015)
plot_both(['multicultural', 'multiracial', 'multiethnic', 'polycultural', 'polyracial', 'polyethnic'])
plt.xlim(1950, 2015)
plot_both(['mixed race', 'mixed ethnicity', 'other race', 'other ethnicity'])
plot_both(['multiculturalism', 'polyculturalism', 'cultural pluralism'])
plt.xlim(1940, 2015)
plot_both(['monocultural', 'interracial', 'bicultural', 'multicultural', 'multiracial', 'multiethnic', 'mixed race'])
plt.xlim(1920, 2015)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TL;DR
Step2: Here you can see that the render of the website is correct and in my case it pointed me to the Austin website.
Step3: So now I know there are 23 links in that section: 21 categories, since two of them are buttons to scroll.
Step4: After about 30 seconds I have the content in a dictionary and I can take a look at it.
Step5: After that we can just stop the driver and stop the docker container.
Step6: Scale
|
<ASSISTANT_TASK:>
Python Code:
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
driver = webdriver.Remote(command_executor='http://192.168.99.101:4444/wd/hub',
desired_capabilities=DesiredCapabilities.CHROME)
driver.get("http://www.yelp.com")
image = driver.get_screenshot_as_base64()
from IPython.display import HTML
HTML(<img src="data:image/png;base64,{0}">.format(image))
best = driver.find_element_by_id('best-of-yelp-module')
navigation = best.find_element_by_class_name('navigation')
sections = navigation.find_elements_by_tag_name('li')
len(sections)
import time
biz = {}
for section in sections:
section.click()
time.sleep(1)
content = best.find_element_by_class_name('main-content')
sec_name = content.text.split('\n')[0]
biz_names = content.find_elements_by_class_name('biz-name')
biz_names = [name.text for name in biz_names if name.text]
biz[sec_name] = biz_names
biz
driver.quit()
import os
from docker import Client
from docker.utils import kwargs_from_env
kwargs = kwargs_from_env()
kwargs['tls'].assert_hostname = False
client = Client(**kwargs)
containers = client.containers()
seleniums = [c for c in containers if c['Image'] == 'danielfrg/selenium']
urls = [s['Ports'][0]['IP'] + ':' + str(s['Ports'][0]['PublicPort']) for s in seleniums]
urls
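# Hedged sketch (added): with the container URLs in hand, one remote driver per
# Selenium container could be created like this. Everything below is an
# illustration, not part of the original run.
drivers = [webdriver.Remote(command_executor='http://{0}/wd/hub'.format(u),
                            desired_capabilities=DesiredCapabilities.CHROME)
           for u in urls]
for d in drivers:
    d.get("http://www.yelp.com")
# ... distribute the scraping work across `drivers`, then clean up:
for d in drivers:
    d.quit()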
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1 - collect data
Step2: Step 2 - Build model
Step3: targets
Step4: Quick test run
Step5: Step 3 - training the network
Step6: Conclusion
|
<ASSISTANT_TASK:>
Python Code:
factors(689)
max_seq_len = 682
#full_train_size = 55820
#train_size = 55800
#small_train_size = 6000 #just because of performance reasons, no statistics behind this decision
#test_size = 6200
data_path = '../../../../Dropbox/data'
phae_path = data_path + '/price_hist_autoencoder'
csv_in = '../price_history_03_seq_start_suddens_trimmed.csv'
assert path.isfile(csv_in)
npz_unprocessed = phae_path + '/price_history_full_seqs.npz'
assert path.isfile(npz_unprocessed)
npz_dates = phae_path + '/price_history_full_seqs_dates.npz'
assert path.isfile(npz_dates)
npz_train = phae_path + '/price_history_seqs_dates_normed_train.npz'
assert path.isfile(npz_train)
npz_test = phae_path + '/price_history_seqs_dates_normed_test.npz'
assert path.isfile(npz_test)
npz_path = npz_train[:-len('_train.npz')]
for key, val in np.load(npz_train).iteritems():
print key, ",", val.shape
dp = PriceHistoryAutoEncDataProvider(npz_path=npz_path, batch_size=53, with_EOS=False)
for data in dp.datalist:
print data.shape
# for item in dp.next():
# print item.shape
# model = PriceHistoryAutoencoder(rng=random_state, dtype=dtype, config=config)
# graph = model.getGraph(batch_size=53,
# enc_num_units = 10,
# dec_num_units = 10,
# ts_len=max_seq_len)
#show_graph(graph)
def experiment():
return model.run(npz_path=npz_path,
epochs=2,
batch_size = 53,
enc_num_units = 400,
dec_num_units = 400,
ts_len=max_seq_len,
learning_rate = 1e-4,
preds_gather_enabled = False,
)
dyn_stats_dic = experiment()
dyn_stats_dic['dyn_stats'].plotStats()
plt.show()
dyn_stats_dic['dyn_stats_diff'].plotStats()
plt.show()
model = PriceHistoryAutoencoder(rng=random_state, dtype=dtype, config=config)
npz_test = npz_path + '_test.npz'
assert path.isfile(npz_test)
path.abspath(npz_test)
def experiment():
return model.run(npz_path=npz_path,
epochs=50,
batch_size = 53,
enc_num_units = 450,
dec_num_units = 450,
ts_len=max_seq_len,
learning_rate = 1e-3,
preds_gather_enabled = True,
)
#%%time
# dyn_stats_dic, preds_dict, targets, twods = experiment()
dyn_stats_dic, preds_dict, targets, twods = get_or_run_nn(experiment, filename='035_autoencoder_001',
nn_runs_folder = data_path + "/nn_runs")
dyn_stats_dic['dyn_stats'].plotStats()
plt.show()
dyn_stats_dic['dyn_stats_diff'].plotStats()
plt.show()
r2_scores = [r2_score(y_true=targets[ind], y_pred=preds_dict[ind])
for ind in range(len(targets))]
ind = np.argmin(r2_scores)
ind
reals = targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
#sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(targets[ind], preds_dict[ind])[0]
for ind in range(len(targets))]
np.mean(dtw_scores)
coint(preds, reals)
cur_ind = np.random.randint(len(targets))
reals = targets[cur_ind]
preds = preds_dict[cur_ind]
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b', label='reals')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
twod_arr = np.array(twods.values())
twod_arr.shape
plt.figure(figsize=(16,7))
plt.plot(twod_arr[:, 0], twod_arr[:, 1], 'r.')
plt.title('two dimensional representation of our time series after dimensionality reduction')
plt.xlabel('first dimension')
plt.ylabel('second dimension')
plt.show()
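# Optional follow-up sketch (added): cluster the 2-D codes to look for groups of
# similarly behaving price-history series. n_clusters=5 is an arbitrary choice.
from sklearn.cluster import KMeans
clusters = KMeans(n_clusters=5, random_state=0).fit_predict(twod_arr)
plt.figure(figsize=(16, 7))
plt.scatter(twod_arr[:, 0], twod_arr[:, 1], c=clusters, s=8)
plt.title('k-means clusters over the 2-D autoencoder codes')
plt.show()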
<END_TASK>
|