12,300
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Install latest pre-release version of bigdl-chronos
# Installing bigdl-chronos from pip will automatically install pyspark, bigdl, and their dependencies.
!pip install --pre --upgrade bigdl-chronos
!pip uninstall -y torchtext # uninstall torchtext to avoid version conflict
exit() # restart the runtime so the newly installed packages are picked up
# download the dataset
!wget https://raw.githubusercontent.com/numenta/NAB/master/data/realTraffic/speed_7578.csv
import pandas as pd
df = pd.read_csv("speed_7578.csv", parse_dates=["timestamp"])
df.head()
from bigdl.chronos.data import TSDataset
tsdata = TSDataset.from_pandas(df, dt_col="timestamp", target_col="value")
df = tsdata.resample("5min")\
           .impute(mode="linear")\
           .to_pandas()
from bigdl.chronos.detector.anomaly import DBScanDetector
ad = DBScanDetector(eps=0.3, min_samples=6)
ad.fit(df['value'].to_numpy())
anomaly_indexes = ad.anomaly_indexes()
import matplotlib.pyplot as plt
plt.figure(figsize=(16,6))
plt.plot(df.timestamp, df.value, label='value')
plt.scatter(df.timestamp[anomaly_indexes],
            df.value[anomaly_indexes],
            color='red', label='anomalies')
plt.title('Detected anomalies')
plt.xlabel('datetime')
plt.legend(loc='upper left')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Environment Preparation
Step2: Unsupervised Anomaly Detection using Chronos Anomaly Detector
Step 0
Step3: Step 1
Step4: Step 2
Step5: Step 3
|
12,301
|
<ASSISTANT_TASK:>
Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
radian = UNITS.radian
m = UNITS.meter
s = UNITS.second
params = Params(Rmin=0.02 * m,
                Rmax=0.055 * m,
                L=47 * m,
                omega=10 * radian / s,
                t_end=130 * s,
                dt=1 * s)
def estimate_k(params):
    """Estimates the parameter `k`.

    params: Params with Rmin, Rmax, and L

    returns: k in meters per radian
    """
    Rmin, Rmax, L = params.Rmin, params.Rmax, params.L
    Ravg = (Rmax + Rmin) / 2
    Cavg = 2 * pi * Ravg
    revs = L / Cavg
    rads = 2 * pi * revs
    k = (Rmax - Rmin) / rads
    return k
def make_system(params):
    """Make a system object.

    params: Params with Rmin, Rmax, and L

    returns: System with init, k, and ts
    """
    init = State(theta=0 * radian,
                 y=0 * m,
                 r=params.Rmin)
    k = estimate_k(params)
    return System(params, init=init, k=k)
system = make_system(params)
system.init
def slope_func(state, t, system):
    """Computes the derivatives of the state variables.

    state: State object with theta, y, r
    t: time
    system: System object with k, omega

    returns: sequence of derivatives
    """
    theta, y, r = state
    k, omega = system.k, system.omega
    dydt = r * omega
    drdt = k * omega
    return omega, dydt, drdt
slope_func(system.init, 0, system)
def event_func(state, t, system):
    """Detects when we've rolled length `L`.

    state: State object with theta, y, r
    t: time
    system: System object with L

    returns: difference between `y` and `L`
    """
    theta, y, r = state
    return y - system.L
event_func(system.init, 0, system)
results, details = run_ode_solver(system, slope_func, events=event_func)
details
results.tail()
unrolled = get_last_value(results.y)
radius = get_last_value(results.r)
radians = get_last_value(results.theta)
rotations = magnitude(radians) / 2 / np.pi
t_final = get_last_label(results) * s
def plot_theta(results):
    plot(results.theta, color='C0', label='theta')
    decorate(xlabel='Time (s)',
             ylabel='Angle (rad)')
plot_theta(results)
def plot_y(results):
    plot(results.y, color='C1', label='y')
    decorate(xlabel='Time (s)',
             ylabel='Length (m)')
plot_y(results)
def plot_r(results):
    plot(results.r, color='C2', label='r')
    decorate(xlabel='Time (s)',
             ylabel='Radius (m)')
plot_r(results)
plot(results.r, results.y, color='C3')
decorate(xlabel='Radius (m)',
ylabel='Length (m)',
legend=False)
def plot_three(results):
    subplot(3, 1, 1)
    plot_theta(results)
    subplot(3, 1, 2)
    plot_y(results)
    subplot(3, 1, 3)
    plot_r(results)
plot_three(results)
savefig('figs/chap24-fig01.pdf')
from matplotlib.patches import Circle
from matplotlib.patches import Arrow
def draw_func(state, t):
    # get radius in mm
    theta, y, r = state
    radius = r.magnitude * 1000
    # draw a circle with that radius
    circle = Circle([0, 0], radius, fill=True)
    plt.gca().add_patch(circle)
    # draw an arrow to show rotation
    dx, dy = pol2cart(theta, radius)
    arrow = Arrow(0, 0, dx, dy)
    plt.gca().add_patch(arrow)
    # make the aspect ratio 1
    plt.axis('equal')
animate(results, draw_func)
# Solution goes here
plot(dydt, label='dydt')
decorate(xlabel='Time (s)',
ylabel='Linear velocity (m/s)')
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Rolling paper
Step2: And creating a Params object with the system parameters
Step4: The following function estimates the parameter k, which is the increase in the radius of the roll for each radian of rotation.
Step6: As usual, make_system takes a Params object and returns a System object.
Step7: Testing make_system
Step9: Now we can write a slope function based on the differential equations
Step10: Testing slope_func
Step12: We'll use an event function to stop when y=L.
Step13: Now we can run the simulation.
Step14: And look at the results.
Step15: The final value of y is 47 meters, as expected.
Step16: The final value of radius is R_max.
Step17: The total number of rotations is close to 200, which seems plausible.
Step18: The elapsed time is about 2 minutes, which is also plausible.
Step19: Plotting
Step20: Plotting y
Step21: Plotting r
Step22: We can also see the relationship between y and r, which I derive analytically in the book.
Step23: And here's the figure from the book.
Step24: Animation
Step25: Exercise
Step26: Now suppose the peak velocity is the limit; that is, we can't move the paper any faster than that.
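The `k` estimate described above (the growth in roll radius per radian of rotation) can be sanity-checked with plain floats, dropping the Pint units. This is a minimal sketch using the values from the `Params` object in the code:

```python
from math import pi

# Values from the Params object above, units dropped (meters)
Rmin, Rmax, L = 0.02, 0.055, 47.0

Ravg = (Rmax + Rmin) / 2   # average radius of the roll
Cavg = 2 * pi * Ravg       # average circumference
revs = L / Cavg            # revolutions needed to roll up length L
rads = 2 * pi * revs       # total rotation in radians
k = (Rmax - Rmin) / rads   # radius growth per radian
```

`revs` comes out close to 200 rotations, consistent with the "close to 200" check mentioned in the steps above.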
|
12,302
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import geopandas as gpd
import os
from os.path import join
import pandas as pd
import sys
sns.set(style='white')
cwd = os.getcwd()
data_path = join(cwd, '..', 'Data storage')
figure_path = join(cwd,'..', 'Figures')
file_date = '2018-03-06'
%load_ext watermark
%watermark -v -p pandas,geopandas,shapely
# Load the "autoreload" extension
%load_ext autoreload
# always reload modules marked with "%aimport"
%autoreload 1
# add the 'src' directory as one where we can import modules
src_dir = join(cwd, os.pardir, 'src')
sys.path.append(src_dir)
%aimport Plots.plot
from Plots.plot import plot_nerc_annual
path = os.path.join(data_path, 'nercregions', 'NERCregions.shp')
regions_nerc = gpd.read_file(path)
regions_nerc['nerc'] = regions_nerc['NERCregion']
regions_nerc
regions_nerc.to_crs(epsg=2163).boundary.plot()
path = os.path.join(data_path, 'cb_2016_us_state_20m', 'cb_2016_us_state_20m.shp')
states = gpd.read_file(path)
states.crs
drop_states = ['Alaska', 'Hawaii', 'Puerto Rico']
states = states.loc[~states['NAME'].isin(drop_states)]
states.to_crs(epsg=2163).plot()
path = join(data_path, 'final NERC data',
'Summary table {}.csv'.format(file_date))
index = pd.read_csv(path, index_col='nerc')
index
index.loc['USA', '2001']
# Add 2001, 2017, and % reduction values from summary table to geodataframe
for nerc in regions_nerc['nerc'].unique():
    try:
        val_2017 = index.loc[nerc, '2017']
        val_2001 = index.loc[nerc, '2001']
        reduce = index.loc[nerc, 'Percent Reduction']
        regions_nerc.loc[regions_nerc['nerc'] == nerc, 2017] = val_2017
        regions_nerc.loc[regions_nerc['nerc'] == nerc, 2001] = val_2001
        regions_nerc.loc[regions_nerc['nerc'] == nerc, 'reduction'] = '{:.0%}'.format(reduce)
        regions_nerc.loc[regions_nerc['nerc'] == nerc, 'reduction value'] = reduce
    except KeyError:
        # regions without a row in the summary table are skipped
        pass
# Define national parameters for use in plot titles
usa_2001 = index.loc['USA', '2001']
usa_2017 = index.loc['USA', '2017']
regions_nerc
regions_albers = regions_nerc.to_crs(epsg=2163)
states_albers = states.to_crs(epsg=2163)
title = '2001 US Average\n{:.0f} g $\mathregular{{CO_2 \ kWh^{{-1}}}}$'.format(usa_2001)
kwargs = dict(
regions_lw = 1,
regions_ec = '0.1',
SERC_y = -1.5,
SPP_y = 2.25
)
vmin = regions_albers.loc[:, [2001, 2017]].min().min()
vmax = regions_albers.loc[:, [2001, 2017]].max().max()
plot_nerc_annual(regions_proj=regions_albers, states_proj=states_albers, data_col=2001,
text_col='nerc', vmin=vmin, vmax=vmax, title=title,
cbar_title='g $\mathregular{CO_2 \ kWh^{-1}}$', **kwargs)
path = join(figure_path, 'NERC map_cividis_2001.pdf')
plt.savefig(path, bbox_inches='tight')#, dpi=350)
title = '2017 US Average\n{:.0f} g $\mathregular{{CO_2 \ kWh^{{-1}}}}$ (↓ 30%)'.format(usa_2017)
kwargs = dict(
regions_lw = 1,
regions_ec = '0.1',
SERC_y = -1.5,
SPP_y = 2.25,
FRCC_x = 4.3
)
vmin = regions_albers.loc[:, [2001, 2017]].min().min()
vmax = regions_albers.loc[:, [2001, 2017]].max().max()
regions_albers['arrow reduction'] = '↓ ' + regions_albers['reduction']
plot_nerc_annual(regions_proj=regions_albers, states_proj=states_albers, data_col=2017,
text_col='arrow reduction', vmin=vmin, vmax=vmax, title=title,
cbar_title='g $\mathregular{CO_2 \ kWh^{-1}}$', **kwargs)
path = join(figure_path, 'NERC map_cividis_2017_change.pdf')
plt.savefig(path, bbox_inches='tight')#, dpi=350)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read NERC regions shapefile
Step2: Read states shapefile
Step3: Transform projection to Albers equal area
Step4: Maps of 2001 and 2017 annual values
|
12,303
|
<ASSISTANT_TASK:>
Python Code:
# Load image
import cv2
import numpy as np
from matplotlib import pyplot as plt
# Load image as grayscale
image = cv2.imread('images/plane_256x256.jpg', cv2.IMREAD_GRAYSCALE)
# Blur image
image_blurry = cv2.blur(image, (5,5))
# Show image
plt.imshow(image_blurry, cmap='gray'), plt.xticks([]), plt.yticks([])
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Image As Greyscale
Step2: Blur Image
Step3: View Image
|
12,304
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pymc3 as pm
import seaborn as sns
import matplotlib.pyplot as plt
from collections import defaultdict
data = np.random.randn(100)
with pm.Model() as model:
    mu = pm.Normal('mu', mu=0, sd=1, testval=0)
    sd = pm.HalfNormal('sd', sd=1)
    n = pm.Normal('n', mu=mu, sd=sd, observed=data)
    trace = pm.sample(5000)
pm.traceplot(trace);
ppc = pm.sample_ppc(trace, samples=500, model=model, size=100)
np.asarray(ppc['n']).shape
ax = plt.subplot()
sns.distplot([n.mean() for n in ppc['n']], kde=False, ax=ax)
ax.axvline(data.mean())
ax.set(title='Posterior predictive of the mean', xlabel='mean(x)', ylabel='Frequency');
# Use a theano shared variable to be able to exchange the data the model runs on
from theano import shared
def invlogit(x):
    return np.exp(x) / (1 + np.exp(x))
n = 4000
n_oos = 50
coeff = 1.
predictors = np.random.normal(size=n)
# Turn predictor into a shared var so that we can change it later
predictors_shared = shared(predictors)
outcomes = np.random.binomial(1, invlogit(coeff * predictors))
outcomes
predictors_oos = np.random.normal(size=50)
outcomes_oos = np.random.binomial(1, invlogit(coeff * predictors_oos))
def tinvlogit(x):
    import theano.tensor as t
    return t.exp(x) / (1 + t.exp(x))
with pm.Model() as model:
    coeff = pm.Normal('coeff', mu=0, sd=1)
    p = tinvlogit(coeff * predictors_shared)
    o = pm.Bernoulli('o', p, observed=outcomes)
    trace = pm.sample(5000, n_init=5000)
# Changing values here will also change values in the model
predictors_shared.set_value(predictors_oos)
# Simply running PPC will use the updated values and do prediction
ppc = pm.sample_ppc(trace, model=model, samples=500)
plt.errorbar(x=predictors_oos, y=np.asarray(ppc['o']).mean(axis=0), yerr=np.asarray(ppc['o']).std(axis=0), linestyle='', marker='o')
plt.plot(predictors_oos, outcomes_oos, 'o')
plt.ylim(-.05, 1.05)
plt.xlabel('predictor')
plt.ylabel('outcome')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lets generate a very simple model
Step2: This function will randomly draw 500 samples of parameters from the trace. Then, for each sample, it will draw 100 random numbers from a normal distribution specified by the values of mu and std in that sample.
Step3: Now, ppc contains 500 generated data sets (containing 100 samples each), each using a different parameter setting from the posterior
Step4: One common way to visualize is to look if the model can reproduce the patterns observed in the real data. For example, how close are the inferred means to the actual sample mean
Step5: Comparison between PPC and other model evaluation methods.
Step6: Mean predicted values plus error bars to give sense of uncertainty in prediction
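The behavior of `sample_ppc` described above can be sketched with plain NumPy: draw parameter samples from the posterior, then simulate one replicated dataset per draw. Here `mu_draws` and `sd_draws` are hypothetical stand-ins for the trace's posterior samples, not the actual PyMC3 trace:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws standing in for trace['mu'] and trace['sd']
mu_draws = rng.normal(0.0, 0.1, size=500)
sd_draws = np.abs(rng.normal(1.0, 0.1, size=500))

# For each of the 500 posterior draws, simulate a replicated dataset of 100 points
ppc = np.array([rng.normal(m, s, size=100) for m, s in zip(mu_draws, sd_draws)])

ppc.shape  # (500, 100), matching the shape checked in the code above
```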
|
12,305
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import Image
Image("http://journalofdigitalhumanities.org/wp-content/uploads/2013/02/blei_lda_illustration.png")
import textmining_blackboxes as tm
#see if package imported correctly
tm.icantbelieve("butter")
title_info["Date"].str.replace("[^0-9]", "") #use regular expressions to clean up
title_info["Date"]=title_info["Date"].str.replace("\-\?", "5")
title_info["Date"]=title_info["Date"].str.replace("[^0-9]", "") # what assumptions have I made about the data?
title_info["Date"]=pd.to_datetime(title_info["Date"], errors="coerce")
title_info["Date"]<pd.datetime(1800,1,1)
title_info[title_info["Date"]<pd.datetime(1800,1,1)]
#Let's use a brittle thing for reading in a directory of pure txt files.
our_texts=tm.readtextfiles('data/na-slave-narratives/data/texts')
#again, this is not a std python package
#returns a simple list of the document as very long strings
#note if you want the following notebook will work on any directory of text files.
our_texts, names=tm.readtextfiles("data/british-fiction-corpus")
names
our_texts=tm.data_cleanse(our_texts)
#more necessary when have messy text
#eliminate escaped characters
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer=TfidfVectorizer(min_df=0.5, stop_words='english', use_idf=True)
document_term_matrix=vectorizer.fit_transform(our_texts)
# now let's get our vocabulary--the names corresponding to the rows
vocab=vectorizer.get_feature_names()
len(vocab)
document_term_matrix.shape
document_term_matrix_dense=document_term_matrix.toarray()
dtmdf=pd.DataFrame(document_term_matrix_dense, columns=vocab)
dtmdf
#easy to program, but let's use a robust version from sklearn!
from sklearn.metrics.pairwise import cosine_similarity
similarity=cosine_similarity(document_term_matrix)
#Note here that the `cosine_similiary` can take
#an entire matrix as its argument
similarity_df = pd.DataFrame(similarity, index=names, columns=names)
similarity_df
similarity_df.iloc[1].sort_values(ascending=False)
#here's the blackbox
from sklearn.manifold import MDS
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=1)
positions= mds.fit_transform(1-similarity)
positions.shape
#let's plot it: I've set up a black box
tm.plot_mds(positions,names)
names=[name.replace(".txt", "") for name in names]
tm.plot_mds(positions,names)
our_texts, names=tm.readtextfiles("Data/PCCIPtext")
our_texts=tm.data_cleanse(our_texts)
#improved stoplist--may be too complete
stop=[]
with open('data/stoplist-multilingual') as f:
    stop = f.readlines()
stop = [word.strip('\n') for word in stop]
texts = [[word for word in document.lower().split() if word not in stop] for document in our_texts] #gensim requires list of list of words in documents
from gensim import corpora, models, similarities, matutils
# gensim includes its own vectorizing tools
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
#doc2bow just means `doc`uments to `b`ag `o`f `w`ords
#ok, this has just vectorized our texts; it's another form
number_topics=40
model = models.LdaModel(corpus, id2word=dictionary, num_topics=number_topics, passes=10) # train the gensim LDA model
model.show_topics()
topics_indexed=[[b for (a,b) in topics] for topics in model.show_topics(number_topics,10,formatted=False)]
topics_indexed=pd.DataFrame(topics_indexed)
topics_indexed
model[dictionary.doc2bow(texts[1])]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading at scale
Step2: IMPORTANT
Step3: Let's keep using the remarkable narratives available from Documenting the American South (http
Step4: back to boolean indexing!
Step5: For now, we'll play with the cool scientists and use the powerful and fast scikit learn package.
Step6: Our Zero-ith tool
Step7: back to vectorizer from scikit learn
Step8: While this data frame is lovely to look at and useful to think with, it's tough on your computer's memory
Step9: that is a symmetrical matrix relating each of the texts (rows) to another text (row)
Step10: can do lots of things with similarity matrix
Step11: It's an 11 by 2 matrix
Step13: What has this got us?
Step14: Now we are going to call the topic modeling black box
Step15: So which topics most significant for each document?
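The cosine similarity used in the document-comparison step can be sketched in plain NumPy: it is the dot product of two term vectors divided by the product of their norms, so vectors pointing the same direction score near 1 and orthogonal vectors score 0. The toy three-word vectors below are illustrative, not taken from the corpus:

```python
import numpy as np

def cosine_sim(u, v):
    # cosine of the angle between two term-weight vectors
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy documents over a hypothetical 3-word vocabulary
doc_a = np.array([3.0, 0.0, 1.0])
doc_b = np.array([6.0, 0.0, 2.0])  # same direction as doc_a -> similarity ~1
doc_c = np.array([0.0, 5.0, 0.0])  # shares no words with doc_a -> similarity 0

cosine_sim(doc_a, doc_b)  # ~1.0
cosine_sim(doc_a, doc_c)  # 0.0
```

`sklearn.metrics.pairwise.cosine_similarity`, used in the code above, computes the same quantity for every pair of rows in the document-term matrix at once.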
|
12,306
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from datetime import datetime
import numpy as np
import datacube
from dc_water_classifier import wofs_classify
from dc_utilities import perform_timeseries_analysis
import dc_au_colormaps
from dc_notebook_utilities import *
dc = datacube.Datacube(app='dc-water-analysis')
api = datacube.api.API(datacube=dc)
# Get available products
products = dc.list_products()
platform_names = list(set(products.platform))
product_names = list(products.name)
product_values = create_platform_product_gui(platform_names, product_names)
# Save the form values
platform = product_values[0].value
product = product_values[1].value
# Get the pixel resolution of the selected product
resolution = products.resolution[products.name == product]
lat_dist = resolution.values[0][0]
lon_dist = resolution.values[0][1]
# Get the extents of the cube
descriptor = api.get_descriptor({'platform': platform})[product]
min_date = descriptor['result_min'][0]
min_lat = descriptor['result_min'][1]
min_lon = descriptor['result_min'][2]
min_date_str = str(min_date.year) + '-' + str(min_date.month) + '-' + str(min_date.day)
min_lat_rounded = round(min_lat, 3)
min_lon_rounded = round(min_lon, 3)
max_date = descriptor['result_max'][0]
max_lat = descriptor['result_max'][1]
max_lon = descriptor['result_max'][2]
max_date_str = str(max_date.year) + '-' + str(max_date.month) + '-' + str(max_date.day)
max_lat_rounded = round(max_lat, 3) #calculates latitude of the pixel's center
max_lon_rounded = round(max_lon, 3) #calculates longitude of the pixel's center
# Display metadata
generate_metadata_report(min_date_str, max_date_str,
min_lon_rounded, max_lon_rounded, lon_dist,
min_lat_rounded, max_lat_rounded, lat_dist)
show_map_extents(min_lon_rounded, max_lon_rounded, min_lat_rounded, max_lat_rounded)
extent_values = create_extents_gui(min_date_str, max_date_str,
min_lon_rounded, max_lon_rounded,
min_lat_rounded, max_lat_rounded)
# Save form values
start_date = datetime.strptime(extent_values[0].value, '%Y-%m-%d')
end_date = datetime.strptime(extent_values[1].value, '%Y-%m-%d')
min_lon = extent_values[2].value
max_lon = extent_values[3].value
min_lat = extent_values[4].value
max_lat = extent_values[5].value
# Query the Data Cube
dataset_in = dc.load(platform=platform,
product=product,
time=(start_date, end_date),
lon=(min_lon, max_lon),
lat=(min_lat, max_lat))
water_class = wofs_classify(dataset_in)
acq_dates = list(water_class.time.values.astype(str))
acq_date_input = create_acq_date_gui(acq_dates)
# Save form value
acq_date = acq_date_input.value
acq_date_index = acq_dates.index(acq_date)
# Get water class for selected acquisition date and mask no data values
water_class_for_acq_date = water_class.wofs[acq_date_index]
water_class_for_acq_date.values = water_class_for_acq_date.values.astype('float')
water_class_for_acq_date.values[water_class_for_acq_date.values == -9999] = np.nan
water_observations_for_acq_date_plot = water_class_for_acq_date.plot(cmap='BuPu')
time_series = perform_timeseries_analysis(water_class)
normalized_water_observations_plot = time_series.normalized_data.plot(cmap='dc_au_WaterSummary')
total_water_observations_plot = time_series.total_data.plot(cmap='dc_au_WaterObservations')
total_clear_observations_plot = time_series.total_clean.plot(cmap='dc_au_ClearObservations')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, we must connect to our data cube. We can then query the contents of the data cube we have connected to, including both the metadata and the actual data.
Step2: Obtain the metadata of our cube... Initially, we need to get the platforms and products in the cube. The rest of the metadata will be dependent on these two options.
Step3: Execute the following code and then use the generated form to choose your desired platfrom and product.
Step4: With the platform and product, we can get the rest of the metadata. This includes the resolution of a pixel, the latitude/longitude extents, and the minimum and maximum dates available of the chosen platform/product combination.
Step5: Execute the following code and then use the generated form to choose the extents of your desired data.
Step6: Now that we have filled out the above two forms, we have enough information to query our data cube. The following code snippet ends with the actual Data Cube query, which will return the dataset with all the data matching our query.
Step7: At this point, we have finished accessing our data cube and we can turn to analyzing our data. In this example, we will run the WOfS algorithm. The wofs_classify function, seen below, will return a modified dataset, where a value of 1 indicates the pixel has been classified as water by the WoFS algorithm and 0 represents the pixel is non-water.
Step8: Execute the following code and then use the generated form to choose your desired acquisition date. The following two code blocks are only necessary if you would like to see the water mask of a single acquisition date.
Step9: With all of the pixels classified as either water/non-water, let's perform a time series analysis over our derived water class. The function, perform_timeseries_analysis, takes in a dataset of 3 dimensions (time, latitude, and longitude), then sums the values of each pixel over time. It also keeps track of the number of clear observations we have at each pixel. We can then normalize each pixel to determine areas at risk of flooding. The normalization calculation is simply
Step10: The following plots visualize the results of our timeseries analysis. You may change the color scales with the cmap option. For color scales available for use by cmap, see http
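The per-pixel normalization described in the step above (water observations divided by clear observations) can be sketched with plain NumPy. The toy `water` stack and the variable names are illustrative stand-ins, not the actual output of `perform_timeseries_analysis`:

```python
import numpy as np

# Hypothetical stack of per-acquisition water masks, shape (time, lat, lon):
# 1 = water, 0 = not water, np.nan = no clear observation
water = np.array([
    [[1.0, 0.0],    [np.nan, 1.0]],
    [[1.0, 0.0],    [0.0,    1.0]],
    [[0.0, np.nan], [0.0,    1.0]],
])

total_water = np.nansum(water, axis=0)          # times each pixel was seen as water
total_clear = np.sum(~np.isnan(water), axis=0)  # clear observations per pixel
normalized = total_water / total_clear          # fraction of clear obs that were water
```

A pixel that is water in every clear observation normalizes to 1, flagging it as permanently wet; values near 1 for pixels that are usually dry would indicate flood risk.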
|
12,307
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import numpy as np
import scipy.stats as ss
import vega_datasets
x = np.array([1, 1, 1, 1, 10, 100, 1000])
y = np.array([1000, 100, 10, 1, 1, 1, 1 ])
ratio = x/y
print(ratio)
X = np.arange(len(ratio))
# Implement
# Implement
# TODO: Implement the functionality mentioned above
# The following code is just a dummy. You should load the correct dataset from vega_datasets package.
movies = pd.DataFrame({"Worldwide_Gross": np.random.sample(200), "IMDB_Rating": np.random.sample(200)})
# Implement
ax = movies["Worldwide_Gross"].hist(bins=200)
ax.set_xlabel("World wide gross")
ax.set_ylabel("Frequency")
ax = movies["Worldwide_Gross"].hist(bins=200)
ax.set_yscale('log')
ax.set_xlabel("World wide gross")
ax.set_ylabel("Frequency")
movies["IMDB_Rating"].hist(bins=range(0,11))
min(movies["Worldwide_Gross"])
movies["Worldwide_Gross"] = movies["Worldwide_Gross"]+1.0
# TODO: Replace the dummy value of bins using np.logspace.
# Create 20 bins that cover the whole range of the dataset.
bins = [1.0, 2.0, 4.0]
bins
ax = (movies["Worldwide_Gross"]+1.0).hist(bins=bins)
ax.set_yscale('log')
ax.set_xscale('log')
ax.set_xlabel("World wide gross")
ax.set_ylabel("Frequency")
# Implement
# TODO: Implement functionality mentioned above
# You must replace the dummy values with the correct code.
worldgross_sorted = np.random.sample(200)
Y = np.random.sample(200)
plt.xlabel("World wide gross")
plt.ylabel("CCDF")
plt.plot(worldgross_sorted,Y)
plt.yscale('log')
# Implement
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Ratio and logarithm
Step2: Q
Step3: Q
Step4: Log-binning
Step5: If you simply call hist() method with a dataframe object, it identifies all the numeric columns and draw a histogram for each.
Step6: As we can see, a majority of the columns are not normally distributed. In particular, if you look at the worldwide gross variable, you only see a couple of meaningful data from the histogram. Is this a problem of resolution? How about increasing the number of bins?
Step7: Maybe a bit more useful, but it doesn't tell anything about the data distribution above certain point. How about changing the vertical scale to logarithmic scale?
Step8: Now, let's try log-bin. Recall that when plotting histgrams we can specify the edges of bins through the bins parameter. For example, we can specify the edges of bins to [1, 2, 3, ... , 10] as follows.
Step9: Here, we can specify the edges of bins in a similar way. Instead of specifying on the linear scale, we do it on the log space. Some useful resources
Step10: Because there seems to be movie(s) that made $0, and because log(0) is undefined & log(1) = 0, let's add 1 to the variable.
Step11: Now we can plot a histgram with log-bin. Set both axis to be log-scale.
Step12: What is going on? Is this the right plot?
Step13: Q
Step14: We can also try in semilog scale (only one axis is in a log-scale), where the horizontal axis is linear.
Step15: A straight line in semilog scale means exponential decay (cf. a straight line in log-log scale means power-law decay). So it seems like the amount of money a movie makes across the world follows roughly an exponential distribution, while there are some outliers that make insane amount of money.
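The log-binning step described above can be sketched with `np.logspace`: pick edges equally spaced in log space so each bin covers a constant multiplicative range. The Pareto draw here is synthetic heavy-tailed stand-in data, not the movies dataset:

```python
import numpy as np

rng = np.random.default_rng(42)
# Heavy-tailed synthetic data standing in for the shifted Worldwide_Gross values
data = rng.pareto(1.5, size=10_000) + 1.0  # all values > 1, like gross + 1

# 21 edges equally spaced in log space -> 20 log-sized bins covering the range
bins = np.logspace(np.log10(data.min()), np.log10(data.max()), 21)

counts, edges = np.histogram(data, bins=bins)
```

Passing `bins` like this to `hist()` with both axes on a log scale gives each bar the same visual width while the underlying bin widths grow geometrically.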
|
12,308
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-1', 'ocnbgchem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Type
Step7: 1.4. Elemental Stoichiometry
Step8: 1.5. Elemental Stoichiometry Details
Step9: 1.6. Prognostic Variables
Step10: 1.7. Diagnostic Variables
Step11: 1.8. Damping
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Step13: 2.2. Timestep If Not From Ocean
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Step15: 3.2. Timestep If Not From Ocean
Step16: 4. Key Properties --> Transport Scheme
Step17: 4.2. Scheme
Step18: 4.3. Use Different Scheme
Step19: 5. Key Properties --> Boundary Forcing
Step20: 5.2. River Input
Step21: 5.3. Sediments From Boundary Conditions
Step22: 5.4. Sediments From Explicit Model
Step23: 6. Key Properties --> Gas Exchange
Step24: 6.2. CO2 Exchange Type
Step25: 6.3. O2 Exchange Present
Step26: 6.4. O2 Exchange Type
Step27: 6.5. DMS Exchange Present
Step28: 6.6. DMS Exchange Type
Step29: 6.7. N2 Exchange Present
Step30: 6.8. N2 Exchange Type
Step31: 6.9. N2O Exchange Present
Step32: 6.10. N2O Exchange Type
Step33: 6.11. CFC11 Exchange Present
Step34: 6.12. CFC11 Exchange Type
Step35: 6.13. CFC12 Exchange Present
Step36: 6.14. CFC12 Exchange Type
Step37: 6.15. SF6 Exchange Present
Step38: 6.16. SF6 Exchange Type
Step39: 6.17. 13CO2 Exchange Present
Step40: 6.18. 13CO2 Exchange Type
Step41: 6.19. 14CO2 Exchange Present
Step42: 6.20. 14CO2 Exchange Type
Step43: 6.21. Other Gases
Step44: 7. Key Properties --> Carbon Chemistry
Step45: 7.2. PH Scale
Step46: 7.3. Constants If Not OMIP
Step47: 8. Tracers
Step48: 8.2. Sulfur Cycle Present
Step49: 8.3. Nutrients Present
Step50: 8.4. Nitrous Species If N
Step51: 8.5. Nitrous Processes If N
Step52: 9. Tracers --> Ecosystem
Step53: 9.2. Upper Trophic Levels Treatment
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Step55: 10.2. Pft
Step56: 10.3. Size Classes
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Step58: 11.2. Size Classes
Step59: 12. Tracers --> Disolved Organic Matter
Step60: 12.2. Lability
Step61: 13. Tracers --> Particules
Step62: 13.2. Types If Prognostic
Step63: 13.3. Size If Prognostic
Step64: 13.4. Size If Discrete
Step65: 13.5. Sinking Speed If Prognostic
Step66: 14. Tracers --> Dic Alkalinity
Step67: 14.2. Abiotic Carbon
Step68: 14.3. Alkalinity
|
12,309
|
<ASSISTANT_TASK:>
Python Code:
%sql -d standard
SELECT
*
FROM
`nyc-tlc.yellow.trips`
LIMIT
5
%bigquery schema --table nyc-tlc:yellow.trips
%%bq query -n pickup_time
WITH subquery AS (
SELECT
EXTRACT(HOUR FROM pickup_datetime) AS hour
FROM
`nyc-tlc.yellow.trips`)
SELECT
Hour,
COUNT(Hour) AS count
FROM
subquery
GROUP BY
Hour
ORDER BY
count DESC
# Let's visualize the pick-up time distribution
%chart columns --data pickup_time
%%sql -d legacy -m vendor
SELECT
TOP(vendor_id) AS vendor,
COUNT(*) AS count
FROM
[nyc-tlc:yellow.trips]
%chart pie --data vendor
%%sql -d legacy
SELECT
QUANTILES(trip_distance, 5) AS quantile,
MIN(trip_distance) AS min,
MAX(trip_distance) AS max,
AVG(trip_distance) AS avg,
STDDEV(trip_distance) AS std_dev
FROM
[nyc-tlc:yellow.trips]
%%bq query -n pickup_location
SELECT
pickup_latitude,
pickup_longitude
FROM
`nyc-tlc.yellow.trips`
LIMIT
10
%%chart map --data pickup_location
%%bq query -n dispute
SELECT
trip_distance,
fare_amount
FROM
`nyc-tlc.yellow.trips`
WHERE
rate_code = "2"
AND payment_type = "DIS"
%%chart scatter --data dispute
height: 400
hAxis:
title: Distance
vAxis:
title: Fare Amount
trendlines:
0:
type: line
color: green
showR2: true
visibleInLegend: true
!git add *
!git commit -m "your message"
!git push
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's look at the table schema
Step2: 1. What is the most common pick-up time?
Step3: Let's name this query result pickup_time and reference it to create the chart below.
Step4: 7
Step5: Let's label this query result vendor and reference it to create the following pie chart.
Step6: 3. Provide summary statistics on trip distance
Step7: Datalab also supports LaTeX rendering. The min distance is $-4.08\times10^7$ miles (interesting!), $Q_1$ is 0.9 miles and $Q_3$ is 2.7 miles. The trip distance is skewed to the right since the mean is greater than the median (1.54 miles).
Step8: 4. Could distance and fare amount explain the payment disputes for rides from the JFK airport?
Step9: There seems to be a weak positive relationship ($r = +\sqrt{r^2} = 0.145$) between the trip distance and the fare amount for taxis that picked up rides from the airport and had payment disputes.
|
12,310
|
<ASSISTANT_TASK:>
Python Code:
x = [1,2,3,4,5,6,7,8]
for item in x:
    print(item)
s = ['a','c','b','d','e','g','h',8]
for item in s:
    print(item)
for item in range(1,20,2):
    print(item)
s = 0
for i in range(1,100):
    s += i*i
print(s)
s = 0; i = 1
while i < 100:
    s += i*i
    i += 1
print(s)
m = 20
n = 79
while n != 0:
    r = m % n
    m = n
    n = r
print(m)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The range function in Python is an everyday tool in for loops
Step2: This is a simple for loop that sums the squares of all integers from 1 to 99
Step3: The While statement
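The GCD while loop shown in the code above can be wrapped in a reusable function; a minimal sketch of the same Euclidean algorithm:

```python
def gcd(m, n):
    """Greatest common divisor via Euclid's algorithm."""
    while n != 0:
        m, n = n, m % n  # replace (m, n) by (n, m mod n)
    return m

print(gcd(20, 79))  # -> 1, matching the loop above
```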
|
12,311
|
<ASSISTANT_TASK:>
Python Code:
a = 10.1
type(a)
print(dir(a)) # Show all of the methods of a
a.is_integer()
# Create a class by using class keyword followed by name.
class MyClass:
# The 'self' variable ALWAYS needs to be the first variable given to any class method.
def __init__(self, message):
# Here we create a new variable inside "self" called "mess" and save the argument "message"
# passed from the constructor to it.
self.mess = message
def say(self):
print(self.mess)
# Don't normally need to write a destructor - one is created by Python automatically. However we do it here
# just to show you that it can be done:
def __del__(self):
print("Deleting object of type MyClass")
a = MyClass("Hello")
print(a.mess)
# But, we also defined a method called "say" which does the same thing:
a.say()
print(a)
del a
a = MyClass('Hello')
a = 2
mess = "Hello"
def say(mess):
print(mess)
say(mess)
class Box:
def __init__(self, length):
self.length = length
class Particle:
def __init__(self, r0, v0):
        """
        r0 = initial position
        v0 = initial speed
        """
self.r = r0
self.v = v0
def step(self, dt, L):
        """
        Move the particle
        dt = timestep
        L = length of the containing box
        """
self.r = self.r + self.v * dt
if self.r >= L:
self.r -= L
elif self.r < 0:
self.r += L
p = Particle(0, 5)
p.step(3, 10)
print(p.r)
class Box:
def __init__(self, length):
self.length = length
self.particles = []
    def add_particle(self, particle):
self.particles.append(particle)
import csv
import numpy as np
import matplotlib.pyplot as plt

class Box:
def __init__(self, length):
self.length = length
self.particles = []
self.t = 0
def add_particle(self, particle):
self.particles.append(particle)
def step(self, dt):
for particle in self.particles:
particle.step(dt, self.length)
def write(self, filename):
f = open(filename, 'w')
for particle in self.particles:
f.write('{},{}\n'.format(particle.r, particle.v))
f.close()
def plot(self):
for particle in self.particles:
plt.scatter(particle.r, 0)
def load(self, filename):
f = open(filename, 'r')
csvfile = csv.reader(f)
for position, velocity in csvfile:
p = Particle(position, velocity)
self.add_particle(p)
b = Box(10)
for i in range(10):
p = Particle(i/2, i/3)
b.add_particle(p)
b.write('test.csv')
!cat test.csv
class Particle:
def __init__(self, r0, v0):
r0 = initial position
v0 = initial speed
self._r = r0
self._v = v0
def step(self, dt, L):
        """
        Move the particle
        dt = timestep
        L = length of the containing box
        """
self._r = self._r + self._v * dt
if self._r >= L:
self._r -= L
elif self._r < 0:
self._r += L
    @property
    def r(self):
        return self._r

    @r.setter
    def r(self, value):
        self._r = value

    @property
    def v(self):
        return self._v

    @v.setter
    def v(self, value):
        self._v = value
class SlowParticle(Particle):
def __init__(self, r0, v0, slowing_factor):
Particle.__init__(self, r0, v0)
self.factor = slowing_factor
def step(self, dt, L):
        """
        Move the particle, but change so that if the particle bounces off of a wall,
        it slows down by the slowing factor
        dt = timestep
        L = length of the containing box
        """
        self._r = self._r + self._v * dt
        if self._r >= L:
            self._r -= L
            self._v /= self.factor
        elif self._r < 0:
            self._r += L
            self._v /= self.factor
a = 1.0
print(dir(a))
class Box:
def __init__(self, length):
self.length = length
self.particles = []
self.t = 0
def __add__(self, other):
if self.length == other.length:
b = Box(self.length)
for p in self.particles:
b.add_particle(p)
for p in other.particles:
b.add_particle(p)
return b
else:
return ValueError('To add two boxes they must be of the same length')
def mean_position(self):
l = np.sum([p.r for p in self.particles])/len(self.particles)
return l
def add_particle(self, particle):
self.particles.append(particle)
def step(self, dt):
for particle in self.particles:
particle.step(dt, self.length)
def write(self, filename):
f = open(filename, 'w')
for particle in self.particles:
f.write('{},{}\n'.format(particle.r, particle.v))
f.close()
def plot(self):
for particle in self.particles:
plt.scatter(particle.r, 0)
def load(self, filename):
f = open(filename, 'r')
csvfile = csv.reader(f)
for position, velocity in csvfile:
p = Particle(position, velocity)
self.add_particle(p)
def __repr__(self):
if len(self.particles) == 1:
return 'Box containing 1 particle'
else:
return 'Box containing {} particles'.format(len(self.particles))
a = Box(10)
a.add_particle(Particle(10, 10))
b = Box(10)
b.add_particle(Particle(15, 10))
c = a + b
print(a)
print(b)
print(c)
a.mean_position(), b.mean_position(), c.mean_position()
a.step(0.5)
a.mean_position(), b.mean_position(), c.mean_position()
a.particles, c.particles
import copy
class Particle:
def __init__(self, r0, v0):
        """
        r0 = initial position
        v0 = initial speed
        """
self.r = r0
self.v = v0
def step(self, dt, L):
        """
        Move the particle
        dt = timestep
        L = length of the containing box
        """
self.r = self.r + self.v * dt
if self.r >= L:
self.r -= L
elif self.r < 0:
self.r += L
def copy(self):
return copy.deepcopy(self)
class Box:
def __init__(self, length):
self.length = length
self.particles = []
self.t = 0
def __add__(self, other):
if self.length == other.length:
b = Box(self.length)
for p in self.particles:
b.add_particle(p)
for p in other.particles:
b.add_particle(p)
return b
else:
return ValueError('To add two boxes they must be of the same length')
def mean_position(self):
l = np.sum([p.r for p in self.particles])/len(self.particles)
return l
def add_particle(self, particle):
self.particles.append(particle.copy())
def step(self, dt):
for particle in self.particles:
particle.step(dt, self.length)
def write(self, filename):
f = open(filename, 'w')
for particle in self.particles:
f.write('{},{}\n'.format(particle.r, particle.v))
f.close()
def plot(self):
for particle in self.particles:
plt.scatter(particle.r, 0)
def load(self, filename):
f = open(filename, 'r')
csvfile = csv.reader(f)
        for position, velocity in csvfile:
p = Particle(position, velocity)
self.add_particle(p)
def __repr__(self):
if len(self.particles) == 1:
return 'Box containing 1 particle'
else:
return 'Box containing {} particles'.format(len(self.particles))
a = Box(10)
a.add_particle(Particle(10, 10))
b = Box(10)
b.add_particle(Particle(15, 10))
c = a + b
print(a)
print(b)
print(c)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: How can I see what methods an object of type float has?
Step2: <font color='midnightblue'> Aside - What do all those underscores mean?
Step3: <font color='mediumblue'> Using the class
Step4: How do I access data stored in the class? with the ".", followed by the name.
Step5: What happens though if we reuse the variable name 'a'?
Step6: So, what happens if we either choose to store something else under the name 'a', or tell Python to delete it?
Step7: Why bother? This can be achieved without classes very easily
Step8: Need a better example!
Step11: What we're going to try and do is add particles to the box, which have some properties
Step12: Lets just check this, if a Particle is in a box of length 10, has r0 = 0, v0=5, then after 1 step of length 3, the position should be at position 5
Step13: Lets add a place to store the particles to the box class, and add a method to add particles to the box
Step14: <font color='mediumblue'> Now lets get you to do something...
Step17: <font color='mediumblue'> Class Properties
Step19: <font color='midnightblue'> Why bother? It looks the same when we use it!
Step20: Here we have inherited most of the class Particle, and just changed the method 'step' to do something differently. Because we kept the properties the same, we can use this class everywhere that we could use Particle - our Box class can take a mixture of Particles and SlowParticles
Step21: Notice that there is a method "__add__" - we can define these special methods to allow our class to do things that you can ordinarily do with built in types.
Step22: Now that we've created an 'add' method, we can create two boxes and add them together!
Step23: Looks good! But hang on...
Step24: Why has the mean position of particles in Box C changed? Look at the memory address of the particles
Step27: Boxes are pointing to the SAME particles!
Step28: Then, we should change the Box class's 'add' method, to use this copy operation rather than just append the child particles of the existing boxes
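The aliasing bug discussed above, where two boxes point at the very same Particle objects, is the classic shallow-versus-deep-copy distinction. A minimal sketch with plain containers rather than the Particle class:

```python
import copy

shared = [1, 2, 3]
a = {"data": shared}

b = copy.copy(a)      # shallow: b["data"] is the *same* list object as a["data"]
c = copy.deepcopy(a)  # deep:    c["data"] is an independent copy

shared.append(4)
print(b["data"])  # -> [1, 2, 3, 4]  (mutation visible through the alias)
print(c["data"])  # -> [1, 2, 3]     (deep copy unaffected)
```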
|
12,312
|
<ASSISTANT_TASK:>
Python Code:
#!pip install --user miepython
import numpy as np
import matplotlib.pyplot as plt
try:
import miepython
except ModuleNotFoundError:
print('miepython not installed. To install, uncomment and run the cell above.')
print('Once installation is successful, rerun this cell again.')
print(" miepython Wiscombe")
print(" X m.real m.imag Qback Qback ratio")
m=complex(1.55, 0.0)
x = 2*3.1415926535*0.525/0.6328
ref = 2.92534
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
m=complex(0.0, -1000.0)
x=0.099
ref = (4.77373E-07*4.77373E-07 + 1.45416E-03*1.45416E-03)/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.2f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=0.101
ref = (5.37209E-07*5.37209E-07 + 1.54399E-03*1.54399E-03)/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.2f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=100
ref = (4.35251E+01*4.35251E+01 + 2.45587E+01*2.45587E+01)/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.2f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=10000
ref = abs(2.91013E+03-4.06585E+03*1j)**2/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.2f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
print()
print(" miepython Wiscombe")
print(" X m.real m.imag Qback Qback ratio")
m=complex(0.75, 0.0)
x=0.099
ref = (1.81756E-08*1.81756E-08 + 1.64810E-04*1.64810E-04)/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=0.101
ref = (2.04875E-08*2.04875E-08 + 1.74965E-04*1.74965E-04)/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=10.0
ref = (1.07857E+00*1.07857E+00 + 3.60881E-02*3.60881E-02)/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=1000.0
ref = (1.70578E+01*1.70578E+01 + 4.84251E+02* 4.84251E+02)/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
print()
print(" miepython Wiscombe")
print(" X m.real m.imag Qback Qback ratio")
m=complex(1.5, 0)
x=10
ref = abs(4.322E+00 + 4.868E+00*1j)**2/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.5f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=100
ref = abs(4.077E+01 + 5.175E+01*1j)**2/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.5f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=1000
ref = abs(5.652E+02 + 1.502E+03*1j)**2/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.5f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
print()
print(" old")
print(" miepython Wiscombe")
print(" X m.real m.imag Qback Qback ratio")
m=complex(1.33, -0.00001)
x=1
ref = (2.24362E-02*2.24362E-02 + 1.43711E-01*1.43711E-01)/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.5f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=100
ref = (5.65921E+01*5.65921E+01 + 4.65097E+01*4.65097E+01)/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.5f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=10000
ref = abs(-1.82119E+02 -9.51912E+02*1j)**2/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.5f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
print()
print(" miepython Wiscombe")
print(" X m.real m.imag Qback Qback ratio")
m=complex(1.5, -1.0)
x=0.055
ref = abs(7.66140E-05 + 8.33814E-05*1j)**2/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=0.056
ref = (8.08721E-05*8.08721E-05 + 8.80098E-05*8.80098E-05)/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=1.0
ref = (3.48844E-01*3.48844E-01 + 1.46829E-01*1.46829E-01)/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=100.0
ref = (2.02936E+01*2.02936E+01 + 4.38444E+00*4.38444E+00)/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=10000
ref = abs(-2.18472E+02 -2.06461E+03*1j)**2/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
print()
print(" miepython Wiscombe")
print(" X m.real m.imag Qback Qback ratio")
m=complex(10, -10.0)
x=1
ref = abs(4.48546E-01 + 7.91237E-01*1j)**2/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=100
ref = abs(-4.14538E+01 -1.82181E+01*1j)**2/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x=10000
ref = abs(2.25248E+03 -3.92447E+03*1j)**2/x/x*4
qext, qsca, qback, g = miepython.mie(m,x)
print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref))
x = np.logspace(1, 5, 20) # also in microns
kappa=1
m = 1.5 - kappa*1j
R = abs(m-1)**2/abs(m+1)**2
Qbig = R * np.ones_like(x)
qext, qsca, qback, g = miepython.mie(m,x)
plt.semilogx(x, qback, '+')
plt.semilogx(x, Qbig, ':')
plt.text(x[-1],Qbig[-1],"$\kappa$=%.3f" % kappa,va="bottom",ha='right')
kappa=0.001
m = 1.5 - kappa*1j
R = abs(m-1)**2/abs(m+1)**2
Qbig = R * np.ones_like(x)
qext, qsca, qback, g = miepython.mie(m,x)
plt.semilogx(x, qback, '+')
plt.semilogx(x, Qbig, ':')
plt.text(x[-1],Qbig[-1],"$\kappa$=%.3f" % kappa,va="bottom",ha='right')
plt.ylim(0,0.2)
plt.title("Backscattering Efficiency for m=1.5 - i $\kappa$")
plt.xlabel("Size Parameter")
plt.ylabel("$Q_{back}$")
plt.grid()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Wiscombe tests
Step2: Spheres with a smaller refractive index than their environment
Step3: Non-absorbing spheres
Step4: Water droplets
Step5: Moderately absorbing spheres
Step6: Spheres with really big index of refraction
Step7: Backscattering Efficiency for Large Absorbing Spheres
|
12,313
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
import matplotlib.pyplot as plt
import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline
from scipy.interpolate import UnivariateSpline
import json
import pandas as pd
from functools import partial
class Foo: pass
from chest import Chest
from slict import CachedSlict
from glopen import glopen, glopen_many
def load_from_archive(names, arch):
cs = []
for name in names:
cs.append(Chest(path = "{:s}-results".format(name),
open = partial(glopen, endpoint=arch),
open_many = partial(glopen_many, endpoint=arch)))
scs = [CachedSlict(c) for c in cs]
ps = []
for name in names:
with glopen(
"{:s}.json".format(name), mode='r',
endpoint = arch,
) as f:
ps.append(json.load(f))
return cs, scs, ps
config = Foo()
config.names = [
# "Wilk/Wilk_kmin_2.5/Wilk_kmin_2.5",
# "Wilk/Wilk_kmin_3.5/Wilk_kmin_3.5",
# "Wilk/Wilk_kmin_4.5/Wilk_kmin_4.5",
"Wilk/Wilk_long/Wilk_long",
]
#config.arch_end = "maxhutch#alpha-admin/~/pub/"
#config.arch_end = "alcf#dtn_mira/projects/alpha-nek/experiments/"
config.arch_end = "alcf#dtn_mira/projects/PetaCESAR/maxhutch/"
height = 'H_exp'
cs, scs, ps = load_from_archive(config.names, config.arch_end);
for c,sc in zip(cs, scs):
c.prefetch(sc[:,height].full_keys())
spls = []
for sc, p in zip(scs, ps):
T = np.array(sc[:,height].keys())
H = np.array(sc[:,height].values()) #- 2 * np.sqrt(p['conductivity']* (T + p['delta']**2 / p['conductivity'] / 4))
spls.append(UnivariateSpline(T,
H,
k = 5,
s = 1.e-12))
Frs = [spl.derivative() for spl in spls]
Tss = [np.linspace(sc[:,height].keys()[0], sc[:,height].keys()[-1], 1000) for sc in scs]
Run37 = pd.read_csv('WRun37 4.49.56 PM 7_3_07.txt', sep='\t', index_col=0, parse_dates=True)
Run58 = pd.read_csv('WRun058 4.32.52 PM 7_3_07.txt', sep='\t', index_col=0, parse_dates=True)
Run78 = pd.read_csv('WRun078 4.49.56 PM 7_3_07.txt', sep='\t', index_col=0, parse_dates=True)
def plot_exp(data, n, fmt):
norm = .5*( np.sqrt(data["Atwood"]/(1-data["Atwood"])*data["Accel. [mm/sec^2]"]* 76 / n)
+ np.sqrt(data["Atwood"]/(1+data["Atwood"])*data["Accel. [mm/sec^2]"]* 76 / n))
axs.plot(
data["AvgAmp (mm)"] * n / 76,
data["Average Velocity"]/norm, fmt);
#data["Froude Average"], fmt);
return
fig, axs = plt.subplots(1,1)
for p, spl, Fr, T in zip(ps, spls, Frs, Tss):
axs.plot(
spl(T) * p["kmin"],
Fr(T)/ np.sqrt(p["atwood"]*p["g"] / p["kmin"]),
label="{:3.1f} modes".format(p["kmin"]));
#axs.plot(Run37["AvgAmp (mm)"] * 2.5 / 76, Run37["Froude Average"], "bx");
#plot_exp(Run37, 2.5, "bx")
#plot_exp(Run78, 3.5, "gx")
plot_exp(Run58, 4.5, "bx")
axs.plot([0,10], [np.sqrt(1/np.pi), np.sqrt(1/np.pi)], 'k--')
axs.axvline(x=1.4, color='k');
axs.set_ylabel(r'Fr')
axs.set_xlabel(r'$h/\lambda$');
axs.legend(loc=4);
axs.set_xbound(0,3);
axs.set_ybound(0,1.5);
plt.savefig('Figure17_long.png')
fig, axs = plt.subplots(1,1)
for sc, p, spl, Fr, T in zip(scs, ps, spls, Frs, Tss):
axs.plot(
T,
spl(T) * p["kmin"],
label="{:3.1f} modes".format(p["kmin"]));
axs.plot(
sc[:,height].keys(),
np.array(sc[:,height].values())*p['kmin'],
'bo');
#axs.plot(Run37["Time (sec)"]-.5, Run37["AvgAmp (mm)"] * 2.5 / 76, "bx");
axs.plot(Run58["Time (sec)"]-.515, Run58["AvgAmp (mm)"] * 4.5 / 78, "bx");
#axs.plot(Run78["Time (sec)"]-.5, Run78["AvgAmp (mm)"] * 3.5 / 76, "gx");
axs.set_ylabel(r'$h/\lambda$')
axs.set_xlabel(r'T (s)');
axs.set_xbound(0.0,1.5);
axs.set_ybound(-0.0,4);
axs.legend(loc=4);
%install_ext http://raw.github.com/jrjohansson/version_information/master/version_information.py
%load_ext version_information
%version_information numpy, matplotlib, slict, chest, glopen, globussh
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: And some more specialized dependencies
Step2: Helper routines
Step3: Configuration for this figure.
Step4: Open a chest located on a remote globus endpoint and load a remote json configuration file.
Step5: We want to plot the spike depth, which is the 'H' field in the chest.
Step6: Use a spline to compute the derivative of 'H' vs time
Step7: Plot the Froude number, non-dimensionalized by the theoretical dependence on Atwood, acceleration, and wave number, vs the spike depth, normalized by wave-length.
|
12,314
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import cvxopt as opt
from cvxopt import blas, solvers
import pandas as pd
np.random.seed(123)
# Turn off progress printing
solvers.options['show_progress'] = False
## NUMBER OF ASSETS
n_assets = 4
## NUMBER OF OBSERVATIONS
n_obs = 1000
return_vec = np.random.randn(n_assets, n_obs)
plt.plot(return_vec.T, alpha=.4);
plt.xlabel('time')
plt.ylabel('returns');
def rand_weights(n):
''' Produces n random weights that sum to 1 '''
k = np.random.rand(n)
return k / sum(k)
print(rand_weights(n_assets))
print(rand_weights(n_assets))
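The normalization trick in `rand_weights` — draw positive numbers and divide by their sum — can be sketched without NumPy. The `rand_weights_py` name is illustrative, not part of the notebook:

```python
import random

def rand_weights_py(n):
    # draw n positive uniforms and normalize so the weights sum to 1
    k = [random.random() for _ in range(n)]
    s = sum(k)
    return [v / s for v in k]

w = rand_weights_py(4)
assert abs(sum(w) - 1.0) < 1e-9
assert all(v >= 0 for v in w)
```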
def random_portfolio(returns):
'''
Returns the mean and standard deviation of returns for a random portfolio
'''
p = np.asmatrix(np.mean(returns, axis=1))
w = np.asmatrix(rand_weights(returns.shape[0]))
C = np.asmatrix(np.cov(returns))
mu = w * p.T
sigma = np.sqrt(w * C * w.T)
# This recursion reduces outliers to keep plots pretty
if sigma > 2:
return random_portfolio(returns)
return mu, sigma
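`random_portfolio` computes the portfolio return as mu = w·p and the risk as sigma = sqrt(wᵀCw). A hand-checkable two-asset instance of that algebra (the numbers here are made up for illustration):

```python
import math

w = [0.5, 0.5]                     # weights
p = [0.10, 0.20]                   # expected returns
C = [[0.04, 0.00], [0.00, 0.16]]   # covariance matrix
mu = sum(wi * pi for wi, pi in zip(w, p))
var = sum(w[i] * C[i][j] * w[j] for i in range(2) for j in range(2))
assert math.isclose(mu, 0.15)
assert math.isclose(math.sqrt(var), math.sqrt(0.05))
```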
n_portfolios = 500
means, stds = np.column_stack([
random_portfolio(return_vec)
    for _ in range(n_portfolios)
])
plt.plot(stds, means, 'o', markersize=5)
plt.xlabel('std')
plt.ylabel('mean')
plt.title('Mean and standard deviation of returns of randomly generated portfolios');
def optimal_portfolio(returns):
n = len(returns)
returns = np.asmatrix(returns)
N = 100
mus = [10**(5.0 * t/N - 1.0) for t in range(N)]
# Convert to cvxopt matrices
S = opt.matrix(np.cov(returns))
pbar = opt.matrix(np.mean(returns, axis=1))
# Create constraint matrices
G = -opt.matrix(np.eye(n)) # negative n x n identity matrix
h = opt.matrix(0.0, (n ,1))
A = opt.matrix(1.0, (1, n))
b = opt.matrix(1.0)
# Calculate efficient frontier weights using quadratic programming
portfolios = [solvers.qp(mu*S, -pbar, G, h, A, b)['x']
for mu in mus]
## CALCULATE RISKS AND RETURNS FOR FRONTIER
returns = [blas.dot(pbar, x) for x in portfolios]
risks = [np.sqrt(blas.dot(x, S*x)) for x in portfolios]
## CALCULATE THE 2ND DEGREE POLYNOMIAL OF THE FRONTIER CURVE
m1 = np.polyfit(returns, risks, 2)
x1 = np.sqrt(m1[2] / m1[0])
# CALCULATE THE OPTIMAL PORTFOLIO
wt = solvers.qp(opt.matrix(x1 * S), -pbar, G, h, A, b)['x']
return np.asarray(wt), returns, risks
weights, returns, risks = optimal_portfolio(return_vec)
plt.plot(stds, means, 'o')
plt.ylabel('mean')
plt.xlabel('std')
plt.plot(risks, returns, 'y-o');
print(weights)
tickers = ['IBM', 'SBUX', 'XOM', 'AAPL', 'MSFT', 'TLT', 'SHY']
data = get_pricing(
tickers,
start_date='2005-01-01',
end_date='2015-01-01'
)
data.loc['price', :, :].plot()
plt.ylabel('price in $')
plt.legend(tickers);
import zipline
from zipline.api import (
set_slippage,
slippage,
set_commission,
commission,
order_target_percent,
symbols
)
from zipline import TradingAlgorithm
def initialize(context):
'''
Called once at the very beginning of a backtest (and live trading).
Use this method to set up any bookkeeping variables.
The context object is passed to all the other methods in your algorithm.
Parameters
context: An initialized and empty Python dictionary that has been
augmented so that properties can be accessed using dot
notation as well as the traditional bracket notation.
Returns None
'''
# Turn off the slippage model
set_slippage(slippage.FixedSlippage(spread=0.0))
# Set the commission model (Interactive Brokers Commission)
set_commission(commission.PerShare(cost=0.01, min_trade_cost=1.0))
context.tick = 0
context.assets = symbols('IBM', 'SBUX', 'XOM', 'AAPL', 'MSFT', 'TLT', 'SHY')
def handle_data(context, data):
'''
Called when a market event occurs for any of the algorithm's
securities.
Parameters
data: A dictionary keyed by security id containing the current
state of the securities in the algo's universe.
context: The same context object from the initialize function.
Stores the up to date portfolio as well as any state
variables defined.
Returns None
'''
# Allow history to accumulate 100 days of prices before trading
# and rebalance every day thereafter.
context.tick += 1
if context.tick < 100:
return
# Get rolling window of past prices and compute returns
prices = data.history(context.assets, 'price', 100, '1d').dropna()
returns = prices.pct_change().dropna()
try:
# Perform Markowitz-style portfolio optimization
weights, _, _ = optimal_portfolio(returns.T)
# Rebalance portfolio accordingly
for stock, weight in zip(prices.columns, weights):
order_target_percent(stock, weight)
except ValueError as e:
# Sometimes this error is thrown
# ValueError: Rank(A) < p or Rank([P; A; G]) < n
pass
# Instantinate algorithm
algo = TradingAlgorithm(initialize=initialize,
handle_data=handle_data)
# Run algorithm
results = algo.run(data.swapaxes(2, 0, 1))
results.portfolio_value.plot();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Assume that we have 4 assets, each with a return series of length 1000. We can use numpy.random.randn to sample returns from a normal distribution.
Step2: These return series can be used to create a wide range of portfolios. We will produce random weight vectors and plot those portfolios. As we want all our capital to be invested, the weights will have to sum to one.
Step3: Next, let's evaluate how these random portfolios would perform by calculating the mean returns and the volatility (here we are using standard deviation). You can see that there is
Step4: We calculate the return using
Step5: Upon plotting these you will observe that they form a characteristic parabolic
Step6: Markowitz optimization and the Efficient Frontier
Step7: In yellow you can see the optimal portfolios for each of the desired returns (i.e. the mus). In addition, we get the weights for one optimal portfolio
Step8: Backtesting on real market data
Step9: Next, we'll create a zipline algorithm by defining two functions
|
12,315
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Load the TensorBoard notebook extension.
%load_ext tensorboard
from datetime import datetime
from packaging import version
import tensorflow as tf
from tensorflow import keras
import numpy as np
print("TensorFlow version: ", tf.__version__)
assert version.parse(tf.__version__).release[0] >= 2, \
"This notebook requires TensorFlow 2.0 or above."
data_size = 1000
# 80% of the data is for training.
train_pct = 0.8
train_size = int(data_size * train_pct)
# Create some input data between -1 and 1 and randomize it.
x = np.linspace(-1, 1, data_size)
np.random.shuffle(x)
# Generate the output data.
# y = 0.5x + 2 + noise
y = 0.5 * x + 2 + np.random.normal(0, 0.05, (data_size, ))
# Split into test and train pairs.
x_train, y_train = x[:train_size], y[:train_size]
x_test, y_test = x[train_size:], y[train_size:]
logdir = "logs/scalars/" + datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)
model = keras.models.Sequential([
keras.layers.Dense(16, input_dim=1),
keras.layers.Dense(1),
])
model.compile(
loss='mse', # keras.losses.mean_squared_error
optimizer=keras.optimizers.SGD(learning_rate=0.2),
)
print("Training ... With default parameters, this takes less than 10 seconds.")
training_history = model.fit(
x_train, # input
y_train, # output
batch_size=train_size,
verbose=0, # Suppress chatty output; use Tensorboard instead
epochs=100,
validation_data=(x_test, y_test),
callbacks=[tensorboard_callback],
)
print("Average test loss: ", np.average(training_history.history['loss']))
%tensorboard --logdir logs/scalars
print(model.predict([60, 25, 2]))
# True values to compare predictions against:
# [[32.0]
# [14.5]
# [ 3.0]]
logdir = "logs/scalars/" + datetime.now().strftime("%Y%m%d-%H%M%S")
file_writer = tf.summary.create_file_writer(logdir + "/metrics")
file_writer.set_as_default()
def lr_schedule(epoch):
  """Returns a custom learning rate that decreases as epochs progress."""
learning_rate = 0.2
if epoch > 10:
learning_rate = 0.02
if epoch > 20:
learning_rate = 0.01
if epoch > 50:
learning_rate = 0.005
tf.summary.scalar('learning rate', data=learning_rate, step=epoch)
return learning_rate
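The schedule above is a simple piecewise-constant function of the epoch; stripped of the `tf.summary` logging side effect it reduces to the following (the `lr_for` name is illustrative):

```python
def lr_for(epoch):
    # same thresholds as lr_schedule above, cascading ifs in increasing order
    learning_rate = 0.2
    if epoch > 10:
        learning_rate = 0.02
    if epoch > 20:
        learning_rate = 0.01
    if epoch > 50:
        learning_rate = 0.005
    return learning_rate

assert lr_for(0) == 0.2
assert lr_for(11) == 0.02
assert lr_for(21) == 0.01
assert lr_for(51) == 0.005
```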
lr_callback = keras.callbacks.LearningRateScheduler(lr_schedule)
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)
model = keras.models.Sequential([
keras.layers.Dense(16, input_dim=1),
keras.layers.Dense(1),
])
model.compile(
loss='mse', # keras.losses.mean_squared_error
optimizer=keras.optimizers.SGD(),
)
training_history = model.fit(
x_train, # input
y_train, # output
batch_size=train_size,
verbose=0, # Suppress chatty output; use Tensorboard instead
epochs=100,
validation_data=(x_test, y_test),
callbacks=[tensorboard_callback, lr_callback],
)
%tensorboard --logdir logs/scalars
print(model.predict([60, 25, 2]))
# True values to compare predictions against:
# [[32.0]
# [14.5]
# [ 3.0]]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TensorBoard Scalar
Step2: Setting up data for a simple regression
Step3: Training the model and logging the loss
Step4: Examining the loss using TensorBoard
Step5: <!-- <img class="tfo-display-only-on-site" src="https
Step7: Not bad!
Step8: Let's take another look at TensorBoard.
Step9: <!-- <img class="tfo-display-only-on-site" src="https
|
12,316
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Martin Luessi <mluessi@nmr.mgh.harvard.edu>
# Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Nicolas P. Rougier (graph code borrowed from his matplotlib gallery)
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.minimum_norm import apply_inverse_epochs, read_inverse_operator
from mne.connectivity import spectral_connectivity
from mne.viz import circular_layout, plot_connectivity_circle
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
fname_raw = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
fname_event = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
# Load data
inverse_operator = read_inverse_operator(fname_inv)
raw = mne.io.read_raw_fif(fname_raw)
events = mne.read_events(fname_event)
# Add a bad channel
raw.info['bads'] += ['MEG 2443']
# Pick MEG channels
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,
exclude='bads')
# Define epochs for left-auditory condition
event_id, tmin, tmax = 1, -0.2, 0.5
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(mag=4e-12, grad=4000e-13,
eog=150e-6))
# Compute inverse solution and for each epoch. By using "return_generator=True"
# stcs will be a generator object instead of a list.
snr = 1.0 # use lower SNR for single epochs
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
stcs = apply_inverse_epochs(epochs, inverse_operator, lambda2, method,
pick_ori="normal", return_generator=True)
# Get labels for FreeSurfer 'aparc' cortical parcellation with 34 labels/hemi
labels = mne.read_labels_from_annot('sample', parc='aparc',
subjects_dir=subjects_dir)
label_colors = [label.color for label in labels]
# Average the source estimates within each label using sign-flips to reduce
# signal cancellations, also here we return a generator
src = inverse_operator['src']
label_ts = mne.extract_label_time_course(
stcs, labels, src, mode='mean_flip', return_generator=True)
fmin = 8.
fmax = 13.
sfreq = raw.info['sfreq'] # the sampling frequency
con_methods = ['pli', 'wpli2_debiased', 'ciplv']
con, freqs, times, n_epochs, n_tapers = spectral_connectivity(
label_ts, method=con_methods, mode='multitaper', sfreq=sfreq, fmin=fmin,
fmax=fmax, faverage=True, mt_adaptive=True, n_jobs=1)
# con is a 3D array, get the connectivity for the first (and only) freq. band
# for each method
con_res = dict()
for method, c in zip(con_methods, con):
con_res[method] = c[:, :, 0]
# First, we reorder the labels based on their location in the left hemi
label_names = [label.name for label in labels]
lh_labels = [name for name in label_names if name.endswith('lh')]
# Get the y-location of the label
label_ypos = list()
for name in lh_labels:
idx = label_names.index(name)
ypos = np.mean(labels[idx].pos[:, 1])
label_ypos.append(ypos)
# Reorder the labels based on their location
lh_labels = [label for (yp, label) in sorted(zip(label_ypos, lh_labels))]
# For the right hemi
rh_labels = [label[:-2] + 'rh' for label in lh_labels]
# Save the plot order and create a circular layout
node_order = list()
node_order.extend(lh_labels[::-1]) # reverse the order
node_order.extend(rh_labels)
node_angles = circular_layout(label_names, node_order, start_pos=90,
group_boundaries=[0, len(label_names) / 2])
# Plot the graph using node colors from the FreeSurfer parcellation. We only
# show the 300 strongest connections.
plot_connectivity_circle(con_res['pli'], label_names, n_lines=300,
node_angles=node_angles, node_colors=label_colors,
title='All-to-All Connectivity left-Auditory '
'Condition (PLI)')
fig = plt.figure(num=None, figsize=(8, 4), facecolor='black')
no_names = [''] * len(label_names)
for ii, method in enumerate(con_methods):
plot_connectivity_circle(con_res[method], no_names, n_lines=300,
node_angles=node_angles, node_colors=label_colors,
title=method, padding=0, fontsize_colorbar=6,
fig=fig, subplot=(1, 3, ii + 1))
plt.show()
# fname_fig = data_path + '/MEG/sample/plot_inverse_connect.png'
# fig.savefig(fname_fig, facecolor='black')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load our data
Step2: Compute inverse solutions and their connectivity
Step3: Make a connectivity plot
Step4: Make two connectivity plots in the same figure
Step5: Save the figure (optional)
|
12,317
|
<ASSISTANT_TASK:>
Python Code:
from google.colab import auth
auth.authenticate_user()
project_id = '[your project id]'
import pandas as pd
import datetime
today = datetime.datetime.utcnow().strftime("%Y%m%d")
df = pd.io.gbq.read_gbq('''
SELECT
count(*) as total
FROM
`web_instr_container.stdout_{}`
'''.format(today), project_id=project_id)
total = df.total[0]
print(f'Log records: {total}')
import pandas as pd
import datetime
today = datetime.datetime.utcnow().strftime("%Y%m%d")
df = pd.io.gbq.read_gbq('''
SELECT
EXTRACT(MINUTE FROM timestamp) AS Minute,
REGEXP_EXTRACT(textPayload, r"LoadTest: latency: ([0-9]+)\.") AS Latency
FROM
`web_instr_container.stdout_{}`
WHERE
EXTRACT(HOUR FROM timestamp) = 2
AND
EXTRACT(MINUTE FROM timestamp) > 0
AND
EXTRACT(MINUTE FROM timestamp) < 5
ORDER BY
Minute
'''.format(today), project_id=project_id)
df1 = df[pd.isnull(df['Latency']) == False]
df1 = df1.astype({'Minute': 'int64', 'Latency': 'int64'})
latency = df1.groupby('Minute').median()
ax = latency.plot()
ax.set_xlabel('Time (min)')
ax.set_ylabel('Median Client Latency (ms)')
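The `REGEXP_EXTRACT` pattern in the query above can be checked against a sample log line with Python's `re` module (the log text here is a made-up example):

```python
import re

pattern = re.compile(r"LoadTest: latency: ([0-9]+)\.")
m = pattern.search("LoadTest: latency: 137. request ok")
assert m is not None and m.group(1) == "137"
```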
import pandas as pd
import datetime
today = datetime.datetime.utcnow().strftime("%Y%m%d")
df = pd.io.gbq.read_gbq('''
SELECT
EXTRACT(SECOND FROM timestamp) AS Seconds,
REGEXP_EXTRACT(textPayload, r"LoadTest: latency: ([0-9]+)\.") AS Latency
FROM
`web_instr_container.stdout_{}`
WHERE
EXTRACT(HOUR FROM timestamp) = 2
AND
EXTRACT(MINUTE FROM timestamp) = 3
ORDER BY
Seconds
'''.format(today), project_id=project_id)
df1 = df[pd.isnull(df['Latency']) == False]
df1 = df1.astype({'Seconds': 'int64', 'Latency': 'int64'})
ax = df1.plot.scatter(x='Seconds', y='Latency', c='DarkBlue')
ax.set_xlabel('Time (s)')
ax.set_ylabel('Client Latency (ms)')
import pandas as pd
import datetime
today = datetime.datetime.utcnow().strftime("%Y%m%d")
df = pd.io.gbq.read_gbq('''
SELECT
EXTRACT(MINUTE FROM timestamp) as Minute,
httpRequest.status AS Status
FROM
`web_instr_load_balancer.requests_{}`
GROUP BY
Minute, Status
ORDER BY Minute, Status
'''.format(today), project_id=project_id)
df.head(10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Log Overview
Step2: Client Latency
Step3: Scatter Plot
Step4: Responses
|
12,318
|
<ASSISTANT_TASK:>
Python Code:
import pandas
from time import time
import cobra.test
from cobra.flux_analysis import (
single_gene_deletion, single_reaction_deletion, double_gene_deletion,
double_reaction_deletion)
cobra_model = cobra.test.create_test_model("textbook")
ecoli_model = cobra.test.create_test_model("ecoli")
print('complete model: ', cobra_model.optimize())
with cobra_model:
cobra_model.reactions.PFK.knock_out()
print('pfk knocked out: ', cobra_model.optimize())
print('complete model: ', cobra_model.optimize())
with cobra_model:
cobra_model.genes.b1723.knock_out()
print('pfkA knocked out: ', cobra_model.optimize())
cobra_model.genes.b3916.knock_out()
print('pfkB knocked out: ', cobra_model.optimize())
deletion_results = single_gene_deletion(cobra_model)
single_gene_deletion(cobra_model, cobra_model.genes[:20])
single_reaction_deletion(cobra_model, cobra_model.reactions[:20])
double_gene_deletion(
cobra_model, cobra_model.genes[-5:], return_frame=True).round(4)
start = time() # start timer()
double_gene_deletion(
ecoli_model, ecoli_model.genes[:300], number_of_processes=2)
t1 = time() - start
print("Double gene deletions for 200 genes completed in "
"%.2f sec with 2 cores" % t1)
start = time() # start timer()
double_gene_deletion(
ecoli_model, ecoli_model.genes[:300], number_of_processes=1)
t2 = time() - start
print("Double gene deletions for 200 genes completed in "
"%.2f sec with 1 core" % t2)
print("Speedup of %.2fx" % (t2 / t1))
double_reaction_deletion(
cobra_model, cobra_model.reactions[2:7], return_frame=True).round(4)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Knocking out single genes and reactions
Step2: For evaluating genetic manipulation strategies, it is more interesting to examine what happens if given genes are knocked out as doing so can affect no reactions in case of redundancy, or more reactions if gene when is participating in more than one reaction.
Step3: Single Deletions
Step4: These can also be done for only a subset of genes
Step5: This can also be done for reactions
Step6: Double Deletions
Step7: By default, the double deletion function will automatically use multiprocessing, splitting the task over up to 4 cores if they are available. The number of cores can be manually specified as well. Setting use of a single core will disable use of the multiprocessing library, which often aids debugging.
Step8: Double deletions can also be run for reactions.
|
12,319
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from scipy.interpolate import interp1d
# YOUR CODE HERE
with np.load('trajectory.npz') as data:
x = data['x']
t=data['t']
y=data['y']
plt.plot(t,x,marker='o')
assert isinstance(x, np.ndarray) and len(x)==40
assert isinstance(y, np.ndarray) and len(y)==40
assert isinstance(t, np.ndarray) and len(t)==40
# YOUR CODE HERE
newt=np.linspace(min(t),max(t),200)
xout= interp1d(t,x,kind= 'cubic')
newx= xout(newt)
yout= interp1d(t,y,kind='cubic')
newy= yout(newt)
# newx = interp1d(t, x, kind='cubic')
# newy = interp1d(t, y, kind='cubic')
# print(newx)
assert newt[0]==t.min()
assert newt[-1]==t.max()
assert len(newt)==200
assert len(newx)==200
assert len(newy)==200
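`interp1d(..., kind='cubic')` fits piecewise cubics; the underlying idea — evaluate between known samples on a denser grid — can be sketched with plain linear interpolation. The `lerp` helper is illustrative, not scipy's API:

```python
def lerp(ts, xs, t):
    # assumes ts is sorted and ts[0] <= t <= ts[-1]
    for i in range(len(ts) - 1):
        if ts[i] <= t <= ts[i + 1]:
            w = (t - ts[i]) / (ts[i + 1] - ts[i])
            return xs[i] + w * (xs[i + 1] - xs[i])
    raise ValueError("t outside data range")

ts = [0.0, 1.0, 2.0]
xs = [0.0, 2.0, 0.0]
assert lerp(ts, xs, 0.5) == 1.0
assert lerp(ts, xs, 1.5) == 1.0
```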
# YOUR CODE HERE
plt.figure(figsize=(9,6))
plt.plot(t, x, marker='o', linestyle='', label='original x data')
plt.plot(newt, newx, marker='.', label='interpolated x function');
plt.plot(t, y, marker='o', linestyle='', label='original y data')
plt.plot(newt, newy, marker='.', label='interpolated y function');
plt.legend();
plt.xlabel('Time (sec)')
plt.ylabel('F(t)');
plt.title("Graph Showing Effect of Interpolation Function")
assert True # leave this to grade the trajectory plot
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2D trajectory interpolation
Step2: Use these arrays to create interpolated functions $x(t)$ and $y(t)$. Then use those functions to create the following arrays
Step3: Make a parametric plot of ${x(t),y(t)}$ that shows the interpolated values and the original points
|
12,320
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
import pickle
import numpy as np
import scipy
import matplotlib.pyplot as plt
import ChiantiPy.core as ch
import sunpy.instr.aia as aia
%matplotlib inline
response = aia.Response(path_to_genx_dir='../ssw_aia_response_data/')
response.calculate_wavelength_response()
response.peek_wavelength_response()
temperature = np.logspace(5.,8.,50)
density = 1.e+9
ions = ['fe_8','fe_9','fe_12','fe_14','fe_16','fe_17','fe_20','fe_23','fe_24']
search_interval = np.array([-2.5,2.5])
ion_wvl_ranges = [c+search_interval for c in [131.,171.,193.,211.,335.,94.,131.,131.,193.]]
#warning! This takes a long time!
contribution_fns = {}
for i,iwr in zip(ions,ion_wvl_ranges):
tmp_ion = ch.ion(i,temperature=temperature,eDensity=density,em=1.e+27)
tmp_ion.gofnt(wvlRange=[iwr[0],iwr[1]],top=3,plot=False)
plt.show()
contribution_fns[i] = tmp_ion.Gofnt
sorted_g = sorted([g[1] for g in contribution_fns.items()],key=lambda x: x['wvl'])
g_matrix = np.vstack((g['gofnt'] for g in sorted_g)).T
discrete_wavelengths = np.array([g['wvl'] for g in sorted_g])
for key in wavelength_response_fns:
wavelength_response_fns[key]['wavelength_interpolated'] = discrete_wavelengths[:,0]
wavelength_response_fns[key]['response_interpolated'] = np.interp(discrete_wavelengths,
wavelength_response_fns[key]['wavelength'],
wavelength_response_fns[key]['response'])[:,0]
temperature_response = {}
for key in wavelength_response_fns:
g_times_r = g_matrix*wavelength_response_fns[key]['response_interpolated']
temperature_response[key] = np.trapz(g_times_r,
wavelength_response_fns[key]['wavelength_interpolated'])
fig = plt.figure(figsize=(10,10))
ax = fig.gca()
for tresp in temperature_response.items():
ax.plot(temperature,tresp[1],label=str(tresp[0]),color=sns.xkcd_rgb[channel_colors[tresp[0]]])
ax.set_ylim([1e-28,1e-22])
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xlabel(r'$T$ (K)')
ax.set_ylabel(r'$K_i(T)$')
ax.legend(loc='best',title=r'Channel ($\mathrm{\mathring{A}}$)')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The goal of this notebook is to test the wavelength and temperature response function calculations that are currently being developed in SunPy.
Step2: Contribution Functions, $G(n,T)$
Step3: Now, make a list of all the ions that we care about so that we can easily iterate through them.
Step4: Finally, iterate through all of the ions and store the contribution function and associated information.
Step5: Calculating Temperature Response Functions
Step6: Finally, try to plot all of the temperature response functions.
|
12,321
|
<ASSISTANT_TASK:>
Python Code:
import csv
from pprint import pprint
import math
stat = {'Congruent': { 'data': [] }, 'Incongruent': { 'data': [] }, 'Difference': { 'data': [] }}
with open('./stroopdata.csv', 'r') as st_data:
reader = csv.DictReader(st_data)
for row in reader:
cong = float(row['Congruent'])
incong = float(row['Incongruent'])
diff = cong-incong
stat['Congruent']['data'].append(cong)
stat['Incongruent']['data'].append(incong)
stat['Difference']['data'].append(diff)
for k in stat:
print(k + ": ")
pprint(stat[k]['data'])
def variance(data):
    """This function returns variance of given sample data."""
mean = sum(data)/len(data)
squared_diff = 0
for d in data:
squared_diff += pow((d - mean), 2)
return squared_diff/(len(data) - 1)
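`variance` above implements the sample variance with Bessel's correction (dividing by n − 1); it agrees with the stdlib's `statistics.variance`. The sample values here are made up:

```python
import math
import statistics

def sample_variance(data):
    # sample variance with Bessel's correction, as in the notebook
    mean = sum(data) / len(data)
    return sum((d - mean) ** 2 for d in data) / (len(data) - 1)

sample = [12.0, 15.5, 9.8, 14.2, 11.1]
assert math.isclose(sample_variance(sample), statistics.variance(sample))
```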
for k in stat:
list_data = sorted(stat[k]['data'])
count = len(list_data)
mean = sum(list_data)/count
median = list_data[int(count/2)]
var = variance(list_data)
std = math.sqrt(var)
if count%2 == 0:
median = (list_data[int(count/2)] + list_data[int(count/2 - 1) ])/2
stat[k]['mean'] = mean
stat[k]['median'] = median
stat[k]['variance'] = var
stat[k]['std'] = std
print('\n' + k + ': ')
print('Mean: ', mean)
print('Median: ', median)
print('Variance: ', var)
print('Standard Deviation: ', std)
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
colors = {'Congruent': 'red', 'Incongruent': 'green', 'Difference':'blue'}
for k in stat:
ax.hist(stat[k]['data'], color = colors[k], label=k, rwidth = 0.9)
legend = ax.legend(loc='upper left')
plt.title('Distribution of Time')
plt.xlabel('Time Taken')
plt.show()
list_d = stat['Difference']['data']
se = stat['Difference']['std']/math.sqrt(len(list_d))
t = (stat['Congruent']['mean'] - stat['Incongruent']['mean'])/se
print(se)
print(t)
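The last cell computes a paired t statistic: the mean difference divided by its standard error, SE = s_d/√n. A self-contained version of that formula on toy data (the times below are made up, not the Stroop dataset):

```python
import math
import statistics

def paired_t(x, y):
    # t = mean(d) / (stdev(d) / sqrt(n)) for paired differences d = x - y
    d = [a - b for a, b in zip(x, y)]
    se = statistics.stdev(d) / math.sqrt(len(d))
    return statistics.mean(d) / se

congruent = [12.0, 14.5, 9.8]
incongruent = [19.0, 22.1, 15.3]
t = paired_t(congruent, incongruent)
assert -11 < t < -10  # incongruent is slower, so the statistic is negative
```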
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Question 3
Step3: Question 4
Step4: Question 5
|
12,322
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, unicode_literals, absolute_import, division
from six.moves import range, zip, map, reduce, filter
import numpy as np
import matplotlib.pyplot as plt
from IPython import display
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import seaborn as sns
sns.set_style('whitegrid')
plt.rc('figure', figsize=(7.0, 5.0))
import keras
from keras import backend as K
from keras.models import Sequential, Model
from keras.layers import Input, Dense, Activation
from keras.optimizers import Adam
from keras.callbacks import LambdaCallback
from keras.utils import np_utils
def plot_callback(func,p=20):
def plot_epoch_end(epoch,logs):
if epoch == 0 or (epoch+1) % p == 0:
plt.clf(); func(); plt.title('epoch %d' % (epoch+1))
display.clear_output(wait=True); display.display(plt.gcf())
def clear(*args):
plt.clf()
return LambdaCallback(on_epoch_end=plot_epoch_end,on_train_end=clear)
def plot_loss_acc(hist):
plt.figure(figsize=(15,4));
if len(hist.params['metrics']) == 2:
plt.subplot(121); plt.semilogy(hist.epoch,hist.history['loss'])
plt.xlabel('epoch'); plt.ylabel('loss'); plt.legend(['train'],loc='upper right')
plt.subplot(122); plt.plot(hist.epoch,hist.history['acc']);
plt.xlabel('epoch'); plt.ylabel('accuracy'); plt.legend(['train'],loc='lower right');
else:
plt.subplot(121); plt.semilogy(hist.epoch,hist.history['loss'], hist.epoch,hist.history['val_loss']);
plt.xlabel('epoch'); plt.ylabel('loss'); plt.legend(['train','test'],loc='upper right')
plt.subplot(122); plt.plot(hist.epoch,hist.history['acc'], hist.epoch,hist.history['val_acc'])
plt.xlabel('epoch'); plt.ylabel('accuracy'); plt.legend(['train','test'],loc='lower right');
iris = sns.load_dataset("iris")
iris.sample(10)
sns.pairplot(iris, hue='species');
def label_encode(arr):
uniques, ids = np.unique(arr, return_inverse=True)
return ids
classes = ('setosa', 'versicolor', 'virginica')
labels = label_encode(classes)
for i,c in enumerate(classes):
print('%10s → %d' % (c, labels[i]))
def onehot_encode(arr):
uniques, ids = np.unique(arr, return_inverse=True)
return np_utils.to_categorical(ids, len(uniques))
classes = ('setosa', 'versicolor', 'virginica')
onehot = onehot_encode(classes)
for i,c in enumerate(classes):
print('%10s → [%d,%d,%d]' % (c, onehot[i,0], onehot[i,1], onehot[i,2]))
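As a quick cross-check on the two encodings above (an illustrative sketch in plain numpy; `onehot_encode_np` stands in for what `np_utils.to_categorical` does in the notebook): taking `argmax` over the one-hot rows recovers the integer labels, which is also how class predictions are read off the softmax output later.

```python
import numpy as np

def label_encode(arr):
    # integer ids assigned in sorted order of the unique values
    uniques, ids = np.unique(arr, return_inverse=True)
    return ids

def onehot_encode_np(arr):
    # pure-numpy one-hot encoding (replaces np_utils.to_categorical for this check)
    uniques, ids = np.unique(arr, return_inverse=True)
    return np.eye(len(uniques))[ids]

classes = np.array(['setosa', 'versicolor', 'virginica', 'setosa'])
labels = label_encode(classes)        # [0, 1, 2, 0]
onehot = onehot_encode_np(classes)
recovered = onehot.argmax(axis=1)     # argmax inverts the one-hot encoding
print(labels, recovered)
```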
data = iris
feature_name = 'petal_length'
data = data[[feature_name,'species']]
X = data.values[:,0]
y = label_encode(data.values[:,1])
y_oh = onehot_encode(data.values[:,1])
N = len(y)
R = np.linspace(X.min()-1,X.max()+1,100)
Xp = np.zeros(X.shape[0])-.1
Rp = np.zeros(R.shape[0])-.2
def plot_all(model=None):
plt.scatter(X, Xp, c=y, cmap='jet');
plt.xlabel(feature_name)
if model is not None:
prob = model.predict(R)
yhat = np.argmax(prob,axis=1)
plt.scatter(R, Rp, c=yhat);
plt.plot(R,prob)
leg = plt.legend(map(lambda s:'p("%s")'%s,classes),loc='upper center',frameon=False,ncol=3)
plt.xlim(X.min()-1.5,X.max()+1.5)
plt.ylim(-.4,1.2)
plot_all()
model = Sequential()
model.add(Dense(16, input_shape=(1,)))
model.add(Activation('tanh'))
model.add(Dense(3))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
hist = model.fit(X,y_oh,batch_size=5,epochs=300,verbose=0,
callbacks=[plot_callback(lambda:plot_all(model))]);
plot_loss_acc(hist)
N = iris.shape[0] # number of data points / table rows
data = iris.sample(N,replace=False) # shuffle data
X = data.values[:,0:4]
y_oh = onehot_encode(data.values[:,4])
N_train = N//2 # random 50/50 train/test split
X_train, y_train = X[:N_train], y_oh[:N_train]
X_test, y_test = X[N_train:], y_oh[N_train:]
model = Sequential()
model.add(Dense(16, input_shape=(4,)))
model.add(Activation('tanh'))
model.add(Dense(3))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
hist = model.fit(X_train, y_train, validation_data=(X_test,y_test), epochs=200, verbose=0, batch_size=5)
plot_loss_acc(hist)
loss, accuracy = model.evaluate(X_train, y_train, verbose=0)
print('train set: loss = %.5f, accuracy = %.5f' % (loss,accuracy))
loss, accuracy = model.evaluate(X_test, y_test, verbose=0)
print('test set: loss = %.5f, accuracy = %.5f' % (loss,accuracy))
N_train = 20 # only 20 of 150 samples for training, rest for testing
X_train, y_train = X[:N_train], y_oh[:N_train]
X_test, y_test = X[N_train:], y_oh[N_train:]
model = Sequential()
model.add(Dense(16, input_shape=(4,)))
model.add(Activation('tanh'))
model.add(Dense(16))
model.add(Activation('tanh'))
model.add(Dense(16))
model.add(Activation('tanh'))
model.add(Dense(3))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
hist = model.fit(X_train, y_train, validation_data=(X_test,y_test), epochs=1000, verbose=0, batch_size=5)
plot_loss_acc(hist)
loss, accuracy = model.evaluate(X_train, y_train, verbose=0)
print('train set: loss = %.5f, accuracy = %.5f' % (loss,accuracy))
loss, accuracy = model.evaluate(X_test, y_test, verbose=0)
print('test set: loss = %.5f, accuracy = %.5f' % (loss,accuracy))
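The models above end in a softmax activation; as a reminder of what that layer computes, here is a minimal numpy version of the function (an illustrative sketch, not the Keras implementation):

```python
import numpy as np

def softmax(z):
    # subtract the row-wise max for numerical stability; rows are samples
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

scores = np.array([[2.0, 1.0, 0.1]])  # raw class scores (logits)
p = softmax(scores)
print(p, p.sum())  # probabilities sum to 1
```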
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Adapted from http
Step2: Image from https
Step3: Label Encoding
Step4: Multiclass Classification
Step5: Plotting
Step6: Model (Logistic Regression)
Step7: Softmax
Step8: Full dataset (all 4 features)
Step9: Overfitting
|
12,323
|
<ASSISTANT_TASK:>
Python Code:
hidden_weights = 256
w = tf.Variable(tf.random_normal([n_input, hidden_weights]))
b = tf.Variable(tf.random_normal([hidden_weights]))
w2 = tf.Variable(tf.random_normal([hidden_weights, hidden_weights]))
w3 = tf.Variable(tf.random_normal([hidden_weights, n_classes]))
input_layer = tf.add(tf.matmul(x, w), b)
hidden = tf.matmul(input_layer, w2)
perceptron = tf.matmul(hidden, w3)
# Loss & Optimizer
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=perceptron, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cross_entropy)
# Initialize the variables
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(mnist.train.num_examples/batch_size)
for i in range(total_batch):
batch_x, batch_y = mnist.train.next_batch(batch_size)
_, c = sess.run([optimizer, cross_entropy], feed_dict={x: batch_x, y: batch_y})
avg_cost += c / total_batch
if epoch % display_step == 0:
print("Epoch:", "%04d" % (epoch+1), "cost=", "{:.9f}".format(avg_cost))
print("Finished")
correct_prediction = tf.equal(tf.argmax(perceptron, 1), tf.argmax(y, 1))
# Accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print("Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
import numpy as np
classification_x, classification_y = make_classification(1000, n_features=100, n_informative=30)
classification_y = np.expand_dims(classification_y, axis=1)
print(classification_x.shape, classification_y.shape)
train_x, test_x, train_y, test_y = train_test_split(classification_x, classification_y, test_size=0.33)
n_input = classification_x.shape[1]
n_classes = classification_y.shape[1]
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])
w = tf.Variable(tf.random_normal([n_input, n_classes]))
b = tf.Variable(tf.random_normal([n_classes]))
perceptron = tf.add(tf.matmul(x, w), b)
perceptron_loss = tf.reduce_mean(tf.maximum(0., -y * tf.add(tf.matmul(x, w), b)))  # note: this hinge-style loss assumes labels in {-1, +1}
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(perceptron_loss)
init = tf.global_variables_initializer()
training_epochs = 100
learning_rate = 0.001
display_step = 10
with tf.Session() as sess:
sess.run(init)
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(train_x.shape[0]/batch_size)
for i in range(total_batch):
start = i * batch_size
end = start + batch_size
batch_x = train_x[start:end]
batch_y = train_y[start:end]
_, c = sess.run([optimizer, perceptron_loss], feed_dict={x: batch_x, y: batch_y})
avg_cost += c / total_batch
if epoch % display_step == 0:
print("Epoch:", "%04d" % (epoch+1), "cost=", "{:.9f}".format(avg_cost))
print("Finished")
correct_prediction = tf.equal(tf.argmax(perceptron, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print("Accuracy:", accuracy.eval({x: test_x, y: test_y}))
margin = 0.1
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])
w = tf.Variable(tf.random_normal([n_input, n_classes]))
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(train_x.shape[0]/batch_size)
for i in range(total_batch):
start = i * batch_size
end = start + batch_size
batch_x = train_x[start:end]
batch_y = train_y[start:end]
            # The stray `b` argument in the original tf.maximum call is dropped
            loss = tf.reduce_mean(tf.maximum(0., margin - y * tf.matmul(x, w)))
            # boolean_mask replaces the unsupported y[is_mistake] tensor indexing
            is_mistake = tf.less_equal(y * tf.matmul(x, w), margin)[:, 0]
            x_m = tf.boolean_mask(x, is_mistake)
            y_m = tf.boolean_mask(y, is_mistake)
            eta = (margin - y_m * tf.matmul(x_m, w)) / (tf.reduce_sum(x_m * x_m, axis=1, keepdims=True) + 1)
            update = tf.assign_add(w, tf.matmul(x_m, eta * y_m, transpose_a=True))
            _, c = sess.run([update, loss], feed_dict={x: batch_x, y: batch_y})
avg_cost += c / total_batch
if epoch % display_step == 0:
print("Epoch:", "%04d" % (epoch+1), "cost=", "{:.9f}".format(avg_cost))
print("Finished")
correct_prediction = tf.equal(tf.argmax(perceptron, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print("Accuracy:", accuracy.eval({x: test_x, y: test_y}))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Perceptron (No cheating, mostly)
Step2: Perceptron (No cheating, for real)
|
12,324
|
<ASSISTANT_TASK:>
Python Code:
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data. The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
return x / 255
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_normalize(normalize)
import numpy as np
from sklearn import preprocessing
lb = preprocessing.LabelBinarizer()
labels = np.array(range(10))
lb.fit(labels)
def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
return lb.transform(x)
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_one_hot_encode(one_hot_encode)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
# TODO: Implement Function
shape = list(image_shape)
shape.insert (0, None)
return tf.placeholder (tf.float32, shape=shape, name='x')
def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
# TODO: Implement Function
return tf.placeholder (tf.float32, shape=[None, n_classes], name='y')
def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
# TODO: Implement Function
return tf.placeholder (tf.float32, name='keep_prob')
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
# TODO: Implement Function
input_shape = [conv_ksize[0], conv_ksize[1], x_tensor.get_shape().as_list()[3], conv_num_outputs]
weights = tf.Variable (tf.truncated_normal (input_shape, mean=0.0, stddev=0.10, dtype=tf.float32))
biases = tf.Variable (tf.zeros (conv_num_outputs, dtype=tf.float32))
strides = [1, conv_strides[0], conv_strides[1], 1]
conv = tf.nn.conv2d (x_tensor, weights, strides=strides, padding='SAME')
conv = tf.nn.bias_add (conv, biases)
conv = tf.nn.relu (conv)
ksize = [1, pool_ksize[0], pool_ksize[1], 1]
strides = [1, pool_strides[0], pool_strides[1], 1]
maxpool = tf.nn.max_pool (conv, ksize=ksize, strides=strides, padding='SAME')
return maxpool
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_con_pool(conv2d_maxpool)
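The spatial size coming out of `conv2d_maxpool` can be predicted without running the graph: with 'SAME' padding, each strided op produces ceil(input / stride) outputs. A small helper to check the arithmetic (illustrative only; the (2,2)/(2,2) strides match the `conv_net` defined later in this notebook):

```python
import math

def same_padding_out(size, stride):
    # 'SAME' padding: output size = ceil(input size / stride)
    return math.ceil(size / stride)

def conv_maxpool_out(size, conv_stride, pool_stride):
    # one conv2d_maxpool block = strided conv followed by strided max pool
    return same_padding_out(same_padding_out(size, conv_stride), pool_stride)

# 32x32 CIFAR-10 input through two blocks with conv stride 2 and pool stride 2:
s = 32
for conv_s, pool_s in [(2, 2), (2, 2)]:
    s = conv_maxpool_out(s, conv_s, pool_s)
print(s)  # 32 -> 8 -> 2
```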
import operator
import functools
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
# TODO: Implement Function
image_size = functools.reduce(operator.mul, x_tensor.get_shape().as_list()[1:], 1)
return tf.reshape (x_tensor, [-1, image_size])
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_flatten(flatten)
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
# TODO: Implement Function
weights = tf.Variable (tf.truncated_normal ([x_tensor.get_shape().as_list()[1], num_outputs], mean=0.0, stddev=0.10, dtype=tf.float32))
biases = tf.Variable (tf.zeros (num_outputs, dtype=tf.float32))
activations = tf.add (tf.matmul (x_tensor, weights), biases)
return tf.nn.relu (activations)
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
# TODO: Implement Function
weights = tf.Variable (tf.truncated_normal ([x_tensor.get_shape().as_list()[1], num_outputs], mean=0.0, stddev=0.10, dtype=tf.float32))
biases = tf.Variable (tf.zeros (num_outputs, dtype=tf.float32))
return tf.add (tf.matmul (x_tensor, weights), biases)
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_output(output)
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that holds dropout keep probability.
    : return: Tensor that represents logits
    """
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
conv1 = conv2d_maxpool (x, conv_num_outputs=64, conv_ksize=(8,8), conv_strides=(2,2), pool_ksize=(2,2), pool_strides=(2,2))
conv2 = conv2d_maxpool (conv1, conv_num_outputs=128, conv_ksize=(6,6), conv_strides=(2,2), pool_ksize=(2,2), pool_strides=(2,2))
flat = flatten (conv2)
conn1 = fully_conn (flat, 2048)
conn1 = tf.nn.dropout (conn1, keep_prob)
conn2 = fully_conn (conn1, 1024)
conn2 = tf.nn.dropout (conn2, keep_prob)
out = output (conn2, 10)
return out
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
# TODO: Implement Function
session.run (optimizer, feed_dict={x:feature_batch, y:label_batch, keep_prob:keep_probability})
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
# TODO: Implement Function
loss = session.run (cost, feed_dict={x:feature_batch, y:label_batch, keep_prob:1.0})
valid_acc = session.run (accuracy, feed_dict={x:valid_features, y:valid_labels, keep_prob:1.0})
print ('Loss = {:8.6f}, Validation Accuracy = {:.4f}'.format (loss, valid_acc))
# TODO: Tune Parameters
epochs = 20
batch_size = 256
keep_probability = 0.50
"""DON'T MODIFY ANYTHING IN THIS CELL"""
save_model_path = './image_classification'
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
print ("Initializing the variables")
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
print ("Initializing the variables")
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
if epoch % 10 == 0:
# Save Model
print ("Save model...")
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
    """Test the saved model against the test dataset"""
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step32: Create Convolutional Model
Step35: Train the Neural Network
Step37: Show Stats
Step38: Hyperparameters
Step40: Train on a Single CIFAR-10 Batch
Step42: Fully Train the Model
Step45: Checkpoint
|
12,325
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
def bsm(S0,r,sigma,T,K,R = 100000 , seed=500):
np.random.seed(seed)
z = np.random.standard_normal(R)
ST = S0 * np.exp(( r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)
hT = np.maximum(ST - K, 0)
C0 = np.exp(-r * T) * np.sum(hT) / R
return C0
import time
tm = time.time()
C0 = bsm(S0=105,r=0.06,sigma=0.22,T=1.0,K=109,R = 100000 , seed=500)
pm = time.time() - tm
print("Value of European Call Option: {0:.4g}".format(C0)+" - time[{0:.4g} secs]".format(pm))
tm = time.time()
C0 = bsm(S0=105,r=0.06,sigma=0.22,T=1.0,K=109,R = 70000000 , seed=500)
pm = time.time() - tm
print("Value of European Call Option: {0:.4g}".format(C0)+" - time[{0:.4g} secs]".format(pm))
import numexpr as ne
def bsm_ne(S0,r,sigma,T,K,R = 70000000 , seed=500):
np.random.seed(seed)
z = np.random.standard_normal(R)
ST = ne.evaluate('S0 * exp(( r - 0.5 * sigma ** 2) * T + sigma * sqrt(T) * z)')
hT = np.maximum(ST - K, 0)
C0 = np.exp(-r * T) * np.sum(hT) / R
return C0
tm = time.time()
C0 = bsm_ne(S0=105,r=0.06,sigma=0.22,T=1.0,K=109,R = 70000000 , seed=500)
pm = time.time() - tm
print("Value of the European Call Option: {0:.4g}".format(C0)+" - time[{0:.4g} secs]".format(pm))
import numpy as np
import pandas as pd
import pandas.io.data as web
df_final = web.DataReader(['GOOG','SPY'], data_source='yahoo',
start='1/21/2010', end='4/15/2016')
print(df_final)
print(df_final.shape)
df_final.ix[:,:,'SPY'].head()
print(type(df_final.ix[:,:,'SPY']))
print("\n>>> null values:"+str(pd.isnull(df_final.ix[:,:,'GOOG']).sum().sum()))
df_final = web.DataReader(['GOOG','SPY'], data_source='yahoo',
start='1/21/1999', end='4/15/2016')
df_final.ix[:,:,'GOOG'].head()
print(type(df_final.ix[:,:,'GOOG']))
print("\n>>> null values:"+str(pd.isnull(df_final.ix[:,:,'GOOG']).sum().sum()))
import matplotlib.pyplot as plt
def get_data(symbols,
add_ref=True,
data_source='yahoo',
price='Adj Close',
start='1/21/2010',
end='4/15/2016'):
    """Read stock data (adjusted close) for given symbols from Yahoo Finance."""
if add_ref and 'SPY' not in symbols: # add SPY for reference, if absent
symbols.insert(0, 'SPY')
df = web.DataReader(symbols,
data_source=data_source,
start=start,
end=end)
return df[price,:,:]
get_data(symbols=['GOOG','SPY']).tail()
df_stock = get_data(symbols=['GOOG','SPY'],start='1/21/1999',end='4/15/2016')
print(">> Trading days from pandas:"+str(df_stock.shape[0]))
dates = pd.date_range('1/21/1999', '4/15/2016')
df = pd.DataFrame(index=dates)
print(">> Calendar days:"+str(df.shape[0]))
df = df.join(df_stock)
print(">> After join:"+str(df.shape[0]))
df = df.dropna(subset=["SPY"])
print(">> After removing non trading days:"+str(df.shape[0]))
ax = get_data(symbols=['GOOG','SPY','IBM','GLD'],start='1/21/1999', end='4/15/2016').plot(title="Stock Data", fontsize=9)
ax.set_xlabel("Date")
ax.set_ylabel("Price")
plt.show()
def fill_missing_values(df_data):
    """Fill missing values in data frame, in place."""
df_data.fillna(method='ffill',inplace=True)
df_data.fillna(method='backfill',inplace=True)
return df_data
ax = fill_missing_values(get_data(symbols=['GOOG','SPY','IBM','GLD'],
start='1/21/1999',
end='4/15/2016')).plot(title="Stock Data", fontsize=9)
ax.set_xlabel("Date")
ax.set_ylabel("Price")
plt.show()
def normalize_data(df):
return df/df.ix[0,:]
ax = normalize_data(
fill_missing_values(
get_data(
symbols=['GOOG','SPY','IBM','GLD'],
start='1/21/1999',
end='4/15/2016'))).plot(title="Stock Data", fontsize=9)
ax.set_xlabel("Date")
ax.set_ylabel("Normalized price")
plt.show()
df = fill_missing_values(
get_data(
symbols=['GOOG','SPY','IBM','GLD'],
start='4/21/2015',
end='7/15/2016'))
# 1. Computing rolling mean using a 20-day window
rm_df = df.rolling(window=20).mean()
ax = rm_df.plot(title="Rolling Mean")
ax.set_xlabel("Date")
ax.set_ylabel("Price")
plt.show()
# 2. Computing rolling standard deviation using a 20-day window
rstd_df = df.rolling(window=20).std()
ax = rstd_df.plot(title="Rolling Standard Deviation")
ax.set_xlabel("Date")
ax.set_ylabel("Price")
plt.show()
# 3. Compute upper and lower bands
def get_bollinger_bands(rm, rstd):
    """Return upper and lower Bollinger Bands."""
upper_band, lower_band = rm + 2 * rstd, rm - 2 * rstd
return upper_band, lower_band
upper_band, lower_band = get_bollinger_bands(rm_df, rstd_df)
# Plot raw SPY values, rolling mean and Bollinger Bands
ax = df['SPY'].plot(title="Bollinger Bands",label='SPY')
rm_df['SPY'].plot(label='Rolling mean', ax=ax)
upper_band['SPY'].plot(label='upper band', ax=ax)
lower_band['SPY'].plot(label='lower band', ax=ax)
# Add axis labels and legend
ax.set_xlabel("Date")
ax.set_ylabel("Price")
ax.legend(loc='lower left')
plt.show()
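A quick property check on the bands (a standalone sketch with a made-up price series): since the bands are the rolling mean plus/minus 2 rolling standard deviations, their width is always exactly 4 rolling standard deviations.

```python
import numpy as np
import pandas as pd

prices = pd.Series(np.linspace(100, 110, 40))  # hypothetical price series
rm = prices.rolling(window=20).mean()
rstd = prices.rolling(window=20).std()
upper, lower = rm + 2 * rstd, rm - 2 * rstd
width = (upper - lower).dropna()
print(width.iloc[0], (4 * rstd.dropna()).iloc[0])  # the two columns agree
```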
def compute_daily_returns_2(df):
    """Compute and return the daily return values."""
# Note: Returned DataFrame must have the same number of rows
daily_returns = df.copy()
daily_returns[1:] = (df[1:]/df[:-1].values) - 1
daily_returns.ix[0,:] = 0
return daily_returns
def compute_daily_returns(df):
    """Compute and return the daily return values."""
# Note: Returned DataFrame must have the same number of rows
daily_returns = (df / df.shift(1)) - 1
daily_returns.ix[0,:] = 0
return daily_returns
pd.util.testing.assert_frame_equal(
compute_daily_returns(
fill_missing_values(
get_data(
symbols=['GOOG','SPY','IBM','GLD'],
start='4/21/2016',
end='7/15/2016'))),
compute_daily_returns_2(
fill_missing_values(
get_data(
symbols=['GOOG','SPY','IBM','GLD'],
start='4/21/2016',
end='7/15/2016'))))
ax = compute_daily_returns(fill_missing_values(
get_data(
symbols=['GOOG','SPY','IBM','GLD'],
start='4/21/2016',
end='7/15/2016'))).plot(title="Daily returns")
ax.set_xlabel("Date")
ax.set_ylabel("Daily return")
plt.show()
df = compute_daily_returns(fill_missing_values(get_data(
symbols=['SPY'],
start='4/21/2000',
end='7/15/2016')))
plt.hist(df['SPY'],bins=30,color='c',label=['Daily return'])
plt.axvline(df['SPY'].mean(), color='b', linestyle='dashed', linewidth=2 , label='Mean')
plt.axvline(-df['SPY'].std(), color='r', linestyle='dashed', linewidth=2 , label='Std')
plt.axvline(df['SPY'].std(), color='r', linestyle='dashed', linewidth=2 )
plt.title('SPY daily return distribution')
plt.xlabel('Daily return')
plt.grid(True)
plt.legend()
plt.show()
def cumulative_returns(df):
return df/df.ix[0,:] - 1
ax = cumulative_returns(fill_missing_values(
get_data(
symbols=['GOOG','SPY','IBM','GLD'],
start='4/21/2016',
end='7/15/2016'))).plot(title="Cumulative returns")
ax.set_xlabel("Date")
ax.set_ylabel("Cumulative return")
plt.show()
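The two return definitions are consistent with each other: compounding the daily returns reproduces the cumulative return (a quick standalone check with made-up prices):

```python
import numpy as np
import pandas as pd

prices = pd.Series([100., 102., 99., 105.])
daily = prices / prices.shift(1) - 1               # daily returns (NaN on day 0)
cumulative = prices / prices.iloc[0] - 1           # cumulative returns
recompounded = (1 + daily.fillna(0)).cumprod() - 1  # compound the daily returns
print(cumulative.values, recompounded.values)
```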
def sharpe_ratio(df,sample_freq='d',risk_free_rate=0.0):
sr = (df - risk_free_rate).mean() / df.std()
if sample_freq == 'd':
sr = sr * np.sqrt(252)
elif sample_freq == 'w':
sr = sr * np.sqrt(52)
elif sample_freq == 'm':
sr = sr * np.sqrt(12)
else:
raise Exception('unkown sample frequency :'+str(sample_freq))
return sr
# Sharpe ratio
sharpe_ratio(
compute_daily_returns(
fill_missing_values(
get_data(
symbols=['GOOG','SPY','IBM','GLD'],
start='4/21/2015',
end='7/15/2016'))))
df = fill_missing_values(get_data(symbols=['GOOG','SPY','IBM','GLD'],
start='4/21/2015',
end='7/15/2016'))
# 1. Cumulative return
cumulative_returns(df).ix[-1,:]
# 2. Average daily return
compute_daily_returns(df).mean()
# 3. Risk (Standard deviation of daily return)
compute_daily_returns(df).std()
# 4. Sharpe ratio
sharpe_ratio(compute_daily_returns(df))
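As a numeric sanity check on the annualization (a standalone sketch with made-up daily returns): for daily sampling, the Sharpe ratio is just mean over standard deviation scaled by sqrt(252), so it can be reproduced directly with numpy.

```python
import numpy as np
import pandas as pd

r = pd.Series([0.01, -0.02, 0.015, 0.0, 0.005])  # hypothetical daily returns

sr = r.mean() / r.std() * np.sqrt(252)           # what sharpe_ratio computes for freq 'd'
expected = r.values.mean() / r.values.std(ddof=1) * np.sqrt(252)  # ddof=1 matches pandas' std
print(sr, expected)
```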
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's see how much time is necessary for 70,000,000 iterations instead of 100,000 iterations.
Step2: Let's see how we can speed up the computation with the numexpr package.
Step3: Key Factors for Evaluating the Performance of a Portfolio
Step5: There are a couple of observations to be made
Step6: Also, notice that it is not necessary to perform an initial join with the data range of interest filtering out
Step7: Plotting stock prices
Step9: Imputing missing values
Step10: Normalizing prices
Step12: Rolling statistics
Step15: Daily returns
Step16: Cumulative returns
Step17: Sharpe Ratio
Step18: Summary
|
12,326
|
<ASSISTANT_TASK:>
Python Code:
# Import necessary modules
import geopandas as gpd
# Set filepath
fp = "data/limitebairro.json"
# Read file using gpd.read_file()
data = gpd.read_file(fp, driver='GeoJSON')
type(data)
data.head(5)
data.plot()
# Create a output path for the data
outfp = "data/limitebairro.shp"
# Select first 50 rows
selection = data[0:50]
# Write those rows into a new Shapefile (the default output file format is Shapefile)
selection.to_file(outfp)
## CODE HERE ##
# It is possible to get a specific column by specifying the column name within square brackets []
print(data['geometry'].head(5))
# Make a selection that contains only the first five rows
selection = data[0:5]
# Iterate over rows and print the area of a Polygon
for index, row in selection.iterrows():
# Get the area of the polygon
poly_area = row['geometry'].area
# Print information for the user
print("Polygon area at index {index} is: {area:.5f}".format(index=index, area=poly_area))
# Create a new column called 'area' and assign the area of the Polygons into it
data['area'] = data.area
# Print first 5 rows of the data
data.head(5)
# Maximum area
max_area = data['area'].max()
# Minimum area
min_area = data['area'].min()
# Mean area
mean_area = data['area'].mean()
print("Max area: {max}\nMin area: {min}\nMean area: {mean}".format(max=round(max_area, 5), min=round(min_area, 5), mean=round(mean_area, 5)))
# Import necessary modules first
import geopandas as gpd
from shapely.geometry import Point, Polygon
# Create an empty geopandas GeoDataFrame
newdata = gpd.GeoDataFrame()
# Let's see what we have at the moment
print(newdata)
# Create a new column called 'geometry' to the GeoDataFrame
newdata['geometry'] = None
# Let's again see what's inside
print(newdata)
# Approximate coordinates of Campus do Pici, UFC (Fortaleza/CE)
# Note: these pairs are in (lat, lon) order; Shapely's native convention is (x, y) = (lon, lat)
coordinates = [(-3.73745, -38.57351), (-3.74404, -38.58482), (-3.74632, -38.58471), (-3.74620, -38.58146),
(-3.74711, -38.58200), (-3.75738, -38.57242), (-3.75680, -38.57186), (-3.75608, -38.57256),
(-3.75490, -38.57090), (-3.75400, -38.57079), (-3.74978, -38.57464), (-3.74648, -38.57122),
(-3.74857, -38.56920), (-3.74600, -38.56670), (-3.74132, -38.56938), (-3.73688, -38.56933)]
# Create a Shapely polygon from the coordinate-tuple list
poly = Polygon(coordinates)
# Let's see what we have
print(poly)
poly
# Insert the polygon into 'geometry' -column at index 0
newdata.loc[0, 'geometry'] = poly
# Let's see what we have now
print(newdata)
# Add a new column and insert data
newdata.loc[0, 'location'] = 'Campus do Pici'
# Let's check the data
print(newdata)
print(newdata.crs)
# Set the GeoDataFrame's coordinate system to WGS84 (i.e. epsg code 4326)
newdata.crs = 'EPSG:4326'
# Let's see how the crs definition looks like
print(newdata.crs)
# Determine the output path for the Shapefile
outfp = "data/pici.shp"
# Write the data into that Shapefile
newdata.to_file(outfp)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now that the JSON file has been read into the variable data, let's check its format
Step2: We can see that the variable type is a GeoDataFrame. The GeoDataFrame object inherits the functionality of pandas.DataFrame, adding operations on spatial data and using approaches similar to Pandas data structures (hence the name Geopandas). The GeoDataFrame has some special features and built-in functions that make analyses of geometric structures easier.
Step3: When working with spatial data, it is a good idea to explore the data using a map. Creating a simple map from a GeoDataFrame is very easy
Step4: As we can see, it is really easy to produce a map from a JSON file using geopandas. The module automatically positions your map so that all the data fits in.
Step5: Note that Shapefiles consist of several separate files, such as '.dbf', which contains attribute information, and '.prj', which contains information about the coordinate reference system.
Step6: 1.3 Working with geometries in Geopandas
Step7: As we can see, the geometry column contains familiar values: they are basically the Polygon objects we used earlier from the Shapely module. Since the spatial data is stored as Shapely objects, all of the Shapely module's functionality can be used.
Step8: As you can see, all Pandas functionality, such as the iterrows() function, is directly available in Geopandas without calling pandas directly, because Geopandas is an extension of Pandas.
Step9: As we can see, our dataset gained a new column named area, which stores the area of each polygon.
Step10: The polygon with the largest area in our dataset measures 0.00112 square decimal degrees. To perform operations on these data, it would be more useful to have this information in km²; in the next lesson we will see how to transform the data to obtain results in meters and kilometers.
Step11: As can be seen, the GeoDataFrame is empty, since we have not stored any data in it yet.
Step12: Now we have the geometry column in our GeoDataFrame, but no data stored in it yet.
Step13: Now that we have the Polygon object, let's insert it into the 'geometry' column of our GeoDataFrame, at index 0
Step14: Great, now we have a GeoDataFrame with one polygon and can already export it to a Shapefile. We may also want to add more information to our geometries.
Step15: Right, now we have additional information to identify what our geometry represents.
Step16: Let's add a CRS to our GeoDataFrame. The standard used in our example is the WGS84 coordinate system (epsg code
Step17: As we can see, we now have a coordinate reference system (CRS) in our
|
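The polygon-area discussion in Steps 8–10 above relies on Shapely's `.area` property. As a rough sketch of what that property computes for a simple (non-self-intersecting) polygon, here is the shoelace formula in plain Python — a minimal illustration, not part of the original notebook:

```python
def shoelace_area(coords):
    """Area of a simple polygon given as a list of (x, y) vertices."""
    n = len(coords)
    total = 0.0
    for i in range(n):
        x1, y1 = coords[i]
        x2, y2 = coords[(i + 1) % n]  # wrap around to the first vertex
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

# A unit square has area 1
print(shoelace_area([(0, 0), (1, 0), (1, 1), (0, 1)]))  # -> 1.0
```

Like the notebook's `data.area`, this returns area in the units of the coordinates, which is why geographic coordinates give "square decimal degrees" until the data is reprojected.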
12,327
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import print_function
import datetime
from itertools import (islice, chain)
import re
import time
from collections import (Counter, OrderedDict)
# writing for eventual Python 2/3 compatability
try:
from urllib.parse import urlencode
except ImportError:
from urllib import urlencode
import requests
from lxml.html import fromstring, parse
import matplotlib.pyplot as plt
from pandas import (DataFrame, Series)
from IPython.display import (display, HTML, Markdown)
def quarry_result(query_id, rev_id=None, result_format='json'):
"""returns a dict {'status', 'output', 'query_meta', 'latest_run', 'latest_rev'} for a query_id on quarry.wmflabs.org"""
BASE_URL = "http://quarry.wmflabs.org"
# get the metadata for the query
# https://github.com/wikimedia/analytics-quarry-web/blob/e2346c5ec47d63f9514b8aef9190211fa5ff0e45/quarry/web/app.py#L402
url = BASE_URL + "/query/{0}/meta".format(query_id)
try:
query_meta = requests.get(url).json()
status = query_meta['latest_run']['status']
latest_run_id = query_meta['latest_run']['id']
latest_rev_id = query_meta['latest_rev']['id']
# if status is 'complete' and return_output is True,
# grab the results -- unless result_format is None
# https://github.com/wikimedia/analytics-quarry-web/blob/e2346c5ec47d63f9514b8aef9190211fa5ff0e45/quarry/web/app.py#L350
if (status == 'complete' and result_format is not None):
# TO DO: figure out whether 0 should be a variable
# re: https://github.com/wikimedia/analytics-quarry-web/blob/e2346c5ec47d63f9514b8aef9190211fa5ff0e45/quarry/web/app.py#L351
# 0 is default value for resultset_id
url = BASE_URL + "/run/{0}/output/0/{1}".format(latest_run_id, result_format)
if result_format == 'json':
output = requests.get(url).json()
else:
output = requests.get(url).text
else:
output = None
return {'query_meta':query_meta,
'status': status,
'latest_run': latest_run_id,
'latest_rev': latest_rev_id,
'output': output
}
except Exception as e:
return e
# assume for now latest rev id same as latest run id.
# 'status':query_meta["status"]
def display_objs_for_q_result(q):
"""returns IPython/Jupyter display object to describe query metadata and SQL content
(first pass)"""
description = q['query_meta']['query']['description']
if description is None:
description = ""
return ( HTML("<b>{0}</b>".format(q['query_meta']['query']['title'])),
HTML("<p>id: {0} ({1})</p>".format(q['query_meta']['query']['id'],
q['query_meta']['query']['timestamp'])),
HTML("<p>{0}</p>".format(description)),
Markdown("```sql\n{0}\n```".format(q['query_meta']['latest_rev']['sql']))
)
q = quarry_result(3659, result_format='json')
display(*display_objs_for_q_result(q))
q['output']
df = DataFrame(q['output']['rows'], columns=['year', 'count'])
df.set_index(keys='year', inplace=True, drop=True)
df
df.plot(kind='bar', title="revision counts for JSB article vs year")
# types of queries
from collections import OrderedDict
query_type_headers = OrderedDict([
('published', 'Published queries'),
('draft', 'Draft Queries'),
('starred', 'Starred Queries')
])
def user_queries(username):
"""get all queries for a user
e.g., parse http://quarry.wmflabs.org/RaymondYee"""
url = u"http://quarry.wmflabs.org/{0}".format(username)
r = requests.get(url)
page = requests.get(url).content.decode("UTF-8")
doc = fromstring(page)
# xpath expressions correlate with template
# https://github.com/wikimedia/analytics-quarry-web/blob/e2346c5ec47d63f9514b8aef9190211fa5ff0e45/quarry/web/templates/user.html
# number of queries
queries = dict()
queries['num_queries'] =int(doc.xpath('//*[@class="user-stat"]/h2/text()')[0])
# loop through all the query types
for (qtype, qheader) in query_type_headers.items():
q_elements = doc.xpath('//h3[contains(text(),"{0}")][1]/following-sibling::ul[1]/li/a'.format(qheader))
q_results = []
for q in q_elements:
q_id = int(q.attrib['href'].split('/')[-1])
#result = quarry_result(q_id, result_format=None)
q_results.append( (q_id, q.text))
queries[qtype] = q_results
return queries
uq = user_queries('RaymondYee')
uq
queries_display = ""
queries_display += "<p>number of queries: {0}</p>".format(uq['num_queries'] )
for (qtype, qheader) in query_type_headers.items():
queries_display += "<b>{0}</b><br/>".format(qheader)
for (q_id, q_title) in uq[qtype]:
queries_display += "<div>{0} <a href='http://quarry.wmflabs.org/query/{0}'>{1}</a></div>".format(q_id, q_title)
queries_display += "<br/>"
HTML(queries_display)
uq
ry_results = [quarry_result(q_id, result_format=None)
for q_id in chain([q_id for (q_id, title) in uq['published']],
[q_id for (q_id, title) in uq['draft']])
]
# the following way to display the results is a bit opaque and should be rewritten
display(*list(chain(*[display_objs_for_q_result(r) for r in ry_results if not isinstance(r, Exception)])))
# problematic queries
# https://quarry.wmflabs.org/query/{q_id/}meta
try:
from itertools import izip as zip
except:
pass
q_ids = list(chain([q_id for (q_id, title) in uq['published']],
[q_id for (q_id, title) in uq['draft']]))
[(id_) for (id_,r) in zip(q_ids, ry_results) if isinstance(r, Exception)]
def anchor_info_or_text(td):
anchors = td.xpath('a')
if anchors:
#if anchor text in form of "/query/query_id", return just query_id
href = anchors[0].attrib['href']
query_search = re.search("/query/(\d+)", href)
if query_search:
return (int(query_search.group(1)), anchors[0].text )
else:
return (href, anchors[0].text )
else:
return td.text
def filter_none(d):
"""return dict d without any items with None for value"""
return dict([(k,v) for (k,v) in d.items() if v is not None])
def runs_list(limit=None, from_=None, _sleep=0):
"""Generator for all the queries on http://quarry.wmflabs.org/query/runs/all"""
url = ("http://quarry.wmflabs.org/query/runs/all?" +
urlencode(filter_none({'from':from_, 'limit':limit})))
more_pages = True
while more_pages:
r = requests.get(url)
page = requests.get(url).content.decode("UTF-8")
doc = fromstring(page)
# grab headers
headers = [th.text for th in doc.xpath("//th")]
# yield rows
for tr in doc.xpath("//tr[td]"):
yield [anchor_info_or_text(td) for td in tr]
# next link
next_links = doc.xpath('//li[@class="next"]/a')
if next_links:
url = ("http://quarry.wmflabs.org/query/runs/all?" +
next_links[0].attrib['href'])
else:
more_pages = False
time.sleep(_sleep)
print ("time of analysis:", datetime.datetime.utcnow().isoformat(), "\n")
queries = []
# loop and print out which row we're on
for (i, item) in enumerate(islice(runs_list(_sleep=0.5), None)):
print("\r {0}".format(i), end="")
queries.append( (item[0][0], item[0][1], item[1][1], item[2], item[3]))
df = DataFrame(queries, columns=['id', 'title', 'creator','status', 'time'])
df.tail()
len(df[df.creator=='RaymondYee'])
# how many unique ids
len(df.id.unique()), len(df.creator.unique())
# loop through users to accumlate data on all user queries
# on the way to computing most starred query
queries_by_user = dict()
for (i, username) in enumerate(islice(df.creator.unique(),None)):
print ("\r {0}".format(i), end="")
queries_by_user[username] = user_queries(username)
# let's look at what starred
starred_q = Counter()
for (u, d) in queries_by_user.items():
for q in d['starred']:
starred_q.update([q])
starred_q_display = ""
for (q, count) in starred_q.most_common():
starred_q_display += u"<div><a href='http://quarry.wmflabs.org/query/{0}'>{1}</a>: {2}</div>".format(q[0],q[1],count)
HTML(starred_q_display)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Working with results from a specific query
Step5: Now to retrieve and display the metadata for the query 3659 http
Step6: Show the output from the query
Step7: Put the output into a pandas DataFrame
Step8: Make a bar chart of the revision counts for the Bach article by year
Step10: This chart alone brings up many questions for future exploration. Why is there a large amount of activity between 2005 to 2007, followed by a dramatic decrease in 2008? Did activity in authorship about Bach-related articles decline only for Johann Sebastian Bach - Wikipedia, the free encyclopedia alone or was there movement in authorship to related articles?
Step11: Let's now retrieve my queries
Step12: Putting a bit more work, we can make it easier to jump back to the original queries on Quarry.
Step13: We can print out all the SQL for the queries too. Here's the description and SQL for my published and draft queries.
Step16: One of the next steps I'd want to pursue is to figure out how to programmatically write to Quarry, for example, formulate queries in Python and then send them to Qurray. As a Wikimedia Lab user, I'd want to be able to move queries between Quarry and the job submission facility on the Labs.
Step17: Let's scrape the pages. As of 2015-05-30, there are few enough queries for me to practically get all the pages returned by Quarry.
Step18: Display the last five results
Step19: How many queries records belong to me?
Step20: How many unique query identifiers and creators are there in the data set?
Step21: I am surprised such a small number of unique queries. It's possible my code has a bug that causes me to miss many queries. Alternative, Quarry might only return a limited selection. (To figure out.)
Step22: Now let's compute the queries that have been starred by others and the number of stars they have received.
|
12,328
|
<ASSISTANT_TASK:>
Python Code:
def countdown(n):
print '> counting down from {}'.format(n)
while n > 0:
yield n
n -= 1
print ''
print '< countdown'
for n in countdown(10):
print n,
# calling a generator function creates the generator object; it does not start the function
x = countdown(3)
print x
# calling `.next()` starts the generator object.
print x.next()
print x.next()
print x.next()
print x.next()
import os
print os.getcwd()
# The generator lecture materials include run/foo/xxx etc.; copy those files to set up the runtime environment
# follow.py
#
# Follow a file like tail -f.
import time
def follow(thefile):
thefile.seek(0,2)
while True:
line = thefile.readline()
if not line:
time.sleep(0.1)
continue
yield line
# Example use
# Note : This example requires the use of an apache log simulator.
#
# Go to the directory run/foo and run the program 'logsim.py' from
# that directory. Run this program as a background process and
# leave it running in a separate window. We'll write program
# that read the output file being generated
#
#logfile = open("run/foo/access-log","r")
logfile = open(r'd:\work.python\python_async_stuffs\coroutine_www.dabeaz.com\run\foo\access-log')
for line in follow(logfile):
print line,
def grep(pattern):
print 'looking for %s' % pattern
while True:
line = (yield)
if pattern in line:
print line,
g = grep("python")
g.next() # prime it!
g.send("hey!!")
g.send("welcome to koread")
g.send("wow python rocks!")
g.send("really?")
def coroutine(func):
def start_coroutine(*args, **kwargs):
crtn = func(*args, **kwargs);
crtn.next()
return crtn
return start_coroutine
@coroutine
def grep(pattern):
print '[*] looking for %s' % pattern
try:
while True:
line = (yield)
if pattern in line:
print line,
except GeneratorExit as e:
print '\n[*] Going away, bye (gc or U called close())'
g = grep('python')
#g.next() # no need to call next()
g.send("hey!!")
g.send("welcome to koread")
g.send("wow python rocks!")
g.close()
g = grep('python')
g.send('wow! python is rock!')
g.send('wow! python is rock!')
g.send('wow! python is rock!')
g.throw(RuntimeError, "exception thrown")
# bogus.py
#
# Bogus example of a generator that produces and receives values
def countdown(n):
print "Counting down from", n
while n >= 0:
newvalue = (yield n)
# If a new value got sent in, reset n with it
if newvalue is not None:
n = newvalue
else:
n -= 1
# The holy grail countdown
c = countdown(5)
for x in c:
print x
if x == 5:
c.send(3)
import time
def coroutine(func):
"""A decorator function that takes care of starting a coroutine
automatically on call."""
def start(*args,**kwargs):
cr = func(*args,**kwargs)
cr.next()
#print 'coroutine started...'
return cr
return start
# data source
def follow(thefile, target):
thefile.seek(0, 2) # goto end of the file
while True:
line = thefile.readline()
if not line:
time.sleep(0.1)
continue
target.send(line) # the (target) object is created at the point of the first call
# sink - a coroutine that receives data
@coroutine
def printer():
while True:
line = (yield)
print line
# useage
f = open(r'd:\work.python\python_async_stuffs\coroutine_www.dabeaz.com\run\foo\access-log')
#follow(f, printer) # 요거는 오류 남, 'AttributeError: 'function' object has no attribute 'send'
follow( f, printer() )
# data source
def follow(thefile, target):
thefile.seek(0, 2)
while True:
line = thefile.readline()
if not line:
time.sleep(0.1)
continue
target.send(line)
# filter
@coroutine
def grep(pattern, target):
while True:
line = (yield) # receive a line
if pattern in line:
target.send(line) # send to next stage
# sink - a coroutine that receives data
@coroutine
def printer():
while True:
line = (yield)
print line,
# useage
f = open(r'd:\work.python\python_async_stuffs\coroutine_www.dabeaz.com\run\foo\access-log')
follow( f, grep('python', printer()) )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Calling a generator function creates a generator object; it does not execute the function
Step3: tail -f (python version)
Step4: Coroutine
Step6: next() is easy to forget, so let's use a decorator.
Step7: Throwing an exception
Step8: bogus sample
Step10: Summary
Step11: Filter example
|
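The source → filter → sink pipeline summarized above (follow → grep → printer) can be sketched in Python 3 — the notebook itself uses Python 2 syntax — by feeding a list instead of a log file. This is a minimal illustration, not the notebook's code:

```python
def coroutine(func):
    # decorator that advances the coroutine to its first yield
    def start(*args, **kwargs):
        cr = func(*args, **kwargs)
        next(cr)
        return cr
    return start

@coroutine
def grep(pattern, target):
    while True:
        line = (yield)          # receive a line
        if pattern in line:
            target.send(line)   # forward matches to the next stage

@coroutine
def collect(results):
    # sink: accumulate received lines into a list
    while True:
        results.append((yield))

matches = []
pipeline = grep("python", collect(matches))
for line in ["hello world", "python rocks", "goodbye"]:
    pipeline.send(line)
print(matches)  # -> ['python rocks']
```

The decorator plays the same role as in the notebook: it calls `next()` once so each stage is already paused at its `(yield)` and ready to receive `send()` calls.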
12,329
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import h5py
from sklearn import svm, cross_validation, preprocessing
# First we load the file
file_location = '../results_database/text_wall_street_big.hdf5'
run_name = '/low-resolution'
f = h5py.File(file_location, 'r')
# Now we need to get the letters and align them
text_directory = '../data/wall_street_letters.npy'
letters_sequence = np.load(text_directory)
Nletters = len(letters_sequence)
symbols = set(letters_sequence)
# Nexa parameters
Nspatial_clusters = 5
Ntime_clusters = 15
Nembedding = 3
parameters_string = '/' + str(Nspatial_clusters)
parameters_string += '-' + str(Ntime_clusters)
parameters_string += '-' + str(Nembedding)
nexa = f[run_name + parameters_string]
delay = 4
N = 5000
cache_size = 1000
# Exctrat and normalized SLM
SLM = np.array(f[run_name]['SLM'])
print('Standardized')
X = SLM[:,:(N - delay)].T
y = letters_sequence[delay:N]
# We now scale X
X = preprocessing.scale(X)
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)
clf_linear = svm.SVC(C=1.0, cache_size=cache_size, kernel='linear')
clf_linear.fit(X_train, y_train)
score = clf_linear.score(X_test, y_test) * 100.0
print('Score in linear', score)
clf_rbf = svm.SVC(C=1.0, cache_size=cache_size, kernel='rbf')
clf_rbf.fit(X_train, y_train)
score = clf_rbf.score(X_test, y_test) * 100.0
print('Score in rbf', score)
print('Not standardized')
X = SLM[:,:(N - delay)].T
y = letters_sequence[delay:N]
# We now scale X
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)
clf_linear = svm.SVC(C=1.0, cache_size=cache_size, kernel='linear')
clf_linear.fit(X_train, y_train)
score = clf_linear.score(X_test, y_test) * 100.0
print('Score in linear', score)
clf_rbf = svm.SVC(C=1.0, cache_size=cache_size, kernel='rbf')
clf_rbf.fit(X_train, y_train)
score = clf_rbf.score(X_test, y_test) * 100.0
print('Score in rbf', score)
# Exctrat and normalized SLM
SLM = np.array(f[run_name]['SLM'])
SLM[SLM < 200] = 0
SLM[SLM >= 200] = 1
print('Standardized')
X = SLM[:,:(N - delay)].T
y = letters_sequence[delay:N]
# We now scale X
X = preprocessing.scale(X)
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)
clf_linear = svm.SVC(C=1.0, cache_size=cache_size, kernel='linear')
clf_linear.fit(X_train, y_train)
score = clf_linear.score(X_test, y_test) * 100.0
print('Score in linear', score)
clf_rbf = svm.SVC(C=1.0, cache_size=cache_size, kernel='rbf')
clf_rbf.fit(X_train, y_train)
score = clf_rbf.score(X_test, y_test) * 100.0
print('Score in rbf', score)
print('Not standardized')
X = SLM[:,:(N - delay)].T
y = letters_sequence[delay:N]
# We now scale X
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)
clf_linear = svm.SVC(C=1.0, cache_size=cache_size, kernel='linear')
clf_linear.fit(X_train, y_train)
score = clf_linear.score(X_test, y_test) * 100.0
print('Score in linear', score)
clf_rbf = svm.SVC(C=1.0, cache_size=cache_size, kernel='rbf')
clf_rbf.fit(X_train, y_train)
score = clf_rbf.score(X_test, y_test) * 100.0
print('Score in rbf', score)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We load the file
Step2: Accuracy with non-normalized SLM
Step3: Accuracy with normalized SLM
|
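The comparison above hinges on `preprocessing.scale`, which standardizes each feature to zero mean and unit variance. A minimal pure-Python equivalent for a single column, shown only to make the transformation concrete (sklearn's `scale` uses the population standard deviation, as here):

```python
def standardize(values):
    """Return z-scores: (v - mean) / population std for each value."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)  # population variance
    std = var ** 0.5
    return [(v - mean) / std for v in values]

scaled = standardize([2.0, 4.0, 6.0])
print(scaled)  # -> [-1.2247..., 0.0, 1.2247...]
```

After this transformation every feature contributes on a comparable scale, which is why the standardized runs above can score differently from the raw SLM columns.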
12,330
|
<ASSISTANT_TASK:>
Python Code:
from datetime import datetime
import numpy as np
import pandas as pd
import pymc as pm
import lifelines
import matplotlib.pyplot as plt
from numpy import log
running_id = 0
output = [[0]]
with open("E:/output.txt") as file_open:
for row in file_open.read().split("\n"):
cols = row.split(",")
if cols[0] == output[-1][0]:
output[-1].append(cols[1])
output[-1].append(True)
else:
output.append(cols)
output = output[1:]
for row in output:
if len(row) == 6:
row += [datetime(2016, 5, 3, 20, 36, 8, 92165), False]
output = output[1:-1]
def convert_to_days(dt):
day_diff = dt / np.timedelta64(1, 'D')
if day_diff == 0:
return 23.0
else:
return day_diff
df = pd.DataFrame(output, columns=["id", "advert_time", "male","age","search","brand","conversion_time","event"])
df["lifetime"] = pd.to_datetime(df["conversion_time"]) - pd.to_datetime(df["advert_time"])
df["lifetime"] = df["lifetime"].apply(convert_to_days)
df["male"] = df["male"].astype(int)
df["search"] = df["search"].astype(int)
df["brand"] = df["brand"].astype(int)
df["age"] = df["age"].astype(int)
df["event"] = df["event"].astype(int)
df = df.drop('advert_time', 1)
df = df.drop('conversion_time', 1)
df = df.set_index("id")
df = df.dropna(thresh=2)
df.median()
df
###Parametric Bayes
#Shout out to Cam Davidson-Pilon
## Example fully worked model using toy data
## Adapted from http://blog.yhat.com/posts/estimating-user-lifetimes-with-pymc.html
## Note that we've made some corrections
censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist())
alpha = pm.Uniform("alpha", 0,50)
beta = pm.Uniform("beta", 0,50)
@pm.observed
def survival(value=df["lifetime"], alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000)
pm.Matplot.plot(mcmc)
mcmc.trace("alpha")[:]
#Solution to question 4:
def weibull_median(alpha, beta):
return beta * ((log(2)) ** ( 1 / alpha))
plt.hist([weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))])
#Solution to question 4:
### Increasing the burn parameter allows us to discard results before convergence
### Thinning the results removes autocorrelation
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000, burn = 3000, thin = 20)
pm.Matplot.plot(mcmc)
#Solution to Q5
## Adjusting the priors impacts the overall result
## If we give a looser, less informative prior then we end up with a broader, shorter distribution
## If we give much more informative priors, then we get a tighter, taller distribution
censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist())
## Note the narrowing of the prior
alpha = pm.Normal("alpha", 1.7, 10000)
beta = pm.Normal("beta", 18.5, 10000)
####Uncomment this to see the result of looser priors
## Note this ends up pretty much the same as we're already very loose
#alpha = pm.Uniform("alpha", 0, 30)
#beta = pm.Uniform("beta", 0, 30)
@pm.observed
def survival(value=df["lifetime"], alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000, burn = 5000, thin = 20)
pm.Matplot.plot(mcmc)
#plt.hist([weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))])
## Solution to bonus
## Super easy to do in the Bayesian framework, all we need to do is look at what % of samples
## meet our criteria
medians = [weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))]
testing_value = 15.6
number_of_greater_samples = sum([x >= testing_value for x in medians])
100 * (number_of_greater_samples / len(medians))
#Cox model
#Fitting solution
cf = lifelines.CoxPHFitter()
cf.fit(df, 'lifetime', event_col = 'event')
cf.summary
#Solution to 1
fig, axis = plt.subplots(nrows=1, ncols=1)
cf.baseline_survival_.plot(ax = axis, title = "Baseline Survival")
# Solution to prediction
regressors = np.array([[1,45,0,0]])
survival = cf.predict_survival_function(regressors)
survival
#Solution to plotting multiple regressors
fig, axis = plt.subplots(nrows=1, ncols=1, sharex=True)
regressor1 = np.array([[1,45,0,1]])
regressor2 = np.array([[1,23,1,1]])
survival_1 = cf.predict_survival_function(regressor1)
survival_2 = cf.predict_survival_function(regressor2)
plt.plot(survival_1,label = "32 year old male")
plt.plot(survival_2,label = "46 year old female")
plt.legend(loc = "lower left")
#Difference in survival
odds = survival_1 / survival_2
plt.plot(odds, c = "red")
##Solution to 1
import pyBMA.CoxPHFitter
bmaCox = pyBMA.CoxPHFitter.CoxPHFitter()
bmaCox.fit(df, "lifetime", event_col= "event", priors= [0.5]*4)
print(bmaCox.summary)
#Low probability for everything favours parsimonious models
bmaCox = pyBMA.CoxPHFitter.CoxPHFitter()
bmaCox.fit(df, "lifetime", event_col= "event", priors= [0.1]*4)
print(bmaCox.summary)
#Low probability for everything favours parsimonious models
bmaCox = pyBMA.CoxPHFitter.CoxPHFitter()
bmaCox.fit(df, "lifetime", event_col= "event", priors= [0.9]*4)
print(bmaCox.summary)
#Low probability for everything favours parsimonious models
bmaCox = pyBMA.CoxPHFitter.CoxPHFitter()
bmaCox.fit(df, "lifetime", event_col= "event", priors= [0.3, 0.9, 0.001, 0.3])
print(bmaCox.summary)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problems
Step2: If we want to look at covariates, we need a new approach. We'll use Cox proportional hazards. More information here.
Step3: Once we've fit the data, we need to do something useful with it. Try to do the following things
Step4: Model selection
|
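The `weibull_median` helper used throughout the notebook above implements the closed-form median of a Weibull(α, β) distribution, β·(ln 2)^(1/α). A standalone check in plain Python (illustrative values only):

```python
import math

def weibull_median(alpha, beta):
    # median of a Weibull distribution with shape alpha and scale beta
    return beta * (math.log(2) ** (1.0 / alpha))

# With alpha = 1 the Weibull reduces to an exponential with scale beta,
# whose median is beta * ln(2)
print(weibull_median(1.0, 10.0))  # -> 6.931...
```

This is the quantity the notebook histograms over the MCMC trace: each posterior (α, β) sample maps to one plausible median lifetime.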
12,331
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'hammoz-consortium', 'sandbox-3', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Mmr
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Step59: 14.2. Shortwave Bands
Step60: 14.3. Longwave Bands
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Step62: 15.2. Twomey
Step63: 15.3. Twomey Minimum Ccn
Step64: 15.4. Drizzle
Step65: 15.5. Cloud Lifetime
Step66: 15.6. Longwave Bands
Step67: 16. Model
Step68: 16.2. Processes
Step69: 16.3. Coupling
Step70: 16.4. Gas Phase Precursors
Step71: 16.5. Scheme Type
Step72: 16.6. Bulk Scheme Species
|
12,332
|
<ASSISTANT_TASK:>
Python Code:
print('Hello world!')
print(list(range(5)))
import numpy as np
# To proceed, implement the missing code, and remove the 'raise NotImplementedError()'
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# three = ?
assert three == 3
print('Good job!')
import numpy as np
# This is a two-dimensional numpy array:
arr = np.array([[1,2,3,4],[5,6,7,8]])
print(arr, '\n')
# The shape is a tuple describing the size of each dimension
print(f"shape={arr.shape}")
# Elements are selected by specifying two indices, counting from 0
print(f"arr[1][3] = {arr[1][3]}", '\n')
# This is a three-dimensional numpy array
arr3 = np.array([[[1, 2, 3, 4], [5, 6, 7, 8]], [[9, 10, 11, 12], [13, 14, 15, 16]]])
print(arr3, '\n')
print(f"shape={arr3.shape}")
# Elements in a three dimensional array are selected by specifying three indices, counting from 0
print(f"arr3[1][0][2] = {arr3[1][0][2]}")
# The numpy reshape method allows one to change the shape of an array, while keeping the underlying data.
# One can leave one dimension unspecified by passing -1, it will be determined from the size of the data.
print("Original array:")
print(arr, '\n')
print("As 4x2 matrix")
print(np.reshape(arr, (4,2)), '\n')
print("As 8x1 matrix")
print(np.reshape(arr, (-1,1)), '\n')
print("As 2x2x2 array")
print(np.reshape(arr, (2,2,-1)))
# the numpy sum, mean min and max can be used to calculate aggregates across any axis
table = np.array([[10.9, 12.1, 15.2, 7.3], [3.9, 1.2, 34.6, 8.3], [1.9, 23.3, 1.2, 3.7]])
print(table)
# Calculating the maximum across the first axis (=0).
max0 = np.max(table, axis=0)
# Calculating the maximum across the second axis (=1).
max1 = np.max(table, axis=1)
# Calculating the overall maximum.
max_overall = np.max(table)
print(f"Maximum over the rows of the table = {max0}")
print(f"Maximum over the columns of the table = {max1}")
print(f"Overall maximum of the table = {max_overall}")
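The same axis convention applies to `np.sum` and `np.mean`; a quick standalone sketch (the values below are made up for illustration):

```python
import numpy as np

t = np.array([[1., 2., 3.], [4., 5., 6.]])
col_means = np.mean(t, axis=0)  # collapse axis 0 (rows) -> one mean per column
row_sums = np.sum(t, axis=1)    # collapse axis 1 (columns) -> one sum per row
```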
x = np.array([1.,2.,3.])
y = np.array([4.,5.,6.])
print(x + y)
print(x - y)
print(x * y)
print(x / y)
import matplotlib.pyplot as plt
plt.plot([7.3, 8.2, 1.2, 3.2, 9.1, 1.5])
plt.show()
plt.plot([100.0, 110.0, 120.0, 130.0, 140.0, 150.0],[2.3, 8.1, 9.3, 9.7, 9.8, 20.0])
plt.show()
x = np.arange(0, 10, 0.1)
print(x)
# Most numpy function work on arrays as well by applying the function to each element in turn
y_cos = np.cos(x)
y_sin = np.sin(x)
plt.figure(figsize=(13, 6))
plt.plot(x, y_cos, x, y_sin)
plt.show()
import pandas as pd
data = pd.read_csv("data/request_rate_vs_CPU.csv")
print(f'The dataframe has {data.shape[0]} rows and {data.shape[1]} columns')
data.head()
data.plot(figsize=(13, 6), y="CPU")
plt.show()
data.drop(columns='CPU').plot(figsize=(13, 6))
plt.show()
request_names = data.drop(columns='CPU').columns.values
request_names
request_rates = data.drop(columns='CPU').values
print(request_rates.shape)
request_rates
cpu = data['CPU'].values
cpu
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Imports
Step2: Parts to be implemented
Step3: Numpy arrays
Step4: Notice how we used print(f"foo {bar}") to enter a variable bar into the print statement using curly brackets? This syntax is called 'f-strings' (notice the f before the first quote), extra documentation can be found here.
Step5: Basic arithmetical operations on arrays of the same shape are done elementwise
Step6: Data plotting
Step7: A basic plot a list of values
Step8: Plotting a list of X and Y values
Step9: Using mathematical functions and plotting more than one line on a graph
Step10: Data import and inspection (optional)
Step11: The head command allows one to quickly inspect the first rows of the loaded data
Step12: We can select the CPU column and plot the data
Step13: Note that to show the plot, we need to import matplotlib.pyplot and execute the show() function.
Step14: Now to continue and start on modelling the data, we'll work with basic numpy arrays. By doing this we also drop the time-information as shown in the plots above.
Step15: We extract the request rates as a 2-dimensional numpy array
Step16: and the cpu usage as a one-dimensional numpy array
|
12,333
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division, absolute_import, print_function
from pytz import timezone
import elasticsearch
import elasticsearch.helpers
from idb import config
from idb.helpers.logging import idblogger
from idb.helpers.conversions import fields, custom_mappings
# u = "4dce41dc-2af6-448c-99e1-abfd3a9cc3e5"
u = "db4bb0df-8539-4617-ab5f-eb118aa3126b"
serverlist = config.config["elasticsearch"]["servers"]
def get_connection(**kwargs):
kwargs.setdefault('hosts', config.config["elasticsearch"]["servers"])
kwargs.setdefault('retry_on_timeout', True) # this isn't valid until >=1.3
kwargs.setdefault('sniff_on_start', False)
kwargs.setdefault('sniff_on_connection_fail', False)
kwargs.setdefault('max_retries', 10)
kwargs.setdefault('timeout', 30)
return elasticsearch.Elasticsearch(**kwargs)
conn = get_connection(hosts=serverlist)
r = conn.search(index="idigbio", doc_type="recordsets", _source=["etag"], body={
"query": {
"bool": {
"must": [
{
"query_string": {
"default_field": "_id",
"query": u
}
}
],
"must_not": [],
"should": []
}
},
"from": 0,
"size": 10,
"sort": [],
"aggs": {}
})
r['hits']
# parent = r['hits']['hits'][0]['_parent']
# conn.delete(index='idigbio', id=u, parent=parent, doc_type='mediarecords')
conn.delete(index='idigbio', id=u, doc_type='recordsets')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: u is the uuid of the recordset that we wish to delete
Step2: This is from Nathan's example of deleting a mediarecord, where we need the parent record
|
12,334
|
<ASSISTANT_TASK:>
Python Code:
##A year is made up of 52 weeks
week = list(range(1, 53)) #range takes the start number as its first parameter and (end + 1) as its second
week
len(week)
##A week has 7 days, so use day 4 (the middle of week 1) as that week's representative day (week 2's representative day is day 11)
representative_day = list(range(4, 365, 7))#the third parameter of range is the step size
representative_day
len(representative_day)
##Once we learn date handling later, these indexes can be mapped onto actual dates
import random
random.randint(0, 10) #generate a random integer between 0 and 10 (0 and 10 included)
search = [] #create an empty list
## Generate weekly search volumes for 52 weeks (one year)
for i in range(0, 52) :
    search = search + [random.randint(0, 100)] #search-trend data is on a 0-100 scale, so generate numbers between 0 and 100
search
len(search)
from pandas import DataFrame
###Example
#a DataFrame is a 2-D, table-like data structure
#dict keys become column names; putting lists inside draws a neat table
data = {
    'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'],
    'year': [2000, 2001, 2002, 2001, 2002],
} #dictionary object
frame = DataFrame(data)
frame
data = {
'week': week,
'day': representative_day,
'search_volume': search,
}
frame = DataFrame(data)
frame
a = [1, 7]
a
b = (a[1]-a[1-1])/3
a1 = a[1-1] + b
a1
a2 = a1 + b
a2
a3 = a2 + b
round(a3, 5) == a[1]
final = [a[1-1]] + [a1] + [a2]
final
final = []
part = []
for i in range(1, 52) :
b = (search[i] - search[i-1])/7
a1 = search[i-1] + b
a2 = a1 + b
a3 = a2 + b
a4 = a3 + b
a5 = a4 + b
a6 = a5 + b
part = [search[i-1]] + [a1] + [a2] + [a3] + [a4] + [a5] + [a6]
final = final + part
final
len(final)
day = list(range(4,361)) #computed from day 4 (middle of week 1) up to the day before day 361 (middle of the last, 52nd week)
day
len(day)
full_data = {
'day': day,
'search volume': final,
}
full_frame = DataFrame(full_data)
full_frame
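The seven-step linear interpolation done by hand above can be reproduced with `numpy.interp`; a sketch on toy values (the numbers below are made up, not the generated search data):

```python
import numpy as np

search = [10, 31, 3]             # toy weekly values
week_days = [4, 11, 18]          # representative day of each week
days = np.arange(4, 18)          # daily grid, stopping before the last representative day
daily = np.interp(days, week_days, search)  # same result as the step-by-step loop
```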
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 각 주의 대표일 리스트 만들기
Step2: 난수생성기
Step3: 난수 생성으로 검색량 리스트 만들기
Step4: 데이터 프레임 만들기
Step5: 일 단위 데이터로 늘리기
Step6: 실제 데이터로 생성
Step7: 나중에 plot을 배우면 더 직관적으로 이해하기 좋을 것 같음
Step8: 최종 데이터 프레임 만들기
|
12,335
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
import os
from xgboost import XGBRegressor
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR, LinearSVR
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import validation_curve, learning_curve
path = '../Final report'
X_fn = 'x.csv'
y_fn = 'y.csv'
X_path = os.path.join(path, X_fn)
y_path = os.path.join(path, y_fn)
X = pd.read_csv(X_path)
y = pd.read_csv(y_path)
X.head()
for fuel in ['All coal', 'Lignite', 'Subbituminous']:
X.loc[:,fuel] = X.loc[:,fuel].values/X.loc[:,'NG Price ($/mcf)'].values
X.drop('NG Price ($/mcf)', axis=1, inplace=True)
cluster_ids = X['cluster'].unique()
for cluster in cluster_ids:
X['cluster_{}'.format(cluster)] = np.eye(len(cluster_ids))[X['cluster'],cluster]
X.head()
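The `np.eye` indexing trick used above generalizes: indexing the identity matrix with an integer label array yields one indicator row per label. A minimal sketch on toy labels:

```python
import numpy as np

labels = np.array([0, 2, 1, 0])
n_classes = labels.max() + 1
onehot = np.eye(n_classes)[labels]  # row i is the one-hot vector for labels[i]
```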
X.tail()
free_cap_dict = {}
for cluster in range(6):
free_cap_dict[cluster] = X.loc[X['cluster'] == cluster, ['DATETIME', 'nameplate_capacity', 'GROSS LOAD (MW)']]
col_name = 'cluster_' + str(cluster) + ' free capacity'
free_cap_dict[cluster].loc[:,col_name] = (free_cap_dict[cluster].loc[:,'nameplate_capacity'].values -
free_cap_dict[cluster].loc[:,'GROSS LOAD (MW)'].values)
free_cap_dict[0].head()
for cluster in range(6):
col_name = 'cluster_' + str(cluster) + ' free capacity'
X = pd.merge(X, free_cap_dict[cluster].loc[:,['DATETIME', col_name]], on='DATETIME')
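The per-cluster loop-and-merge above can also be expressed as a single pivot; a sketch on a toy frame (column names mirror the ones used in this notebook, the numbers are invented):

```python
import pandas as pd

# Toy frame mirroring the columns used in this notebook
df = pd.DataFrame({
    'DATETIME': ['h1', 'h1', 'h2', 'h2'],
    'cluster': [0, 1, 0, 1],
    'nameplate_capacity': [100.0, 200.0, 100.0, 200.0],
    'GROSS LOAD (MW)': [40.0, 150.0, 60.0, 120.0],
})
df['free'] = df['nameplate_capacity'] - df['GROSS LOAD (MW)']
# One row per DATETIME, one column per cluster's free capacity
wide = df.pivot(index='DATETIME', columns='cluster', values='free')
wide.columns = ['cluster_{} free capacity'.format(c) for c in wide.columns]
out = df.merge(wide, left_on='DATETIME', right_index=True)
```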
X.head(n=10)
# Row-wise equivalent of the per-cluster merge above (much slower; shown for illustration)
for idx in X.index:
    datetime = X.loc[idx, 'DATETIME']
    for cluster in range(6):
        col_name = 'cluster_' + str(cluster) + ' free capacity'
        mask = (X['DATETIME'] == datetime) & (X['cluster'] == cluster)
        X.loc[idx, col_name] = (X.loc[mask, 'nameplate_capacity'].values -
                                X.loc[mask, 'GROSS LOAD (MW)'].values)[0]
y.tail()
X_cols = ['nameplate_capacity', 'GROSS LOAD (MW)', 'ERCOT Load, MW',
'Total Wind Installed, MW', 'Total Wind Output, MW', 'Net Load Change (MW)',
'All coal', 'Lignite', 'Subbituminous']
X_cluster_cols = ['cluster_{}'.format(cluster) for cluster in cluster_ids]
# X_cluster_free_cols = ['cluster_{} free capacity'.format(cluster) for cluster in cluster_ids]
X_clean = X.loc[:,X_cols+X_cluster_cols]#+X_cluster_free_cols]
X_clean.fillna(0, inplace=True)
y_clean = y.loc[:,'Gen Change (MW)']
y_clean.fillna(0, inplace=True)
print X_clean.shape
print y_clean.shape
X_clean.head()
X_train = X_clean.loc[(X['Year']<2012),:]
y_train = y_clean.loc[(X['Year']<2012)]
X_va = X_clean.loc[X['Year'].isin([2012, 2013]),:]
y_va = y_clean.loc[X['Year'].isin([2012, 2013])]
X_test = X_clean.loc[X['Year']>2013,:]
y_test = y_clean.loc[X['Year']>2013]
X_train_scaled = StandardScaler().fit_transform(X_train)
X_va_scaled = StandardScaler().fit_transform(X_va)
X_test_scaled = StandardScaler().fit_transform(X_test)
print X_train_scaled.shape, y_train.shape
print X_va_scaled.shape, y_va.shape
print X_test_scaled.shape, y_test.shape
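Note that the cells above fit a separate StandardScaler on each split, so each split is standardized by its own statistics; a common alternative is to compute the statistics on the training split only and reuse them. A sketch with plain numpy on synthetic stand-in data:

```python
import numpy as np

rng = np.random.RandomState(0)
X_tr = rng.normal(10.0, 2.0, size=(100, 3))   # stand-in for X_train
X_v = rng.normal(10.0, 2.0, size=(40, 3))     # stand-in for X_va
mu, sd = X_tr.mean(axis=0), X_tr.std(axis=0)  # statistics from training data only
X_tr_s = (X_tr - mu) / sd
X_v_s = (X_v - mu) / sd                        # reuse the same mean/std at validation time
```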
lm = LinearRegression()
lm.fit(X_train_scaled, y_train)
lm.score(X_va_scaled, y_va)
y_pr = lm.predict(X_va_scaled)
y_va.values.shape, y_pr.shape, X.loc[X['Year'].isin([2012, 2013]),'cluster'].values.shape
y_lm_resids = pd.DataFrame(dict(zip(['Gen Change (MW)', 'y_pr', 'cluster'],
[y_va.values, y_pr, X.loc[X['Year'].isin([2012, 2013]),'cluster'].values])))
# y_lm_resids['y_pr'] = y_pr
# y_lm_resids['cluster'] = X.loc[:,'cluster']
y_lm_resids.head()
y_lm_resids.loc[:,'residuals'] = y_lm_resids.loc[:,'y_pr'] - y_lm_resids.loc[:,'Gen Change (MW)']
g = sns.FacetGrid(y_lm_resids, hue='cluster', col='cluster',
col_wrap=3)
g.map(plt.scatter, 'Gen Change (MW)', 'residuals')
g.add_legend()
from xgboost import XGBRegressor
param_values = [25, 100, 250, 350]
train_scores, valid_scores = validation_curve(XGBRegressor(), X_train, y_train, "n_estimators", param_values,
n_jobs=-1, verbose=3)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
valid_scores_mean = np.mean(valid_scores, axis=1)
valid_scores_std = np.std(valid_scores, axis=1)
plt.title("Validation Curve with XGBoost", size=15)
plt.xlabel("n_estimators", size=15)
plt.ylabel("Score", size=15)
plt.ylim(0.0, 1.1)
lw = 2
plt.plot(param_values, train_scores_mean, label="Training score",
color="darkorange", lw=lw)
plt.fill_between(param_values, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.2,
color="darkorange", lw=lw)
plt.plot(param_values, valid_scores_mean, label="Cross-validation score",
color="navy", lw=lw)
plt.fill_between(param_values, valid_scores_mean - valid_scores_std,
valid_scores_mean + valid_scores_std, alpha=0.2,
color="navy", lw=lw)
plt.legend(loc="best")
plt.savefig('XGBoost n_estimators validation curve.pdf', bbox_inches='tight')
param_values = [1,3,5,9,15]
train_scores, valid_scores = validation_curve(XGBRegressor(n_estimators=250), X_train, y_train, "max_depth", param_values,
n_jobs=-1, verbose=3)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
valid_scores_mean = np.mean(valid_scores, axis=1)
valid_scores_std = np.std(valid_scores, axis=1)
plt.title("Validation Curve with XGBoost", size=15)
plt.xlabel("max_depth", size=15)
plt.ylabel("Score", size=15)
plt.ylim(0.0, 1.1)
lw = 2
plt.plot(param_values, train_scores_mean, label="Training score",
color="darkorange", lw=lw)
plt.fill_between(param_values, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.2,
color="darkorange", lw=lw)
plt.plot(param_values, valid_scores_mean, label="Cross-validation score",
color="navy", lw=lw)
plt.fill_between(param_values, valid_scores_mean - valid_scores_std,
valid_scores_mean + valid_scores_std, alpha=0.2,
color="navy", lw=lw)
plt.legend(loc="best")
plt.savefig('XGBoost max_depth validation curve.pdf', bbox_inches='tight')
param_values = np.logspace(-5, 1, 7)
train_scores, valid_scores = validation_curve(XGBRegressor(n_estimators=250), X_train, y_train, "reg_alpha", param_values,
n_jobs=-1, verbose=3)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
valid_scores_mean = np.mean(valid_scores, axis=1)
valid_scores_std = np.std(valid_scores, axis=1)
plt.title("Validation Curve with XGBoost")
plt.xlabel("reg_alpha")
plt.ylabel("Score")
plt.ylim(0.0, 1.1)
lw = 2
plt.semilogx(param_values, train_scores_mean, label="Training score",
color="darkorange", lw=lw)
plt.fill_between(param_values, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.2,
color="darkorange", lw=lw)
plt.semilogx(param_values, valid_scores_mean, label="Cross-validation score",
color="navy", lw=lw)
plt.fill_between(param_values, valid_scores_mean - valid_scores_std,
valid_scores_mean + valid_scores_std, alpha=0.2,
color="navy", lw=lw)
plt.legend(loc="best")
param_values = [1,3,5,9,15]
train_sizes, train_scores, valid_scores = learning_curve(XGBRegressor(n_estimators=250), X_train, y_train,
n_jobs=-1, verbose=3)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
valid_scores_mean = np.mean(valid_scores, axis=1)
valid_scores_std = np.std(valid_scores, axis=1)
plt.title("Learning Curve with XGBoost", size=15)
plt.xlabel("Sample size", size=15)
plt.ylabel("Score", size=15)
plt.ylim(0.0, 1.1)
lw = 2
plt.plot(train_sizes, train_scores_mean, label="Training score",
color="darkorange", lw=lw)
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.2,
color="darkorange", lw=lw)
plt.plot(train_sizes, valid_scores_mean, label="Cross-validation score",
color="navy", lw=lw)
plt.fill_between(train_sizes, valid_scores_mean - valid_scores_std,
valid_scores_mean + valid_scores_std, alpha=0.2,
color="navy", lw=lw)
plt.legend(loc="best")
plt.savefig('XGBoost learning curve.pdf', bbox_inches='tight')
xgbr = XGBRegressor(n_estimators=250)
xgbr.fit(X_train, y_train)
y_pr = xgbr.predict(X_va)
y_xgbr_resids = pd.DataFrame(dict(zip(['Gen Change (MW)', 'y_pr', 'cluster'],
[y_va.values, y_pr, X.loc[X['Year'].isin([2012, 2013]),'cluster'].values])))
y_xgbr_resids.loc[:,'residuals'] = y_xgbr_resids.loc[:,'y_pr'] - y_xgbr_resids.loc[:,'Gen Change (MW)']
with sns.axes_style('whitegrid'):
g = sns.FacetGrid(y_xgbr_resids, hue='cluster', col='cluster',
col_wrap=3)
g.map(plt.scatter, 'y_pr', 'residuals', s=5, alpha=.3)
g.set_xlabels(size=15)
g.set_ylabels(size=15)
plt.savefig('XGBR residuals.pdf')
model = XGBRegressor()
subsample = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 1.0]
param_grid = dict(subsample=subsample)
grid_search = GridSearchCV(model, param_grid, n_jobs=-1, verbose=3)
result = grid_search.fit(X_train_scaled, y_train)
result.cv_results_
model = XGBRegressor()
colsample_bytree = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 1.0]
param_grid = dict(colsample_bytree=colsample_bytree)
grid_search = GridSearchCV(model, param_grid, n_jobs=-1, verbose=3)
result = grid_search.fit(X_train_scaled, y_train)
result.cv_results_
model = XGBRegressor()
colsample_bylevel = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 1.0]
param_grid = dict(colsample_bylevel=colsample_bylevel)
grid_search = GridSearchCV(model, param_grid, n_jobs=-1, verbose=3)
result = grid_search.fit(X_train_scaled, y_train)
result.cv_results_
model = XGBRegressor()
max_depth = [3, 6, 9]
n_estimators = [100, 250, 500]
reg_alpha = [1e-5, 1e-3, 0.1]
reg_lambda = [1e-3, 0.1, 1]
param_grid = dict(max_depth=max_depth, n_estimators=n_estimators,
reg_alpha=reg_alpha, reg_lambda=reg_lambda)
grid_search = GridSearchCV(model, param_grid, n_jobs=-1, verbose=2)
result = grid_search.fit(X_train_scaled, y_train)
import cPickle as pickle
pickle.dump((grid_search, result), open( "xgb gridsearch and results.pkl", "wb" ) )
result.cv_results_
result.best_estimator_
grid_search.score(X_va_scaled, y_va)
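Besides `cv_results_`, the fitted search object exposes the winning configuration via `best_params_`, and `best_estimator_` is refit on the full training data. A quick sketch on tiny synthetic data (Ridge stands in for XGBRegressor so it runs fast):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

rng = np.random.RandomState(0)
X_toy = rng.normal(size=(60, 3))
y_toy = X_toy.dot(np.array([1.0, -2.0, 0.5])) + rng.normal(scale=0.1, size=60)

gs = GridSearchCV(Ridge(), {'alpha': [0.01, 1.0, 100.0]}, cv=3)
gs.fit(X_toy, y_toy)
best_alpha = gs.best_params_['alpha']   # chosen hyperparameter
refit_score = gs.score(X_toy, y_toy)    # scores with the refit best_estimator_
```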
xgb = XGBRegressor(n_estimators=250, reg_alpha=0.1)
xgb.fit(X_train, y_train)
xgb.score(X_va, y_va)
y_pr = xgb.predict(X_va)
y_xgb_resids = pd.DataFrame(dict(zip(['Gen Change (MW)', 'y_pr', 'cluster'],
[y_va.values, y_pr, X.loc[X['Year'].isin([2012, 2013]),'cluster'].values])))
y_xgb_resids.loc[:,'residuals'] = y_xgb_resids.loc[:,'y_pr'] - y_xgb_resids.loc[:,'Gen Change (MW)']
g = sns.FacetGrid(y_xgb_resids, hue='cluster', col='cluster',
col_wrap=3)
g.map(plt.scatter, 'y_pr', 'residuals')
g.add_legend()
y_xgb_resids.describe()
X_train.columns
X_train_ratio = X_train.copy()
X_va_ratio = X_va.copy()
for fuel in ['All coal', 'Lignite', 'Subbituminous']:
X_train_ratio.loc[:,fuel] = X_train_ratio.loc[:,fuel].values/X_train_ratio.loc[:,'NG Price ($/mcf)'].values
X_va_ratio.loc[:,fuel] = X_va.loc[:,fuel]/X_va.loc[:,'NG Price ($/mcf)']
X_train_ratio.drop('NG Price ($/mcf)', axis=1, inplace=True)
X_va_ratio.drop('NG Price ($/mcf)', axis=1, inplace=True)
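The loop above folds the natural-gas price into the coal price columns as a single ratio feature. A minimal self-contained sketch of the same idea on made-up prices (the column names are illustrative, not from the dataset):

```python
import pandas as pd

# Hypothetical prices; only the shape of the transformation matters here.
df = pd.DataFrame({'coal': [1.8, 2.0, 2.2],
                   'gas':  [3.0, 4.0, 5.5]})
# One coal-to-gas ratio column replaces the two raw price columns.
df['coal_to_gas'] = df['coal'] / df['gas']
df = df.drop('gas', axis=1)
assert abs(df['coal_to_gas'].iloc[0] - 0.6) < 1e-12
```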
X_train.head()
X_train_ratio.head()
xgb_ratio = XGBRegressor(n_estimators=250, reg_alpha=0.1)
xgb_ratio.fit(X_train_ratio, y_train)
xgb_ratio.score(X_va_ratio, y_va)
y_pr = xgb_ratio.predict(X_va_ratio)
y_xgb_resids = pd.DataFrame(dict(zip(['Gen Change (MW)', 'y_pr', 'cluster'],
[y_va.values, y_pr, X.loc[X['Year'].isin([2012, 2013]),'cluster'].values])))
y_xgb_resids.loc[:,'residuals'] = y_xgb_resids.loc[:,'y_pr'] - y_xgb_resids.loc[:,'Gen Change (MW)']
g = sns.FacetGrid(y_xgb_resids, hue='cluster', col='cluster',
col_wrap=3)
g.map(plt.scatter, 'y_pr', 'residuals')
g.add_legend()
y_xgb_resids.describe()
from xgboost import plot_importance
plot_importance(xgb_ratio)
lm = LinearRegression(normalize=True)
lm.fit(X_train_ratio, y_train)
lm.score(X_va_ratio, y_va)
y_pr = lm.predict(X_va_ratio)
y_va.values.shape, y_pr.shape, X.loc[X['Year'].isin([2012, 2013]),'cluster'].values.shape
y_lm_resids = pd.DataFrame(dict(zip(['Gen Change (MW)', 'y_pr', 'cluster'],
[y_va.values, y_pr, X.loc[X['Year'].isin([2012, 2013]),'cluster'].values])))
# y_lm_resids['y_pr'] = y_pr
# y_lm_resids['cluster'] = X.loc[:,'cluster']
y_lm_resids.head()
y_lm_resids.loc[:,'residuals'] = y_lm_resids.loc[:,'y_pr'] - y_lm_resids.loc[:,'Gen Change (MW)']
g = sns.FacetGrid(y_lm_resids, hue='cluster', col='cluster',
col_wrap=3)
g.map(plt.scatter, 'Gen Change (MW)', 'residuals')
g.add_legend()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import data
Step2: Make fuel price a ratio of the coal price to the natural gas price
Step3: One-hot encoding of the cluster variable
Step4: Add free capacity of every group for that hour.
Step5: Drop unnecessary columns and replace nan's with 0
Step6: Split into training, validation, testing
Step7: Need scaled versions of the X data for some of the models
Step8: Check size of all arrays
Step9: Linear Regression (OLS)
Step10: XGBoost
Step11: Validation curve for n_estimators
Step12: Validation curve for n_estimators
Step13: Validation curve for reg_alpha
Step14: Learning curve for n_estimators=250 and max_depth=3
Step15: Try XGBoost on non-scaled data
Step16: Try ratio of fuel prices
Step17: Linear Regression (OLS) with ratio of fuel prices
|
12,336
|
<ASSISTANT_TASK:>
Python Code:
%run ../../utils/load_notebook.py
from instabilities import *
import numpy as np
He_coeff = 1.34
def flat_end(argument):
'''Decorator to extend a function beyond the given limit at its last value'''
def real_decorator(function):
def wrapper(*args, **kwargs):
if args[0] < argument:
return function(*args, **kwargs)
else:
return function(argument, *args[1:], **kwargs)
return wrapper
return real_decorator
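A quick usage sketch of flat_end on a hypothetical function and limit, showing that below the limit the function is evaluated normally and beyond it the value is frozen at the limit:

```python
def flat_end(argument):
    def real_decorator(function):
        def wrapper(*args, **kwargs):
            if args[0] < argument:
                return function(*args, **kwargs)
            else:
                return function(argument, *args[1:], **kwargs)
        return wrapper
    return real_decorator

@flat_end(5.0)
def f(r):
    return 2.0 * r

assert f(3.0) == 6.0    # below the limit: normal evaluation
assert f(10.0) == 10.0  # beyond the limit: held at f(5.0)
```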
# sig_maj_lim=None
# spl_maj=None
# @flat_end(sig_maj_lim)
# def sig_R_maj_minmin(r, spl_maj=spl_maj):
# return spl_maj(r).item()
# @flat_end(sig_maj_lim)
# def sig_R_maj_min(r, spl_maj=spl_maj):
# return spl_maj(r).item()/sqrt(sin_i**2 + 0.49*cos_i**2)
# @flat_end(sig_maj_lim)
# def sig_R_maj_max(r, spl_maj=spl_maj):
# return spl_maj(r).item()/sqrt(0.5*sin_i**2 + 0.09*cos_i**2)
# @flat_end(sig_maj_lim)
# def sig_R_maj_maxmax(r, spl_maj=spl_maj):
# return spl_maj(r)*sqrt(2)/sin_i
# @flat_end(sig_maj_lim)
# def sig_R_maj_maxmaxtrue(r, spl_maj=spl_maj):
# return spl_maj(r)/sin_i/sqrt(sigPhi_to_sigR_real(r))
# sig_min_lim=None
# spl_min=None
# @flat_end(sig_min_lim)
# def sig_R_minor_minmin(r, spl_min=spl_min):
# return spl_min(r).item()
# @flat_end(sig_min_lim)
# def sig_R_minor_min(r, spl_min=spl_min):
# return spl_min(r).item()/sqrt(sin_i**2 + 0.49*cos_i**2)
# @flat_end(sig_min_lim)
# def sig_R_minor_max(r, spl_min=spl_min):
# return spl_min(r).item()/sqrt(sin_i**2 + 0.09*cos_i**2)
# @flat_end(sig_min_lim)
# def sig_R_minor_maxmax(r, spl_min=spl_min):
# return spl_min(r)/sin_i
# TODO: move to proper place
def plot_data_lim(ax, data_lim):
'''Vertical line marking the end of the data'''
ax.axvline(x=data_lim, ls='-.', color='black', alpha=0.5)
def plot_disc_scale(scale, ax, text=None):
'''Marks the disc scale length'''
ax.plot([scale, scale], [0., 0.05], '-', lw=6., color='black')
if text:
ax.annotate(text, xy=(scale, 0.025), xytext=(scale, 0.065), textcoords='data', arrowprops=dict(arrowstyle="->"))
def plot_Q_levels(ax, Qs, style='--', color='grey', alpha=0.4):
'''Function for drawing horizontal lines at various $Q^{-1}$ levels:'''
for Q in Qs:
ax.axhline(y=1./Q, ls=style, color=color, alpha=alpha)
def plot_2f_vs_1f(ax=None, total_gas_data=None, epicycl=None, gas_approx=None, sound_vel=None, scale=None, sigma_max=None, sigma_min=None, star_density_max=None,
star_density_min=None, data_lim=None, color=None, alpha=0.3, disk_scales=[], label=None):
'''Comparison plot of the 2F and 1F criteria for different photometries and sig_R values;
all the gas is fed in, and the result is NOT corrected for axisymmetric perturbations.'''
invQg, invQs, invQeff_min = zip(*get_invQeff_from_data(gas_data=total_gas_data,
epicycl=epicycl,
gas_approx=gas_approx,
sound_vel=sound_vel,
scale=scale,
sigma=sigma_max,
star_density=star_density_min))
invQg, invQs, invQeff_max = zip(*get_invQeff_from_data(gas_data=total_gas_data,
epicycl=epicycl,
gas_approx=gas_approx,
sound_vel=sound_vel,
scale=scale,
sigma=sigma_min,
star_density=star_density_max))
# invQg = map(lambda l: l*1.6, invQg)
# invQeff_min = map(lambda l: l*1.6, invQeff_min)
# invQeff_max = map(lambda l: l*1.6, invQeff_max)
rr = zip(*total_gas_data)[0]
ax.fill_between(rr, invQeff_min, invQeff_max, color=color, alpha=alpha, label=label)
ax.plot(rr, invQeff_min, 'd-', color=color, alpha=0.6)
ax.plot(rr, invQeff_max, 'd-', color=color, alpha=0.6)
ax.plot(rr, invQg, 'v-', color='b')
ax.set_ylim(0., 1.5)
ax.set_xlim(0., data_lim+50.)
# plot_SF(ax)
plot_data_lim(ax, data_lim)
for h, annot in disk_scales:
plot_disc_scale(h, ax, annot)
plot_Q_levels(ax, [1., 1.5, 2., 3.])
ax.legend()
def epicyclicFreq_real(poly_gas, R, resolution):
'''Direct computation of the epicyclic frequency at radius R for a spline or polynomial'''
try:
return sqrt(2.0) * poly_gas(R) * sqrt(1 + R * poly_gas.deriv()(R) / poly_gas(R)) / (R * resolution )
except:
return sqrt(2.0) * poly_gas(R) * sqrt(1 + R * poly_gas.derivative()(R) / poly_gas(R)) / (R * resolution )
def Sigma_crit_S04(gas_dens, r_gas, star_surf_dens):
return 6.1 * gas_dens / (gas_dens + star_surf_dens(r_gas))
def oort_a(r, gas_vel):
try:
return 0.5 * (gas_vel(r)/r - gas_vel.deriv()(r))
except:
return 0.5 * (gas_vel(r)/r - gas_vel.derivative()(r))
def Sigma_crit_A(r, gas_vel, alpha, sound_vel):
G = 4.32
return alpha * (sound_vel*oort_a(r, gas_vel)) / (np.pi*G)
from scipy.special import i0, i1, k0, k1
def disc_vel(r, Sigma0, h, scale, Sigma0_2=None, h_2=None):
G = 4.3
bessels = i0(0.5*r/h)*k0(0.5*r/h) - i1(0.5*r/h)*k1(0.5*r/h)
if h_2 is None:
return np.sqrt(2*np.pi*G*Sigma0*r*scale * 0.5*r/h * bessels)
else: # two-disc model
bessels2 = i0(0.5*r/h_2)*k0(0.5*r/h_2) - i1(0.5*r/h_2)*k1(0.5*r/h_2)
return np.sqrt(2*np.pi*G*Sigma0*r*scale * 0.5*r/h * bessels + 2*np.pi*G*Sigma0_2*r*scale * 0.5*r/h_2 * bessels2)
from tabulate import tabulate
import pandas as pd
def show_all_photometry_table(all_photometry, scale):
'''scale in kpc/arcsec'''
copy = [list(l) for l in all_photometry]
# all of this below exists because there are two-disc models, which must be handled differently
for entry in copy:
if type(entry[5]) == tuple:
entry[5] = (round(entry[5][0], 2), round(entry[5][1], 2))
else:
entry[5] = round(entry[5], 2)
for entry in copy:
if type(entry[4]) == tuple:
entry[4] = (round(entry[4][0], 2), round(entry[4][1], 2))
else:
entry[4] = round(entry[4], 2)
for entry in copy:
if type(entry[5]) == tuple:
entry.append(2*math.pi*entry[5][0]**2 * entry[-1][0](0) * (scale * 1000.)**2 +
2*math.pi*entry[5][1]**2 * entry[-1][1](0) * (scale * 1000.)**2)
else:
entry.append(2*math.pi*entry[5]**2 * entry[-1](0) * (scale * 1000.)**2)
for entry in copy:
if type(entry[5]) == tuple:
entry.append(entry[7][0](0) + entry[7][1](0))
else:
entry.append(entry[7](0))
df = pd.DataFrame(data=copy, columns=['Name', 'r_eff', 'mu_eff', 'n', 'mu0_d', 'h_disc', 'M/L', 'surf', 'M_d/M_sun', 'Sigma_0'])
df['M/L'] = df['M/L'].apply(lambda l: '%2.2f'%l)
# df['Sigma_0'] = df['surf'].map(lambda l:l(0))
df['Sigma_0'] = df['Sigma_0'].apply(lambda l: '%2.0f' % l)
# df['M_d/M_sun'] = 2*math.pi*df['h_disc']**2 * df['surf'].map(lambda l:l(0)) * (scale * 1000.)**2
df['M_d/M_sun'] = df['M_d/M_sun'].apply(lambda l: '%.2E.' % l)
df.drop('surf', axis=1, inplace=True)
print tabulate(df, headers='keys', tablefmt='psql', floatfmt=".2f")
BAD_MODEL_PREFIX = 'b:'
def tot_dens(dens):
if type(dens) == tuple:
star_density = lambda l: dens[0](l) + dens[1](l)
else:
star_density = lambda l: dens(l)
return star_density
from itertools import cycle
lines = ["-","--","-.",":"]
linecycler = cycle(lines)
def foreground_zebra(ax, step, alpha):
for i in range(int(ax.get_xlim()[1])+1):
if i%2 == 0:
ax.axvspan(i*step, (i+1)*step, color='grey', alpha=alpha)
from math import pi
def romeo_Qinv(r=None, epicycl=None, sound_vel=11., sigma_R=None, star_density=None,
HI_density=None, CO_density=None, alpha=None, scale=None, gas_approx=None, verbose=False, show=False):
G = 4.32
kappa = epicycl(gas_approx, r, scale)
Q_star = kappa*sigma_R(r)/(pi*G*star_density(r))
Q_CO = kappa*sound_vel/(pi*G*CO_density)
Q_HI = kappa*sound_vel/(pi*G*HI_density)
T_CO, T_HI = 1.5, 1.5
if alpha > 0 and alpha <= 0.5:
T_star = 1. + 0.6*alpha**2
else:
T_star = 0.8 + 0.7*alpha
# TODO: keep only one of show/verbose
if show:
print 'r={:7.3f} Qg={:7.3f} Qs={:7.3f} Qg^-1={:7.3f} Qs^-1={:7.3f}'.format(r, Q_HI, Q_star, 1./Q_HI, 1./Q_star)
dispersions = [sigma_R(r), sound_vel, sound_vel]
QTs = [Q_star*T_star, Q_HI*T_HI, Q_CO*T_CO]
components = ['star', 'HI', 'H2']
index = QTs.index(min(QTs))
if verbose:
print 'QTs: {}'.format(QTs)
print 'min index: {}'.format(index)
print 'min component: {}'.format(components[index])
sig_m = dispersions[index]
def W_i(sig_m, sig_i):
return 2*sig_m*sig_i/(sig_m**2 + sig_i**2)
return W_i(sig_m, dispersions[0])/QTs[0] + W_i(sig_m, dispersions[1])/QTs[1] + W_i(sig_m, dispersions[2])/QTs[2], components[index]
def romeo_Qinv_thin(r=None, epicycl=None, sound_vel=11., sigma_R=None, star_density=None,
HI_density=None, CO_density=None, alpha=None, scale=None, gas_approx=None, verbose=False, show=False):
G = 4.32
kappa = epicycl(gas_approx, r, scale)
Q_star = kappa*sigma_R(r)/(pi*G*star_density(r))
Q_CO = kappa*sound_vel/(pi*G*CO_density)
Q_HI = kappa*sound_vel/(pi*G*HI_density)
# TODO: keep only one of show/verbose
if show:
print 'r={:7.3f} Qg={:7.3f} Qs={:7.3f} Qg^-1={:7.3f} Qs^-1={:7.3f}'.format(r, Q_HI, Q_star, 1./Q_HI, 1./Q_star)
dispersions = [sigma_R(r), sound_vel, sound_vel]
QTs = [Q_star, Q_HI, Q_CO]
components = ['star', 'HI', 'H2']
index = QTs.index(min(QTs))
if verbose:
print 'QTs: {}'.format(QTs)
print 'min index: {}'.format(index)
print 'min component: {}'.format(components[index])
sig_m = dispersions[index]
def W_i(sig_m, sig_i):
return 2*sig_m*sig_i/(sig_m**2 + sig_i**2)
return W_i(sig_m, dispersions[0])/QTs[0] + W_i(sig_m, dispersions[1])/QTs[1] + W_i(sig_m, dispersions[2])/QTs[2], components[index]
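Both $Q^{-1}$ estimators above mix the components through the weight W_i from Romeo & Wiegert. A small sketch of its key properties (the dispersion values are arbitrary):

```python
def W_i(sig_m, sig_i):
    # 2*sig_m*sig_i / (sig_m^2 + sig_i^2), as used inside romeo_Qinv
    return 2*sig_m*sig_i/(sig_m**2 + sig_i**2)

assert W_i(11.0, 11.0) == 1.0                          # equal dispersions: full weight
assert abs(W_i(11.0, 30.0) - W_i(30.0, 11.0)) < 1e-15  # symmetric in its arguments
assert 0.0 < W_i(11.0, 30.0) < 1.0                     # otherwise strictly below 1
```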
def plot_RF13_vs_2F(r_g_dens=None, HI_gas_dens=None, CO_gas_dens=None, epicycl=None, sound_vel=None, sigma_R_max=None, sigma_R_min=None,
star_density=None, alpha_max=None, alpha_min=None, scale=None, gas_approx=None, thin=True, show=False):
'''Gas densities are passed in uncorrected for helium.'''
if thin:
romeo_Q = romeo_Qinv_thin
else:
romeo_Q = romeo_Qinv
fig = plt.figure(figsize=[20, 5])
ax = plt.subplot(131)
totgas = zip(r_g_dens, [He_coeff*(l[0]+l[1]) for l in zip(HI_gas_dens, CO_gas_dens)])[1:]
if show:
print 'sig_R_max case:'
romeo_min = []
for r, g, co in zip(r_g_dens, HI_gas_dens, CO_gas_dens):
rom, _ = romeo_Q(r=r, epicycl=epicycl, sound_vel=sound_vel, sigma_R=sigma_R_max,
star_density=star_density, HI_density=He_coeff*g, CO_density=He_coeff*co,
alpha=alpha_min, scale=scale, gas_approx=gas_approx, show=show)
romeo_min.append(rom)
if _ == 'star':
color = 'g'
elif _ == 'HI':
color = 'b'
else:
color = 'm'
ax.scatter(r, rom, 10, marker='o', color=color)
invQg, invQs, invQeff_min = zip(*get_invQeff_from_data(gas_data=totgas,
epicycl=epicycl,
gas_approx=gas_approx,
sound_vel=sound_vel,
scale=scale,
sigma=sigma_R_max,
star_density=star_density))
if show:
print 'sig_R_min case:'
romeo_max = []
for r, g, co in zip(r_g_dens, HI_gas_dens, CO_gas_dens):
rom, _ = romeo_Q(r=r, epicycl=epicycl, sound_vel=sound_vel, sigma_R=sigma_R_min,
star_density=star_density, HI_density=He_coeff*g, CO_density=He_coeff*co,
alpha=alpha_max, scale=scale, gas_approx=gas_approx, show=show)
romeo_max.append(rom)
if _ == 'star':
color = 'g'
elif _ == 'HI':
color = 'b'
else:
color = 'm'
ax.scatter(r, rom, 10, marker = 's', color=color)
invQg, invQs, invQeff_max = zip(*get_invQeff_from_data(gas_data=totgas,
epicycl=epicycl,
gas_approx=gas_approx,
sound_vel=sound_vel,
scale=scale,
sigma=sigma_R_min,
star_density=star_density))
ax.plot(r_g_dens[1:], invQeff_min, '-', alpha=0.5, color='r')
ax.plot(r_g_dens[1:], invQeff_max, '-', alpha=0.5, color='r')
plot_Q_levels(ax, [1., 1.5, 2., 3.])
ax.set_xlim(0)
ax.set_ylim(0)
ax.legend([matplotlib.lines.Line2D([0], [0], linestyle='none', mfc='g', mec='none', marker='o'),
matplotlib.lines.Line2D([0], [0], linestyle='none', mfc='b', mec='none', marker='o'),
matplotlib.lines.Line2D([0], [0], linestyle='none', mfc='m', mec='none', marker='o')],
['star', 'HI', 'H2'], numpoints=1, markerscale=1, loc='upper right') #add custom legend
ax.set_title('RF13: major component')
ax = plt.subplot(132)
ax.plot(romeo_min[1:], invQeff_min, 'o')
ax.plot(romeo_max[1:], invQeff_max, 'o', color='m', alpha=0.5)
ax.set_xlabel('Romeo')
ax.set_ylabel('2F')
ax.set_xlim(0., 1.)
ax.set_ylim(0., 1.)
ax.plot(ax.get_xlim(), ax.get_ylim(), '--')
ax = plt.subplot(133)
ax.plot(r_g_dens[1:], [l[1]/l[0] for l in zip(romeo_min[1:], invQeff_min)], 'o-')
ax.plot(r_g_dens[1:], [l[1]/l[0] for l in zip(romeo_max[1:], invQeff_max)], 'o-', color='m', alpha=0.5)
ax.set_xlabel('R')
ax.set_ylabel('[2F]/[Romeo]');
def mu_face_on(mu0d, cos_i):
return mu0d + 2.5*np.log10(1./cos_i)
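A sanity check for the inclination correction above (the surface brightness value is arbitrary): a face-on disc needs no correction, and deprojecting an inclined disc makes mu fainter (larger):

```python
import numpy as np

def mu_face_on(mu0d, cos_i):
    return mu0d + 2.5*np.log10(1./cos_i)

assert mu_face_on(21.5, 1.0) == 21.5   # cos i = 1: already face-on
assert abs(mu_face_on(21.5, 0.5) - (21.5 + 2.5*np.log10(2.0))) < 1e-12
```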
def plot_param_depend(ax=None, N=None, data_lim=None, color=None, alpha=0.3, disk_scales=[], label=None, **kwargs):
params = kwargs.copy()
for p in params.keys():
if p == 'total_gas_data':
depth = lambda L: isinstance(L, list) and max(map(depth, L))+1 #depth of nested lists
if depth(params[p]) == 1:
params[p] = [params[p]]*N
elif type(params[p]) is not list:
params[p] = [params[p]]*N
result = []
for i in range(N):
invQg, invQs, invQeff_min = zip(*get_invQeff_from_data(gas_data=params['total_gas_data'][i],
epicycl=params['epicycl'][i],
gas_approx=params['gas_approx'][i],
sound_vel=params['sound_vel'][i],
scale=params['scale'][i],
sigma=params['sigma_max'][i],
star_density=params['star_density_min'][i]))
invQg, invQs, invQeff_max = zip(*get_invQeff_from_data(gas_data=params['total_gas_data'][i],
epicycl=params['epicycl'][i],
gas_approx=params['gas_approx'][i],
sound_vel=params['sound_vel'][i],
scale=params['scale'][i],
sigma=params['sigma_min'][i],
star_density=params['star_density_max'][i]))
result.append((invQeff_min, invQeff_max))
rr = zip(*params['total_gas_data'][0])[0]
qmins = []
qmaxs = []
for ind, rrr in enumerate(rr):
qmin = [result[l][0][ind] for l in range(len(result))]
qmax = [result[l][1][ind] for l in range(len(result))]
qmins.append((np.mean(qmin), np.std(qmin)))
qmaxs.append((np.mean(qmax), np.std(qmax)))
ax.errorbar(rr, zip(*qmins)[0], fmt='o-', yerr=zip(*qmins)[1], elinewidth=6, alpha=0.3);
ax.errorbar(rr, zip(*qmaxs)[0], fmt='o-', yerr=zip(*qmaxs)[1])
ax.axhline(y=1., ls='-', color='grey')
ax.set_ylim(0.)
ax.set_xlim(0.)
plot_data_lim(ax, data_lim)
plot_Q_levels(ax, [1., 1.5, 2., 3.]);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Coefficient to account for the helium contribution to the gas mass (see Notes)
Step2: For the major axis
Step3: For the case of an infinitely thin disc
Step4: Two other mechanisms from http
Step5: Hunter et al (1998), 'competition with shear' according to Leroy
Step6: Rotation curve of a thin disc
Step7: Function for printing photometry statistics, with added information on the total disc mass $M_d = 2\pi h^2 \Sigma(0)$ (just remember that arcsec has to be converted to pc there)
Step8: Prefix for bad photometries, so that they are not used at the end.
Step9: Function that returns the combined density profile for a two-disc model, when needed
Step10: Cycling over line styles so that lines stay distinguishable where many share one figure and the colors blend.
Step11: Paint the background as a "zebra" with a certain period.
Step12: Compare with the estimate of Romeo & Falstad (2013) https
Step13: For a thin disc
Step14: Comparison function against observations
Step15: Function for correcting the central surface brightness for inclination (bringing the disc to a face-on view). Taken from http
Step16: Function for analyzing the influence of parameters. It takes the standard parameters, turns them into lists, and runs over everything in each list, after which it measures the mean and std. Several parameters can be varied at once.
|
12,337
|
<ASSISTANT_TASK:>
Python Code:
import importlib
autograd_available = True
# if automatic differentiation is available, use it
try:
import autograd
except ImportError:
autograd_available = False
pass
if autograd_available:
import autograd.numpy as np
from autograd import grad, hessian
else:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
from ipywidgets import interactive
import ipywidgets as widgets
%matplotlib inline
font = {'size' : 20}
plt.rc('font', **font)
plt.rc('text', usetex=matplotlib.checkdep_usetex(True))
matplotlib.rcParams['text.latex.preamble']=[r"\usepackage{amsmath}"]
if autograd_available:
print('Using autograd to compute gradients')
else:
print('Using hand-calculated gradient')
# Valley
def myfun(x):
return (x[0]**2)/16 + 9*(x[1]**2)
if autograd_available:
gradient = grad(myfun)
Hessian = hessian(myfun)
else:
def gradient(x):
grad = [x[0]/8, 18*x[1]]
return grad;
def Hessian(x):
H = [[1/8, 0.0], [0.0, 18.0]]
return np.array(H)
x = np.arange(-5.0, 5.0, 0.02)
y = np.arange(-2.0, 2.0, 0.02)
X, Y = np.meshgrid(x, y)
fZ = myfun([X,Y])
plt.figure(1,figsize=(10,6))
plt.rcParams.update({'font.size': 14})
plt.contourf(X,Y,fZ,levels=20)
plt.colorbar()
plt.xlabel("$x$")
plt.ylabel("$y$")
plt.show()
epsilon = 0.5
start = np.array([-4.0,-1.0])
points = []
while len(points) < 200:
points.append( (start,myfun(start)) )
g = gradient(start)
H = Hessian(start)
Hinv = np.linalg.inv(H)
start = start - epsilon * (Hinv @ g)
trajectory_x = [points[i][0][0] for i in range(len(points))]
trajectory_y = [points[i][0][1] for i in range(len(points))]
plt.figure(1,figsize=(16,6))
plt.subplot(121)
plt.rcParams.update({'font.size': 14})
plt.contourf(X,Y,fZ,levels=20)
plt.xlim(-5,0)
plt.ylim(-2,2)
plt.xlabel("$x$")
plt.ylabel("$y$")
plt.plot(trajectory_x, trajectory_y,marker='.',color='w',linewidth=2)
plt.subplot(122)
plt.plot(range(0,len(points)),list(zip(*points))[1])
plt.grid(True)
plt.xlabel("Step $i$")
plt.ylabel(r"$f(\boldsymbol{x}^{(i)})$")
plt.show()
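One property worth noting for the damped iteration above: because this f is quadratic, a single undamped Newton step (epsilon = 1) already lands on the minimum. A short sketch:

```python
import numpy as np

def f(x):
    return (x[0]**2)/16 + 9*(x[1]**2)

gradient = lambda x: np.array([x[0]/8, 18*x[1]])
H = np.array([[1/8, 0.0], [0.0, 18.0]])

start = np.array([-4.0, -1.0])
newton_step = start - np.linalg.inv(H) @ gradient(start)  # epsilon = 1
assert np.allclose(newton_step, [0.0, 0.0])  # exact minimizer in one step
assert f(newton_step) < f(start)
```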
def plot_function(epsilon, start_x, start_y):
start = np.array([start_x,start_y])
points = []
while len(points) < 200:
points.append( (start,myfun(start)) )
g = gradient(start)
H = Hessian(start)
Hinv = np.linalg.inv(H)
start = start - epsilon * (Hinv @ g)
trajectory_x = [points[i][0][0] for i in range(len(points))]
trajectory_y = [points[i][0][1] for i in range(len(points))]
plt.figure(3,figsize=(15,5))
plt.subplot(121)
plt.rcParams.update({'font.size': 14})
plt.contourf(X,Y,fZ,levels=20)
plt.xlim(-5,0)
plt.ylim(-2,2)
plt.xlabel("$x$")
plt.ylabel("$y$")
plt.plot(trajectory_x, trajectory_y,marker='.',color='w',linewidth=2)
plt.subplot(122)
plt.plot(range(0,len(points)),list(zip(*points))[1])
plt.grid(True)
plt.xlabel("Step $i$")
plt.ylabel(r"$f(\boldsymbol{x}^{(i)})$")
plt.show()
epsilon_values = np.arange(0.0,1,0.0002)
interactive_update = interactive(plot_function, \
epsilon = widgets.SelectionSlider(options=[("%g"%i,i) for i in epsilon_values], value=0.1, continuous_update=False,description='epsilon',layout=widgets.Layout(width='50%')),
start_x = widgets.FloatSlider(min=-5.0,max=0.0,step=0.001,value=-4.0, continuous_update=False, description='x'), \
start_y = widgets.FloatSlider(min=-1.0, max=1.0, step=0.001, value=-1.0, continuous_update=False, description='y'))
output = interactive_update.children[-1]
output.layout.height = '370px'
interactive_update
# Rosenbrock function
def rosenbrock_fun(x):
return (1-x[0])**2+100*((x[1]-(x[0])**2)**2)
if autograd_available:
rosenbrock_gradient = grad(rosenbrock_fun)
rosenbrock_Hessian = hessian(rosenbrock_fun)
else:
def rosenbrock_gradient(x):
grad = [-2*(1-x[0])-400*(x[1]-x[0]**2)*x[0], 200*(x[1]-x[0]**2)]
return grad
def rosenbrock_Hessian(x):
H = np.array([[2 - 400*(x[1] - 3*x[0]**2), -400*x[0]], [-400*x[0], 200]])
return H
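Hand-coded Hessians are easy to get wrong; a finite-difference cross-check of the expressions above (a sketch, not part of the original notebook):

```python
import numpy as np

def rosen_grad(x):
    return np.array([-2*(1-x[0]) - 400*(x[1]-x[0]**2)*x[0],
                     200*(x[1]-x[0]**2)])

def rosen_hess(x):
    return np.array([[2 - 400*(x[1] - 3*x[0]**2), -400*x[0]],
                     [-400*x[0], 200.0]])

x0 = np.array([0.6, 0.1])
h = 1e-6
# Central differences of the gradient, one coordinate direction at a time.
num = np.array([(rosen_grad(x0 + h*e) - rosen_grad(x0 - h*e)) / (2*h)
                for e in np.eye(2)]).T
assert np.allclose(num, rosen_hess(x0), atol=1e-3)
```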
xr = np.arange(-1.6, 1.6, 0.01)
yr = np.arange(-1.0, 3.0, 0.01)
Xr, Yr = np.meshgrid(xr, yr)
fZr = rosenbrock_fun([Xr,Yr])
def plot_function_rosenbrock(epsilon, start_x, start_y):
start = np.array([start_x,start_y])
points = []
while len(points) < 1000:
points.append( (start,rosenbrock_fun(start)) )
g = rosenbrock_gradient(start)
H = rosenbrock_Hessian(start)
Hinv = np.linalg.inv(H)
start = start - epsilon * (Hinv @ g)
trajectory_x = [points[i][0][0] for i in range(len(points))]
trajectory_y = [points[i][0][1] for i in range(len(points))]
plt.figure(4,figsize=(15,5))
plt.subplot(121)
plt.rcParams.update({'font.size': 14})
plt.contourf(Xr,Yr,fZr,levels=20)
plt.xlabel("$x$")
plt.ylabel("$y$")
plt.plot(trajectory_x, trajectory_y,marker='.',color='w',linewidth=2)
plt.xlim((min(xr), max(xr)))
plt.ylim((min(yr), max(yr)))
plt.subplot(122)
plt.plot(range(0,len(points)),list(zip(*points))[1])
plt.grid(True)
plt.xlabel("Step $i$")
plt.ylabel(r"$f(\boldsymbol{x}^{(i)})$")
plt.show()
epsilon_values = np.arange(0.0,1,0.00002)
interactive_update = interactive(plot_function_rosenbrock, \
epsilon = widgets.SelectionSlider(options=[("%g"%i,i) for i in epsilon_values], value=0.1, continuous_update=False,description='epsilon',layout=widgets.Layout(width='50%')),
start_x = widgets.FloatSlider(min=-1.6,max=1.6,step=0.0001,value=0.6, continuous_update=False, description='x'), \
start_y = widgets.FloatSlider(min=-1.0, max=3.0, step=0.0001, value=0.1, continuous_update=False, description='y'))
output = interactive_update.children[-1]
output.layout.height = '350px'
interactive_update
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Specify the function to minimize as a simple python function.<br>
Step2: Plot the function as a 2d surface plot. Different colors indicate different values of the function.
Step3: Carry out the damped Newton strategy, scaling the full Newton step (the inverse Hessian applied to the gradient) by epsilon. Carry out 200 iterations (without using a stopping criterion). The values of epsilon and the starting point are specified
Step4: Plot the trajectory and the value of the function (right subplot). Note that the minimum of this function is achieved for (0,0) and is 0.
Step5: This is an interactive demonstration of the damped Newton iteration, where you can specify the starting point as well as the step value yourself. You can see that depending on the step size, the minimization can get unstable
Step6: Next, we consider the so-called Rosenbrock function, which is given by
|
12,338
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import sys
import os
import matplotlib.pyplot as plt
from matplotlib import dates
from odm2api.ODMconnection import dbconnection
from odm2api.ODM2.services.readService import *
# Create a connection to the ODM2 database
# ----------------------------------------
odm2db_fpth = '/home/mayorga/Desktop/TylerYeats/ODM2-LittleBear1.sqlite'
session_factory = dbconnection.createConnection('sqlite', odm2db_fpth, 2.0)
read = ReadODM2(session_factory)
# Run some basic sample queries.
# ------------------------------
# Get all of the variables from the database and print their names to the console
allVars = read.getVariables()
for x in allVars:
print x.VariableCode + ": " + x.VariableNameCV
# Get all of the people from the database
allPeople = read.getPeople()
for x in allPeople:
print x.PersonFirstName + " " + x.PersonLastName
try:
print "\n-------- Information about an Affiliation ---------"
allaff = read.getAllAffiliations()
for x in allaff:
print x.PersonObj.PersonFirstName + ": " + str(x.OrganizationID)
except Exception as e:
print "Unable to demo getAllAffiliations", e
allaff = read.getAllAffiliations()
type(allaff)
from odm2api.ODM2.models import SamplingFeatures
read._session.query(SamplingFeatures).filter_by(SamplingFeatureTypeCV='Site').all()
read.getSamplingFeaturesByType('Site')
# Get all of the SamplingFeatures from the database that are Sites
try:
siteFeatures = read.getSamplingFeaturesByType('Site')
numSites = len(siteFeatures)
for x in siteFeatures:
print x.SamplingFeatureCode + ": " + x.SamplingFeatureName
except Exception as e:
print "Unable to demo getSamplingFeaturesByType", e
read.getSamplingFeatures()
read.getSamplingFeatureByCode('USU-LBR-Mendon')
# Now get the SamplingFeature object for a SamplingFeature code
sf = read.getSamplingFeatureByCode('USU-LBR-Mendon')
# vars(sf)
# 1/31/2016: Leads to error with latest from odm2api master:
# "TypeError: vars() argument must have __dict__ attribute"
print sf, "\n"
print type(sf)
print type(sf.FeatureGeometry)
vars(sf.FeatureGeometry)
sf.FeatureGeometry.__doc__
sf.FeatureGeometry.geom_wkb, sf.FeatureGeometry.geom_wkt
type(sf.shape()), sf.shape().wkt
# Drill down and get objects linked by foreign keys
print "\n------------ Foreign Key Example --------- \n",
try:
# Call getResults, but return only the first result
firstResult = read.getResults()[0]
action_firstResult = firstResult.FeatureActionObj.ActionObj
print "The FeatureAction object for the Result is: ", firstResult.FeatureActionObj
print "The Action object for the Result is: ", action_firstResult
print ("\nThe following are some of the attributes for the Action that created the Result: \n" +
"ActionTypeCV: " + action_firstResult.ActionTypeCV + "\n" +
"ActionDescription: " + action_firstResult.ActionDescription + "\n" +
"BeginDateTime: " + str(action_firstResult.BeginDateTime) + "\n" +
"EndDateTime: " + str(action_firstResult.EndDateTime) + "\n" +
"MethodName: " + action_firstResult.MethodObj.MethodName + "\n" +
"MethodDescription: " + action_firstResult.MethodObj.MethodDescription)
except Exception as e:
print "Unable to demo Foreign Key Example: ", e
# Now get a particular Result using a ResultID
print "\n------- Example of Retrieving Attributes of a Time Series Result -------"
try:
tsResult = read.getTimeSeriesResultByResultId(1)
# Get the site information by drilling down
sf_tsResult = tsResult.ResultObj.FeatureActionObj.SamplingFeatureObj
print(
"Some of the attributes for the TimeSeriesResult retrieved using getTimeSeriesResultByResultID(): \n" +
"ResultTypeCV: " + tsResult.ResultObj.ResultTypeCV + "\n" +
# Get the ProcessingLevel from the TimeSeriesResult's ProcessingLevel object
"ProcessingLevel: " + tsResult.ResultObj.ProcessingLevelObj.Definition + "\n" +
"SampledMedium: " + tsResult.ResultObj.SampledMediumCV + "\n" +
# Get the variable information from the TimeSeriesResult's Variable object
"Variable: " + tsResult.ResultObj.VariableObj.VariableCode + ": " + tsResult.ResultObj.VariableObj.VariableNameCV + "\n" +
"AggregationStatistic: " + tsResult.AggregationStatisticCV + "\n" +
# Get the site information by drilling down
"Elevation_m: " + str(sf_tsResult.Elevation_m) + "\n" +
"SamplingFeature: " + sf_tsResult.SamplingFeatureCode + " - " +
sf_tsResult.SamplingFeatureName)
except Exception as e:
print "Unable to demo Example of retrieving Attributes of a time Series Result: ", e
tsResult = read.getTimeSeriesResultByResultId(1)
type(tsResult), vars(tsResult)
vars(tsResult.ResultObj)
# Get the values for a particular TimeSeriesResult
print "\n-------- Example of Retrieving Time Series Result Values ---------"
tsValues = read.getTimeSeriesResultValuesByResultId(1) # Return type is a pandas dataframe
# Print a few Time Series Values to the console
# tsValues.set_index('ValueDateTime', inplace=True)
tsValues.head()
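Since the values come back as a pandas DataFrame, the usual time-series tooling applies directly. A small sketch on synthetic data (the column names mirror the ODM2 ones; the values are made up):

```python
import pandas as pd

ts = pd.DataFrame({
    'ValueDateTime': pd.date_range('2008-01-01', periods=4, freq='30min'),
    'DataValue': [1.0, 2.0, 3.0, 4.0],
})
ts = ts.set_index('ValueDateTime')
hourly = ts['DataValue'].resample('h').mean()  # e.g. hourly averages
assert list(hourly) == [1.5, 3.5]
```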
# Plot the time series
try:
fig = plt.figure()
ax = fig.add_subplot(111)
tsValues.plot(x='ValueDateTime', y='DataValue', kind='line',
title=tsResult.ResultObj.VariableObj.VariableNameCV + " at " +
tsResult.ResultObj.FeatureActionObj.SamplingFeatureObj.SamplingFeatureName,
ax=ax)
ax.set_ylabel(tsResult.ResultObj.VariableObj.VariableNameCV + " (" +
tsResult.ResultObj.UnitsObj.UnitsAbbreviation + ")")
ax.set_xlabel("Date/Time")
ax.xaxis.set_minor_locator(dates.MonthLocator())
ax.xaxis.set_minor_formatter(dates.DateFormatter('%b'))
ax.xaxis.set_major_locator(dates.YearLocator())
ax.xaxis.set_major_formatter(dates.DateFormatter('\n%Y'))
ax.grid(True)
except Exception as e:
print "Unable to demo plotting of tsValues: ", e
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: SamplingFeatures tests
Step2: Back to the rest of the demo
|
12,339
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
from numpy import *
from matplotlib.pylab import *
%pylab --no-import-all inline
a1 = array([1.0, 2.0, 3.0])
a2 = arange(1.0, 5.0, 0.5)
a3 = linspace(1.0, 10.0, 17)
print(a1)
print(a2)
print(a3)
m1 = array([[1.0, 2.0],
[3.0, 4.0]])
print(m1)
a4 = ones(10)
m2 = zeros((5,5))
a5 = empty(10)
print(a4)
print(m2)
print(a5)
a7 = ones_like(a1)
m2 = zeros_like(m1)
# zeros_like, empty_like
print(a7)
print(m2)
print(a7.shape)
print(m1.shape)
print(a1[1])
print(a2[1:7])
print(a3[2::3])
print(a2)
a2[3] = 7.0
print(a2)
a2[3:5] = 2.0
print(a2)
m3 = zeros((10,10))
m3[5,:] = arange(0.0, 10.0)
print(m3)
m3[2:4,:] = 42.0
print(m3)
m3[3:9, 5:7] = 23.0
print(m3)
m4 = ones((10,10))
m4[ [2,7],: ] = 42.0
m4[ :, [3,4] ] = 42.0
print(m4)
m4[5,:] = 23.0
print(m4)
# swap lines!
m4[ [5,2],: ] = m4[ [2,5],: ]
print(m4)
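The simultaneous row swap above works because list-based (fancy) indexing on the right-hand side returns a copy rather than a view; a minimal check on a hypothetical 3x3 array:

```python
import numpy as np

m = np.arange(9).reshape(3, 3)
m[[0, 2], :] = m[[2, 0], :]   # RHS fancy indexing yields a copy, so the swap is safe
assert (m[0] == [6, 7, 8]).all() and (m[2] == [0, 1, 2]).all()
```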
a1 = ones((10,10))
a2 = a1 + a1
print(a2)
a3 = 2*a2 - a1/2.
print(a3)
print(sin(2.0))
xs = linspace(0.0, 2*pi, 10)
print(sin(xs))
def f(x):
return x**2 + 1.0
print(f(2.0))
print(f(2*ones(10)))
xs = linspace(0.0, 2*pi, 100)
plot(sin(xs))
figure()
plot(xs, sin(xs))
# define size of the figure
#rcParams['figure.figsize'] = 10, 7
plot(xs, sin(xs), 'o-', label="sin(x)")
plot(xs, cos(xs), 'rx--', label="cos(x)")
xlabel("x")
ylabel("f(x)")
legend()
subplot(221)
plot(xs, sin(xs))
subplot(222)
plot(xs, cos(xs))
subplot(223)
plot(xs, exp(xs))
subplot(224)
plot(xs, log(xs))
xs = linspace(1.0, 10.0, 100)
subplot(221)
plot(xs)
plot(xs**2)
plot(0.001*exp(xs))
subplot(222)
loglog(xs)
loglog(xs**2)
loglog(exp(xs))
subplot(223)
semilogy(xs)
semilogy(xs**2)
semilogy(0.001*exp(xs))
subplot(224)
semilogx(xs)
semilogx(xs**2)
semilogx(0.001*exp(xs))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: NumPy Arrays
Step2: Create arrays with array, ones, zeros, empty.
Step3: Arrays can be accessed like lists (index, slicing).
Step4: Math operations and functions in NumPy all work elementwise with arrays.
Step5: Use plot to plot NumPy arrays.
Step6: Subplots can be created.
|
12,340
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from scipy import stats
from scipy.stats import norm
# Table values
y=[-1,0,1] # columns
x=[-0.25,0,0.25] # rows
probXY=[[[] for i in range(3)] for i in range(3)]
pxy=[0.05,0.07,0.26,0.075,0.21,0.12,0.125,0.07,0.02]
k=0
for i in range(3):
for j in range(3):
probXY[i][j]=pxy[k]
k+=1
print(probXY)
# item 1
# Distribution of X
probX=[0,0,0]
for i in range(3):
for j in range(3):
probX[i]+=probXY[i][j]
for i in range(3):
print("Probabilidade de X=",x[i]," é igual a ", probX[i])
espX=0
varX=0
for i in range(3):
espX+=x[i]*probX[i]
for i in range(3):
varX+=(x[i]-espX)**2*probX[i]
print("Esperança de X=",espX)
print("Variância de X=",varX)
# item 1
# Distribution of Y
probY=[0,0,0]
for i in range(3):
for j in range(3):
probY[j]+=probXY[i][j]
for i in range(3):
print("Probabilidade de Y=",y[i]," é igual a ", probY[i])
espY=0
varY=0
for i in range(3):
espY+=y[i]*probY[i]
for i in range(3):
varY+=(y[i]-espY)**2*probY[i]
print("Esperança de Y=",espY)
print("Variância de Y=",varY)
# item 1
# Covariance and correlation
cov=0
for i in range(3):
for j in range(3):
cov+=(x[i]-espX)*(y[j]-espY)*probXY[i][j]
corr=cov/(varX*varY)**(0.5)
print("Covariância entre X e Y=", cov)
print("Correlação entre X e Y=", corr)
# item 2
# Distribution of G = 0.5*X + 0.5*Y
g=[]
probG=[]
for i in range(3):
for j in range(3):
            a = 0.5*x[i] + 0.5*y[j]
            if a in g:
                probG[g.index(a)] += probXY[i][j]
else:
g.append(a)
probG.append(probXY[i][j])
for i in range(len(g)):
print("Probabilidade de G=",g[i]," é igual a ", probG[i])
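The PMF bookkeeping above can also be done with a dictionary keyed by the value of G, which avoids the parallel `g`/`probG` lists. A sketch using the same table values from the exercise:

```python
from collections import defaultdict

x = [-0.25, 0, 0.25]
y = [-1, 0, 1]
pxy = [[0.05, 0.07, 0.26], [0.075, 0.21, 0.12], [0.125, 0.07, 0.02]]

pmf_g = defaultdict(float)
for i in range(3):
    for j in range(3):
        # accumulate P(G = g) over every (x, y) pair that maps to g
        pmf_g[0.5 * x[i] + 0.5 * y[j]] += pxy[i][j]

print(dict(pmf_g))
```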
# item 3
# Expectation and variance of G
espG=0
varG=0
for i in range(len(g)):
espG+=g[i]*probG[i]
for i in range(len(g)):
varG+=(g[i]-espG)**2*probG[i]
print("Esperança e variância de G usando distribuição de probabilidade de G:")
print("Esperança de G=",espG)
print("Variância de G=",varG)
# item 4
# Expectation and variance of G using properties of sums of random variables
#G=0.5X + 0.5Y
#G=0.5*(X+Y)
espGp = 0.5*(espX+espY)
varGp = 0.5**2*(varX+varY+2*cov)
print("Esperança e variância de G usando propriedades:")
print("Esperança de G=",espGp)
print("Variância de G=",varGp)
# Given information from the problem statement
muX = 21
varX = 4
muY = 18.90
varY = 2.25
corXY = 0.95
covXY = corXY*(varX*varY)**(0.5)
mean = [muX, muY]
cov = [[varX, covXY], [covXY, varY]] # covariance matrix
n=100
x, y = np.random.multivariate_normal(mean, cov, n).T
print("Matriz de covariâncias a partir dos n valores correlacionados:")
print(np.cov(x,y))
gasto = []
for i in range(len(x)):
gasto.append(x[i]+y[i])
data = pd.Series(gasto)
x2 = np.arange(20,80,1)
y2 = norm.pdf(x2, muX + muY, (varX + varY + 2*covXY)**0.5)
plt.plot(x2,y2, lw = 3, alpha = 0.7)
hist2 = data.plot(kind = 'hist',bins = 12, normed = True)
plt.show()
print('Média real:',data.mean())
print('Variância real:',data.var())
n=1000
x, y = np.random.multivariate_normal(mean, cov, n).T
print("Matriz de covariâncias a partir dos n valores correlacionados:")
print(np.cov(x,y))
gasto = []
for i in range(len(x)):
gasto.append(x[i]+y[i])
data = pd.Series(gasto)
x2 = np.arange(20,80,1)
y2 = norm.pdf(x2, muX + muY, (varX + varY + 2*covXY)**0.5)
plt.plot(x2,y2, lw = 3, alpha = 0.7)
hist2 = data.plot(kind = 'hist',bins = 12, normed = True)
plt.show()
print('Média real:',data.mean())
print('Variância real:',data.var())
n=10000
x, y = np.random.multivariate_normal(mean, cov, n).T
print("Matriz de covariâncias a partir dos n valores correlacionados:")
print(np.cov(x,y))
gasto = []
for i in range(len(x)):
gasto.append(x[i]+y[i])
data = pd.Series(gasto)
x2 = np.arange(20,80,1)
y2 = norm.pdf(x2, muX + muY, (varX + varY + 2*covXY)**0.5)
plt.plot(x2,y2, lw = 3, alpha = 0.7)
hist2 = data.plot(kind = 'hist',bins = 12, normed = True)
plt.show()
print('Média real:',data.mean())
print('Variância real:',data.var())
muX = 21
varX = 4
muY = 18.90
varY = 2.25
corXY = 0
covXY = corXY*(varX*varY)**(0.5)
mean = [muX, muY]
cov = [[varX, covXY], [covXY, varY]]
n=100
x, y = np.random.multivariate_normal(mean, cov, n).T
print("Matriz de covariâncias a partir dos n valores correlacionados:")
print(np.cov(x,y))
gasto = []
for i in range(len(x)):
gasto.append(x[i]+y[i])
data = pd.Series(gasto)
x2 = np.arange(20,80,1)
y2 = norm.pdf(x2, muX + muY, (varX + varY + 2*covXY)**0.5)
plt.plot(x2,y2,lw = 3, alpha = 0.7)
hist2 = data.plot(kind = 'hist',bins = 12, normed = True)
plt.show()
print('Média real:',data.mean())
print('Variância real:',data.var())
n=1000
x, y = np.random.multivariate_normal(mean, cov, n).T
print("Matriz de covariâncias a partir dos n valores correlacionados:")
print(np.cov(x,y))
gasto = []
for i in range(len(x)):
gasto.append(x[i]+y[i])
data = pd.Series(gasto)
x2 = np.arange(20,80,1)
y2 = norm.pdf(x2, muX + muY, (varX + varY + 2*covXY)**0.5)
plt.plot(x2,y2,lw = 3, alpha = 0.7)
hist2 = data.plot(kind = 'hist',bins = 12, normed = True)
plt.show()
print('Média real:',data.mean())
print('Variância real:',data.var())
n=10000
x, y = np.random.multivariate_normal(mean, cov, n).T
print("Matriz de covariâncias a partir dos n valores correlacionados:")
print(np.cov(x,y))
gasto = []
for i in range(len(x)):
gasto.append(x[i]+y[i])
data = pd.Series(gasto)
x2 = np.arange(20,80,1)
y2 = norm.pdf(x2, muX + muY, (varX + varY + 2*covXY)**0.5)
plt.plot(x2,y2,lw = 3, alpha = 0.7)
hist2 = data.plot(kind = 'hist',bins = 12, normed = True)
plt.show()
print('Média real:',data.mean())
print('Variância real:',data.var())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <font color='blue'>Exercise 1 - Example 3 from Lecture 17</font>
Step2: <font color='blue'>Exercise 2 - Sum of correlated normals</font>
Step3: item 5
|
12,341
|
<ASSISTANT_TASK:>
Python Code:
pip install looker-sdk
import looker_sdk #Note that the pip install required a hyphen but the import is an underscore.
import os #We import os here in order to manage environment variables for the tutorial. You don't need to do this on a local system or anywhere you can more conveniently set environment variables.
import json #This is a handy library for doing JSON work.
os.environ["LOOKERSDK_BASE_URL"] = "https://your.looker.com:19999" #If your looker URL has .cloud in it (hosted on GCP), do not include :19999 (ie: https://your.cloud.looker.com).
os.environ["LOOKERSDK_API_VERSION"] = "3.1" #3.1 is the default version. You can change this to 4.0 if you want.
os.environ["LOOKERSDK_VERIFY_SSL"] = "true" #Defaults to true if not set. SSL verification should generally be on unless you have a real good reason not to use it. Valid options: true, y, t, yes, 1.
os.environ["LOOKERSDK_TIMEOUT"] = "120" #Seconds till request timeout. Standard default is 120.
#Get the following values from your Users page in the Admin panel of your Looker instance > Users > Your user > Edit API keys. If you know your user id, you can visit https://your.looker.com/admin/users/<your_user_id>/edit.
os.environ["LOOKERSDK_CLIENT_ID"] = "clientid" #No defaults.
os.environ["LOOKERSDK_CLIENT_SECRET"] = "clientsecret" #No defaults. This should be protected at all costs. Please do not leave it sitting here, even if you don't share this document.
print("All environment variables set.")
sdk = looker_sdk.init31()
print('Looker SDK 3.1 initialized successfully.')
#Uncomment out the lines below if you want to instead initialize the 4.0 SDK. It's that easy— Just replace init31 with init40.
#sdk = looker_sdk.init40()
#print('Looker SDK 4.0 initialized successfully.')
my_user = sdk.me()
#Output is an instance of the User model, but can also be read like a python dict. This applies to all Looker API calls that return Models.
#Example: The following commands return identical output. Feel free to use whichever style is more comfortable for you.
print(my_user.first_name) #Model dot notation
print(my_user["first_name"]) #Dictionary
#Example of creating a user using a Model instance (WriteUser)
user = sdk.create_user(
body = looker_sdk.models.WriteUser(first_name="Jane", last_name="Doe")
)
print(user.id, user.first_name)
#Example of creating a user using a simple python dictionary
user = sdk.create_user(
body={"first_name": "Jane", "last_name": "Doe"}
)
print(user.id, user.first_name)
#Enter your Look ID. If your URL is https://your.cloud.looker.com/looks/25, your Look ID is 25.
look_id = 25
look = sdk.look(look_id=look_id)
# This gives us a Look object. We'll print the ID of it to verify everything's working.
print(look.id)
#You actually don't need to do anything further for this case, using a Look.
#If you wanted to use an Explore instead, you'd have to get the underlying query first, which might look like this:
#explore_id = "Q4pXny1FEtuxMuj9Atf0Gg"
#If your URL looks like https://your.cloud.looker.com/explore/ecommerce_data/order_items?qid=Q4pXny1FEtuxMuj9Atf0Gg&origin_space=15&toggle=vis, your explore_id/QID is Q4pXny1FEtuxMuj9Atf0Gg.
#explore_query = sdk.query_for_slug(slug=explore_id)
#This would return a Query object that we could then run to get results in step 2 using the run_query endpoints.
#We'll use a try/except block here, to make debugging easier.
#In general, this kind of thing isn't really necessary in notebooks as each cell is already isolated from the rest,
#but it's a good practice in larger scripts and certainly in applications where fatal errors can break the entire app.
#You should get into the habit of using them.
try:
response = sdk.run_look(
look_id=look.id,
result_format= "json" # Options here are csv, json, json_detail, txt, html, md, xlsx, sql (returns the raw query), png, jpg. JSON is the easiest to work with in python, so we return it.
)
data = json.loads(response) #The response is just a string, so we have to use the json library to load it as a json dict.
print(data) #If our query was successful we should see an array of rows.
except:
raise Exception(f'Error running look {look.id}')
#Before we move on, here's a simple example of that. Let's print the first 10 rows.
#This script is set up to always only look at the first column, assuming our Look returns 1 column.
first_field = list(
data[0].keys()
)[0] #This looks at the first row of the data and returns the first field name. keys() returns a set, so we wrap it in list() to return an array.
for i in range(0,10):
print(i,data[i][first_field])
#If we _know_ the name of the first field, why did we go to all this list(data[0].keys()[0]) trouble? Well, we know the name of the first field for ONE look.
#This little trickery above makes it so that our script will always work for any Look, no matter what the name is, without having to edit the code.
def get_data_for_look(look_id):
try:
look = sdk.look(look_id=look_id)
except:
raise Exception(f'Look {look_id} not found.')
print(f'Successfully got Look {look.id}')
try:
response = sdk.run_look(
look_id=look.id,
result_format = "json"
)
data = json.loads(response)
first_field = list(
data[0].keys()
)[0]
list_of_values = []
for i in data:
list_of_values.append(i[first_field])
#Ultimately, we're going to want to pass Looker a filter expression that's a comma-separated-list of values.
#Here, we use .join on the array of values to generate that filter expression.
string_list = ",".join(list_of_values)
return({"filter_values": string_list, "first_field": first_field})
except:
raise Exception('Error running Look.')
test = get_data_for_look(25)
#This should return successful.
#Start off the same as before, by getting the Look.
second_look_id = 26
second_look = sdk.look(look_id=second_look_id)
print(second_look.id) #just verifying we obtained the Look properly.
#Now we can extract the underlying query from the second Look, in order to modify it. We'll print it to see what it looks like.
second_query = second_look.query
print(second_query)
#We want to edit the filters, so let's start by inspecting the query and see how filters are set.
#We can see all the available keys in the query object by running the command below.
#👀 What are keys? Objects are key: value pairs. ex: {"name": "Izzy"}. The key is name, the value for that key is Izzy.
print(second_query.keys())
#Looks like there's a filters key, so we can run the following to see the filters:
print("Filters: ", second_query.filters)
response = sdk.run_query(query_id = second_query.id, result_format="json")
print(response)
#Let's create a new variable, altered_query, and assign it to be identical to second_query to begin.
altered_query = second_query
#Then, let's set a new key on the filters dict, which adds a new filter to the query. We'll name it the name of the field we want to filter on.
#This must exactly match the **fully scoped** field name in Looker (ie: view_name.field_name).
#We will then set that key's value equal to our data. In this case, we'll set it equal to that comma-separated string we generated earlier.
filter_data = get_data_for_look(25) # This is that function we built earlier.
field_name = filter_data['first_field']
filter_values = filter_data['filter_values']
altered_query.filters[field_name] = filter_values
#This should now print an object with the filters you've just added. Nice!
print(altered_query.filters)
#Before we can run this query, we need to do a little bit of pruning to it, since we copied it directly from an existing query.
#If you do not remove the ID and client_id, you'll get an error that this query already exists.
altered_query.client_id = None
altered_query.id = None
#Option a. Run inline query. This is the simplest option for just getting the data now.
#This should return your newly filtered data from Look #2
response = sdk.run_inline_query(body=altered_query, result_format="json")
print(response)
#Option b. Creating a brand new query object, then running that query from the ID.
#You might want to do this if you're planning to run the query asynchronously using create_query_task.
new_query = sdk.create_query(body=altered_query)
response = sdk.run_query(query_id=new_query.id, result_format="json")
print(response)
#@title Look Filterer
#@markdown Enter a look ID (`first_look_id`) that returns one column of values you want to apply as filters to another Look (`second_look_id`).
#@markdown The first Look must return one column only. Once you've entered the look IDs, run this block.
#@markdown _👀 If you get an error, make sure you've run the earlier code blocks that initialize the Looker API and create the get_data_for_look() function._
first_look_id = 25 #@param {type:"integer"}
second_look_id = 26#@param {type:"integer"}
final_result_format = "json" #@param ["json", "json_detail", "csv", "png", "jpg", "txt", "html", "md"]
first_look = get_data_for_look(first_look_id)
filter_field = first_look['first_field']
filter_values = first_look['filter_values']
second_look = sdk.look(look_id=second_look_id)
second_query = second_look.query
altered_query = second_query
altered_query.filters[filter_field] = filter_values
altered_query.client_id = None
altered_query.id = None
response = sdk.run_inline_query(body=altered_query, result_format=final_result_format)
print(f"Results of look {second_look_id} filtered with values from {first_look_id}:", response)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Configuring & Initializing the SDK
Step2: Now that we've set all the necessary environment variables, we should be able to initialize the Looker SDK. When initializing the SDK, you choose which version to use— 3.1 or 4.0. The 4.0 SDK has some new endpoints and bug fixes, but is still experimental.
Step3: Making API Calls
Step4: Huzzah! You've just made your first Looker API calls, and should see your first name printed out twice after running the cell above. If you're getting errors, triple and quadruple check the environment variable settings and your API keys.
Step5: Running the cell above should create two identical users and return two lines, each with a different User ID but the same name. You're welcome to use either method you prefer going forward, but we'll show examples using models as it's the most foolproof and Lookery way of doing things.
Step6: 2. Run query #1 and save results
Step7: This might be a good time to mention the methods.py file. You may be wondering "how did you know what arguments to pass to sdk.run_look()?
Step8: Before moving on, let's package our work up so far into a function to make things easier later on.
Step9: Now we can just call this function whenever we want to grab data from a look in filter expression format.
Step10: 4. Create or get query #2
Step11: For my example query, that returns {'order_items.order_count'
Step12: That's pretty simple. You just reference the existing query ID we got from the Look, and pass it to the run_query function. But that query doesn't include the new filters we want to add yet, and to do that we actually need to create a new query.
Step13: 6. Run the new query
Step14: These methods have the same outcome. Both blocks should have returned identical output. You may want to create_query rather than running inline if you want to create and run asynchronous queries and poll for their results using the create_query_task and query_task_results endpoints.
|
12,342
|
<ASSISTANT_TASK:>
Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
! pip uninstall tensorflow -y
! pip install tensorflow-gpu==1.15
import tensorflow as tf
print(tf.__version__)
! git clone https://github.com/tensorflow/models.git
! cd models && git checkout f788046ca876a8820e05b0b48c1fc2e16b0955bc
! git clone https://github.com/google-coral/tutorials.git
! cp tutorials/docker/classification/scripts/* models/research/slim/
%cd models/research/slim
! ./prepare_checkpoint_and_dataset.sh --network_type mobilenet_v1
%env NUM_TRAINING_STEPS=300
! ./start_training.sh --network_type mobilenet_v1 --train_steps $NUM_TRAINING_STEPS
! ./run_evaluation.sh --network_type mobilenet_v1
! ./convert_checkpoint_to_edgetpu_tflite.sh --network_type mobilenet_v1 --checkpoint_num $NUM_TRAINING_STEPS
! curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
! echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
! sudo apt-get update
! sudo apt-get install edgetpu-compiler
%cd transfer_learn/models/
! ls
! edgetpu_compiler output_tflite_graph.tflite
from google.colab import files
files.download('output_tflite_graph_edgetpu.tflite')
files.download('labels.txt')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Retrain a classification model for Edge TPU with quant-aware training (TF 1.15)
Step2: Clone the model and training repos
Step3: Convert training data to TFRecord
Step4: Perform transfer-learning
Step5: As training progresses, you can see new checkpoint files appear in the models/research/slim/transfer_learn/train/ directory.
Step6: Compile for the Edge TPU
Step7: Download the files
|
12,343
|
<ASSISTANT_TASK:>
Python Code:
strat_train_set_copy = strat_train_set.copy()
housing.plot(kind="scatter", x='longitude', y='latitude')
housing.plot(kind="scatter", x='longitude', y='latitude', alpha=0.1)
strat_train_set_copy.plot(kind='scatter', x='longitude', y='latitude', alpha=0.4,
s=strat_train_set_copy.population/100,
c=strat_train_set_copy.median_house_value,
cmap=plt.get_cmap("jet"),
label="population", figsize=(15, 15),
colorbar=True)
plt.legend()
corr_matrix = strat_train_set_copy.corr()
corr_matrix.median_house_value.sort_values(ascending=False)
from pandas.plotting import scatter_matrix
attributes = ["median_house_value", "median_income", "total_rooms",
"housing_median_age"]
scatter_matrix(housing[attributes], figsize=(12, 8))
strat_train_set_copy.plot.scatter(x="median_income", y="median_house_value", alpha=0.1)
housing["rooms_per_household"] = housing["total_rooms"] / housing["households"]
housing["bedrooms_per_room"] = housing["total_bedrooms"]/housing["total_rooms"]
housing["population_per_household"]=housing["population"]/housing["households"]
housing.info()
corr_matrix = housing.corr()
corr_matrix['median_house_value'].sort_values(ascending=False)
housing = strat_train_set.drop('median_house_value', axis=1)
housing_labels = strat_train_set['median_house_value'].copy()
housing.info()
housing.dropna(subset=['total_bedrooms']).info()
housing.drop('total_bedrooms', axis=1).info()
housing['total_bedrooms'].fillna(housing['total_bedrooms'].median()).describe()
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(strategy='median')
housing_num = housing.drop("ocean_proximity", axis=1)
imputer.fit(housing_num)
imputer.statistics_
imputer.strategy
housing.drop("ocean_proximity", axis=1).median().values
X = imputer.transform(housing_num)
X
housing_tr = pd.DataFrame(X, columns=housing_num.columns)
housing_tr.head()
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
housing_cat = housing.ocean_proximity
housing_cat.describe()
housing_cat.value_counts()
housing_cat_encoded = encoder.fit_transform(housing_cat)
housing_cat_encoded
type(housing_cat_encoded)
print(encoder.classes_)
from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder()
print(housing_cat_encoded.shape)
print(type(housing_cat_encoded))
(housing_cat_encoded.reshape(-1, 1)).shape
housing_cat_1hot = encoder.fit_transform(housing_cat_encoded.reshape(-1, 1))
housing_cat_1hot
type(housing_cat_1hot)
housing_cat_1hot.toarray()
from sklearn.preprocessing import LabelBinarizer
encoder = LabelBinarizer(sparse_output=False)
housing_cat_1hot = encoder.fit_transform(housing_cat)
housing_cat_1hot
type(housing_cat_1hot)
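As an aside, in current scikit-learn versions `OneHotEncoder` accepts string categories directly, so the `LabelEncoder`/`LabelBinarizer` detour above is no longer required. A minimal sketch on made-up category values (not the housing data):

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

cats = np.array([["NEAR BAY"], ["INLAND"], ["NEAR BAY"]])  # toy 2D column
encoder = OneHotEncoder()  # handles strings directly (scikit-learn >= 0.20)
onehot = encoder.fit_transform(cats)  # returns a sparse matrix by default
print(onehot.toarray())
```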
rooms_ix, bedrooms_ix, population_ix, households_ix = 3, 4, 5, 6
housing.head()
housing.iloc[:, 3]
X = housing.values
# This can be achieved by the iloc, with using .values
housing.iloc[:, [rooms_ix, bedrooms_ix, households_ix, population_ix]].head()
rooms_per_household = X[:, rooms_ix] / X[:, households_ix]
population_per_household = X[:, population_ix] / X[:, households_ix]
bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]
np.c_[X, rooms_per_household, population_per_household]
np.c_[X, rooms_per_household, population_per_household, bedrooms_per_room]
from sklearn.base import BaseEstimator, TransformerMixin
rooms_ix, bedrooms_ix, population_ix, households_ix = 3, 4, 5, 6
class CombinedAttributesAdder(BaseEstimator, TransformerMixin):
def __init__(self, add_bedrooms_per_room=False):
self.add_bedrooms_per_room = add_bedrooms_per_room
def fit(self, X, y=None):
return self
def transform(self, X, y=None):
rooms_per_household = X[:, rooms_ix] / X[:, households_ix]
population_per_household = X[:, population_ix] / X[:, households_ix]
if self.add_bedrooms_per_room:
bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]
return np.c_[X, rooms_per_household, population_per_household, bedrooms_per_room]
else:
return np.c_[X, rooms_per_household, population_per_household]
attr_adder = CombinedAttributesAdder(add_bedrooms_per_room=False)
housing_extra_attribs = attr_adder.transform(X)
print(housing_extra_attribs.shape)
print(housing.shape)
# Convert back to data frame -- My way
new_columns = housing.columns.append(
pd.Index(['rooms_per_household', 'population_per_household'])
)
new_columns
housing_extra_attribs_df = pd.DataFrame(housing_extra_attribs, columns=new_columns)
housing_extra_attribs_df.head()
housing.describe()
housing.total_rooms.describe()
from sklearn.preprocessing import MinMaxScaler
scalar = MinMaxScaler()
scalar.fit(housing["total_rooms"].values.reshape(-1, 1))
pd.DataFrame(scalar.transform(housing["total_rooms"].values.reshape(-1, 1)), columns=["total_rooms"])["total_rooms"].describe()
from sklearn.preprocessing import StandardScaler
scalar = StandardScaler()
scalar.fit(housing["total_rooms"].values.reshape(-1, 1))
pd.DataFrame(scalar.transform(housing["total_rooms"].values.reshape(-1, 1)), columns=["total_rooms"])["total_rooms"].describe()
from sklearn.pipeline import Pipeline
num_pipeline = Pipeline([
('imputer', SimpleImputer(strategy="median")),
('attr_adder', CombinedAttributesAdder()),
('std_scaler', StandardScaler())
])
# I want to verify the pipelined version
# doest the same thing as the separated steps
num_pipeline_stage1 = Pipeline([
('imputer', SimpleImputer(strategy="median")),
])
X_pipeline = num_pipeline_stage1.fit_transform(housing_num)
X = imputer.transform(housing_num)
X_pipeline
np.array_equal(X, X_pipeline)
num_pipeline_stage2 = Pipeline([
('imputer', SimpleImputer(strategy="median")),
('attr_adder', CombinedAttributesAdder()),
])
Y = attr_adder.fit_transform(X)
Y_pipeline = num_pipeline_stage2.fit_transform(housing_num)
np.array_equal(Y, Y_pipeline)
num_pipeline_stage3 = Pipeline([
('imputer', SimpleImputer(strategy="median")),
('attr_adder', CombinedAttributesAdder()),
('std_scaler', StandardScaler())
])
Z = scalar.fit_transform(Y)
Z.std(), Z.mean()
Z_pipeline = num_pipeline_stage3.fit_transform(housing_num)
np.array_equal(Z, Z_pipeline)
from sklearn.base import BaseEstimator, TransformerMixin
class DataFrameSelector(BaseEstimator, TransformerMixin):
def __init__(self, attribute_names):
self.attribute_names = attribute_names
def fit(self, X, y=None):
return self
def transform(self, X):
return X[self.attribute_names].values
class CustomizedLabelBinarizer(BaseEstimator, TransformerMixin):
    def __init__(self, sparse_output=False):
        self.encode = LabelBinarizer(sparse_output=sparse_output)
    def fit(self, X, y=None):
        # fit the wrapped encoder, but return self so the transformer
        # behaves correctly inside a Pipeline
        self.encode.fit(X)
        return self
    def transform(self, X):
        return self.encode.transform(X)
num_attribs = list(housing_num)
cat_attribs = ["ocean_proximity"]
num_pipeline = Pipeline([
('selector', DataFrameSelector(num_attribs)),
('imputer', SimpleImputer(strategy="median")),
('attr_adder', CombinedAttributesAdder()),
('std_scaler', StandardScaler()),
]
)
cat_pipeline = Pipeline([
('selector', DataFrameSelector(cat_attribs)),
('label_binarizer', CustomizedLabelBinarizer()),
]
)
# LabelBinarizer().fit_transform(DataFrameSelector(cat_attribs).fit_transform(housing))
# num_pipeline.fit_transform(housing)
# cat_pipeline.fit_transform(housing)
from sklearn.pipeline import FeatureUnion
full_pipeline = FeatureUnion(transformer_list=[
('num_pipeline', num_pipeline),
('cat_pipeline', cat_pipeline),
])
housing_prepared = full_pipeline.fit_transform(housing)
print(housing_prepared.shape)
housing_prepared
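Newer scikit-learn versions provide `ColumnTransformer`, which replaces the `DataFrameSelector`/`FeatureUnion` pattern used above. A hedged sketch on a toy frame (the column names and values here are illustrative, not the full housing dataset):

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

toy = pd.DataFrame({
    "total_rooms": [880.0, np.nan, 1467.0],
    "ocean_proximity": ["NEAR BAY", "INLAND", "NEAR BAY"],
})
num_pipe = Pipeline([
    ("imputer", SimpleImputer(strategy="median")),
    ("scaler", StandardScaler()),
])
preprocess = ColumnTransformer([
    ("num", num_pipe, ["total_rooms"]),             # numeric columns
    ("cat", OneHotEncoder(), ["ocean_proximity"]),  # categorical columns
])
prepared = preprocess.fit_transform(toy)
print(prepared.shape)  # one scaled numeric column + two one-hot columns
```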
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(housing_prepared, housing_labels)
some_data = housing[:5]
some_data
some_labels = housing_labels[:5]
some_labels
some_data_prepared = full_pipeline.transform(some_data)
some_data_prepared
print(f'Prediction:\t{lin_reg.predict(some_data_prepared)}')
print(f'Lables:\t\t{list(some_labels)}')
from sklearn.metrics import mean_squared_error
housing_prediction = lin_reg.predict(housing_prepared)
lin_mse = mean_squared_error(housing_prediction, housing_labels)
lin_rmse = np.sqrt(lin_mse)
lin_rmse
from sklearn.tree import DecisionTreeRegressor
tree_reg = DecisionTreeRegressor()
tree_reg.fit(housing_prepared, housing_labels)
tree_predictions = tree_reg.predict(housing_prepared)
tree_mse = mean_squared_error(tree_predictions, housing_labels)
tree_rmse = np.sqrt(tree_mse)
tree_rmse
from sklearn.model_selection import cross_val_score
scores = cross_val_score(tree_reg, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10)
rmse_scores = np.sqrt(-scores)
rmse_scores
def display_scores(scores):
print(f'Scores: {scores}')
print(f'Mean: {scores.mean()}')
print(f'STD: {scores.std()}')
display_scores(rmse_scores)
from sklearn.ensemble import RandomForestRegressor
forest_reg = RandomForestRegressor()
forest_reg.fit(housing_prepared, housing_labels)
forest_scores = cross_val_score(forest_reg, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10)
forest_rmse_scores = np.sqrt(-forest_scores)
display_scores(forest_rmse_scores)
forest_prediction = forest_reg.predict(housing_prepared)
forest_rmse = np.sqrt(mean_squared_error(forest_prediction, housing_labels))
forest_rmse
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import uniform
from scipy.stats import randint
distributions = {
'n_estimators': randint(low=3, high=63),
'max_features': randint(low=2, high=8),
'bootstrap' : [True, False],
}
rand_search = RandomizedSearchCV(forest_reg, distributions, random_state=42, cv=5, scoring="neg_mean_squared_error")
rand_search.fit(housing_prepared, housing_labels)
cvres = rand_search.cv_results_
for mean_score, params in zip(cvres["mean_test_score"], cvres["params"]):
print(np.sqrt(-mean_score), params)
# from sklearn.model_selection import GridSearchCV
# param_grid = [
# {'n_estimators': [3, 10, 30], 'max_features': [2,4,6,8]},
# {'bootstrap': [False], 'n_estimators': [3, 10, 30], 'max_features': [2,4,6,8]}
# ]
# forest_reg = RandomForestRegressor()
# grid_search = GridSearchCV(forest_reg, param_grid, cv=5, scoring="neg_mean_squared_error")
# grid_search.fit(housing_prepared, housing_labels)
# grid_search.best_params_
# grid_search.best_estimator_
# cvres = grid_search.cv_results_
# for mean_score, params in zip(cvres["mean_test_score"], cvres["params"]):
# print(np.sqrt(-mean_score), params)
# feature_importances = grid_search.best_estimator_.feature_importances_
# feature_importances
# extra_attribs = ['rooms_per_hhold', 'pop_per_hhold']
# cat_one_hot_attribs = list(encoder.classes_)
# cat_one_hot_attribs
# attributes = num_attribs + extra_attribs + cat_one_hot_attribs
# attributes, len(attributes)
# sorted(zip(feature_importances, attributes), reverse=True)
# final_model = grid_search.best_estimator_
# X_test = strat_test_set.drop("median_house_value", axis=1)
# y_test = strat_test_set.median_house_value.copy()
# X_test_prepared = full_pipeline.transform(X_test)
# final_predictions = final_model.predict(X_test_prepared)
# final_mse = mean_squared_error(final_predictions, y_test)
# final_rmse = np.sqrt(final_mse)
# final_rmse
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Experimenting with Attribute Combinations
Step2: 2.5 Prepare the Data for Machine Learning Algorithms
Step3: Handling Text and Categorical Attributes
Step4: One hot encoding
Step5: Combine
Step6: Custom Transformers
Step7: 2.5.4 Feature Scaling
Step8: 2.5.5 Transformation Pipeline
Step9: 2.6.1 Training and Evaluating on the Training Set
Step10: Tree model
Step11: 2.6.2 Better Evaluation Using Cross-Validation
Step12: Random Forest
Step13: Ex02
Step14: These are the results I got, similar to GridSearchCV
Step15: 2.7.4 Analyze the best models and their errors
Step16: 2.7.5 Evaluate Your System on the Test Set
|
12,344
|
<ASSISTANT_TASK:>
Python Code:
# Define paths to model files
import os
MODELS_DIR = 'models/'
if not os.path.exists(MODELS_DIR):
os.mkdir(MODELS_DIR)
MODEL_TF = MODELS_DIR + 'model'
MODEL_NO_QUANT_TFLITE = MODELS_DIR + 'model_no_quant.tflite'
MODEL_TFLITE = MODELS_DIR + 'model.tflite'
MODEL_TFLITE_MICRO = MODELS_DIR + 'model.cc'
! pip install tensorflow==2.4.0
# TensorFlow is an open source machine learning library
import tensorflow as tf
# Keras is TensorFlow's high-level API for deep learning
from tensorflow import keras
# Numpy is a math library
import numpy as np
# Pandas is a data manipulation library
import pandas as pd
# Matplotlib is a graphing library
import matplotlib.pyplot as plt
# Math is Python's math library
import math
# Set seed for experiment reproducibility
seed = 1
np.random.seed(seed)
tf.random.set_seed(seed)
# Number of sample datapoints
SAMPLES = 1000
# Generate a uniformly distributed set of random numbers in the range from
# 0 to 2π, which covers a complete sine wave oscillation
x_values = np.random.uniform(
low=0, high=2*math.pi, size=SAMPLES).astype(np.float32)
# Shuffle the values to guarantee they're not in order
np.random.shuffle(x_values)
# Calculate the corresponding sine values
y_values = np.sin(x_values).astype(np.float32)
# Plot our data. The 'b.' argument tells the library to print blue dots.
plt.plot(x_values, y_values, 'b.')
plt.show()
# Add a small random number to each y value
y_values += 0.1 * np.random.randn(*y_values.shape)
# Plot our data
plt.plot(x_values, y_values, 'b.')
plt.show()
# We'll use 60% of our data for training and 20% for testing. The remaining 20%
# will be used for validation. Calculate the indices of each section.
TRAIN_SPLIT = int(0.6 * SAMPLES)
TEST_SPLIT = int(0.2 * SAMPLES + TRAIN_SPLIT)
# Use np.split to chop our data into three parts.
# The second argument to np.split is an array of indices where the data will be
# split. We provide two indices, so the data will be divided into three chunks.
x_train, x_test, x_validate = np.split(x_values, [TRAIN_SPLIT, TEST_SPLIT])
y_train, y_test, y_validate = np.split(y_values, [TRAIN_SPLIT, TEST_SPLIT])
# Double check that our splits add up correctly
assert (x_train.size + x_validate.size + x_test.size) == SAMPLES
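# The 60/20/20 boundary arithmetic above can be re-derived in plain Python
# (no NumPy needed) as a quick sanity sketch of where the split indices land:
SAMPLES = 1000
TRAIN_SPLIT = int(0.6 * SAMPLES)               # end of the training slice
TEST_SPLIT = int(0.2 * SAMPLES + TRAIN_SPLIT)  # end of the test slice
sizes = (TRAIN_SPLIT, TEST_SPLIT - TRAIN_SPLIT, SAMPLES - TEST_SPLIT)
print(sizes)  # (600, 200, 200): train / test / validate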
# Plot the data in each partition in different colors:
plt.plot(x_train, y_train, 'b.', label="Train")
plt.plot(x_test, y_test, 'r.', label="Test")
plt.plot(x_validate, y_validate, 'y.', label="Validate")
plt.legend()
plt.show()
# We'll use Keras to create a simple model architecture
model_1 = tf.keras.Sequential()
# First layer takes a scalar input and feeds it through 8 "neurons". The
# neurons decide whether to activate based on the 'relu' activation function.
model_1.add(keras.layers.Dense(8, activation='relu', input_shape=(1,)))
# Final layer is a single neuron, since we want to output a single value
model_1.add(keras.layers.Dense(1))
# Compile the model using the standard 'adam' optimizer and the mean squared error or 'mse' loss function for regression.
model_1.compile(optimizer='adam', loss='mse', metrics=['mae'])
# Train the model on our training data while validating on our validation set
history_1 = model_1.fit(x_train, y_train, epochs=500, batch_size=64,
validation_data=(x_validate, y_validate))
# Draw a graph of the loss, which is the distance between
# the predicted and actual values during training and validation.
train_loss = history_1.history['loss']
val_loss = history_1.history['val_loss']
epochs = range(1, len(train_loss) + 1)
plt.plot(epochs, train_loss, 'g.', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
# Exclude the first few epochs so the graph is easier to read
SKIP = 50
plt.plot(epochs[SKIP:], train_loss[SKIP:], 'g.', label='Training loss')
plt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf()
# Draw a graph of mean absolute error, which is another way of
# measuring the amount of error in the prediction.
train_mae = history_1.history['mae']
val_mae = history_1.history['val_mae']
plt.plot(epochs[SKIP:], train_mae[SKIP:], 'g.', label='Training MAE')
plt.plot(epochs[SKIP:], val_mae[SKIP:], 'b.', label='Validation MAE')
plt.title('Training and validation mean absolute error')
plt.xlabel('Epochs')
plt.ylabel('MAE')
plt.legend()
plt.show()
# Calculate and print the loss on our test dataset
test_loss, test_mae = model_1.evaluate(x_test, y_test)
# Make predictions based on our test dataset
y_test_pred = model_1.predict(x_test)
# Graph the predictions against the actual values
plt.clf()
plt.title('Comparison of predictions and actual values')
plt.plot(x_test, y_test, 'b.', label='Actual values')
plt.plot(x_test, y_test_pred, 'r.', label='TF predictions')
plt.legend()
plt.show()
model = tf.keras.Sequential()
# First layer takes a scalar input and feeds it through 16 "neurons". The
# neurons decide whether to activate based on the 'relu' activation function.
model.add(keras.layers.Dense(16, activation='relu', input_shape=(1,)))
# The new second and third layer will help the network learn more complex representations
model.add(keras.layers.Dense(16, activation='relu'))
# Final layer is a single neuron, since we want to output a single value
model.add(keras.layers.Dense(1))
# Compile the model using the standard 'adam' optimizer and the mean squared error or 'mse' loss function for regression.
model.compile(optimizer='adam', loss="mse", metrics=["mae"])
# Train the model
history = model.fit(x_train, y_train, epochs=500, batch_size=64,
validation_data=(x_validate, y_validate))
# Save the model to disk
model.save(MODEL_TF)
# Draw a graph of the loss, which is the distance between
# the predicted and actual values during training and validation.
train_loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(train_loss) + 1)
# Exclude the first few epochs so the graph is easier to read
SKIP = 100
plt.figure(figsize=(10, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs[SKIP:], train_loss[SKIP:], 'g.', label='Training loss')
plt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.subplot(1, 2, 2)
# Draw a graph of mean absolute error, which is another way of
# measuring the amount of error in the prediction.
train_mae = history.history['mae']
val_mae = history.history['val_mae']
plt.plot(epochs[SKIP:], train_mae[SKIP:], 'g.', label='Training MAE')
plt.plot(epochs[SKIP:], val_mae[SKIP:], 'b.', label='Validation MAE')
plt.title('Training and validation mean absolute error')
plt.xlabel('Epochs')
plt.ylabel('MAE')
plt.legend()
plt.tight_layout()
# Calculate and print the loss on our test dataset
test_loss, test_mae = model.evaluate(x_test, y_test)
# Make predictions based on our test dataset
y_test_pred = model.predict(x_test)
# Graph the predictions against the actual values
plt.clf()
plt.title('Comparison of predictions and actual values')
plt.plot(x_test, y_test, 'b.', label='Actual values')
plt.plot(x_test, y_test_pred, 'r.', label='TF predicted')
plt.legend()
plt.show()
# Convert the model to the TensorFlow Lite format without quantization
converter = tf.lite.TFLiteConverter.from_saved_model(MODEL_TF)
model_no_quant_tflite = converter.convert()
# Save the model to disk
open(MODEL_NO_QUANT_TFLITE, "wb").write(model_no_quant_tflite)
# Convert the model to the TensorFlow Lite format with quantization
def representative_dataset():
for i in range(500):
yield([x_train[i].reshape(1, 1)])
# Set the optimization flag.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Enforce integer only quantization
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
# Provide a representative dataset to ensure we quantize correctly.
converter.representative_dataset = representative_dataset
model_tflite = converter.convert()
# Save the model to disk
open(MODEL_TFLITE, "wb").write(model_tflite)
def predict_tflite(tflite_model, x_test):
# Prepare the test data
x_test_ = x_test.copy()
x_test_ = x_test_.reshape((x_test.size, 1))
x_test_ = x_test_.astype(np.float32)
# Initialize the TFLite interpreter
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]
# If required, quantize the input layer (from float to integer)
input_scale, input_zero_point = input_details["quantization"]
if (input_scale, input_zero_point) != (0.0, 0):
x_test_ = x_test_ / input_scale + input_zero_point
x_test_ = x_test_.astype(input_details["dtype"])
# Invoke the interpreter
y_pred = np.empty(x_test_.size, dtype=output_details["dtype"])
for i in range(len(x_test_)):
interpreter.set_tensor(input_details["index"], [x_test_[i]])
interpreter.invoke()
y_pred[i] = interpreter.get_tensor(output_details["index"])[0]
# If required, dequantize the output layer (from integer to float)
output_scale, output_zero_point = output_details["quantization"]
if (output_scale, output_zero_point) != (0.0, 0):
y_pred = y_pred.astype(np.float32)
y_pred = (y_pred - output_zero_point) * output_scale
return y_pred
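# The quantize/dequantize steps above are just an affine map. A minimal
# plain-Python sketch of the same arithmetic, using hypothetical scale and
# zero-point values (not taken from the actual model):
scale, zero_point = 0.02, -5   # hypothetical int8 quantization parameters
def quantize(x):
    return int(round(x / scale)) + zero_point
def dequantize(q):
    return (q - zero_point) * scale
q = quantize(0.5)
print(q, dequantize(q))  # 20 0.5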
def evaluate_tflite(tflite_model, x_test, y_true):
global model
y_pred = predict_tflite(tflite_model, x_test)
loss_function = tf.keras.losses.get(model.loss)
loss = loss_function(y_true, y_pred).numpy()
return loss
# Calculate predictions
y_test_pred_tf = model.predict(x_test)
y_test_pred_no_quant_tflite = predict_tflite(model_no_quant_tflite, x_test)
y_test_pred_tflite = predict_tflite(model_tflite, x_test)
# Compare predictions
plt.clf()
plt.title('Comparison of various models against actual values')
plt.plot(x_test, y_test, 'bo', label='Actual values')
plt.plot(x_test, y_test_pred_tf, 'ro', label='TF predictions')
plt.plot(x_test, y_test_pred_no_quant_tflite, 'bx', label='TFLite predictions')
plt.plot(x_test, y_test_pred_tflite, 'gx', label='TFLite quantized predictions')
plt.legend()
plt.show()
# Calculate loss
loss_tf, _ = model.evaluate(x_test, y_test, verbose=0)
loss_no_quant_tflite = evaluate_tflite(model_no_quant_tflite, x_test, y_test)
loss_tflite = evaluate_tflite(model_tflite, x_test, y_test)
# Compare loss
df = pd.DataFrame.from_records(
[["TensorFlow", loss_tf],
["TensorFlow Lite", loss_no_quant_tflite],
["TensorFlow Lite Quantized", loss_tflite]],
columns = ["Model", "Loss/MSE"], index="Model").round(4)
df
# Calculate size
size_tf = os.path.getsize(MODEL_TF)
size_no_quant_tflite = os.path.getsize(MODEL_NO_QUANT_TFLITE)
size_tflite = os.path.getsize(MODEL_TFLITE)
# Compare size
pd.DataFrame.from_records(
[["TensorFlow", f"{size_tf} bytes", ""],
["TensorFlow Lite", f"{size_no_quant_tflite} bytes ", f"(reduced by {size_tf - size_no_quant_tflite} bytes)"],
["TensorFlow Lite Quantized", f"{size_tflite} bytes", f"(reduced by {size_no_quant_tflite - size_tflite} bytes)"]],
columns = ["Model", "Size", ""], index="Model")
# Install xxd if it is not available
!apt-get update && apt-get -qq install xxd
# Convert to a C source file, i.e, a TensorFlow Lite for Microcontrollers model
!xxd -i {MODEL_TFLITE} > {MODEL_TFLITE_MICRO}
# Update variable names
REPLACE_TEXT = MODEL_TFLITE.replace('/', '_').replace('.', '_')
!sed -i 's/'{REPLACE_TEXT}'/g_model/g' {MODEL_TFLITE_MICRO}
# Print the C source file
!cat {MODEL_TFLITE_MICRO}
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup Environment
Step2: Import Dependencies
Step3: Dataset
Step4: 2. Add Noise
Step5: 3. Split the Data
Step6: Training
Step7: 2. Train the Model
Step8: 3. Plot Metrics
Step9: The graph shows the loss (or the difference between the model's predictions and the actual data) for each epoch. There are several ways to calculate loss, and the method we have used is mean squared error. There is a distinct loss value given for the training and the validation data.
Step10: From the plot, we can see that loss continues to reduce until around 200 epochs, at which point it is mostly stable. This means that there's no need to train our network beyond 200 epochs.
Step11: This graph of mean absolute error tells another story. We can see that training data shows consistently lower error than validation data, which means that the network may have overfit, or learned the training data so rigidly that it can't make effective predictions about new data.
Step12: Oh dear! The graph makes it clear that our network has learned to approximate the sine function in a very limited way.
Step13: 2. Train the Model
Step14: 3. Plot Metrics
Step15: Great results! From these graphs, we can see several exciting things
Step16: Much better! The evaluation metrics we printed show that the model has a low loss and MAE on the test data, and the predictions line up visually with our data fairly well.
Step17: 2. Compare Model Performance
Step18: 1. Predictions
Step19: 2. Loss (MSE/Mean Squared Error)
Step20: 3. Size
Step21: Summary
Step22: Deploy to a Microcontroller
|
12,345
|
<ASSISTANT_TASK:>
Python Code:
from shenfun import *
N = 8
T = FunctionSpace(N, 'Chebyshev', domain=(-1, 1))
u = Function(T)
T = FunctionSpace(N, 'Chebyshev', domain=(0, 1))
T = FunctionSpace(N, 'Chebyshev', domain=(-1, 1))
u = Function(T, val=1)
import sympy as sp
x = sp.Symbol('x', real=True)
u = Function(T, buffer=4*x**3-3*x)
print(u)
T = FunctionSpace(0, 'Chebyshev', domain=(-1, 1))
u = Function(T, buffer=sp.cos(20*x))
print(len(u))
Tu = u.function_space()
print(Tu.N)
T1 = FunctionSpace(0, 'Chebyshev', domain=(0, 100))
u = Function(T1, buffer=sp.besselj(0, x))
print(len(u))
u = Function(T1, buffer=sp.besselj(0, x), reltol=1e-14)
print(len(u))
%matplotlib inline
import matplotlib.pyplot as plt
Tu = u.function_space()
plt.plot(Tu.mesh(), u.backward())
xj = np.linspace(0, 100, 1000)
plt.plot(xj, u(xj))
up = u.refine(200)
Tp = up.function_space()
plt.plot(Tp.mesh(), up.backward())
print(up)
import numpy.polynomial.chebyshev as cheb
c = cheb.Chebyshev(u, domain=(0, 100))
z = Tu.map_true_domain(cheb.chebroots(u))
z2 = z[np.where((z.imag == 0)*(z.real > 0)*(z.real < 100))].real
print(z2[:5])
Td = FunctionSpace(0, 'C', bc=(sp.besselj(0, 0), sp.besselj(0, 100)), domain=(0, 100))
ud = Function(Td, buffer=sp.besselj(0, x))
print(len(ud))
C0 = FunctionSpace(20, 'C')
C1 = FunctionSpace(20, 'C')
T = TensorProductSpace(comm, (C0, C1))
u = Function(T)
y = sp.Symbol('y', real=True)
u = Function(T, buffer=sp.cos(10*x)*sp.cos(10*y))
X = T.local_mesh(True)
plt.contourf(X[0], X[1], u.backward())
X = T.mesh()
for xj in X[0]:
for yj in X[1]:
plt.plot((xj, xj), (X[1][0, 0], X[1][0, -1]), 'k')
plt.plot((X[0][0], X[0][-1]), (yj, yj), 'k')
X = T.local_mesh(broadcast=True, uniform=True)
plt.contourf(X[0], X[1], u.backward(kind={'chebyshev': 'uniform'}))
r, theta = psi = sp.symbols('x,y', real=True, positive=True)
rv = (r*sp.cos(theta), r*sp.sin(theta))
B0 = FunctionSpace(20, 'C', domain=(0, 1))
F0 = FunctionSpace(20, 'F')
T = TensorProductSpace(comm, (B0, F0), coordinates=(psi, rv))
u = Function(T, buffer=(1-r)*r*sp.sin(sp.cos(theta)))
X = T.local_mesh(True)
plt.contourf(X[0], X[1], u.backward(), 100)
X = T.local_cartesian_mesh()
plt.contourf(X[0], X[1], u.backward(), 100)
up = u.backward()
xp, yp, up = wrap_periodic([X[0], X[1], up], axes=[1])
plt.contourf(xp, yp, up, 100)
B0 = FunctionSpace(0, 'C', domain=(0, 1))
F0 = FunctionSpace(0, 'F')
T = TensorProductSpace(comm, (B0, F0), coordinates=(psi, rv))
u = Function(T, buffer=((1-r)*r)**2*sp.sin(sp.cos(theta)))
print(u.shape)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The function $u(x)$ can now be created with all N coefficients
Step2: When using Chebyshev polynomials the computational domain is always
Step3: and under the hood shenfun will then map this domain to the reference
Step4: but that is not very useful. A third method to initialize
Step5: Here the analytical Sympy function will first be evaluated
Step6: This is useful if you want to approximate a function and
Step7: We see that $N=45$ is required to resolve this function. This agrees
Step8: To further show that shenfun is compatible with Chebfun we can also
Step9: which gives 83 basis functions, in close agreement with Chebfun (89).
Step10: Plotting the function on its quadrature points looks
Step11: To improve the quality of this plot we can instead evaluate the
Step12: Alternatively, we can refine the function, which simply
Step13: The padded expansion coefficients are now given as
Step14: More features
Step15: The Chebyshev series in Numpy has a wide range of possibilities,
Step16: Note that the roots are found on the reference domain $[-1, 1]$
Step17: Here np.where returns the indices where the condition is true. The condition
Step18: As we can see this leads to a function space of dimension
Step19: Here $\text{T} = \text{C0} \otimes \text{C1}$, the basis function is
Step20: Like for 1D the coefficients are computed through projection,
Step21: We may alternatively plot on a uniform mesh
Step22: Curvilinear coordinates
Step23: Note that we are using a Fourier space for the azimuthal
Step24: However, when this is plotted it may not be what you expect
Step25: We see that the function has been plotted in computational coordinates,
Step26: Notice.
Step27: Adaptive functions in multiple dimensions
|
12,346
|
<ASSISTANT_TASK:>
Python Code:
from pybotics.robot import Robot
from pybotics.predefined_models import ur10
robot = Robot.from_parameters(ur10())
import numpy as np
np.set_printoptions(suppress=True)
joints = np.deg2rad([5,5,5,5,5,5])
pose = robot.fk(joints)
display(pose)
solved_joints = robot.ik(pose)
display(np.rad2deg(solved_joints))
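# As a toy illustration of what a forward-kinematics call computes -- this is
# a hypothetical 2-link planar arm sketched with the standard library, not
# pybotics' actual UR10 model:
import math

def fk_2link(theta1, theta2, l1=1.0, l2=1.0):
    # end-effector position of a planar 2-link arm
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

print(fk_2link(0.0, 0.0))  # (2.0, 0.0): arm stretched along x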
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Forward Kinematics
Step2: Inverse Kinematics
|
12,347
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
seriesLabel = ['label1', 'label2', 'label3']
exampleList = [5, 10, 20]
pd.Series(exampleList)
pd.Series(exampleList, seriesLabel)
exampleNumpyArray = np.array([6, 12, 18])
pd.Series(exampleNumpyArray)
pd.Series(exampleNumpyArray, seriesLabel)
exampleDictionary = { 'label4': 7, 'label5': 14, 'label6': 21 }
# No need to mention labels parameter
pd.Series(exampleDictionary)
# If you mention different labels for a dictionary
pd.Series(exampleDictionary, seriesLabel)
def sampleFunc1():
pass
def sampleFunc2():
pass
def sampleFunc3():
pass
pd.Series(data=[sampleFunc1, sampleFunc2, sampleFunc3])
pd.Series(['a', 2, 'hey'])
pd.Series(data=[sampleFunc1, sampleFunc2, sampleFunc3], index=['a', 'b', 'c'])
pd.Series(['a', 2, 'hey'], ['label', 2, 'key'])
pd.DataFrame(data = np.random.randint(1,51, (4,3)), index = ['row1', 'row2', 'row3', 'row4'], columns = ['col1', 'col2', 'col3'])
dataFrame = pd.DataFrame(data = np.random.randint(1,51, (4,3)), index = ['row1', 'row2', 'row3', 'row4'], columns = ['col1', 'col2', 'col3'])
dataFrame
dataFrame['col1']
dataFrame[['col1', 'col2']]
dataFrame['newCol1'] = dataFrame['col3'] - dataFrame['col2']
dataFrame
dataFrame['newCol2'] = dataFrame['col1'] * dataFrame['col3']
dataFrame
# axis -> 0 means that we are targeting the rows
# axis -> 1 means that we are targeting the columns
dataFrame.drop('newCol1', axis=1)
# we did not really drop the column
dataFrame
# Pandas saves us from accidentally dropping the columns
# In order to delete it
dataFrame.drop('newCol1', axis=1, inplace=True)
dataFrame
dataFrame.drop('newCol2', axis=1, inplace=True)
dataFrame
dataFrame.loc['row1']
dataFrame.loc[['row1', 'row2']]
dataFrame.iloc[1]
dataFrame.drop('row1', axis=0)
# again pandas didn't drop it completely
dataFrame
# we should use 'inplace' to drop the row
# dataFrame.drop('row1', axis=0, inplace=True)
# dataFrame
dataFrame.loc['row1', 'col2']
dataFrame.loc[['row1', 'row2', 'row3'],['col2', 'col3']]
dataFrame.iloc[0,1]
dataFrame.iloc[[0,1]]
dataFrame
dataFrame > 10
dataFrame[dataFrame > 10]
dataFrame[dataFrame['col2'] > 10]
dataFrame[dataFrame['col2'] > 10]['col3']
dataFrame[dataFrame['col2'] > 10][['col1', 'col3']]
dataFrame[(dataFrame['col1'] > 10) & (dataFrame['col3'] > 10)]
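# A plain-Python sanity check of the masking rule above: the parentheses are
# required because `&` binds tighter than `>`. The elementwise AND that
# pandas performs can be sketched without pandas (made-up values):
col1 = [40, 8, 25, 3]
col3 = [12, 50, 9, 30]
mask = [(a > 10) and (b > 10) for a, b in zip(col1, col3)]
print(mask)  # [True, False, False, False]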
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Series
Step2: Using Numpy Arrays
Step3: Using Dictionary
Step4: Data and Index Parameter in Series
Step5: Index
Step6: DataFrames
Step7: Selection and Indexing
Step8: Selecting a single column
Step9: Selecting multiple columns
Step10: Creation of new columns using arithmetic operators
Step11: Removal of columns
Step12: Selecting a single Row
Step13: Selecting multiple rows
Step14: Selecting rows based on their index number
Step15: Removal of rows
Step16: Selecting both columns and rows
Step17: Conditional Selection
Step18: Instead of getting true and false values, we can also get the actual value if the condition is satisfied
Step19: We can also target individual columns
Step20: We can also output columns only that we want
Step21: If we want to apply conditional operators on multiple columns then we do so by
|
12,348
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
np.set_printoptions(precision=3)
import matplotlib.pyplot as plt
import math
import os
import warnings
import pandas as pd
# from scipy.interpolate import BSpline
# from scipy.stats import gaussian_kde
!mkdir figures
!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro
import jax
print("jax version {}".format(jax.__version__))
print("jax backend {}".format(jax.lib.xla_bridge.get_backend().platform))
import jax.numpy as jnp
from jax import random, vmap
rng_key = random.PRNGKey(0)
rng_key, rng_key_ = random.split(rng_key)
import numpyro
import numpyro.distributions as dist
from numpyro.distributions import constraints
from numpyro.distributions.transforms import AffineTransform
from numpyro.diagnostics import hpdi, print_summary
from numpyro.infer import Predictive
from numpyro.infer import MCMC, NUTS
from numpyro.infer import SVI, Trace_ELBO, init_to_value
from numpyro.infer.autoguide import AutoLaplaceApproximation
import numpyro.optim as optim
!pip install arviz
import arviz as az
# url = 'https://github.com/fehiepsi/rethinking-numpyro/tree/master/data/Howell1.csv?raw=True'
url = "https://raw.githubusercontent.com/fehiepsi/rethinking-numpyro/master/data/Howell1.csv"
Howell1 = pd.read_csv(url, sep=";")
d = Howell1
d.info()
d.head()
# get data for adults
d2 = d[d.age >= 18]
N = len(d2)
ndx = jax.random.permutation(rng_key, N)
data = d2.height.values[ndx]
N = 20 # take a subset of the 354 samples
data = data[:N]
print(len(data))
print(np.mean(data))
print(np.std(data))
mu_prior = dist.Normal(178, 20)
sigma_prior = dist.Uniform(0, 50)
mu_range = [150, 160]
sigma_range = [4, 14]
ngrid = 100
plot_square = False
mu_list = jnp.linspace(start=mu_range[0], stop=mu_range[1], num=ngrid)
sigma_list = jnp.linspace(start=sigma_range[0], stop=sigma_range[1], num=ngrid)
mesh = jnp.meshgrid(mu_list, sigma_list)
print([mesh[0].shape, mesh[1].shape])
print(mesh[0].reshape(-1).shape)
post = {"mu": mesh[0].reshape(-1), "sigma": mesh[1].reshape(-1)}
post["LL"] = vmap(lambda mu, sigma: jnp.sum(dist.Normal(mu, sigma).log_prob(data)))(post["mu"], post["sigma"])
logprob_mu = mu_prior.log_prob(post["mu"])
logprob_sigma = sigma_prior.log_prob(post["sigma"])
post["prob"] = post["LL"] + logprob_mu + logprob_sigma
post["prob"] = jnp.exp(post["prob"] - jnp.max(post["prob"]))
prob = post["prob"] / jnp.sum(post["prob"]) # normalize over the grid
prob2d = prob.reshape(ngrid, ngrid)
prob_mu = jnp.sum(prob2d, axis=0)
prob_sigma = jnp.sum(prob2d, axis=1)
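# A toy plain-Python version of the same grid marginalization (a made-up
# 2x2 grid, not the real posterior): normalize, then sum out each axis.
grid = [[1.0, 2.0],
        [3.0, 4.0]]                      # unnormalized joint on a 2x2 grid
total = sum(sum(row) for row in grid)
joint = [[v / total for v in row] for row in grid]
p_mu = [round(sum(joint[i][j] for i in range(2)), 10) for j in range(2)]  # sum over sigma axis
p_sigma = [round(sum(row), 10) for row in joint]                          # sum over mu axis
print(p_mu, p_sigma)  # [0.4, 0.6] [0.3, 0.7]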
plt.figure()
plt.plot(mu_list, prob_mu, label="mu")
plt.legend()
plt.savefig("figures/gauss_params_1d_post_grid_marginal_mu.pdf", dpi=300)
plt.show()
plt.figure()
plt.plot(sigma_list, prob_sigma, label="sigma")
plt.legend()
plt.savefig("figures/gauss_params_1d_post_grid_marginal_sigma.pdf", dpi=300)
plt.show()
plt.contour(
post["mu"].reshape(ngrid, ngrid),
post["sigma"].reshape(ngrid, ngrid),
post["prob"].reshape(ngrid, ngrid),
)
plt.xlabel(r"$\mu$")
plt.ylabel(r"$\sigma$")
if plot_square:
plt.axis("square")
plt.savefig("figures/gauss_params_1d_post_grid_contours.pdf", dpi=300)
plt.show()
plt.imshow(
post["prob"].reshape(ngrid, ngrid),
origin="lower",
extent=(mu_range[0], mu_range[1], sigma_range[0], sigma_range[1]),
aspect="auto",
)
plt.xlabel(r"$\mu$")
plt.ylabel(r"$\sigma$")
if plot_square:
plt.axis("square")
plt.savefig("figures/gauss_params_1d_post_grid_heatmap.pdf", dpi=300)
plt.show()
nsamples = 5000 # int(1e4)
sample_rows = dist.Categorical(probs=prob).sample(random.PRNGKey(0), (nsamples,))
sample_mu = post["mu"][sample_rows]
sample_sigma = post["sigma"][sample_rows]
samples = {"mu": sample_mu, "sigma": sample_sigma}
print_summary(samples, 0.95, False)
plt.scatter(samples["mu"], samples["sigma"], s=64, alpha=0.1, edgecolor="none")
plt.xlim(mu_range[0], mu_range[1])
plt.ylim(sigma_range[0], sigma_range[1])
plt.xlabel(r"$\mu$")
plt.ylabel(r"$\sigma$")
plt.axis("square")
plt.show()
az.plot_kde(samples["mu"], samples["sigma"])
plt.xlim(mu_range[0], mu_range[1])
plt.ylim(sigma_range[0], sigma_range[1])
plt.xlabel(r"$\mu$")
plt.ylabel(r"$\sigma$")
if plot_square:
plt.axis("square")
plt.savefig("figures/gauss_params_1d_post_grid.pdf", dpi=300)
plt.show()
print(hpdi(samples["mu"], 0.95))
print(hpdi(samples["sigma"], 0.95))
fig, ax = plt.subplots()
az.plot_kde(samples["mu"], ax=ax, label=r"$\mu$")
fig, ax = plt.subplots()
az.plot_kde(samples["sigma"], ax=ax, label=r"$\sigma$")
def model(data):
mu = numpyro.sample("mu", mu_prior)
sigma = numpyro.sample("sigma", sigma_prior)
numpyro.sample("height", dist.Normal(mu, sigma), obs=data)
guide = AutoLaplaceApproximation(model)
svi = SVI(model, guide, optim.Adam(1), Trace_ELBO(), data=data)
svi_result = svi.run(random.PRNGKey(0), 2000)
plt.figure()
plt.plot(svi_result.losses)
start = {"mu": data.mean(), "sigma": data.std()}
guide = AutoLaplaceApproximation(model, init_loc_fn=init_to_value(values=start))
svi = SVI(model, guide, optim.Adam(0.1), Trace_ELBO(), data=data)
svi_result = svi.run(random.PRNGKey(0), 2000)
plt.figure()
plt.plot(svi_result.losses)
samples = guide.sample_posterior(random.PRNGKey(1), svi_result.params, (nsamples,))
print_summary(samples, 0.95, False)
plt.scatter(samples["mu"], samples["sigma"], s=64, alpha=0.1, edgecolor="none")
plt.xlim(mu_range[0], mu_range[1])
plt.ylim(sigma_range[0], sigma_range[1])
plt.xlabel(r"$\mu$")
plt.ylabel(r"$\sigma$")
plt.show()
az.plot_kde(samples["mu"], samples["sigma"])
plt.xlim(mu_range[0], mu_range[1])
plt.ylim(sigma_range[0], sigma_range[1])
plt.xlabel(r"$\mu$")
plt.ylabel(r"$\sigma$")
if plot_square:
plt.axis("square")
plt.savefig("figures/gauss_params_1d_post_laplace.pdf", dpi=300)
plt.show()
print(hpdi(samples["mu"], 0.95))
print(hpdi(samples["sigma"], 0.95))
fig, ax = plt.subplots()
az.plot_kde(samples["mu"], ax=ax, label=r"$\mu$")
fig, ax = plt.subplots()
az.plot_kde(samples["sigma"], ax=ax, label=r"$\sigma$")
post = guide.get_posterior(svi_result.params)
print(post.mean)
print(post.covariance_matrix)
def logit(p):
return jnp.log(p / (1 - p))
def sigmoid(a):
return 1 / (1 + jnp.exp(-a))
scale = 50
print(logit(7.7 / scale))
print(sigmoid(-1.7) * scale)
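# A quick stdlib check that the two maps really invert each other, written
# with `math` so it does not shadow the JAX versions defined above:
import math

def logit_m(p):
    return math.log(p / (1 - p))

def sigmoid_m(a):
    return 1 / (1 + math.exp(-a))

p = 7.7 / 50
print(abs(sigmoid_m(logit_m(p)) - p) < 1e-12)  # True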
unconstrained_samples = post.sample(rng_key, sample_shape=(nsamples,))
constrained_samples = guide._unpack_and_constrain(unconstrained_samples, svi_result.params)
print(unconstrained_samples.shape)
print(jnp.mean(unconstrained_samples, axis=0))
print(jnp.mean(constrained_samples["mu"], axis=0))
print(jnp.mean(constrained_samples["sigma"], axis=0))
samples = guide.sample_posterior(random.PRNGKey(1), svi_result.params, (nsamples,))
x = jnp.stack(list(samples.values()), axis=0)
print(x.shape)
print("mean of samples\n", jnp.mean(x, axis=1))
vcov = jnp.cov(x)
print("cov of samples\n", vcov) # variance-covariance matrix
# correlation matrix
R = vcov / jnp.sqrt(jnp.outer(jnp.diagonal(vcov), jnp.diagonal(vcov)))
print("corr of samples\n", R)
def guide(data):
data_mean = jnp.mean(data)
data_std = jnp.std(data)
m = numpyro.param("m", data_mean)
s = numpyro.param("s", 10, constraint=constraints.positive)
a = numpyro.param("a", data_std, constraint=constraints.positive)
b = numpyro.param("b", 1, constraint=constraints.positive)
mu = numpyro.sample("mu", dist.Normal(m, s))
sigma = numpyro.sample("sigma", dist.Gamma(a, b))
optimizer = numpyro.optim.Momentum(step_size=0.001, mass=0.1)
svi = SVI(model, guide, optimizer, loss=Trace_ELBO())
nsteps = 2000
svi_result = svi.run(rng_key_, nsteps, data=data)
print(svi_result.params)
print(svi_result.losses.shape)
plt.plot(svi_result.losses)
plt.title("ELBO")
plt.xlabel("step")
plt.ylabel("loss");
print(svi_result.params)
a = np.array(svi_result.params["a"])
b = np.array(svi_result.params["b"])
m = np.array(svi_result.params["m"])
s = np.array(svi_result.params["s"])
print("empirical mean", jnp.mean(data))
print("empirical std", jnp.std(data))
print(r"posterior mean and std of $\mu$")
post_mean = dist.Normal(m, s)
print([post_mean.mean, jnp.sqrt(post_mean.variance)])
print(r"posterior mean and std of unconstrained $\sigma$")
post_sigma = dist.Gamma(a, b)
print([post_sigma.mean, jnp.sqrt(post_sigma.variance)])
predictive = Predictive(guide, params=svi_result.params, num_samples=nsamples)
samples = predictive(rng_key, data)
print_summary(samples, 0.95, False)
plt.scatter(samples["mu"], samples["sigma"], s=64, alpha=0.1, edgecolor="none")
plt.xlim(mu_range[0], mu_range[1])
plt.ylim(sigma_range[0], sigma_range[1])
plt.xlabel(r"$\mu$")
plt.ylabel(r"$\sigma$")
plt.show()
az.plot_kde(samples["mu"], samples["sigma"])
plt.xlim(mu_range[0], mu_range[1])
plt.ylim(sigma_range[0], sigma_range[1])
plt.xlabel(r"$\mu$")
plt.ylabel(r"$\sigma$")
if plot_square:
plt.axis("square")
plt.savefig("figures/gauss_params_1d_post_vi.pdf", dpi=300)
plt.show()
print(hpdi(samples["mu"], 0.95))
print(hpdi(samples["sigma"], 0.95))
fig, ax = plt.subplots()
az.plot_kde(samples["mu"], ax=ax, label=r"$\mu$")
fig, ax = plt.subplots()
az.plot_kde(samples["sigma"], ax=ax, label=r"$\sigma$")
# `model` already conditions on the observations via its `obs=data` argument
# (the sample site is named "height", not "data"), so it can be handed to
# NUTS directly.
nuts_kernel = NUTS(model)
mcmc = MCMC(nuts_kernel, num_warmup=100, num_samples=nsamples)
mcmc.run(rng_key_, data)
mcmc.print_summary()
samples = mcmc.get_samples()
print_summary(samples, 0.95, False)
plt.scatter(samples["mu"], samples["sigma"], s=64, alpha=0.1, edgecolor="none")
plt.xlim(mu_range[0], mu_range[1])
plt.ylim(sigma_range[0], sigma_range[1])
plt.xlabel(r"$\mu$")
plt.ylabel(r"$\sigma$")
plt.show()
az.plot_kde(samples["mu"], samples["sigma"])
plt.xlim(mu_range[0], mu_range[1])
plt.ylim(sigma_range[0], sigma_range[1])
plt.xlabel(r"$\mu$")
plt.ylabel(r"$\sigma$")
if plot_square:
plt.axis("square")
plt.savefig("figures/gauss_params_1d_post_mcmc.pdf", dpi=300)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data
Step2: Empirical mean and std.
Step3: Model
Step4: Posterior samples.
Step5: posterior marginals.
Step6: Laplace approximation
Step7: Posterior samples.
Step8: Extract 2d joint posterior
Step9: We can sample from the posterior, which return results in the original parameterization.
Step10: Variational inference
Step11: Extract Variational parameters.
Step12: Posterior samples
Step13: MCMC
|
12,349
|
<ASSISTANT_TASK:>
Python Code:
import json
import numpy as np
import pandas as pd
from jupyter_scisheets_widget import scisheets_widget
import pandas_datareader as pdr
ibm_data = pdr.get_data_yahoo('IBM')
income_data = pd.read_csv('income_data.csv', sep=';')
income_data
tbl2 = scisheets_widget.HandsonDataFrame(income_data)
tbl2.show()
tbl2._df
tbl2._widget._model_data
tbl2._widget._model_header
def bob(df):
df_bob = df
df_bob['help'] = df_bob['State']
return df_bob
bob(income_data)
bob2 = income_data
income_data.update(bob2)
income_data2 = income_data.copy()
income_data2.loc[1, '2006'] = 'Tewnty'
income_data2
income_data
income_data.update(income_data2)
income_data
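The cells above lean on `DataFrame.update`, which aligns on index/columns and overwrites in place while skipping NaN cells in the incoming frame. A small self-contained sketch (the frames `a` and `b` are illustrative, not the income data):

```python
import numpy as np
import pandas as pd

a = pd.DataFrame({'x': [1.0, 2.0, 3.0]})
b = pd.DataFrame({'x': [10.0, np.nan, 30.0]})

a.update(b)  # NaN cells in `b` leave `a` untouched
```

This is why updating `income_data` from a modified copy only changes the cells that were actually edited.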
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data into the notebook
Step2: Display the loaded data as a scisheet widget
Step3: Sanity check to explore df functionality
|
12,350
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from flare.gp import GaussianProcess
# make gp model
hyps = np.array([0.1, 1, 0.01])
hyp_labels = ['Signal Std', 'Length Scale', 'Noise Std']
cutoffs = {'threebody':3.9}
gp = \
GaussianProcess(kernels=['threebody'],
hyps=hyps,
cutoffs=cutoffs,
hyp_labels=hyp_labels,
maxiter=50)
import os
from flare.otf import OTF
# set up DFT calculator
qe_input = './pwscf.in' # quantum espresso input file
dft_loc = os.environ.get('PWSCF_COMMAND')
# set up OTF parameters
dt = 0.001 # timestep (ps)
number_of_steps = 100 # number of steps
std_tolerance_factor = 1
max_atoms_added = 2
freeze_hyps = 3
otf = OTF(qe_input, dt, number_of_steps, gp, dft_loc,
std_tolerance_factor,
init_atoms=[0],
calculate_energy=True,
output_name='al_otf_qe',
freeze_hyps=freeze_hyps,
skip=5,
max_atoms_added=max_atoms_added,
write_model=3)
# run OTF MD
otf.run()
log_name = 'al_otf_qe'  # matches the output_name used above
new_otf = OTF.from_checkpoint(log_name + "_checkpt.json")
new_otf.run()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Some Explanation about the parameters
Step2: Step 3
Step3: Some Explanation about the parameters
Step4: After OTF training is finished, we can check log file al_otf_qe.out for all the information dumped. This output file can be parsed using our otf_parser.py module, which will be introduced in the after-training-tutorial
Step 5 (Optional)
|
12,351
|
<ASSISTANT_TASK:>
Python Code:
# suposing the datset is downloaded here
workdir = '/media/samuel/dataspikesorting/DataSpikeSortingHD2/kampff/ultra dense/'
filename = workdir + 'T2/amplifier2017-02-08T21_38_55.bin'
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
import tridesclous as tdc
from tridesclous import DataIO, CatalogueConstructor, Peeler
import os, shutil
dirname = workdir + 'tdc_amplifier2017-02-02T17_18_46'
if os.path.exists(dirname):
# remove if it already exists
shutil.rmtree(dirname)
dataio = DataIO(dirname=dirname)
# feed DataIO with one file
dataio.set_data_source(type='RawData', filenames=[filename],
sample_rate=20000., dtype='int16', total_channel=256,
bit_to_microVolt=0.195)
print(dataio)
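`bit_to_microVolt=0.195` means raw int16 ADC counts are scaled by 0.195 µV per bit. A quick NumPy sketch of that conversion (the sample counts are illustrative):

```python
import numpy as np

counts = np.array([0, 1, 1000, -512], dtype='int16')
microvolts = counts.astype('float64') * 0.195  # 0.195 uV per ADC bit
```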
# set the probe file
dataio.set_probe_file('kampff_ultra_dense_256.prb')
cc = CatalogueConstructor(dataio=dataio, chan_grp=0)
fullchain_kargs = {
'duration' : 300.,
'preprocessor' : {
'highpass_freq' : 400.,
'lowpass_freq' : 5000.,
'smooth_size' : 0,
'chunksize' : 1024,
'lostfront_chunksize' : 128,
'signalpreprocessor_engine' : 'numpy',
},
'peak_detector' : {
'peakdetector_engine' : 'numpy',
'peak_sign' : '-',
'relative_threshold' : 5.,
'peak_span' : 0.0002,
},
'noise_snippet' : {
'nb_snippet' : 300,
},
'extract_waveforms' : {
'n_left' : -20,
'n_right' : 30,
'mode' : 'rand',
'nb_max' : 20000,
'align_waveform' : False,
},
'clean_waveforms' : {
'alien_value_threshold' : 100.,
},
}
feat_method = 'peak_max'
feat_kargs = {}
clust_method = 'sawchaincut'
clust_kargs = {}
tdc.apply_all_catalogue_steps(cc, fullchain_kargs,
feat_method, feat_kargs,clust_method, clust_kargs)
print(cc)
dataio = DataIO(dirname=dirname)
tdc.summary_noise(dataio=dataio, chan_grp=0)
tdc.summary_catalogue_clusters(dataio=dataio, chan_grp=0, label=0)
cc.make_catalogue_for_peeler()
initial_catalogue = dataio.load_catalogue(chan_grp=0)
peeler = Peeler(dataio)
peeler.change_params(catalogue=initial_catalogue,
use_sparse_template=True,
sparse_threshold_mad=1.5,
use_opencl_with_sparse=True,
cl_platform_index=1,
cl_device_index=0)
peeler.run(duration=300.,
progressbar=True)
tdc.summary_after_peeler_clusters(dataio, chan_grp=0, label=0, neighborhood_radius=None, show_channels=False)
tdc.summary_after_peeler_clusters(dataio, chan_grp=0, label=1, neighborhood_radius=None, show_channels=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: create a DataIO (and remove if already exists)
Step2: CatalogueConstructor
Step3: Noise measurement
Step4: Inspect waveform quality at catalogue level
Step5: construct catalogue
Step6: apply peeler
Step7: final inspection of cells
|
12,352
|
<ASSISTANT_TASK:>
Python Code:
import sklearn.svm as svm
### BEGIN SOLUTION
### END SOLUTION
try:
train_svm
except:
assert False
else:
assert True
import numpy as np
np.random.seed(598497)
X = np.random.random((20, 2))
y = np.random.randint(2, size = 20)
m1 = train_svm(X, y, 10000.0)
assert m1.C == 10000.0
np.testing.assert_allclose(m1.coef_, np.array([[ -0.345056, -0.6118 ]]), rtol=1e-6)
try:
test_svm
except:
assert False
else:
assert True
np.random.seed(99766)
X = np.random.random((20, 2))
y = np.random.randint(2, size = 20)
m1 = train_svm(X[:18], y[:18], 10.0)
acc1 = test_svm(X[18:], y[18:], m1)
np.testing.assert_allclose(acc1, 0.5)
import numpy as np
import sklearn.datasets as datasets
import sklearn.model_selection as cv
# Set up the iris data.
iris = datasets.load_iris()
X = iris.data[:, :2]
y = iris.target
# Some variables you're welcome to change, if you want.
C = 1.0 # SVM penalty term
folds = 5 # The "k" in "k-fold cross-validation"
# Set up the cross-validation loop.
kfold = cv.KFold(n_splits = folds, shuffle = True, random_state = 10)
for train, test in kfold.split(X):
### BEGIN SOLUTION
### END SOLUTION
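Under the hood, `KFold` just partitions the row indices into `folds` disjoint test blocks. A minimal NumPy-only sketch of that mechanic (this is an illustration, not scikit-learn's implementation):

```python
import numpy as np

def kfold_indices(n, folds, seed=0):
    """Yield (train_idx, test_idx) pairs covering all n rows exactly once."""
    rng = np.random.RandomState(seed)
    idx = rng.permutation(n)
    for block in np.array_split(idx, folds):
        train = np.setdiff1d(idx, block)  # everything not in the test block
        yield train, block

splits = list(kfold_indices(150, 5))
```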
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: B
Step2: C
|
12,353
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import print_function
from statsmodels.compat import lmap
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import statsmodels.api as sm
norms = sm.robust.norms
def plot_weights(support, weights_func, xlabels, xticks):
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
ax.plot(support, weights_func(support))
ax.set_xticks(xticks)
ax.set_xticklabels(xlabels, fontsize=16)
ax.set_ylim(-.1, 1.1)
return ax
help(norms.AndrewWave.weights)
a = 1.339
support = np.linspace(-np.pi*a, np.pi*a, 100)
andrew = norms.AndrewWave(a=a)
plot_weights(support, andrew.weights, ['$-\pi*a$', '0', '$\pi*a$'], [-np.pi*a, 0, np.pi*a]);
help(norms.Hampel.weights)
c = 8
support = np.linspace(-3*c, 3*c, 1000)
hampel = norms.Hampel(a=2., b=4., c=c)
plot_weights(support, hampel.weights, ['-3*c', '0', '3*c'], [-3*c, 0, 3*c]);
help(norms.HuberT.weights)
t = 1.345
support = np.linspace(-3*t, 3*t, 1000)
huber = norms.HuberT(t=t)
plot_weights(support, huber.weights, ['-3*t', '0', '3*t'], [-3*t, 0, 3*t]);
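For reference, Huber's t weight function is w(z) = 1 for |z| ≤ t and t/|z| beyond. A hand-rolled NumPy sketch (not the statsmodels code path):

```python
import numpy as np

def huber_weights(z, t=1.345):
    """w(z) = 1 inside [-t, t], t/|z| outside."""
    z = np.asarray(z, dtype=float)
    return t / np.maximum(np.abs(z), t)

w = huber_weights([0.0, 1.0, 2.69])
```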
help(norms.LeastSquares.weights)
support = np.linspace(-3, 3, 1000)
lst_sq = norms.LeastSquares()
plot_weights(support, lst_sq.weights, ['-3', '0', '3'], [-3, 0, 3]);
help(norms.RamsayE.weights)
a = .3
support = np.linspace(-3*a, 3*a, 1000)
ramsay = norms.RamsayE(a=a)
plot_weights(support, ramsay.weights, ['-3*a', '0', '3*a'], [-3*a, 0, 3*a]);
help(norms.TrimmedMean.weights)
c = 2
support = np.linspace(-3*c, 3*c, 1000)
trimmed = norms.TrimmedMean(c=c)
plot_weights(support, trimmed.weights, ['-3*c', '0', '3*c'], [-3*c, 0, 3*c]);
help(norms.TukeyBiweight.weights)
c = 4.685
support = np.linspace(-3*c, 3*c, 1000)
tukey = norms.TukeyBiweight(c=c)
plot_weights(support, tukey.weights, ['-3*c', '0', '3*c'], [-3*c, 0, 3*c]);
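Tukey's biweight sets w(z) = (1 − (z/c)²)² inside [−c, c] and exactly 0 outside, which is why it can reject gross outliers entirely. A quick NumPy sketch of that weight function:

```python
import numpy as np

def tukey_weights(z, c=4.685):
    """Biweight: smooth descent to exactly zero at |z| = c."""
    z = np.asarray(z, dtype=float)
    inside = np.abs(z) <= c
    return np.where(inside, (1.0 - (z / c) ** 2) ** 2, 0.0)

w = tukey_weights([0.0, 4.685 / 2, 10.0], c=4.685)
```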
x = np.array([1, 2, 3, 4, 500])
x.mean()
np.median(x)
x.std()
stats.norm.ppf(.75)
print(x)
sm.robust.scale.stand_mad(x)
np.array([1,2,3,4,5.]).std()
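`stand_mad` is the median absolute deviation rescaled by 1/Φ⁻¹(0.75) ≈ 1/0.6745 so that it estimates σ for Gaussian data. A NumPy-only sketch of the same computation (the constant is hard-coded here rather than taken from scipy):

```python
import numpy as np

def standardized_mad(x):
    """Median absolute deviation, scaled to be consistent for a normal."""
    x = np.asarray(x, dtype=float)
    c = 0.6744897501960817          # value of scipy.stats.norm.ppf(0.75)
    return np.median(np.abs(x - np.median(x))) / c

s = standardized_mad([1, 2, 3, 4, 500])
```

Note the gross outlier 500 barely moves the estimate.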
np.random.seed(12345)
fat_tails = stats.t(6).rvs(40)
kde = sm.nonparametric.KDEUnivariate(fat_tails)
kde.fit()
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
ax.plot(kde.support, kde.density);
print(fat_tails.mean(), fat_tails.std())
print(stats.norm.fit(fat_tails))
print(stats.t.fit(fat_tails, f0=6))
huber = sm.robust.scale.Huber()
loc, scale = huber(fat_tails)
print(loc, scale)
sm.robust.stand_mad(fat_tails)
sm.robust.stand_mad(fat_tails, c=stats.t(6).ppf(.75))
sm.robust.scale.mad(fat_tails)
from statsmodels.graphics.api import abline_plot
from statsmodels.formula.api import ols, rlm
prestige = sm.datasets.get_rdataset("Duncan", "car", cache=True).data
print(prestige.head(10))
fig = plt.figure(figsize=(12,12))
ax1 = fig.add_subplot(211, xlabel='Income', ylabel='Prestige')
ax1.scatter(prestige.income, prestige.prestige)
xy_outlier = prestige.loc['minister', ['income', 'prestige']]
ax1.annotate('Minister', xy_outlier, xy_outlier+1, fontsize=16)
ax2 = fig.add_subplot(212, xlabel='Education',
ylabel='Prestige')
ax2.scatter(prestige.education, prestige.prestige);
ols_model = ols('prestige ~ income + education', prestige).fit()
print(ols_model.summary())
infl = ols_model.get_influence()
student = infl.summary_frame()['student_resid']
print(student)
print(student.loc[np.abs(student) > 2])
print(infl.summary_frame().loc['minister'])
sidak = ols_model.outlier_test('sidak')
sidak.sort_values('unadj_p', inplace=True)
print(sidak)
fdr = ols_model.outlier_test('fdr_bh')
fdr.sort_values('unadj_p', inplace=True)
print(fdr)
rlm_model = rlm('prestige ~ income + education', prestige).fit()
print(rlm_model.summary())
print(rlm_model.weights)
dta = sm.datasets.get_rdataset("starsCYG", "robustbase", cache=True).data
from matplotlib.patches import Ellipse
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111, xlabel='log(Temp)', ylabel='log(Light)', title='Hertzsprung-Russell Diagram of Star Cluster CYG OB1')
ax.scatter(*dta.values.T)
# highlight outliers
e = Ellipse((3.5, 6), .2, 1, alpha=.25, color='r')
ax.add_patch(e);
ax.annotate('Red giants', xy=(3.6, 6), xytext=(3.8, 6),
arrowprops=dict(facecolor='black', shrink=0.05, width=2),
horizontalalignment='left', verticalalignment='bottom',
clip_on=True, # clip to the axes bounding box
fontsize=16,
)
# annotate these with their index
for i,row in dta.loc[dta['log.Te'] < 3.8].iterrows():
ax.annotate(i, row, row + .01, fontsize=14)
xlim, ylim = ax.get_xlim(), ax.get_ylim()
from IPython.display import Image
Image(filename='star_diagram.png')
y = dta['log.light']
X = sm.add_constant(dta['log.Te'], prepend=True)
ols_model = sm.OLS(y, X).fit()
abline_plot(model_results=ols_model, ax=ax)
rlm_mod = sm.RLM(y, X, sm.robust.norms.TrimmedMean(.5)).fit()
abline_plot(model_results=rlm_mod, ax=ax, color='red')
infl = ols_model.get_influence()
h_bar = 2*(ols_model.df_model + 1 )/ols_model.nobs
hat_diag = infl.summary_frame()['hat_diag']
hat_diag.ix[hat_diag > h_bar]
sidak2 = ols_model.outlier_test('sidak')
sidak2.sort_values('unadj_p', inplace=True)
print(sidak2)
fdr2 = ols_model.outlier_test('fdr_bh')
fdr2.sort_values('unadj_p', inplace=True)
print(fdr2)
del ax.lines[-1]
weights = np.ones(len(X))
weights[X[X['log.Te'] < 3.8].index.values - 1] = 0
wls_model = sm.WLS(y, X, weights=weights).fit()
abline_plot(model_results=wls_model, ax=ax, color='green')
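Setting a weight to zero in WLS drops that observation entirely; the fit solves (XᵀWX)β = XᵀWy. A small NumPy sketch (toy data, not the star-cluster values) showing that 0/1 weights reproduce the fit on the kept subset:

```python
import numpy as np

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 10.0]])
y = np.array([0.0, 1.0, 2.0, 100.0])   # last point is a gross outlier
w = np.array([1.0, 1.0, 1.0, 0.0])     # zero weight excludes the outlier

# Weighted normal equations: (X'WX) beta = X'Wy
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
```

With the outlier excluded, the remaining three points lie exactly on y = x.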
yy = y.values[:,None]
xx = X['log.Te'].values[:,None]
%load_ext rpy2.ipython
%R library(robustbase)
%Rpush yy xx
%R mod <- lmrob(yy ~ xx);
%R params <- mod$coefficients;
%Rpull params
%R print(mod)
print(params)
abline_plot(intercept=params[0], slope=params[1], ax=ax, color='green')
np.random.seed(12345)
nobs = 200
beta_true = np.array([3, 1, 2.5, 3, -4])
X = np.random.uniform(-20,20, size=(nobs, len(beta_true)-1))
# stack a constant in front
X = sm.add_constant(X, prepend=True) # np.c_[np.ones(nobs), X]
mc_iter = 500
contaminate = .25 # percentage of response variables to contaminate
all_betas = []
for i in range(mc_iter):
y = np.dot(X, beta_true) + np.random.normal(size=200)
random_idx = np.random.randint(0, nobs, size=int(contaminate * nobs))
y[random_idx] = np.random.uniform(-750, 750)
beta_hat = sm.RLM(y, X).fit().params
all_betas.append(beta_hat)
all_betas = np.asarray(all_betas)
se_loss = lambda x : np.linalg.norm(x, ord=2)**2
se_beta = lmap(se_loss, all_betas - beta_true)
np.array(se_beta).mean()
all_betas.mean(0)
beta_true
se_loss(all_betas.mean(0) - beta_true)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: An M-estimator minimizes the function
Step2: Andrew's Wave
Step3: Hampel's 17A
Step4: Huber's t
Step5: Least Squares
Step6: Ramsay's Ea
Step7: Trimmed Mean
Step8: Tukey's Biweight
Step9: Scale Estimators
Step10: The mean is not a robust estimator of location
Step11: The median, on the other hand, is a robust estimator with a breakdown point of 50%
Step12: Analogously for the scale
Step13: Median Absolute Deviation
Step14: The default for Robust Linear Models is MAD
Step15: Duncan's Occupational Prestige data - M-estimation for outliers
Step16: Hertzprung Russell data for Star Cluster CYG 0B1 - Leverage Points
Step17: Why? Because M-estimators are not robust to leverage points.
Step18: Let's delete that line
Step19: MM-estimators are good for this type of problem; unfortunately, we don't have these yet.
Step20: Exercise
Step21: Squared error loss
|
12,354
|
<ASSISTANT_TASK:>
Python Code:
import os.path as op
import numpy as np
import mne
data_path = mne.datasets.sample.data_path()
fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(fname)
raw.set_eeg_reference('average', projection=True) # set EEG average reference
order = np.arange(raw.info['nchan'])
order[9] = 312 # We exchange the plotting order of two channels
order[312] = 9 # to show the trigger channel as the 10th channel.
raw.plot(n_channels=10, order=order, block=True)
events = mne.find_events(raw)
print('Found %s events, first five:' % len(events))
print(events[:5])
# Plot the events to get an idea of the paradigm
# Specify colors and an event_id dictionary for the legend.
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2,
'Visual/Left': 3, 'Visual/Right': 4,
'smiley': 5, 'button': 32}
color = {1: 'green', 2: 'yellow', 3: 'red', 4: 'c', 5: 'black', 32: 'blue'}
mne.viz.plot_events(events, raw.info['sfreq'], raw.first_samp, color=color,
event_id=event_id)
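An MNE events array is just an (n_events, 3) integer array: sample index, previous trigger value, event id. As a rough NumPy illustration of counting events per id (fake trigger values, not this dataset's):

```python
import numpy as np

events = np.array([[100, 0, 1],
                   [250, 0, 2],
                   [400, 0, 1],
                   [600, 0, 3]])
# Third column holds the event id; tally occurrences of each id.
ids, counts = np.unique(events[:, 2], return_counts=True)
per_id = dict(zip(ids.tolist(), counts.tolist()))
```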
raw.plot(events=events, n_channels=10, order=order)
tmin, tmax = -0.2, 0.5
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2,
'Visual/Left': 3, 'Visual/Right': 4}
# Only pick MEG and EOG channels.
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True)
baseline = (None, 0.0)
reject = {'mag': 4e-12, 'eog': 200e-6}
epochs = mne.Epochs(raw, events=events, event_id=event_id, tmin=tmin,
tmax=tmax, baseline=baseline, reject=reject, picks=picks)
epochs.plot(block=True)
epochs.plot_drop_log()
picks = mne.pick_types(epochs.info, meg=True, eog=True)
evoked_left = epochs['Auditory/Left'].average(picks=picks)
evoked_right = epochs['Auditory/Right'].average(picks=picks)
epochs_left = epochs['Left']
# ... or to select a very specific subset. This is the same as above:
evoked_left = epochs['Left/Auditory'].average(picks=picks)
evoked_left.plot(time_unit='s')
evoked_right.plot(time_unit='s')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In MNE, epochs refers to a collection of single trials or short segments
Step2: To create time locked epochs, we first need a set of events that contain the
Step3: Notice channel STI 014 at the bottom. It is the trigger channel that
Step4: The event list contains three columns. The first column corresponds to
Step5: In this tutorial we are only interested in triggers 1, 2, 3 and 4. These
Step6: Now we have everything we need to construct the epochs. To get some
Step7: Let's plot the epochs to see the results. The number at the top refers to the
Step8: To see why the epochs were rejected, we can plot the drop log.
Step9: To get the evoked response you can simply do epochs.average(). It
Step10: Notice we have used forward slashes ('/') to separate the factors of the
Step11: <div class="alert alert-info"><h4>Note</h4><p>It is also possible to add metadata to Epochs objects, allowing for
|
12,355
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nims-kma', 'sandbox-3', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
12,356
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn
plt.rcParams['figure.figsize'] = 9, 6
from sklearn import datasets, svm
from sklearn.feature_selection import SelectPercentile, f_classif
iris = datasets.load_iris()
iris.data.shape
# generate 20 completely random attributes and append them to the original data
E = np.random.uniform(0, 0.1, size=(len(iris.data), 20))
X = np.hstack((iris.data, E))
y = iris.target
X_indices = np.arange(X.shape[-1])
X_indices
X.shape
# the original data
iris.data[:2]
# the data extended with another 20 random attributes
# only the first ones should make sense
X[:2]
from sklearn.feature_selection import SelectPercentile, f_classif
selector = SelectPercentile(f_classif, percentile=10)
selector.fit(X, y)
scores = -np.log10(selector.pvalues_)
scores /= scores.max()
plt.bar(X_indices, scores)
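The cell above scores attributes with `f_classif` p-values. As a rough self-contained illustration of the same filter idea (the data and names below are synthetic assumptions, not part of the notebook): score each column by its correlation with the target and keep the top-scoring ones.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.integers(0, 3, 300)                   # class labels 0..2
informative = y + rng.normal(0, 0.3, 300)     # column related to the target
noise = rng.normal(0, 1.0, 300)               # column unrelated to the target
X = np.column_stack([informative, noise])

# absolute correlation of each column with the target, used as a filter score
scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
keep = np.argsort(scores)[::-1][:1]           # indices of the top-scoring columns
```

The informative column ends up with a far higher score than the noise column, which is exactly what the bar plot above shows for the first four iris attributes.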
from sklearn.decomposition import PCA
import sklearn.datasets as ds
data = ds.load_breast_cancer()['data']
pca_trafo = PCA().fit(data)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
line, = ax.plot(pca_trafo.explained_variance_ratio_, '--o')
ax.set_yscale('log') # try removing the logarithmic scale; you will see there is probably a problem
ax.set_title('Contribution of the components to explaining the dataset variance')
import sklearn.datasets as ds
from sklearn.decomposition import PCA
pca_trafo = PCA()
data = ds.load_breast_cancer()['data']
pca_data = pca_trafo.fit_transform(data)
ax = seaborn.heatmap(np.log(pca_trafo.inverse_transform(np.eye(data.shape[1]))), cmap="hot", cbar=False)
ax.set_xlabel('features')
ax.set_ylabel('components')
means = np.mean(pca_trafo.inverse_transform(np.eye(data.shape[1])), axis=0)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(means)
ax.set_ylabel('mean contrib. in components')
ax.set_xlabel('feature #')
# PCA tries to explain the variance in the data. If each attribute has a different mean (and variance), PCA does not explain the amount of information in the attribute, only its variance
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(np.std(data, axis=0))
ax.set_ylabel('standard deviation')
ax.set_xlabel('feature #')
# ax.set_yscale('log')
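The plot above shows that the raw features have wildly different standard deviations. A small self-contained sketch of why that is a problem for PCA (synthetic data, not the breast-cancer set): when one feature has a much larger scale, the first principal component is dominated by it, and z-scoring restores the balance.

```python
import numpy as np

rng = np.random.default_rng(1)
# two independent features on very different scales
X = np.column_stack([rng.normal(0, 1.0, 500), rng.normal(0, 100.0, 500)])

def explained_variance_ratio(M):
    # PCA explained-variance ratios via the singular values of centered data
    M = M - M.mean(axis=0)
    s = np.linalg.svd(M, compute_uv=False)
    return s**2 / np.sum(s**2)

raw_ratio = explained_variance_ratio(X)                  # dominated by the 2nd feature
z_ratio = explained_variance_ratio(X / X.std(axis=0))    # roughly 50/50 after scaling
```

On the raw data the large-scale feature explains nearly all of the variance; after dividing by the per-feature standard deviation the two components contribute about equally.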
import sklearn.datasets as ds
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler # performs z-normalization on each attribute
z_scaler = StandardScaler()
data = ds.load_breast_cancer()['data']
z_data = z_scaler.fit_transform(data)
pca_trafo = PCA().fit(z_data)
plt.plot(pca_trafo.explained_variance_ratio_, '--o') # amount of explained variance per component
plt.plot(pca_trafo.explained_variance_ratio_.cumsum(), '--o') # cumulative explained variance, useful when choosing how many components to keep
plt.ylim((0,1.0))
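The cumulative curve plotted above is typically used to pick the number of components to keep. A minimal sketch of that selection step (the ratio values below are made-up illustrative numbers, not the notebook's):

```python
import numpy as np

# illustrative explained-variance ratios of six components
ratios = np.array([0.5, 0.2, 0.1, 0.1, 0.05, 0.05])
cumulative = np.cumsum(ratios)
# smallest number of components whose cumulative ratio reaches 85 %
k = int(np.searchsorted(cumulative, 0.85) + 1)
```

Here the first four components already explain at least 85 % of the variance, so `k` is 4.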
import sklearn.datasets as ds
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
z_scaler = StandardScaler()
data = ds.load_breast_cancer()['data']
pca_trafo = PCA()
z_data = z_scaler.fit_transform(data)
pca_data = pca_trafo.fit_transform(z_data)
ax = seaborn.heatmap(np.log(pca_trafo.inverse_transform(np.eye(data.shape[1]))), cmap="hot", cbar=False)
ax.set_xlabel('features')
ax.set_ylabel('components')
means = np.mean(pca_trafo.inverse_transform(np.eye(data.shape[1])), axis=0)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(means)
ax.set_ylabel('mean contrib. in components')
ax.set_xlabel('feature #')
iris = datasets.load_iris()
iris.data.shape
# generate 20 completely random attributes and append them to the original data
E = np.random.uniform(0, 0.1, size=(len(iris.data), 20))
X = np.hstack((iris.data, E))
y = iris.target
print('Shape of the original data', iris.data.shape)
print('Shape of the extended data', X.shape)
X_indices = np.arange(X.shape[-1])
z_scaler = StandardScaler()
pca_trafo = PCA()
z_data = z_scaler.fit_transform(X)
pca_data = pca_trafo.fit_transform(z_data)
ax = seaborn.heatmap(np.log(pca_trafo.inverse_transform(np.eye(X.shape[1]))), cmap="hot", cbar=False)
ax.set_xlabel('features')
ax.set_ylabel('components')
means = np.mean(pca_trafo.inverse_transform(np.eye(X.shape[1])), axis=0)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.bar(X_indices, means)
ax.set_ylabel('mean contrib. in components')
ax.set_xlabel('feature #')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let us first try an example of how to select the most important attributes from a dataset using a filter
Step2: We will use the popular iris flower dataset, which has 150 observations and 4 attributes
Step3: For comparison, let us look at two rows of the original and the new data
Step4: We can try to find the most important attributes. They should be the first 4
Step5: We really did manage to find the attributes that were related to the predicted variable.
Step6: Let us try using PCA to show how many components it needs to explain the dataset
Step7: We can try using PCA to mark the attributes that contribute the most to the variance in the data
Step8: The matrix is not completely random; there are 3 stripes showing 3 groups of features that are reflected in the components more strongly than the others. These seem to be the most important attributes.
Step9: It seems there are some attributes that are reflected very strongly in those components.
Step10: Something seems off here
Step11: Now I need a few more components to explain the same amount of variance
Step12: Now the heatmap looks random and cannot be used to clearly identify the most important features
Step13: After normalizing the data, the average contribution of every attribute stays close to 0. We therefore cannot tell which one is the most important; PCA only told us which one has the most variance.
Step14: No clear trend can be read from this heatmap. Let us also look at the average contributions to the components per attribute.
|
12,357
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
from keras.layers.core import Dense, Activation
from keras.models import Sequential
from keras.utils import np_utils
from sklearn.cross_validation import train_test_split
from sklearn.datasets.samples_generator import make_blobs
from sklearn.metrics import classification_report, confusion_matrix
n_samples = 10000
n_classes = 3
n_features = 2
# centers - number of classes
# n_features - dimension of the data
X, y_int = make_blobs(n_samples=n_samples, centers=n_classes, n_features=n_features, \
cluster_std=0.5, random_state=0)
# No need to convert the features and targets to the 32-bit format as in plain theano.
# labels need to be one-hot encoded (binary vector of size N for N classes)
y = np_utils.to_categorical(y_int, n_classes)
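`np_utils.to_categorical` above produces the one-hot labels the softmax classifier needs. A minimal numpy equivalent of that encoding (an illustrative sketch, not the Keras internals):

```python
import numpy as np

def one_hot(labels, n_classes):
    # each row gets a single 1 at the position of its class label
    encoded = np.zeros((len(labels), n_classes))
    encoded[np.arange(len(labels)), labels] = 1.0
    return encoded

one_hot(np.array([0, 2, 1]), 3)
```

Each row sums to one and the position of the 1 recovers the original integer label.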
# visualize the data for better understanding
def plot_2d_blobs(dataset):
    X, y = dataset
    axis('equal')
    scatter(X[:, 0], X[:, 1], c=y, alpha=0.1, edgecolors='none')
plot_2d_blobs((X, y_int))
# split the data into training, validation and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
# the model is just a sequence of transformations - layer weights, activations, etc.
model = Sequential()
# weights from input to hidden layer - linear transform
model.add(Dense(3, input_dim=n_features))
# basic non-linearity
model.add(Activation("tanh"))
# weights from hidden to output layer
model.add(Dense(n_classes))
# nonlinearity suitable for a classifier
model.add(Activation("softmax"))
# - loss function suitable for multi-class classification
# - plain stochastic gradient descent with mini-batches
model.compile(loss='categorical_crossentropy', optimizer='sgd')
model.fit(X_train, y_train, nb_epoch=5, batch_size=32);
def evaluate_accuracy(X, y, label):
    # evaluate on the arguments passed in, not on the training-set globals
    _, accuracy = model.evaluate(X, y, show_accuracy=True)
    print(label, 'accuracy:', 100 * accuracy, '%')
evaluate_accuracy(X_train, y_train, 'training')
evaluate_accuracy(X_test, y_test, 'test')
y_test_pred = model.predict_classes(X_test)
plot_2d_blobs((X_test, y_test_pred))
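`confusion_matrix` is imported at the top but never used; something like `confusion_matrix(y_test.argmax(axis=1), y_test_pred)` would complement the accuracy numbers. What such a matrix computes, as a minimal self-contained numpy sketch (the labels below are synthetic, for illustration only):

```python
import numpy as np

def confusion(y_true, y_pred, n_classes):
    # cm[i, j] counts samples of true class i predicted as class j
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

cm = confusion([0, 1, 2, 2], [0, 2, 2, 2], 3)
```

The diagonal holds the correctly classified samples; off-diagonal entries show which classes get confused with which.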
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate data
Step2: Split the data into training and test set
Step3: Create the model
Step4: Train the model
Step5: Evaluate the model
|
12,358
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccma', 'sandbox-1', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
|
12,359
|
<ASSISTANT_TASK:>
Python Code:
def print_n_numbers(n):
#TODO: write a loop that prints numbers from 0 to n (excluding n)
for i in range(n):
print(i)
# now we execute the function
print_n_numbers(5)
def print_list(ll):
# Prints the list
print('\n'.join(ll))
print_list(['Visual Turing Test', 'Summer School', 'Dr. Mario Fritz', 'Mateusz Malinowski'])
! ls
! python boring_function.py 'hello world'
! tail boring_function.py
! nvidia-smi
! head -15 data/daquar/qa.894.raw.train.format_triple
#TODO: Execute the following procedure (Shift+Enter)
from kraino.utils import data_provider
dp = data_provider.select['daquar-triples']
dp
# check the keys of the representation of DAQUAR train
train_text_representation = dp['text'](train_or_test='train')
train_text_representation.keys()
# let's check some entries of the text representation
n_elements = 10
print('== Questions:')
print_list(train_text_representation['x'][:n_elements])
print
print('== Answers:')
print_list(train_text_representation['y'][:n_elements])
print
print('== Image Names:')
print_list(train_text_representation['img_name'][:n_elements])
from toolz import frequencies
train_raw_x = train_text_representation['x']
# we start from building the frequencies table
wordcount_x = frequencies(' '.join(train_raw_x).split(' '))
# print the most and least frequent words
n_show = 5
print(sorted(wordcount_x.items(), key=lambda x: x[1], reverse=True)[:n_show])
print(sorted(wordcount_x.items(), key=lambda x: x[1])[:n_show])
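# As a quick illustration of what `frequencies` computes above, the standard
# library's collections.Counter behaves the same way on this kind of input
# (a sketch; the tiny corpus below is made up):

```python
from collections import Counter

corpus = ['what is on the table', 'what is the colour of the chair']
wordcount = Counter(' '.join(corpus).split(' '))
# most common words come first
print(wordcount.most_common(2))
```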
# Kraino is a framework that helps in fast prototyping Visual Turing Test models
from kraino.utils.input_output_space import build_vocabulary
# This function takes wordcounts and returns word2index - mapping from words into indices,
# and index2word - mapping from indices to words.
word2index_x, index2word_x = build_vocabulary(
this_wordcount=wordcount_x,
truncate_to_most_frequent=0)
word2index_x
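# A minimal sketch of what a vocabulary builder like build_vocabulary does
# (assumption: words are ranked by descending frequency and index 0 is
# reserved for the padding symbol, as in this pipeline):

```python
def build_vocab(wordcount, reserved=('<pad>',)):
    # rank words by descending frequency, after any reserved symbols
    words = sorted(wordcount, key=wordcount.get, reverse=True)
    index2word = dict(enumerate(list(reserved) + words))
    word2index = {w: i for i, w in index2word.items()}
    return word2index, index2word

w2i, i2w = build_vocab({'the': 3, 'what': 2, 'chair': 1})
print(w2i['the'])   # 1 -- index 0 is taken by the padding symbol
```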
from kraino.utils.input_output_space import encode_questions_index
one_hot_x = encode_questions_index(train_raw_x, word2index_x)
print(train_raw_x[:3])
print(one_hot_x[:3])
# We use another framework that is useful to build deep learning models - Keras
from keras.preprocessing import sequence
MAXLEN=30
train_x = sequence.pad_sequences(one_hot_x, maxlen=MAXLEN)
train_x[:3]
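# A minimal sketch of the left-padding performed by sequence.pad_sequences
# above (assumption: the Keras defaults, i.e. 'pre' padding and 'pre'
# truncation with zeros), in plain Python:

```python
def pad_pre(seq, maxlen):
    seq = seq[-maxlen:]                     # truncate from the front if too long
    return [0] * (maxlen - len(seq)) + seq  # pad with zeros on the left

print(pad_pre([4, 7, 2], 6))     # [0, 0, 0, 4, 7, 2]
print(pad_pre([1, 2, 3, 4], 2))  # [3, 4]
```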
# for simplicity, we consider only the first answer word; that is, if the answer is 'knife,fork' we encode only 'knife'
# note, however, that the tutorial supports multiple word answers (is_only_first_answer_word=False)
# with MAX_ANSWER_TIME_STEPS defining number of answer words (ignored if is_only_first_answer_word=True)
MAX_ANSWER_TIME_STEPS=10
from kraino.utils.input_output_space import encode_answers_one_hot
train_raw_y = train_text_representation['y']
wordcount_y = frequencies(' '.join(train_raw_y).replace(', ',',').split(' '))
word2index_y, index2word_y = build_vocabulary(this_wordcount=wordcount_y)
train_y, _ = encode_answers_one_hot(
train_raw_y,
word2index_y,
answer_words_delimiter=train_text_representation['answer_words_delimiter'],
is_only_first_answer_word=True,
max_answer_time_steps=MAX_ANSWER_TIME_STEPS)
print(train_x.shape)
print(train_y.shape)
# word2index_y
test_text_representation = dp['text'](train_or_test='test')
test_raw_x = test_text_representation['x']
test_one_hot_x = encode_questions_index(test_raw_x, word2index_x)
test_x = sequence.pad_sequences(test_one_hot_x, maxlen=MAXLEN)
print_list(test_raw_x[:3])
test_x[:3]
import theano
import theano.tensor as T
# Theano uses symbolic computation, so we first need to create symbolic variables
theano_x = T.scalar()
# we define a relationship between a symbolic input and a symbolic output
theano_y = T.maximum(0,theano_x)
# now it's time for a symbolic gradient wrt. to symbolic variable x
theano_nabla_y = T.grad(theano_y, theano_x)
# we can see that both variables are symbolic, they don't have any numerical values
print(theano_x)
print(theano_y)
print(theano_nabla_y)
# theano.function compiles the symbolic expression into a callable function
theano_f_x = theano.function([theano_x], theano_y)
print(theano_f_x(3))
print(theano_f_x(-3))
# and now for gradients
nabla_f_x = theano.function([theano_x], theano_nabla_y)
print(nabla_f_x(3))
print(nabla_f_x(-3))
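# A quick numerical sanity check of the ReLU gradients printed above, using
# central finite differences in plain Python (no Theano needed):

```python
def relu(x):
    return max(0.0, x)

def num_grad(f, x, eps=1e-6):
    # central finite-difference approximation of df/dx
    return (f(x + eps) - f(x - eps)) / (2 * eps)

print(round(num_grad(relu, 3.0), 3))   # 1.0, matching nabla_f_x(3)
print(round(num_grad(relu, -3.0), 3))  # 0.0, matching nabla_f_x(-3)
```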
# we sample from a noisy x^2 function
from numpy import asarray
from numpy import random
def myfun(x):
return x*x
NUM_SAMPLES = 10000
HIGH_VALUE=10
keras_x = asarray(random.randint(low=0, high=HIGH_VALUE, size=NUM_SAMPLES))
keras_noise = random.normal(loc=0.0, scale=0.1, size=NUM_SAMPLES)
keras_noise = asarray([max(x,0) for x in keras_noise])
keras_y = asarray([myfun(x) + n for x,n in zip(keras_x, keras_noise)])
# print('X:')
# print(keras_x)
# print('Noise')
# print(keras_noise)
# print('Noisy X^2:')
# print(keras_y)
keras_x = keras_x.reshape(keras_x.shape[0],1)
keras_y = keras_y.reshape(keras_y.shape[0],1)
# import keras packages
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout
# build a regression network
KERAS_NUM_HIDDEN = 150
KERAS_NUM_HIDDEN_SECOND = 150
KERAS_NUM_HIDDEN_THIRD = 150
KERAS_DROPOUT_FRACTION = 0.5
m = Sequential()
m.add(Dense(KERAS_NUM_HIDDEN, input_dim=1))
m.add(Activation('relu'))
m.add(Dropout(KERAS_DROPOUT_FRACTION))
#TODO: add one more layer
# m.add(Dense(KERAS_NUM_HIDDEN_SECOND))
# m.add(Activation('relu'))
# m.add(Dropout(KERAS_DROPOUT_FRACTION))
#TODO: add one more layer
# m.add(Dense(KERAS_NUM_HIDDEN_THIRD))
# m.add(Activation('relu'))
# m.add(Dropout(KERAS_DROPOUT_FRACTION))
m.add(Dense(1))
# compile and fit
m.compile(loss='mse', optimizer='adam')
m.fit(keras_x, keras_y, nb_epoch=100, batch_size=250)
keras_x_predict = asarray([1,3,6,12,HIGH_VALUE+10])
keras_x_predict = keras_x_predict.reshape(keras_x_predict.shape[0],1)
keras_predictions = m.predict(keras_x_predict)
print("{0:>10}{1:>10}{2:>10}".format('X', 'Y', 'GT'))
for x,y in zip(keras_x_predict, keras_predictions):
print("{0:>10}{1:>10.2f}{2:>10}".format(x[0], y[0], myfun(x[0])))
#== Model definition
# First we define a model using keras/kraino
from keras.layers.core import Activation
from keras.layers.core import Dense
from keras.layers.core import Dropout
from keras.layers.core import TimeDistributedMerge
from keras.layers.embeddings import Embedding
from kraino.core.model_zoo import AbstractSequentialModel
from kraino.core.model_zoo import AbstractSingleAnswer
from kraino.core.model_zoo import AbstractSequentialMultiplewordAnswer
from kraino.core.model_zoo import Config
from kraino.core.keras_extensions import DropMask
from kraino.core.keras_extensions import LambdaWithMask
from kraino.core.keras_extensions import time_distributed_masked_ave
# This model inherits from AbstractSingleAnswer, and so it produces single answer words
# To use multiple answer words, you need to inherit from AbstractSequentialMultiplewordAnswer
class BlindBOW(AbstractSequentialModel, AbstractSingleAnswer):
"""BOW language-only model that produces single word answers."""
def create(self):
self.add(Embedding(
self._config.input_dim,
self._config.textual_embedding_dim,
mask_zero=True))
self.add(LambdaWithMask(time_distributed_masked_ave, output_shape=[self.output_shape[2]]))
self.add(DropMask())
self.add(Dropout(0.5))
self.add(Dense(self._config.output_dim))
self.add(Activation('softmax'))
model_config = Config(
textual_embedding_dim=500,
input_dim=len(word2index_x.keys()),
output_dim=len(word2index_y.keys()))
model = BlindBOW(model_config)
model.create()
model.compile(
loss='categorical_crossentropy',
optimizer='adam')
text_bow_model = model
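# The BOW encoder above boils down to averaging word embeddings over the
# non-padding time steps; a minimal numpy sketch of that operation
# (the sizes and indices below are made up):

```python
import numpy as np

embedding = np.random.rand(10, 4)   # vocabulary of 10 words, 4-dim embeddings
question = [0, 0, 3, 7, 2]          # left-padded word indices; 0 is padding
mask = np.array([i != 0 for i in question], dtype=float)
embedded = embedding[question]                             # (time, embedding_dim)
bow = (embedded * mask[:, None]).sum(axis=0) / mask.sum()  # masked average
print(bow.shape)  # (4,) -- one fixed-size vector per question
```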
#== Model training
text_bow_model.fit(
train_x,
train_y,
batch_size=512,
nb_epoch=40,
validation_split=0.1,
show_accuracy=True)
train_raw_x[0]
#== Model definition
# First we define a model using keras/kraino
from keras.layers.core import Activation
from keras.layers.core import Dense
from keras.layers.core import Dropout
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import GRU
from keras.layers.recurrent import LSTM
from kraino.core.model_zoo import AbstractSequentialModel
from kraino.core.model_zoo import AbstractSingleAnswer
from kraino.core.model_zoo import AbstractSequentialMultiplewordAnswer
from kraino.core.model_zoo import Config
from kraino.core.keras_extensions import DropMask
from kraino.core.keras_extensions import LambdaWithMask
from kraino.core.keras_extensions import time_distributed_masked_ave
# This model inherits from AbstractSingleAnswer, and so it produces single answer words
# To use multiple answer words, you need to inherit from AbstractSequentialMultiplewordAnswer
class BlindRNN(AbstractSequentialModel, AbstractSingleAnswer):
"""RNN language-only model that produces single word answers."""
def create(self):
self.add(Embedding(
self._config.input_dim,
self._config.textual_embedding_dim,
mask_zero=True))
#TODO: Replace averaging with RNN (you can choose between LSTM and GRU)
# self.add(LambdaWithMask(time_distributed_masked_ave, output_shape=[self.output_shape[2]]))
self.add(GRU(self._config.hidden_state_dim,
return_sequences=False))
self.add(Dropout(0.5))
self.add(Dense(self._config.output_dim))
self.add(Activation('softmax'))
model_config = Config(
textual_embedding_dim=500,
hidden_state_dim=500,
input_dim=len(word2index_x.keys()),
output_dim=len(word2index_y.keys()))
model = BlindRNN(model_config)
model.create()
model.compile(
loss='categorical_crossentropy',
optimizer='adam')
text_rnn_model = model
#== Model training
text_rnn_model.fit(
train_x,
train_y,
batch_size=512,
nb_epoch=40,
validation_split=0.1,
show_accuracy=True)
%env NLTK_DATA=/home/ubuntu/data/visual_turing_test/nltk_data
from nltk.corpus import wordnet as wn
armchair_synset = wn.synset('armchair.n.01')
chair_synset = wn.synset('chair.n.01')
wardrobe_synset = wn.synset('wardrobe.n.01')
print(armchair_synset.wup_similarity(armchair_synset))
print(armchair_synset.wup_similarity(chair_synset))
print(armchair_synset.wup_similarity(wardrobe_synset))
wn.synset('chair.n.01').wup_similarity(wn.synset('person.n.01'))
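# These similarities feed into the WUPS metric used below; a minimal sketch of
# its per-word soft score (assumption: following Malinowski & Fritz's WUPS,
# similarities below the threshold are down-weighted by a factor of 0.1):

```python
def wups_word_score(sim, threshold=0.9):
    # sim is a Wu-Palmer similarity in [0, 1]
    return sim if sim >= threshold else 0.1 * sim

print(wups_word_score(1.0))    # an exact match keeps full credit
print(wups_word_score(0.85))   # a near miss is heavily discounted
```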
test_text_representation = dp['text'](train_or_test='test')
test_raw_x = test_text_representation['x']
test_one_hot_x = encode_questions_index(test_raw_x, word2index_x)
test_x = sequence.pad_sequences(test_one_hot_x, maxlen=MAXLEN)
from numpy import argmax
# predict the probabilities for every word
predictions_scores = text_bow_model.predict([test_x])
print(predictions_scores.shape)
# follow the maximum likelihood principle, and get the best indices to vocabulary
predictions_best = argmax(predictions_scores, axis=-1)
print(predictions_best.shape)
# decode the predicted indices into word answers
predictions_answers = [index2word_y[x] for x in predictions_best]
print(len(predictions_answers))
from kraino.utils import print_metrics
test_raw_y = test_text_representation['y']
_ = print_metrics.select['wups'](
gt_list=test_raw_y,
pred_list=predictions_answers,
verbose=1,
extra_vars=None)
from numpy import random
test_image_name_list = test_text_representation['img_name']
indices_to_see = random.randint(low=0, high=len(test_image_name_list), size=5)
for index_now in indices_to_see:
print(test_raw_x[index_now], predictions_answers[index_now])
from matplotlib.pyplot import axis
from matplotlib.pyplot import figure
from matplotlib.pyplot import imshow
import numpy as np
from PIL import Image
%matplotlib inline
for index_now in indices_to_see:
image_name_now = test_image_name_list[index_now]
pil_im = Image.open('data/daquar/images/{0}.png'.format(image_name_now), 'r')
fig = figure()
fig.text(.2,.05,test_raw_x[index_now], fontsize=14)
axis('off')
imshow(np.asarray(pil_im))
print('question, prediction, ground truth answer')
for index_now in indices_to_see:
print(test_raw_x[index_now], predictions_answers[index_now], test_raw_y[index_now])
from kraino.core.model_zoo import word_generator
# we first need to add word_generator to _config (we could have done this before, in the Config constructor)
# we use maximum likelihood as a word generator
text_rnn_model._config.word_generator = word_generator['max_likelihood']
predictions_answers = text_rnn_model.decode_predictions(
X=test_x,
temperature=None,
index2word=index2word_y,
verbose=0)
_ = print_metrics.select['wups'](
gt_list=test_raw_y,
pred_list=predictions_answers,
verbose=1,
extra_vars=None)
# this contains a list of the image names of our interest;
# it also makes sure that visual and textual features are aligned correspondingly
train_image_names = train_text_representation['img_name']
# the name for visual features that we use
# CNN_NAME='vgg_net'
# CNN_NAME='googlenet'
CNN_NAME='fb_resnet'
# the layer in CNN that is used to extract features
# PERCEPTION_LAYER='fc7'
# PERCEPTION_LAYER='pool5-7x7_s1'
# PERCEPTION_LAYER='res5c-152'
PERCEPTION_LAYER='l2_res5c-152' # l2 prefix since there are l2-normalized visual features
train_visual_features = dp['perception'](
train_or_test='train',
names_list=train_image_names,
parts_extractor=None,
max_parts=None,
perception=CNN_NAME,
layer=PERCEPTION_LAYER,
second_layer=None
)
train_visual_features.shape
train_input = [train_x, train_visual_features]
#== Model definition
# First we define a model using keras/kraino
from keras.models import Sequential
from keras.layers.core import Activation
from keras.layers.core import Dense
from keras.layers.core import Dropout
from keras.layers.core import Layer
from keras.layers.core import Merge
from keras.layers.core import TimeDistributedMerge
from keras.layers.embeddings import Embedding
from kraino.core.model_zoo import AbstractSequentialModel
from kraino.core.model_zoo import AbstractSingleAnswer
from kraino.core.model_zoo import AbstractSequentialMultiplewordAnswer
from kraino.core.model_zoo import Config
from kraino.core.keras_extensions import DropMask
from kraino.core.keras_extensions import LambdaWithMask
from kraino.core.keras_extensions import time_distributed_masked_ave
# This model inherits from AbstractSingleAnswer, and so it produces single answer words
# To use multiple answer words, you need to inherit from AbstractSequentialMultiplewordAnswer
class VisionLanguageBOW(AbstractSequentialModel, AbstractSingleAnswer):
    """BOW vision+language model that produces single-word answers."""
def create(self):
language_model = Sequential()
language_model.add(Embedding(
self._config.input_dim,
self._config.textual_embedding_dim,
mask_zero=True))
language_model.add(LambdaWithMask(
time_distributed_masked_ave,
output_shape=[language_model.output_shape[2]]))
language_model.add(DropMask())
visual_model = Sequential()
if self._config.visual_embedding_dim > 0:
visual_model.add(Dense(
self._config.visual_embedding_dim,
input_shape=(self._config.visual_dim,)))
else:
visual_model.add(Layer(input_shape=(self._config.visual_dim,)))
self.add(Merge([language_model, visual_model], mode=self._config.multimodal_merge_mode))
self.add(Dropout(0.5))
self.add(Dense(self._config.output_dim))
self.add(Activation('softmax'))
# dimensionality of embeddings
EMBEDDING_DIM = 500
# kind of multimodal fusion (ave, concat, mul, sum)
MULTIMODAL_MERGE_MODE = 'concat'
model_config = Config(
textual_embedding_dim=EMBEDDING_DIM,
visual_embedding_dim=0,
multimodal_merge_mode=MULTIMODAL_MERGE_MODE,
input_dim=len(word2index_x.keys()),
output_dim=len(word2index_y.keys()),
visual_dim=train_visual_features.shape[1])
model = VisionLanguageBOW(model_config)
model.create()
model.compile(
loss='categorical_crossentropy',
optimizer='adam')
#== Model training
model.fit(
train_input,
train_y,
batch_size=512,
nb_epoch=40,
validation_split=0.1,
show_accuracy=True)
#== Model definition
# First we define a model using keras/kraino
from keras.models import Sequential
from keras.layers.core import Activation
from keras.layers.core import Dense
from keras.layers.core import Dropout
from keras.layers.core import Layer
from keras.layers.core import Merge
from keras.layers.core import TimeDistributedMerge
from keras.layers.embeddings import Embedding
from kraino.core.model_zoo import AbstractSequentialModel
from kraino.core.model_zoo import AbstractSingleAnswer
from kraino.core.model_zoo import AbstractSequentialMultiplewordAnswer
from kraino.core.model_zoo import Config
from kraino.core.keras_extensions import DropMask
from kraino.core.keras_extensions import LambdaWithMask
from kraino.core.keras_extensions import time_distributed_masked_ave
# This model inherits from AbstractSingleAnswer, and so it produces single answer words
# To use multiple answer words, you need to inherit from AbstractSequentialMultiplewordAnswer
class VisionLanguageBOW(AbstractSequentialModel, AbstractSingleAnswer):
    """BOW vision+language model that produces single-word answers."""
def create(self):
language_model = Sequential()
language_model.add(Embedding(
self._config.input_dim,
self._config.textual_embedding_dim,
mask_zero=True))
language_model.add(LambdaWithMask(
time_distributed_masked_ave,
output_shape=[language_model.output_shape[2]]))
language_model.add(DropMask())
visual_model = Sequential()
if self._config.visual_embedding_dim > 0:
visual_model.add(Dense(
self._config.visual_embedding_dim,
input_shape=(self._config.visual_dim,)))
else:
visual_model.add(Layer(input_shape=(self._config.visual_dim,)))
self.add(Merge([language_model, visual_model], mode=self._config.multimodal_merge_mode))
self.add(Dropout(0.5))
self.add(Dense(self._config.output_dim))
self.add(Activation('softmax'))
# dimensionality of embeddings
EMBEDDING_DIM = 500
# kind of multimodal fusion (ave, concat, mul, sum)
MULTIMODAL_MERGE_MODE = 'mul'
model_config = Config(
textual_embedding_dim=EMBEDDING_DIM,
visual_embedding_dim=EMBEDDING_DIM,
multimodal_merge_mode=MULTIMODAL_MERGE_MODE,
input_dim=len(word2index_x.keys()),
output_dim=len(word2index_y.keys()),
visual_dim=train_visual_features.shape[1])
model = VisionLanguageBOW(model_config)
model.create()
model.compile(
loss='categorical_crossentropy',
optimizer='adam')
text_image_bow_model = model
#== Model training
text_image_bow_model.fit(
train_input,
train_y,
batch_size=512,
nb_epoch=40,
validation_split=0.1,
show_accuracy=True)
#== Model definition
# First we define a model using keras/kraino
from keras.models import Sequential
from keras.layers.core import Activation
from keras.layers.core import Dense
from keras.layers.core import Dropout
from keras.layers.core import Layer
from keras.layers.core import Merge
from keras.layers.core import TimeDistributedMerge
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import GRU
from keras.layers.recurrent import LSTM
from kraino.core.model_zoo import AbstractSequentialModel
from kraino.core.model_zoo import AbstractSingleAnswer
from kraino.core.model_zoo import AbstractSequentialMultiplewordAnswer
from kraino.core.model_zoo import Config
from kraino.core.keras_extensions import DropMask
from kraino.core.keras_extensions import LambdaWithMask
from kraino.core.keras_extensions import time_distributed_masked_ave
# This model inherits from AbstractSingleAnswer, and so it produces single answer words
# To use multiple answer words, you need to inherit from AbstractSequentialMultiplewordAnswer
class VisionLanguageLSTM(AbstractSequentialModel, AbstractSingleAnswer):
    """LSTM vision+language model that produces single-word answers."""
def create(self):
language_model = Sequential()
language_model.add(Embedding(
self._config.input_dim,
self._config.textual_embedding_dim,
mask_zero=True))
#TODO: Replace averaging with RNN (you can choose between LSTM and GRU)
# language_model.add(LambdaWithMask(time_distributed_masked_ave, output_shape=[self.output_shape[2]]))
language_model.add(LSTM(self._config.hidden_state_dim,
return_sequences=False))
visual_model = Sequential()
if self._config.visual_embedding_dim > 0:
visual_model.add(Dense(
self._config.visual_embedding_dim,
input_shape=(self._config.visual_dim,)))
else:
visual_model.add(Layer(input_shape=(self._config.visual_dim,)))
self.add(Merge([language_model, visual_model], mode=self._config.multimodal_merge_mode))
self.add(Dropout(0.5))
self.add(Dense(self._config.output_dim))
self.add(Activation('softmax'))
# dimensionality of embeddings
EMBEDDING_DIM = 500
# kind of multimodal fusion (ave, concat, mul, sum)
MULTIMODAL_MERGE_MODE = 'sum'
model_config = Config(
textual_embedding_dim=EMBEDDING_DIM,
visual_embedding_dim=EMBEDDING_DIM,
hidden_state_dim=EMBEDDING_DIM,
multimodal_merge_mode=MULTIMODAL_MERGE_MODE,
input_dim=len(word2index_x.keys()),
output_dim=len(word2index_y.keys()),
visual_dim=train_visual_features.shape[1])
model = VisionLanguageLSTM(model_config)
model.create()
model.compile(
loss='categorical_crossentropy',
optimizer='adam')
text_image_rnn_model = model
#== Model training
text_image_rnn_model.fit(
train_input,
train_y,
batch_size=5500,
nb_epoch=40,
validation_split=0.1,
show_accuracy=True)
#== Model training
text_image_rnn_model.fit(
train_input,
train_y,
batch_size=1,
nb_epoch=1,
validation_split=0.1,
show_accuracy=True)
#== Model training
text_image_rnn_model.fit(
train_input,
train_y,
batch_size=512,
nb_epoch=40,
validation_split=0.1,
show_accuracy=True)
test_image_names = test_text_representation['img_name']
test_visual_features = dp['perception'](
train_or_test='test',
names_list=test_image_names,
parts_extractor=None,
max_parts=None,
perception=CNN_NAME,
layer=PERCEPTION_LAYER,
second_layer=None
)
test_visual_features.shape
test_input = [test_x, test_visual_features]
from kraino.core.model_zoo import word_generator
# we first need to add word_generator to _config (we could have done this before, in the Config constructor)
# we use maximum likelihood as a word generator
text_image_bow_model._config.word_generator = word_generator['max_likelihood']
predictions_answers = text_image_bow_model.decode_predictions(
X=test_input,
temperature=None,
index2word=index2word_y,
verbose=0)
_ = print_metrics.select['wups'](
gt_list=test_raw_y,
pred_list=predictions_answers,
verbose=1,
extra_vars=None)
from kraino.core.model_zoo import word_generator
# we first need to add word_generator to _config (we could have done this before, in the Config constructor)
# we use maximum likelihood as a word generator
text_image_rnn_model._config.word_generator = word_generator['max_likelihood']
predictions_answers = text_image_rnn_model.decode_predictions(
X=test_input,
temperature=None,
index2word=index2word_y,
verbose=0)
_ = print_metrics.select['wups'](
gt_list=test_raw_y,
pred_list=predictions_answers,
verbose=1,
extra_vars=None)
#TODO: Execute the following procedure (Shift+Enter)
from kraino.utils import data_provider
vqa_dp = data_provider.select['vqa-real_images-open_ended']
# VQA has a few answers associated with one question.
# We take the most frequently occurring answer (single_frequent).
# The argument 'keep_top_qa_pairs' filters out rare answers together with their associated questions.
# We use 1000 to keep only the most frequent question-answer pairs; set it to 0 to keep all pairs and see how the results differ
vqa_train_text_representation = vqa_dp['text'](
train_or_test='train',
answer_mode='single_frequent',
keep_top_qa_pairs=1000)
vqa_val_text_representation = vqa_dp['text'](
train_or_test='val',
answer_mode='single_frequent')
from toolz import frequencies
vqa_train_raw_x = vqa_train_text_representation['x']
vqa_train_raw_y = vqa_train_text_representation['y']
vqa_val_raw_x = vqa_val_text_representation['x']
vqa_val_raw_y = vqa_val_text_representation['y']
# we start from building the frequencies table
vqa_wordcount_x = frequencies(' '.join(vqa_train_raw_x).split(' '))
# we can keep all answer words in the answer as a class
# therefore we use an artificial split symbol '{' to not split the answer into words
# you can see the difference if you replace '{' with ' ' and print vqa_wordcount_y
vqa_wordcount_y = frequencies('{'.join(vqa_train_raw_y).split('{'))
vqa_wordcount_y
from keras.preprocessing import sequence
from kraino.utils.input_output_space import build_vocabulary
from kraino.utils.input_output_space import encode_questions_index
from kraino.utils.input_output_space import encode_answers_one_hot
MAXLEN=30
vqa_word2index_x, vqa_index2word_x = build_vocabulary(this_wordcount = vqa_wordcount_x)
vqa_word2index_y, vqa_index2word_y = build_vocabulary(this_wordcount = vqa_wordcount_y)
vqa_train_x = sequence.pad_sequences(encode_questions_index(vqa_train_raw_x, vqa_word2index_x), maxlen=MAXLEN)
vqa_val_x = sequence.pad_sequences(encode_questions_index(vqa_val_raw_x, vqa_word2index_x), maxlen=MAXLEN)
vqa_train_y, _ = encode_answers_one_hot(
vqa_train_raw_y,
vqa_word2index_y,
answer_words_delimiter=vqa_train_text_representation['answer_words_delimiter'],
is_only_first_answer_word=True,
max_answer_time_steps=1)
vqa_val_y, _ = encode_answers_one_hot(
vqa_val_raw_y,
vqa_word2index_y,
answer_words_delimiter=vqa_train_text_representation['answer_words_delimiter'],
is_only_first_answer_word=True,
max_answer_time_steps=1)
from kraino.core.model_zoo import Config
from kraino.core.model_zoo import word_generator
# We are re-using the BlindBOW mode
# Please make sure you have run the cell with the class definition
# VQA is larger, so we can increase the dimensionality of the embedding
vqa_model_config = Config(
textual_embedding_dim=1000,
input_dim=len(vqa_word2index_x.keys()),
output_dim=len(vqa_word2index_y.keys()),
word_generator = word_generator['max_likelihood'])
vqa_text_bow_model = BlindBOW(vqa_model_config)
vqa_text_bow_model.create()
vqa_text_bow_model.compile(
loss='categorical_crossentropy',
optimizer='adam')
vqa_text_bow_model.fit(
vqa_train_x,
vqa_train_y,
batch_size=512,
nb_epoch=10,
validation_split=0.1,
show_accuracy=True)
# we first need to add word_generator to _config (we could have done this before, in the Config constructor)
# we use maximum likelihood as a word generator
vqa_predictions_answers = vqa_text_bow_model.decode_predictions(
X=vqa_val_x,
temperature=None,
index2word=vqa_index2word_y,
verbose=0)
# Using VQA is unfortunately not that transparent
# we need extra VQA object.
vqa_vars = {
'question_id':vqa_val_text_representation['question_id'],
'vqa_object':vqa_val_text_representation['vqa_object'],
'resfun':
lambda x: \
vqa_val_text_representation['vqa_object'].loadRes(x, vqa_val_text_representation['questions_path'])
}
from kraino.utils import print_metrics
_ = print_metrics.select['vqa'](
gt_list=vqa_val_raw_y,
pred_list=vqa_predictions_answers,
verbose=1,
extra_vars=vqa_vars)
# the name for visual features that we use
VQA_CNN_NAME='vgg_net'
# VQA_CNN_NAME='googlenet'
# the layer in CNN that is used to extract features
VQA_PERCEPTION_LAYER='fc7'
# PERCEPTION_LAYER='pool5-7x7_s1'
vqa_train_visual_features = vqa_dp['perception'](
train_or_test='train',
names_list=vqa_train_text_representation['img_name'],
parts_extractor=None,
max_parts=None,
perception=VQA_CNN_NAME,
layer=VQA_PERCEPTION_LAYER,
second_layer=None
)
vqa_train_visual_features.shape
vqa_val_visual_features = vqa_dp['perception'](
train_or_test='val',
names_list=vqa_val_text_representation['img_name'],
parts_extractor=None,
max_parts=None,
perception=VQA_CNN_NAME,
layer=VQA_PERCEPTION_LAYER,
second_layer=None
)
vqa_val_visual_features.shape
from kraino.core.model_zoo import Config
from kraino.core.model_zoo import word_generator
# dimensionality of embeddings
VQA_EMBEDDING_DIM = 1000
# kind of multimodal fusion (ave, concat, mul, sum)
VQA_MULTIMODAL_MERGE_MODE = 'mul'
vqa_model_config = Config(
textual_embedding_dim=VQA_EMBEDDING_DIM,
visual_embedding_dim=VQA_EMBEDDING_DIM,
multimodal_merge_mode=VQA_MULTIMODAL_MERGE_MODE,
input_dim=len(vqa_word2index_x.keys()),
output_dim=len(vqa_word2index_y.keys()),
visual_dim=vqa_train_visual_features.shape[1],
word_generator=word_generator['max_likelihood'])
vqa_text_image_bow_model = VisionLanguageBOW(vqa_model_config)
vqa_text_image_bow_model.create()
vqa_text_image_bow_model.compile(
loss='categorical_crossentropy',
optimizer='adam')
vqa_train_input = [vqa_train_x, vqa_train_visual_features]
vqa_val_input = [vqa_val_x, vqa_val_visual_features]
#== Model training
vqa_text_image_bow_model.fit(
vqa_train_input,
vqa_train_y,
batch_size=512,
nb_epoch=10,
validation_split=0.1,
show_accuracy=True)
# we first need to add word_generator to _config (we could have done this before, in the Config constructor)
# we use maximum likelihood as a word generator
vqa_predictions_answers = vqa_text_image_bow_model.decode_predictions(
X=vqa_val_input,
temperature=None,
index2word=vqa_index2word_y,
verbose=0)
# Using VQA is unfortunately not that transparent
# we need extra VQA object.
vqa_vars = {
'question_id':vqa_val_text_representation['question_id'],
'vqa_object':vqa_val_text_representation['vqa_object'],
'resfun':
lambda x: \
vqa_val_text_representation['vqa_object'].loadRes(x, vqa_val_text_representation['questions_path'])
}
from kraino.utils import print_metrics
_ = print_metrics.select['vqa'](
gt_list=vqa_val_raw_y,
pred_list=vqa_predictions_answers,
verbose=1,
extra_vars=vqa_vars)
! python neural_solver.py --dataset=daquar-triples --model=sequential-blind-temporal_fusion-single_answer --validation_split=0.1 --metric=wups --max_epoch=20 --max_era=2 --verbosity=monitor_test_metric --word_representation=one_hot
! python neural_solver.py --textual_embedding_size=500 --dataset=daquar-triples --model=sequential-blind-temporal_fusion-single_answer --validation_split=0.1 --metric=wups --max_epoch=20 --max_era=2 --verbosity=monitor_test_metric --word_representation=one_hot
! python neural_solver.py --dataset=daquar-triples --model=sequential-blind-recurrent_fusion-single_answer --validation_split=0.1 --metric=wups --max_epoch=20 --max_era=1 --verbosity=monitor_test_metric --word_representation=one_hot
! python neural_solver.py --text_encoder=gru --dataset=daquar-triples --model=sequential-blind-recurrent_fusion-single_answer --validation_split=0.1 --metric=wups --max_epoch=20 --max_era=1 --verbosity=monitor_test_metric --word_representation=one_hot
! python neural_solver.py --text_encoder=gru --dataset=daquar-triples --model=sequential-blind-cnn_fusion-single_answer_with_temporal_fusion --temporal_fusion=sum --validation_split=0.1 --metric=wups --max_epoch=20 --max_era=1 --verbosity=monitor_test_metric --word_representation=one_hot
! python neural_solver.py --text_encoder=gru --dataset=daquar-triples --model=sequential-multimodal-recurrent_fusion-at_last_timestep_multimodal_fusion-single_answer --temporal_fusion=sum --validation_split=0.1 --metric=wups --max_epoch=20 --max_era=1 --verbosity=monitor_test_metric --word_representation=one_hot --multimodal_fusion=mul
! python neural_solver.py --text_encoder=gru --dataset=daquar-triples --model=sequential-multimodal-recurrent_fusion-at_last_timestep_multimodal_fusion-single_answer --temporal_fusion=sum --validation_split=0.1 --metric=wups --max_epoch=20 --max_era=1 --verbosity=monitor_test_metric --word_representation=one_hot --multimodal_fusion=sum --perception=fb_resnet --perception_layer=l2_res5c-152
! python neural_solver.py --dataset=vqa-real_images-open_ended --model=sequential-blind-recurrent_fusion-single_answer --vqa_answer_mode=single_frequent --metric=vqa --max_epoch=10 --max_era=1 --verbosity=monitor_val_metric --word_representation=one_hot --number_most_frequent_qa_pairs=2000 --use_whole_answer_as_answer_word
! python neural_solver.py --dataset=vqa-real_images-open_ended --model=sequential-multimodal-recurrent_fusion-at_last_timestep_multimodal_fusion-single_answer --vqa_answer_mode=single_frequent --metric=vqa --max_epoch=10 --max_era=1 --verbosity=monitor_val_metric --word_representation=one_hot --number_most_frequent_qa_pairs=2000 --use_whole_answer_as_answer_word --perception=fb_resnet --perception_layer=l2_res5c-152 --multimodal_fusion=mul
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The function below prints each element of the list on a new line. We will use this function later, so please run the interpreter over the following cell (Shift+Enter).
Step2: The notebook can also interface with the command line. Try the following line (again Shift+Enter).
Step3: And now let's execute python's 'boring_function' with an argument. It prints the available GPU, the argument, as well as versions of Theano, and Keras (we will talk about both frameworks later in the tutorial). Since the boring_function imports Theano, its execution may take a while.
Step4: To view the content of the function, run the following
Step5: The command below checks the available GPU machines.
Step6: Challenge
Step7: Let's have a look at the figure in Introduction->Challenge. The figure lists images with associated question-answer pairs. It also comments on the challenges associated with each question-answer-image triplet. We see that to answer the questions properly, the answerer needs to understand the scene visually and understand the question, but also, arguably, has to resort to common-sense knowledge, or even know the preferences of the person asking the question ('What is behind the table?' - what does 'behind' mean?).
Step8: The code above returns a dictionary of three representations of the DAQUAR dataset. For now, we will look only into the 'text' representation. dp['text'] is a function that maps a dataset split to the dataset's textual representation. It will become clearer after executing the following instruction.
Step9: This representation specifies how questions are ended ('?'), answers are ended ('.'), answer words are delimited (DAQUAR sometimes has a set of answer words as an answer, for instance 'knife, fork' may be a valid answer), but most important, it has questions (key 'x'), answers (key 'y'), and names of the corresponding images (key 'img_name').
Step10: Summary
Step11: In addition, we are using a few special symbols that don't occur in the training dataset.
Step12: As we can see, the sequences have different lengths. We will pad the sequences to have the same length $MAXLEN$.
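What the padding step above does can be sketched in a few lines of pure Python (assumption: left zero-padding and left truncation, which are Keras' defaults for `pad_sequences`):

```python
def pad_sequences_sketch(sequences, maxlen, value=0):
    # left-pad short sequences with `value`, and keep only the last `maxlen`
    # entries of long ones (Keras' default 'pre' padding/truncation)
    padded = []
    for seq in sequences:
        seq = list(seq)[-maxlen:]
        padded.append([value] * (maxlen - len(seq)) + seq)
    return padded

batch = pad_sequences_sketch([[5, 2], [7, 1, 4, 9, 3]], maxlen=4)
print(batch)  # [[0, 0, 5, 2], [1, 4, 9, 3]]
```

After padding, every encoded question has exactly `maxlen` entries, so the whole batch can be stacked into one rectangular array.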
Step13: And do the same with the answers.
Step14: Finally, we can also encode the test questions. We need them later to see how well our models generalise to new question-answer-image triplets. Remember, however, that we should use the vocabulary generated from the training samples.
Step15: With encoded question-answer pairs, we finish the first section. But before delving into building and training new models, let's have a look at the summary to see the bigger picture.
Step16: Summary
Step18: You can play with the example above.
Step19: Recurrent Neural Network
Step21: If we swap 'chair' with 'telephone' in the question, we would get a different meaning, wouldn't we? Recurrent Neural Networks (RNNs) have been developed to mitigate this issue by directly processing the time series. As the figure below illustrates, the (temporally) first word's embedding is given to an RNN unit. The RNN unit processes this embedding and passes its output to the second RNN unit. This unit takes both the output of the first RNN unit and the 2nd word's embedding as inputs, and outputs some algebraic combination of both. And so on. The last recurrent unit builds the representation of the whole sequence. Its output is next given to Softmax for the classification. One of the challenges that such approaches have to deal with is keeping long-term dependencies. Roughly speaking, as new inputs arrive, it becomes easier to 'forget' information from the beginning. LSTM and GRU are two particularly successful Recurrent Neural Networks that can preserve such longer dependencies to some degree.
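The order sensitivity described above can be made concrete with a tiny scalar recurrence (toy weights, illustration only): unlike a bag-of-words average, the state h_t = tanh(w·x_t + u·h_{t-1}) ends up different when the same inputs arrive in a different order.

```python
import math

def rnn_last_state(inputs, w=0.7, u=0.5):
    # minimal Elman-style recurrence with scalar state and toy weights
    h = 0.0
    for x in inputs:
        h = math.tanh(w * x + u * h)
    return h

seq = [1.0, -2.0, 3.0]
swapped = [3.0, -2.0, 1.0]
print(rnn_last_state(seq), rnn_last_state(swapped))      # different final states
print(sum(seq) / len(seq), sum(swapped) / len(swapped))  # BOW average: identical
```

The averaging model cannot tell the two orderings apart, while even this one-parameter recurrence can — which is the whole point of replacing the temporal average with an LSTM or GRU.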
Step22: At the end of this Tutorial, you are free to experiment with two examples above.
Step23: Ambiguities
Step24: From the code we see that 'armchair' and 'wardrobe' are surprisingly close to each other. This is because, in large ontologies like WordNet, all indoor things are essentially 'indoor things'.
Step25: Given the encoded test questions, we use the maximum likelihood principle to derive the answers.
Step26: We can now evaluate the answers using WUPS scores. For this tutorial, we care only about Accuracy, and WUPS at 0.9.
Step27: Let's also see the predictions.
Step28: Do you agree with the answers given above? What are your guesses?
Step29: Finally, let's also see the ground truths.
Step30: In the code above, we have randomly taken questions, so different executions may give different answers.
Step31: Visualise question, predicted answers, ground truth answers as before.
Step32: Vision+Language
Step34: BOW + Vision
Step35: Now, we can train the model.
Step37: Interestingly, if we use element-wise multiplication to merge both modalities together, we will get better results.
Step38: If we merge language and visual features with 'mul', do we need to set both embeddings to have the same number of dimensions (textual_embedding_dim == visual_embedding_dim)?
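The question above can be illustrated with a pure-Python fusion sketch (toy vectors, not the actual Keras Merge layer): element-wise modes such as 'mul' and 'sum' require both embeddings to have the same dimensionality, while 'concat' does not.

```python
def fuse(text_vec, visual_vec, mode):
    # 'concat' works for any dimensionalities; element-wise modes need equal dims
    if mode == 'concat':
        return text_vec + visual_vec  # list concatenation
    assert len(text_vec) == len(visual_vec), 'element-wise fusion needs equal dims'
    if mode == 'mul':
        return [t * v for t, v in zip(text_vec, visual_vec)]
    if mode == 'sum':
        return [t + v for t, v in zip(text_vec, visual_vec)]
    raise ValueError(mode)

q = [0.1, 0.2, 0.3]        # textual embedding (dim 3)
img = [1.0, 0.5, 2.0]      # visual embedding (dim 3)
print(fuse(q, img, 'mul'))               # dim 3
print(len(fuse(q, [1.0] * 5, 'concat')))  # dims may differ: 3 + 5 = 8
```

This is why the 'mul' model configuration above sets `visual_embedding_dim=EMBEDDING_DIM`, whereas the earlier 'concat' configuration could leave `visual_embedding_dim=0` and feed the raw visual features through.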
Step40: RNN + Vision
Step41: Batch Size
Step42: Oops, apparently we ran out of memory on our GPUs. Note how large our batches are!
Step43: Ok. Please stop it! Batch size 1 is not good either. Training is very slow.
Step44: ```
Step45: Predictions (Bow with Vision)
Step46: Predictions (RNN with Vision)
Step47: VQA
Step48: Language-Only
Step49: VQA Language+Vision
Step50: Kraino
Step51: Maybe we should use smaller embedding layer with --embedding_size=500 (500 dimensions).
Step52: Now we replace the temporal by the recurrent fussion (LSTM) with --model=sequential-blind-reccurent_fusion-single_answer
Step53: We can easily replace LSTM by GRU as a question encoder with --text_encoder=gru.
Step54: We can also use 1 dimensional CNN to represent questions with --model=sequential-blind-cnn_fusion-single_answer_with_temporal_fusion.
Step55: Or we can combine GRU with visual CNN.
Step56: Or use the above with Resnet (by default it's GoogLeNet) with piece-wise summation.
Step57: But there are more possibilities.
Step58: Or we can use a Vision + Language model. If there are memory problems, try either smaller batches (--batch_size=...), a smaller model (--hidden_state_size or --textual_embedding_size), or use the BOW model (--model=sequential-multimodal-temporal_fusion-single_answer).
|
12,360
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
a = np.array([11,1,2,3,4,5,12,-3,-4,7,4])
print('a = ',a)
print('np.clip(a,0,10) = ', np.clip(a,0,10))
a = np.arange(10).astype(np.int)
print('a=',a)
print('np.clip(a,2.5,7.5)=',np.clip(a,2.5,7.5))
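`np.clip` is simply an element-wise `min(max(x, lower), upper)`; a pure-Python equivalent (sketch) reproduces both results above:

```python
def clip_scalar(x, lo, hi):
    # the same rule np.clip applies to every element
    return min(max(x, lo), hi)

values = [11, 1, 2, 3, 4, 5, 12, -3, -4, 7, 4]
print([clip_scalar(v, 0, 10) for v in values])
# [10, 1, 2, 3, 4, 5, 10, 0, 0, 7, 4]

# non-integer bounds work the same way, as in the second example above
print([clip_scalar(v, 2.5, 7.5) for v in range(10)])
# [2.5, 2.5, 2.5, 3, 4, 5, 6, 7, 7.5, 7.5]
```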
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exemplo com ponto flutuante
|
12,361
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy.stats import norm
import matplotlib
import matplotlib.pyplot as plt
# plotting options
font = {'size' : 20}
plt.rc('font', **font)
plt.rc('text', usetex=matplotlib.checkdep_usetex(True))
matplotlib.rc('figure', figsize=(18, 6) )
# capacity of the BSC
def C_BSC(delta):
    if delta < 1e-20:
        return 1.0  # h2(0) = 0, so C = 1 (also avoids the 0*log(0) warning)
    binary_entropy = -delta*np.log2(delta) - (1-delta)*np.log2(1-delta)
    return 1 - binary_entropy
# dispersion of the BSC
def V_BSC(delta):
    if delta < 1e-20:
        return 0.0  # a deterministic channel has zero dispersion
    return np.square(np.log2((1-delta)/delta)) * delta * (1-delta)
def get_Pe_finite_length_BSC(n, r, delta):
# compute capacity
C = C_BSC(delta)
# compute dispersion
V = V_BSC(delta)
# Q-function is "norm.sf" (survival function)
return norm.sf((n*(C-r) + 0.5*np.log2(n))/np.sqrt(n*V))
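As a cross-check, the same normal approximation can be evaluated with only the standard library, since Q(x) = erfc(x/√2)/2. The sketch below reproduces the formula above and sanity-checks that Pe shrinks with blocklength n and grows with the crossover probability δ:

```python
import math

def Qfunc(x):
    # Gaussian tail probability: Q(x) = 0.5 * erfc(x / sqrt(2)), same as norm.sf(x)
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def h2(p):
    # binary entropy function in bits
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def pe_normal_approx(n, r, delta):
    C = 1.0 - h2(delta)                                              # BSC capacity
    V = (math.log2((1 - delta) / delta)) ** 2 * delta * (1 - delta)  # BSC dispersion
    return Qfunc((n * (C - r) + 0.5 * math.log2(n)) / math.sqrt(n * V))

print(pe_normal_approx(100, 0.5, 0.05))   # short code: noticeable error probability
print(pe_normal_approx(1000, 0.5, 0.05))  # longer code: much smaller
print(pe_normal_approx(1000, 0.5, 0.10))  # noisier channel: larger again
```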
delta_range = np.linspace(0.01,0.12,100)
Pe_BSC_r12_n100 = [get_Pe_finite_length_BSC(100, 0.5, delta) for delta in delta_range]
Pe_BSC_r12_n500 = [get_Pe_finite_length_BSC(500, 0.5, delta) for delta in delta_range]
Pe_BSC_r12_n1000 = [get_Pe_finite_length_BSC(1000, 0.5, delta) for delta in delta_range]
Pe_BSC_r12_n5000 = [get_Pe_finite_length_BSC(5000, 0.5, delta) for delta in delta_range]
fig = plt.figure(1,figsize=(12,7))
plt.semilogy(delta_range, Pe_BSC_r12_n100)
plt.semilogy(delta_range, Pe_BSC_r12_n500)
plt.semilogy(delta_range, Pe_BSC_r12_n1000)
plt.semilogy(delta_range, Pe_BSC_r12_n5000)
plt.axvspan(0.11, 0.12, alpha=0.5, color='gray')
plt.axvline(x=0.11, color='k')
plt.ylim((1e-8,1))
plt.xlim((0.01,0.12))
plt.xlabel('BSC Error probability $\delta$', fontsize=16)
plt.ylabel('$P_e$', fontsize=16)
plt.legend(['$n = 100$', '$n=500$','$n=1000$', '$n=5000$', 'C'], fontsize=16)
plt.text(0.11, 1e-4, 'Capacity limit', {'color': 'k', 'fontsize': 20, 'rotation': -90})
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.grid(True)
#plt.savefig('BSC_Pe_R12.pdf',bbox_inches='tight')
#specify \delta
delta = 0.11
n_range = np.linspace(10,2000,100)
C = C_BSC(delta)
V = V_BSC(delta)
r_Pe_1em3 = [C - np.sqrt(V/n)*norm.isf(1e-3) + 0.5*np.log2(n)/n for n in n_range]
r_Pe_1em6 = [C - np.sqrt(V/n)*norm.isf(1e-6) + 0.5*np.log2(n)/n for n in n_range]
r_Pe_1em9 = [C - np.sqrt(V/n)*norm.isf(1e-9) + 0.5*np.log2(n)/n for n in n_range]
fig = plt.figure(1,figsize=(12,7))
plt.plot(n_range, r_Pe_1em3)
plt.plot(n_range, r_Pe_1em6)
plt.plot(n_range, r_Pe_1em9)
plt.axhline(y=C, color='k')
plt.ylim((0,0.55))
plt.xlim((0,2000))
plt.xlabel('Length $n$', fontsize=16)
plt.ylabel('Rate $r$ (bit/channel use)', fontsize=16)
plt.legend(['$P_e = 10^{-3}$', '$P_e = 10^{-6}$','$P_e = 10^{-9}$', '$C$'], fontsize=16)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.grid(True)
#plt.savefig('BSC_r_delta_011.pdf',bbox_inches='tight')
# capacity of the BEC
def C_BEC(epsilon):
return 1 - epsilon
# dispersion of the BEC
def V_BEC(epsilon):
return epsilon*(1-epsilon)
def get_Pe_finite_length_BEC(n, r, epsilon):
# compute capacity
C = C_BEC(epsilon)
# compute dispersion
V = V_BEC(epsilon)
# Q-function is "norm.sf" (survival function)
return norm.sf((n*(C-r))/np.sqrt(n*V))
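In formula form, the normal approximation evaluated here for the BEC is

```latex
P_e \approx Q\!\left(\frac{n(C - r)}{\sqrt{nV}}\right),
\qquad C = 1 - \epsilon, \qquad V = \epsilon(1 - \epsilon),
```

where Q is the Gaussian tail function. Note that, unlike the BSC function above, no (log2 n)/(2n) correction term is included here.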
epsilon_range = np.linspace(0.2,0.6,100)
Pe_BEC_r12_n100 = [get_Pe_finite_length_BEC(100, 0.5, epsilon) for epsilon in epsilon_range]
Pe_BEC_r12_n500 = [get_Pe_finite_length_BEC(500, 0.5, epsilon) for epsilon in epsilon_range]
Pe_BEC_r12_n1000 = [get_Pe_finite_length_BEC(1000, 0.5, epsilon) for epsilon in epsilon_range]
Pe_BEC_r12_n5000 = [get_Pe_finite_length_BEC(5000, 0.5, epsilon) for epsilon in epsilon_range]
fig = plt.figure(1,figsize=(12,7))
plt.semilogy(epsilon_range, Pe_BEC_r12_n100)
plt.semilogy(epsilon_range, Pe_BEC_r12_n500)
plt.semilogy(epsilon_range, Pe_BEC_r12_n1000)
plt.semilogy(epsilon_range, Pe_BEC_r12_n5000)
plt.axvspan(0.5, 0.55, alpha=0.5, color='gray')
plt.axvline(x=0.5, color='k')
plt.ylim((1e-8,1))
plt.xlim((0.2,0.55))
plt.xlabel('BEC Erasure probability $\epsilon$', fontsize=16)
plt.ylabel('$P_e$', fontsize=16)
plt.legend(['$n = 100$', '$n=500$','$n=1000$', '$n=5000$', 'C'], fontsize=16)
plt.text(0.5, 1e-4, 'Capacity limit', {'color': 'k', 'fontsize': 20, 'rotation': -90})
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.grid(True)
#plt.savefig('BEC_Pe_R12.pdf',bbox_inches='tight')
#specify \epsilon
epsilon = 0.5
n_range = np.linspace(10,2000,100)
C = C_BEC(epsilon)
V = V_BEC(epsilon)
r_Pe_1em3 = [C - np.sqrt(V/n)*norm.isf(1e-3) for n in n_range]
r_Pe_1em6 = [C - np.sqrt(V/n)*norm.isf(1e-6) for n in n_range]
r_Pe_1em9 = [C - np.sqrt(V/n)*norm.isf(1e-9) for n in n_range]
fig = plt.figure(1,figsize=(12,7))
plt.plot(n_range, r_Pe_1em3)
plt.plot(n_range, r_Pe_1em6)
plt.plot(n_range, r_Pe_1em9)
plt.axhline(y=C, color='k')
plt.ylim((0,0.55))
plt.xlim((0,2000))
plt.xlabel('Length $n$', fontsize=16)
plt.ylabel('Rate $r$ (bit/channel use)', fontsize=16)
plt.legend(['$P_e = 10^{-3}$', '$P_e = 10^{-6}$','$P_e = 10^{-9}$', '$C$'], fontsize=16)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.grid(True)
#plt.savefig('BEC_r_epsilon_05.pdf',bbox_inches='tight')
from scipy.special import comb
def get_Pe_RCU_BSC(n, r, delta):
binomials = [comb(n,t,exact=False) for t in range(n+1)]
return np.sum([binomials[t] * (delta**t) * ((1-delta)**(n-t)) * min(1, np.sum([binomials[j] for j in range(t+1)]) * 2**(-n*(1-r))) for t in range(n+1)])
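The floating-point binomial coefficients in `get_Pe_RCU_BSC` overflow to `inf` a bit beyond n ≈ 1000, which is one reason the RCU curves are only evaluated up to n = 1000 below. A log-domain evaluation of the same bound (an alternative sketch, not part of the original notebook) stays finite for n = 5000 as well:

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def get_Pe_RCU_BSC_log(n, r, delta):
    # RCU bound for the BSC evaluated in the log domain, so the
    # binomial coefficients never overflow
    t = np.arange(n + 1)
    log_binom = gammaln(n + 1) - gammaln(t + 1) - gammaln(n - t + 1)
    log_pmf = log_binom + t * np.log(delta) + (n - t) * np.log(1 - delta)
    # log of (cumulative sum of binomials) * 2^{-n(1-r)}
    log_cum = np.logaddexp.accumulate(log_binom) - n * (1 - r) * np.log(2)
    log_terms = log_pmf + np.minimum(0.0, log_cum)   # min(1, .) in log domain
    return np.exp(logsumexp(log_terms))

# finite even for n = 5000, where the direct evaluation overflows
print(get_Pe_RCU_BSC_log(5000, 0.5, 0.05))
```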
delta_range = np.linspace(0.01,0.12,100)
Pe_BSC_r12_n100 = [get_Pe_finite_length_BSC(100, 0.5, delta) for delta in delta_range]
Pe_BSC_r12_n500 = [get_Pe_finite_length_BSC(500, 0.5, delta) for delta in delta_range]
Pe_BSC_r12_n1000 = [get_Pe_finite_length_BSC(1000, 0.5, delta) for delta in delta_range]
Pe_BSC_r12_n5000 = [get_Pe_finite_length_BSC(5000, 0.5, delta) for delta in delta_range]
Pe_RCU_BSC_r12_n100 = [get_Pe_RCU_BSC(100, 0.5, delta) for delta in delta_range]
Pe_RCU_BSC_r12_n500 = [get_Pe_RCU_BSC(500, 0.5, delta) for delta in delta_range]
Pe_RCU_BSC_r12_n1000 = [get_Pe_RCU_BSC(1000, 0.5, delta) for delta in delta_range]
fig = plt.figure(1,figsize=(10,7))
plt.semilogy(delta_range, Pe_BSC_r12_n100)
plt.semilogy(delta_range, Pe_BSC_r12_n500)
plt.semilogy(delta_range, Pe_BSC_r12_n1000)
plt.semilogy(delta_range, Pe_BSC_r12_n5000)
plt.axvline(x=0.11, color='k')
plt.gca().set_prop_cycle(None)
# dashed curves represent the RCU bound
plt.semilogy(delta_range, Pe_RCU_BSC_r12_n100, '--')
plt.semilogy(delta_range, Pe_RCU_BSC_r12_n500, '--')
plt.semilogy(delta_range, Pe_RCU_BSC_r12_n1000, '--')
plt.axvspan(0.11, 0.12, alpha=0.5, color='gray')
plt.ylim((1e-8,1))
plt.xlim((0.01,0.12))
plt.xlabel(r'BSC Error probability $\delta$', fontsize=16)
plt.ylabel('$P_e$', fontsize=16)
plt.legend(['$n = 100$', '$n=500$','$n=1000$', '$n=5000$', 'C'], fontsize=16)
plt.text(0.11, 1e-4, 'Capacity limit', {'color': 'k', 'fontsize': 20, 'rotation': -90})
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.grid(True)
#plt.savefig('BSC_Pe_R12.pdf',bbox_inches='tight')
def get_Pe_RCU_BEC(n, r, epsilon):
return np.sum([comb(n,t,exact=True) * (epsilon**t) * ((1-epsilon)**(n-t)) * min(1, 2**(-n*(1-r)+t)) for t in range(n+1)])
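Since the RCU expression is easy to get wrong, a quick self-contained sanity check (re-defining the function so the snippet runs on its own) can confirm the BEC bound behaves like a probability and degrades as the erasure rate grows:

```python
import numpy as np
from scipy.special import comb

def rcu_bec(n, r, epsilon):
    # same RCU expression as get_Pe_RCU_BEC above, repeated so this
    # snippet is self-contained
    return np.sum([comb(n, t, exact=True) * (epsilon**t) * ((1 - epsilon)**(n - t))
                   * min(1, 2**(-n*(1 - r) + t)) for t in range(n + 1)])

# the bound should lie in (0, 1] and grow with the erasure probability
p_low = rcu_bec(100, 0.5, 0.30)
p_high = rcu_bec(100, 0.5, 0.45)
print(p_low, p_high)
```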
epsilon_range = np.linspace(0.2,0.6,100)
Pe_BEC_r12_n100 = [get_Pe_finite_length_BEC(100, 0.5, epsilon) for epsilon in epsilon_range]
Pe_BEC_r12_n500 = [get_Pe_finite_length_BEC(500, 0.5, epsilon) for epsilon in epsilon_range]
Pe_BEC_r12_n1000 = [get_Pe_finite_length_BEC(1000, 0.5, epsilon) for epsilon in epsilon_range]
Pe_BEC_r12_n5000 = [get_Pe_finite_length_BEC(5000, 0.5, epsilon) for epsilon in epsilon_range]
Pe_RCU_BEC_r12_n100 = [get_Pe_RCU_BEC(100, 0.5, epsilon) for epsilon in epsilon_range]
Pe_RCU_BEC_r12_n500 = [get_Pe_RCU_BEC(500, 0.5, epsilon) for epsilon in epsilon_range]
Pe_RCU_BEC_r12_n1000 = [get_Pe_RCU_BEC(1000, 0.5, epsilon) for epsilon in epsilon_range]
fig = plt.figure(1,figsize=(10,7))
plt.semilogy(epsilon_range, Pe_BEC_r12_n100)
plt.semilogy(epsilon_range, Pe_BEC_r12_n500)
plt.semilogy(epsilon_range, Pe_BEC_r12_n1000)
plt.semilogy(epsilon_range, Pe_BEC_r12_n5000)
plt.axvline(x=0.5, color='k')
plt.gca().set_prop_cycle(None)
plt.semilogy(epsilon_range, Pe_RCU_BEC_r12_n100, '--')
plt.semilogy(epsilon_range, Pe_RCU_BEC_r12_n500, '--')
plt.semilogy(epsilon_range, Pe_RCU_BEC_r12_n1000, '--')
plt.axvspan(0.5, 0.55, alpha=0.5, color='gray')
plt.axvline(x=0.5, color='k')
plt.ylim((1e-8,1))
plt.xlim((0.2,0.55))
plt.xlabel(r'BEC Erasure probability $\epsilon$', fontsize=16)
plt.ylabel('$P_e$', fontsize=16)
plt.legend(['$n = 100$', '$n=500$','$n=1000$', '$n=5000$', 'C'], fontsize=16)
plt.text(0.5, 1e-4, 'Capacity limit', {'color': 'k', 'fontsize': 20, 'rotation': -90})
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.grid(True)
#plt.savefig('BEC_Pe_R12.pdf',bbox_inches='tight')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Binary Symmetric Channel (BSC)
Step2: The finite-length capacity for the BSC channel is given by
Step3: Show finite length capacity estimates for some codes of different lengths $n$
Step4: Different representation, for a given channel (and here, we pick $\delta = 0.11$), show the rate the code should at most have to allow for decoding with an error rate $P_e$ (here we specify different $P_e$) if a certain length $n$ is available.
Step5: Binary Erasure Channel (BEC)
Step6: The finite-length capacity for the BEC channel is given by (note, here we do not use the correction term)
Step7: Show finite length capacity estimates for some codes of different lengths $n$
Step8: Different representation, for a given channel (and here, we pick $\epsilon = 0.5$), show the rate the code should at most have to allow for decoding with an error rate $P_e$ (here we specify different $P_e$) if a certain length $n$ is available.
Step9: Extra Material, Random Coding Union Bound for the BSC
Step10: Extra Material, Random Coding Union Bound for the BEC
|
12,362
|
<ASSISTANT_TASK:>
Python Code:
%%javascript
$.getScript('misc/kmahelona_ipython_notebook_toc.js')
fn = r"data/drinks.csv"
# Answer:
df = pd.read_csv(fn, sep=",")
# Answer:
df.head(10)
# Answer
df.sort_values("total_litres_of_pure_alcohol", ascending=False).head()
# Answer
# df.groupby("continent").beer_servings.mean()
# or
df.groupby("continent")["beer_servings"].mean()
# Answer:
df["continent"].unique()
# or
# set(df["continent"].tolist())
# Answer:
df[df["continent"].isnull()]
# Answer:
df.loc[df["continent"].isnull(), "continent"] = "NAs"
# Answer:
# df.groupby('continent').wine_servings.describe()
# or
df.groupby('continent')["wine_servings"].describe()
# Answer:
df.groupby('continent').median()
# Answer:
df.groupby('continent').spirit_servings.agg(['mean', 'min', 'max'])
# Answer:
df.groupby("continent").boxplot(figsize=(12, 9), rot=90)
# Answer:
cars1 = pd.read_csv("data/cars1.csv")
cars2 = pd.read_csv("data/cars2.csv")
# Answer:
cars1 = cars1.loc[:, "mpg":"car"]
# Answer:
cars = cars1.append(cars2)
# or
cars_ = pd.concat([cars1, cars2], axis=0)
# check for equality
cars_.equals(cars)
# Answer
def get_first_word(string_):
return string_.split()[0]
cars["vendor"] = cars["car"].apply(get_first_word)
# or
cars["vendor"] = cars["car"].apply(lambda s: s.split()[0])
df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']},
index=[0, 1, 2, 3])
df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],
'B': ['B4', 'B5', 'B6', 'B7'],
'C': ['C4', 'C5', 'C6', 'C7'],
'D': ['D4', 'D5', 'D6', 'D7']},
index=[4, 5, 6, 7])
df3 = pd.DataFrame({'A': ['A8', 'A9', 'A10', 'A11'],
'B': ['B8', 'B9', 'B10', 'B11'],
'C': ['C8', 'C9', 'C10', 'C11'],
'D': ['D8', 'D9', 'D10', 'D11']},
index=[8, 9, 10, 11])
# Answer:
pd.concat([df1, df2, df3], axis=0)
# or
# df1.append(df2).append(df3)
# Answer
print(sum(pd.concat([df1, df2, df3], axis=1).isnull().sum()))
# or
shape1 = df1.shape
shape2 = df2.shape
shape3 = df3.shape
num_true_values = shape1[0]*shape1[1] + shape2[0]*shape2[1] + shape3[0]*shape3[1]
total_num_values = (shape1[0] + shape2[0] + shape3[0]) * (shape1[1] + shape2[1] + shape3[1])
print(total_num_values - num_true_values)
assert sum(pd.concat([df1, df2, df3], axis=1).isnull().sum()) == total_num_values - num_true_values
raw_data_1 = {
'subject_id': ['1', '2', '3', '4', '5'],
'first_name': ['Alex', 'Amy', 'Allen', 'Alice', 'Ayoung'],
'last_name': ['Anderson', 'Ackerman', 'Ali', 'Aoni', 'Atiches']}
raw_data_2 = {
'subject_id': ['4', '5', '6', '7', '8', '9', '10'],
'first_name': ['Alice', 'Ayoung', 'Bran', 'Bryce', 'Betty', 'Jane', np.nan],
'last_name': ['Aoni', 'Atiches', 'Balwner', 'Brice', 'Btisan', np.nan, 'Doe']}
raw_data_3 = {
'subject_id': ['1', '2', '3', '4', '5', '7', '8', '9', '10', '11'],
'test_id': [51, 15, 15, 61, 16, 14, 15, 1, 61, 16]}
data1 = pd.DataFrame(raw_data_1, columns = ['subject_id', 'first_name', 'last_name'])
data2 = pd.DataFrame(raw_data_2, columns = ['subject_id', 'first_name', 'last_name'])
data3 = pd.DataFrame(raw_data_3, columns = ['subject_id','test_id'])
# Answer:
all_data = pd.concat([data1, data2], ignore_index=True)
assert len(set(all_data.index)) == all_data.shape[0]
# otherwise you might get unexpected behaviour:
all_data = pd.concat([data1, data2], ignore_index=False)
all_data.loc[0, :]
# Answer:
all_data_col = pd.concat([data1, data2], axis = 1)
all_data_col
# Answer:
pd.merge(all_data, data3)
# Answer:
dfm = pd.merge(all_data, data3, how="outer")
cond = dfm["test_id"].notnull() & (dfm["first_name"].isnull() | dfm["last_name"].isnull())
sum(cond)
# Answer:
pd.merge(data1, data2, how='inner')
df = pd.DataFrame([[1, 1], [1, -1], [2, 1], [2, 2]], columns=["groups", "vals"])
df
# Answer
def replace_within_group(group):
mask = group < 0
# Select those values where it is < 0, and replace
# them with the mean of the values which are not < 0.
group[mask] = group[~mask].mean() # "~" is the "invert" or "complement" operation
return group
df.groupby(['groups'])['vals'].transform(replace_within_group)
df = pd.DataFrame({'a':[1,2,3,4,5,6],
'b':[1,2,3,4,5,6],
'c':['q', 'q', 'q', 'q', 'w', 'w'],
'd':['z','z','z','o','o','o']})
# Answer:
df['e'] = df['a'] + df['b']
df['f'] = (df.groupby(['c', 'd'])['e'].transform('sum'))
index = pd.date_range('10/1/1999', periods=1100)
ser = pd.Series(np.random.normal(0.5, 2, 1100), index=index)
ser = ser.rolling(window=100,min_periods=100).mean().dropna()
# Answer:
key = lambda x: x.year
zscore = lambda x: (x - x.mean()) / x.std()
transformed = ser.groupby(key).transform(zscore)
# Answer:
grouped = transformed.groupby(key)
grouped.mean()
grouped.std()
# Answer:
compare = pd.DataFrame({'Original': ser, 'Transformed': transformed}, columns=["Transformed", "Original"])
compare.plot(figsize=(12,9), color=["blue", "red"])
df = pd.DataFrame({"Person": ["a", "a", "a", "b", "c", "c"], "Code": ["D99", "E32", "A41", "D99", "D99", "A41"]}, columns=["Person", "Code"])
df
# Answer:
df["Values"] = pd.Series([True]*df.shape[0])
dfp = df.pivot(index="Person", columns="Code", values="Values")
dfp = dfp.fillna(value=False)
dfp = dfp.reset_index()
dfp
# Answer
metadata = pd.read_excel(r"data/microbiome/metadata.xls", sheetname='Sheet1')
chunks = []
for i in range(9):
df_temp = pd.read_excel(r"data/microbiome/MID{0}.xls".format(i+1), 'Sheet 1', header=None, names=['Taxon', 'Count'])
df_temp = df_temp.set_index("Taxon")
df_temp.columns = ['Count']
df_temp.index.name = 'Taxon'
for colname in metadata.columns:
df_temp[colname] = metadata.loc[i, colname]
chunks.append(df_temp)
df = pd.concat(chunks)
df = df.reset_index()
df.head()
# Answers
titanic = pd.read_excel("data/titanic.xls", "titanic")
# 1.
# explicit version
# titanic.groupby('sex')['survived'].agg("mean")
# or short version
titanic.groupby('sex')['survived'].mean()
# 2.
titanic.groupby(['pclass','sex'])['survived'].mean()
# 3.
titanic['agecat'] = pd.cut(titanic.age, [0, 13, 20, 64, 100], labels=['child', 'adolescent', 'adult', 'senior'])
titanic.groupby(['agecat', 'pclass','sex'])['survived'].mean()
titanic.groupby(['agecat', 'pclass','sex'])['survived'].agg(["mean", "count"])
titanic.groupby(['sex','pclass'])["survived"].sum().plot(kind='barh')
# Answer
# 1.
titanic.survived = titanic.survived.astype(bool)
# 2.
death_counts = pd.crosstab([titanic.pclass, titanic.sex], titanic.survived)
# 3.
death_counts.plot(kind='bar', stacked=True)
# Answer
# 1.
sum_death_counts = death_counts.sum(axis=1).astype(float)
# 2.
adjusted_death_counts = death_counts.div(sum_death_counts, axis=0)
# 3.
adjusted_death_counts.plot(kind='barh', stacked=True)
# or in short:
# death_counts.div(death_counts.sum(1).astype(float), axis=0).plot(kind='barh', stacked=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting and Knowing your Data
Step2: Task
Step3: Task
Step4: Groupby
Step5: Task
Step6: Task
Step7: Task
Step8: Task
Step9: Task
Step10: Task
Step11: Task
Step12: Concatenate, Merge & Join
Step13: Task
Step14: Task
Step15: Apply (interspersed)
Step16: Consider the following DataFrames for the next exercises
Step17: Task
Step18: Task
Step19: Let's consider another data set to do some more Merge, Join & Concatenate exerciseses
Step20: Task
Step21: Task
Step22: Task
Step23: Task
Step24: Task
Step25: Transform
Step26: Task
Step27: Task
Step28: Task
Step29: Task
Step30: Pivot
Step31: Combine DataFrames
Step32: GroupBy Titanic data
Step33: Let's plot the number of survivors grouped by sex and passenger class.
Step34: Task
Step35: Task
|
12,363
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'sandbox-1', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
12,364
|
<ASSISTANT_TASK:>
Python Code:
from IPython.parallel import Client
%pylab inline
client = Client()
client.block = True # Computations run synchronously.
print client.ids
dview = client.direct_view()
def f(x):
return x
dview.apply(f, "Hello World")
even_dview = client[::2]
even_dview.apply(f, "Hello World")
bview = client.load_balanced_view()
bview.apply(f, "Hello World")
import os
print os.getpid()
dview.apply(os.getpid)
@dview.remote(block=True)
def getpid():
import os
return os.getpid()
getpid()
%%px
import os
print os.getpid()
res = dview.execute('print os.getpid()')
res.display_outputs()
dview.execute('import numpy')
%px import numpy
with dview.sync_imports():
import numpy
%%file script.py
def hello_world():
print "Hello World"
hello_world()
dview.run("script.py").display_outputs()
%%px
%pylab inline
z = np.random.random(10)
hist(z);
dview['a'] = 5
dview['b'] = 10
print dview['a']
print dview['os.getpid']
dview.push(dict(a=1, b=2))
dview.pull(('a','b'))
a = np.arange(32)
dview.scatter('a', a)
%px print a
%px a = a**2
dview.gather('a')
def random_number(factor):
import time
import os
import random
r = random.random()
time.sleep(5*r)
return os.getpid(), factor*r
bview.block=True
bview.apply(random_number,1)
bview.block = False
res = bview.map(random_number, range(10))
res.ready()
print res.ready()
print res.result
for result in bview.map(random_number, range(10)):
print result
def f(z, c=-.065+.66j):
return z**2 + c
m=1024
n=1024
zmin = -1.3 - 1j * 1.3
zmax = 1.3 + 1j * 1.3
xs = np.linspace(zmin.real, zmax.real, n)
ys = np.linspace(zmin.imag, zmax.imag, m)
X, Y = np.meshgrid(xs, ys)
Z = X + 1j * Y
print Z[:2,:2]
def julia(f, Z, tmax=256):
J = np.ones(Z.shape) * tmax
for t in xrange(tmax):
mask = np.abs(Z) <= 2.
        Z[mask] = f(Z[mask])
        J[~mask] -= 1
return J
ZZ = np.copy(Z)
J = julia(f, ZZ)
figure(figsize=[10,10])
imshow(J)
axis('off');
%%px
import numpy as np
def f(z, c=-.065+.66j):
return z**2 + c
def julia_parallel(f,Z,tmax=256):
J = np.ones(Z.shape) * tmax
for t in xrange(tmax):
mask = np.abs(Z) <= 2.
Z[mask] = f(Z[mask])
        J[~mask] -= 1
return J
dview.scatter('Z', Z)
%px ZZ = np.copy(Z)
%px J = julia_parallel(f,ZZ)
PJ = dview.gather('J')
figure(figsize=[10,10])
imshow(PJ)
axis('off');
Z = X + 1j * Y
%%time
J = julia(f, Z)
Z = X + 1j * Y
%%time
dview.scatter('Z', Z)
%px ZZ = np.copy(Z)
%px J = julia_parallel(f,ZZ)
PJ = dview.gather('J')
np.allclose(J,PJ)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Before we start, we first need to assign a number of engines in our cluster. This can be done through the IPython notebook interface or through the command line. The basic syntax is
Step2: If there are no engines running, or they are incorrectly set up, then this will throw an error.
Step3: One can also select a subset of the engines directly using list slicing
Step4: Load Balanced Views
Step5: How do we execute code?
Step6: As shown before, we can apply a function (with parameters if needed) to the view, which returns the result for each engine in a list.
Step7: Alternatively, we can apply a function decorator to our function and call it as normal. A function decorator is a function that takes another function as an argument and returns a modified (wrapped) version of it.
Step8: Another method uses the cell magic %%px or line magic %px. This executes the line or cell on a direct view of the current client (specifically its current list of targets).
Step9: Finally, and most fundamentally, one can execute a string of code using the execute function.
Step10: Imports and Environments
Step11: The latter method uses the context manager sync_imports which simultaneously sets the local and remote imports.
Step12: Since the computational engines are all running with IPython, then %magics are supported, as well as inline plotting, for example,
Step13: Accessing and Setting Data
Step14: Values can be set and retrieved using the more conventional push and pull methods respectively.
Step15: Arrays can be distributed evenly over the engines using the scatter method.
Step16: This allows us to compute the result for each array before using the gather method to piece them back together.
Step17: Load Balanced Views
Step18: If we apply a function to the load balanced view we receive only one result, as the function has only been executed once.
Step19: To pass in multiple inputs (effectively creating a queue of parameters), we can use the map function. By setting bview.block=False we allow the results to stream in asynchronously.
Step20: The map instantly returns an AsyncResult object which begins empty and collects the results as they are computed. To check whether the computation is complete we run
Step21: which in this case is False. After waiting a sufficient amount of time, the complete result becomes available
Step22: If we call result before the computation is complete then the system will wait until it is. Note that there is no particular ordering to the engine used - the scheduler is picking the next available engine to run the function. The load balanced view map is also iterable which allows for results to be returned as and when they become available.
Step23: Applying Parallel Computing to Fractals (Julia sets)
Step24: Note that Python uses the letter j rather than i for the complex unit.
Step25: Using the meshgrid function we can easily create an array of complex numbers.
Step26: Application
Step27: The above function carries out this iteration in a very succinct fashion. First, we initialise an array with our maximum iteration number. We then check our complex plane array to see if any values have a modulus greater than 2. This creates a boolean index array which we can use to select only the elements in our array Z which satisfy the condition.
Step28: In parallel we first feed the functions to each engine.
Step29: We can then scatter/distribute the array amongst the different engines. Each engine runs the same number of loops on the same-sized array, so in theory the load should be evenly distributed. In reality the size of the masks may vary, so one would expect some deviation in the workloads.
Step30: Piecing together all the parts we arrive at the same fractal as before.
Step31: So did we speed it up?
Step32: So we see that the computational time has indeed been reduced by a factor of the number of engines we chose at the start (plus some overhead). For this small example the overhead is relatively large, however if we wanted to double or triple the resolution then this overhead would become insignificant.
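The asynchronous map-and-collect pattern described in Steps 19 to 22 is not specific to IPython.parallel; a minimal stdlib sketch of the same idea with concurrent.futures (an analogue for illustration, not the notebook's API):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def square(x):
    return x * x

# submit a queue of tasks and collect results as they complete,
# analogous to a load-balanced view's asynchronous map
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(square, i) for i in range(10)]
    results = [f.result() for f in as_completed(futures)]

print(sorted(results))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

As with the load-balanced view, completion order is not guaranteed, which is why the results are sorted before use.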
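The boolean-mask update walked through in Step 27 can be checked on a tiny grid; a Python 3 sketch of the same julia routine (the two sample points and the small tmax are illustrative choices):

```python
import numpy as np

def f(z, c=-.065 + .66j):
    return z**2 + c

def julia(f, Z, tmax=16):
    J = np.ones(Z.shape) * tmax
    for t in range(tmax):
        mask = np.abs(Z) <= 2.   # points still inside the radius-2 disk
        Z[mask] = f(Z[mask])     # iterate only the surviving points
        J[~mask] -= 1            # escaped points count down each step
    return J

Z = np.array([0.0 + 0.0j, 2.0 + 0.0j])  # a bounded orbit and an escaping one
J = julia(f, Z)
print(J)  # [16.  1.]
```

The `~mask` negation is the current NumPy idiom for inverting a boolean mask; the `-mask` form seen in older notebooks was deprecated and later became an error.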
|
12,365
|
<ASSISTANT_TASK:>
Python Code:
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow-federated
!pip install --quiet --upgrade nest-asyncio
import nest_asyncio
nest_asyncio.apply()
import collections
import time
import tensorflow as tf
import tensorflow_federated as tff
source, _ = tff.simulation.datasets.emnist.load_data()
def map_fn(example):
return collections.OrderedDict(
x=tf.reshape(example['pixels'], [-1, 784]), y=example['label'])
def client_data(n):
ds = source.create_tf_dataset_for_client(source.client_ids[n])
return ds.repeat(10).shuffle(500).batch(20).map(map_fn)
train_data = [client_data(n) for n in range(10)]
element_spec = train_data[0].element_spec
def model_fn():
model = tf.keras.models.Sequential([
tf.keras.layers.InputLayer(input_shape=(784,)),
tf.keras.layers.Dense(units=10, kernel_initializer='zeros'),
tf.keras.layers.Softmax(),
])
return tff.learning.from_keras_model(
model,
input_spec=element_spec,
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
trainer = tff.learning.build_federated_averaging_process(
model_fn, client_optimizer_fn=lambda: tf.keras.optimizers.SGD(0.02))
def evaluate(num_rounds=10):
state = trainer.initialize()
for _ in range(num_rounds):
t1 = time.time()
state, metrics = trainer.next(state, train_data)
t2 = time.time()
print('metrics {m}, round time {t:.2f} seconds'.format(
m=metrics, t=t2 - t1))
evaluate()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Single-machine simulation
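At its core, the federated averaging process built above combines client updates as a data-size-weighted average of model weights; a minimal NumPy sketch of just that reduction (an illustration of the idea, not the TFF API):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    # weight each client's parameters by its number of examples
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_layers)
    ]

# two hypothetical clients: one weight matrix and one bias vector each
w_a = [np.ones((2, 2)), np.zeros(2)]
w_b = [3 * np.ones((2, 2)), np.ones(2)]
avg = federated_average([w_a, w_b], client_sizes=[1, 3])
print(avg[0][0, 0], avg[1][0])  # 2.5 0.75
```

The client with three times as much data pulls the average three times as hard, which is exactly the weighting FedAvg applies per round.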
|
12,366
|
<ASSISTANT_TASK:>
Python Code:
from pred import Predictor
from pred import sequence_vector
from pred import chemical_vector
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
benchmarks = ["Data/Training/phos_CDK1.csv", "Data/Training/phos_CK2.csv", "Data/Training/phos_MAPK1.csv", "Data/Training/phos_PKA.csv", "Data/Training/phos_PKC.csv"]
for j in benchmarks:
for i in par:
print("y", i, " ", j)
y = Predictor()
y.load_data(j)
y.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=0)
y.supervised_training("bagging")
del y
print("x", i, " ", j)
x = Predictor()
x.load_data(file=j)
x.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=1)
x.supervised_training("bagging")
del x
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
benchmarks = ["Data/Training/phos_CDK1.csv", "Data/Training/phos_CK2.csv", "Data/Training/phos_MAPK1.csv", "Data/Training/phos_PKA.csv", "Data/Training/phos_PKC.csv"]
for j in benchmarks:
for i in par:
try:
print("y", i, " ", j)
y = Predictor()
y.load_data(j)
y.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=0)
y.supervised_training("bagging")
del y
print("x", i, " ", j)
x = Predictor()
x.load_data(file=j)
x.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=1)
x.supervised_training("bagging")
del x
except:
print("NotValidBenchmark")
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
benchmarks = ["Data/Training/phos_CDK1.csv", "Data/Training/phos_CK2.csv", "Data/Training/phos_MAPK1.csv", "Data/Training/phos_PKA.csv", "Data/Training/phos_PKC.csv"]
for j in benchmarks:
for i in par:
print("y", i, " ", j)
y = Predictor()
y.load_data(j)
y.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=0)
y.supervised_training("bagging")
del y
print("x", i, " ", j)
x = Predictor()
x.load_data(file=j)
x.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=1)
x.supervised_training("bagging")
del x
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Controlling for Random Negative vs Sans Random in Imbalanced Techniques using S, T, and Y Phosphorylation.
Step2: Y Phosphorylation
Step3: T Phosphorylation
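The imbalance_function options passed to process_data (ADASYN, SMOTEENN, random_under_sample, ncl, near_miss) all rebalance class counts before training. A minimal NumPy sketch of the simplest of them, random undersampling (illustrative only, not the pred module's implementation):

```python
import numpy as np

def random_under_sample(X, y, seed=0):
    # keep only as many samples of each class as the rarest class has
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_min, replace=False)
        for c in classes
    ])
    return X[keep], y[keep]

X = np.arange(20).reshape(10, 2)
y = np.array([0] * 8 + [1] * 2)   # 8 negatives vs 2 positives
X_bal, y_bal = random_under_sample(X, y)
print(np.bincount(y_bal))  # [2 2]
```

The oversampling variants (ADASYN, SMOTE-based methods) instead synthesize new minority samples, but the goal is the same: a balanced class distribution for the classifier.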
|
12,367
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# minimal stand-in for the validity check this notebook assumes (plot_data is
# another helper omitted from this excerpt): a covariance matrix must be
# symmetric and positive semi-definite
def is_covariance(m):
    return np.allclose(m, m.T) and np.all(np.linalg.eigvalsh(m) >= 0)

#mean and std of multivariate normal dist to generate samples
mu=np.array([5,0,-2])
σ=np.array([[9,1, -1],
[1, 3, -2],
[-1, -2,2],])
if not is_covariance(σ):
print("Warning: σ is not a valid covariance matrix (not symmetric or positive definite)")
n=1000 # number of samples
x=np.random.multivariate_normal(mu,σ,size=(n,))
#plot generated data
plt.close("all")
plot_data(x,"data in original space",eigen=True)
plt.show()
pca_exact = PCA(n_components=3) # since x has 3 features, this PCA model does not do compression
pca_exact.fit(x) # calculate eigen decomposition of cov(x)
x_transformed=pca_exact.transform(x) #encode x with the eigenvectors as basis
plot_data(x_transformed,"x in natural (eigenvectors) space",eigen=True)
#save the eigenvectors and eigenvalues
eigenvectors=pca_exact.components_
eigenvalues=pca_exact.explained_variance_
y=np.random.multivariate_normal(mu,σ,size=(n,))
plot_data(y,"y in original space",eigen=True)
y_missing=y[:,0:2]
plt.figure()
plt.scatter(y_missing[:,0],y_missing[:,1])
plt.title("y_missing in original space (2d)")
pca_reconstruction=PCA(n_components=2)
pca_reconstruction.fit(x)
print(pca_reconstruction.components_)
print(eigenvectors)
y_augmented=np.copy(y_missing)
y3=np.zeros((n,1))+mu[2]
y_augmented=np.hstack([y_missing,y3])
y_eigen=pca_exact.transform(y_augmented)
least_eigenvalue_index=np.argmin(eigenvalues)
y_eigen_2d=y_eigen[:,np.arange(3)!=least_eigenvalue_index]
y_reconstructed=pca_reconstruction.inverse_transform(y_eigen_2d)
plot_data(y_reconstructed, "y_reconstructed",eigen=True)
mean_reconstruction_error=((y_reconstructed-y)**2).sum()/n
print(mean_reconstruction_error)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Transform the generated data $x$ to a new basis.
Step2: Generate another dataset $y$ with the same distribution as $x$ (this is a very strong assumption!)
Step3: Let's simulate the fact that for y we can't measure all values. In this case, we will create y_missing, which only has 2 features
Step4: Now, let's assume that we can recover the last feature of y_missing using information from the eigendecomposition of cov(x).
Step5: We still can't reconstruct y from y_missing with pca_reconstruction because it is encoded with the canonical basis. We first need to encode it in the eigenbasis. To do that we will augment y_missing with a new feature with value equal to the mean of that feature (a reasonable assumption), and encode it.
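The encode-then-decode round trip used for the reconstruction can be exercised end to end on simpler synthetic data: when 2-D points are strongly correlated, a single principal component reconstructs them with small error (an independent sketch with made-up data, not the notebook's 3-D setup):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
t = rng.normal(size=500)
# nearly rank-1 data: the second feature is ~2x the first plus small noise
x = np.column_stack([t, 2 * t + 0.05 * rng.normal(size=500)])

pca = PCA(n_components=1)
codes = pca.fit_transform(x)          # encode with the leading eigenvector
x_hat = pca.inverse_transform(codes)  # decode back to the original space

mse = ((x - x_hat) ** 2).sum() / len(x)
print(x_hat.shape, mse < 0.01)  # (500, 2) True
```

The residual error is roughly the variance of the discarded component, which is why compressing highly correlated data loses so little.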
|
12,368
|
<ASSISTANT_TASK:>
Python Code:
%%writefile train.py
print("hello world!")
# assumed imports, omitted in the original excerpt; these names are expected
# to come from the kubeflow-fairing package
from kubeflow.fairing import TrainJob
from kubeflow.fairing.backends import KubeflowGKEBackend
from kubeflow.fairing.kubernetes.utils import get_resource_mutator

job = TrainJob("train.py", backend=KubeflowGKEBackend())
job.submit()
def train():
print("simple train job!")
job = TrainJob(train, backend=KubeflowGKEBackend())
job.submit()
%%writefile requirements.txt
papermill
jupyter
job = TrainJob("train.ipynb", backend=KubeflowGKEBackend(), input_files=["requirements.txt"])
job.submit()
import multiprocessing
import os
def train():
print("CPU count: {}".format(multiprocessing.cpu_count()))
    print("Memory: {}".format(os.sysconf('SC_PAGE_SIZE') * os.sysconf('SC_PHYS_PAGES') / (1024. ** 3)))
train()
job = TrainJob(train, base_docker_image=None, docker_registry=None, backend=KubeflowGKEBackend(),
pod_spec_mutators=[get_resource_mutator(cpu=90, memory=600)])
job.submit()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Executing a python function
Step2: Executing a complete notebook
Step3: Executing it with large #CPUs and huge Memory
|
12,369
|
<ASSISTANT_TASK:>
Python Code:
# assumed imports, omitted in the original excerpt; RichStr, groups, RGBColor
# and Sheet are expected to come from the RichConsole package
from RichConsole import RichStr, groups, RGBColor, Sheet
import IPython.display

n=RichStr("I am ", "normal")
n
r=RichStr("RED", sheet=groups["Fore"]["red"])
r=RichStr("RED", sheet=groups.Fore.red)
str(r)
print(r)
print(r.toHTML())
IPython.display.display_html(r.toHTML(), raw=True)
pureRed=RGBColor("PureRed", 0xFF, bg=True)
prs=RichStr("Pure red", sheet=pureRed)
print(repr(str(prs)))
print(prs)
lightGoldenrod1=RGBColor("lightGoldenrod1", 0xff, 0xff, 0x5f, True)
blackOnGold=Sheet({
"Back":lightGoldenrod1, # requires 3rd-party libraries
"Fore":groups.Fore.black
})
g=RichStr(r, " on GOLD", sheet=blackOnGold)
str(g)
print(g)
print(g.toHTML())
IPython.display.display_html(g.toHTML(), raw=True)
g.sheetRepr()
g.optimizedCodeRepr()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: __repr__esentation of a rich string shows a "flat" representation of a RichStr - a sequence of styles and strings where each style applies to everything after it. This is how a terminal works. This representation is useful for debugging.
Step2: To apply a style or a stylesheet you use the sheet named argument of RichStr
Step3: You can also use dot notation
Step4: __str__ is overloaded, so you can print. Note that the red is not pure red: here it is indexed, and indexed colors depend on the terminal palette
Step5: There is a quick and dirty conversion to HTML, but don't use it; it is still unfinished. There are some methods to get CSS rules for some styles.
Step6: You can create stylesheets from styles
Step7: With RichStr.optimizedCodeRepr you can get the optimized code sequence in a machine-readable form
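Under the hood, the terminal rendering shown above comes down to ANSI SGR escape sequences; a minimal sketch with raw escape codes, independent of the RichConsole library:

```python
CSI = "\x1b["            # Control Sequence Introducer
RED_FG = CSI + "31m"     # indexed red foreground (SGR code 31)
RESET = CSI + "0m"       # reset all attributes

s = RED_FG + "RED" + RESET
print(repr(s))   # '\x1b[31mRED\x1b[0m'
print(s)         # renders red in an ANSI-capable terminal
```

Because code 31 is an indexed color, the exact shade is decided by the terminal's palette, which is why the printed red above need not be pure red.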
|
12,370
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bcc', 'sandbox-3', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Mmr
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Step59: 14.2. Shortwave Bands
Step60: 14.3. Longwave Bands
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Step62: 15.2. Twomey
Step63: 15.3. Twomey Minimum Ccn
Step64: 15.4. Drizzle
Step65: 15.5. Cloud Lifetime
Step66: 15.6. Longwave Bands
Step67: 16. Model
Step68: 16.2. Processes
Step69: 16.3. Coupling
Step70: 16.4. Gas Phase Precursors
Step71: 16.5. Scheme Type
Step72: 16.6. Bulk Scheme Species
|
12,371
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib as mpl
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import pandas as pd
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
import seaborn as sns
sns.set_style("whitegrid")
sns.set_context("poster")
critics = pd.read_csv('./critics.csv')
#let's drop rows with missing quotes
critics = critics[~critics.quote.isnull()]
critics.head()
n_reviews = len(critics)
n_movies = critics.rtid.unique().size
n_critics = critics.critic.unique().size
print("Number of reviews: %i" % n_reviews)
print("Number of critics: %i" % n_critics)
print("Number of movies: %i" % n_movies)
df = critics.copy()
df['fresh'] = df.fresh == 'fresh'
grp = df.groupby('critic')
counts = grp.critic.count() # number of reviews by each critic
means = grp.fresh.mean() # average freshness for each critic
means[counts > 100].hist(bins=10, edgecolor='w', lw=1)
plt.xlabel("Average rating per critic")
plt.ylabel("N")
plt.yticks([0, 2, 4, 6, 8, 10]);
from sklearn.feature_extraction.text import CountVectorizer
text = ['Hop on pop', 'Hop off pop', 'Hop Hop hop']
print("Original text is\n", '\n'.join(text))
vectorizer = CountVectorizer(min_df=0)
# call `fit` to build the vocabulary
vectorizer.fit(text)
# call `transform` to convert text to a bag of words
x = vectorizer.transform(text)
# CountVectorizer uses a sparse array to save memory, but it's easier in this assignment to
# convert back to a "normal" numpy array
x = x.toarray()
print()
print("Transformed text vector is \n", x)
# `get_feature_names` tracks which word is associated with each column of the transformed x
print()
print("Words for each feature:")
print(vectorizer.get_feature_names())
# Notice that the bag of words treatment doesn't preserve information about the *order* of words,
# just their frequency
def make_xy(critics, vectorizer=None):
#Your code here
if vectorizer is None:
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(critics.quote)
X = X.tocsc() # some versions of sklearn return COO format
    y = (critics.fresh == 'fresh').values.astype(int)  # np.int was removed in NumPy >= 1.24
return X, y
X, y = make_xy(critics)
#your turn
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=4)
naive_model = MultinomialNB().fit(X_train,y_train)
test_score = naive_model.score(X_test,y_test)
train_score = naive_model.score(X_train,y_train)
print("Multinomial test accuracy: %0.2f%%" % (100*test_score))
print("Multinomial train accuracy: %0.2f%%" % (100*train_score))
from sklearn.model_selection import KFold
def cv_score(clf, X, y, scorefunc):
result = 0.
nfold = 5
kf = KFold(n_splits = nfold)
for train, test in kf.split(X): # split data into train/test groups, 5 times
clf.fit(X[train], y[train]) # fit
result += scorefunc(clf, X[test], y[test]) # evaluate score function on held-out data
return result / nfold # average
def log_likelihood(clf, x, y):
prob = clf.predict_log_proba(x)
rotten = y == 0
fresh = ~rotten
return prob[rotten, 0].sum() + prob[fresh, 1].sum()
itrain, itest = train_test_split(range(critics.shape[0]), train_size=0.7)
mask=np.ones(critics.shape[0], dtype='int')
mask[itrain]=1
mask[itest]=0
mask = (mask==1)
#the grid of parameters to search over
alphas = [0, .1, 1, 5, 10, 50]
min_dfs = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1]
#Find the best value for alpha and min_df, and the best classifier
best_alpha = None
best_min_df = None
maxscore=-np.inf
for alpha in alphas:
for min_df in min_dfs:
vectorizer = CountVectorizer(min_df = min_df)
Xthis, ythis = make_xy(critics, vectorizer)
Xtrainthis=Xthis[mask]
ytrainthis=ythis[mask]
#your turn
naive_bayes = MultinomialNB(alpha=alpha)
crossval_score = cv_score(naive_bayes,Xtrainthis,ytrainthis, log_likelihood)
if crossval_score > maxscore:
maxscore = crossval_score
best_alpha,best_min_df = alpha,min_df
print("alpha: %f" % best_alpha)
print("min_df: %f" % best_min_df)
vectorizer = CountVectorizer(min_df=best_min_df)
X, y = make_xy(critics, vectorizer)
xtrain=X[mask]
ytrain=y[mask]
xtest=X[~mask]
ytest=y[~mask]
clf = MultinomialNB(alpha=best_alpha).fit(xtrain, ytrain)
#your turn. Print the accuracy on the test and training dataset
training_accuracy = clf.score(xtrain, ytrain)
test_accuracy = clf.score(xtest, ytest)
print("Accuracy on training data: %0.2f%%" % (100 * training_accuracy))
print("Accuracy on test data: %0.2f%%" % (100 * test_accuracy))
from sklearn.metrics import confusion_matrix
print(confusion_matrix(ytest, clf.predict(xtest)))
words = np.array(vectorizer.get_feature_names())
x = np.eye(xtest.shape[1])
probs = clf.predict_log_proba(x)[:, 0]
ind = np.argsort(probs)
good_words = words[ind[:10]]
bad_words = words[ind[-10:]]
good_prob = probs[ind[:10]]
bad_prob = probs[ind[-10:]]
print("Good words\t P(fresh | word)")
for w, p in zip(good_words, good_prob):
print("%20s" % w, "%0.2f" % (1 - np.exp(p)))
print("Bad words\t P(fresh | word)")
for w, p in zip(bad_words, bad_prob):
print("%20s" % w, "%0.2f" % (1 - np.exp(p)))
x, y = make_xy(critics, vectorizer)
prob = clf.predict_proba(x)[:, 0]
predict = clf.predict(x)
bad_rotten = np.argsort(prob[y == 0])[:5]
bad_fresh = np.argsort(prob[y == 1])[-5:]
print("Mis-predicted Rotten quotes")
print ('---------------------------')
for row in bad_rotten:
print (critics[y == 0].quote.iat[row])
print()
print("Mis-predicted Fresh quotes")
print('--------------------------')
for row in bad_fresh:
print(critics[y == 1].quote.iat[row])
print()
#your turn
clf.predict_proba(vectorizer.transform(['This movie is not remarkable, touching, or superb in any way']))
#http://scikit-learn.org/dev/modules/feature_extraction.html#text-feature-extraction
#http://scikit-learn.org/dev/modules/classes.html#text-feature-extraction-ref
from sklearn.feature_extraction.text import TfidfVectorizer
tfidfvectorizer = TfidfVectorizer(min_df=1, stop_words='english')
Xtfidf=tfidfvectorizer.fit_transform(critics.quote)
Xtfidf[0].toarray()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Table of Contents
Step2: Explore
Step3: The Vector space model and a search engine.
Step4: Naive Bayes
Step5: The accuracy score is good for the test set, but not great. When we see how it performs on the training set however, it becomes clear that the classifier is overfit. There is a $\approx 16\%$ difference in score.
Step6: We use the log-likelihood as the score here in scorefunc. Indeed, what we do in cv_score above is to implement the cross-validation part of GridSearchCV.
Step7: Your turn
Step8: Your turn
Step9: Work with the best params
Step10: The classifier performs slightly worse on the test data, but the closeness of the scores suggests that we are no longer overfitting. One would need to get new novel data and test against that to be sure, but the initial impression is that this classifier will perform better over a greater variety of datasets.
Step11: Your turn
Step12: Predicting the freshness for a new review
Step13: This classifier gives a 98.6% probability of being fresh, despite the fact that the sentence is clearly negative. The word 'not' should negate all the postive adjectives which follow, but our simple bag-of-words approach doesn't have any way of dealing with this, and simply takes the positive features as is. Thus, a completely naive approach fails when confronted with the subleties of language.
Step14: Your turn (extra credit)
|
12,372
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('cifar-10-python.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
'cifar-10-python.tar.gz',
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open('cifar-10-python.tar.gz') as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
# TODO: Implement Function
normalized = np.array((x) / (255))
return normalized
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
from sklearn import preprocessing
encoding_map = None
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
# TODO: Implement Function
global encoding_map
if encoding_map is None:
encoding_map = preprocessing.LabelBinarizer()
encoding_map.fit(x)
one_hot_encoding = encoding_map.transform(x)
return one_hot_encoding
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
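As a cross-check on `one_hot_encode`, the same encoding can be written with plain NumPy identity-matrix indexing; this is a sketch independent of the scikit-learn `LabelBinarizer` used above:

```python
import numpy as np

def one_hot_numpy(labels, n_classes):
    # Row i of the identity matrix is the one-hot vector for class i.
    return np.eye(n_classes, dtype=int)[np.asarray(labels)]

print(one_hot_numpy([0, 2, 1], 3))
```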
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
image_input = tf.placeholder(tf.float32, shape=[None, *image_shape], name="x")
return image_input
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
label_input = tf.placeholder(tf.float32, shape=(None, n_classes), name="y")
return label_input
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
keep_prob = tf.placeholder(tf.float32, name="keep_prob")
return keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
### Convolutional Layer
# Image height,image width, and colour channels
batch_size, image_height, image_width, colour_channels = x_tensor.get_shape().as_list()
# Filter height and widith
filter_height, filter_width = conv_ksize
# Conv Strides Dimensions
stride_height, stride_width = conv_strides
# Weight input for convolutional layer
weight = tf.Variable(tf.truncated_normal(
[filter_height, filter_width, colour_channels, conv_num_outputs], stddev=0.05))
# Bias input for convolutional layer
bias = tf.Variable(tf.zeros(conv_num_outputs))
# Apply Convolution
conv_layer = tf.nn.conv2d(x_tensor, weight, strides=[1, stride_height, stride_width, 1], padding='SAME')
# Add bias
conv_layer = tf.nn.bias_add(conv_layer, bias)
# Apply relu activation function
conv_layer = tf.nn.relu(conv_layer)
### Max Pooling Layer
# Pool Size Dimensions
pool_size_height, pool_size_width = pool_ksize
# Pool Strides Dimensions
pool_strides_height, pool_strides_width = pool_strides
# Max Pooling Function
conv_maxpool = tf.nn.max_pool(conv_layer,
ksize=[1, pool_size_height, pool_size_width, 1],
strides=[1, pool_strides_height, pool_strides_width, 1],
padding="SAME")
return conv_maxpool
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
unflattened_shape = x_tensor.get_shape().as_list()
flattened_dimensions = np.prod(unflattened_shape[1:])
flattened_shape = tf.reshape(x_tensor, [-1, flattened_dimensions])
return flattened_shape
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
# Batch_size, depth
batch_size, depth = x_tensor.get_shape().as_list()
# Weights and Biases
weight = tf.Variable(tf.truncated_normal([depth, num_outputs], stddev=0.05))
bias = tf.Variable(tf.zeros(num_outputs))
fully_connected_layer = tf.add(tf.matmul(x_tensor, weight), bias)
fully_connected_layer = tf.nn.relu(fully_connected_layer)
return fully_connected_layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
# Batch_size, depth
batch_size, depth = x_tensor.get_shape().as_list()
# Weights and Biases
weight = tf.Variable(tf.truncated_normal([depth, num_outputs], stddev=0.05))
bias = tf.Variable(tf.zeros(num_outputs))
output = tf.add(tf.matmul(x_tensor, weight), bias)
return output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
x_tensor = x
conv_num_outputs = [32, 48, 64]
conv_ksize = (3,3)
conv_strides = (1,1)
pool_ksize = (2,2)
pool_strides = (2,2)
num_outputs = [256, 512, 1024, 10]
# Three convolutional layers
    conv_max1 = conv2d_maxpool(x_tensor, conv_num_outputs[0], conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_max2 = conv2d_maxpool(conv_max1, conv_num_outputs[1], conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_max3 = conv2d_maxpool(conv_max2, conv_num_outputs[2], conv_ksize, conv_strides, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
flat = flatten(conv_max3)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# First fully connected layer then dropout applied
fcl1 = fully_conn(flat, num_outputs[0])
fcl1 = tf.nn.dropout(fcl1, keep_prob)
# Second fully connected layer then dropout applied
fcl2 = fully_conn(fcl1, num_outputs[1])
fcl2 = tf.nn.dropout(fcl2, keep_prob)
# Third fully connected layer then dropout applied
fcl3 = fully_conn(fcl2, num_outputs[2])
fcl3 = tf.nn.dropout(fcl3, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
out = output(fcl3, num_outputs[3])
# TODO: return output
return out
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})
pass
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.})
valid_acc = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.})
print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(loss, valid_acc))
pass
# TODO: Tune Parameters
epochs = 25
batch_size = 256
keep_probability = .5
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step32: Create Convolutional Model
Step35: Train the Neural Network
Step37: Show Stats
Step38: Hyperparameters
Step40: Train on a Single CIFAR-10 Batch
Step42: Fully Train the Model
Step45: Checkpoint
|
12,373
|
<ASSISTANT_TASK:>
Python Code:
import os
import requests
import numpy as np
import matplotlib.pyplot as plt
from bs4 import BeautifulSoup
def listFD(url, ext=''):
page = requests.get(url).text
soup = BeautifulSoup(page, 'html.parser')
    return [url + node.get('href') for node in soup.find_all('a')
            if node.get('href', '').endswith(ext)]
site = 'http://people.duke.edu/~ccc14/misc/'
ext = 'png'
for i, file in enumerate(listFD(site, ext)):
if i == 5:
break
print(file)
def download_one(url, path):
r = requests.get(url, stream=True)
img = r.raw.read()
with open(path, 'wb') as f:
f.write(img)
%%time
for url in listFD(site, ext):
filename = os.path.split(url)[-1]
download_one(url, filename)
%%time
from concurrent.futures import ThreadPoolExecutor
args = [(url, os.path.split(url)[-1])
for url in listFD(site, ext)]
with ThreadPoolExecutor(max_workers=4) as pool:
pool.map(lambda x: download_one(x[0], x[1]), args)
%%time
from multiprocessing import Pool
args = [(url, os.path.split(url)[-1])
for url in listFD(site, ext)]
with Pool(processes=4) as pool:
pool.starmap(download_one, args)
n = 100
p = 10
xs = np.random.random((n, p))
def dist(x, y):
return np.sqrt(np.sum((x - y)**2))
def pdist(xs):
m = np.empty((len(xs), len(xs)))
for i, x in enumerate(xs):
for j, y in enumerate(xs):
m[i, j] = dist(x, y)
return m
%timeit pdist(xs)
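The nested-loop `pdist` above is a natural target for acceleration; one option, before reaching for C++ or multiprocessing, is a broadcasting-based NumPy version. This is a sketch assuming the same `(n, p)` array layout as `xs` above:

```python
import numpy as np

def pdist_vec(xs):
    # Pairwise differences via broadcasting: (n, 1, p) - (1, n, p) -> (n, n, p)
    diff = xs[:, None, :] - xs[None, :, :]
    # Euclidean norm along the feature axis gives the (n, n) distance matrix.
    return np.sqrt((diff ** 2).sum(axis=-1))

xs_demo = np.random.random((100, 10))
m = pdist_vec(xs_demo)  # symmetric (100, 100) matrix, zeros on the diagonal
```

The trade-off is memory: the intermediate `diff` array is `(n, n, p)`, so for large `n` a chunked or C++ implementation may still win.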
n = 10
x = np.linspace(0, 10, n)
y = 3*x**2 - 7*x + 2 + np.random.normal(0, 10, n)
X = np.c_[np.ones(n), x, x**2]
beta = np.linalg.lstsq(X, y, rcond=None)[0]
beta
plt.scatter(x, y)
plt.plot(x, X @ beta, 'red')
pass
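The `lstsq` fit above can be cross-checked against the normal equations, beta = (X^T X)^{-1} X^T y. A small self-contained sketch with its own synthetic data, mirroring the cell above:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 3 * x**2 - 7 * x + 2 + rng.normal(0, 10, 50)
X = np.c_[np.ones_like(x), x, x**2]

# Normal equations: solve (X^T X) beta = X^T y instead of inverting explicitly.
beta_ne = np.linalg.solve(X.T @ X, X.T @ y)
beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta_ne, beta_ls)
```

`lstsq` (based on the SVD) is the numerically safer route when `X` is ill-conditioned; here the design matrix is small and well behaved, so the two agree closely.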
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. (25 points) Accelerating CPU bound procedures
Step2: 3. (25 points) Use C++ to
|
12,374
|
<ASSISTANT_TASK:>
Python Code:
import o2sclpy
import matplotlib.pyplot as plot
import ctypes
import numpy
import sys
plots=True
if 'pytest' in sys.modules:
plots=False
link=o2sclpy.linker()
link.link_o2scl()
cu=link.o2scl_settings.get_convert_units()
b=o2sclpy.eos_tov_buchdahl(link)
ts=o2sclpy.tov_solve(link)
ts.set_eos(b);
ts.fixed(1.4,1.0e-4)
print('Exact radius is %7.6e, computed radius is %7.6e.' %
(b.rad_from_gm(1.4),ts.rad))
print('Relative difference %7.6e.' %
(abs(b.rad_from_gm(1.4)-ts.rad)/ts.rad))
tov_table=ts.get_results()
beta=ts.mass*b.G_km_Msun/ts.rad
radial_grid=[]
rel_diff=[]
for i in range(1,tov_table.get_nlines()):
    r = tov_table['r'][i]
    radial_grid.append(r)
    enc_mass = r*(1.0-1.0/b.exp2lam_from_r_gm(tov_table['r'][i],
                                              beta))/2.0/b.G_km_Msun
    enc_mass2 = tov_table['gm'][i]
    rel_diff.append(abs(enc_mass-enc_mass2)/enc_mass)
if plots:
    pl = o2sclpy.plotter()
    pl.canvas()
    plot.plot(tov_table['r'][0:tov_table.get_nlines()],
              tov_table['gm'][0:tov_table.get_nlines()])
    pl.xtitle('radius (km)')
    pl.ytitle('gravitational mass (Msun)')
    plot.show()
    pl.canvas_flag = False
    pl.canvas()
    plot.plot(radial_grid, rel_diff)
    pl.xtitle('radius (km)')
    pl.ytitle('rel. error in enclosed grav. mass')
    plot.show()
def test_fun():
    assert numpy.allclose(b.rad_from_gm(1.4), ts.rad, rtol=1.0e-9, atol=0)
    for i in range(0, len(rel_diff)):
        assert numpy.allclose(rel_diff[i], 0.0, atol=5.0e-11)
    return
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Link the O$_2$scl library
Step2: Get a copy (a pointer to) the O$_2$scl unit conversion object
Step3: Create the Buchdahl EOS object
Step4: Create the TOV solve object, set the EOS and compute the M-R curve
Step5: Get the table for the TOV results
Step6: The compactness of a 1.4 solar mass NS
Step7: Construct two lists, a radius grid and a list containing the
Step8: Initialize the plotting object
Step9: Plot the enclosed gravitational mass as a function of
Step10: For the enclosed gravitational mass, plot the relative
Step11: For testing using pytest
|
12,375
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import speclite
print(speclite.version.version)
import bossdata
print(bossdata.__version__)
finder = bossdata.path.Finder()
mirror = bossdata.remote.Manager()
spAll = bossdata.meta.Database(lite=True)
sky_table = spAll.select_all(where='PLATE=6641 and OBJTYPE="SKY"')
print('Found {0} sky fibers for plate 6641.'.format(len(sky_table)))
def plot_stack(data, truncate_percentile):
    valid = data['ivar'] > 0
    wlen = data['wavelength'][valid]
    flux = data['flux'][valid]
    dflux = data['ivar'][valid]**(-0.5)
    plt.figure(figsize=(12,5))
    plt.fill_between(wlen, flux, lw=0, color='red')
    plt.errorbar(wlen, flux, dflux, color='black', alpha=0.5, ls='None', capthick=0)
    plt.xlim(np.min(wlen), np.max(wlen))
    plt.ylim(0, np.percentile(flux + dflux, truncate_percentile))
    plt.xlabel('Wavelength ($\AA$)')
    plt.ylabel('Flux $10^{-17}$ erg/(s cm$^2 \AA$)')
    plt.tight_layout();
spec_sky = None
for row in sky_table:
    filename = finder.get_spec_path(plate=row['PLATE'], mjd=row['MJD'], fiber=row['FIBER'], lite=True)
    spectrum = bossdata.spec.SpecFile(mirror.get(filename))
    data = spectrum.get_valid_data(include_sky=True, use_ivar=True, fiducial_grid=True)
    spec_sky = speclite.accumulate(spec_sky, data, data_out=spec_sky, join='wavelength',
                                   add=('flux', 'sky'), weight='ivar')
spec_sky['flux'] += spec_sky['sky']
plot_stack(spec_sky, truncate_percentile=97.5)
plate_sky = None
filename = finder.get_plate_spec_path(plate=6641, mjd=56383)
plate = bossdata.plate.PlateFile(mirror.get(filename))
plate_data = plate.get_valid_data(sky_table['FIBER'], include_sky=True, use_ivar=True, fiducial_grid=True)
for data in plate_data:
    plate_sky = speclite.accumulate(plate_sky, data, data_out=plate_sky, join='wavelength',
                                    add=('flux', 'sky'), weight='ivar')
plate_sky['flux'] += plate_sky['sky']
plot_stack(plate_sky, truncate_percentile=97.5)
DR12Q = bossdata.meta.Database(finder, mirror, quasar_catalog=True)
qso_table = DR12Q.select_all(where='PLATE=6641 and ZWARNING=0', what='PLATE,MJD,FIBER,Z_VI')
print('Found {0} QSO targets for plate 6641.'.format(len(qso_table)))
plt.hist(qso_table['Z_VI'], bins=25);
plt.xlabel('Redshift z')
plt.ylabel('Quasars')
plt.tight_layout();
fiducial_grid = np.arange(1000.,3000.)
rest_frame, resampled, spec_qso = None, None, None
for row in qso_table:
    filename = finder.get_spec_path(plate=row['PLATE'], mjd=row['MJD'], fiber=row['FIBER'], lite=True)
    spectrum = bossdata.spec.SpecFile(mirror.get(filename))
    data = spectrum.get_valid_data(use_ivar=True, fiducial_grid=True)
    rest_frame = speclite.redshift(z_in=row['Z_VI'], z_out=0, data_in=data, data_out=rest_frame, rules=[
        dict(name='wavelength', exponent=+1),
        dict(name='flux', exponent=-1),
        dict(name='ivar', exponent=+2)])
    resampled = speclite.resample(rest_frame, x_in='wavelength', x_out=fiducial_grid, y=('flux', 'ivar'),
                                  data_out=resampled)
    spec_qso = speclite.accumulate(spec_qso, resampled, data_out=spec_qso, join='wavelength',
                                   add='flux', weight='ivar')
plot_stack(spec_qso, truncate_percentile=99.5)
filename = finder.get_plate_spec_path(plate=6641, mjd=56383)
plate = bossdata.plate.PlateFile(mirror.get(filename))
plate_data = plate.get_valid_data(qso_table['FIBER'], use_ivar=True, fiducial_grid=True)
zorder = np.argsort(qso_table['Z_VI'])
z_in = qso_table['Z_VI'][:,np.newaxis]
plate_data = speclite.redshift(z_in=z_in, z_out=0, data_in=plate_data, data_out=plate_data, rules=[
    dict(name='wavelength', exponent=+1),
    dict(name='flux', exponent=-1),
    dict(name='ivar', exponent=+2)
])
resampled, plate_qso = None, None
for data in plate_data:
    resampled = speclite.resample(data, x_in='wavelength', x_out=fiducial_grid, y=('flux', 'ivar'), data_out=resampled)
    plate_qso = speclite.accumulate(plate_qso, resampled, data_out=plate_qso, join='wavelength',
                                    add='flux', weight='ivar')
plot_stack(plate_qso, truncate_percentile=99.5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Stacked Sky
Step2: Plot a stacked spectrum
Step3: Stack individual Spec-lite files
Step4: Stack Spectra from one Plate file
Step5: Stacked Quasars
Step6: Plot the redshift distribution of the selected quasars
Step7: Stack spectra from individual Spec-lite files
Step8: Stack spectra from one Plate file
Step9: Transform each spectrum to its quasar rest frame. We perform this operation in place (re-using the memory of the input array) and in parallel on all spectra.
Step10: Resample each spectrum to a uniform rest wavelength grid and stack them together to calculate the mean rest-frame quasar spectrum. The resample() and accumulate() operations re-use the same memory for each input spectrum, so this loop has fixed (small) memory requirements, independent of the number of spectra being stacked.
|
12,376
|
<ASSISTANT_TASK:>
Python Code:
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())
print("Hello World!")
x = 2**8
x < 64
str_1 = 'hello'
str_2 = 'world'
new_string = str_1 + str_2
print(new_string)
my_string = str_1 + ' ' + str_2
print(my_string)
my_string[0]
#If we want the 3rd element we do:
my_string[2]
my_string[1] #this is how we access the second element of a string
len(my_string)
my_string[11]
my_string[10]
my_string[-1]
my_string[-2]
my_string[0:5]
my_string[6:]
# Define your string
eng_string = 'engineer'
# Grab 'gin'slice
eng_string[2:5]
AE_quote = "Everybody is a genius. But if you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid."
AE_quote.count('e')
AE_quote.count('e', 0, 20)
AE_quote.count('Everybody')
AE_quote.find('fish')
len('fish')
AE_quote[42: 42 + len('fish')]
AE_quote.find('albert')
AE_quote.index('fish')
AE_quote.index('albert')
idx_start = AE_quote.index('fish')
idx_end = AE_quote.index('by') - 1 # -1 to get the index of the space after 'fish'
AE_quote[idx_start:idx_end]
ER_quote = " Great minds discuss ideas; average minds discuss events; small minds discuss people. "
ER_quote = ER_quote.strip()
ER_quote
ER_quote.strip('.')
ER_quote
ER_quote.startswith('great')
ER_quote.startswith('Great')
ER_quote.startswith('Gre')
print(AE_quote.split())
print(ER_quote.split())
print(ER_quote.split(';'))
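# The inverse of split() is join() (a quick aside, not part of the original
# lesson): calling it on a separator string glues a list of strings back
# together into one string.
words = ['Great', 'minds', 'discuss', 'ideas']
sentence = ' '.join(words)
print(sentence)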
# A list of integers
[1, 4, 7, 9]
# A list of strings
['apple', 'banana', 'orange']
# A list with different element types
[2, 'apple', 4.5, [5, 10]]
integers = [1, 2, 3, 4, 5]
fruits = ['apple', 'banana', 'orange']
print(integers)
print(fruits)
new_list = [integers, fruits]
print(new_list)
len(new_list)
new_list[0]
new_list[1]
# Accessing the first two elements of the list fruits
fruits[0:2]
integers.append(6)
print(integers)
'strawberry' in fruits
'strawberry' not in fruits
alist = [1, 2, 3, '4', [5, 'six'], [7]]
alist.index('4')
alist[3]
alist[3] = 4
alist
4 in alist
string = 'This is a string.'
string[-1]
string[-1] = '!'
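# Because strings are immutable, the usual workaround (an aside, not part of
# the original lesson) is to build a brand-new string with slicing and
# concatenation, then rebind the name to it.
string2 = 'This is a string.'
string2 = string2[:-1] + '!'
print(string2)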
fruits = ['apple', 'banana', 'orange', 'cherry', 'mandarin']
for fruit in fruits:
    print("Eat your", fruit)
names = ['sam', 'zoe', 'naty', 'gil', 'tom']
for i, name in enumerate(names):
    names[i] = name.capitalize()
print(names)
fullnames = [ ['sam','jones'], ['zoe','smith'],['joe','cheek'],['tom','perez'] ]
firstnames = []
lastnames = []
# Write your code here
a = 8
b = 3
if a > b:
    print('a is bigger than b')
# We pick a number, but you can change it
x = 1547
if x % 17 == 0:
    print('Your number is a multiple of 17.')
else:
    print('Your number is not a multiple of 17.')
#x = float(input('Insert your number: '))
a = 3
b = 5
if a > b:
    print('a is bigger than b')
elif a < b:
    print('a is smaller than b')
else:
    print('a is equal to b')
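# A compact alternative (an aside, not part of the original lesson): Python's
# conditional expression packs a simple if/else into a single line.
a2, b2 = 3, 5
msg = 'a is bigger than b' if a2 > b2 else 'a is not bigger than b'
print(msg)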
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Play with data in Jupyter
Step2: Edit mode and Command mode
Step3: Remember that we can concatenate strings ("add"), for example
Step4: What if we want to add a space that separates hello from world? We directly add the string ' ' in the middle of the two variables. A space is a character!
Step5: Exercise
Step6: You might have noticed that in the cell above we have a line before the code that starts with the # sign. That line seems to be ignored by Python
Step7: How do we know the index of the last element in the string?
Step8: Great! Now we know that my_string is eleven characters long. What happens if we enter this number as an index?
Step9: Oops. We have an error
Step10: Python also offers a clever way to grab the last element so we don't need to calculate the lenghth and substract one
Step11: What if we use a -2 as index?
Step12: That is the last l in the string hello world. Python is so clever, it can count backwards!
Step13: You can skip the start index, if you want to slice from the beginning of the string, and you can skip the end of a slice, indicating you want to go all the way to the end of the string. For example, if we want to grab the word 'world' from my_string, we could do the following
Step14: A helpful way to visualize slices is to imagine that the indices point to the spaces between characters in the string. That way, when you write my_string[i], you would be referring to the "character to the right of i" (Reference 2).
Step15: Exercises
Step16: The count() method gives the number of occurrences of a substring in a range. The arguments for the range are optional.
Step17: If we want to know how many of those 'e' charachters are in the range [0
Step18: We can look for more complex strings, for example
Step19: The find() method tells us if a string 'substr' occurs in the string we are applying the method on. The arguments for the range are optional.
Step20: If we know the length of our sub-string, we can now apply slice notation to grab the word "fish".
Step21: Let's see what happens when we try to look for a string that is not in the quote.
Step22: It returns -1… but careful, that doesn't mean that the position is at the end of the original string! If we read the documentation, we confirm that a returned value of -1 indicates that the sub-string we are looking for is not in the string we are searching in.
Step23: In the example above, we used the len() function to calculate the length of the string 'fish', and we used the result to calculate the ending index. However, if the string is too long, having a line that calculates the length might be inconvenient or may make your code look messy. To avoid this, we can use the find() or index() methods to calculate the end position. In the 'fish' example, we could look for the index of the word 'by' (the word that follows 'fish') and subtract 1 from that index to get the index that corresponds to the space right after 'fish'. There are many ways to slice strings, only limited by your imagination!
Step24: Exercises
Step25: Notice that the string we defined above contains extra white spaces at the beginning and at the end. In this case, we did it on purpose, but bothersome extra spaces are often present when reading text from a file (perhaps due to paragraph indentation).
Step26: Let's suppose you want to strip the period at the end; you could do the following
Step27: Check the string variable to confirm that it didn't change (it still has the period at the end)
Step28: Another useful method is startswith(), to find out if a string starts with a certain character.
Step29: The output is False because the word is not capitalized! Upper-case and lower-case letters are distinct characters.
Step30: It's important to mention that we don't need to match the character until we hit the white space.
Step31: The last string method we'll mention is split()
Step32: Let's split the ER_quote by a different character, a semicolon
Step33: Think...
Step34: In the last list example, the last element of the list is actually another list. Yes! we can totally do that.
Step35: Notice that this new_list has only 2 elements. We can check that with the len() function
Step36: Each element of new_list is, of course, another list.
Step37: Exercises
Step38: Let's check that the integer list now has a 6 at the end
Step39: List membership
Step40: Exercises
Step41: We can find the position of a certain element with the index() method, just like with strings. For example, if we want to know where the element '4' is, we can do
Step42: Let's replace it with the integer value 4
Step43: Exercise
Step44: Suppose we want to replace the period ('.') by an exaclamation mark ('!'). Can we just modify this string element?
Step45: Told you! Python is confirming that we cannot change the elements of a string by item assignment.
Step46: Pay attention
Step47: Challenge question
Step48: Conditionals with if statements
Step49: (2) If-else statement
Step50: Note
Step51: (3) If-elif-else statement
|
12,377
|
<ASSISTANT_TASK:>
Python Code:
# Imports
# Numeric Packages
from __future__ import division
import numpy as np
import pandas as pd
import scipy.stats as sps
# Plotting packages
import matplotlib.pyplot as plt
from matplotlib import ticker
import seaborn as sns
%matplotlib inline
sns.set_style('whitegrid')
sns.set_context('talk')
# Other
from datetime import datetime, timedelta
import statsmodels.api as sm
# Import turnstile data and convert datetime column to datetime python objects
df = pd.read_csv('turnstile_weather_v2.csv')
df['datetime'] = pd.to_datetime(df['datetime'])
W, p = sps.shapiro(df.ENTRIESn_hourly.tolist())
print 'Probability that data is the realisation of a gaussian random variable: {:.3f}'.format(p)
plt.figure(figsize=[8,5])
sns.distplot(df.ENTRIESn_hourly.tolist(), bins=np.arange(0,10001,500), kde=False)
plt.xlim(0,10000)
plt.yticks(np.arange(0,16001,4000))
plt.title('Histogram of Entry Count')
plt.show()
raindata = np.array(df[df.rain==1].ENTRIESn_hourly.tolist())
noraindata = np.array(df[df.rain==0].ENTRIESn_hourly.tolist())
U, p = sps.mannwhitneyu(raindata, noraindata)
print 'Results'
print '-------'
print 'p-value: {:.2f}'.format(p) # Note that p value calculated by scipy is single-tailed
print 'Mean with rain: {:.0f}'.format(raindata.mean())
print 'Mean without rain: {:.0f}'.format(noraindata.mean())
# Because the hour '0' is actually the entries from 20:00 to 24:00, it makes more sense to label it 24 when plotting data
df.datetime -= timedelta(seconds=1)
df['day']= df.datetime.apply(lambda x: x.day)
df['hour'] = df.datetime.apply(lambda x: x.hour+1)
df['weekday'] = df.datetime.apply(lambda x: not bool(x.weekday()//5))
df['day_week'] = df.datetime.apply(lambda x: x.weekday())
# The dataset includes the Memorial Day Public Holiday, which makes more sense to be classify as a weekend.
df.loc[df['day']==30,'weekday'] = False
# Create a new column, stall_num2, representing the proportion of entries through a stall across the entire period.
total_patrons = df.ENTRIESn_hourly.sum()
# Dataframe with the units, and total passing through each unit across the time period
total_by_stall = pd.DataFrame(df.groupby('UNIT').ENTRIESn_hourly.sum())
# Create new variable = proportion of total entries
total_by_stall['stall_num2'] = total_by_stall.ENTRIESn_hourly/total_patrons
# Normalise by mean and standard deviation... fixes orders of magnitude errors in the output
total_stall_mean = total_by_stall.stall_num2.mean()
total_stall_stddev = total_by_stall.stall_num2.std()
total_by_stall.stall_num2 = (
    (total_by_stall.stall_num2 - total_stall_mean)
    / total_stall_stddev
)
# Map the new variable back on the original dataframe
df['stall_num2'] = df.UNIT.apply(lambda x: total_by_stall.stall_num2[x])
for i in df.columns.tolist(): print i,
plt.figure(figsize=[8,6])
corr = df[['ENTRIESn_hourly',
'EXITSn_hourly',
'day_week', # Day of the week (0-6)
'weekday', # Whether it is a weekday or not
'day', # Day of the month
'hour', # In set [4, 8, 12, 16, 20, 24]
'fog',
'precipi',
'rain',
'tempi',
'wspdi']].corr()
sns.heatmap(corr)
plt.title('Correlation matrix between potential features')
plt.show()
# Construct and fit the model
mod = sm.OLS.from_formula('ENTRIESn_hourly ~ rain:C(hour) + stall_num2*C(hour) + stall_num2*weekday', data=df)
res = mod.fit_regularized()
s = res.summary2()
s.tables[1].ix[['Intercept', 'stall_num2']]
s.tables[1].ix[[i for i in s.tables[1].index if i[:5]=='stall']]
s.tables[1].ix[[i for i in s.tables[1].index if i[:4]=='rain']]
print 'Model Coefficient of Determination (R-squared): {:.3f}'.format(res.rsquared)
residuals = res.resid
sns.set_style('whitegrid')
sns.distplot(residuals,bins=np.arange(-10000,10001,200),
kde = False, # kde_kws={'kernel':'gau', 'gridsize':4000, 'bw':100},
fit=sps.cauchy, fit_kws={'gridsize':4000})
plt.xlim(-5000,5000)
plt.title('Distribution of Residuals\nwith fitted cauchy Distribution overlaid')
plt.show()
sns.set_style('whitegrid')
fig = plt.figure(figsize=[6,6])
plt.xlabel('ENTRIESn_hourly')
plt.ylabel('Residuals')
plt.scatter(df.ENTRIESn_hourly, residuals,
c=(df.stall_num2*total_stall_stddev+total_stall_mean)*100, # denormalise values
cmap='YlGnBu')
plt.colorbar(label='UNIT Relative Traffic (%)')
plt.plot([0,20000],[0,-20000], ls=':', c='0.7', lw=2) # Line to show negative prediction values (i.e. negative entries)
plt.xlim(xmin=0)
plt.ylim(-20000,25000)
plt.xticks(rotation='45')
plt.title('Model Residuals vs. Expected Value')
plt.show()
print 'Condition Number: {:.2f}'.format(res.condition_number)
sns.set_style('white')
sns.set_context('talk')
mydf = df.copy()
mydf['rain'] = mydf.rain.apply(lambda x: 'Raining' if x else 'Not Raining')
raindata = df[df.rain==1].ENTRIESn_hourly.tolist()
noraindata = df[df.rain==0].ENTRIESn_hourly.tolist()
fig = plt.figure(figsize=[9,6])
ax = fig.add_subplot(111)
plt.hist([raindata,noraindata],
normed=True,
bins=np.arange(0,11500,1000),
color=['dodgerblue', 'indianred'],
label=['Raining', 'Not Raining'],
align='right')
plt.legend()
sns.despine(left=True, bottom=True)
# http://stackoverflow.com/questions/9767241/setting-a-relative-frequency-in-a-matplotlib-histogram
def adjust_y_axis(x, pos):
    return '{:.0%}'.format(x * 1000)
ax.yaxis.set_major_formatter(ticker.FuncFormatter(adjust_y_axis))
plt.title('Histogram of Subway Entries per 4 hour Block per Gate')
plt.ylabel('Proportion of Total Entries')
plt.xlim(500,10500)
plt.xticks(np.arange(1000,10001,1000))
plt.show()
# Plot to illustrate the average riders per time block for each weekday.
# First we need to sum up the entries per hour (category) per weekday across all units.
# This is done for every day, whilst retaining the 'day_week' field for convenience. reset_index puts it back into a standard dataframe
# For the sake of illustration, memorial day has been excluded since it would incorrectly characterise the Monday ridership
mydf = df.copy()
mydf = mydf[mydf.day!=30].pivot_table(values='ENTRIESn_hourly', index=['day','day_week','hour'], aggfunc=np.sum).reset_index()
# The second pivot takes the daily summed data, and finds the mean for each weekday/hour block.
mydf = mydf.pivot_table(values='ENTRIESn_hourly', index='hour', columns='day_week', aggfunc=np.mean)
# Generate plot using the seaborn heatmap function.
fig = plt.figure(figsize=[9,6])
timelabels = ['Midnight - 4am','4am - 8am','8am - 12pm','12pm - 4pm','4pm - 8pm','8pm - Midnight']
weekdays = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
plot = sns.heatmap(mydf, yticklabels=timelabels, xticklabels=weekdays)
plt.xlabel('') # The axis ticks are descriptive enough to negate the need for axis labels
plt.ylabel('')
plot.tick_params(labelsize=14) # Make stuff bigger!
# Make heatmap ticks bigger http://stackoverflow.com/questions/27832054/change-tick-size-on-colorbar-of-seaborn-heatmap
cax = plt.gcf().axes[-1]
cax.tick_params(labelsize=14)
plt.title('Daily NYC Subway Ridership\n(Data from May 2011)', fontsize=20)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 0. References
Step2: 1.3 What results did you get from this statistical test? These should include the following numerical values
Step3: 1.4 What is the significance and interpretation of these results?
Step4: 2.1 What approach did you use to compute the coefficients theta and produce prediction for ENTRIESn_hourly in your regression model
Step5: 2.3 Why did you select these features in your model?
Step6: Some parameters are going to be clearly important
Step7: The final selection of variables was determined through trial and error of rational combinations of variables. The station popularity was captured in using the stall_num2 variable, since it appears to create a superior model compared with just using UNIT dummies, and because it allowed the creation of combinations. Combining the station with hour was useful, and is intuitive since stations in the CBD will have the greatest patronage and have greater entries in the evening peak hour. A similar logic applies to combining the station and whether it is a weekday.
Step8: Due to the use of several combinations, there are very few non-dummy features, with the coefficients illustrated below. Since stall_num2 is also used in several combinations, it's individual coefficient doesn't prove very useful.
Step9: However when looking at all the combinations for stall_num2 provides greater insight. Here we can see that activity is greater on weekdays, and greatest in the 16
Step10: Even more interesting are the coefficient for the rain combinations. These appear to indicate that patronage increases in the 08
Step11: 2.5 What is your model’s R2 (coefficients of determination) value?
Step12: The final R-squared value of 0.74 is much greater than earlier models that used UNIT as a dummy variable, which had R-squared values around 0.55.
Step13: Secondly, a scatterplot of the residuals against the expected values is plotted. As expected, the largest residuals are associated with cases where the traffic is largest. In general the model appears to underpredict the traffic at the busiest of units. Also clear on this plot is how individual stations form a 'streak' of points on the diagonal. This is because the model essentially makes a prediction for each station per hour per for weekdays and weekends. The natural variation of the actual result in this timeframe creates the run of points.
Step14: Additionally, note that the condition number for the final model is relatively low, hence there don't appear to be any collinearity issues with this model. By comparison, when UNIT was included as a dummy variable instead, the correlation was weaker and the condition number was up around 220.
Step15: In summary, it appears that this linear model has done a reasonable job of predicting ridership in this instance. Clearly some improvements are possible (like fixing the predictions of negative entries!), but given there will always be a degree of random variation, an R-squared value of 0.74 for a linear model seems quite reasonable. To be sure of the model suitability the data should be split into training/test sets. Additionally, more data from extra months could prove beneficial.
Step16: Once both plots are normalised, the difference between subway entries when raining and not raining are almost identical. No useful differentiation can be made between the two datasets here.
|
12,378
|
<ASSISTANT_TASK:>
Python Code:
import os
PROJECT = 'cloud-training-demos' # CHANGE THIS
REGION = 'us-central1' # Choose an available region for Cloud MLE from https://cloud.google.com/ml-engine/docs/regions.
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME. Use a regional bucket in the region you selected.
# for bash
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
!grep -A 2 add_argument taxifare/trainer/task.py
!grep -A 5 add_eval_metrics taxifare/trainer/model.py
!grep -A 5 "trial" taxifare/trainer/task.py
%%writefile hyperparam.yaml
trainingInput:
scaleTier: STANDARD_1
hyperparameters:
goal: MINIMIZE
maxTrials: 30
maxParallelTrials: 3
hyperparameterMetricTag: rmse
params:
- parameterName: train_batch_size
type: INTEGER
minValue: 64
maxValue: 512
scaleType: UNIT_LOG_SCALE
- parameterName: nbuckets
type: INTEGER
minValue: 10
maxValue: 20
scaleType: UNIT_LINEAR_SCALE
- parameterName: hidden_units
type: CATEGORICAL
categoricalValues: ["128 32", "256 128 16", "64 64 64 8"]
%%bash
OUTDIR=gs://${BUCKET}/taxifare/ch4/taxi_trained
JOBNAME=lab4a_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/taxifare/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--runtime-version=1.4 \
--config=hyperparam.yaml \
-- \
--train_data_paths="gs://$BUCKET/taxifare/ch4/taxi_preproc/train*" \
--eval_data_paths="gs://${BUCKET}/taxifare/ch4/taxi_preproc/valid*" \
--output_dir=$OUTDIR \
--train_steps=5000
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h1> 1. Command-line parameters to task.py </h1>
Step2: <h1> 2. Evaluation metric </h1>
Step3: <h1> 3. Make sure outputs do not clobber each other </h1>
Step4: <h1> 4. Create hyper-parameter configuration </h1>
Step5: <h1> 5. Run the training job </h1>
|
12,379
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division, unicode_literals
import oddt
from oddt.shape import usr, usr_similarity
print(oddt.__version__)
heroin = oddt.toolkit.readstring('smi',
'CC(=O)Oc1ccc2c3c1O[C@@H]4[C@]35CC[NH+]([C@H](C2)[C@@H]5C=C[C@@H]4OC(=O)C)C')
smiles = ['CC(=O)Oc1ccc2c3c1O[C@@H]4[C@]35CC[NH+]([C@H](C2)[C@@H]5C=C[C@@H]4OC(=O)Cc6cccnc6)C',
'CC(=O)O[C@@H]1C=C[C@@H]2[C@H]3Cc4ccc(c5c4[C@]2([C@H]1O5)CC[NH+]3C)OC',
'C[N+]1(CC[C@@]23c4c5ccc(c4O[C@H]2[C@@H](C=C[C@@H]3[C@@H]1C5)O)OC)C',
'C[NH2+][C@@H]1Cc2ccc(c3c2[C@]4([C@@H]1CC=C([C@H]4O3)OC)C=C)OC',
'CCOC(=O)CNC(=O)O[C@H]1C=C[C@H]2[C@H]3Cc4ccc(c5c4[C@]2([C@H]1O5)CC[NH+]3C)OCOC',
'CC(=O)OC1=CC[C@H]2[C@@H]3Cc4ccc(c5c4[C@@]2([C@@H]1O5)CC[NH+]3C)OC',
'C[NH+]1CC[C@]23c4c5cc(c(c4O[C@H]2[C@H](C=C[C@H]3[C@H]1C5)O)O)c6cc7c8c(c6O)O[C@@H]9[C@]81CC[NH+]([C@H](C7)[C@@H]1C=C[C@@H]9O)C']
molecules = [oddt.toolkit.readstring('smi', smi) for smi in smiles]
heroin.make3D()
heroin.removeh()
for mol in molecules:
    mol.make3D()
    mol.removeh()
usr_heroin = usr(heroin)
usr_heroin
usr_similarity(usr_heroin, usr(molecules[0]))
similar_mols = []
for i, mol in enumerate(molecules):
    sim = usr_similarity(usr_heroin, usr(mol))
    similar_mols.append((i, sim))
similar_mols.sort(key=lambda similarity: similarity[1], reverse=True)
similar_mols
heroin
idx_most = similar_mols[0][0]
molecules[idx_most]
idx_least = similar_mols[-1][0]
molecules[idx_least]
usr_most = usr(molecules[idx_most])
usr_least = usr(molecules[idx_least])
usr_similarity(usr_most, usr_least)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We'd like to compare the shape of heroin with other molecules.
Step2: To compute the shape using USR we need the molecule's 3D coordinates.
Step3: Now we can use the usr function.
Step4: USR represents shape with 12 descriptors, which summarize the distribution of atomic distances in the molecule. For more details see Ballester & Richards (2007).<br/>
Step5: The similarity function returns a number in range (0, 1], where a higher number means that the molecules are more similar and 1 means that the molecules have identical shapes.<br/>
Step6: Heroin
Step7: The most similar molecule
Step8: The least similar molecule
|
12,380
|
<ASSISTANT_TASK:>
Python Code:
# Versão da Linguagem Python
from platform import python_version
print('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())
import datetime
agora = datetime.datetime.now()
agora
t = datetime.time(7, 43, 28)
print (t)
print ('Hora :', t.hour)
print ('Minute:', t.minute)
print ('Segundo:', t.second)
print ('Microsegundo:', t.microsecond)
print(datetime.time.min)
hoje = datetime.date.today()
print (hoje)
print ('ctime:', hoje.ctime())
print ('Ano:', hoje.year)
print ('Mês :', hoje.month)
print ('Dia :', hoje.day)
d1 = datetime.date(2015, 4, 28)
print ('d1:', d1)
d2 = d1.replace(year=2016)
print ('d2:', d2)
# Diferença em dias entre duas datas
d2 - d1
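# The subtraction above yields a datetime.timedelta object (an aside, not
# part of the original lesson); timedelta arithmetic also works the other
# way round, shifting a date forward or back by a number of days.
import datetime

d1 = datetime.date(2015, 4, 28)
delta = datetime.timedelta(days=366)  # 366 because a leap day (29 Feb 2016) lies in between
print(d1 + delta)  # 2016-04-28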
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Datetime
|
12,381
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.insert(0, "../..")
from insights.core import dr
# Here's our component type with the clever name "component."
# Insights Core provides several types that we'll come to later.
class component(dr.ComponentType):
    pass
import random
# Make two components with no dependencies
@component()
def rand():
    return random.random()

@component()
def three():
    return 3

# Make a component that depends on the other two. Notice that we depend on two
# things, and there are two arguments to the function.
@component(rand, three)
def mul_things(x, y):
    return x * y
# Now that we have a few components defined, let's run them.
from pprint import pprint
# If you call run with no arguments, all components of every type (with a few caveats
# I'll address later) are run, and their values or exceptions are collected in an
# object called a broker. The broker is like a fancy dictionary that keeps up with
# the state of an evaluation.
broker = dr.run()
pprint(broker.instances)
class stage(dr.ComponentType):
    pass

@stage(mul_things)
def spam(m):
    return int(m)
broker = dr.run()
print "All Instances"
pprint(broker.instances)
print
print "Components"
pprint(broker.get_by_type(component))
print
print "Stages"
pprint(broker.get_by_type(stage))
class thing(dr.ComponentType):
def invoke(self, broker):
return self.component(broker)
@thing(rand, three)
def stuff(broker):
r = broker[rand]
t = broker[three]
return r + t
broker = dr.run()
print broker[stuff]
@stage()
def boom():
raise Exception("Boom!")
broker = dr.run()
e = broker.exceptions[boom][0]
t = broker.tracebacks[e]
pprint(e)
print
print t
@stage("where's my stuff at?")
def missing_stuff(s):
return s
broker = dr.run()
print broker.missing_requirements[missing_stuff]
@stage("a", "b", [rand, "d"], ["e", "f"])
def missing_more_stuff(a, b, c, d, e, f):
return a + b + c + d + e + f
broker = dr.run()
print broker.missing_requirements[missing_more_stuff]
@stage(rand, optional=['test'])
def is_greater_than_ten(r, t):
return (int(r*10.0) < 5.0, t)
broker = dr.run()
print broker[is_greater_than_ten]
class mything(dr.ComponentType):
requires = [rand]
@mything()
def dothings(r):
return 4 * r
broker = dr.run(broker=broker)
pprint(broker[dothings])
pprint(dr.get_dependencies(dothings))
class anotherthing(dr.ComponentType):
metadata={"a": 3}
@anotherthing(metadata={"b": 4, "c": 5})
def four():
return 4
dr.get_metadata(four)
class grouped(dr.ComponentType):
group = "grouped"
@grouped()
def five():
return 5
b = dr.Broker()
dr.run(dr.COMPONENTS["grouped"], broker=b)
pprint(b.instances)
from insights.core import dr
@stage()
def six():
return 6
@stage(six)
def times_two(x):
return x * 2
# If the component's full name was foo.bar.baz.six, this would print "baz"
print "\nModule (times_two):", dr.get_base_module_name(times_two)
print "\nComponent Type (times_two):", dr.get_component_type(times_two)
print "\nDependencies (times_two): "
pprint(dr.get_dependencies(times_two))
print "\nDependency Graph (stuff): "
pprint(dr.get_dependency_graph(stuff))
print "\nDependents (rand): "
pprint(dr.get_dependents(rand))
print "\nGroup (six):", dr.get_group(six)
print "\nMetadata (four): ",
pprint(dr.get_metadata(four))
# prints the full module name of the component
print "\nModule Name (times_two):", dr.get_module_name(times_two)
# prints the module name joined to the component name by a "."
print "\nName (times_two):", dr.get_name(times_two)
print "\nSimple Name (times_two):", dr.get_simple_name(times_two)
from insights.core import dr
from insights.core.context import HostContext
from insights.core.spec_factory import (simple_file,
glob_file,
simple_command,
listdir,
foreach_execute,
foreach_collect,
first_file,
first_of)
release = simple_file("/etc/redhat-release")
hostname = simple_file("/etc/hostname")
ctx = HostContext()
broker = dr.Broker()
broker[HostContext] = ctx
broker = dr.run(broker=broker)
print broker[release].path, broker[release].content
print broker[hostname].path, broker[hostname].content
host_stuff = glob_file("/etc/host*", ignore="(allow|deny)")
broker = dr.run(broker=broker)
print broker[host_stuff]
uptime = simple_command("/usr/bin/uptime")
broker = dr.run(broker=broker)
print (broker[uptime].cmd, broker[uptime].args, broker[uptime].rc, broker[uptime].content)
interfaces = listdir("/sys/class/net")
broker = dr.run(broker=broker)
pprint(broker[interfaces])
ethtool = foreach_execute(interfaces, "ethtool %s")
broker = dr.run(broker=broker)
pprint(broker[ethtool])
from insights.specs.default import format_rpm
from insights.core.context import DockerImageContext
from insights.core.plugins import datasource
from insights.core.spec_factory import CommandOutputProvider
rpm_format = format_rpm()
cmd = "/usr/bin/rpm -qa --qf '%s'" % rpm_format
host_rpms = simple_command(cmd, context=HostContext)
@datasource(DockerImageContext)
def docker_installed_rpms(ctx):
root = ctx.root
cmd = "/usr/bin/rpm -qa --root %s --qf '%s'" % (root, rpm_format)
result = ctx.shell_out(cmd)
return CommandOutputProvider(cmd, ctx, content=result)
installed_rpms = first_of([host_rpms, docker_installed_rpms])
broker = dr.run(broker=broker)
pprint(broker[installed_rpms])
from insights.core import Parser
from insights.core.plugins import parser
@parser(hostname)
class HostnameParser(Parser):
def parse_content(self, content):
self.host, _, self.domain = content[0].partition(".")
broker = dr.run(broker=broker)
print "Host:", broker[HostnameParser].host
@parser(ethtool)
class Ethtool(Parser):
def parse_content(self, content):
self.link_detected = None
self.device = None
for line in content:
if "Settings for" in line:
self.device = line.split(" ")[-1].strip(":")
if "Link detected" in line:
self.link_detected = line.split(":")[-1].strip()
broker = dr.run(broker=broker)
for eth in broker[Ethtool]:
print "Device:", eth.device
print "Link? :", eth.link_detected, "\n"
from insights.core.plugins import rule, make_fail, make_pass
ERROR_KEY = "IS_LOCALHOST"
@rule(HostnameParser)
def report(hn):
return make_pass(ERROR_KEY) if "localhost" in hn.host else make_fail(ERROR_KEY)
brok = dr.Broker()
brok[HostContext] = HostContext()
brok = dr.run(broker=brok)
pprint(brok.get(report))
def observer(c, broker):
if c not in broker:
return
value = broker[c]
pprint(value)
broker.add_observer(observer, component_type=parser)
broker = dr.run(broker=broker)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: How do I use it?
Step2: Component Types
Step3: Component Invocation
Step4: Notice that broker can be used as a dictionary to get the value of components that have already executed without directly looking at the broker.instances attribute.
Step5: Missing Dependencies
Step6: Notice that the first elements in the dependency list after @stage are simply "a" and "b", but the next two elements are themselves lists. This means that at least one element of each list must be present. The first "any" list has [rand, "d"], and rand is available, so it resolves. However, neither "e" nor "f" are available, so the resolution fails. Our missing dependencies list includes the first two standalone elements as well as the second "any" list.
Step7: Automatic Dependencies
Step8: Metadata
Step9: Component Groups
Step10: If a group isn't specified in the type definition or in the component decorator, the default group is assumed. Likewise, the default group is assumed when calling run if one isn't provided.
Step11: Loading Components
Step12: glob_file
Step13: simple_command
Step14: listdir
Step15: foreach_execute
Step16: Notice each element in the list returned by interfaces is a single string. The system interpolates each element into the ethtool command string and evaluates each result. This produces a list of objects, one for each input element, instead of a single object. If the list created by interfaces contained tuples with n elements, then our command string would have had n substitution parameters.
Step17: What datasources does Insights Core provide?
Step18: Notice that the parser decorator accepts only one argument, the datasource the component needs. Also notice that our parser has a sensible default constructor that accepts a datasource and passes its content into a parse_content function.
Step19: We provide curated parsers for all of our datasources. They're in insights.parsers.
Step20: Conditions and Incidents
|
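The dependency-resolution idea behind `dr.run` — decorated components declare what they depend on, and a broker caches each result so every component runs once — can be sketched without the insights library. All names below are illustrative, not the real `insights.core.dr` API:

```python
# Minimal sketch of decorator-driven dependency resolution.
REGISTRY = {}  # component function -> list of its dependencies

def component(*deps):
    def decorator(func):
        REGISTRY[func] = list(deps)
        return func
    return decorator

def run(broker=None):
    broker = {} if broker is None else broker
    def resolve(comp):
        if comp in broker:            # already evaluated: reuse cached value
            return broker[comp]
        args = [resolve(d) for d in REGISTRY[comp]]  # depth-first resolution
        broker[comp] = comp(*args)
        return broker[comp]
    for comp in REGISTRY:
        resolve(comp)
    return broker

@component()
def three():
    return 3

@component()
def four():
    return 4

@component(three, four)
def mul(x, y):
    return x * y

broker = run()
print(broker[mul])  # 12
```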
12,382
|
<ASSISTANT_TASK:>
Python Code:
import os
PROJECT = !(gcloud config get-value core/project)
PROJECT = PROJECT[0]
BUCKET = PROJECT
os.environ["BUCKET"] = BUCKET
%%writefile tpu_models/trainer/task.py
"""TPU trainer command line interface."""
import argparse
import sys
import tensorflow as tf
from . import model, util
def _parse_arguments(argv):
"""Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
"--epochs", help="The number of epochs to train", type=int, default=5
)
parser.add_argument(
"--steps_per_epoch",
help="The number of steps per epoch to train",
type=int,
default=500,
)
parser.add_argument(
"--train_path",
help="The path to the training data",
type=str,
default="gs://cloud-ml-data/img/flower_photos/train_set.csv",
)
parser.add_argument(
"--eval_path",
help="The path to the evaluation data",
type=str,
default="gs://cloud-ml-data/img/flower_photos/eval_set.csv",
)
parser.add_argument(
"--tpu_address",
help="The path to the TPUs we will use in training",
type=str,
required=True,
)
parser.add_argument(
"--hub_path",
help="The path to TF Hub module to use in GCS",
type=str,
required=True,
)
parser.add_argument(
"--job-dir",
help="Directory where to save the given model",
type=str,
required=True,
)
return parser.parse_known_args(argv)
def main():
"""Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# TODO: define a TPU strategy
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
tpu=args.tpu_address
)
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
with strategy.scope():
train_data = util.load_dataset(args.train_path)
eval_data = util.load_dataset(args.eval_path, training=False)
image_model = model.build_model(args.job_dir, args.hub_path)
model_history = model.train_and_evaluate(
image_model,
args.epochs,
args.steps_per_epoch,
train_data,
eval_data,
args.job_dir,
)
return model_history
if __name__ == "__main__":
main()
!wget https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4?tf-hub-format=compressed
%%bash
rm -r tpu_models/hub
mkdir tpu_models/hub
tar xvzf 4?tf-hub-format=compressed -C tpu_models/hub/
!gsutil rm -r gs://$BUCKET/tpu_models
!gsutil cp -r tpu_models gs://$BUCKET/tpu_models
!echo "gsutil cp -r gs://$BUCKET/tpu_models ."
%%bash
export TPU_NAME=my-tpu
echo "export TPU_NAME="$TPU_NAME
echo "python3 -m tpu_models.trainer.task \
--tpu_address=\$TPU_NAME \
--hub_path=gs://$BUCKET/tpu_models/hub/ \
--job-dir=gs://$BUCKET/flowers_tpu_$(date -u +%y%m%d_%H%M%S)"
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Packaging the Model
Step5: The TPU server
Step6: This model is still compressed, so lets uncompress it with the tar command below and place it in our tpu_models directory.
Step7: Finally, we need to transfer our materials to the TPU. We'll use GCS as a go-between, using gsutil cp to copy everything.
Step8: Spinning up a TPU
Step9: Time to shine, TPU! Run the below cell and copy the output into your TPU terminal. Training will be slow at first, but it will pick up speed after a few minutes once the Tensorflow graph has been built out.
|
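The `_parse_arguments` pattern in the trainer above — parse the known flags, hand anything unrecognized back instead of raising — can be exercised standalone with `argparse.parse_known_args` (flag names here mirror the notebook's, trimmed down for the sketch):

```python
import argparse

def parse_args(argv):
    # Known args are parsed into the namespace; unknown ones are returned
    # separately rather than causing an error, as in the trainer's CLI.
    parser = argparse.ArgumentParser()
    parser.add_argument("--epochs", type=int, default=5)
    parser.add_argument("--tpu_address", type=str, required=True)
    args, unknown = parser.parse_known_args(argv)
    return args, unknown

args, unknown = parse_args(["--tpu_address", "my-tpu", "--epochs", "3", "--extra", "x"])
print(args.epochs, args.tpu_address, unknown)  # 3 my-tpu ['--extra', 'x']
```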
12,383
|
<ASSISTANT_TASK:>
Python Code:
def pass_through(x):
return x
data_lm = load_data(path, bs=120)
learn = language_model_learner(data=data_lm,
arch=AWD_LSTM,
pretrained=False)
learn.lr_find()
learn.recorder.plot()
best_lr = 1e-2 * 2
escb = EarlyStoppingCallback(learn=learn, patience=5)
smcb = SaveModelCallback(learn=learn)
rpcb = ReduceLROnPlateauCallback(learn=learn, patience=3)
sgcb = ShowGraph(learn=learn)
csvcb = CSVLogger(learn=learn)
callbacks = [escb, smcb, rpcb, sgcb, csvcb]
learn.fit_one_cycle(cyc_len=1,
max_lr=1e-3,
tot_epochs=10,
callbacks=callbacks)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Instantiate Language Model
Step2: 3. Train Language Model
Step3: Define callbacks
Step4: Train Model
|
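The callbacks wired into `fit_one_cycle` above (early stopping with `patience`, LR reduction on plateau) boil down to simple counter logic. A library-free sketch of the early-stopping rule — hypothetical losses, not fastai's actual implementation:

```python
def early_stop_epoch(losses, patience):
    # Returns the epoch index at which training would stop, i.e. the first
    # epoch where the best loss has not improved for `patience` epochs,
    # or None if training runs to completion.
    best = float("inf")
    since_best = 0
    for epoch, loss in enumerate(losses):
        if loss < best:
            best, since_best = loss, 0
        else:
            since_best += 1
            if since_best >= patience:
                return epoch
    return None

losses = [1.0, 0.8, 0.7, 0.71, 0.72, 0.73, 0.74, 0.75]
print(early_stop_epoch(losses, patience=5))  # 7: five epochs with no improvement after 0.7
```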
12,384
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import concarne
import concarne.patterns
import concarne.training
import lasagne
import theano.tensor as T
%pylab inline
try:
import sklearn.linear_model as sklm
except:
print("You don't have scikit-learn installed; install it to compare "
"learning with side information to simple supervised learning")
sklm = None
import numpy as np
num_samples = 300
input_dim = 50
side_dim = 50
# generate some random data with 100 samples
# and 5 dimensions
X = np.random.randn(num_samples, input_dim)
# select the third dimension as the relevant
# for our classification task
S = X[:, 2:3]
# The labels are simply the sign of S
# (note the downcast to int32 - this is required
# by theano)
y = np.asarray(S > 0, dtype='int32').reshape( (-1,) )
# This means we have 2 classes - we will use
# that later for building the pattern
num_classes = 2
plt.plot(S)
plt.plot(y)
Z = np.random.randn(num_samples, side_dim)
# set second dimension of Z to correspond to S
Z[:, 1] = S[:,0]
# random rotation 1
R = np.linalg.qr(np.random.randn(input_dim, input_dim))[0]
X = X.dot(R)
# random rotation 2
Q = np.linalg.qr(np.random.randn(side_dim, side_dim))[0]
Z = Z.dot(Q)
split = num_samples // 3  # integer division so the value can be used as a slice index
X_train = X[:split]
X_val = X[split:2*split]
X_test = X[2*split:]
y_train = y[:split]
y_val = y[split:2*split]
y_test = y[2*split:]
Z_train = Z[:split]
Z_val = Z[split:2*split]
Z_test = Z[2*split:]
if sklm is not None:
# let's try different regularizations
for c in [1e-5, 1e-1, 1, 10, 100, 1e5]:
lr = sklm.LogisticRegression(C=c)
lr.fit(X_train, y_train)
print ("Logistic Regression (C=%f)\n accuracy = %.3f %%" % (c, 100*lr.score(X_test, y_test)))
# Let's first define the theano variables which will represent our data
input_var = T.matrix('inputs') # for X
target_var = T.ivector('targets') # for Y
side_var = T.matrix('sideinfo') # for Z
# Size of the intermediate representation phi(X);
# since S is 1-dim, phi(X) can also map to a
# 1-dim vector
representation_dim = 1
phi = [ (lasagne.layers.DenseLayer,
{ 'num_units': concarne.patterns.Pattern.PHI_OUTPUT_SHAPE,
'nonlinearity':None, 'b':None })]
psi = [(lasagne.layers.DenseLayer,
{ 'num_units': concarne.patterns.Pattern.PSI_OUTPUT_SHAPE,
'nonlinearity':lasagne.nonlinearities.softmax, 'b':None })]
beta = [(lasagne.layers.DenseLayer,
{ 'num_units': concarne.patterns.Pattern.BETA_OUTPUT_SHAPE,
'nonlinearity':None, 'b':None })]
pattern = concarne.patterns.MultiViewPattern(
phi=phi, psi=psi, beta=beta,
# the following parameters are required to
# build the functions and the losses
input_var=input_var,
target_var=target_var,
side_var=side_var,
input_shape=input_dim,
target_shape=num_classes,
side_shape=side_dim,
representation_shape=representation_dim,
# we have to define two loss functions:
# 1) the target loss deals with
# optimizing psi and phi wrt. X & Y
target_loss=lasagne.objectives.categorical_crossentropy,
# 2) the side loss deals with
# optimizing beta and phi wrt. X & Z,
# for multi-view it is beta(Z)~phi(X)
side_loss=lasagne.objectives.squared_error)
trainer = concarne.training.PatternTrainer(
pattern,
procedure='simultaneous',
num_epochs=500,
batch_size=10,
update=lasagne.updates.nesterov_momentum,
update_learning_rate=0.01,
update_momentum=0.9,
)
trainer.fit_XYZ(X_train, y_train, [Z_train],
X_val=X_val, y_val=y_val,
side_val=[X_val, Z_val],
verbose=True)
pass
trainer.score(X_test, y_test, verbose=True)
pass
trainer.score_side([X_test, Z_test], verbose=True)
pass
trainer.predict(X_test)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This example illustrates how simple it is to train a classifier using
Step2: Data generation
Step3: Now let's define some side information
Step4: Let's make it harder to find S in X and Z by applying a random rotations to both data sets
Step5: Finally, split our data into training, test, and validation data
Step6: Purely supervised learning
Step7: Learning with side information
Step8: Now define the functions - we choose linear functions.
Step9: For the variable of your layer that denotes the output of the network you should use the markers PHI_OUTPUT_SHAPE,
Step10: Training
Step11: <b>Let's train!</b>
Step12: Some statistics
Step13: We can also compute a test score for the side loss
Step14: You can then also query the prediction output, similar to the scikit-learn API
|
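The "random rotation" trick used on X and Z above relies on the Q factor of a QR decomposition being orthogonal, so the rotation mixes the dimensions without destroying the hidden signal (distances and norms are preserved). A quick standalone check:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 10
R = np.linalg.qr(rng.standard_normal((dim, dim)))[0]  # Q factor: orthogonal

# Orthogonality: R @ R.T is (numerically) the identity
identity_error = np.abs(R @ R.T - np.eye(dim)).max()

# A rotation preserves norms, so the signal is mixed but not destroyed
x = rng.standard_normal(dim)
norm_error = abs(np.linalg.norm(x @ R) - np.linalg.norm(x))

print(identity_error < 1e-10, norm_error < 1e-10)  # True True
```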
12,385
|
<ASSISTANT_TASK:>
Python Code:
global PASSWORD
PASSWORD = "Guild o' Code"
def halver(num):
"""Returns half of the 'num' argument."""  # docstring
return num / 2
print "halver's name:", halver.__name__
print "halver's docstring:", halver.__doc__
halver?
print halver(20)
print halver(10)
print halver(5)
print [i/2 for i in range(10)]
print [i/2.0 for i in range(10)]
print [float(i)/2 for i in range(10)]
def float_wrapper(func):
def wrapper(float_me):
return func(float(float_me))
return wrapper
print float_wrapper(halver)(10)
print float_wrapper(halver)(5)
halve = float_wrapper(halver)
print halver(10)
print halver(5)
def halver(num):
"""Returns half of the 'num' argument.
This is a reimplementation of halver()."""
return num / 2
halver = float_wrapper(halver)
print halve(5)
@float_wrapper
def halver2(num):
"""Returns half of the 'num' argument.
This is a re-reimplementation of halver()."""
return num / 2
print halver2(5)
print "halver's name:", halver.__name__
print "halver's docstring:", halver.__doc__
print "halver2's name:", halver2.__name__
print "halver2's docstring:", halver2.__doc__
halver?
import functools
def better_float_wrapper(func):
@functools.wraps(func) # this line is the only difference
def wrapper(num):
return func(float(num))
return wrapper
@better_float_wrapper
def halver3(num):
"""Returns half of the 'num' argument.
This is a re-reimplementation of halve()."""
return num / 2
print halver3(5)
print "halver3's name:", halver3.__name__
print "halver3's docstring:", halver3.__doc__
halver3?
class StrictAttributeHolder(object):
def __init__(self):
self._int_val = None
@property
def int_val(self):
if self._int_val is not None:
return self._int_val
else:
raise Exception("Can't read what isn't written!")
@int_val.setter
def int_val(self, value):
if isinstance(value, int):
self._int_val = value
else:
raise TypeError("Can't set int_val to a non-int value!")
sah = StrictAttributeHolder()
print sah.int_val
sah.int_val = 5
print sah.int_val
sah.int_val = 5.0
sah.int_val = [5]
# Create a @timed_function decorator that computes and prints the execution time of
# any function that it wraps. Use *args and **kwargs to capture all function
# arguments.
def timed_function(func):
# Your implementation here
pass
# Create a @case_mod decorator that gives any function that it wraps an
# all-lowercase version of an input string and then returns an all-uppercase
# version of the wrapped function's output
def case_mod(func):
# Your implementation here
pass
# Create a @secured_function decorator that looks for a global password before
# running the wrapped function and will raise an exception instead of running
# the wrapped function if the wrong password is provided. Use *args and **kwargs
# to capture all function arguments.
def secured_function(func):
global PASSWORD
# Your implementation here
pass
# Execute this cell without modifying it.
import time  # needed by picky_eater's time.sleep below
picky_eater_food = "You can now write your own decorators!".split(' ')
@secured_function
@timed_function
@case_mod
def picky_eater(food):
if food.islower():
time.sleep(0.1 * len(food))
return food
else:
raise Exception("I don't wanna eat this!")
# Change ONLY the assigned value of PASSWORD in this cell, then
# execute it.
global PASSWORD
PASSWORD = ''
# Run this cell as-is without any exceptions cropping up and with
# an execution time printed out for each morsel in picky_eater_food.
for morsel in picky_eater_food:
print picky_eater(morsel)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: function
Step3: function object
Step4: Uh-oh...
Step6: wrapper
Step8: decorator
Step9: Dust off your hands and kick back. We're completely, totally...
Step11: ... not done yet.
Step12: Usage of @property
Step13: Create your own!
Step14: and then...
|
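One possible solution to the `@case_mod` exercise above — a sketch matching the stated spec (wrapped function receives lowercase input, caller receives uppercase output), using `functools.wraps` as the section recommends. The `shout` function is a made-up example, not part of the original exercise:

```python
import functools

def case_mod(func):
    @functools.wraps(func)          # preserve the wrapped function's name/docstring
    def wrapper(text):
        result = func(text.lower()) # wrapped function sees lowercase input
        return result.upper()       # caller sees uppercase output
    return wrapper

@case_mod
def shout(word):
    assert word.islower()  # the decorator guarantees lowercase input
    return word

print(shout("Hello"))  # HELLO
```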
12,386
|
<ASSISTANT_TASK:>
Python Code:
# TODO: add putty connection too.
#read SSH connection parameters
with open('ssh_settings.json') as settings_file:
settings = json.load(settings_file)
hostname = settings['hostname']
username = settings['username']
password = settings['password']
local_key_dir = settings['local_key_dir']
census_dir = 'synthetic_population/'  # remote directory with census data
results_dir = 'craigslist_census/'  # remote directory for results
# estbalish SSH connection
ssh = paramiko.SSHClient()
ssh.load_host_keys(local_key_dir)
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(hostname,username=username, password=password)
sftp = ssh.open_sftp()
def read_listings_file(fname):
"""Read csv file via SFTP and return as dataframe."""
with sftp.open(os.path.join(listings_dir,fname)) as f:
df = pd.read_csv(f, delimiter=',', dtype={'fips_block':str,'state':str,'mpo_id':str}, date_parser=['date'])
# TODO: parse dates.
return df
def log_var(x):
"""Return log of x, but NaN if zero."""
if x==0:
return np.nan
else:
return np.log(x)
def create_census_vars(df):
"""Make meaningful variables and return the dataframe."""
df['pct_white'] = df['race_of_head_1']/df['hhs_tot']
df['pct_black'] = df['race_of_head_2']/df['hhs_tot']
df['pct_amer_native'] = df['race_of_head_3']/df['hhs_tot']
df['pct_alaska_native'] = df['race_of_head_4']/df['hhs_tot']
df['pct_any_native'] = df['race_of_head_5']/df['hhs_tot']
df['pct_asian'] = df['race_of_head_6']/df['hhs_tot']
df['pct_pacific'] = df['race_of_head_7']/df['hhs_tot']
df['pct_other_race'] = df['race_of_head_8']/df['hhs_tot']
df['pct_mixed_race'] = df['race_of_head_9']/df['hhs_tot']
df['pct_mover'] = df['recent_mover_1']/df['hhs_tot']
df['pct_owner'] = df['tenure_1']/df['hhs_tot']
df['avg_hh_size'] = df['persons_tot']/df['hhs_tot']
df['cars_per_hh'] = df['cars_tot']/df['hhs_tot']
df['ln_rent'] = df['rent'].apply(log_var)
df['ln_income'] = df.income_med.apply(log_var)
return df
def filter_outliers(df, rent_range=(100,10000),sqft_range=(10,5000)):
"""Drop outliers from listings dataframe. For now, only need to filter out rent and sq ft.
Args:
df: Dataframe with listings. Column names include ['rent','sqft']
rent_range (tuple): min and max rent
sqft_range (tuple): min and max sqft
Returns:
DataFrame: listings data without outliers.
"""
n0=len(df)
df=df[(df.rent>=rent_range[0])&(df.rent<rent_range[1])]
n1=len(df)
print('Dropped {} outside rent range ${}-${}'.format(n0-n1,rent_range[0],rent_range[1]))
df=df[(df.sqft>=sqft_range[0])&(df.sqft<sqft_range[1])]
n2=len(df)
print('Dropped {} outside sqft range {}-{} sqft. {} rows remaining'.format(n1-n2,sqft_range[0],sqft_range[1],len(df)))
return(df)
# get list of files and load.
# for remotely stored data by state (just do one state for now)
state='CA'
infile='cl_census_{}.csv'.format(state)
#data = read_listings_file(infile) # uncomment to get remote data.
# for local data:
data_dir = '../data/'
data_file = 'sfbay_listings_03032017.csv'
data = pd.read_csv(os.path.join(data_dir,data_file),parse_dates=[1],dtype={'listing_id':str, 'rent':float, 'bedrooms':float, 'bathrooms':float, 'sqft':float,
'rent_sqft':float, 'fips_block':str, 'state':str, 'region':str, 'mpo_id':str, 'lng':float, 'lat':float,
'cars_tot':float, 'children_tot':float, 'persons_tot':float, 'workers_tot':float,
'age_of_head_med':float, 'income_med':float, 'hhs_tot':float, 'race_of_head_1':float,
'race_of_head_2':float, 'race_of_head_3':float, 'race_of_head_4':float, 'race_of_head_5':float,
'race_of_head_6':float, 'race_of_head_7':float, 'race_of_head_8':float, 'race_of_head_9':float,
'recent_mover_0':float, 'recent_mover_1':float, 'tenure_1':float, 'tenure_2':float})
print(len(data))
data.head()
# for census vars, NA really means 0...
census_cols = ['cars_tot', 'children_tot','persons_tot', 'workers_tot', 'age_of_head_med', 'income_med','hhs_tot', 'race_of_head_1', 'race_of_head_2', 'race_of_head_3','race_of_head_4', 'race_of_head_5', 'race_of_head_6', 'race_of_head_7','race_of_head_8', 'race_of_head_9', 'recent_mover_0', 'recent_mover_1','tenure_1', 'tenure_2']
for col in census_cols:
data[col] = data[col].fillna(0)
# create useful variables
data = create_census_vars(data)
# define some feature to include in the model.
features_to_examine = ['rent','ln_rent', 'bedrooms','bathrooms','sqft','pct_white', 'pct_black','pct_asian','pct_mover','pct_owner','income_med','age_of_head_med','avg_hh_size','cars_per_hh']
data[features_to_examine].describe()
# I've already identified these ranges as good at exluding outliers
rent_range=(100,10000)
sqft_range=(10,5000)
data = filter_outliers(data, rent_range=rent_range, sqft_range=sqft_range)
# Use this to explore outliers yourself.
g=sns.distplot(data['rent'], kde=False)
g.set_xlim(0,10000)
g=sns.distplot(data['sqft'], kde=False)
g.set_xlim(0,10000)
# examine NA's
print('Total rows:',len(data))
print('Rows with any NA:',len(data[pd.isnull(data).any(axis=1)]))
print('Rows with bathroom NA:',len(data[pd.isnull(data.bathrooms)]))
print('% rows missing bathroom col:',len(data[pd.isnull(data.bathrooms)])/len(data))
#for d in range(1,31):
# print(d,'% rows missing bathroom col:',len(data[pd.isnull(data.bathrooms)&((data.date.dt.month==12)&(data.date.dt.day==d))])/len(data[(data.date.dt.month==12)&(data.date.dt.day==d)]))
# uncomment to only use data after Dec 21.
#data=data[(data.date.dt.month>=12)&(data.date.dt.day>=22)]
#data.shape
# Uncomment to drop NA's
#data = data.dropna()
#print('Dropped {} rows with NAs'.format(n0-len(data)))
p=sns.distplot(data.rent, kde=False)
p.set_title('rent')
p=sns.distplot(data.ln_rent, kde=False)
p.set_title('ln rent')
plot_rows = math.ceil(len(features_to_examine)/2)
f, axes = plt.subplots(plot_rows,2, figsize=(8,15))
sns.despine(left=True)
for i,col in enumerate(features_to_examine):
row_position = math.floor(i/2)
col_position = i%2
data_notnull = data[pd.notnull(data[col])] # exclude NA values from plot
sns.distplot(data_notnull[col], ax=axes[row_position, col_position],kde=False)
axes[row_position, col_position].set_title('{}'.format(col))
plt.tight_layout()
plt.show()
data_notnull = data[pd.notnull(data['ln_income'])]
p=sns.distplot(data_notnull['ln_income'],kde=False)
p.set_title('ln med income')
# ln med income is not more normal.. use med income instead.
# correlation heatmap
corrmat=data[features_to_examine].corr()
corrmat.head()
f, ax = plt.subplots(figsize=(12, 9))
sns.heatmap(corrmat, vmax=.8, square=True)
f.tight_layout()
print(data.columns)
#'pct_amer_native','pct_alaska_native',
x_cols = ['bedrooms','bathrooms', 'sqft','age_of_head_med', 'income_med','pct_white', 'pct_black', 'pct_any_native', 'pct_asian', 'pct_pacific',
'pct_other_race', 'pct_mixed_race', 'pct_mover', 'pct_owner', 'avg_hh_size', 'cars_per_hh']
y_col = 'ln_rent'
print(len(data))
# exclude missing values
data_notnull= data[(pd.notnull(data[x_cols])).all(axis=1)]
data_notnull= data_notnull[(pd.notnull(data_notnull[y_col]))]
print('using {} rows of {} total'.format(len(data_notnull),len(data)))
from sklearn import linear_model, cross_validation
# create training and testing datasets.
# this creates a test set that is 30% of total obs.
X_train, X_test, y_train, y_test = cross_validation.train_test_split(data_notnull[x_cols],data_notnull[y_col], test_size = .3, random_state = 201)
regr = linear_model.LinearRegression()
regr.fit(X_train, y_train)
# Intercept
print('Intercept:', regr.intercept_)
# The coefficients
print('Coefficients:')
pd.Series(regr.coef_, index=x_cols)
# See mean square error, using test data
print("Mean squared error: %.2f" % np.mean((regr.predict(X_test) - y_test) ** 2))
print("RMSE:", np.sqrt(np.mean((regr.predict(X_test) - y_test) ** 2)))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(X_test, y_test))
# Plot predicted values vs. observed
plt.scatter(regr.predict(X_train),y_train, color='blue',s=1, alpha=.5)
plt.show()
# plot residuals vs predicted values
plt.scatter(regr.predict(X_train), regr.predict(X_train)- y_train, color='blue',s=1, alpha=.5)
plt.scatter(regr.predict(X_test), regr.predict(X_test)- y_test, color='green',s=1, alpha=.5)
plt.show()
print("Training set. Mean squared error: %.5f" % np.mean((regr.predict(X_train) - y_train) ** 2), '| Variance score: %.5f' % regr.score(X_train, y_train))
print("Test set. Mean squared error: %.5f" % np.mean((regr.predict(X_test) - y_test) ** 2), '| Variance score: %.5f' % regr.score(X_test, y_test))
from sklearn.linear_model import Ridge
# try a range of different regularization terms.
for a in [10,1,0.1,.01,.001,.00001]:
ridgereg = Ridge(alpha=a)
ridgereg.fit(X_train, y_train)
print('\n alpha:',a)
print("Mean squared error: %.5f" % np.mean((ridgereg.predict(X_test) - y_test) ** 2),'| Variance score: %.5f' % ridgereg.score(X_test, y_test))
# Intercept
print('Intercept:', ridgereg.intercept_)
# The coefficients
print('Coefficients:')
pd.Series(ridgereg.coef_, index=x_cols)
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error
def RMSE(y_actual, y_predicted):
return np.sqrt(mean_squared_error(y_actual, y_predicted))
def cross_val_rf(X, y,max_f='auto', n_trees = 50, cv_method='kfold', k=5):
"""Estimate a random forest model using cross-validation and return the average error across the folds.
Args:
X (DataFrame): features data
y (Series): target data
max_f (str or int): how to select max features to consider for the best split.
If "auto", then max_features=n_features.
If "sqrt", then max_features=sqrt(n_features)
If "log2", then max_features=log2(n_features)
If int, then consider max_features features at each split
n_trees (int): number of trees to build
cv_method (str): how to split the data ('kfold' (default) or 'timeseries')
k (int): number of folds (default=5)
Returns:
float: mean error (RMSE) across all training/test sets.
"""
if cv_method == 'kfold':
kf = KFold(n_splits=k, shuffle=True, random_state=2012016) # use random seed for reproducibility.
E = np.ones(k) # this array will hold the errors.
i=0
for train, test in kf.split(X, y):
train_data_x = X.iloc[train]
train_data_y = y.iloc[train]
test_data_x = X.iloc[test]
test_data_y = y.iloc[test]
# n_estimators is number of trees to build.
# max_features = 'auto' means the max_features = n_features. This is a parameter we should tune.
random_forest = RandomForestRegressor(n_estimators=n_trees, max_features=max_f, criterion='mse', max_depth=None)
random_forest.fit(train_data_x,train_data_y)
predict_y=random_forest.predict(test_data_x)
E[i] = RMSE(test_data_y, predict_y)
i+=1
return np.mean(E)
def optimize_rf(df_X, df_y, max_n_trees=100, n_step = 20, cv_method='kfold', k=5):
"""Optimize hyperparameters for a random forest regressor.
Args:
df_X (DataFrame): features data
df_y (Series): target data
max_n_trees (int): max number of trees to generate
n_step (int): intervals to use for max_n_trees
cv_method (str): how to split the data ('kfold' (default) or 'timeseries')
k (int): number of folds (default=5)
"""
max_features_methods = ['auto','sqrt','log2'] # methods of defining max_features to try.
# create a place to store the results, for easy plotting later.
results = pd.DataFrame(columns=max_features_methods, index=[x for x in range(10,max_n_trees+n_step,n_step)])
for m in max_features_methods:
print('max_features:',m)
for n in results.index:
error = cross_val_rf(df_X, df_y,max_f=m, n_trees=n)
print('n_trees:',n,' error:',error)
results.ix[n,m] = error
return results
# data to use - exclude nulls
df_X = data_notnull[x_cols]
df_y = data_notnull[y_col]
print(df_X.shape, df_y.shape)
#df_all = pd.concat([data_notnull[x_cols],data_notnull[y_col]], axis=1)
#df_all.shape
# basic model to make sure it works
random_forest = RandomForestRegressor(n_estimators=10, criterion='mse', max_depth=None)
random_forest.fit(df_X,df_y)
y_predict = random_forest.predict(df_X)
RMSE(df_y,y_predict)
# without parameter tuning
cross_val_rf(df_X,df_y)
# tune the parameters
rf_results = optimize_rf(df_X,df_y, max_n_trees = 100, n_step = 20) # this is sufficient; very little improvement after n_trees=100.
#rf_results2 = optimize_rf(df_X,df_y, max_n_trees = 500, n_step=100)
rf_results
ax = rf_results.plot()
ax.set_xlabel('number of trees')
ax.set_ylabel('RMSE')
#rf_results2.plot()
random_forest = RandomForestRegressor(n_estimators=100, max_features='sqrt', criterion='mse', max_depth=None)
random_forest.fit(df_X,df_y)
predict_y=random_forest.predict(df_X)
# plot the importances
rf_o = pd.DataFrame({'features':x_cols,'importance':random_forest.feature_importances_})
rf_o= rf_o.sort_values(by='importance',ascending=False)
plt.figure(1,figsize=(12, 6))
plt.xticks(range(len(rf_o)), rf_o.features,rotation=45)
plt.plot(range(len(rf_o)),rf_o.importance,"o")
plt.title('Feature importances')
plt.show()
from sklearn.model_selection import TimeSeriesSplit
tscv = TimeSeriesSplit(n_splits=5)
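TimeSeriesSplit is imported here but never wired into the cross-validation below; a minimal sketch (on a hypothetical toy array) of the expanding-window splits it yields:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(10).reshape(-1, 1)  # toy time-ordered data
splits = [(tr.tolist(), te.tolist()) for tr, te in TimeSeriesSplit(n_splits=3).split(X)]
# each training window grows, and each test window comes strictly after it in time
for tr, te in splits:
    print(tr, te)
```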
from sklearn.ensemble import GradientBoostingRegressor
def cross_val_gb(X,y,cv_method='kfold',k=5, **params):
"""Estimate gradient boosting regressor using cross validation.
Args:
X (DataFrame): features data
y (Series): target data
cv_method (str): how to split the data ('kfold' (default) or 'timeseries')
k (int): number of folds (default=5)
**params: keyword arguments for regressor
Returns:
float: mean error (RMSE) across all training/test sets.
"""
if cv_method == 'kfold':
kf = KFold(n_splits=k, shuffle=True, random_state=2012016) # use random seed for reproducibility.
E = np.ones(k) # this array will hold the errors.
i=0
for train, test in kf.split(X, y):
train_data_x = X.iloc[train]
train_data_y = y.iloc[train]
test_data_x = X.iloc[test]
test_data_y = y.iloc[test]
# n_estimators is number of trees to build.
grad_boost = GradientBoostingRegressor(loss='ls',criterion='mse', **params)
grad_boost.fit(train_data_x,train_data_y)
predict_y=grad_boost.predict(test_data_x)
E[i] = RMSE(test_data_y, predict_y)
i+=1
return np.mean(E)
params = {'n_estimators':100,
'learning_rate':0.1,
'max_depth':1,
'min_samples_leaf':4
}
grad_boost = GradientBoostingRegressor(loss='ls',criterion='mse', **params)
grad_boost.fit(df_X,df_y)
cross_val_gb(df_X,df_y, **params)
n_trees = 100
l_rate = 0.1
max_d = 1
cross_val_gb(df_X, df_y, n_estimators=n_trees, learning_rate=l_rate, max_depth=max_d)  # keyword args so they reach **params rather than cv_method/k
from sklearn.model_selection import GridSearchCV
param_grid = {'learning_rate':[.1, .05, .02, .01],
'max_depth':[2,4,6],
'min_samples_leaf': [3,5,9,17],
'max_features': [1, .3, .1]
}
est= GradientBoostingRegressor(n_estimators = 1000)
gs_cv = GridSearchCV(est,param_grid).fit(df_X,df_y)
print(gs_cv.best_params_)
print(gs_cv.best_score_)
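GridSearchCV fits every combination in the grid above (once per CV fold); a quick stdlib sketch that counts the candidate settings:

```python
from itertools import product

param_grid = {'learning_rate': [.1, .05, .02, .01],
              'max_depth': [2, 4, 6],
              'min_samples_leaf': [3, 5, 9, 17],
              'max_features': [1, .3, .1]}
combos = list(product(*param_grid.values()))
print(len(combos))  # → 144 candidate settings, each refit on every fold
```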
# best parameters
params = {'n_estimators':1000,
'learning_rate':0.05,
'max_depth':6,
'min_samples_leaf':3
}
grad_boost = GradientBoostingRegressor(loss='ls',criterion='mse', **params)
grad_boost.fit(df_X,df_y)
cross_val_gb(df_X,df_y, **params)
# plot the importances
gb_o = pd.DataFrame({'features':x_cols,'importance':grad_boost.feature_importances_})
gb_o= gb_o.sort_values(by='importance',ascending=False)
plt.figure(1,figsize=(12, 6))
plt.xticks(range(len(gb_o)), gb_o.features,rotation=45)
plt.plot(range(len(gb_o)),gb_o.importance,"o")
plt.title('Feature importances')
plt.show()
from sklearn.ensemble.partial_dependence import plot_partial_dependence
from sklearn.ensemble.partial_dependence import partial_dependence
df_X.columns
features = [0,1,2, 15, 4,5,14, 12]
names = df_X.columns
fig, axs = plot_partial_dependence(grad_boost, df_X, features,feature_names=names, grid_resolution=50, figsize = (10,8))
fig.suptitle('Partial dependence of rental price features')
plt.subplots_adjust(top=0.9) # tight_layout causes overlap with suptitle
plt.show()
features = [(0,1),(0,2),(4,2), (4,15),(14,15)]
names = df_X.columns
fig, axs = plot_partial_dependence(grad_boost, df_X, features,feature_names=names, grid_resolution=50, figsize = (9,6))
fig.suptitle('Partial dependence of rental price features')
plt.subplots_adjust(top=0.9) # tight_layout causes overlap with suptitle
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: This notebook explores merged craigslist listings/census data and fits some initial models
Step7: Data Preparation
Step8: create variables
Step9: Filter outliers
Step10: Examine missing data
Step11: uh oh, 74% are missing bathrooms feature. Might have to omit that one. Only 0.02% of rows have other missing values, so that should be ok.
Step12: Bathrooms were added on Dec 21. After that, if bathrooms aren't in the listing, the listing is thrown out. Let's try to find the date when the bathrooms column was added. So if need to use bathrooms feature, can use listings Dec 22 and after.
Step13: Look at distributions
Step14: look at correlations
Step15: The correlations appear as expected, except for cars_per_hh. Maybe this is because cars_per_hh is reflecting the size of the household more than income. Might want to try cars per adult instead.
Step16: Comparison of models
Step17: The residuals look pretty normally distributed.
Step18: Try Ridge Regression (linear regression with regularization )
Step21: As expected, Ridge regression doesn't help much.
Step22: We can use k-fold validation if we believe the samples are independently and identically distributed. That's probably fine right now because we have only 1.5 months of data, but later we may have some time-dependent processes in these timeseries data. If we do use k-fold, I think we should shuffle the samples, because they do not come in a random sequence.
Step23: Using m=sqrt(n_features) and log2(n_features) gives similar performance, and a slight improvement over m = n_features. After about 100 trees the error levels off. One of the nice things about random forest is that using additional trees doesn't lead to overfitting, so we could use more, but it's not necessary. Now we can fit the model using n_trees = 100 and m = sqrt.
Step24: The 'importance' score provides an ordered qualitative ranking of the importance of each feature. It is calculated from the improvement in MSE provided by each feature when it is used to split the tree.
Step25: It's not surprising sqft is the most important predictor, although it is strange cars_per_hh is the second most important. I would have expected income to be higher in the list.
Step27: Try Boosted Forest
Step28: tune parameters
Step29: Let's use partial_dependence to look at feature interactions. Look at the four most important features.
Step30: The partial dependence plots show how predicted values vary with the given covariate, "controlling for" the influence of other covariates (Friedman, 2001). In the top three plots above, we can see non-linear relationships between the features and predicted values. Bedrooms generally have a positive influence on rent, but the effect peaks at 5-6 bedrooms and then falls, with an interesting dip at 3 bedrooms. Perhaps this reflects lower rents for large shared houses that have many rooms but might be in older, poor-condition buildings.
|
12,387
|
<ASSISTANT_TASK:>
Python Code:
def reArrange(words, n):
mp = {}
for i in range(n):
mp[words[i]] = i + 1
words.sort()
for i in range(n):
print(mp[words[i]], end=" ")
words = ["live", "place", "travel", "word", "sky"]
n = len(words)
reArrange(words, n)  # prints: 1 2 5 3 4
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
12,388
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
print(tf.__version__)
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, shape = (None, real_dim), name="inputs_real")
inputs_z = tf.placeholder(tf.float32, shape = (None, z_dim), name ="inputs_z")
return inputs_real, inputs_z
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('Generator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation = None)
# Leaky ReLU
h1 = tf.maximum( (alpha * h1), h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation = None)
out = tf.tanh(logits)
return out
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('Discriminator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation = None)
# Leaky ReLU
h1 = tf.maximum ( (alpha * h1), h1)
logits = tf.layers.dense(h1, 1, activation = None)
out = tf.sigmoid(logits)
return out, logits
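Both networks use tf.maximum(alpha * h1, h1) as a leaky ReLU; a pure-numpy sketch of the same activation:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # pass positives through unchanged, scale negatives by alpha
    x = np.asarray(x, dtype=float)
    return np.maximum(alpha * x, x)

print(leaky_relu([-2.0, 0.0, 3.0]))  # → [-0.02  0.    3.  ]
```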
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 784
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Generator network here
g_model = generator(input_z, input_size)
# g_model is the generator output
# Disriminator network here
d_model_real, d_logits_real = discriminator(input_real)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
# Calculate losses
real_labels = tf.ones_like(d_logits_real) * (1 - smooth)
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
logits = d_logits_real, labels=real_labels))
fake_labels = tf.zeros_like(d_logits_fake)  # the discriminator should label generated images 0
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
logits = d_logits_fake, labels= fake_labels))
d_loss = d_loss_real + d_loss_fake
generated_labels = tf.ones_like(d_logits_fake)
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits = d_logits_fake,
labels = generated_labels))
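A sketch (not part of the notebook) of what sigmoid_cross_entropy_with_logits computes per element, using TensorFlow's numerically stable formula:

```python
import numpy as np

def sigmoid_xent(logits, labels):
    # stable form: max(x, 0) - x*z + log(1 + exp(-|x|))
    x = np.asarray(logits, dtype=float)
    z = np.asarray(labels, dtype=float)
    return np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))

print(sigmoid_xent([0.0], [1.0])[0])  # → 0.693147... (log 2)
```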
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('Generator')]
d_vars = [var for var in t_vars if var.name.startswith('Discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list = d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list = g_vars)
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
_ = view_samples(-1, samples)
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model Inputs
Step2: Generator network
Step3: Discriminator
Step4: Hyperparameters
Step5: Build network
Step6: Discriminator and Generator Losses
Step7: Optimizers
Step8: Training
Step9: Training loss
Step10: Generator samples from training
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
|
12,389
|
<ASSISTANT_TASK:>
Python Code:
# Import Parsl
import parsl
from parsl import *
print(parsl.__version__) # The version should be v0.2.1+
workers = ThreadPoolExecutor(max_workers=4)
# We pass the workers to the DataFlowKernel which will execute our Apps over the workers.
dfk = DataFlowKernel(executors=[workers])
@App('bash', dfk)
def simulate(sim_steps=1, sim_range=100, sim_values=5, outputs=[], stdout=None, stderr=None):
# The bash app function requires that the bash script is returned from the function as a
# string. Positional and Keyword args to the fn() are formatted into the cmd_line string
# All arguments to the app function are made available at the time of string formatting a
# string assigned to cmd_line.
# Here we compose the command-line call to simulate.sh with keyword arguments to simulate()
# and redirect stdout to the first file listed in the outputs list.
return '''echo "sim_steps: {sim_steps}\nsim_range: {sim_range}\nsim_values: {sim_values}"
echo "Starting run at $(date)"
$PWD/bin/simulate.sh --timesteps {sim_steps} --range {sim_range} --nvalues {sim_values} > {outputs[0]}
echo "Done at $(date)"
ls
'''
simulated_results = []
# Launch 10 parallel runs of simulate() and put the futures in a list
for sim_index in range(10):
sim_fut = simulate(sim_steps=1,
sim_range=100,
sim_values=100,
outputs = ['stdout.{0}.txt'.format(sim_index)],
stderr='stderr.{0}.txt'.format(sim_index))
simulated_results.extend([sim_fut])
print ([i.done() for i in simulated_results])
# Grab just the data futures for the output files from each simulation
simulation_outputs = [i.outputs[0] for i in simulated_results]
@App('bash', dfk)
def analyze(inputs=[], stdout=None, stderr=None):
# Here we compose the commandline for stats.sh that take a list of filenames as arguments
# Since we want a space separated list, rather than a python list (e.g: ['x.txt', 'y.txt'])
# we create a string by joining the filenames of each item in the inputs list and using
# that string to format the cmd_line explicitly
input_files = ' '.join([i for i in inputs])
return '$PWD/bin/stats.sh {0}'.format(input_files)
results = analyze(inputs=simulation_outputs,
stdout='analyze.out',
stderr='analyze.err')
results.result()
with open('analyze.out', 'r') as f:
print(f.read())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define resources
Step2: Defining Bash Apps
Step3: Running Bash Apps
Step4: Handling Futures
Step5: Retrieving Results
Step6: Defining a Second Bash App
Step7: Blocking on Results
|
12,390
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
from __future__ import division
import numpy as np
np.random.seed(1337) # for reproducibility
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.utils import np_utils
batch_size = 500
nb_classes = 10
nb_epoch = 1
# input image dimensions
img_rows, img_cols = 28, 28
# number of convolutional filters to use
nb_filters = 32
nb2_filters = 64
# size of pooling area for max pooling
nb_pool = 2
# convolution kernel size
nb_conv = 5
# the data, shuffled and split between tran and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255 # normalize the data
X_test /= 255 # normalize the data
# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
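np_utils.to_categorical one-hot encodes the integer labels; a minimal pure-numpy equivalent (a sketch, not the Keras implementation):

```python
import numpy as np

def to_categorical(y, nb_classes):
    # one row per sample, a 1 in the column of its class
    y = np.asarray(y, dtype=int)
    Y = np.zeros((len(y), nb_classes))
    Y[np.arange(len(y)), y] = 1.0
    return Y

print(to_categorical([0, 2, 1], 3))
```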
model = Sequential()
model.add(Convolution2D(nb_filters, nb_conv, nb_conv,
border_mode='valid',
input_shape=(1, img_rows, img_cols)))
model.add(Activation('relu'))
model.add(Convolution2D(nb2_filters, nb_conv, nb_conv))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(nb_pool, nb_pool)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adadelta')
model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
show_accuracy=True, verbose=1, validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, show_accuracy=True, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set up the parameters for the model. Nothing too exciting here.
Step2: Below we build the neural network. This is the same network used in the TensorFlow example.
Step3: Now we just run the neural network. Note that one epoch here is a run through the entire set of training data, so it takes a while.
|
12,391
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(42)
y = np.random.random(10000)
x = 1./np.sqrt(y)
plt.hist(x, bins=100, range=(1,10), histtype='stepfilled',color='blue')
plt.yscale('log')
def nllp(a):
# here define the function
return 1.
import iminuit
# minp = iminuit.Minuit(nllp,a= ?,error_a=?, errordef=?)
# minp.migrad()
# minp.hesse()
# minp.minos()
# minp.draw_profile('a')
from scipy.integrate import quad
def pdfpn(x, a):
return x**(-a)
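quad returns a (value, error-estimate) pair; a quick sanity check on the power law with a = 2 over the histogram range [1, 10] used above (illustrative only):

```python
from scipy.integrate import quad

# closed form: integral of x^-2 on [1, 10] is 1 - 1/10 = 0.9
val, err = quad(lambda x: x ** -2.0, 1.0, 10.0)
print(val)  # → 0.9
```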
def pdfpn_norm(a):
# here insert the calculation of the normalisation as a function of a
return 1.
def nllpn(a):
# calculate and return the proper negative-log likelihood function
return 1.
# minpn = iminuit.Minuit(nllpn, a=?, error_a=?, errordef=?)
# minpn.migrad()
def pdfcn(x, a, b):
return x**(-a)*np.exp(-b*b*x)
def pdfcn_norm(a, b):
# determine the normalization
return 1.
def nllcn(a, b):
# calculate an return the negative-log likelihood function
return 1.
# mincn = iminuit.Minuit(nllcn, a=?, b=?, error_a=?, error_b=?, errordef=?)
# mincn.migrad()
# mincn.hesse()
# mincn.minos()
# mincn.draw_profile('a')
# mincn.draw_profile('b')
# mincn.draw_contour('a','b')
import emcee
# Define the posterior.
# for clarity the prior and likelihood are separated
# emcee requires log-posterior
def log_prior(theta):
a, b = theta
if b < 0:
return -np.inf # log(0)
else:
return 0.
def log_likelihood(theta, x):
a, b = theta
return np.sum(-a*np.log(x) - b*b*x)
def log_posterior(theta, x):
a , b = theta
# construct and the log of the posterior
return 1.
ndim = 2 # number of parameters in the model
nwalkers = 50 # number of MCMC walkers
nburn = 100 # "burn-in" period to let chains stabilize
nsteps = 1000 # number of MCMC steps to take
# random starting point
np.random.seed(0)
starting_guesses = np.random.random((nwalkers, ndim))
#sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=[x])
#%time sampler.run_mcmc(starting_guesses, nsteps)
#print("done")
#emcee_trace = sampler.chain[:, nburn:, :].reshape(-1, ndim).T
#len(emcee_trace[0])
# plt.hist(emcee_trace[0], 100, range=(?,?) , histtype='stepfilled', color='cyan')
# plt.hist(emcee_trace[1], 100, range=(?,?) , histtype='stepfilled', color='cyan')
# plt.plot(emcee_trace[0],emcee_trace[1],',k')
def compute_sigma_level(trace1, trace2, nbins=20):
"""From a set of traces, bin by number of standard deviations."""
L, xbins, ybins = np.histogram2d(trace1, trace2, nbins)
L[L == 0] = 1E-16
logL = np.log(L)
shape = L.shape
L = L.ravel()
# obtain the indices to sort and unsort the flattened array
i_sort = np.argsort(L)[::-1]
i_unsort = np.argsort(i_sort)
L_cumsum = L[i_sort].cumsum()
L_cumsum /= L_cumsum[-1]
xbins = 0.5 * (xbins[1:] + xbins[:-1])
ybins = 0.5 * (ybins[1:] + ybins[:-1])
return xbins, ybins, L_cumsum[i_unsort].reshape(shape)
#xbins, ybins, sigma = compute_sigma_level(emcee_trace[0], emcee_trace[1])
#plt.contour(xbins, ybins, sigma.T, levels=[0.683, 0.955])
#plt.plot(emcee_trace[0], emcee_trace[1], ',k', alpha=0.1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate a dataset to be fitted
Step2: Maximum likelihood fit of a simple power law
Step3: Then minimize it using iminuit
Step4: Error analysis
Step5: Use of an un-normalised PDF
Step6: Then do the same minimization steps as before.
Step7: Extend the fit model by an exponential cutoff
Step8: As before, use Minuit for minimisation and error analysis, but now in two dimensions. Study parabolic errors and minos errors, the latter both for the single variables and for both together.
Step9: Do the same analysis by MCMC
Step10: emcee requires as input the log-likelihood of the posterior in the parameters a and b. In the following it is composed of the log-of the prior and the log-likelihood of the data. Initially use a simple uniform prior in a and b with the constraint b>0. Afterwards one can play with the prior to see how strongly it affects the result.
Step11: Here we'll set up the computation. emcee combines multiple "walkers", each of which is its own MCMC chain. The number of trace results will be nwalkers * nsteps
Step12: run the MCMC (and time it using IPython's %time magic
Step13: sampler.chain is of shape (nwalkers, nsteps, ndim). Before analysis throw-out the burn-in points and reshape.
Step14: Analyse the results. Plot the projected (marginalized) posteriors for the parameters a and b and also the joint density as sampled by the MCMC.
Step16: As a final step, generate 2-dim Bayesian confidence level contours containing 68.3% and 95.5% probability content. For that, define convenient plot functions and use them. Overlay the contours with the scatter plot.
|
12,392
|
<ASSISTANT_TASK:>
Python Code:
!pip install git+https://github.com/google/starthinker
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
FIELDS = {
'auth':'service', # Credentials used for writing data.
'join':'', # Name of column to join on, must match Census Geo_Id column.
'pass':[], # Comma separated list of columns to pass through.
'sum':[], # Comma separated list of columns to sum, optional.
'correlate':[], # Comma separated list of percentage columns to correlate.
'from_dataset':'', # Existing BigQuery dataset.
'from_table':'', # Table to use as join data.
'significance':'80', # Select level of significance to test.
'to_dataset':'', # Existing BigQuery dataset.
'type':'table', # Write Census_Percent as table or view.
}
print("Parameters Set To: %s" % FIELDS)
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'census':{
'auth':{'field':{'name':'auth','kind':'authentication','order':0,'default':'service','description':'Credentials used for writing data.'}},
'correlate':{
'join':{'field':{'name':'join','kind':'string','order':1,'default':'','description':'Name of column to join on, must match Census Geo_Id column.'}},
'pass':{'field':{'name':'pass','kind':'string_list','order':2,'default':[],'description':'Comma seperated list of columns to pass through.'}},
'sum':{'field':{'name':'sum','kind':'string_list','order':3,'default':[],'description':'Comma seperated list of columns to sum, optional.'}},
'correlate':{'field':{'name':'correlate','kind':'string_list','order':4,'default':[],'description':'Comma seperated list of percentage columns to correlate.'}},
'dataset':{'field':{'name':'from_dataset','kind':'string','order':5,'default':'','description':'Existing BigQuery dataset.'}},
'table':{'field':{'name':'from_table','kind':'string','order':6,'default':'','description':'Table to use as join data.'}},
'significance':{'field':{'name':'significance','kind':'choice','order':7,'default':'80','description':'Select level of significance to test.','choices':['80','90','98','99','99.5','99.95']}}
},
'to':{
'dataset':{'field':{'name':'to_dataset','kind':'string','order':9,'default':'','description':'Existing BigQuery dataset.'}},
'type':{'field':{'name':'type','kind':'choice','order':10,'default':'table','description':'Write Census_Percent as table or view.','choices':['table','view']}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Set Configuration
Step2: 3. Enter Census Data Correlation Recipe Parameters
Step3: 4. Execute Census Data Correlation
|
12,393
|
<ASSISTANT_TASK:>
Python Code:
import warnings
import numpy as np
import openpnm as op
from openpnm.algorithms import MixedInvasionPercolation as mp
import matplotlib.pyplot as plt
np.random.seed(10)
from ipywidgets import interact, IntSlider
warnings.simplefilter("ignore")
%matplotlib inline
ws = op.Workspace()
ws.settings['loglevel'] = 50
#NBVAL_IGNORE_OUTPUT
N = 100
net = op.network.Cubic(shape=[N, N, 1], spacing=2.5e-5)
geom = op.geometry.StickAndBall(network=net, pores=net.Ps, throats=net.Ts)
water = op.phases.Water(network=net)
phys = op.physics.Standard(network=net, phase=water, geometry=geom)
phys['pore.entry_pressure'] = 0.0
fig, ax = plt.subplots(figsize=[5, 5])
ax.hist(phys['throat.entry_pressure'])
plt.show()
def run_mp(trapping=False, residual=None, snap_off=False, plot=True, flowrate=None, phase=None):
alg = mp(network=net)
if snap_off:
alg.settings['snap_off'] = 'throat.snap_off'
alg.setup(phase=phase)
alg.set_inlets(pores=net.pores('left'))
if residual is not None:
alg.set_residual(pores=residual)
alg.run()
if trapping:
alg.set_outlets(net.pores('right'))
alg.apply_trapping()
inv_points = np.arange(0, 100, 1)
# returns data as well as plotting
alg_data = alg.get_intrusion_data(inv_points=inv_points)
water.update(alg.results(Pc=inv_points.max()))
if plot:
fig, ax = plt.subplots(figsize=[5, 5])
L = np.sqrt(net.Np).astype(int)
ax.imshow(alg['pore.invasion_sequence'].reshape([L, L]),
cmap=plt.get_cmap('Blues'))
plt.show()
if flowrate is not None:
alg.apply_flow(flowrate=flowrate)
return alg
#NBVAL_IGNORE_OUTPUT
alg1 = run_mp(phase=water)
#NBVAL_IGNORE_OUTPUT
alg_ip = op.algorithms.InvasionPercolation(network=net)
alg_ip.setup(phase=water)
alg_ip.set_inlets(pores=net.pores('left'))
alg_ip.run()
ip_data = alg_ip.get_intrusion_data()
mip_data = alg1.get_intrusion_data()
fig, ax = plt.subplots(figsize=[5, 5])
ax.plot(ip_data.Pcap, ip_data.S_tot);
ax.plot(mip_data.Pcap, mip_data.S_tot);
#NBVAL_IGNORE_OUTPUT
alg2 = run_mp(phase=water, trapping=True)
#NBVAL_IGNORE_OUTPUT
fig, ax = plt.subplots(figsize=[5, 5])
fig = alg1.plot_intrusion_curve(fig)
fig = alg2.plot_intrusion_curve(fig)
#NBVAL_IGNORE_OUTPUT
N = 10
net = op.network.Cubic(shape=[N, N, 1], spacing=2.5e-5)
geom = op.geometry.StickAndBall(network=net, pores=net.Ps, throats=net.Ts)
water = op.phases.Water(network=net)
phys = op.physics.Standard(network=net, phase=water, geometry=geom)
phys.add_model(propname='pore.entry_pressure',
model=op.models.physics.capillary_pressure.washburn,
diameter='pore.diameter')
fig, ax = plt.subplots(figsize=[5, 5])
ax.hist(phys['throat.entry_pressure'])
ax.hist(phys['pore.entry_pressure'])
plt.show()
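The Washburn model used for pore.entry_pressure follows P_c = -4·σ·cosθ/D; a stand-alone numeric sketch with illustrative values (the σ, θ, and D below are assumptions, not taken from the notebook):

```python
import numpy as np

def washburn_pc(sigma, theta_deg, diameter):
    # Washburn equation: Pc = -4 * sigma * cos(theta) / D
    return -4.0 * sigma * np.cos(np.radians(theta_deg)) / diameter

# water-like surface tension 0.072 N/m, 120 deg contact angle, 10 um pore
print(washburn_pc(0.072, 120.0, 1e-5))  # ≈ 14400 Pa, positive for a non-wetting phase
```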
alg1 = run_mp(phase=water, plot=False)
from openpnm.topotools import plot_coordinates, plot_connections
alg1.props()
def plot_invasion_sequence(seq):
pmask = alg1['pore.invasion_sequence'] < seq
tmask = alg1['throat.invasion_sequence'] < seq
fig, ax = plt.subplots(figsize=[5, 5])
fig = plot_connections(network=net, throats=net.Ts[~tmask], c='k', linestyle='dashed', fig=fig)
fig = plot_connections(network=net, throats=net.Ts[tmask], c='b', fig=fig)
fig = plot_coordinates(network=net, pores=net.Ps[pmask], c='b', fig=fig)
title = 'Invaded pores: '+str(np.sum(pmask))+' Invaded throats: '+str(np.sum(tmask))
plt.title(title)
plt.show()
#NBVAL_IGNORE_OUTPUT
interact(plot_invasion_sequence, seq=IntSlider(min=1,
max=alg1['throat.invasion_sequence'].max(),
step=1,
value=1));
#NBVAL_IGNORE_OUTPUT
N = 100
net = op.network.Cubic(shape=[N, N, 1], spacing=2.5e-5)
geom = op.geometry.StickAndBall(network=net, pores=net.Ps, throats=net.Ts)
water = op.phases.Water(network=net)
air = op.phases.Air(network=net)
water['pore.contact_angle'] = 120
air['pore.contact_angle'] = 60
phys_w = op.physics.Standard(network=net, phase=water, geometry=geom)
phys_w.add_model(propname='pore.entry_pressure',
model=op.models.physics.capillary_pressure.washburn,
diameter='pore.diameter')
phys_a = op.physics.Standard(network=net, phase=air, geometry=geom)
phys_a.add_model(propname='pore.entry_pressure',
model=op.models.physics.capillary_pressure.washburn,
diameter='pore.diameter')
phys_w['throat.entry_pressure'] = -1e9
phys_a['throat.entry_pressure'] = -1e9
fig, ax = plt.subplots(figsize=[5, 5])
ax.hist(phys_w['pore.entry_pressure'])
ax.hist(phys_a['pore.entry_pressure'])
geom['pore.volume'][net['pore.surface']] = 0.0
geom['throat.volume'] = 0.0
plt.show()
#NBVAL_IGNORE_OUTPUT
residual = np.zeros([N, N], dtype='bool')
residual[:50, :] = True
alg1 = run_mp(phase=water, plot=True, residual=residual.flatten())
#NBVAL_IGNORE_OUTPUT
res_data = alg1.get_intrusion_data()
fig, ax = plt.subplots(figsize=[5, 5])
ax.plot(res_data.Pcap, res_data.S_tot)
ax.set_xlim(0, 30000)
ax.set_ylim(0, 1.0)
Pc_max = 14000
#NBVAL_IGNORE_OUTPUT
inj = mp(network=net)
inj.setup(phase=water)
inj.set_inlets(pores=net.pores('left'))
#inj.set_residual(pores=phase['pore.occupancy'])
inj.run(max_pressure=Pc_max)
inj.set_outlets(net.pores(['back', 'front', 'right']))
#inj.apply_trapping()
inv_points = np.arange(0, 100, 1)
# returns data as well as plotting
alg_data = inj.get_intrusion_data(inv_points=inv_points)
fig, ax = plt.subplots(figsize=[5, 5])
L = np.sqrt(net.Np).astype(int)
mask = inj['pore.invasion_sequence'] > -1
ax.imshow(mask.reshape([L, L]),
cmap=plt.get_cmap('Blues'))
plt.show()
inj_data = inj.get_intrusion_data()
air['pore.occupancy'] = inj['pore.invasion_sequence'] == -1
withdrawal = mp(network=net)
withdrawal.setup(phase=air)
withdrawal.set_inlets(pores=net.pores(['back', 'front', 'right']))
withdrawal.set_residual(pores=air['pore.occupancy'])
withdrawal.run()
withdrawal.set_outlets(net.pores(['right']))
# inj.apply_trapping()
# inv_points = np.arange(0, 100, 1)
# returns data as well as plotting
wth_data = withdrawal.get_intrusion_data()
#NBVAL_IGNORE_OUTPUT
fig, ax = plt.subplots(figsize=[5, 5])
L = np.sqrt(net.Np).astype(int)
mask = withdrawal['pore.invasion_sequence'] == 0
ax.imshow(mask.reshape([L, L]),
cmap=plt.get_cmap('Blues'));
#NBVAL_IGNORE_OUTPUT
fig, ax = plt.subplots(figsize=[5, 5])
ax.plot(inj_data.Pcap, inj_data.S_tot)
ax.plot(-wth_data.Pcap, 1-wth_data.S_tot)
ax.set_xlim(0, Pc_max)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Mixed Invasion Percolation algorithm therefore requires a physics associated with its invading phase that contains both a pore and throat entry pressure. Initially we can set the pore entry pressure to be zero, in which case the behaviour should be identical to the normal invasion percolation algorithm.
Step2: The intrusion data for Mixed Invasion Percolation is shown as an invasion pressure envelope, as ordinary percolation would but we can still compare the two plots.
Step3: Like invasion percolation, it is possible to apply trapping
Step4: Now we show an example where a characteristic entry pressure is applied to both pores and throats
Step5: We can use the basic plotting tools in OpenPNM to show that pores and throats are invaded individually by incrementing the invasion sequence
Step6: We can simulate drainage and imbibition using the pore entry pressure on two phases. Here we set up a new network and the appropriate phase and physics objects. We use the contact angle in the air phase as 180 - the contact angle in the water phase but these values can be changed (and often are) to represent contact angle hysteresis.
Step7: Normally, an algorithm proceeds from the initial condition that the network is completely occupied with the defending phase, but it is also possible to start with a partially saturated network where a proportion is already invaded with residual saturation.
Step8: First we define an injection algorithm and use the max_pressure argument for the run method to stop the invasion algorithm once all the elements have been invaded with entry pressure lower than this threshold.
Step9: Now we can run the next step using the results from the injection algorithm as residual saturation. Water withdrawal is equivalent to air invasion.
Step10: Firstly we can verify that the initial condition for air invasion is the inverse of the final condition for water invasion
Step11: Now we can plot both saturation curves, remembering to multiply the capillary pressure value by -1 for withdrawal, as it represents pressure in the invading phase whereas capillary pressure is defined classically as Pc_nwp - Pc_wp; and also remembering to invert the phase occupancy for withdrawal to make it consistent with the water volume fraction.
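As a side note on Step 7, the residual-saturation mask used in this notebook is plain NumPy; a minimal standalone sketch (toy 4x4 grid standing in for the 100x100 network, my simplification):

```python
import numpy as np

# Toy 4x4 grid standing in for the 100x100 network above: mark the first
# half of the rows as residually invaded, then flatten to per-pore order.
N = 4
residual = np.zeros([N, N], dtype='bool')
residual[:N // 2, :] = True

flat = residual.flatten()
print(int(flat.sum()), flat.shape)  # -> 8 (16,)
```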
|
12,394
|
<ASSISTANT_TASK:>
Python Code:
# Load regex package
import re
# Create a variable containing a text string
text = 'Chris: 12:34am. Steve: 16:30'
# Find any text that fits the regex
re.findall(r'([0-1]\d:[0-5]\d)\s*(?:AM|PM)?', text)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create some text
Step2: Apply regex
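A hedged aside on the pattern: the sample text uses lowercase 'am', so the optional `(?:AM|PM)?` suffix never matches as written. A case-insensitive variant (the `re.IGNORECASE` flag is my addition, not part of the original cell) makes the suffix actually get consumed; the capturing group still returns only the HH:MM portion:

```python
import re

# Same sample text as the cell above.
text = 'Chris: 12:34am. Steve: 16:30'

# Case-insensitive so the optional AM/PM suffix is matched even in lowercase.
times = re.findall(r'([0-1]\d:[0-5]\d)\s*(?:AM|PM)?', text, flags=re.IGNORECASE)
print(times)  # -> ['12:34', '16:30']
```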
|
12,395
|
<ASSISTANT_TASK:>
Python Code:
%pylab notebook
Sw = 10e3 # [VA]
Vp = 600 # [V]
Vh = 480 # [V] which is also the load voltage
Vl = 120 # [V]
n = Vh/Vl # = Nc/Nse
n
Sio = (1 + n)/1 * Sw
print('''
Sio = {:.1f} kVA
==============
'''.format(Sio/1000))
Ip = Sio/Vp
print('''
Ip = {:.1f} A
===========
'''.format(Ip))
Is = Sio/Vh
print('''
Is = {:.0f} A
============
'''.format(Is))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Description
Step2: (a)
Step3: (c)
Step4: and the maximum secondary current is
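The arithmetic above can be checked in plain Python (same given values as the problem; a sketch of the calculation, not part of the original solution):

```python
# Autotransformer rating advantage: Sio = (1 + n) * Sw with n = Vh / Vl.
Sw = 10e3              # apparent power of the windings [VA]
Vp, Vh, Vl = 600, 480, 120

n = Vh / Vl            # turns ratio Nc/Nse -> 4.0
Sio = (1 + n) * Sw     # input/output apparent power -> 50 kVA
Ip = Sio / Vp          # maximum primary current [A]
Is = Sio / Vh          # maximum secondary current [A]

print(round(Sio), round(Ip, 1), round(Is, 1))  # -> 50000 83.3 104.2
```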
|
12,396
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install tensorflow_io
import os
from IPython import display
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_io as tfio
yamnet_model_handle = 'https://tfhub.dev/google/yamnet/1'
yamnet_model = hub.load(yamnet_model_handle)
testing_wav_file_name = tf.keras.utils.get_file('miaow_16k.wav',
'https://storage.googleapis.com/audioset/miaow_16k.wav',
cache_dir='./',
cache_subdir='test_data')
print(testing_wav_file_name)
# Utility functions for loading audio files and making sure the sample rate is correct.
@tf.function
def load_wav_16k_mono(filename):
"""Load a WAV file, convert it to a float tensor, resample to 16 kHz single-channel audio."""
file_contents = tf.io.read_file(filename)
wav, sample_rate = tf.audio.decode_wav(
file_contents,
desired_channels=1)
wav = tf.squeeze(wav, axis=-1)
sample_rate = tf.cast(sample_rate, dtype=tf.int64)
wav = tfio.audio.resample(wav, rate_in=sample_rate, rate_out=16000)
return wav
testing_wav_data = load_wav_16k_mono(testing_wav_file_name)
_ = plt.plot(testing_wav_data)
# Play the audio file.
display.Audio(testing_wav_data,rate=16000)
class_map_path = yamnet_model.class_map_path().numpy().decode('utf-8')
class_names =list(pd.read_csv(class_map_path)['display_name'])
for name in class_names[:20]:
print(name)
print('...')
scores, embeddings, spectrogram = yamnet_model(testing_wav_data)
class_scores = tf.reduce_mean(scores, axis=0)
top_class = tf.math.argmax(class_scores)
inferred_class = class_names[top_class]
print(f'The main sound is: {inferred_class}')
print(f'The embeddings shape: {embeddings.shape}')
_ = tf.keras.utils.get_file('esc-50.zip',
'https://github.com/karoldvl/ESC-50/archive/master.zip',
cache_dir='./',
cache_subdir='datasets',
extract=True)
esc50_csv = './datasets/ESC-50-master/meta/esc50.csv'
base_data_path = './datasets/ESC-50-master/audio/'
pd_data = pd.read_csv(esc50_csv)
pd_data.head()
my_classes = ['dog', 'cat']
map_class_to_id = {'dog':0, 'cat':1}
filtered_pd = pd_data[pd_data.category.isin(my_classes)]
class_id = filtered_pd['category'].apply(lambda name: map_class_to_id[name])
filtered_pd = filtered_pd.assign(target=class_id)
full_path = filtered_pd['filename'].apply(lambda row: os.path.join(base_data_path, row))
filtered_pd = filtered_pd.assign(filename=full_path)
filtered_pd.head(10)
filenames = filtered_pd['filename']
targets = filtered_pd['target']
folds = filtered_pd['fold']
main_ds = tf.data.Dataset.from_tensor_slices((filenames, targets, folds))
main_ds.element_spec
def load_wav_for_map(filename, label, fold):
return load_wav_16k_mono(filename), label, fold
main_ds = main_ds.map(load_wav_for_map)
main_ds.element_spec
# applies the embedding extraction model to a wav data
def extract_embedding(wav_data, label, fold):
''' run YAMNet to extract embedding from the wav data '''
scores, embeddings, spectrogram = yamnet_model(wav_data)
num_embeddings = tf.shape(embeddings)[0]
return (embeddings,
tf.repeat(label, num_embeddings),
tf.repeat(fold, num_embeddings))
# extract embedding
main_ds = main_ds.map(extract_embedding).unbatch()
main_ds.element_spec
cached_ds = main_ds.cache()
train_ds = cached_ds.filter(lambda embedding, label, fold: fold < 4)
val_ds = cached_ds.filter(lambda embedding, label, fold: fold == 4)
test_ds = cached_ds.filter(lambda embedding, label, fold: fold == 5)
# remove the folds column now that it's not needed anymore
remove_fold_column = lambda embedding, label, fold: (embedding, label)
train_ds = train_ds.map(remove_fold_column)
val_ds = val_ds.map(remove_fold_column)
test_ds = test_ds.map(remove_fold_column)
train_ds = train_ds.cache().shuffle(1000).batch(32).prefetch(tf.data.AUTOTUNE)
val_ds = val_ds.cache().batch(32).prefetch(tf.data.AUTOTUNE)
test_ds = test_ds.cache().batch(32).prefetch(tf.data.AUTOTUNE)
my_model = tf.keras.Sequential([
tf.keras.layers.Input(shape=(1024,), dtype=tf.float32,
name='input_embedding'),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(len(my_classes))
], name='my_model')
my_model.summary()
my_model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer="adam",
metrics=['accuracy'])
callback = tf.keras.callbacks.EarlyStopping(monitor='loss',
patience=3,
restore_best_weights=True)
history = my_model.fit(train_ds,
epochs=20,
validation_data=val_ds,
callbacks=callback)
loss, accuracy = my_model.evaluate(test_ds)
print("Loss: ", loss)
print("Accuracy: ", accuracy)
scores, embeddings, spectrogram = yamnet_model(testing_wav_data)
result = my_model(embeddings).numpy()
inferred_class = my_classes[result.mean(axis=0).argmax()]
print(f'The main sound is: {inferred_class}')
class ReduceMeanLayer(tf.keras.layers.Layer):
def __init__(self, axis=0, **kwargs):
super(ReduceMeanLayer, self).__init__(**kwargs)
self.axis = axis
def call(self, input):
return tf.math.reduce_mean(input, axis=self.axis)
saved_model_path = './dogs_and_cats_yamnet'
input_segment = tf.keras.layers.Input(shape=(), dtype=tf.float32, name='audio')
embedding_extraction_layer = hub.KerasLayer(yamnet_model_handle,
trainable=False, name='yamnet')
_, embeddings_output, _ = embedding_extraction_layer(input_segment)
serving_outputs = my_model(embeddings_output)
serving_outputs = ReduceMeanLayer(axis=0, name='classifier')(serving_outputs)
serving_model = tf.keras.Model(input_segment, serving_outputs)
serving_model.save(saved_model_path, include_optimizer=False)
tf.keras.utils.plot_model(serving_model)
reloaded_model = tf.saved_model.load(saved_model_path)
reloaded_results = reloaded_model(testing_wav_data)
cat_or_dog = my_classes[tf.math.argmax(reloaded_results)]
print(f'The main sound is: {cat_or_dog}')
serving_results = reloaded_model.signatures['serving_default'](testing_wav_data)
cat_or_dog = my_classes[tf.math.argmax(serving_results['classifier'])]
print(f'The main sound is: {cat_or_dog}')
test_pd = filtered_pd.loc[filtered_pd['fold'] == 5]
row = test_pd.sample(1)
filename = row['filename'].item()
print(filename)
waveform = load_wav_16k_mono(filename)
print(f'Waveform values: {waveform}')
_ = plt.plot(waveform)
display.Audio(waveform, rate=16000)
# Run the model, check the output.
scores, embeddings, spectrogram = yamnet_model(waveform)
class_scores = tf.reduce_mean(scores, axis=0)
top_class = tf.math.argmax(class_scores)
inferred_class = class_names[top_class]
top_score = class_scores[top_class]
print(f'[YAMNet] The main sound is: {inferred_class} ({top_score})')
reloaded_results = reloaded_model(waveform)
your_top_class = tf.math.argmax(reloaded_results)
your_inferred_class = my_classes[your_top_class]
class_probabilities = tf.nn.softmax(reloaded_results, axis=-1)
your_top_score = class_probabilities[your_top_class]
print(f'[Your model] The main sound is: {your_inferred_class} ({your_top_score})')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <table class="tfo-notebook-buttons" align="left">
Step2: About YAMNet
Step3: With the model loaded, you can follow the YAMNet basic usage tutorial and download a sample WAV file to run the inference.
Step5: You will need a function to load audio files, which will also be used later when working with the training data. (Learn more about reading audio files and their labels in Simple audio recognition.)
Step6: Load the class mapping
Step7: Run inference
Step8: Note
Step9: Explore the data
Step10: Filter the data
Step11: Load the audio files and retrieve embeddings
Step12: Split the data
Step13: Create your model
Step14: Let's run the evaluate method on the test data just to be sure there's no overfitting.
Step15: You did it!
Step16: Save a model that can directly take a WAV file as input
Step17: Load your saved model to verify that it works as expected.
Step18: And for the final test
Step19: If you want to try your new model on a serving setup, you can use the 'serving_default' signature.
Step20: (Optional) Some more testing
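One detail from the final testing cell worth isolating: `tf.nn.softmax` converts the model's logits into class probabilities. A NumPy-only sketch of that conversion (illustrative logits, not real model output):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating.
    z = np.asarray(logits, dtype=float)
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical 2-class logits standing in for the model's output.
probs = softmax([2.0, 0.0])
print(probs.argmax())  # -> 0 (the class with the larger logit)
```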
|
12,397
|
<ASSISTANT_TASK:>
Python Code:
%load_ext watermark
%watermark -u -v -d -p matplotlib,numpy
%matplotlib inline
import numpy as np
import random
from matplotlib import pyplot as plt
data = np.random.normal(0, 20, 1000)
# fixed bin size
bins = np.arange(-100, 100, 5) # fixed bin size
plt.xlim([min(data)-5, max(data)+5])
plt.hist(data, bins=bins, alpha=0.5)
plt.title('Random Gaussian data (fixed bin size)')
plt.xlabel('variable X (bin size = 5)')
plt.ylabel('count')
plt.show()
import numpy as np
import random
import math
from matplotlib import pyplot as plt
data = np.random.normal(0, 20, 1000)
bins = np.linspace(math.ceil(min(data)),
math.floor(max(data)),
20) # fixed number of bins
plt.xlim([min(data)-5, max(data)+5])
plt.hist(data, bins=bins, alpha=0.5)
plt.title('Random Gaussian data (fixed number of bins)')
plt.xlabel('variable X (20 evenly spaced bins)')
plt.ylabel('count')
plt.show()
import numpy as np
import random
from matplotlib import pyplot as plt
data1 = [random.gauss(15,10) for i in range(500)]
data2 = [random.gauss(5,5) for i in range(500)]
bins = np.arange(-60, 60, 2.5)
plt.xlim([min(data1+data2)-5, max(data1+data2)+5])
plt.hist(data1, bins=bins, alpha=0.3, label='class 1')
plt.hist(data2, bins=bins, alpha=0.3, label='class 2')
plt.title('Random Gaussian data')
plt.xlabel('variable X')
plt.ylabel('count')
plt.legend(loc='upper right')
plt.show()
smooth = interp1d(bins, y, kind='cubic')
smooth
import numpy as np
import random
import math
from matplotlib import pyplot as plt
from scipy.stats import norm
from scipy.interpolate import interp1d
data = np.random.normal(0, 20, 10000)
# plotting the histogram (density=True replaces the deprecated normed argument)
n, bins, patches = plt.hist(data, bins=20, density=True, alpha=0.5, color='lightblue')
# fitting the data
mu, sigma = norm.fit(data)
# adding the fitted line (scipy.stats.norm.pdf replaces the removed mlab.normpdf)
y = norm.pdf(bins, mu, sigma)
interp = interp1d(bins, y, kind='cubic')
# evaluate the interpolant on a fine x-grid (interp(y) was a bug: it evaluated at y-values)
x_smooth = np.linspace(bins.min(), bins.max(), 300)
plt.plot(x_smooth, interp(x_smooth), linewidth=2, color='blue')
plt.xlim([min(data)-5, max(data)+5])
plt.title('Random Gaussian data (fixed number of bins)')
plt.xlabel('variable X (20 evenly spaced bins)')
plt.ylabel('count')
plt.show()
# Generate a random Gaussian dataset with different means
# 5 rows with 30 columns, where every row represents 1 sample.
import numpy as np
data = np.ones((5,30))
for i in range(5):
data[i,:] = np.random.normal(loc=i/2, scale=1.0, size=30)
from math import floor, ceil # for rounding up and down
data_min = floor(data.min()) # minimum val. of the dataset rounded down
data_max = ceil(data.max())  # maximum val. of the dataset rounded up
bin_size = 0.5
bins = np.arange(data_min, data_max, bin_size)
np.histogram(data[0,:], bins=bins)
from matplotlib import pyplot as plt
markers = ['^', 'v', 'o', 'p', 'x', 's', 'p', ',']
plt.figure(figsize=(13,8))
for row in range(data.shape[0]):
hist = np.histogram(data[row,:], bins=bins)
plt.errorbar(hist[1][:-1] + bin_size/2,
hist[0],
alpha=0.3,
xerr=bin_size/2,
capsize=0,
fmt='none',  # modern matplotlib expects the string 'none', not None
linewidth=8,
)
plt.legend(['sample %s'%i for i in range(1, 6)])
plt.grid()
plt.title('Histogram showing bar heights but without area under the bars', fontsize=18)
plt.ylabel('count', fontsize=14)
plt.xlabel('X value (bin size = %s)'%bin_size, fontsize=14)
plt.xticks(bins + bin_size)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <font size="1.5em">More info about the %watermark extension</font>
Step2: <br>
Step3: <br>
Step4: <br>
Step5: <br>
Step6: Via the numpy.histogram function, we can categorize our data into distinct bins.
Step7: The numpy.histogram function returns a tuple, where the first value is an array of how many samples fall into the first bin, the second bin, and so forth.
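A quick standalone check of the fixed-bin-size idea from Step 6, using `np.histogram` on toy data (values chosen purely for illustration):

```python
import numpy as np

bin_size = 0.5
data = np.array([0.1, 0.2, 0.6, 1.4, 1.6])   # toy values for illustration
bins = np.arange(0, 2 + bin_size, bin_size)  # edges: 0, 0.5, 1.0, 1.5, 2.0

# counts[i] is the number of samples falling into [edges[i], edges[i+1])
counts, edges = np.histogram(data, bins=bins)
print(counts.tolist())  # -> [2, 1, 1, 1]
```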
|
12,398
|
<ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_filt-0-40_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
print(raw)
print(raw.info)
raw.plot_psd(fmax=50)
raw.plot(duration=5, n_channels=30)
# set up and fit the ICA
ica = mne.preprocessing.ICA(n_components=20, random_state=97, max_iter=800)
ica.fit(raw)
ica.exclude = [1, 2] # details on how we picked these are omitted here
ica.plot_properties(raw, picks=ica.exclude)
orig_raw = raw.copy()
raw.load_data()
ica.apply(raw)
# show some frontal channels to clearly illustrate the artifact removal
chs = ['MEG 0111', 'MEG 0121', 'MEG 0131', 'MEG 0211', 'MEG 0221', 'MEG 0231',
'MEG 0311', 'MEG 0321', 'MEG 0331', 'MEG 1511', 'MEG 1521', 'MEG 1531',
'EEG 001', 'EEG 002', 'EEG 003', 'EEG 004', 'EEG 005', 'EEG 006',
'EEG 007', 'EEG 008']
chan_idxs = [raw.ch_names.index(ch) for ch in chs]
orig_raw.plot(order=chan_idxs, start=12, duration=4)
raw.plot(order=chan_idxs, start=12, duration=4)
events = mne.find_events(raw, stim_channel='STI 014')
print(events[:5]) # show the first 5
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'buttonpress': 32}
fig = mne.viz.plot_events(events, event_id=event_dict, sfreq=raw.info['sfreq'])
fig.subplots_adjust(right=0.7) # make room for the legend
reject_criteria = dict(mag=4000e-15, # 4000 fT
grad=4000e-13, # 4000 fT/cm
eeg=150e-6, # 150 μV
eog=250e-6) # 250 μV
epochs = mne.Epochs(raw, events, event_id=event_dict, tmin=-0.2, tmax=0.5,
reject=reject_criteria, preload=True)
conds_we_care_about = ['auditory/left', 'auditory/right',
'visual/left', 'visual/right']
epochs.equalize_event_counts(conds_we_care_about) # this operates in-place
aud_epochs = epochs['auditory']
vis_epochs = epochs['visual']
del raw, epochs # free up memory
aud_epochs.plot_image(picks=['MEG 1332', 'EEG 021'])
frequencies = np.arange(7, 30, 3)
power = mne.time_frequency.tfr_morlet(aud_epochs, n_cycles=2, return_itc=False,
freqs=frequencies, decim=3)
power.plot(['MEG 1332'])
aud_evoked = aud_epochs.average()
vis_evoked = vis_epochs.average()
mne.viz.plot_compare_evokeds(dict(auditory=aud_evoked, visual=vis_evoked),
legend='upper left', show_sensors='upper right')
aud_evoked.plot_joint(picks='eeg')
aud_evoked.plot_topomap(times=[0., 0.08, 0.1, 0.12, 0.2], ch_type='eeg')
evoked_diff = mne.combine_evoked([aud_evoked, -vis_evoked], weights='equal')
evoked_diff.pick_types('mag').plot_topo(color='r', legend=False)
# load inverse operator
inverse_operator_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis-meg-oct-6-meg-inv.fif')
inv_operator = mne.minimum_norm.read_inverse_operator(inverse_operator_file)
# set signal-to-noise ratio (SNR) to compute regularization parameter (λ²)
snr = 3.
lambda2 = 1. / snr ** 2
# generate the source time course (STC)
stc = mne.minimum_norm.apply_inverse(vis_evoked, inv_operator,
lambda2=lambda2,
method='MNE') # or dSPM, sLORETA, eLORETA
# path to subjects' MRI files
subjects_dir = os.path.join(sample_data_folder, 'subjects')
# plot
stc.plot(initial_time=0.1, hemi='split', views=['lat', 'med'],
subjects_dir=subjects_dir)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading data
Step2: By default,
Step3:
Step4: Preprocessing
Step5: Once we're confident about which component(s) we want to remove, we pass them
Step6: Detecting experimental events
Step7: The resulting events array is an ordinary 3-column
Step8: Event dictionaries like this one are used when extracting epochs from
Step9: For paradigms that are not event-related (e.g., analysis of resting-state
Step10: We'll also pass the event dictionary as the event_id parameter (so we can
Step11: Next we'll pool across left/right stimulus presentations so we can compare
Step12: Like
Step13: <div class="alert alert-info"><h4>Note</h4><p>Both
Step14: Estimating evoked responses
Step15: We can also get a more detailed view of each
Step16: Evoked objects can also be combined to show contrasts between conditions,
Step17: Inverse modeling
Step18: Finally, in order to plot the source estimate on the subject's cortical
|
12,399
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import bet.sensitivity.gradients as grad
import bet.sensitivity.chooseQoIs as cqoi
import bet.calculateP.simpleFunP as simpleFunP
import bet.calculateP.calculateP as calculateP
import bet.postProcess.postTools as postTools
import bet.Comm as comm
import bet.sample as sample
def initialize_problem(input_dim, output_dim, num_samples=1E5, num_centers=10):
# Let the map Q be a random matrix of size (output_dim, input_dim)
# np.random.seed(0)
Q = np.random.random([output_dim, input_dim])
# Initialize some sample objects we will need
input_samples = sample.sample_set(input_dim)
output_samples = sample.sample_set(output_dim)
# Choose random samples in parameter space to solve the model
domain_min, domain_max = 0, 1
input_samples.set_values(np.random.uniform(domain_min, domain_max,
[int(num_samples), input_dim]))
input_samples.set_domain(np.array([[domain_min, domain_max]
for _ in range(input_dim)]))
# Make the MC assumption and compute the volumes of each voronoi cell
input_samples.estimate_volume_mc()
# Compute the output values with the map Q
output_samples.set_values(Q.dot(input_samples.get_values().transpose()).\
transpose())
# Calculate the gradient vectors at some subset of the samples. Here the
# *normalize* argument is set to *True* because we are using bin_ratio to
# determine the uncertainty in our data.
cluster_discretization = sample.discretization(input_samples, output_samples)
# We will approximate the jacobian at each of the centers
center_discretization = grad.calculate_gradients_rbf(cluster_discretization,
num_centers, normalize=True)
return input_samples, output_samples, center_discretization, Q
def solve_problem(my_discretization, Q_ref, QoI_indices, percentile = 1.0, measure=True):
input_samples = my_discretization.get_input_sample_set()
output_samples = my_discretization.get_output_sample_set()
# Choose some QoI indices to solve the inverse problem with
output_samples._dim = len(QoI_indices)
output_samples.set_values(output_samples.get_values()[:, QoI_indices])
# bin_ratio defines the uncertainty in our data
# Define the level of uncertainty in the measured reference datum
uncertainty = rect_scale = bin_ratio = 0.25
# Make the MC assumption and compute the volumes of each voronoi cell
input_samples.estimate_volume_mc()
# Find the simple function approximation
if measure:
simpleFunP.regular_partition_uniform_distribution_rectangle_size(
data_set=my_discretization, Q_ref=Q_ref, rect_size=uncertainty,
cells_per_dimension=1)
else:
simpleFunP.regular_partition_uniform_distribution_rectangle_scaled(
data_set=my_discretization, Q_ref=Q_ref, rect_scale=uncertainty,
cells_per_dimension=1)
# Calculate probabilities making the Monte Carlo assumption
calculateP.prob(my_discretization)
# Sort samples by highest probability density and find how many samples lie in
# the support of the inverse solution. With the Monte Carlo assumption, this
# also tells us the approximate volume of this support.
(num_samples, _, indices_in_inverse) =\
postTools.sample_highest_prob(top_percentile=percentile,
sample_set=input_samples, sort=True)
# Print the approximate percentage of the measure of the parameter space defined
# by the support of the inverse density
if comm.rank == 0:
print('The approximate percentage of the measure of the parameter space defined')
print('by the support of the inverse density associated with the choice of QoI map is')
print('%2.4f%% with '%(100*np.sum(input_samples.get_volumes()[indices_in_inverse])),
num_samples, ' samples.')
return num_samples, indices_in_inverse
############ MAKE SELECTION ############
independent_error = True # is the uncertainty in the data independent of the range of the data?
measure = True # if True, optimize w/r/t the size of the inverse set (expected scaling effect)
########################################
# Set up the info for the spaces
num_samples = 1E5
num_centers = 10
# feel free to change the following, but ideally, keep input_dim <= output_dim
input_dim = 5
output_dim = 10
np.random.seed(0) # (optional) set seed for repeatable results.
input_samples, output_samples, center_discretization, Q = \
initialize_problem(input_dim, output_dim, num_samples, num_centers)
input_samples_center = center_discretization.get_input_sample_set()
num_best_sets = 3 # what is the worst-ranked option you want to investigate?
if output_dim > 50: # optional tolerances for large problems (output space dimension)
best_sets = cqoi.chooseOptQoIs_large(input_samples_center, measure=measure,
max_qois_return=5, num_optsets_return=num_best_sets,
inner_prod_tol=0.9, measskew_tol=1E2)
else:
best_sets = cqoi.chooseOptQoIs_large(input_samples_center, measure=measure,
num_optsets_return=num_best_sets)
for i in range(num_best_sets):
print(best_sets[i], '\n')
############ MAKE SELECTION ############
num_qoi = 2 # select the number of quantities of interest
ranking_selection = 1 # select your choice (1st, 2nd, 3rd) best (start at 1)
########################################
QoI_indices = best_sets[num_qoi-2][ranking_selection-1, 1:].astype(int) # Chooses the optimal set of 2 QoI
print("Your QoI sub-indices selection: ", QoI_indices)
# Create discretization object and solve problem
my_discretization = sample.discretization(input_sample_set=input_samples,
output_sample_set=output_samples)
# Define the reference point in the output space to correspond to the center of the input space.
param_ref = 0.5 * np.ones(input_dim)
Q_ref = Q[QoI_indices, :].dot(param_ref)
num_samples, indices_in_inverse = solve_problem(my_discretization, Q_ref, QoI_indices,
measure=measure, percentile=1.0)
%store my_discretization
%store param_ref
%store Q_ref
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define Methods
Step2: Suggested Changes
Step3: With these gradient vectors, we are now ready to choose an optimal set of QoIs to use in the inverse problem, based on minimizing the support of the inverse solution (measure).
Step4: The number in the first column represents the expected volume of the inverse image of a unit hypercube in the data space if measure=True, and it is the expected skewness if measure=False.
|