| Unnamed: 0 (int64, 0 to 15.9k) | cleaned_code (string, lengths 67 to 124k, nullable) | cleaned_prompt (string, lengths 168 to 30.3k, nullable) |
|---|---|---|
12,800
|
<ASSISTANT_TASK:>
Python Code:
from bokeh.palettes import YlOrRd as palette #Spectral6 as palette
from bokeh.plotting import figure, save
from bokeh.models import ColumnDataSource, HoverTool, LogColorMapper
from bokeh.palettes import RdYlGn10 as palette
import geopandas as gpd
import pysal as ps
import numpy as np
# Filepaths
fp = r"C:\HY-Data\HENTENKA\KOODIT\Opetus\Automating-GIS-processes\AutoGIS-Sphinx\data\dataE5\TravelTimes_to_5975375_RailwayStation.shp"
roads_fp = r"C:\HY-Data\HENTENKA\KOODIT\Opetus\Automating-GIS-processes\AutoGIS-Sphinx\data\dataE5\roads.shp"
metro_fp = r"C:\HY-Data\HENTENKA\KOODIT\Opetus\Automating-GIS-processes\AutoGIS-Sphinx\data\dataE5\metro.shp"
# Read the data with Geopandas
data = gpd.read_file(fp)
roads = gpd.read_file(roads_fp)
metro = gpd.read_file(metro_fp)
# Ensure that the CRS is the same in all layers
data['geometry'] = data['geometry'].to_crs(epsg=3067)
roads['geometry'] = roads['geometry'].to_crs(epsg=3067)
metro['geometry'] = metro['geometry'].to_crs(epsg=3067)
def getXYCoords(geometry, coord_type):
"""Returns either x or y coordinates from geometry coordinate sequence. Used with LineString and Polygon geometries."""
if coord_type == 'x':
return geometry.coords.xy[0]
elif coord_type == 'y':
return geometry.coords.xy[1]
def getPolyCoords(geometry, coord_type):
"""Returns Coordinates of Polygon using the Exterior of the Polygon."""
ext = geometry.exterior
return getXYCoords(ext, coord_type)
def getLineCoords(geometry, coord_type):
"""Returns Coordinates of Linestring object."""
return getXYCoords(geometry, coord_type)
def getPointCoords(geometry, coord_type):
"""Returns Coordinates of Point object."""
if coord_type == 'x':
return geometry.x
elif coord_type == 'y':
return geometry.y
def multiGeomHandler(multi_geometry, coord_type, geom_type):
"""Function for handling multi-geometries. Can be MultiPoint, MultiLineString or MultiPolygon.
Returns a list of coordinates where all parts of Multi-geometries are merged into a single list.
Individual geometries are separated with np.nan, which is how Bokeh wants them."""
# Bokeh documentation regarding the Multi-geometry issues can be found here (it is an open issue)
# https://github.com/bokeh/bokeh/issues/2321
for i, part in enumerate(multi_geometry):
# On the first part of the Multi-geometry initialize the coord_array (np.array)
if i == 0:
if geom_type == "MultiPoint":
coord_arrays = np.append(getPointCoords(part, coord_type), np.nan)
elif geom_type == "MultiLineString":
coord_arrays = np.append(getLineCoords(part, coord_type), np.nan)
elif geom_type == "MultiPolygon":
coord_arrays = np.append(getPolyCoords(part, coord_type), np.nan)
else:
if geom_type == "MultiPoint":
coord_arrays = np.concatenate([coord_arrays, np.append(getPointCoords(part, coord_type), np.nan)])
elif geom_type == "MultiLineString":
coord_arrays = np.concatenate([coord_arrays, np.append(getLineCoords(part, coord_type), np.nan)])
elif geom_type == "MultiPolygon":
coord_arrays = np.concatenate([coord_arrays, np.append(getPolyCoords(part, coord_type), np.nan)])
# Return the coordinates
return coord_arrays
def getCoords(row, geom_col, coord_type):
"""Returns coordinates ('x' or 'y') of a geometry (Point, LineString or Polygon) as a list (if the geometry is a LineString or Polygon).
Can also handle MultiGeometries."""
# Get geometry
geom = row[geom_col]
# Check the geometry type
gtype = geom.geom_type
# "Normal" geometries
# -------------------
if gtype == "Point":
return getPointCoords(geom, coord_type)
elif gtype == "LineString":
return list( getLineCoords(geom, coord_type) )
elif gtype == "Polygon":
return list( getPolyCoords(geom, coord_type) )
# Multi geometries
# ----------------
else:
return list( multiGeomHandler(geom, coord_type, gtype) )
# Calculate the x and y coordinates of the grid
data['x'] = data.apply(getCoords, geom_col="geometry", coord_type="x", axis=1)
data['y'] = data.apply(getCoords, geom_col="geometry", coord_type="y", axis=1)
# Calculate the x and y coordinates of the roads
roads['x'] = roads.apply(getCoords, geom_col="geometry", coord_type="x", axis=1)
roads['y'] = roads.apply(getCoords, geom_col="geometry", coord_type="y", axis=1)
# Calculate the x and y coordinates of metro
metro['x'] = metro.apply(getCoords, geom_col="geometry", coord_type="x", axis=1)
metro['y'] = metro.apply(getCoords, geom_col="geometry", coord_type="y", axis=1)
# Replace No Data values (-1) with large number (999)
data = data.replace(-1, 999)
# Classify our travel times into 5 minute classes until 200 minutes
# Create a list of values where the minimum value is 5, the maximum value is 200 and the step is 5.
breaks = [x for x in range(5, 200, 5)]
classifier = ps.User_Defined.make(bins=breaks)
pt_classif = data[['pt_r_tt']].apply(classifier)
car_classif = data[['car_r_t']].apply(classifier)
# Rename columns
pt_classif.columns = ['pt_r_tt_ud']
car_classif.columns = ['car_r_t_ud']
# Join back to main data
data = data.join(pt_classif)
data = data.join(car_classif)
# Create names for the legend (until 60 minutes)
upper_limit = 60
step = 5
# This will produce: ["0-5", "5-10", "10-15", ... , "60 <"]
names = ["%s-%s " % (x-5, x) for x in range(step, upper_limit, step)]
# Add legend label for over 60
names.append("%s <" % upper_limit)
# Assign legend names for the classes
data['label_pt'] = None
data['label_car'] = None
for i in range(len(names)):
# Update rows where class is i
data.loc[data['pt_r_tt_ud'] == i, 'label_pt'] = names[i]
data.loc[data['car_r_t_ud'] == i, 'label_car'] = names[i]
# Update all cells that didn't get any value with "60 <"
data['label_pt'] = data['label_pt'].fillna("%s <" % upper_limit)
data['label_car'] = data['label_car'].fillna("%s <" % upper_limit)
# Select only the necessary columns for plotting to keep the amount of data to a minimum
df = data[['x', 'y', 'pt_r_tt_ud', 'pt_r_tt', 'car_r_t', 'from_id', 'label_pt']]
dfsource = ColumnDataSource(data=df)
# Exclude geometry from roads as well
rdf = roads[['x', 'y']]
rdfsource = ColumnDataSource(data=rdf)
# Exclude geometry from metro as well
mdf = metro[['x','y']]
mdfsource = ColumnDataSource(data=mdf)
TOOLS = "pan,wheel_zoom,box_zoom,reset,save"
# Flip the colors in color palette
palette.reverse()
color_mapper = LogColorMapper(palette=palette)
p = figure(title="Travel times to Helsinki city center by public transportation", tools=TOOLS,
plot_width=650, plot_height=500, active_scroll = "wheel_zoom" )
# Do not draw grid lines
p.grid.grid_line_color = None
# Add polygon grid and a legend for it
grid = p.patches('x', 'y', source=dfsource, name="grid",
fill_color={'field': 'pt_r_tt_ud', 'transform': color_mapper},
fill_alpha=1.0, line_color="black", line_width=0.03, legend="label_pt")
# Add roads
r = p.multi_line('x', 'y', source=rdfsource, color="grey")
# Add metro
m = p.multi_line('x', 'y', source=mdfsource, color="red")
# Modify legend location
p.legend.location = "top_right"
p.legend.orientation = "vertical"
# Insert a circle on top of the Central Railway Station (coords in EurefFIN-TM35FIN)
station_x = 385752.214
station_y = 6672143.803
circle = p.circle(x=[station_x], y=[station_y], name="point", size=6, color="yellow")
# Add two separate hover tools for the data
phover = HoverTool(renderers=[circle])
phover.tooltips=[("Destination", "Railway Station")]
ghover = HoverTool(renderers=[grid])
ghover.tooltips=[("YKR-ID", "@from_id"),
("PT time", "@pt_r_tt"),
("Car time", "@car_r_t"),
]
p.add_tools(ghover)
p.add_tools(phover)
# Output filepath to HTML
output_file = r"C:\HY-Data\HENTENKA\KOODIT\Opetus\Automating-GIS-processes\AutoGIS-Sphinx\data\accessibility_map_3.html"
# Save the map
save(p, output_file);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step7: Next, let's create a set of functions for getting the x and y coordinates of the geometries. Shapefiles etc. often contain Multi-geometries (MultiLineStrings etc.), so we need to handle those as well, which makes things slightly more complicated. It is always good practice to split your functions into small pieces, which is what we have done here
Step8: Now we can apply our functions and calculate the x and y coordinates of any kind of geometry by using the same function, i.e. getCoords().
Step9: Next, we need to classify the travel time values into 5 minute intervals using Pysal's user defined classifier. We also create legend labels for the classes.
Step10: Finally, we can visualize our layers with Bokeh, add a legend for travel times and add HoverTools for Destination Point and the grid values (travel times)
|
12,801
|
<ASSISTANT_TASK:>
Python Code:
# Import pandas and numpy
import pandas as pd
import numpy as np
# Import the classifiers we will be using
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
# Import train/test split function
from sklearn.model_selection import train_test_split
# Import cross validation scorer
from sklearn.model_selection import cross_val_score
# Import ROC AUC scoring function
from sklearn.metrics import roc_auc_score
# Read in our dataset, using the parameter 'index_col' to select the index
df = pd.read_csv('../data/breast_cancer.csv', index_col='id')
df.head()
df.shape
# Remove the target from the features
features = df.drop(['diagnosis'], axis=1)
# Select the target
target = df['diagnosis']
X_train, X_test, y_train, y_test = train_test_split(features, target, test_size=0.3, random_state=0)
# Choose the model
tree_model = DecisionTreeClassifier(random_state=0)
# Fit the model
tree_model.fit(X_train, y_train)
# Make the predictions
y_pred = tree_model.predict_proba(X_test)
# Score the predictions
score = roc_auc_score(y_test, y_pred[:,1])
print("ROC AUC: " + str(score))
print("Number of mislabeled points out of a total %d points: %d" % (y_test.shape[0],(y_test != np.round_(y_pred[:,1])).sum()))
# Choose the classifier
tree_model = DecisionTreeClassifier(random_state=0)
# Fit, predict and score in one step!
# The arguments, in order:
#1. Model
#2. Features
#3. Target
#4. Number of k-folds
#5. Scoring function
#6. Number of CPU cores to use
score_tree_model = cross_val_score(tree_model, features, target, cv=5, scoring='roc_auc', n_jobs=-1)
print("ROC AUC scores: " + str(score_tree_model))
print("Average ROC AUC: " + str(score_tree_model.mean()))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read the data
Step2: Train/test split
Step3: Modelling with standard train/test split
Step4: Modelling with k-fold cross validation
|
12,802
|
<ASSISTANT_TASK:>
Python Code:
# Licensed under the Apache License, Version 2.0 (the "License")
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install -q --upgrade jaxlib==0.1.71+cuda111 -f https://storage.googleapis.com/jax-releases/jax_releases.html
!pip install -q --upgrade jax==0.2.21
!pip install -q git+https://github.com/google/trax.git
!pip install -q pickle5
!pip install -q gin
# Execute this for a proper TPU setup!
# Make sure the Colab Runtime is set to Accelerator: TPU.
import jax
import requests
import os
if 'TPU_DRIVER_MODE' not in globals():
url = 'http://' + os.environ['COLAB_TPU_ADDR'].split(':')[0] + ':8475/requestversion/tpu_driver0.1-dev20200416'
resp = requests.post(url)
TPU_DRIVER_MODE = 1
# The following is required to use TPU Driver as JAX's backend.
from jax.config import config
config.FLAGS.jax_xla_backend = "tpu_driver"
config.FLAGS.jax_backend_target = "grpc://" + os.environ['COLAB_TPU_ADDR']
print(config.FLAGS.jax_backend_target)
jax.devices()
# Download ImageNet32 data (the url in tfds is down)
!gdown https://drive.google.com/uc?id=1OV4lBnuIcbqeuoiK83jWtlnQ9Afl6Tsr
!tar -zxf /content/im32.tar.gz
# tfds hack for imagenet32
import json
json_path = '/content/content/drive/MyDrive/imagenet/downsampled_imagenet/32x32/2.0.0/dataset_info.json'
with open(json_path, mode='r') as f:
ds_info = json.load(f)
if 'moduleName' in ds_info:
del ds_info['moduleName']
with open(json_path, mode='w') as f:
json.dump(ds_info, f)
!mkdir -p /root/tensorflow_datasets/downsampled_imagenet/32x32
!cp -r /content/content/drive/MyDrive/imagenet/downsampled_imagenet/32x32/2.0.0 /root/tensorflow_datasets/downsampled_imagenet/32x32
# Download and set up ImageNet64 (validation only) data
!gdown https://drive.google.com/uc?id=1ZoI3ZKMUXfrIlqPfIBCcegoe0aJHchpo
!tar -zxf im64_valid.tar.gz
!mkdir -p /root/tensorflow_datasets/downsampled_imagenet/64x64/2.0.0
!cp im64_valid/* /root/tensorflow_datasets/downsampled_imagenet/64x64/2.0.0
# Download gin configs
!wget -q https://raw.githubusercontent.com/google/trax/master/trax/supervised/configs/hourglass_imagenet32.gin
!wget -q https://raw.githubusercontent.com/google/trax/master/trax/supervised/configs/hourglass_imagenet64.gin
import gin
import trax
gin.parse_config_file('hourglass_imagenet32.gin')
model = trax.models.HourglassLM(mode='eval')
model.init_from_file(
'gs://trax-ml/hourglass/imagenet32/model_470000.pkl.gz',
weights_only=True,
)
loss_fn = trax.layers.WeightedCategoryCrossEntropy()
model_eval = trax.layers.Accelerate(trax.layers.Serial(
model,
loss_fn
))
import gin
import trax
# Here is the hacky part to remove shuffling of the dataset
def get_eval_dataset():
dataset_name = gin.query_parameter('data_streams.dataset_name')
data_dir = trax.data.tf_inputs.download_and_prepare(dataset_name, None)
train_data, eval_data, keys = trax.data.tf_inputs._train_and_eval_dataset(
dataset_name, data_dir, eval_holdout_size=0)
bare_preprocess_fn = gin.query_parameter('data_streams.bare_preprocess_fn')
eval_data = bare_preprocess_fn.scoped_configurable_fn(eval_data, training=False)
return trax.fastmath.dataset_as_numpy(eval_data)
from trax import fastmath
from trax.fastmath import numpy as jnp
from tqdm import tqdm
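# Collect (input, mask) pairs into batches of batch_size; batches whose examples have
# unequal lengths, as well as any leftovers at the end, are yielded one example at a time.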
def batched_inputs(data_gen, batch_size):
inp_stack, mask_stack = [], []
for input_example, mask in data_gen:
inp_stack.append(input_example)
mask_stack.append(mask)
if len(inp_stack) % batch_size == 0:
if len(set(len(example) for example in inp_stack)) > 1:
for x, m in zip(inp_stack, mask_stack):
yield x, m
else:
input_batch = jnp.stack(inp_stack)
mask_batch = jnp.stack(mask_stack)
yield input_batch, mask_batch
inp_stack, mask_stack = [], []
if len(inp_stack) > 0:
for inp, mask in zip(inp_stack, mask_stack):
yield inp, mask
def run_full_evaluation(accelerated_model_with_loss, examples_data_gen,
batch_size, pad_to_len=None):
# Important: we assume batch size per device = 1
assert batch_size % fastmath.local_device_count() == 0
assert fastmath.local_device_count() == 1 or \
batch_size == fastmath.local_device_count()
loss_sum, n_tokens = 0.0, 0
def pad_right(inp_tensor):
if pad_to_len:
return jnp.pad(inp_tensor,
[[0, 0], [0, max(0, pad_to_len - inp_tensor.shape[1])]])
else:
return inp_tensor
batch_gen = batched_inputs(examples_data_gen, batch_size)
def batch_leftover_example(input_example, example_mask):
def extend_shape_to_batch_size(tensor):
return jnp.repeat(tensor, repeats=batch_size, axis=0)
return map(extend_shape_to_batch_size,
(input_example[None, ...], example_mask[None, ...]))
for i, (inp, mask) in tqdm(enumerate(batch_gen)):
leftover_batch = False
if len(inp.shape) == 1:
inp, mask = batch_leftover_example(inp, mask)
leftover_batch = True
inp, mask = map(pad_right, [inp, mask])
example_losses = accelerated_model_with_loss((inp, inp, mask))
if leftover_batch:
example_losses = example_losses[:1]
mask = mask[:1]
example_lengths = mask.sum(axis=-1)
loss_sum += (example_lengths * example_losses).sum()
n_tokens += mask.sum()
if i % 200 == 0:
print(f'Batches: {i}, current loss: {loss_sum / float(n_tokens)}')
return loss_sum / float(n_tokens)
def data_gen(dataset):
for example in dataset:
example = example['image']
mask = jnp.ones_like(example)
yield example, mask
BATCH_SIZE = 8
eval_data_gen = data_gen(get_eval_dataset())
loss = run_full_evaluation(model_eval, eval_data_gen, BATCH_SIZE)
print(f'Final perplexity: {loss}, final bpd: {loss / jnp.log(2)}')
gin.parse_config_file('hourglass_imagenet64.gin')
model = trax.models.HourglassLM(mode='eval')
model.init_from_file(
'gs://trax-ml/hourglass/imagenet64/model_300000.pkl.gz',
weights_only=True,
)
loss_fn = trax.layers.WeightedCategoryCrossEntropy()
model_eval = trax.layers.Accelerate(trax.layers.Serial(
model,
loss_fn
))
BATCH_SIZE = 8
eval_data_gen = data_gen(get_eval_dataset())
loss = run_full_evaluation(model_eval, eval_data_gen, BATCH_SIZE)
print(f'Final perplexity: {loss}, final bpd: {loss / jnp.log(2)}')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hourglass
Step2: Download ImageNet32/64 data
Step3: Load the ImageNet32 model
Step4: Evaluate on the validation set
Step5: ImageNet32 evaluation
Step6: ImageNet64 evaluation
|
12,803
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
#Typical imports
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import pandas as pd
# plots on fleek
matplotlib.style.use('ggplot')
# Read the housing data from the txt file into a pandas dataframe
# delim_whitespace tells the read_table method to look for
# whitespace as a separator.
df = pd.read_table('data/bldgstories.txt',
usecols=[1,2], delim_whitespace=True, header=0,
names=['Height in Feet', 'No. Stories'],
dtype=np.float32)
# Display the first few records
df.head()
# Visualize the data as a scatter plot
# with sq. ft. as the independent variable.
df.plot(x='No. Stories', y='Height in Feet', kind='scatter')
# First we declare our placeholders
x = tf.placeholder(tf.float32, [None, 1])
y_ = tf.placeholder(tf.float32, [None, 1])
# Then our variables
W = tf.Variable(tf.zeros([1,1]))
b = tf.Variable(tf.zeros([1]))
# And now we can make our linear model: y = Wx + b
y = tf.matmul(x, W) + b
# Finally we choose our cost function (SSE in this case)
cost = tf.reduce_sum(tf.square(y_-y))
# Call tf's gradient descent function with a learning rate and instructions to minimize the cost
learn_rate = .00000005
train = tf.train.GradientDescentOptimizer(learn_rate).minimize(cost)
# Prepare our data to be read into the training session. The data needs to match the
# shape we specified earlier -- in this case (n, 1) where n is the number of data points.
xdata = np.asarray([[i] for i in df['No. Stories']])
y_data = np.asarray([[i] for i in df['Height in Feet']])
# Create a tensorflow session, initialize the variables, and run gradient descent
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(10000):
# This is the actual training step - feed_dict specifies the data to be read into
# the placeholders x and y_ respectively.
sess.run(train, feed_dict={x:xdata, y_:y_data})
# Convert our variables from tensors to scalars so we can use them outside tf
height_story = np.asscalar(sess.run(W))
bias = np.asscalar(sess.run(b))
print("Model: y = %sx + %s" % (round(height_story,2), round(bias,2)))
# Create the empty plot
fig, axes = plt.subplots()
# Draw the scatter plot on the axes we just created
df.plot(x='No. Stories', y='Height in Feet', kind='scatter', ax=axes)
# Create a range of x values to plug into our model
stories = np.arange(10, 120, 1)
# Plot the model
plt.plot(stories, height_story*stories + bias)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you look at the raw data, you can see that the columns are separated by tabs, not commas. This changes the way we need to read the data in.
Step2: Let's take a look at the dataframe to ensure it looks the way we expect.
Step3: Looks good! Now let's take a different view and consider possible models -- a scatter plot would be useful here.
Step4: It seems a linear model could be appropriate in this case. How can we build it with TensorFlow?
Step5: And here's where all the magic will happen
Step6: Seems reasonable - according to our model, a story is about 13 feet. Let's plot the line so we can eyeball the fit.
|
12,804
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
import cartopy.crs as ccrs
from sklearn.cluster import KMeans
z500 = xr.open_dataset('data\z500.DJF.anom.1979.2010.nc', decode_times=False)
print(z500)
da = z500.sel(P=500).phi.load()
print(da.name, da.dims)
print(da.coords)
data = da.values
nt,ny,nx = data.shape
data = np.reshape(data, [nt, ny*nx], order='F')
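# Cluster the flattened daily Z500 anomaly maps (one row per time step) into 4 groups;
# these clusters are the weather regimes visualized below.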
mk = KMeans(n_clusters=4, random_state=0, n_jobs=-1).fit(data)
def get_cluster_fraction(m, label):
return (m.labels_==label).sum()/(m.labels_.size*1.0)
x,y = np.meshgrid(da.X, da.Y)
proj = ccrs.Orthographic(0,45)
fig, axes = plt.subplots(2,2, figsize=(8,8), subplot_kw=dict(projection=proj))
regimes = ['NAO$^-$', 'NAO$^+$', 'Blocking', 'Atlantic Ridge']
tags = list('abcd')
for i in range(mk.n_clusters):
onecen = mk.cluster_centers_[i,:].reshape(ny,nx, order='F')
cs = axes.flat[i].contourf(x, y, onecen,
levels=np.arange(-150, 151, 30),
transform=ccrs.PlateCarree(),
cmap='RdBu_r')
cb=fig.colorbar(cs, ax=axes.flat[i], shrink=0.8, aspect=20)
cb.set_label('[unit: m]',labelpad=-7)
axes.flat[i].coastlines()
axes.flat[i].set_global()
title = '{}, {:4.1f}%'.format(regimes[i], get_cluster_fraction(mk, i)*100)
axes.flat[i].set_title(title)
plt.text(0, 1, tags[i],
transform=axes.flat[i].transAxes,
va='bottom',
fontsize=plt.rcParams['font.size']*2,
fontweight='bold')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Load data
Step2: 3. Perform KMeans clustering to identify weather regimes
Step3: Get the fraction of a given cluster denoted by label.
Step4: 4. Visualize weather regimes
|
12,805
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
df = pd.DataFrame({'AAA' : [4,5,6,7],
'BBB' : [10,20,30,40],
'CCC' : [100,50,-30,-50]})
df
# If AAA >= 5, BBB = -1
df.loc[df.AAA >= 5, 'BBB'] = -1; df
df.loc[df.AAA >= 5, ['BBB','CCC']] = 555; df
df.loc[df.AAA < 5, ['BBB', 'CCC']] = 2000; df
df_mask = pd.DataFrame({'AAA' : [True] * 4, 'BBB' : [False] * 4, 'CCC' : [True,False] * 2})
df_mask
df.where(df_mask, -1000)
df = pd.DataFrame({'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40], 'CCC' : [100,50,-30,-50]}); df
# New column -- if AAA > 5, then high, else low:
df['logic'] = np.where(df['AAA'] > 5,'high','low'); df
df = pd.DataFrame({'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40], 'CCC' : [100,50,-30,-50]}); df
df_low = df[df.AAA <= 5]; df_low
df_high = df[df.AAA > 5]; df_high
df = pd.DataFrame({'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],'CCC' : [100,50,-30,-50]}); df
# And operation without assignment:
newseries = df.loc[(df['BBB'] < 25) & (df['CCC'] >= -40), 'AAA']; newseries
# Or operation without assignment:
newseries = df.loc[(df['BBB'] > 25) | (df['CCC'] >= -40), 'AAA']; newseries
# Or operation with assignment modifies the dataframe:
df.loc[(df['BBB'] > 25) | (df['CCC'] >= 75), 'AAA'] = 0.1; df
df = pd.DataFrame({'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],'CCC' : [100,50,-30,-50]}); df
aValue = 43.0
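# Sort the rows by how far CCC is from aValue, so the rows closest to the target value come first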
df.loc[(df.CCC-aValue).abs().argsort()]
df = pd.DataFrame({'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40], 'CCC' : [100,50,-30,-50]}); df
Crit1 = df.AAA <= 5.5; Crit1
Crit2 = df.BBB == 10.0; Crit2
Crit3 = df.CCC > -40.0; Crit3
AllCrit = Crit1 & Crit2 & Crit3; AllCrit
CritList = [Crit1,Crit2,Crit3]; CritList
import functools
AllCrit = functools.reduce(lambda x,y: x & y, CritList); AllCrit
df[AllCrit]
# Using both row labels and value conditionals:
df = pd.DataFrame({'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40], 'CCC' : [100,50,-30,50]}); df
df[(df.AAA <= 6) & (df.index.isin([0,2,4]))]
data = {'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40], 'CCC' : [100,50,-30,50]}
pd.DataFrame(data)
df = pd.DataFrame(data=data,index=['foo','bar','goo','car']); df
# Label-oriented:
df.loc['bar' : 'car']
# Positional-oriented:
df.iloc[0:3]
# Begin index at 1 instead of 0
df2 = pd.DataFrame(data=data,index=[1,2,3,4]); df2
df2.iloc[1:3]
df2.loc[1:3]
df = pd.DataFrame({'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40], 'CCC' : [100,50,-30,-50]}); df
df[~((df.AAA <= 6) & (df.index.isin([0,2,4])))]
rng = pd.date_range('1/1/2018',periods=100,freq='D')
data = np.random.randn(100,4)
print(rng[:5])
print()
print(data[:5])
cols = ['A','B','C','D']; cols
# Another reason why Python is so wonderful:
df1, df2, df3 = pd.DataFrame(data, rng, cols), pd.DataFrame(data, rng, cols), pd.DataFrame(data, rng, cols)
df1[:5]
pf = pd.Panel({'df1':df1,'df2':df2,'df3':df3});pf
pf.loc[:,:,'F'] = pd.DataFrame(data,rng,cols);pf
df = pd.DataFrame({'AAA' : [1,2,1,3], 'BBB' : [1,1,2,2], 'CCC' : [2,1,3,1]}); df
source_cols = df.columns # any subset would work
source_cols
new_cols = [str(x) + "_cat" for x in source_cols]
new_cols
categories = { 1 : 'Alpha', 2 : 'Beta', 3 : 'Charlie' }
df[new_cols] = df[source_cols].applymap(categories.get); df
df = pd.DataFrame({'AAA' : [1,1,1,2,2,2,3,3], 'BBB' : [2,1,3,4,5,1,2,3]}); df
# Method 1 -- use idxmin() to get the index of the mins:
df.loc[df.groupby('AAA')['BBB'].idxmin()]
# Method 2 -- sort the entries, then take the first of each.
# Notice that the resulting dataframe is the same except for the index.
df.sort_values(by='BBB').groupby('AAA', as_index=False).first()
df = pd.DataFrame({'Row' : [0,1,2],
'one_X' : [1.1,1.1,1.1],
'one_Y' : [1.2,1.2,1.2],
'two_X' : [1.11,1.11,1.11],
'two_Y' : [1.22,1.22,1.22]})
df
# As a labeled index:
df = df.set_index('Row'); df
# With hierarchical columns:
df.columns = pd.MultiIndex.from_tuples([tuple(c.split('_')) for c in df.columns]); df
# Now you can stack and reset them:
df = df.stack(0).reset_index(1); df
# Notice how the label 'level_1' got added for you in the DataFrame above.
# You can fix your own labels, too:
df.columns = ['Sample','All_X', 'All_Y']; df
cols = pd.MultiIndex.from_tuples([(x,y) for x in ['A','B','C'] for y in ['O','I']]); cols
df = pd.DataFrame(cols); df
df = pd.DataFrame(np.random.randn(2,6),index=['n','m'],columns=cols); df
df = df.div(df['C'],level=1); df
df = df.div(df['B'],level=1); df
coords = [('AA','one'),('AA','six'),('BB','one'),('BB','two'),('BB','six')]
df = pd.DataFrame(coords); df
index = pd.MultiIndex.from_tuples(coords); index
df = pd.DataFrame(index); df
df = pd.DataFrame([11,22,33,44,55],index,['MyData']); df
# Take the cross section of the 1st level and 1st axis
df.xs('BB',level=0,axis=0)
# Now take the 2nd level of the first axis
df.xs('six',level=1,axis=0)
import itertools
index = list(itertools.product(['Ada','Quinn','Violet'],['Comp','Math','Sci'])); index
headr = list(itertools.product(['Exams','Labs'],['I','II'])); headr
indx = pd.MultiIndex.from_tuples(index,names=['Student','Course']); indx
pd.DataFrame(indx)
# Notice that the labels in the cols multi-index are unnamed
cols = pd.MultiIndex.from_tuples(headr); cols
data = [[70+x+y+(x*y)%3 for x in range(4)] for y in range(9)]; data
df = pd.DataFrame(data,indx,cols); df
# https://stackoverflow.com/questions/38208416/colon-none-slicenone-in-numpy-array-indexers
All = slice(None); All
df.loc['Violet']
df.loc['Violet']
df.loc[(All,'Math'),All]
df.loc[(slice('Ada','Quinn'),'Math'),All]
df.loc[(All,'Math'),('Exams')]
df.loc[(All,'Math'),(All,'II')]
df.sort_values(by=('Labs','II'),ascending=False)
df = pd.DataFrame(np.random.randn(6,1),
index=pd.date_range('2018-08-01',periods=6,freq='B'),
columns=list('A'))
df
df.loc[df.index[3],'A'] = np.nan; df
df.reindex(df.index[::-1]).ffill()
df = pd.DataFrame({'animal' : 'cat dog cat fish dog cat cat'.split(),
'size' : list('SSMMMLL'),
'weight' : [8,10,11,1,20,12,12],
'adult' : [False] * 5 + [True] * 2})
df
df.groupby('animal').apply(lambda subf: subf['size'][subf['weight'].idxmax()])
gb = df.groupby(['animal'])
gb.get_group('cat')
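# Weight each animal by its size class (S x1.5, M x1.25, L x1.0), average within the group,
# and report the group as adult animals of size L.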
def GrowUp(x):
avg_weight = sum(x[x['size'] == 'S'].weight * 1.5)
avg_weight += sum(x[x['size'] == 'M'].weight * 1.25)
avg_weight += sum(x[x['size'] == 'L'].weight)
avg_weight /=len(x)
return pd.Series(['L',avg_weight,True],index=['size', 'weight', 'adult'])
expected_df = gb.apply(GrowUp); expected_df
S = pd.Series([i / 100.0 for i in range(1,11)]); S
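# Expanding apply: Red folds CumRet over each expanding window, accumulating the compound return.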
def CumRet(x,y):
return x * (1 + y)
def Red(x):
return functools.reduce(CumRet,x,1.0)
S.expanding().apply(Red)
df = pd.DataFrame({'A' : [1,1,2,2], 'B' : [1,-1,1,2]})
gb = df.groupby('A'); gb
def replace(g):
# Select all values less than zero:
mask = g < 0
# Replace those values with the mean of the other values that are positive:
g.loc[mask] = g[~mask].mean()
return g
gb.transform(replace)
df = pd.DataFrame({'code' : ['foo','bar','baz'] * 2,
'data' : [0.16,-0.21,0.33,0.45,-0.59,0.62],
'flag' : [False,True] * 3})
df
code_groups = df.groupby('code'); code_groups
agg_n_sort_order = code_groups[['data']].transform(sum).sort_values(by='data'); agg_n_sort_order
sorted_df = df.loc[agg_n_sort_order.index]; sorted_df
rng = pd.date_range(start="2018-7-10",periods=10,freq='2min'); rng
ts = pd.Series(data=list(range(10)),index=rng); ts
def MyCust(x):
if len(x) > 2:
return x[1] * 1.234
return pd.NaT
mhc = {'Mean' : np.mean, 'Max' : np.max, 'Custom' : MyCust}
print(ts.resample('5min').apply(mhc), ts)
df = pd.DataFrame({'Color' : 'Red Red Red Blue'.split(),
'Value' : [100, 150, 50, 50]})
df
df['Counts'] = df.groupby(['Color']).transform(len); df
df = pd.DataFrame({u'line_race' : [10, 10, 8, 10, 10, 8],
u'beyer' : [99, 102, 103, 103, 88, 100]},
index=[u'Last Gunfighter', u'Last Gunfighter', u'Last Gunfighter',
u'Paynter', u'Paynter', u'Paynter'])
df
df['beyer_shifted'] = df.groupby(level=0)['beyer'].shift(1); df
df = pd.DataFrame({'host' : ['other','other','that','this','this'],
'service' : ['mail','web','mail','mail','web'],
'no' : [1,2,1,2,1]}).set_index(['host', 'service'])
df
mask = df.groupby(level=0).agg('idxmax'); mask
df_count = df.loc[mask['no']].reset_index(); df_count
df = pd.DataFrame([0,1,0,1,1,1,0,1,1], columns=['A']); df
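# Comparing with the shifted column marks where the value changes; its cumsum then gives
# one group id per consecutive run of equal values.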
df.A.groupby((df.A != df.A.shift()).cumsum()).groups
df.A.groupby((df.A != df.A.shift()).cumsum()).cumsum()
df = pd.DataFrame(data={'Case' : ['A','A','A','B','A','A','B','A','A'],
'Data' : np.random.randn(9)}); df
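# Split the frame into chunks, starting a new chunk around each row where Case == 'B'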
dfs = list(zip(*df.groupby((1*(df['Case']=='B')).cumsum().rolling(window=3,min_periods=1).median())))[-1]
dfs
dfs[0]
dfs[1]
dfs[2]
df = pd.DataFrame(data={'Province' : ['ON','QC','BC','AL','AL','MN','ON'],
'City' : ['Toronto','Montreal','Vancouver','Calgary','Edmonton','Winnipeg','Windsor'],
'Sales' : [13,6,16,8,4,3,1]})
df
table = pd.pivot_table(df,values=['Sales'],index=['Province'],columns=['City'],aggfunc=np.sum,margins=True); table
table.stack('City')
grades = [48,99,75,80,42,80,72,68,36,78]
df = pd.DataFrame( {'ID': ["x%d" % r for r in range(10)],
'Gender' : ['F', 'M', 'F', 'M', 'F', 'M', 'F', 'M', 'M', 'M'],
'ExamYear': ['2007','2007','2007','2008','2008','2008','2008','2009','2009','2009'],
'Class': ['algebra', 'stats', 'bio', 'algebra', 'algebra', 'stats', 'stats', 'algebra', 'bio', 'bio'],
'Participated': ['yes','yes','yes','yes','no','yes','yes','yes','yes','yes'],
'Passed': ['yes' if x > 50 else 'no' for x in grades],
'Employed': [True,True,True,False,False,False,False,True,True,False],
'Grade': grades})
df
df.groupby('ExamYear').agg({'Participated' : lambda x: x.value_counts()['yes'],
'Passed' : lambda x: sum(x == 'yes'),
'Employed' : lambda x: sum(x),
'Grade' : lambda x : sum(x) / len(x)})
df = pd.DataFrame({'value' : np.random.randn(36)},
index=pd.date_range('2011-01-01',freq='M',periods=36))
df
pv = pd.pivot_table(df,index=df.index.month,
columns=df.index.year,
values='value',
aggfunc='sum')
pv
from matplotlib import pyplot as plt
pv.plot()
plt.show()
df = pd.DataFrame(data={'A' : [[2,4,8,16],[100,200],[10,20,30]],
'B' : [['a','b','c'],['jj','kk'],['ccc']]},
index=['I','II','III'])
df
def SeriesFromSubList(aList):
return pd.Series(aList)
organized_df = pd.concat(dict([(ind,row.apply(SeriesFromSubList)) for ind,row in df.iterrows()])); organized_df
df = pd.DataFrame(data=np.random.randn(2000,2)/10000,
index=pd.date_range('2001-01-01',periods=2000),
columns=['A','B'])
df.head()
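# Rolling apply over 51-row windows: compound the (A + B) returns, scale by Const,
# and key the result by the window's start date.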
def gm(aDF,Const):
v = ((((aDF.A+aDF.B)+1).cumprod())-1)*Const
return (aDF.index[0],v.iloc[-1])
S = pd.Series(dict([gm(df.iloc[i:min(i+51,len(df)-1)],5) for i in range(len(df)-50)])); S
rng = pd.date_range(start='2014-01-01',periods=100)
rng[:5]
df = pd.DataFrame({'Open' : np.random.randn(len(rng)),
'Close' : np.random.randn(len(rng)),
'Volume' : np.random.randint(100,2000,len(rng))},index=rng)
df.describe()
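# Volume-weighted average price (VWAP) over each rolling window of bars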
def vwap(bars) : return ((bars.Close*bars.Volume).sum()/bars.Volume.sum())
window = 5
s = pd.concat([(pd.Series(vwap(df.iloc[i:i+window]),
index=[df.index[i+window]]))
for i in range(len(df)-window)])
s.round(2)
def expand_grid(data_dict):
rows = itertools.product(*data_dict.values())
return pd.DataFrame.from_records(rows, columns=data_dict.keys())
df = expand_grid({'height' : [60,70],
'weight' : [100,140,180],
'sex' : ['Male','Female']})
df.describe()
df.head(10)
import datetime
ts.between_time(datetime.time(18), datetime.time(9), include_start=False, include_end=False)
rng = pd.date_range('1/1/2000', periods=24, freq='H'); rng
ts = pd.Series(pd.np.random.randn(len(rng)), index=rng); ts
# Select the rows from 10am to 2pm (inclusive):
ts.iloc[ts.index.indexer_between_time(datetime.time(10), datetime.time(14))]
# The same syntax works for a DataFrame:
df = pd.DataFrame(ts); df
df.iloc[df.index.indexer_between_time(datetime.time(10), datetime.time(14))]
index = pd.date_range('2013-1-1',periods=10,freq='15Min'); index
data = pd.DataFrame(data=[1,2,3,4,5,6,7,8,9,0],columns=['value'],index=index); data
data.index.indexer_between_time(start_time='1:15',end_time='2:00')
data.iloc[data.index.indexer_between_time('1:15','2:00')]
rng = pd.date_range('20130101 09:00','20130110 16:00',freq='30T'); rng
# Eliminate times that are outside of the range:
rng = rng.take(rng.indexer_between_time('09:30','16:00')); rng
# Eliminate non-weekdays:
rng = rng[rng.weekday<5]; rng
# Convert rng to a pandas Series:
rng.to_series()
dates = pd.date_range('2000-01-01',periods=5); dates
dates.to_period(freq='M').to_timestamp()
# The resample basics:
rng = pd.date_range('1/1/2012', periods=100, freq='S'); rng
ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng); ts
ts.resample('5Min').sum()
ts.resample('5Min').mean()
ts.resample('5Min').ohlc()
ts.resample('5Min').max()
df = pd.DataFrame([
('Monthly','2014-02-1', 529.1),
('Monthly','2014-03-1', 67.1),
('Monthly','2014-04-1', np.nan),
('Monthly','2014-05-1', 146.8),
('Monthly','2014-06-1', 469.7),
('Monthly','2014-07-1', 82.9),
('Monthly','2014-08-1', 636.9),
('Monthly','2014-09-1', 520.9),
('Monthly','2014-10-1', 217.4),
('Monthly','2014-11-1', 776.6),
('Monthly','2014-12-1', 18.4),
('Monthly','2015-01-1', 376.7),
('Monthly','2015-02-1', 266.5),
('Monthly','2015-03-1', np.nan),
('Monthly','2015-04-1', 144.1),
('Monthly','2015-05-1', 385.0),
('Monthly','2015-06-1', 527.1),
('Monthly','2015-07-1', 748.5),
('Monthly','2015-08-1', 518.2)],
columns=['Frequency','Date','Value'])
df
df['Date'] = pd.to_datetime(df['Date'])
df.set_index(['Frequency','Date'],inplace=True); df
# Write a function that will return the sum unless an entry is NaN.
# The function will also return NaN if a month is missing.
gpy = df.groupby(pd.Grouper(level='Date',freq='Q')); gpy
gpy.agg(lambda x: np.nan if (np.isnan(x).any() or len(x)<3) else x.sum())
data = pd.concat([pd.DataFrame([['A']*72, list(pd.date_range('1/1/2011', periods=72, freq='H')),
list(np.random.rand(72))], index = ['Group', 'Time', 'Value']).T,
pd.DataFrame([['B']*72, list(pd.date_range('1/1/2011', periods=72, freq='H')),
list(np.random.rand(72))], index = ['Group', 'Time', 'Value']).T,
pd.DataFrame([['C']*72, list(pd.date_range('1/1/2011', periods=72, freq='H')),
list(np.random.rand(72))], index = ['Group', 'Time', 'Value']).T],
axis = 0).set_index(['Group', 'Time'])
data.describe()
data.head()
# Change 'Value' column to type float and then use Grouper:
data['Value'] = data['Value'].astype(float)
daily_counts = data.groupby([pd.TimeGrouper('D', level='Time'),
pd.Grouper(level='Group')])['Value'].mean()
daily_counts
# Another solution:
data = data.reset_index(level='Group')
data.groupby('Group').resample('D')['Value'].mean()
df = pd.DataFrame({
'Branch' : 'A A A A A B'.split(),
'Buyer': 'Carl Mark Carl Joe Joe Carl'.split(),
'Quantity': [1,3,5,8,9,3],
'Date' : [
pd.datetime(2013,1,1,13,0),
pd.datetime(2013,1,1,13,5),
pd.datetime(2013,10,1,20,0),
pd.datetime(2013,10,3,10,0),
pd.datetime(2013,12,2,12,0),
pd.datetime(2013,12,2,14,0),
]}); df
df.set_index('Quantity').groupby(pd.Grouper(key='Date', freq='6M')).sum()
# http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases
# Use Grouper with another criterion, in this case an anonymous function.
df.set_index('Date').groupby(pd.Grouper(freq='6M'))['Quantity'].apply(lambda x: x.count())
# Now use groupby with Grouper and 'Branch':
newdf = df.set_index('Date').groupby(pd.Grouper(freq='6M')).apply(lambda x: x.groupby('Branch')); newdf
dir(newdf)
newdf.__add__
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Idioms
Step2: Execute an if-then statement on one column
Step3: If-then with assignment to 2 columns
Step4: Now you can perform another operation on the first row
Step5: You can also use pandas after setting up a mask
Step6: Use numpy's where() to perform an if-then-else operation.
Step7: Splitting
Step8: Building Criteria
Step9: Select the rows with data that's closest to a target value
Step10: Dynamically reduce a list of criteria using binary operators
Step11: If you want to hard code a solution
Step12: You may want to work with a list of dynamically built criteria
Step13: Selection
Step14: Use loc for label-oriented slicing and iloc for positional slicing
Step15: There are two explicit slicing methods and an available third option
Step16: Ambiguity arises when an index consists of integers with a non-zero start or non-unit increment.
Step17: Using the inverse operator ~ to take the complement of a mask
Step18: Panels
Step19: New Columns
Step20: Keep other columns when using min() with groupby
Step21: MultiIndexing
Step22: Arithmetic
Step23: Slicing
Step24: Navigating the indices in a MultiIndex
Step25: Having fun yet?
Step26: Levels
Step27: cumsum reset at NaN values
Step28: Now list the size of the animals with the highest weight.
Step29: Grouping using get_group()
Step30: Apply to different items in a group
Step31: Expanding Apply
Step32: Replacing some values with the mean of the rest of a group
Step33: Sort groups by their aggregated data
Step34: Create multiple aggregated columns
Step35: Create a value counts column and reassign back to the DataFrame
Step36: Shift groups of the values in a column based on the index
Step37: Select row with maximum value from each group
Step38: Grouping like Python's itertools.groupby method
Step39: Expanding Data
Step40: Pivot
Step41: Frequency tables like plyr in R
Step42: Plot pandas DataFrame with year over year data
Step43: Apply
Step44: Rolling Apply with a DataFrame returning a Series
Step45: Rolling Apply with a DataFrame returning a Scalar
Step46: Timeseries
Step47: Now, back to the Timeseries stuff.
Step48: Using indexer between time
Step49: One trick to remember is that by setting start_time to be later than end_time, you can get the times that are not between the two times.
Step50: Vectorized Lookup
Step51: Resampling
Step52: Using Grouper instead of TimeGrouper for time grouping of values
Step53: Grouping using a MultiIndex
Step54: Using TimeGrouper and another grouping to create subgroups, then apply a custom function
|
12,806
|
<ASSISTANT_TASK:>
Python Code:
!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro funsor
import numpyro
from jax import numpy as jnp, random, ops
from jax.scipy.special import expit
from numpyro import distributions as dist, sample
from numpyro.infer.mcmc import MCMC
from numpyro.infer.hmc import NUTS
from math import inf
from graphviz import Digraph
simkeys = random.split(random.PRNGKey(0), 10)
nsim = 5000
mcmc_key = random.PRNGKey(1)
dot = Digraph()
dot.node("A")
dot.node("B")
dot.node("Z")
dot.node("Y")
dot.edges(["ZA", "ZB", "AY", "BY"])
dot
b_A = 0.25
b_B = 0.25
s_Y = 0.25
Z = random.normal(simkeys[0], (nsim,))
A = random.bernoulli(simkeys[1], expit(Z))
B = random.bernoulli(simkeys[2], expit(Z))
Y = A * b_A + B * b_B + s_Y * random.normal(simkeys[3], (nsim,))
dot_mnar_y = Digraph()
with dot_mnar_y.subgraph() as s:
s.attr(rank="same")
s.node("Y")
s.node("M")
dot_mnar_y.node("A")
dot_mnar_y.node("B")
dot_mnar_y.node("Z")
dot_mnar_y.node("M")
dot_mnar_y.edges(["YM", "ZA", "ZB", "AY", "BY"])
dot_mnar_y
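# Simulate missingness in A that depends on the outcome Y; missing entries are encoded as -1.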
A_isobs = random.bernoulli(simkeys[4], expit(3 * (Y - Y.mean())))
Aobs = jnp.where(A_isobs, A, -1)
A_obsidx = jnp.where(A_isobs)
# generate complete case arrays
Acc = Aobs[A_obsidx]
Bcc = B[A_obsidx]
Ycc = Y[A_obsidx]
def ccmodel(A, B, Y):
ntotal = A.shape[0]
# get parameters of outcome model
b_A = sample("b_A", dist.Normal(0, 2.5))
b_B = sample("b_B", dist.Normal(0, 2.5))
s_Y = sample("s_Y", dist.HalfCauchy(2.5))
with numpyro.plate("obs", ntotal):
### outcome model
eta_Y = b_A * A + b_B * B
sample("obs_Y", dist.Normal(eta_Y, s_Y), obs=Y)
cckernel = NUTS(ccmodel)
ccmcmc = MCMC(cckernel, num_warmup=250, num_samples=750)
ccmcmc.run(mcmc_key, Acc, Bcc, Ycc)
ccmcmc.print_summary()
def impmodel(A, B, Y):
ntotal = A.shape[0]
A_isobs = A >= 0
# get parameters of imputation model
mu_A = sample("mu_A", dist.Normal(0, 2.5))
b_B_A = sample("b_B_A", dist.Normal(0, 2.5))
# get parameters of outcome model
b_A = sample("b_A", dist.Normal(0, 2.5))
b_B = sample("b_B", dist.Normal(0, 2.5))
s_Y = sample("s_Y", dist.HalfCauchy(2.5))
with numpyro.plate("obs", ntotal):
### imputation model
# get linear predictor for missing values
eta_A = mu_A + B * b_B_A
# sample imputation values for A
# mask out to not add log_prob to total likelihood right now
Aimp = sample(
"A",
dist.Bernoulli(logits=eta_A).mask(False),
infer={"enumerate": "parallel"},
)
# 'manually' calculate the log_prob
log_prob = dist.Bernoulli(logits=eta_A).log_prob(Aimp)
# cancel out enumerated values that are not equal to observed values
log_prob = jnp.where(A_isobs & (Aimp != A), -inf, log_prob)
# add to total likelihood for sampler
numpyro.factor("A_obs", log_prob)
### outcome model
eta_Y = b_A * Aimp + b_B * B
sample("obs_Y", dist.Normal(eta_Y, s_Y), obs=Y)
impkernel = NUTS(impmodel)
impmcmc = MCMC(impkernel, num_warmup=250, num_samples=750)
impmcmc.run(mcmc_key, Aobs, B, Y)
impmcmc.print_summary()
dot_mnar_x = Digraph()
with dot_mnar_x.subgraph() as s:
s.attr(rank="same")
s.node("A")
s.node("M")
dot_mnar_x.node("B")
dot_mnar_x.node("Z")
dot_mnar_x.node("Y")
dot_mnar_x.edges(["AM", "ZA", "ZB", "AY", "BY"])
dot_mnar_x
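# Now missingness in A depends on A itself: A is observed with probability 0.9 when A == 0
# and 0.1 when A == 1.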
A_isobs = random.bernoulli(simkeys[5], 0.9 - 0.8 * A)
Aobs = jnp.where(A_isobs, A, -1)
A_obsidx = jnp.where(A_isobs)
# generate complete case arrays
Acc = Aobs[A_obsidx]
Bcc = B[A_obsidx]
Ycc = Y[A_obsidx]
cckernel = NUTS(ccmodel)
ccmcmc = MCMC(cckernel, num_warmup=250, num_samples=750)
ccmcmc.run(mcmc_key, Acc, Bcc, Ycc)
ccmcmc.print_summary()
impkernel = NUTS(impmodel)
impmcmc = MCMC(impkernel, num_warmup=250, num_samples=750)
impmcmc.run(mcmc_key, Aobs, B, Y)
impmcmc.print_summary()
def impmissmodel(A, B, Y):
ntotal = A.shape[0]
A_isobs = A >= 0
# get parameters of imputation model
mu_A = sample("mu_A", dist.Normal(0, 2.5))
b_B_A = sample("b_B_A", dist.Normal(0, 2.5))
# get parameters of outcome model
b_A = sample("b_A", dist.Normal(0, 2.5))
b_B = sample("b_B", dist.Normal(0, 2.5))
s_Y = sample("s_Y", dist.HalfCauchy(2.5))
# get parameter of model of missingness
with numpyro.plate("obsmodel", 2):
p_Aobs = sample("p_Aobs", dist.Beta(1, 1))
with numpyro.plate("obs", ntotal):
### imputation model
# get linear predictor for missing values
eta_A = mu_A + B * b_B_A
# sample imputation values for A
# mask out to not add log_prob to total likelihood right now
Aimp = sample(
"A",
dist.Bernoulli(logits=eta_A).mask(False),
infer={"enumerate": "parallel"},
)
# 'manually' calculate the log_prob
log_prob = dist.Bernoulli(logits=eta_A).log_prob(Aimp)
# cancel out enumerated values that are not equal to observed values
log_prob = jnp.where(A_isobs & (Aimp != A), -inf, log_prob)
# add to total likelihood for sampler
numpyro.factor("obs_A", log_prob)
### outcome model
eta_Y = b_A * Aimp + b_B * B
sample("obs_Y", dist.Normal(eta_Y, s_Y), obs=Y)
### missingness / observation model
eta_Aobs = jnp.where(Aimp, p_Aobs[0], p_Aobs[1])
sample("obs_Aobs", dist.Bernoulli(probs=eta_Aobs), obs=A_isobs)
impmisskernel = NUTS(impmissmodel)
impmissmcmc = MCMC(impmisskernel, num_warmup=250, num_samples=750)
impmissmcmc.run(mcmc_key, Aobs, B, Y)
impmissmcmc.print_summary()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we will simulate data with correlated binary covariates. The assumption is that we wish to estimate a parameter for some parametric model without bias (e.g. for inferring a causal effect). For several different missing-data patterns we will see how to impute the values so that the resulting models remain unbiased.
Step2: MAR conditional on outcome
Step3: This graph depicts the data-generating mechanism, where Y is the only cause of missingness in A, denoted M. This means that the missingness in M is random, conditional on Y.
Step4: We will evaluate 2 approaches
Step5: As we can see, when data are missing conditionally on Y, imputation leads to consistent estimation of the parameters of interest (b_A and b_B).
Step6: Perhaps surprisingly, imputing missing values when the missingness mechanism depends on the variable itself will actually lead to bias, while complete case analysis is unbiased!
|
12,807
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Math
from math import frexp, pi
import math
#Convert a float into its mantissa and exponent and print as LaTeX
def fprint(x):
m,e = frexp(x)
return Math('{:4} \\times 2^{{{:}}}'.format(m, int(e)))
#Convert a mantissa from decimal to binary and print as LaTeX
def ffrac_ltx(x, terms=10):
bits = []
exp = 1
latex = ''
while x > 0 and exp < terms:
bits.append(int(x / 2.0 ** -exp))
x -= bits[-1] * 2.0 ** -exp
latex += '\\frac{{ {} }}{{ 2^{{ {} }} }} + '.format(bits[-1], exp)
exp += 1
return Math(latex[:-3] + ' = {}'.format(sum([b * 2**-i for b,i in zip(bits,range(1,exp))])))
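# Same conversion as ffrac_ltx, but return the binary digits of the mantissa as a plain string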
def ffrac(x, terms=10):
bits = []
exp = 1
while x > 0 and exp < terms:
bits.append(int(x / 2.0 ** -exp))
x -= bits[-1] * 2.0 ** -exp
exp += 1
return ''.join(['{}'.format(x) for x in bits])
fprint(4.3)
fprint(0.1)
fprint(pi)
ffrac_ltx(0.5)
ffrac_ltx(0.75)
ffrac_ltx(0.5232)
#This is the maximum number of bits used in the 0.1 Mantissa as a string.
print(ffrac(0.1, 100))
print('{:0.28f}'.format(0.1))
0.1 + 0.2 == (1.0 + 2.0) / 10.0
print(abs(-43))
x = -5
x = abs(x)
print(x)
from math import fabs
print(fabs(-4.34322))
from math import fabs, sin, cos
print(fabs(sin(4) * cos(43.)))
from math import *
print(e)
print(2 == 5)
a = 3
print(a == 3)
print(a <= 0)
print(a > 0 and a < 4)
print(True or 1 / 0.)
print(True and 1 / 0.)
x = 65
if x > 43:
print('Surprise! {} is greater than 43'.format(x))
x = 54
if x < 0:
print('{} is negative'.format(x))
elif x > 0:
print('{} is positive'.format(x))
else:
print('{} is 0'.format(x))
x = -32
if x < 0:
if abs(x) > 5:
print('{} is less than 0 and has magnitude greater than 5'.format(x))
else:
print('{} is not interesting'.format(x))
#or you can do:
if x < 0 and abs(x) > 5:
print('{} is less than 0 and has magnitude greater than 5'.format(x))
if x < 0 and not abs(x) > 5:
print('{} is not interesting'.format(x))
a = 28238
b = 28238
print(a is b, a == b)
a = b
print(a is b)
var = None
print(var is None)
a = 43
b = 23
print(a != b)
x = 1
if(x):
print('True')
else:
print('False')
x = None
if(x):
print('True')
else:
print('False')
x = 'Hello'
if(x):
print('True')
else:
print('False')
x = ''
if(x):
print('True')
else:
print('False')
x = math.sin(0.0)
if(x):
print('True')
else:
print('False')
x = 0.15 + 0.15
y = 0.1 + 0.2
print(x == y)
x = 0.15 + 0.15
y = 0.1 + 0.2
abs(x - y) < 10**-8
numbers = [4,7,24,11,2]
print(numbers)
print(numbers[0], numbers[1])
print(numbers[0:3])
numbers[2:5]
print(numbers)
print(numbers[-1], numbers[-2])
print(numbers[:4])
print(numbers)
print(numbers[:-1])
print(numbers[3:])
print(numbers[-3:])
print(numbers)
print(numbers[0:-1:2])
print(numbers[0::2])
print(numbers)
print(numbers[::2])
print(numbers[2::2])
print(numbers[4:0:-1])
print(numbers, numbers[::-1])
x = range(100)
len(x)
x = range(10)
print(max(x))
import math
math.cos(x)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A 'decimal' in binary is challenging to think about, because each digit position after the binary point represents $1 / 2^{n}$, where $n$ is the position. That means that to represent a binary mantissa exactly, its denominator must be a power of $2$. For example
Step2: Do not use == with floats!
Step3: Imports and Modules
Step4: Boolean Logic
Step5: Boolean Logic uses Short-Circuiting - only as much as necessary is evaluated.
Step6: Flow with Boolean
Step7: Full list of boolean operators
Step8: is None is a special case. None is used as a "sentinel" value to indicate if something is not ready or invalid.
Step9: Default Boolean - Don't Rely on this!
Step10: It is better to just be explicit and give a full boolean expression. You'll most often see these used by accident
Step11: The solution?
Step12: Lists
Step13: Individual elements can be accessed with []. The thing that goes inside the brackets is called the index. Recall that python starts counting from 0
Step14: Slicing Lists
Step15: The last index given to the slice operator is not inclusive, so 0:3 returns the elements at indices 0, 1, and 2
Step16: If it's more convenient, you can count backwards from the end of the list by using negative indices. The trick is that -0 is not really a thing, so the last element in the list is -1.
Step17: The index arguments to a slice are optional.
Step18: You can add a third argument by adding another colon; this third value is the step size
Step19: The other slicing indices are optional.
Step20: Using a negative stepsize allows you to count downwards
Step21: Note that we still include the first slice index (4), but not the second one (0).
Step22: Notice that Python was clever and knew with a step-size of -1, we actually wanted to start at the far end and count downwards.
|
12,808
|
<ASSISTANT_TASK:>
Python Code:
# Import the libraries we use
import numpy as np
import pandas as pd
import seaborn as sns
# Show the version of each library used
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))
# Open the CSV file with the sample data
datos = pd.read_csv('ensayo2.CSV')
%pylab inline
# Store in a list the columns of the file we are going to work with
columns = ['Diametro X','Diametro Y', 'RPM TRAC']
# Show a summary of the data obtained
datos[columns].describe()
#datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']]
graf = datos.ix[:, "Diametro X"].plot(figsize=(16,10),ylim=(0.5,3)).hlines([1.85,1.65],0,3500,colors='r')
graf.axhspan(1.65,1.85, alpha=0.2)
graf.set_xlabel('Tiempo (s)')
graf.set_ylabel('Diámetro (mm)')
#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')
datos.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')
datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]
#datos_filtrados.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')
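# Ratio of the two measured diameters; values close to 1 indicate a nearly round filament cross-section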
ratio = datos_filtrados['Diametro X']/datos_filtrados['Diametro Y']
ratio.describe()
rolling_mean = pd.rolling_mean(ratio, 50)
rolling_std = pd.rolling_std(ratio, 50)
rolling_mean.plot(figsize=(12,6))
# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)
ratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))
Th_u = 1.85
Th_d = 1.65
data_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |
(datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]
data_violations.describe()
data_violations.plot(subplots=True, figsize=(12,12))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plot both diameters and the puller speed on the same chart
Step2: Compare Diameter X against Diameter Y to see the filament ratio
Step3: Data filtering
Step4: Plot of X/Y
Step5: Analyze the ratio data
Step6: Quality limits
|
12,809
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
print("installed cirq.")
import cirq
import numpy as np
class QutritPlusGate(cirq.Gate):
"""A gate that adds one in the computational basis of a qutrit.
This gate acts on three-level systems. In the computational basis of
this system it enacts the transformation U|x〉 = |x + 1 mod 3〉, or
in other words U|0〉 = |1〉, U|1〉 = |2〉, and U|2〉 = |0〉."""
def _qid_shape_(self):
# By implementing this method this gate implements the
# cirq.qid_shape protocol and will return the tuple (3,)
# when cirq.qid_shape acts on an instance of this class.
# This indicates that the gate acts on a single qutrit.
return (3,)
def _unitary_(self):
# Since the gate acts on three level systems it has a unitary
# effect which is a three by three unitary matrix.
return np.array([[0, 0, 1],
[1, 0, 0],
[0, 1, 0]])
def _circuit_diagram_info_(self, args):
return '[+1]'
# Here we create a qutrit for the gate to act on.
q0 = cirq.LineQid(0, dimension=3)
# We can now enact the gate on this qutrit.
circuit = cirq.Circuit(
QutritPlusGate().on(q0)
)
# When we print this out we see that the qutrit is labeled by its dimension.
print(circuit)
# Create an instance of the qutrit gate defined above.
gate = QutritPlusGate()
# Verify that it acts on a single qutrit.
print(cirq.qid_shape(gate))
# Create an instance of the qutrit gate defined above. This gate implements _unitary_.
gate = QutritPlusGate()
# Because it acts on qutrits, its unitary is a 3 by 3 matrix.
print(cirq.unitary(gate))
# Create a circuit from the gate we defined above.
q0 = cirq.LineQid(0, dimension=3)
circuit = cirq.Circuit(QutritPlusGate()(q0))
# Run a simulation of this circuit.
sim = cirq.Simulator()
result = sim.simulate(circuit)
# Verify that the returned state is that of a qutrit.
print(cirq.qid_shape(result))
# Create a circuit with three qutrit gates.
q0, q1 = cirq.LineQid.range(2, dimension=3)
circuit = cirq.Circuit([
QutritPlusGate()(q0),
QutritPlusGate()(q1),
QutritPlusGate()(q1),
cirq.measure(q0, q1, key="x")
])
# Sample from this circuit.
result = cirq.sample(circuit, repetitions=3)
# See that the results are all integers from 0 to 2.
print(result)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Qudits
Step3: Most of the time in quantum computation, we work with qubits, which are 2-level quantum systems. However, it is possible to also define quantum computation with higher dimensional systems. A qu-d-it is a generalization of a qubit to a d-level or d-dimension system. For example, the state of a single qubit is a superposition of two basis states, $|\psi\rangle=\alpha|0\rangle+\beta|1\rangle$, whereas the state of a qudit for a three dimensional system is a superposition of three basis states $|\psi\rangle=\alpha|0\rangle+\beta|1\rangle+\gamma|2\rangle$.
Step4: cirq.Qid
Step5: Unitaries, mixtures, and channels on qudits
Step6: For a single qubit gate, its unitary is a 2x2 matrix, whereas for a single qutrit gate its unitary is a 3x3 matrix. A two qutrit gate will have a unitary that is a 9x9 matrix (3 * 3 = 9) and a qubit-ququart gate will have a unitary that is an 8x8 matrix (2 * 4 = 8). The size of the matrices involved in defining mixtures and channels follow the same pattern.
Step7: Circuits on qudits are always assumed to start in the $|0\rangle$ computational basis state, and all the computational basis states of a qudit are assumed to be $|0\rangle$, $|1\rangle$, ..., $|d-1\rangle$. Correspondingly, measurements of qudits are assumed to be in the computational basis and for each qudit return an integer corresponding to these basis states. Thus measurement results for each qudit are assumed to run from $0$ to $d-1$.
|
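A small sketch of the dimension counting in Step 6; it reuses the QutritPlusGate class defined in the code above, and the shape check is only for illustration:
import cirq
# Two qutrits, each acted on by the single-qutrit gate defined above
q0, q1 = cirq.LineQid.range(2, dimension=3)
two_qutrit_circuit = cirq.Circuit([QutritPlusGate()(q0), QutritPlusGate()(q1)])
# The combined unitary acts on a 3 * 3 = 9 dimensional space, so it is a 9x9 matrix
print(cirq.unitary(two_qutrit_circuit).shape)  # (9, 9)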
12,810
|
<ASSISTANT_TASK:>
Python Code:
from GongSu22_Statistics_Population_Variance import *
prices_pd.head()
ny_pd = prices_pd[prices_pd['State'] == 'New York'].copy(True)
ny_pd.head(10)
ny_pd_HighQ = ny_pd.iloc[:, [1, 7]]
ny_pd_HighQ.columns = ['NY_HighQ', 'date']
ny_pd_HighQ.head()
ca_pd_HighQ = california_pd.iloc[:, [1, 7]]
ca_pd_HighQ.head()
ca_ny_pd = pd.merge(ca_pd_HighQ, ny_pd_HighQ, on="date")
ca_ny_pd.head()
ca_ny_pd.rename(columns={"HighQ": "CA_HighQ"}, inplace=True)
ca_ny_pd.head()
ny_mean = ca_ny_pd.NY_HighQ.mean()
ny_mean
ca_ny_pd['ca_dev'] = ca_ny_pd['CA_HighQ'] - ca_mean
ca_ny_pd.head()
ca_ny_pd['ny_dev'] = ca_ny_pd['NY_HighQ'] - ny_mean
ca_ny_pd.head()
ca_ny_cov = (ca_ny_pd['ca_dev'] * ca_ny_pd['ny_dev']).sum() / (ca_count - 1)
ca_ny_cov
ca_highq_std = ca_ny_pd.CA_HighQ.std()
ny_highq_std = ca_ny_pd.NY_HighQ.std()
ca_ny_corr = ca_ny_cov / (ca_highq_std * ny_highq_std)
ca_ny_corr
california_pd.describe()
ca_ny_pd.cov()
ca_ny_pd.corr()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note
Step2: Explanation of the correlation analysis
Step3: Now let's use integer indexing to pull out only the information about the HighQ product.
Step4: The integer indexing used in the code above works as follows.
Step5: Preparation
Step6: Preparation
Step7: Rename the HighQ column for California to CA_HighQ.
Step8: Preparation
Step9: Now add new columns to the ca_ny_pd table; the added columns are named ca_dev and ny_dev.
Step10: Covariance of the wholesale prices of the HighQ product traded in California and New York
Step11: Pearson correlation coefficient
Step12: Correlation and causation
Step13: Exercise
Step14: Looking at the cell where CA_HighQ and NY_HighQ meet in the table above, you can confirm that it matches the covariance value computed earlier.
|
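A minimal sketch of the manual covariance and Pearson-correlation calculation described in Steps 10-14, using made-up numbers rather than the original price data:
import pandas as pd
# Made-up prices standing in for the CA and NY HighQ columns
df = pd.DataFrame({'CA_HighQ': [240.0, 242.5, 241.0, 243.5],
                   'NY_HighQ': [340.0, 345.0, 342.0, 348.0]})
n = len(df)
# Sample covariance computed from the deviations, as in the step-by-step code above
cov_manual = ((df['CA_HighQ'] - df['CA_HighQ'].mean()) *
              (df['NY_HighQ'] - df['NY_HighQ'].mean())).sum() / (n - 1)
# Pearson correlation = covariance / (std_CA * std_NY)
corr_manual = cov_manual / (df['CA_HighQ'].std() * df['NY_HighQ'].std())
# Both should match the built-in pandas results
print(cov_manual, df.cov().loc['CA_HighQ', 'NY_HighQ'])
print(corr_manual, df.corr().loc['CA_HighQ', 'NY_HighQ'])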
12,811
|
<ASSISTANT_TASK:>
Python Code:
# For using the same code in either Python 2 or 3
from __future__ import print_function
## Note: Python 2 users, use raw_input() to get player input. Python 3 users, use input()
from IPython.display import clear_output
def display_board(board):
clear_output()
print(' | |')
print(' ' + board[7] + ' | ' + board[8] + ' | ' + board[9])
print(' | |')
print('-----------')
print(' | |')
print(' ' + board[4] + ' | ' + board[5] + ' | ' + board[6])
print(' | |')
print('-----------')
print(' | |')
print(' ' + board[1] + ' | ' + board[2] + ' | ' + board[3])
print(' | |')
def player_input():
marker = ''
while not (marker == 'X' or marker == 'O'):
marker = raw_input('Player 1: Do you want to be X or O?').upper()
if marker == 'X':
return ('X', 'O')
else:
return ('O', 'X')
def place_marker(board, marker, position):
board[position] = marker
def win_check(board,mark):
return ((board[7] == mark and board[8] == mark and board[9] == mark) or # across the top
(board[4] == mark and board[5] == mark and board[6] == mark) or # across the middle
(board[1] == mark and board[2] == mark and board[3] == mark) or # across the bottom
(board[7] == mark and board[4] == mark and board[1] == mark) or # down the middle
(board[8] == mark and board[5] == mark and board[2] == mark) or # down the middle
(board[9] == mark and board[6] == mark and board[3] == mark) or # down the right side
(board[7] == mark and board[5] == mark and board[3] == mark) or # diagonal
(board[9] == mark and board[5] == mark and board[1] == mark)) # diagonal
import random
def choose_first():
if random.randint(0, 1) == 0:
return 'Player 2'
else:
return 'Player 1'
def space_check(board, position):
return board[position] == ' '
def full_board_check(board):
for i in range(1,10):
if space_check(board, i):
return False
return True
def player_choice(board):
# Using strings because of raw_input
position = ' '
while position not in '1 2 3 4 5 6 7 8 9'.split() or not space_check(board, int(position)):
position = raw_input('Choose your next position: (1-9) ')
return int(position)
def replay():
return raw_input('Do you want to play again? Enter Yes or No: ').lower().startswith('y')
print('Welcome to Tic Tac Toe!')
while True:
# Reset the board
theBoard = [' '] * 10
player1_marker, player2_marker = player_input()
turn = choose_first()
print(turn + ' will go first.')
game_on = True
while game_on:
if turn == 'Player 1':
# Player1's turn.
display_board(theBoard)
position = player_choice(theBoard)
place_marker(theBoard, player1_marker, position)
if win_check(theBoard, player1_marker):
display_board(theBoard)
print('Congratulations! You have won the game!')
game_on = False
else:
if full_board_check(theBoard):
display_board(theBoard)
print('The game is a draw!')
break
else:
turn = 'Player 2'
else:
# Player2's turn.
display_board(theBoard)
position = player_choice(theBoard)
place_marker(theBoard, player2_marker, position)
if win_check(theBoard, player2_marker):
display_board(theBoard)
print('Player 2 has won!')
game_on = False
else:
if full_board_check(theBoard):
display_board(theBoard)
print('The game is a tie!')
break
else:
turn = 'Player 1'
if not replay():
break
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1
Step2: Step 2
Step3: Step 3
Step4: Step 4
Step5: Step 5
Step6: Step 6
Step7: Step 7
Step8: Step 8
Step9: Step 9
Step10: Step 10
|
12,812
|
<ASSISTANT_TASK:>
Python Code:
Assignment between variables creates aliases.
animal = "giraffe"
creature = animal
print("Is creature an alias of animal?", creature is animal)
Assignment of the same value to different variables does not necessarily create aliases.
weather_next_5_days = ["Sunny", "Partly sunny", "Cloudy", "Sunny", "Sunny"]
weather_subsequent_5_days = ["Sunny", "Partly sunny", "Cloudy", "Sunny", "Sunny"]
if weather_subsequent_5_days is weather_next_5_days:
is_weather_subsequent_alias_of_weather_next = "Yes."
else:
is_weather_subsequent_alias_of_weather_next = "No."
if weather_subsequent_5_days == weather_next_5_days:
same_forecast = "Yes."
else:
same_forecast = "No."
print("Is weather_subsequent_5_days an alias of weather_next_5_days?",
is_weather_subsequent_alias_of_weather_next, "\n")
print("Is the forecast for the next 5 days the same as the forecast for the subsequent 5 days?",
same_forecast, "\n")
print("id(weather_next_5_days): ", id(weather_next_5_days))
print("id(weather_subsequent_5_days):", id(weather_subsequent_5_days))
Clone a list to get a new object with the same values.
lst = [1, 2, 3]
alias = lst # create an alias to lst
clone = lst[:] # clone lst
print("Is alias an alias of lst?", alias is lst)
print("Is clone an alias of lst?", clone is lst)
print("Do lst, alias, and clone all have the same values?", lst == alias == clone)
List slicing DOES NOT deeply copy nested objects.
import time
delay = 2
hundreds = [100, 200, 300]
numbers = [1, hundreds] # hundreds is nested
shallow_clone = numbers[:]
print("hundreds:", hundreds)
print("numbers:", numbers)
print("shallow_clone:", shallow_clone, "\n")
time.sleep(delay)
# verify that shallow_clone[0] == numbers[0]
print("shallow_clone[0] == numbers[0]:", shallow_clone[0] == numbers[0], "\n")
time.sleep(delay)
# modify the first element in numbers from 1 -> 10
numbers[0] = 10
print("numbers[0] = 10\n")
time.sleep(delay)
# test whether shallow_clone[0] also was modified
print("Was shallow_clone[0] also modified?",
shallow_clone[0] == numbers[0], "\n")
time.sleep(delay)
# change the first element in the list of hundreds from 100 -> 500
numbers[1][0] = 500
print("numbers[1][0] = 500\n")
time.sleep(delay)
# test whether the list of hundreds in shallow_clone also has been modified
print("Was shallow_clone[1][0] also modified?",
numbers[1][0] == shallow_clone[1][0], "\n")
time.sleep(delay)
# look at all of the variables
print("hundreds:", hundreds)
print("numbers:", numbers)
print("shallow_clone:", shallow_clone)
By default, strings are split on whitespace.
quote = "Beware of bugs in the above code; I have only proved it correct, not tried it. -Donald Knuth"
words = quote.split()
print(words)
Specify the delimiter to change how strings are split.
quote = "Beware of bugs in the above code; I have only proved it correct, not tried it. -Donald Knuth"
delimiter = ';'
phrases = quote.split(delimiter)
print(phrases)
A delimiter can also be specified to join.
quote = "Beware of bugs in the above code; I have only proved it correct, not tried it. -Donald Knuth"
delimiter = '-'
parts = quote.split(delimiter)
print(parts, "\n")
improved_quote = '\nby '.join(parts)
print(improved_quote)
Access an element of a list by using square brackets with an index.
lst = [1, 2, 3]
print("The third element of lst:", lst[2])
By varying the index, you can access different elements of a list.
lst = [1, 2, 3]
i = 0 # i holds the index value
print("The element of lst at index 0 is", lst[i])
i = 1
print("The element of lst at index 1 is", lst[i])
Create a list from a list literal.
lst = [1, 2, 3]
print(lst)
Create a list using list().
lst = list(range(1, 4))
print(lst, type(lst))
Create a list from a list comprehension.
lst = [i for i in range(1, 11) if i <= 3]
print(lst)
Create a list from a list comprehension without an if clause.
lst = [i for i in range(1, 11)]
print(lst)
Create a list by accumulating values.
lst = []
for i in range(1, 10):
if i <= 3:
lst.append(i)
print(lst)
Create a list by accumulating values without if condition.
lst = []
for i in range(1, 11):
lst.append(i)
print(lst)
List traversal using a for loop.
lst = [1, 2, 3]
for i in lst:
print(i)
List traversal using a for loop and an index.
lst = [1, 2, 3]
for i in range(len(lst)):
print(lst[i])
import time
delay = 3
def title_modifier(lst):
This function will change the list passed to it.
for i in range(len(lst)):
lst[i] = ' '.join([word[0].upper() + word[1:].lower() for word in lst[i].split()])
places = ["new york", "kansas city", "los angeles", "seattle"]
print("Places before being modified:", places, "\n")
time.sleep(delay)
print("Calling title_modifier(places)...", "\n")
return_value = title_modifier(places)
time.sleep(delay)
print("The return value of title_modifier(places) is", return_value, "\n")
time.sleep(delay)
print("Places after being modified:", places, "\n")
import time
delay = 3
def title_modifier(lst):
This function will change the list passed to it.
for i in range(len(lst)):
place = lst[i]
words = place.split()
title_cased_words = []
for word in words:
first_letter = word[0]
rest_of_word = word[1:]
title_cased_words.append(first_letter.upper() + rest_of_word.lower())
lst[i] = ' '.join(title_cased_words)
places = ["new york", "kansas city", "los angeles", "seattle"]
print("Places before being modified:", places, "\n")
time.sleep(delay)
print("Calling title_modifier(places)...", "\n")
return_value = title_modifier(places)
time.sleep(delay)
print("The return value of title_modifier(places) is", return_value, "\n")
time.sleep(delay)
print("Places after being modified:", places, "\n")
Lists are mutable.
cities = ["Albuquerque", "Chicago", "Paris"]
print("Cities before changing cities[1]:", cities)
cities[1] = "Tokyo"
print("Cities after changing cities[1]:", cities)
Strings are not mutable
cities = "Albuquerque Chicago Paris"
print("Cities as a string:", cities)
cities[1] = "Tokyo"
print("This will never print because it is an error to try to change part of a string")
Lists of lists (nested lists).
The format of hourly_forecasts is a list of days:
hourly_forecasts = [day1, day2, ..., dayN]
Each day has a name and a list of conditions associated with that day for the hours of 1-3 pm:
day = [name, [conditions_at_1pm, conditions_at_2pm, conditions_at_3pm]]
Therefore, hourly_forecasts is, in part, a list of lists of lists.
hourly_forecasts = [["Monday", ["Sunny", "Sunny", "Partly Cloudy"]],
["Tuesday", ["Cloudy", "Cloudy", "Cloudy"]],
["Wednesday", ["Partly Cloudy", "Sunny", "Sunny"]]]
for day in hourly_forecasts:
name = day[0]
hourly_conditions = day[1]
i = 1
for conditions in hourly_conditions:
print("The conditions at " + str(i) + "pm on " + name + " are forecast to be " + conditions)
i += 1
Direct iteration.
months = ["January", "February", "March",
"April", "May", "June", "July",
"August", "September", "October",
"November", "December"]
for month in months:
print(month)
Indexed iteration.
months = ["January", "February", "March",
"April", "May", "June", "July",
"August", "September", "October",
"November", "December"]
for i in range(len(months)):
print(months[i])
List accumulation.
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
squares = []
for i in numbers:
squares.append(i ** 2)
print(squares)
import time
delay = 3
def title_pure_function(lst):
This function will create a new list and return it
instead of changing the list passed into it.
places_titled = []
for i in range(len(lst)):
place = lst[i]
words = place.split()
title_cased_words = []
for word in words:
first_letter = word[0]
rest_of_word = word[1:]
title_cased_words.append(first_letter.upper() + rest_of_word.lower())
titled = ' '.join(title_cased_words)
places_titled.append(titled)
return places_titled
places = ["new york", "kansas city", "los angeles", "seattle"]
print("Places:", places, "\n")
time.sleep(delay)
print("Calling title_pure_function(places)...", "\n")
return_value = title_pure_function(places)
time.sleep(delay)
print("The return value of title_pure_function(places) is", return_value, "\n")
time.sleep(delay)
print("Places:", places, "\n")
Lists are sequences.
lst = [1, 2, 3]
print("lst: ", lst)
print("lst[0]: ", lst[0])
Tuples are sequences.
t = (1, 2, 3)
print("t: ", t)
print("t[0]: ", t[0])
def change_forecast(forecast):
This modifier function changes the value of the
forecast without returning anything.
for i in range(len(forecast)):
forecast[i] = "Cloudy"
forecast = ["Sunny", "Sunny", "Sunny"]
print("The forecast is", forecast)
return_value = change_forecast(forecast)
print("The return_value from calling change_forecast(forecast) is", return_value)
print("The forecast is", forecast)
Create a tuple from a comma-separated list of values.
vehicle = (2000, "Ford", "Ranger")
print(vehicle)
Parentheses are not required.
vehicle = 2000, "Ford", "Ranger"
print(vehicle)
You can create a tuple from a list.
vehicle = tuple([2000, "Ford", "Ranger"])
print(vehicle)
Tuples are immutable.
vehicle = (2000, "Ford", "Ranger")
vehicle[2] = "F-150"
Values in tuples can be "unpacked" into multiple variables.
vehicle = (2000, "Ford", "Ranger")
year, make, model = vehicle # tuple unpacking
print("Year:", year)
print("Make:", make)
print("Model:", model)
print(vehicle)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Glossary
Step4: clone
Step6:
Step10: delimiter
Step12: element
Step14: index
Step17: list
Step19: List comprehensions
Step21: In the above example the general pattern
Step24: Notice especially the "punctuation" separating the various parts of the comprehension
Step27: You can translate many for loops into list comprehensions
Step30: modifier
Step33: mutable data type
Step35: nested list
Step37: object
Step39: Boiled down to its essential characteristics, direct iteration takes the form
Step41: Indexed iteration often takes the form
Step43: List accumulation is a convenient way of creating a new list from an old list by applying a transformation to each element in the old list and appending the result to the new list. It takes the form
Step46: sequence
Step48: side effect
Step54: tuple
|
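A small sketch of the loop-to-comprehension translation described in Steps 21-27 (the numbers are arbitrary example values):
numbers = [1, 2, 3, 4, 5]
# Accumulation with an explicit for loop
odd_squares_loop = []
for n in numbers:
    if n % 2 == 1:
        odd_squares_loop.append(n ** 2)
# The equivalent comprehension: [expression for item in iterable if condition]
odd_squares_comp = [n ** 2 for n in numbers if n % 2 == 1]
print(odd_squares_loop == odd_squares_comp)  # True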
12,813
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
from collections import Counter
total_counts = Counter()# bag of words here
for idx, row in reviews.iterrows():
for word in row[0].split(' '):
total_counts[word] += 1
print("Total words in data set: ", len(total_counts))
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
print(vocab[-1], ': ', total_counts[vocab[-1]])
word2idx = {word: i for i, word in enumerate(vocab)} ## create the word-to-index dictionary here
def text_to_vector(text):
vector = np.zeros(len(vocab), dtype=np.int)
for word in text.split(' '):
idx = word2idx.get(word, None)
if idx is None:
continue
else:
vector[idx] += 1
return np.array(vector)
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
train_fraction = 0.9  # fraction of the records used for training; the remainder becomes the test set
train_split, test_split = shuffle[:int(records*train_fraction)], shuffle[int(records*train_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
net = tflearn.input_data([None, len(vocab)])
net = tflearn.fully_connected(net, 300, activation='ReLU')
net = tflearn.fully_connected(net, 200, activation='ReLU')
net = tflearn.fully_connected(net, 25, activation='ReLU')
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.01, loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
model = build_model()
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=10)
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preparing the data
Step2: Counting word frequency
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Step6: Text to vector function
Step7: If you do this right, the following code should return
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Step10: Building the network
Step11: Intializing the model
Step12: Training the network
Step13: Testing
Step14: Try out your own text!
|
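A toy bag-of-words sketch of the text-to-vector idea in Steps 6-7, using a hypothetical two-review corpus instead of the real data:
import numpy as np
from collections import Counter
reviews_small = ["the movie was great great fun", "the movie was dull"]
counts = Counter(word for review in reviews_small for word in review.split())
vocab_small = sorted(counts, key=counts.get, reverse=True)
word2idx_small = {word: i for i, word in enumerate(vocab_small)}
def to_vector(text):
    # Count how many times each vocabulary word appears in the text
    vec = np.zeros(len(vocab_small), dtype=np.int_)
    for word in text.split():
        if word in word2idx_small:
            vec[word2idx_small[word]] += 1
    return vec
print(vocab_small)
print(to_vector(reviews_small[0]))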
12,814
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from math import log
from sklearn import linear_model
#comment below if not using ipython notebook
%matplotlib inline
#read csv
anscombe_i = pd.read_csv('../datasets/anscombe_i.csv')
anscombe_i
plt.scatter(anscombe_i.x, anscombe_i.y, color='black')
plt.ylabel("Y")
plt.xlabel("X")
regr_i = linear_model.LinearRegression()
#We need to reshape the data to be a matrix
# with only one column
X = anscombe_i.x.reshape((len(anscombe_i.x), 1))
y = anscombe_i.y.reshape((len(anscombe_i.y), 1))
#Fit a line
regr_i.fit(X,y)
# The coefficients
print('Coefficients: \n', regr_i.coef_)
# The mean square error
print("Residual sum of squares: %.2f"
% np.mean((regr_i.predict(X) - y) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr_i.score(X, y))
plt.plot(X,regr_i.predict(X), color='green',
linewidth=3)
plt.scatter(anscombe_i.x, anscombe_i.y, color='black')
plt.ylabel("X")
plt.xlabel("y")
from pylab import *
# determine the line-fit
k,d = polyfit(anscombe_i.x,y,1)
yfit = k*anscombe_i.x+d
# plot the data
figure(1)
scatter(anscombe_i.x,y, color='black')
plot(anscombe_i.x, yfit, 'green')
#plot line from point to regression line
for ii in range(len(X)):
plot([anscombe_i.x[ii], anscombe_i.x[ii]], [yfit[ii], y[ii]], 'k')
xlabel('X')
ylabel('Y')
import pylab as P
figure(1)
scatter(anscombe_i.x,y, color='black')
plot(anscombe_i.x, yfit, 'green')
#plot line from point to regression line
for ii in range(len(X)):
plot([anscombe_i.x[ii], anscombe_i.x[ii]], [yfit[ii], y[ii]], 'k')
xlabel('X')
ylabel('Y')
residual_error= anscombe_i.y - yfit
error_mean = np.mean(residual_error)
error_sigma = np.std(residual_error)
plt.figure(2)
plt.scatter(anscombe_i.x,residual_error,label='residual error')
plt.xlabel("X")
plt.ylabel("residual error")
plt.figure(3)
n, bins, patches = plt.hist(residual_error, 10, normed=1, facecolor='blue', alpha=0.75)
y_pdf = P.normpdf( bins, error_mean, error_sigma)
l = P.plot(bins, y_pdf, 'k--', linewidth=1.5)
plt.xlabel("residual error in y")
plt.title("Residual Distribution")
# load statsmodels as alias ``sm``
import statsmodels.api as sm
y = anscombe_i.y
X = anscombe_i.x
# Adds a constant term to the predictor
# y = mx +b
X = sm.add_constant(X)
#fit ordinary least squares
est = sm.OLS(y, X)
est = est.fit()
est.summary()
plt.scatter(anscombe_i.x, anscombe_i.y, color='black')
X_prime = np.linspace(min(anscombe_i.x), max(anscombe_i.x), 100)[:, np.newaxis]
# add constant as we did before
X_prime = sm.add_constant(X_prime)
y_hat = est.predict(X_prime)
# Add the regression line (provides same as above)
plt.plot(X_prime[:, 1], y_hat, 'r')
import seaborn as sns
# this just makes the plots pretty (in my opinion)
sns.set(style="darkgrid", color_codes=True)
g = sns.jointplot("x", "y", data=anscombe_i, kind="reg",
xlim=(0, 20), ylim=(0, 12), color="r", size=7)
X = anscombe_i.x.reshape((len(anscombe_i.x), 1))
y = anscombe_i.y.reshape((len(anscombe_i.y), 1))
k,d = polyfit(anscombe_i.y,anscombe_i.x,1)
xfit = k*y+d
figure(2)
# plot the data
scatter(anscombe_i.x,y, color='black')
plot(xfit, y, 'blue')
for ii in range(len(y)):
plot([xfit[ii], anscombe_i.x[ii]], [y[ii], y[ii]], 'k')
xlabel('X')
ylabel('Y')
from scipy.odr import Model, Data, ODR
from scipy.stats import linregress
import numpy as np
def orthoregress(x, y):
# get initial guess by first running linear regression
linregression = linregress(x, y)
model = Model(fit_function)
data = Data(x, y)
od = ODR(data, model, beta0=linregression[0:2])
out = od.run()
return list(out.beta)
def fit_function(p, x):
#return y = m x + b
return (p[0] * x) + p[1]
m, b = orthoregress(anscombe_i.x, anscombe_i.y)
# determine the line-fit
y_ortho_fit = m*anscombe_i.x+b
# plot the data
scatter(anscombe_i.x,anscombe_i.y, color = 'black')
plot(anscombe_i.x, y_ortho_fit, 'r')
xlabel('X')
ylabel('Y')
scatter(anscombe_i.x,anscombe_i.y,color = 'black')
plot(xfit, anscombe_i.y, 'b', label= "horizontal residuals")
plot(anscombe_i.x, yfit, 'g', label= "vertical residuals")
plot(anscombe_i.x, y_ortho_fit, 'r', label = "perpendicular residuals" )
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=2, mode="expand", borderaxespad=0.)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's read the first set of data, take a look at the dataset, and make a simple scatter plot.
Step2: Luckily for us, we do not need to implement linear regression, since scikit-learn already has a very efficient implementation. The straight line can be seen in the plot below, showing how linear regression attempts to draw a straight line that best minimizes the residual sum of squares between the observed responses in the dataset and the responses predicted by the linear approximation.
Step3: Residuals
Step4: Now let us plot the residual (y - y predicted) vs x.
Step5: As seen in the histogram, the residual error should be (somewhat) normally distributed and centered around zero. This post explains why.
Step6: The important parts of the summary are the
Step7: If we want to be even more fancier, we can use the seaborn library to plot Linear regression with marginal distributions which also states the pearsonr and p value on the plot. Using the statsmodels approach is more rigourous, but sns provides quick visualizations.
Step8: Usually we calculate the (vertical) residual, or the difference between the observed and predicted y. This is because "the use of the least squares method to calculate the best-fitting line through a two-dimensional scatter plot typically requires the user to assume that one of the variables depends on the other. (We calculate the difference in the y) However, in many cases the relationship between the two variables is more complex, and it is not valid to say that one variable is independent and the other is dependent. When analysing such data researchers should consider plotting the three regression lines that can be calculated for any two-dimensional scatter plot."
Step9: Total Least Squares Regression
Step10: Plotting all three regression lines gives a fuller picture of the data, and comparing their slopes provides a simple graphical assessment of the correlation coefficient. Plotting the orthogonal regression line (red) provides additional information because it makes no assumptions about the dependence or independence of the variables; as such, it appears to more accurately describe the trend in the data compared to either of the ordinary least squares regression lines.
|
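A minimal residuals sketch for the vertical-residual discussion in Steps 3-5, with made-up data rather than the Anscombe values:
import numpy as np
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
# Ordinary least squares fit y = m*x + b
m, b = np.polyfit(x, y, 1)
residuals = y - (m * x + b)
# With an intercept in the model, the residuals sum to (numerically) zero
print(m, b, residuals.sum())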
12,815
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
import sys
from pathlib import Path
p = Path(".")
p = p.absolute().parent
sys.path.insert(0,str(p))
import codes
def draw_2d(dataset,k):
#colors = cm.rainbow(np.linspace(0, 1, k))
Color = ['b', 'g', 'r', 'c', 'm', 'y', 'k', 'w']
index = {j:n for n,j in enumerate(list(set([i["label"] for i in dataset])))}
for i in dataset:
plt.scatter(i["data"][0], i["data"][1],color=Color[index.get(i["label"])])
plt.show()
def main():
a = [2, 2]
b = [1, 2]
c = [1, 1]
d = [0, 0]
f = [3, 2]
dataset = [a, b, c, d, f]
dataset.append([1.5, 0])
dataset.append([3, 4])
res = codes.k_means(dataset, k=2)
return res
x = main()
x
draw_2d(x,2)
a = [1,2,3]
b = [1.0,2.0,3.0]
all([float(i[0]) == float(i[1]) for i in zip(a,b)])
from sklearn.cluster import KMeans
import numpy as np
X = np.array([[1, 2], [1, 4], [1, 0],[4, 2], [4, 4], [4, 0]])
kmeans = KMeans(n_clusters=2, random_state=0).fit(X)
kmeans.labels_
kmeans.predict([[0, 0], [4, 4]])
kmeans.cluster_centers_
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implement k-means clustering with sklearn
Step2: Inspect the label assigned to each vector after training
Step3: Use the trained model to predict the labels of new vectors
Step4: Inspect the center point of each cluster after training
|
12,816
|
<ASSISTANT_TASK:>
Python Code:
%load_ext sql
%sql mysql://studentuser:studentpw@172.17.0.4/dognitiondb
import socket
socket.gethostbyname('mysqlserver')
#mysqlserver
%config SqlMagic
%sql USE dognitiondb
%sql SHOW tables
%sql SHOW columns FROM dogs
%sql DESCRIBE reviews
%sql DESCRIBE complete_tests
%sql DESCRIBE exam_answers
%sql DESCRIBE site_activities
%sql DESCRIBE users
%%sql
SELECT breed
FROM dogs;
%%sql
SELECT breed
FROM dogs
LIMIT 10;
%%sql
SELECT breed
FROM dogs LIMIT 10 OFFSET 5;
%%sql
SELECT breed
FROM dogs LIMIT 5, 10;
%%sql
SELECT breed, breed_type, breed_group
FROM dogs LIMIT 0, 5;
%%sql
SELECT *
FROM dogs LIMIT 0, 5;
%%sql
SELECT median_iti_minutes / 60
FROM dogs LIMIT 0, 5;
%%sql
SELECT dog_guid, subcategory_name, test_name
FROM reviews
Limit 0, 15;
%%sql
SELECT activity_type, created_at, updated_at
FROM site_activities
Limit 49, 10;
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The "%" in this line of code is syntax for Python, not SQL. The "cell" I am referring to is the empty box area beside the "In [ ]
Step2: <mark>Every time you run a line of SQL code in Jupyter, you will need to preface the line with "%sql". Remember to do this, even though I will not explicitly instruct you to do so for the rest of the exercises in this course.</mark>
Step3: You are now ready to run queries in the Dognition database!
Step4: The output that appears above should show you there are six tables in the Dognition database. To determine what columns or fields (we will use those terms interchangeably in this course) are in each table, you can use the SHOW command again, but this time (1) you have to clarify that you want to see columns instead of tables, and (2) you have to specify from which table you want to examine the columns.
Step5: You should have determined that the "dogs" table has 21 columns.
Step6: You should have determined that there are 7 columns in the "reviews" table.
Step7: As you examine the fields in each table, you will notice that none of the Dognition tables have primary keys declared. However, take note of which fields say "MUL" in the "Key" column of the DESCRIBE output, because these columns can still be used to link tables together. An important thing to keep in mind, though, is that because these linking columns were not configured as primary keys, it is possible the linking fields contain NULL values or duplicate rows.
Step8: When you do so, you will see a line at the top of the output panel that says "35050 rows affected". This means that there are 35050 rows of data in the dogs table. Each row of the output lists the name of the breed of the dog represented by that entry. Notice that some breed names are listed multiple times, because several dogs of that breed have participated in the Dognition tests.
Step9: You can also select rows of data from different parts of the output table, rather than always just starting at the beginning. To do this, use the OFFSET clause after LIMIT. The number after the OFFSET clause indicates from which row the output will begin querying. Note that the offset of Row 1 of a table is actually 0. Therefore, in the following query
Step10: The LIMIT command is one of the pieces of syntax that can vary across database platforms. MySQL uses LIMIT to restrict the output, but other databases including Teradata use a statement called "TOP" instead. Oracle has yet another syntax
Step11: Another trick to know about when using SELECT is that <mark> you can use an asterisk as a "wild card" to return all the data in a table.</mark> (A wild card is defined as a character that will represent or match any character or sequence of characters in a query.) <mark> Take note, this is very risky to do if you do not limit your output or if you don't know how many data are in your database, so use the wild card with caution.</mark> However, it is a handy tool to use when you don't have all the column names easily available or when you know you want to query an entire table.
Step12: NOTE
Step13: Now it's time to practice writing your own SELECT statements.
Step14: Question 11
|
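A small illustration of the LIMIT/OFFSET behaviour described in Steps 8-9, reusing column names queried above (remember that Row 1 of a table has offset 0):
%%sql
-- Skip the first 5 rows, then return the next 10 (the same rows as "LIMIT 5, 10")
SELECT breed, breed_group
FROM dogs
LIMIT 10 OFFSET 5;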
12,817
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_id_text = [[source_vocab_to_int[word] for word in sentence.split()] for sentence in source_text.split("\n")]
target_id_text = [[target_vocab_to_int[word] for word in sentence.split()] + [target_vocab_to_int['<EOS>']] for sentence in target_text.split("\n")]
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
# TODO: Implement Function
inputs_ = tf.placeholder(tf.int32, [None, None], name="input")
targets_ = tf.placeholder(tf.int32, [None, None], name="target")
learning_rate = tf.placeholder(tf.float32, name="learning_rate")
keep_probability = tf.placeholder(tf.float32, name="keep_prob")
target_sequence_length = tf.placeholder(tf.int32, [None], name="target_sequence_length")
max_target_length = tf.reduce_max(target_sequence_length, name='max_target_len')
source_sequence_length = tf.placeholder(tf.int32, [None], name='source_sequence_length')
return inputs_, targets_, learning_rate, keep_probability, target_sequence_length, max_target_length, source_sequence_length
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
decoder_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return decoder_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
# TODO: Implement Function
embed = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
# enc_embeddings = tf.Variable(tf.random_uniform([source_vocab_size, encoding_embedding_size], -1, 1))
# embed = tf.nn.embedding_lookup(enc_embeddings, rnn_inputs)
def lstm_cell():
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(num_layers)])
outputs, final_state = tf.nn.dynamic_rnn(cell, embed, sequence_length=source_sequence_length,dtype=tf.float32)
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
# TODO: Implement Function
training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,
sequence_length=target_sequence_length,
name = "training_helper")
basic_decode = tf.contrib.seq2seq.BasicDecoder(cell=dec_cell,
helper=training_helper,
initial_state=encoder_state,
output_layer=output_layer)
decoder_output,_ = tf.contrib.seq2seq.dynamic_decode(decoder=basic_decode,
maximum_iterations=max_summary_length)
return decoder_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
# TODO: Implement Function
start_tokens = tf.fill([batch_size], start_of_sequence_id)
inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(embedding=dec_embeddings,
start_tokens=start_tokens,
end_token=end_of_sequence_id)
inference_decoder = tf.contrib.seq2seq.BasicDecoder(cell=dec_cell, helper=inference_helper,
initial_state=encoder_state, output_layer=output_layer)
outputs, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder, impute_finished = True,
maximum_iterations=max_target_sequence_length)
return outputs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
# embed = tf.contrib.layers.embed_sequence(dec_input, target_vocab_size, decoding_embedding_size)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size], -1, 1))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
def lstm_cell():
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
dec_cell = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(num_layers)])
output_layer = Dense(target_vocab_size,
kernel_initializer=tf.truncated_normal_initializer(mean=0.0,
stddev=0.1))
# outputs, final_state = tf.nn.dynamic_rnn(dec_cell, dec_embed_input, sequence_length=target_sequence_length,dtype=tf.float32)
# output_layer = tf.contrib.layers.fully_connected(outputs, target_vocab_size)
with tf.variable_scope("decode") as decoding_scope:
training_logits = decoding_layer_train(encoder_state,
dec_cell,
dec_embed_input,
target_sequence_length,
max_target_sequence_length,
output_layer,
keep_prob)
decoding_scope.reuse_variables()
inference_logits = decoding_layer_infer(encoder_state,
dec_cell,
dec_embeddings,
target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'],
max_target_sequence_length,
target_vocab_size,
output_layer,
batch_size,
keep_prob)
return training_logits, inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Decoder embedding size
:param dec_embedding_size: Encoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
encode_output, encode_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
enc_embedding_size)
decoder_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
training_logits, inference_logits = decoding_layer(decoder_input, encode_state,
target_sequence_length,
max_target_sentence_length,
rnn_size, num_layers,
target_vocab_to_int,
target_vocab_size,
batch_size, keep_prob,
dec_embedding_size)
return training_logits, inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 5
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 128
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 128
decoding_embedding_size = 128
# Learning Rate
learning_rate = 0.01
# Dropout Keep Probability
keep_probability = 0.8
display_step = 100
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
return [vocab_to_int[word] if word in vocab_to_int.keys() else vocab_to_int['<UNK>'] for word in sentence.split()]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoder Input
Step21: Encoding
Step24: Decoding - Training
Step27: Decoding - Inference
Step30: Build the Decoding Layer
Step33: Build the Neural Network
Step34: Neural Network Training
Step36: Build the Graph
Step40: Batch and pad the source and target sequences
Step43: Train
Step45: Save Parameters
Step47: Checkpoint
Step50: Sentence to Sequence
Step52: Translate
|
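A tiny sketch of the <PAD>-based batching described in the "Batch and pad" step, using toy id sequences and an assumed pad id of 0:
def pad_batch(batch, pad_int):
    # Pad every sequence in the batch to the length of the longest one
    max_len = max(len(seq) for seq in batch)
    return [seq + [pad_int] * (max_len - len(seq)) for seq in batch]
print(pad_batch([[5, 8, 2], [7, 2], [9, 4, 6, 2]], pad_int=0))
# [[5, 8, 2, 0], [7, 2, 0, 0], [9, 4, 6, 2]]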
12,818
|
<ASSISTANT_TASK:>
Python Code:
# Initialize parameter values
y0 = 0
rho = 0.5
w1 = 1
# Compute the period 1 value of y
y1 = rho*y0 + w1
# Print the result
print('y1 =',y1)
# Compute the period 2 value of y
w2=0
y2 = rho*y1 + w2
# Print the result
print('y2 =',y2)
# Compute
# Initialize the variables T and w
T = 10
w = np.zeros(T)
w[0]=1
# Define a function that returns an arrary of y-values given rho, y0, and an array of w values.
def diff1_example(rho,w,y0):
T = len(w)
y = np.zeros(T+1)
y[0] = y0
for t in range(T):
y[t+1]=rho*y[t]+w[t]
return y
fig = plt.figure()
y = diff1_example(0.5,w,0)
plt.plot(y,'-',lw=5,alpha = 0.75)
plt.title('$\\rho=0.5$')
plt.ylabel('y')
plt.xlabel('t')
plt.grid()
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(2,2,1)
y = diff1_example(0.5,w,0)
ax1.plot(y,'-',lw=5,alpha = 0.75)
ax1.set_title('$\\rho=0.5$')
ax1.set_ylabel('y')
ax1.set_xlabel('t')
ax1.grid()
ax2 = fig.add_subplot(2,2,2)
y = diff1_example(-0.5,w,0)
ax2.plot(y,'-',lw=5,alpha = 0.75)
ax2.set_title('$\\rho=-0.5$')
ax2.set_ylabel('y')
ax2.set_xlabel('t')
ax2.grid()
ax3 = fig.add_subplot(2,2,3)
y = diff1_example(1,w,0)
ax3.plot(y,'-',lw=5,alpha = 0.75)
ax3.set_title('$\\rho=1$')
ax3.set_ylabel('y')
ax3.set_xlabel('t')
ax3.grid()
ax4 = fig.add_subplot(2,2,4)
y = diff1_example(1.25,w,0)
ax4.plot(y,'-',lw=5,alpha = 0.75)
ax4.set_title('$\\rho=1.25$')
ax4.set_ylabel('y')
ax4.set_xlabel('t')
ax4.grid()
plt.tight_layout()
# Initialize the variables T and w
T = 25
w = np.zeros(T)
w[0]=1
y1 = diff1_example(0.25,w,0)
y2 = diff1_example(0.5,w,0)
y3 = diff1_example(0.75,w,0)
y4 = diff1_example(0.95,w,0)
fig = plt.figure()
plt.plot(y1,'-',lw=5,alpha = 0.75,label='$\\rho=0.25$')
plt.plot(y2,'-',lw=5,alpha = 0.75,label='$\\rho=0.50$')
plt.plot(y3,'-',lw=5,alpha = 0.75,label='$\\rho=0.75$')
plt.plot(y4,'-',lw=5,alpha = 0.75,label='$\\rho=0.95$')
plt.ylabel('y')
plt.xlabel('t')
plt.legend(loc='upper right')
plt.grid()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The variable y1 in the preceding example stores the computed value for $y_1$. We can continue to iterate on Equation (4) to compute $y_2$, $y_3$, and so on. For example
Step2: We can do this as many times as necessary to reach the desired value of $t$. Note that iteration is necessary. Even though $y_t$ is apparently a function of $t$, we could not, for example, compute $y_{20}$ directly. Rather we'd have to compute $y_1, y_2, y_3, \ldots, y_{19}$ first. The linear first-order difference equation is an example of a recursive model and iteration is necessary for computing recursive models in general.
Step3: Exercise
Step4: Exercise
|
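A compact sketch of the iteration argument in Step 2: to obtain y_20 you must compute y_1 through y_19 first (rho, y0 and the shock sequence below are assumed example values):
rho, y0 = 0.5, 0
w = [1] + [0] * 19           # a one-time shock in period 1, then nothing
y = y0
for t in range(20):
    y = rho * y + w[t]       # y_{t+1} = rho * y_t + w_{t+1}
print(y)                     # the value of y_20 after iterating period by period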
12,819
|
<ASSISTANT_TASK:>
Python Code:
# Set up feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.sql.ex5 import *
print("Setup Complete")
from google.cloud import bigquery
# Create a "Client" object
client = bigquery.Client()
# Construct a reference to the "chicago_taxi_trips" dataset
dataset_ref = client.dataset("chicago_taxi_trips", project="bigquery-public-data")
# API request - fetch the dataset
dataset = client.get_dataset(dataset_ref)
# Your code here to find the table name
# Write the table name as a string below
table_name = ____
# Check your answer
q_1.check()
#q_1.solution()
# Your code here
# Check your answer (Run this code cell to receive credit!)
q_2.solution()
# Your code goes here
rides_per_year_query = ____
# Set up the query (cancel the query if it would use too much of
# your quota)
safe_config = bigquery.QueryJobConfig(maximum_bytes_billed=10**10)
rides_per_year_query_job = ____ # Your code goes here
# API request - run the query, and return a pandas DataFrame
rides_per_year_result = ____ # Your code goes here
# View results
print(rides_per_year_result)
# Check your answer
q_3.check()
#q_3.hint()
#q_3.solution()
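# Illustrative sketch (added, not the exercise answer): the general shape of a
# per-year count query and how it would be run with the safe_config defined above.
# The table name `taxi_trips` and the column `trip_start_timestamp` are assumptions
# used only for illustration.
example_query = """
                SELECT EXTRACT(YEAR FROM trip_start_timestamp) AS year,
                       COUNT(1) AS num_trips
                FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`
                GROUP BY year
                ORDER BY year
                """
# example_job = client.query(example_query, job_config=safe_config)
# example_result = example_job.to_dataframe()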
# Your code goes here
rides_per_month_query = ____
# Set up the query
safe_config = bigquery.QueryJobConfig(maximum_bytes_billed=10**10)
rides_per_month_query_job = ____ # Your code goes here
# API request - run the query, and return a pandas DataFrame
rides_per_month_result = ____ # Your code goes here
# View results
print(rides_per_month_result)
# Check your answer
q_4.check()
#q_4.hint()
#q_4.solution()
# Your code goes here
speeds_query = """
               WITH RelevantRides AS
               (
                   SELECT ____
                   FROM ____
                   WHERE ____
               )
               SELECT ______
               FROM RelevantRides
               GROUP BY ____
               ORDER BY ____
               """
# Set up the query
safe_config = bigquery.QueryJobConfig(maximum_bytes_billed=10**10)
speeds_query_job = ____ # Your code here
# API request - run the query, and return a pandas DataFrame
speeds_result = ____ # Your code here
# View results
print(speeds_result)
# Check your answer
q_5.check()
#q_5.solution()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You'll work with a dataset about taxi trips in the city of Chicago. Run the cell below to fetch the chicago_taxi_trips dataset.
Step2: Exercises
Step3: For the solution, uncomment the line below.
Step4: 2) Peek at the data
Step5: After deciding whether you see any important issues, run the code cell below.
Step7: 3) Determine when this data is from
Step8: For a hint or the solution, uncomment the appropriate line below.
Step10: 4) Dive slightly deeper
Step11: For a hint or the solution, uncomment the appropriate line below.
Step13: 5) Write the query
Step14: For the solution, uncomment the appropriate line below.
|
12,820
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy.stats as sts
import pandas as pd
import matplotlib.pyplot as plt
sample = np.random.choice([1,2,3,4,5,6], 100)
# count how many times each side came up:
from collections import Counter
c = Counter(sample)
print("Number of occurrences of each side:")
print(c)
# now divide by the total number of throws to get the probabilities:
print("Probabilities of each side:")
print({k: v/100.0 for k, v in c.items()})
norm_rv = sts.norm(0, 1)
sample = norm_rv.rvs(100)
x = np.linspace(-4,4,100)
cdf = norm_rv.cdf(x)
plt.plot(x, cdf, label='theoretical CDF')
# use the statsmodels library to build the ECDF
from statsmodels.distributions.empirical_distribution import ECDF
ecdf = ECDF(sample)
plt.step(ecdf.x, ecdf.y, label='ECDF')
plt.ylabel('$f(x)$')
plt.xlabel('$x$')
plt.legend(loc='upper left')
plt.hist(sample, normed=True)
plt.ylabel('number of samples')
plt.xlabel('$x$')
plt.hist(sample, bins=3, normed=True)
plt.ylabel('number of samples')
plt.xlabel('$x$')
plt.hist(sample, bins=40, normed=True)
plt.ylabel('number of samples')
plt.xlabel('$x$')
# use the Pandas library for the plot:
df = pd.DataFrame(sample, columns=['KDE'])
ax = df.plot(kind='density')
# plot the theoretical probability density on the same figure:
x = np.linspace(-4,4,100)
pdf = norm_rv.pdf(x)
plt.plot(x, pdf, label='theoretical pdf', alpha=0.5)
plt.legend()
plt.ylabel('$f(x)$')
plt.xlabel('$x$')
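# Illustrative sketch (added): the ECDF can also be computed by hand with numpy —
# at a point x0 it is simply the fraction of sample values that are <= x0.
x0 = 0.5
print('manual ECDF at x0:', np.mean(sample <= x0))
print('theoretical CDF at x0:', norm_rv.cdf(x0))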
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now suppose this sample was obtained not artificially, but by throwing a fair six-sided die 100 times. Let us estimate the probability of each side using frequencies
Step2: This is exactly an estimate of the probability mass function of a discrete distribution.
Step3: The empirical distribution function for the obtained sample
Step4: A histogram of the sample
Step5: Let us try setting the number of histogram bins manually
Step6: An empirical density estimate built from the sample using kernel smoothing
|
12,821
|
<ASSISTANT_TASK:>
Python Code:
x = -5
if x < 0:
x = 0
print 'Negative changed to zero'
elif x == 0:
print 'Zero'
elif x == 1:
print 'Single'
else:
print 'More'
print x
x = -5
if x < 0:
print "X is negative"
x = 5
if x == 0: print 'X is zero'
for pet in ['cat', 'dog', 'pig']:
print 'I own a', pet
print range(10)
print range(5, 10)
print range(0, 10, 3)
a = ['Mary', 'had', 'a', 'little', 'lamb']
range(len(a))
for i in range(len(a)):
print i, a[i]
for n in [1, 2, 3, 4, 5]:
if n % 2 == 0:
print 'Found an even number', n
break
for n in [1, 2, 3, 4, 5]:
if n % 2 == 1:
continue
print 'Even number:', n
i = 0
while i < 5:
print i,
i = i + 1
x = [1, 5, 8, 9, 10]
y = [9, 2, 5, 7, 3]
result = 0
for i in range(len(x)):
result = result + x[i] * y[i]
print result
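# Illustrative sketch (added): the same dot product written with zip(), which
# pairs up the elements of x and y directly.
result_zip = 0
for a, b in zip(x, y):
    result_zip = result_zip + a * b
print result_zip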
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: the body of the if is indented
Step2: for statements
Step3: if you need to iterate over a sequence of numbers, using built-in function <code>range()</code> function
Step4: Iterate over the indices of a sequence, you can combine <code>range()</code> and <code>len()</code> as follows
Step5: break and continue statements
Step6: In the loop when encounter
Step7: while statements
Step8: exercise
|
12,822
|
<ASSISTANT_TASK:>
Python Code:
import essentia.standard as es
filename = 'audio/dubstep.flac'
# Load the whole file in mono
audio = es.MonoLoader(filename=filename)()
print(audio.shape)
# Load the whole file in stereo
audio, _, _, _, _, _ = es.AudioLoader(filename=filename)()
print(audio.shape)
# Load and resample to 16000 Hz
audio = es.MonoLoader(filename=filename, sampleRate=16000)()
print(audio.shape)
# Load only a 10-seconds segment in mono, starting from the 2nd minute
audio = es.EasyLoader(filename='audio/Vivaldi_Sonata_5_II_Allegro.flac',
sampleRate=44100, startTime=60, endTime=70)()
print(audio.shape)
# Replace with your own file
es.MetadataReader(filename='audio/Mr. Bungle - Stubb (a Dub).mp3')()
metadata_pool = es.MetadataReader(filename='audio/Mr. Bungle - Stubb (a Dub).mp3')()[7]
for d in metadata_pool.descriptorNames():
print(d, metadata_pool[d])
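# Illustrative sketch (added): the duration of the most recently loaded segment can
# be estimated from its number of samples and the sample rate used at load time
# (44100 Hz in the EasyLoader call above).
print('duration (s):', len(audio) / 44100.0)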
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading file metadata
Step2: The output contains standard metadata fields (track name, artist, name, album name, track number, etc.) as well as bitrate and samplerate. It also includes an Essentia pool object containing all other fields found
|
12,823
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import np_utils
# Fix random seed for reproducibility
seed = 7
np.random.seed(seed)
# Load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# Flatten 28*28 images to a 784 vector for each image
num_pixels = X_train.shape[1] * X_train.shape[2]
X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32')
X_test = X_test.reshape(X_test.shape[0], num_pixels).astype('float32')
# Normalize inputs from 0-255 to 0-1
X_train = X_train / 255
X_test = X_test / 255
# one-hot-encode outputs (Bsp: 2 --> [0,0,1,0,0,0,0,0,0,0])
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]
def model():
# Create model
model = Sequential()
model.add(Dense(num_pixels, input_dim=num_pixels, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
# Compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
# Build the model
model = model()
# Fit the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=200, verbose=2)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Error: %.2f%%" % (100-scores[1]*100))
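# Illustrative sketch (added): what the one-hot encoding above does to a single
# label — the digit 2 becomes a vector with a 1 in position 2 and 0 elsewhere.
print(np_utils.to_categorical([2], num_classes=10))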
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: num_pixels is equal to 784 (28 * 28)
Step2: one-hot-encoding is used because in the network, there is one neuron for one number...
Step3: 'softmax' is a sigmoid shaped curve
|
12,824
|
<ASSISTANT_TASK:>
Python Code:
pow(7, 4)
s = "Hi there Sam!"
s.split(' ')
planet = "Earth"
diameter = 12742
"The diameter of {0} is {1} kilometers.".format(planet, diameter)
lst = [1,2,[3,4],[5,[100,200,['hello']],23,11],1,7]
lst[3][1][2][0]
d = {'k1':[1,2,3,{'tricky':['oh','man','inception',{'target':[1,2,3,'hello']}]}]}
d['k1'][3]['tricky'][3]['target'][3]
# Just answer with text, no code necessary
'user@domain.com'.split('@')[-1]
def domain(email):
return email.split('@')[-1]
ss = 'This dog runs faster than the other dog dude!'
def countdog(s):
return s.lower().split(' ').count('dog')
countdog(ss)
def countDog(st):
count = 0
for word in st.lower().split():
if word == 'dog':
count += 1
return count
s = 'I have a dog'
def judge_dog_in_str(s):
return 'dog' in s.lower().split(' ')
judge_dog_in_str(s)
def caught_speeding(speed, is_birthday):
if is_birthday:
speeding = speed - 5
else:
speeding = speed
if speeding > 80:
return 'Big Ticket'
elif speeding > 60:
return 'Small Ticket'
else:
return 'No Ticket'
caught_speeding(81,True)
caught_speeding(81,False)
def fib_dyn(n):
a,b = 1,1
for i in range(n-1):
a,b = b,a+b
return a
fib_dyn(10)
def fib_recur(n):
if n == 0:
return 0
if n == 1:
return 1
else:
return fib_recur(n-1) + fib_recur(n-2)
fib_recur(10)
def fib(max):
n, a, b = 0, 0, 1
while n < max:
yield b
# print(b)
a, b = b, a + b
n = n + 1
print(list(fib(10))[-1])
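# Illustrative sketch (added): the three Fibonacci implementations above should
# agree at the same index (here the 10th Fibonacci number, 55).
print(fib_dyn(10), fib_recur(10), list(fib(10))[-1])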
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Split the following string
Step2: Given the following two variables
Step3: Given the following nested list, use indexing to grab the word 'hello'
Step4: Given the following nested dictionary, grab the word 'hello' from it
Step5: What is the difference between a dictionary and a list?
Step6: Write a function that grabs the domain part of an email address like the one below
Step7: Create a function that counts how many times 'dog' appears in the input string (ignore corner cases)
Step8: Create a function that checks whether 'dog' is contained in the input string (again, ignore corner cases)
Step9: If you drive too fast, a traffic officer will pull you over. Write a function that returns one of three possible results: "No Ticket", "Small Ticket", or "Big Ticket".
Step10: Compute the Fibonacci sequence, implemented with a generator
|
12,825
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'hadgem3-gc31-hh', 'toplevel')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
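# Illustrative example (added, hypothetical values only): how the two calls above
# might be filled in once real names are known.
# DOC.set_author("Jane Doe", "jane.doe@example.org")
# DOC.set_contributor("John Smith", "john.smith@example.org")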
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Flux Correction
Step7: 3. Key Properties --> Genealogy
Step8: 3.2. CMIP3 Parent
Step9: 3.3. CMIP5 Parent
Step10: 3.4. Previous Name
Step11: 4. Key Properties --> Software Properties
Step12: 4.2. Code Version
Step13: 4.3. Code Languages
Step14: 4.4. Components Structure
Step15: 4.5. Coupler
Step16: 5. Key Properties --> Coupling
Step17: 5.2. Atmosphere Double Flux
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Step19: 5.4. Atmosphere Relative Winds
Step20: 6. Key Properties --> Tuning Applied
Step21: 6.2. Global Mean Metrics Used
Step22: 6.3. Regional Metrics Used
Step23: 6.4. Trend Metrics Used
Step24: 6.5. Energy Balance
Step25: 6.6. Fresh Water Balance
Step26: 7. Key Properties --> Conservation --> Heat
Step27: 7.2. Atmos Ocean Interface
Step28: 7.3. Atmos Land Interface
Step29: 7.4. Atmos Sea-ice Interface
Step30: 7.5. Ocean Seaice Interface
Step31: 7.6. Land Ocean Interface
Step32: 8. Key Properties --> Conservation --> Fresh Water
Step33: 8.2. Atmos Ocean Interface
Step34: 8.3. Atmos Land Interface
Step35: 8.4. Atmos Sea-ice Interface
Step36: 8.5. Ocean Seaice Interface
Step37: 8.6. Runoff
Step38: 8.7. Iceberg Calving
Step39: 8.8. Endoreic Basins
Step40: 8.9. Snow Accumulation
Step41: 9. Key Properties --> Conservation --> Salt
Step42: 10. Key Properties --> Conservation --> Momentum
Step43: 11. Radiative Forcings
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Step45: 12.2. Additional Information
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Step47: 13.2. Additional Information
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Step49: 14.2. Additional Information
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Step51: 15.2. Additional Information
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Step53: 16.2. Additional Information
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Step55: 17.2. Equivalence Concentration
Step56: 17.3. Additional Information
Step57: 18. Radiative Forcings --> Aerosols --> SO4
Step58: 18.2. Additional Information
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Step60: 19.2. Additional Information
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Step62: 20.2. Additional Information
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Step64: 21.2. Additional Information
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Step66: 22.2. Aerosol Effect On Ice Clouds
Step67: 22.3. Additional Information
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Step69: 23.2. Aerosol Effect On Ice Clouds
Step70: 23.3. RFaci From Sulfate Only
Step71: 23.4. Additional Information
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Step73: 24.2. Additional Information
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Step77: 25.4. Additional Information
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Step81: 26.4. Additional Information
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Step83: 27.2. Additional Information
Step84: 28. Radiative Forcings --> Other --> Land Use
Step85: 28.2. Crop Change Only
Step86: 28.3. Additional Information
Step87: 29. Radiative Forcings --> Other --> Solar
Step88: 29.2. Additional Information
|
12,826
|
<ASSISTANT_TASK:>
Python Code:
%run -i initilization.py
from classification.ExecuteClassificationWorkflow import ExecuteWorkflowClassification
import classification.CreateParametersClasification as create_params
from shared import GeneralDataImport
from IPython.display import display
data_import = GeneralDataImport.GeneralDataImport(parquet_path+'/merged_cvr.parquet')
data_import.select_columns()
from pyspark.sql import functions as F
train_df, test_df = (data_import
.data_frame
.filter(F.col('label') < 2.0)
.randomSplit([0.66, 0.33])
)
#print(data_import.list_features)
print('Number of training points are {}'.format(train_df.count()))
print('Number of test points are {}'.format(test_df.count()))
train_df.limit(5).toPandas()
#train_df.printSchema()
selector = create_params.ParamsClassification()
params = selector.select_parameters()
display(params)
parameter_dict = selector.output_parameters(params)
parameter_dict
model = ExecuteWorkflowClassification(
parameter_dict,
data_import.standardize,
data_import.list_features
)
from pyspark.ml.evaluation import BinaryClassificationEvaluator
result_model = model.pipeline.fit(train_df)
crossfitted_model = model.run_cross_val(
train_df,
BinaryClassificationEvaluator(),
3
)
#summary = fitted_data.bestModel.stages[-1].summary
df_no_cv_pipeline = (result_model.transform(test_df))
l = model.pipeline.getStages()[-1].getLabelCol()
p = model.pipeline.getStages()[-1].getPredictionCol()
df_confusion = df_no_cv_pipeline.groupBy([l,p]).count()
df_confusion.toPandas()
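# Illustrative sketch (added): overall accuracy can be read off the confusion
# counts by summing the diagonal (label == prediction) and dividing by the total.
correct = df_confusion.filter(F.col(l) == F.col(p)).agg(F.sum('count')).collect()[0][0]
total = df_confusion.agg(F.sum('count')).collect()[0][0]
print('Accuracy on the test set: {:.3f}'.format(correct / total))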
if crossfitted_model.bestModel.stages[-1].hasSummary:
fig, axes = plt.subplots(
nrows=2,
ncols=3,
figsize=(20, 14))
summary = crossfitted_model.bestModel.stages[-1].summary
print('The area under the curve is {}'.format(summary.areaUnderROC))
attributes = []
titles = ['F-measure by Threshold','Precision by Recall','Precision by Threshold', 'ROC', 'Recall by Threshold']
attributes.append(summary.fMeasureByThreshold.toPandas())
attributes.append(summary.pr.toPandas())
attributes.append(summary.precisionByThreshold.toPandas())
attributes.append(summary.roc.toPandas())
attributes.append(summary.recallByThreshold.toPandas())
#iterations = summary.totalIterations
jdx = 0
for idx, data_frame in enumerate(attributes):
if idx % 3 == 0 and idx != 0:
jdx+=1
ax = axes[jdx,idx % 3]
ax.plot(data_frame.columns[0],
data_frame.columns[1],
data=data_frame,
)
ax.legend()
ax.set_xlabel(data_frame.columns[0])
ax.set_ylabel(data_frame.columns[1])
ax.set_title(titles[idx])
plt.show()
from classification import ShowClassification
show_classification_attributes(crossfitted_model)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import data and select columns for id and features
Step2: Lets divide the data into a training- and test-set.
Step3: Select an algorithm and its parameters
Step4: For verification
Step5: Initilize the workflow class
Step6: Execute the pipeline in a K-fold cross-validation
Step7: Lets look at the ROC-curve
|
12,827
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
# Import the libraries we use
import numpy as np
import pandas as pd
import seaborn as sns
# Show the version of each library used
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))
# Open the csv file with the sample data
datos = pd.read_csv('M1.CSV')
# Store in a list the file columns we will work with
columns = ['Diametro X','Diametro Y']
# Show a summary of the data obtained
datos[columns].describe()
#datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']]
datos.ix[:, "Diametro X":"Diametro Y"].plot(figsize=(16,10),ylim=(0.5,3)).hlines([1.85,1.65],0,3500,colors='r')
#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')
datos.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')
datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]
#datos_filtrados.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')
ratio = datos_filtrados['Diametro X']/datos_filtrados['Diametro Y']
ratio.describe()
rolling_mean = pd.rolling_mean(ratio, 50)
rolling_std = pd.rolling_std(ratio, 50)
rolling_mean.plot(figsize=(12,6))
# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)
ratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))
Th_u = 1.85
Th_d = 1.65
data_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |
(datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]
data_violations.describe()
data_violations.plot(subplots=True, figsize=(12,12))
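# Illustrative sketch (added): share of samples that fall outside the quality band
# [Th_d, Th_u] defined above.
pct_out = 100.0 * len(data_violations) / len(datos)
print('Samples outside tolerance: {:.2f}%'.format(pct_out))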
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Representamos ambos diámetro y la velocidad de la tractora en la misma gráfica
Step2: Con esta segunda aproximación se ha conseguido estabilizar los datos. Se va a tratar de bajar ese porcentaje. Como cuarta aproximación, vamos a modificar las velocidades de tracción. El rango de velocidades propuesto es de 1.5 a 5.3, manteniendo los incrementos del sistema experto como en el actual ensayo.
Step3: Filtrado de datos
Step4: Representación de X/Y
Step5: Analizamos datos del ratio
Step6: Límites de calidad
|
12,828
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
tmp = graphlab.SArray([1., 2., 3.])
tmp_cubed = tmp.apply(lambda x: x**3)
print tmp
print tmp_cubed
ex_sframe = graphlab.SFrame()
ex_sframe['power_1'] = tmp
print ex_sframe
def polynomial_sframe(feature, degree):
    # assume that degree >= 1
    # initialize the SFrame:
    poly_sframe = graphlab.SFrame()
    # and set poly_sframe['power_1'] equal to the passed feature
    poly_sframe['power_1'] = feature
    # first check if degree > 1
    if degree > 1:
        # then loop over the remaining degrees:
        # range usually starts at 0 and stops at the endpoint-1. We want it to start at 2 and stop at degree
        for power in range(2, degree+1):
            # first we'll give the column a name:
            name = 'power_' + str(power)
            # then assign poly_sframe[name] to the appropriate power of feature
            poly_sframe[name] = feature.apply(lambda x: x**power)
    return poly_sframe
print polynomial_sframe(tmp, 3)
sales = graphlab.SFrame('kc_house_data.gl/')
sales = sales.sort('sqft_living')
poly1_data = polynomial_sframe(sales['sqft_living'], 1)
poly1_data['price'] = sales['price'] # add price to the data since it's the target
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = ['power_1'], validation_set = None)
#let's take a look at the weights before we plot
model1.get("coefficients")
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(poly1_data['power_1'],poly1_data['price'],'.',
poly1_data['power_1'], model1.predict(poly1_data),'-')
poly2_data = polynomial_sframe(sales['sqft_living'], 2)
my_features = poly2_data.column_names() # get the name of the features
poly2_data['price'] = sales['price'] # add price to the data since it's the target
model2 = graphlab.linear_regression.create(poly2_data, target = 'price', features = my_features, validation_set = None)
model2.get("coefficients")
plt.plot(poly2_data['power_1'],poly2_data['price'],'.',
poly2_data['power_1'], model2.predict(poly2_data),'-')
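# Illustrative sketch (added): the same pattern extends to higher degrees, for
# example a cubic fit built with polynomial_sframe(..., 3).
poly3_data = polynomial_sframe(sales['sqft_living'], 3)
my_features3 = poly3_data.column_names()
poly3_data['price'] = sales['price']
model3 = graphlab.linear_regression.create(poly3_data, target='price',
                                           features=my_features3, validation_set=None)
plt.plot(poly3_data['power_1'], poly3_data['price'], '.',
         poly3_data['power_1'], model3.predict(poly3_data), '-')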
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next we're going to write a polynomial function that takes an SArray and a maximal degree and returns an SFrame with columns containing the SArray to all the powers up to the maximal degree.
Step2: We can create an empty SFrame using graphlab.SFrame() and then add any columns to it with ex_sframe['column_name'] = value. For example we create an empty SFrame and make the column 'power_1' to be the first power of tmp (i.e. tmp itself).
Step3: Polynomial_sframe function
Step4: To test your function, consider the smaller tmp variable and think about what you would expect the outcome of the following call to be
Step5: Visualizing polynomial regression
Step6: For the rest of the notebook we'll use the sqft_living variable. For plotting purposes (connecting the dots) you'll need to sort by the values of sqft_living first
Step7: Let's start with a degree 1 polynomial using 'sqft_living' (i.e. a line) to predict 'price' and plot what it looks like.
Step8: NOTE
Step9: Let's unpack that plt.plot() command. The first pair of SArrays we passed are the 1st power of sqft and the actual price we then ask it to print these as dots '.'. The next pair we pass is the 1st power of sqft and the predicted values from the linear model. We ask these to be plotted as a line '-'.
|
12,829
|
<ASSISTANT_TASK:>
Python Code:
# NOTE: `hc` is assumed to be imported in an earlier cell (e.g. `import ezhc as hc`);
# numpy and pandas are used further below.
import numpy as np
import pandas as pd
df = hc.sample.df_timeseries(N=2, Nb_bd=15+0*3700) #<=473
df.info()
display(df.head())
display(df.tail())
g = hc.Highstock()
g.chart.width = 650
g.chart.height = 550
g.legend.enabled = True
g.legend.layout = 'horizontal'
g.legend.align = 'center'
g.legend.maxHeight = 100
g.tooltip.enabled = True
g.tooltip.valueDecimals = 2
g.exporting.enabled = True
g.chart.zoomType = 'xy'
g.title.text = 'Time series plotted with HighStock'
g.subtitle.text = 'Transparent access to the underlying js lib'
g.plotOptions.series.compare = 'percent'
g.yAxis.labels.formatter = hc.scripts.FORMATTER_PERCENT
g.tooltip.pointFormat = hc.scripts.TOOLTIP_POINT_FORMAT_PERCENT
g.tooltip.positioner = hc.scripts.TOOLTIP_POSITIONER_CENTER_TOP
g.xAxis.gridLineWidth = 1.0
g.xAxis.gridLineDashStyle = 'Dot'
g.yAxis.gridLineWidth = 1.0
g.yAxis.gridLineDashStyle = 'Dot'
g.credits.enabled = True
g.credits.text = 'Source: XXX Flow Strategy & Solutions.'
g.credits.href = 'http://www.example.com'
g.series = hc.build.series(df)
g.plot(save=False, version='6.1.2', center=True)
## IF BEHIND A CORPORATE PROXY
## IF NOT PROXY IS PASSED TO .plot() THEN NO HIGHCHARTS VERSION UPDATE IS PERFORMED
## HARDODED VERSIONS ARE USED INSTEAD
# p = hc.Proxy('mylogin', 'mypwd', 'myproxyhost', 'myproxyport')
# g.plot(save=False, version='latest', proxy=p)
options_as_dict = g.options_as_dict()
options_as_dict
options_as_json = g.options_as_json()
options_as_json
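# Illustrative sketch (added): the JSON options string above can be persisted, e.g.
# to rebuild the same chart configuration outside the notebook.
# with open('chart_options.json', 'w') as f:
#     f.write(options_as_json)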
df = hc.sample.df_timeseries(N=3, Nb_bd=2000)
df['Cash'] = 1.0+0.02/260
df['Cash'] = df['Cash'].cumprod()
display(df.head())
display(df.tail())
g = hc.Highstock()
g.chart.height = 550
g.legend.enabled = True
g.legend.layout = 'horizontal'
g.legend.align = 'center'
g.legend.maxHeight = 100
g.tooltip.enabled = True
g.tooltip.valueDecimals = 2
g.exporting.enabled = True
g.chart.zoomType = 'xy'
g.title.text = 'Time series plotted with HighStock'
g.subtitle.text = 'Transparent access to the underlying js lib'
g.plotOptions.series.compare = 'percent'
g.yAxis.labels.formatter = hc.scripts.FORMATTER_PERCENT
g.tooltip.pointFormat = hc.scripts.TOOLTIP_POINT_FORMAT_PERCENT
g.tooltip.positioner = hc.scripts.TOOLTIP_POSITIONER_CENTER_TOP
g.xAxis.gridLineWidth = 1.0
g.xAxis.gridLineDashStyle = 'Dot'
g.yAxis.gridLineWidth = 1.0
g.yAxis.gridLineDashStyle = 'Dot'
g.credits.enabled = True
g.credits.text = 'Source: XXX Flow Strategy & Solutions.'
g.credits.href = 'http://www.example.com'
g.series = hc.build.series(df, visible={'Track3': False})
g.plot(save=True, version='6.1.2', save_name='NoTable')
# g.plot_with_table_1(dated=False, version='6.1.2', save=True, save_name='Table1')
g.plotOptions.series.compare = 'value'
g.yAxis.labels.formatter = hc.scripts.FORMATTER_BASIC
g.tooltip.pointFormat = hc.scripts.TOOLTIP_POINT_FORMAT_BASIC
g.tooltip.formatter = hc.scripts.FORMATTER_QUANTILE
disclaimer = """
THE VALUE OF YOUR INVESTMENT MAY FLUCTUATE.
THE FIGURES RELATING TO SIMULATED PAST PERFORMANCES REFER TO PAST
PERIODS AND ARE NOT A RELIABLE INDICATOR OF FUTURE RESULTS.
THIS ALSO APPLIES TO HISTORICAL MARKET DATA.
"""
template_footer = hc.scripts.TEMPLATE_DISCLAIMER
create_footer = hc.scripts.from_template
logo_path = hc.scripts.PATH_TO_LOGO_SG
# logo_path = 'http://img.talkandroid.com/uploads/2015/11/Chrome-Logo.png'
# logo_path = hc.scripts.image_src('http://img.talkandroid.com/uploads/2015/11/Chrome-Logo.png')
footer = create_footer(template_footer, comment=disclaimer, img_logo=logo_path)
g.plot_with_table_2(dated=False, version='6.1.2', save=True, save_name='Table2', footer=footer)
df = hc.sample.df_one_idx_several_col()
df
g = hc.Highcharts()
g.chart.type = 'column'
g.chart.width = 500
g.chart.height = 300
# g.plotOptions.column.animation = False
g.title.text = 'Basic Bar Chart'
g.yAxis.title.text = 'Fruit Consumption'
g.xAxis.categories = list(df.index)
g.series = hc.build.series(df)
g.plot(center=True, save=True, version='6.1.2', save_name='test', dated=False)
g.plotOptions.column.stacking = 'normal'
g.title.text = 'Stack Bar Chart'
g.yAxis.title.text = 'Total Fruit Consumption'
g.plot(version='6.1.2')
g.plotOptions.column.stacking = 'percent'
g.yAxis.title.text = 'Fruit Consumption Distribution'
g.plot(version='6.1.2')
g = hc.Highcharts()
g.chart.type = 'bar'
g.chart.width = 500
g.chart.height = 400
g.title.text = 'Basic Bar Chart'
g.xAxis.title.text = 'Fruit Consumption'
g.xAxis.categories = list(df.index)
g.series = hc.build.series(df)
g.plot()
g.plotOptions.bar.stacking = 'normal'
g.title.text = 'Stacked Bar Chart'
g.xAxis.title.text = 'Total Fruit Consumption'
g.plot(version='6.1.2')
g.plotOptions.bar.stacking = 'percent'
g.title.text = 'Stacked Bar Chart'
g.xAxis.title.text = 'Fruit Consumption Distribution'
g.plot(version='6.1.2')
df = hc.sample.df_one_idx_one_col()
df
g = hc.Highcharts()
g.chart.type = 'pie'
g.chart.width = 400
g.chart.height = 400
gpo = g.plotOptions.pie
gpo.showInLegend = True
gpo.dataLabels.enabled = False
g.title.text = 'Browser Market Share'
g.series = hc.build.series(df)
g.plot(version='6.1.2')
g.chart.width = 400
g.chart.height = 300
gpo.showInLegend = False
gpo.dataLabels.enabled = True
gpo.startAngle = -90
gpo.endAngle = 90
gpo.innerSize = '40%'
gpo.center = ['50%', '95%']
g.plot(version='6.1.2')
df = hc.sample.df_two_idx_one_col()
df.head()
g = hc.Highcharts()
g.chart.type = 'pie'
g.chart.width = 500
g.chart.height = 500
g.exporting = False
gpo = g.plotOptions.pie
gpo.showInLegend = False
gpo.dataLabels.enabled = True
gpo.center = ['50%', '50%']
gpo.size = '65%'
g.drilldown.drillUpButton.position = {'x': 0, 'y': 0}
g.title.text = 'Browser Market Share'
g.series, g.drilldown.series = hc.build.series_drilldown(df)
g.plot(version='6.1.2')
g = hc.Highcharts()
g.chart.type = 'bar'
g.chart.width = 500
g.chart.height = 500
g.exporting = False
gpo = g.plotOptions.pie
gpo.showInLegend = False
gpo.dataLabels.enabled = True
gpo.center = ['50%', '50%']
gpo.size = '65%'
g.drilldown.drillUpButton.position = {'x': 0, 'y': 0}
g.title.text = 'Browser Market Share'
g.series, g.drilldown.series = hc.build.series_drilldown(df)
g.plot()
df = hc.sample.df_several_idx_one_col_2()
df.head()
df
# g = hc.Highcharts()
# g.chart.type = 'pie'
# g.chart.width = 500
# g.chart.height = 500
# g.exporting = False
# gpo = g.plotOptions.pie
# gpo.showInLegend = False
# gpo.dataLabels.enabled = True
# gpo.center = ['50%', '50%']
# gpo.size = '65%'
# g.drilldown.drillUpButton.position = {'x': 0, 'y': 0}
# g.title.text = 'World Population'
# g.series, g.drilldown.series = hc.build.series_drilldown(df, top_name='World')
# # g.plot(version='6.1.2')
df = hc.sample.df_one_idx_two_col()
df.head()
g = hc.Highcharts()
g.chart.type = 'columnrange'
g.chart.inverted = True
g.chart.width = 700
g.chart.height = 400
gpo = g.plotOptions.columnrange
gpo.dataLabels.enabled = True
gpo.dataLabels.formatter = 'function() { return this.y + "°C"; }'
g.tooltip.valueSuffix = '°C'
g.xAxis.categories, g.series = hc.build.series_range(df)
g.series[0]['name'] = 'Temperature'
g.yAxis.title.text = 'Temperature (°C)'
g.xAxis.title.text = 'Month'
g.title.text = 'Temperature Variations by Month'
g.subtitle.text = 'Vik, Norway'
g.legend.enabled = False
g.plot(save=True, save_name='index', version='6.1.2', dated=False, notebook=False)
df = hc.sample.df_scatter()
df.head()
g = hc.Highcharts()
g.chart.type = 'scatter'
g.chart.width = 700
g.chart.height = 500
g.chart.zoomType = 'xy'
g.exporting = False
g.plotOptions.scatter.marker.radius = 5
g.tooltip.headerFormat = '<b>Sex: {series.name}</b><br>'
g.tooltip.pointFormat = '{point.x} cm, {point.y} kg'
g.legend.layout = 'vertical'
g.legend.align = 'left'
g.legend.verticalAlign = 'top'
g.legend.x = 100
g.legend.y = 70
g.legend.floating = True
g.legend.borderWidth = 1
g.xAxis.title.text = 'Height (cm)'
g.yAxis.title.text = 'Weight (kg)'
g.title.text = 'Height Versus Weight of 507 Individuals by Gender'
g.subtitle.text = 'Source: Heinz 2003'
g.series = hc.build.series_scatter(df, color_column='Sex',
color={'Female': 'rgba(223, 83, 83, .5)',
'Male': 'rgba(119, 152, 191, .5)'})
g.plot(version='6.1.2')
df = hc.sample.df_scatter()
df['Tag'] = np.random.choice(range(int(1e5)), size=len(df), replace=False)
df.head()
g = hc.Highcharts()
g.chart.type = 'scatter'
g.chart.width = 700
g.chart.height = 500
g.chart.zoomType = 'xy'
g.exporting = False
g.plotOptions.scatter.marker.radius = 5
g.tooltip.headerFormat = '<b>Sex: {series.name}</b><br><b>Tag: {point.key}</b><br>'
g.tooltip.pointFormat = '{point.x} cm, {point.y} kg'
g.legend.layout = 'vertical'
g.legend.align = 'left'
g.legend.verticalAlign = 'top'
g.legend.x = 100
g.legend.y = 70
g.legend.floating = True
g.legend.borderWidth = 1
g.xAxis.title.text = 'Height (cm)'
g.yAxis.title.text = 'Weight (kg)'
g.title.text = 'Height Versus Weight of 507 Individuals by Gender'
g.subtitle.text = 'Source: Heinz 2003'
g.series = hc.build.series_scatter(df, color_column='Sex', title_column='Tag',
color={'Female': 'rgba(223, 83, 83, .5)',
'Male': 'rgba(119, 152, 191, .5)'})
g.plot(version='6.1.2')
df = hc.sample.df_bubble()
df.head()
g = hc.Highcharts()
g.chart.type = 'bubble'
g.chart.width = 700
g.chart.height = 500
g.chart.zoomType = 'xy'
g.plotOptions.bubble.minSize = 20
g.plotOptions.bubble.maxSize = 60
g.legend.enabled = True
g.title.text = 'Bubbles'
g.series = hc.build.series_bubble(df, color={'A': 'rgba(223, 83, 83, .5)', 'B': 'rgba(119, 152, 191, .5)'})
g.plot(version='6.1.2')
df = hc.sample.df_several_idx_one_col()
df.head()
colors = ['#7cb5ec', '#434348', '#90ed7d', '#f7a35c', '#8085e9',
'#f15c80', '#e4d354', '#2b908f', '#f45b5b', '#91e8e1']
points = hc.build.series_tree(df, set_color=True, colors=colors, set_value=True, precision=2)
points[:5]
g = hc.Highcharts()
g.chart.type = 'treemap'
g.chart.width = 900
g.chart.height = 600
g.title.text = 'Global Mortality Rate 2012, per 100 000 population'
g.subtitle.text = 'Click points to drill down.\nSource: \
<a href="http://apps.who.int/gho/data/node.main.12?lang=en">WHO</a>.'
g.exporting = False
g.series = [{
'type': "treemap",
'layoutAlgorithm': 'squarified',
'allowDrillToNode': True,
'dataLabels': {
'enabled': False
},
'levelIsConstant': False,
'levels': [{
'level': 1,
'dataLabels': {
'enabled': True
},
'borderWidth': 3
}],
'data': points,
}]
g.plot(version='6.1.2')
df = hc.sample.df_two_idx_one_col()
df.head()
points = hc.build.series_tree(df, set_total=True, name_total='Total',
set_color=False,
set_value=False, precision=2)
points[:5]
g = hc.Highcharts()
g.chart.type = 'sunburst'
g.title.text = 'Browser Market Share'
g.plotOptions.series.animation = True
g.chart.height = '80%'
g.chart.animation = True
g.exporting = False
g.tooltip = {
'headerFormat': "",
'pointFormat': '<b>{point.name}</b> Market Share is <b>{point.value:,.3f}</b>'
}
g.series = [{
'type': 'sunburst',
'data': points,
'allowDrillToNode': True,
'cursor': 'pointer',
'dataLabels': {
'format': '{point.name}',
'filter': {
'property': 'innerArcLength',
'operator': '>',
'value': 16
}
},
'levels': [{
'level': 2,
'colorByPoint': True,
'dataLabels': {
'rotationMode': 'parallel'
}
},
{
'level': 3,
'colorVariation': {
'key': 'brightness',
'to': -0.5
}
}, {
'level': 4,
'colorVariation': {
'key': 'brightness',
'to': 0.5
}
}]
}]
g.plot(version='6.1.2')
df = hc.sample.df_several_idx_one_col_2()
df.head()
points = hc.build.series_tree(df, set_total=True, name_total='World',
set_value=False, set_color=False, precision=0)
points[:5]
g = hc.Highcharts()
g.chart.type = 'sunburst'
g.chart.height = '90%'
g.chart.animation = True
g.title.text = 'World population 2017'
g.subtitle.text = 'Source: <a href="https://en.wikipedia.org/wiki/List_of_countries_by_population_(United_Nations)">Wikipedia</a>'
g.exporting = False
g.series = [{
'type': "sunburst",
'data': points,
'allowDrillToNode': True,
'cursor': 'pointer',
'dataLabels': {
'format': '{point.name}',
'filter': {
'property': 'innerArcLength',
'operator': '>',
'value': 16
}
},
'levels': [{
'level': 2,
'colorByPoint': True,
'dataLabels': {
'rotationMode': 'parallel'
}
},
{
'level': 3,
'colorVariation': {
'key': 'brightness',
'to': -0.5
}
}, {
'level': 4,
'colorVariation': {
'key': 'brightness',
'to': 0.5
}
}]
}]
g.plot(version='6.1.2')
df = pd.DataFrame(data=np.array([[8, 7, 6, 5, 4, 3, 2, 1],
[1, 2, 3, 4, 5, 6, 7, 8],
[1, 8, 2, 7, 3, 6, 4, 5]]).T,
columns=['column', 'line', 'area'])
df
g = hc.Highcharts()
g.chart.polar = True
g.chart.width = 500
g.chart.height = 500
g.title.text = 'Polar Chart'
g.pane.startAngle = 0
g.pane.endAngle = 360
g.pane.background = [{'backgroundColor': '#FFF',
'borderWidth': 0
}]
g.xAxis.tickInterval = 45
g.xAxis.min = 0
g.xAxis.max = 360
g.xAxis.labels.formatter = 'function() { return this.value + "°"; }'
g.yAxis.min = 0
g.plotOptions.series.pointStart = 0
g.plotOptions.series.pointInterval = 45
g.plotOptions.column.pointPadding = 0
g.plotOptions.column.groupPadding = 0
g.series = [{
'type': 'column',
'name': 'Column',
'data': list(df['column']),
'pointPlacement': 'between',
}, {
'type': 'line',
'name': 'Line',
'data': list(df['line']),
}, {
'type': 'area',
'name': 'Area',
'data': list(df['area']),
}
]
g.plot(version='6.1.2')
df = pd.DataFrame(data=np.array([[43000, 19000, 60000, 35000, 17000, 10000],
[50000, 39000, 42000, 31000, 26000, 14000]]).T,
columns=['Allocated Budget', 'Actual Spending'],
index = ['Sales', 'Marketing', 'Development', 'Customer Support',
'Information Technology', 'Administration'])
df
g = hc.Highcharts()
g.chart.polar = True
g.chart.width = 650
g.chart.height = 500
g.title.text = 'Budget vs. Spending'
g.title.x = -80
g.pane.size = '80%'
g.pane.background = [{'backgroundColor': '#FFF',
'borderWidth': 0
}]
g.xAxis.tickmarkPlacement = 'on'
g.xAxis.lineWidth = 0
g.xAxis.categories = list(df.index)
g.yAxis.min = 0
g.yAxis.lineWidth = 0
g.yAxis.gridLineInterpolation = 'polygon'
g.tooltip.pointFormat = '<span style="color:{series.color}">{series.name}: <b>${point.y:,.0f}</b><br/>'
g.tooltip.shared = True
g.legend.align = 'right'
g.legend.verticalAlign = 'top'
g.legend.y = 70
g.legend.layout = 'vertical'
g.series = [{
'name': 'Allocated Budget',
'data': list(df['Allocated Budget']),
'pointPlacement': 'on'
}, {
'name': 'Actual Spending',
'data': list(df['Actual Spending']),
'pointPlacement': 'on'
},
]
g.plot(version='6.1.2')
df = hc.sample.df_two_idx_several_col()
df.info()
display(df.head(10))
display(df.tail(10))
g = hc.Highcharts()
# g.chart.type = 'column'
g.chart.polar = True
g.plotOptions.series.animation = True
g.chart.width = 950
g.chart.height = 700
g.pane.size = '90%'
g.title.text = 'Perf (%) Contrib by Strategy & Period'
g.xAxis.type = 'category'
g.xAxis.tickmarkPlacement = 'on'
g.xAxis.lineWidth = 0
g.yAxis.gridLineInterpolation = 'polygon'
g.yAxis.lineWidth = 0
g.yAxis.plotLines = [{'color': 'gray', 'value': 0, 'width': 1.5}]
g.tooltip.pointFormat = '<span style="color:{series.color}">{series.name}: <b>{point.y:,.3f}%</b><br/>'
g.tooltip.shared = False
g.legend.enabled = True
g.legend.align = 'right'
g.legend.verticalAlign = 'top'
g.legend.y = 70
g.legend.layout = 'vertical'
# color names from http://www.w3schools.com/colors/colors_names.asp
# color rgba() codes from http://www.hexcolortool.com/
g.series, g.drilldown.series = hc.build.series_drilldown(df, colorByPoint=False,
color={'5Y': 'indigo'},
# color={'5Y': 'rgba(136, 110, 166, 1)'}
)
g.plot(save=True, save_name='ContribTable', version='6.1.2')
df_obs = pd.DataFrame(data=np.array([[760, 801, 848, 895, 965],
[733, 853, 939, 980, 1080],
[714, 762, 817, 870, 918],
[724, 802, 806, 871, 950],
[834, 836, 864, 882, 910]]),
index=list('ABCDE'))
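# Each row is already a five-number summary in ascending order (low, Q1, median, Q3, high),
# which is the per-category point format a Highcharts boxplot series expects.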
display(df_obs)
# x, y positions where 0 is the first category
df_outlier = pd.DataFrame(data=np.array([[0, 644],
[4, 718],
[4, 951],
[4, 969]]))
display(df_outlier)
colors = ['#7cb5ec', '#434348', '#90ed7d', '#f7a35c', '#8085e9',
'#f15c80', '#e4d354', '#2b908f', '#f45b5b', '#91e8e1']
g = hc.Highcharts()
g.chart.type = 'boxplot'
g.chart.width = 850
g.chart.height = 500
g.title.text = 'Box Plot Example'
g.legend.enabled = False
g.xAxis.categories = list(df_obs.index)
g.xAxis.title.text = 'Experiment'
g.yAxis.title.text = 'Observations'
g.yAxis.plotLines= [{
'value': 932,
'color': 'red',
'width': 1,
'label': {
'text': 'Theoretical mean: 932',
'align': 'center',
'style': { 'color': 'gray' }
}
}]
g.series = []
g.series.append({
'name': 'Observations',
'data': list(df_obs.values),
'tooltip': { 'headerFormat': '<em>Experiment No {point.key}</em><br/>' },
})
g.series.append({
'name': 'Outlier',
'color': colors[0],
'type': 'scatter',
'data': list(df_outlier.values),
'marker': {
'fillColor': 'white',
'lineWidth': 1,
'lineColor': colors[0],
},
'tooltip': { 'pointFormat': 'Observation: {point.y}' }
})
g.plot(version='6.1.2')
df = hc.sample.df_one_idx_several_col_2()
df
colors = ['#7cb5ec', '#434348', '#90ed7d', '#f7a35c', '#8085e9',
'#f15c80', '#e4d354', '#2b908f', '#f45b5b', '#91e8e1']
idx, col, data = hc.build.series_heatmap(df)
g = hc.Highcharts()
g.chart.type = 'heatmap'
g.chart.width = 650
g.chart.height = 450
g.title.text = 'Sales per employee per weekday'
g.xAxis.categories = idx
g.yAxis.categories = col
g.yAxis.title = ''
g.colorAxis = {
'min': 0,
'minColor': '#FFFFFF',
'maxColor': colors[0],
}
g.legend = {
'align': 'right',
'layout': 'vertical',
'margin': 0,
'verticalAlign': 'top',
'y': 25,
'symbolHeight': 280
}
g.tooltip = {
    'formatter': '''function () {
        return '<b>' + this.series.xAxis.categories[this.point.x] + '</b> sold <br><b>' +
            this.point.value + '</b> items on <br><b>' + this.series.yAxis.categories[this.point.y] + '</b>';
    }'''
}
g.series = []
g.series.append({
'name': 'Sales per Employee',
'borderWidth': 1,
'data': data,
'dataLabels': {
'enabled': True,
'color': '#000000',
}
})
g.plot(version='6.1.2')
g = hc.Highcharts()
g.yAxis.info()
g.yAxis.labels.format.info()
g = hc.Highstock()
g.plotOptions.info()
g = hc.Highcharts()
g.legend.align.info()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example 2
Step2: Example 3
Step4: Example 4
Step5: Column, Bar
Step6: Pie
Step7: Pie, Column Drilldown
Step8: Pie Drilldown - 3 levels
Step9: Column Range
Step10: Scatter - 1
Step11: Scatter - 2
Step12: Bubble
Step13: Treemap
Step14: Sunburst - 2 levels
Step15: Sunburst - 3 levels
Step16: Polar Chart
Step17: Spider Web
Step18: Spider Web DrillDown
Step19: Box Plot
Step21: Heatmap
Step22: Direct access to Highcharts/Highstock documentation
|
12,830
|
<ASSISTANT_TASK:>
Python Code:
def is_sorted(lst):
'''
Given a list of numbers, return whether or not they are sorted
in ascending order. If list has more than 1 duplicate of the same
number, return False. Assume no negative numbers and only integers.
Examples
is_sorted([5]) ➞ True
is_sorted([1, 2, 3, 4, 5]) ➞ True
is_sorted([1, 3, 2, 4, 5]) ➞ False
is_sorted([1, 2, 3, 4, 5, 6]) ➞ True
is_sorted([1, 2, 3, 4, 5, 6, 7]) ➞ True
is_sorted([1, 3, 2, 4, 5, 6, 7]) ➞ False
is_sorted([1, 2, 2, 3, 3, 4]) ➞ True
is_sorted([1, 2, 2, 2, 3, 4]) ➞ False
'''
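    # Tally how many times each value appears; any value occurring more than twice
    # means more than one duplicate, which the spec disallows.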
count_digit = dict([(i, 0) for i in lst])
for i in lst:
count_digit[i]+=1
if any(count_digit[i] > 2 for i in lst):
return False
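    # With duplicate counts within bounds, the list qualifies iff it is non-decreasing.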
if all(lst[i-1] <= lst[i] for i in range(1, len(lst))):
return True
else:
return False
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
12,831
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import os as os
import preprocessing_helper as preprocessing_helper
import matplotlib as plt
% matplotlib inline
filename = "train_users_2.csv"
folder = 'data'
fileAdress = os.path.join(folder, filename)
df = pd.read_csv(fileAdress)
df.head()
df.isnull().any()
df = preprocessing_helper.cleanAge(df,'k')
preprocessing_helper.plotAge(df)
df = preprocessing_helper.cleanGender(df)
preprocessing_helper.plotGender(df)
df = preprocessing_helper.cleanFirst_affiliate_tracked(df)
df = preprocessing_helper.cleanDate_First_booking(df)
preprocessing_helper.plotDate_First_booking_years(df)
preprocessing_helper.plotDate_First_booking_months(df)
preprocessing_helper.plotDate_First_booking_weekdays(df)
filename = "cleaned_train_user.csv"
folder = 'cleaned_data'
fileAdress = os.path.join(folder, filename)
preprocessing_helper.saveFile(df, fileAdress)
# extract file
filename = "test_users.csv"
folder = 'data'
fileAdress = os.path.join(folder, filename)
df = pd.read_csv(fileAdress)
# process file
df = preprocessing_helper.cleanAge(df,'k')
df = preprocessing_helper.cleanGender(df)
df = preprocessing_helper.cleanFirst_affiliate_tracked(df)
# save file
filename = "cleaned_test_user.csv"
folder = 'cleaned_data'
fileAdress = os.path.join(folder, filename)
preprocessing_helper.saveFile(df, fileAdress)
filename = "countries.csv"
folder = 'data'
fileAdress = os.path.join(folder, filename)
df = pd.read_csv(fileAdress)
df
df.describe()
filename = "age_gender_bkts.csv"
folder = 'data'
fileAdress = os.path.join(folder, filename)
df = pd.read_csv(fileAdress)
df.head()
df_country = df.groupby(['country_destination'],as_index=False).sum()
df_country
filename = "sessions.csv"
folder = 'data'
fileAdress = os.path.join(folder, filename)
df = pd.read_csv(fileAdress)
df.head()
df.isnull().any()
df = preprocessing_helper.cleanSubset(df, 'user_id')
df['secs_elapsed'].fillna(-1, inplace = True)
df = preprocessing_helper.cleanAction(df)
df.isnull().any()
# Get total number of action per user_id
data_session_number_action = preprocessing_helper.createActionFeature(df)
# Save to .csv file
filename = "total_action_user_id.csv"
folder = 'cleaned_data'
fileAdress = os.path.join(folder, filename)
preprocessing_helper.saveFile(data_session_number_action, fileAdress)
# Plot distribution total number of action per user_id
preprocessing_helper.plotActionFeature(data_session_number_action)
preprocessing_helper.plotHist(df['device_type'])
# Get Time spent on average per user_id
data_time_mean = preprocessing_helper.createAverageTimeFeature(df)
# Save to .csv file
data_time_mean = data_time_mean.rename(columns={'user_id': 'id'})
filename = "time_mean_user_id.csv"
folder = 'cleaned_data'
fileAdress = os.path.join(folder, filename)
preprocessing_helper.saveFile(data_time_mean, fileAdress)
# Plot distribution average time of session per user_id
preprocessing_helper.plotTimeFeature(data_time_mean['secs_elapsed'],'mean')
# Get Time spent in total per user_id
data_time_total = preprocessing_helper.createTotalTimeFeature(df)
# Save to .csv file
data_time_total = data_time_total.rename(columns={'user_id': 'id'})
filename = "time_total_user_id.csv"
folder = 'cleaned_data'
fileAdress = os.path.join(folder, filename)
preprocessing_helper.saveFile(data_time_total, fileAdress)
# Plot distribution total time of session per user_id
preprocessing_helper.plotTimeFeature(data_time_total['secs_elapsed'],'total')
preprocessing_helper.plotTimeFeature(df['secs_elapsed'],'dist')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Data exploration and cleaning
Step2: There are missing values in the columns
Step3: Ages
Step4: The following graph presents the distribution of ages in the dataset. Also, the irrelevant ages are represented here, with their value of -1.
Step5: Gender
Step6: first_affiliate_tracked feature
Step7: Date_first_booking
Step8: This histogram shows that the bookings are spread fairly evenly over the year. Far fewer bookings are made during November and December, while May and June are the months in which users book the most. For these two months Airbnb counts more than 20,000 bookings, which corresponds to almost a quarter of the bookings in our dataset.
Step9: As for the day on which most accounts are created, it seems that Tuesdays and Wednesdays are the days when people book the most apartments on Airbnb.
Step10: Save cleaned and explored file
Step11: 1.2 file 'test_user.csv'
Step12: 1.3 file 'countries.csv'
Step13: 1.4 file 'age_gender_bkts.csv'
Step14: Population total per country
Step15: 1.5 file 'sessions.csv'
Step16: NaN users
Step17: Invalid session time
Step18: Actions
Step19: As shown in the following, there are no more NaN values.
Step20: Total number of actions per user
Step21: Device types
Step22: Time spent on average per user
Step23: Time spent in total per user
Step24: Distribution of time spent
|
12,832
|
<ASSISTANT_TASK:>
Python Code:
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print('Training data shape: ', X_train.shape)
print('Training labels shape: ', y_train.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print(X_train.shape, X_test.shape)
from cs231n.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print(dists.shape)
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
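# (A common fully vectorized formulation expands ||a - b||^2 = ||a||^2 - 2*a.b + ||b||^2, e.g.
#  np.sqrt(np.sum(X**2, axis=1)[:, None] + np.sum(X_train**2, axis=1)[None, :] - 2 * X.dot(X_train.T)),
#  with X_train being self.X_train inside the classifier.)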
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
# Let's compare how fast the implementations are
def time_function(f, *args):
  '''Call a function f with args and return the time (in seconds) that it took to execute.'''
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print('Two loop version took %f seconds' % two_loop_time)
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print('One loop version took %f seconds' % one_loop_time)
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print('No loop version took %f seconds' % no_loop_time)
# you should see significantly faster performance with the fully vectorized implementation
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
#pass
X_train_folds = np.array_split(X_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
#pass
for k in k_choices:
inner_accuracies = np.zeros(num_folds)
for i in range(num_folds):
X_sub_train = np.concatenate(np.delete(X_train_folds, i, axis=0))
y_sub_train = np.concatenate(np.delete(y_train_folds, i, axis=0))
print(X_sub_train.shape,y_sub_train.shape)
X_sub_test = X_train_folds[i]
y_sub_test = y_train_folds[i]
print(X_sub_test.shape,y_sub_test.shape)
classifier = KNearestNeighbor()
classifier.train(X_sub_train, y_sub_train)
dists = classifier.compute_distances_no_loops(X_sub_test)
pred_y = classifier.predict_labels(dists, k)
num_correct = np.sum(y_sub_test == pred_y)
        inner_accuracies[i] = float(num_correct)/X_sub_test.shape[0]
    # Store the per-fold accuracies; the reporting and plotting cells below expect a list per k.
    k_to_accuracies[k] = list(inner_accuracies)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print('k = %d, accuracy = %f' % (k, accuracy))
X_train_folds = np.array_split(X_train, 5)
t = np.delete(X_train_folds, 1,axis=0)
print(X_train_folds)
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 1
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps
Step2: Inline Question #1
Step3: You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5
Step5: You should expect to see a slightly better performance than with k = 1.
Step6: Cross-validation
|
12,833
|
<ASSISTANT_TASK:>
Python Code:
import csv
from scipy.stats import kurtosis
from scipy.stats import skew
from scipy.spatial import Delaunay
import numpy as np
import math
import skimage
import matplotlib.pyplot as plt
import seaborn as sns
from skimage import future
import networkx as nx
%matplotlib inline
# Read in the data
data = open('../data/data.csv', 'r').readlines()
fieldnames = ['x', 'y', 'z', 'unmasked', 'synapses']
reader = csv.reader(data)
reader.next()
rows = [[int(col) for col in row] for row in reader]
# These will come in handy later
sorted_x = sorted(list(set([r[0] for r in rows])))
sorted_y = sorted(list(set([r[1] for r in rows])))
sorted_z = sorted(list(set([r[2] for r in rows])))
a = np.array(rows)
b = np.delete(a, np.s_[3::],1)
# Separate layers - have to do some wonky stuff to get this to work
b = sorted(b, key=lambda e: e[1])
b = np.array([v.tolist() for v in b])
b = np.split(b, np.where(np.diff(b[:,1]))[0]+1)
graphs = []
centroid_list = []
for layer in b:
centroids = np.array(layer)
# get rid of the y value - not relevant anymore
centroids = np.delete(centroids, 1, 1)
centroid_list.append(centroids)
graph = Delaunay(centroids)
graphs.append(graph)
def get_d_edge_length(edge):
(x1, y1), (x2, y2) = edge
return math.sqrt((x2-x1)**2 + (y2-y1)**2)
edge_length_list = [[]]
tri_area_list = [[]]
for del_graph in graphs:
tri_areas = []
edge_lengths = []
triangles = []
for t in centroids[del_graph.simplices]:
triangles.append(t)
a, b, c = [tuple(map(int,list(v))) for v in t]
edge_lengths.append(get_d_edge_length((a,b)))
edge_lengths.append(get_d_edge_length((a,c)))
edge_lengths.append(get_d_edge_length((b,c)))
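        # NOTE: Triangle is never imported in this notebook (it presumably refers to
        # sympy.geometry.Triangle); without that import the bare except below silently
        # skips every area computation, leaving tri_areas empty.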
try:
tri_areas.append(float(Triangle(a,b,c).area))
except:
continue
edge_length_list.append(edge_lengths)
tri_area_list.append(tri_areas)
np.subtract(centroid_list[0], centroid_list[1])
real_volume = np.zeros((len(sorted_y), len(sorted_x), len(sorted_z)))
for r in rows:
real_volume[sorted_y.index(r[1]), sorted_x.index(r[0]), sorted_z.index(r[2])] = r[-1]
np.shape(real_volume)
# point = tuple containing index of point (position)
# returns list of neighbors in [north, east, south, west]
def get_neighbors(point, image):
shape = np.shape(image)
neighbors = []
# North
neighbors.append((point[0], point[1]-1)) if point[1]>0 else neighbors.append(None)
# East
neighbors.append((point[0]+1, point[1])) if point[0]<shape[0]-1 else neighbors.append(None)
# South
neighbors.append((point[0], point[1]+1)) if point[1]<shape[1]-1 else neighbors.append(None)
# West
neighbors.append((point[0]-1, point[1])) if point[0]>0 else neighbors.append(None)
return neighbors
# calculates weights between nodes
# weight defined as inverse absolute distance
def get_weights_nonlinear(image, point, neighbors):
weights = []
for neigh in neighbors:
if neigh != None:
weight = 1/(abs(image[point] - image[neigh])+1)
weights.append(weight)
return weights
# calculates weights between nodes
# weight scaled and linear
# TODO: Explain weighting difference with math
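# In the nonlinear scheme the weight is 1/(|I(p) - I(q)| + 1): identical neighbors get
# weight 1 and the weight decays hyperbolically with the intensity difference.
# In the linear scheme the weight is 1 - |I(p) - I(q)|/(max_diff - min_diff): it also
# starts at 1 for identical neighbors but decreases linearly, scaled by the observed
# range of differences in the image.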
def get_weights_linear(image, point, neighbors, scale_factor):
weights = []
for neigh in neighbors:
if neigh != None:
diff = abs(image[point] - image[neigh])
weight = 1 - (scale_factor*diff)
weights.append(weight)
return weights
image = real_volume[1]
# print image
point = (1,1)
neighbors = get_neighbors(point, image)
# print neighbors
ws = get_weights_nonlinear(image, point, neighbors)
def populate_graph(G, im):
nodes_to_add = []
for x in range(np.shape(im)[0]):
for y in range(np.shape(im)[1]):
nodes_to_add.append((x,y))
G.add_nodes_from(nodes_to_add)
def get_diff_range(image):
diffs = []
x = 0
for col in image:
y = 0
for pix in col:
point = (x,y)
neighs = get_neighbors(point, image)
for neigh in neighbors:
if neigh != None:
diffs.append(abs(image[point] - image[neigh]))
y+=1
x+=1
return (max(diffs), min(diffs))
def generate_rag(im, linear):
G=nx.Graph()
if linear == True:
(max_diff, min_diff) = get_diff_range(im)
scale_factor = 1/(max_diff - min_diff)
x = 0
for col in im:
y = 0
for pix in col:
point = (x,y)
neighs = get_neighbors(point, im)
if linear == True:
weights = get_weights_linear(im, point, neighs, scale_factor)
else:
weights = get_weights_nonlinear(im, point, neighs)
to_add = []
which = 0
for neigh in neighs:
if neigh != None:
to_add.append((point, neigh, weights[which]))
which+=1
# print to_add
G.add_weighted_edges_from(to_add)
y+=1
x+=1
return G
test_im = real_volume[1]
shape = np.shape(test_im)
ragu = generate_rag(test_im, False)
ragu.number_of_edges()
# ragu.adjacency_list()
y_rags = []
for layer in real_volume:
y_rags.append(generate_rag(layer, False))
num_edges = []
for rag in y_rags:
num_edges.append(rag.number_of_edges())
sns.barplot(x=range(len(num_edges)), y=num_edges)
sns.plt.show()
# for rag in y_rags:
# plt.figure()
# nx.draw(rag, node_size=100)
def get_edge_weight_distributions(rags):
distributions = []
for rag in rags:
itty = rag.edges_iter()
weight_list = []
for index in range(rag.number_of_edges()):
eddy = itty.next()
weight_list.append(rag.get_edge_data(eddy[0], eddy[1])['weight'])
distributions.append(weight_list)
return distributions
distributions = get_edge_weight_distributions(y_rags)
count = 0
for distr in distributions:
plt.hist(distr, bins=150)
plt.title("Layer " + str(count) + " Edge Weight Histogram")
plt.show()
count+=1
y_edge_means = []
for distrib in distributions:
y_edge_means.append(np.mean(distrib))
print y_edge_means
sns.barplot(x=range(len(y_edge_means)), y=y_edge_means)
sns.plt.show()
y_edge_vars = []
for distrib in distributions:
y_edge_vars.append(np.var(distrib))
print y_edge_vars
sns.barplot(x=range(len(y_edge_vars)), y=y_edge_vars)
sns.plt.show()
y_edge_skews = []
for distrib in distributions:
y_edge_skews.append(skew(distrib))
print y_edge_skews
sns.barplot(x=range(len(y_edge_skews)), y=y_edge_skews)
sns.plt.show()
y_edge_kurts = []
for distrib in distributions:
y_edge_kurts.append(kurtosis(distrib))
print y_edge_kurts
sns.barplot(x=range(len(y_edge_kurts)), y=y_edge_kurts)
sns.plt.show()
y_rags_linear_weight = []
for layer in real_volume:
y_rags_linear_weight.append(generate_rag(layer, True))
test_rag = generate_rag(real_volume[4], True)
itty = test_rag.edges_iter()
weight_list = []
for index in range(test_rag.number_of_edges()):
eddy = itty.next()
weight_list.append(test_rag.get_edge_data(eddy[0], eddy[1])['weight'])
distributions_lin = get_edge_weight_distributions(y_rags_linear_weight)
y_edge_linear_means = []
for distrib in distributions_lin:
y_edge_linear_means.append(np.mean(distrib))
sns.barplot(x=range(len(y_edge_linear_means)), y=y_edge_linear_means)
sns.plt.show()
y_edge_linear_vars = []
for distrib in distributions_lin:
y_edge_linear_vars.append(np.var(distrib))
sns.barplot(x=range(len(y_edge_linear_vars)), y=y_edge_linear_vars)
sns.plt.show()
y_edge_linear_skews = []
for distrib in distributions_lin:
y_edge_linear_skews.append(skew(distrib))
sns.barplot(x=range(len(y_edge_linear_skews)), y=y_edge_linear_skews)
sns.plt.show()
y_edge_linear_kurts = []
for distrib in distributions_lin:
y_edge_linear_kurts.append(kurtosis(distrib))
sns.barplot(x=range(len(y_edge_linear_kurts)), y=y_edge_linear_kurts)
sns.plt.show()
num_self_loops = []
for rag in y_rags:
num_self_loops.append(rag.number_of_selfloops())
num_self_loops
# y_rags[0].adjacency_list()
# Test Data
test = np.array([[1,2],[3,4]])
test_rag = skimage.future.graph.RAG(test)
test_rag.adjacency_list()
real_volume_x = np.zeros((len(sorted_x), len(sorted_y), len(sorted_z)))
for r in rows:
real_volume_x[ sorted_x.index(r[0]), sorted_y.index(r[1]), sorted_z.index(r[2])] = r[-1]
x_rags = []
count = 0;
for layer in real_volume_x:
count = count + 1
x_rags.append(skimage.future.graph.RAG(layer))
num_edges_x = []
for rag in x_rags:
num_edges_x.append(rag.number_of_edges())
sns.barplot(x=range(len(num_edges_x)), y=num_edges_x)
sns.plt.show()
plt.imshow(np.amax(real_volume, axis=2), interpolation='nearest')
plt.show()
# edge_length_list[3]
# tri_area_list[3]
# triangles
# Note for future
# del_features['d_edge_length_mean'] = np.mean(edge_lengths)
# del_features['d_edge_length_std'] = np.std(edge_lengths)
# del_features['d_edge_length_skew'] = scipy.stats.skew(edge_lengths)
# del_features['d_edge_length_kurtosis'] = scipy.stats.kurtosis(edge_lengths)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We'll start by looking at the analysis in Euclidean space, and think about weighting by synaptic density later. Since we hypothesize that our data will show that the tissue varies as we move down the y-axis (z-axis in the brain) through cortical layers, an interesting thing to do would be to compare properties of the graphs in each layer (i.e., how does graph connectivity vary as we move through the layers?).
Step2: Now that our data is in the right format, we'll create 52 Delaunay graphs. Then we'll perform analyses on these graphs. A simple but useful metric would be to analyze the edge-length distributions in each layer.
Step3: We're going to need a method to get edge lengths from 2D centroid pairs.
Step4: Realizing after all this that location alone is useless: we know the voxels are evenly spaced, which means our edge-length data will all be the same. See that the "centroids" are no different.
Step5: There is no distance between the two. Therefore it is perhaps more useful to consider a graph that takes node weights into account. Voronoi is dual to Delaunay, so that's not much of an option. We want something that considers both spatial location and density similarity.
Step6: Handrolling Own RAG generator
Step7: The following method is for later
Step8: Testing the RAG generator
Step9: Creating RAGs for each layer
Step10: OK, great! Now we have a list of 52 region adjacency graphs for each y-layer. Now we want to measure properties of those graphs and see how the properties vary in the y direction - through what we hypothesize are the cortical layers.
Step11: Drawing Graphs
Step12: This is using the spring layout, so we're losing positional information. We can improve the plot by adding position information.
Step13: Full edge weight histograms down y-axis
Step14: Mean edge weights
Step15: Edge Weight Variances
Step16: Edge Weight Third Moments (Skewness)
Step17: Edge Weight Fourth Moments (Kurtosis)
Step18: Hmmm...very interesting
Step19: Linear Edge Weight Means
Step20: Linear Edge Weight Variance
Step21: Linear Edge Weight Skewness
Step22: Linear Edge Weight Kurtosis
Step23: Number of Self Loops
Step24: Interesting. There are no self loops. Why would this be? Let's come back to this. In the meantime, I want to give some thought to what it means to have a self loop, whether it should be theoretically possible given our data, and whether our graphs are formed properly.
Step25: Compare that to the test data
Step26: X-Layers
Step27: We can see here that the number of edges is low in the area that does not have many synapses. As expected, it mirrors the distribution of synapses. It appears to be approximately uniform at the top, with buffers of very few synapses on the sides. Remember from here
|
12,834
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
from scipy.optimize import fmin
from scipy.linalg import cholesky, cho_solve, inv
#np.set_printoptions(formatter={'float': '{: 0.4f}'.format})
%matplotlib inline
%load_ext autoreload
%autoreload 2
def get_kernel(X1,X2,sigmaf,l,sigman):
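    # Squared-exponential (RBF) kernel with signal variance sigmaf**2 and length scale l;
    # the noise variance sigman**2 is added to the diagonal entries (i == j) only.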
k = lambda x1,x2,sigmaf,l,sigman:(sigmaf**2)*np.exp(-(1/float(2*(l**2)))*np.dot((x1-x2),(x1-x2).T)) + (sigman**2);
K = np.zeros((X1.shape[0],X2.shape[0]))
for i in range(0,X1.shape[0]):
for j in range(0,X2.shape[0]):
if i==j:
K[i,j] = k(X1[i,:],X2[j,:],sigmaf,l,sigman);
else:
K[i,j] = k(X1[i,:],X2[j,:],sigmaf,l,0);
return K
n1 = 100; n2 = 60; # number of elements in each class
S1 = np.eye(2); S2 = np.array([[1,0.95],[0.95,1]])
m1 = np.array([0.75, 0]);m2 = np.array([-0.75, 0])
x1 = np.random.multivariate_normal(m1,S1,n1); y1 = -1*np.ones([n1,1]);
x2 = np.random.multivariate_normal(m2,S2,n2); y2 = np.ones([n2,1]);
x = np.concatenate((x1[0:50,:], x2[0:30,:]), axis=0);
y = np.concatenate((y1[0:50,:], y2[0:30,:]), axis=0);
x_test = np.concatenate((x1[50:,:], x2[30:,:]), axis=0);
y_test = np.concatenate((y1[50:,:], y2[30:,:]), axis=0);
plt.plot(x[y[:,0]==1,0], x[y[:,0]==1,1], 'r+', x[y[:,0]==-1,0], x[y[:,0]==-1,1], 'b+')
plt.plot(x_test[y_test[:,0]==1,0], x_test[y_test[:,0]==1,1], 'r+', x_test[y_test[:,0]==-1,0], x_test[y_test[:,0]==-1,1], 'b+')
plt.axis([-4, 4, -4, 4])
plt.show()
sigmaf = 1; l = 1; sigman = 1;# non optimum values
K = get_kernel(x,x,sigmaf,l,sigman);
# K = K + np.finfo(float).eps*np.eye(x.shape[0]);#for stability
K = nearestSPD(K);
f = np.zeros_like(y);
old_log_marg = np.inf; new_log_marg = 0;
it = 0; maxit=10000;
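# The "3.16"/"3.18" comments below appear to reference equations in Rasmussen & Williams,
# "Gaussian Processes for Machine Learning"; this loop is essentially their Algorithm 3.1,
# a Newton iteration for the mode of the Laplace-approximated posterior.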
for it in range(maxit): #newton iterations
npf = norm.pdf(f); cpyf = norm.cdf(y*f);
dlp = y*npf/cpyf #3.16
d2lp = -npf**2/cpyf**2 - y*f*npf/cpyf # 3.16
W = -1.*d2lp; sW = np.sqrt(W)
B = np.eye(x.shape[0])+sW*K*sW #B=I+sqrt(W)*K*sqrt(W)
#B = make_PD(B)
B = nearestSPD(B)
L = cholesky(B, lower=True)
b = W*f + dlp; # 3.18 part 1
a = b - sW * cho_solve((L, True),np.dot(sW*K,b)) # b - sW.*(L'\(L\(sW.*K*b)))
f = K.dot(a)
new_log_marg = -0.5 * a.T.dot(f) \
+ np.log(norm.cdf(y*f)).sum() \
- np.log(np.diag(L)).sum()
if it%10==0:
print '%e'%abs(new_log_marg - old_log_marg)
if abs(new_log_marg - old_log_marg) < 1e-3: # need to adapt step sizes
print('FOUND! %f at %d iteration!'%(abs(new_log_marg - old_log_marg),it))
break
else:
old_log_marg = new_log_marg;
fhat = f
npf = norm.pdf(fhat); cpyf = norm.cdf(y*fhat);
lp = np.log(cpyf);
dlp = y*npf/cpyf #3.16
d2lp = -npf**2/cpyf**2 - y*f*npf/cpyf # 3.16
W = -1.*d2lp; sW = np.sqrt(W)
B = np.eye(x.shape[0])+sW*K*sW #B=I+sqrt(W)*K*sqrt(W)
#B = make_PD(B)
B = nearestSPD(B)
L = cholesky(B, lower=True)
K_s = get_kernel(x_test, x, sigmaf, l, sigman);
K_ss = get_kernel(x_test, x_test, sigmaf, l, sigman);
fs = np.dot(K_s.T,dlp);
v = np.dot(inv(L),(sW*K_s));
y_test_var = np.diag(K_ss - np.dot(v.T,v));# uncertainty
y_test_mean = norm.cdf(np.real(np.divide(fs[:,0],np.sqrt(1+y_test_var))));# best prediction
plt.plot(x[y[:,0]==1,0], x[y[:,0]==1,1], 'r+', x[y[:,0]==-1,0], x[y[:,0]==-1,1], 'b+')
plt.plot(x_test[y_test[:,0]==1,0], x_test[y_test[:,0]==1,1], 'r+', x_test[y_test[:,0]==-1,0], x_test[y_test[:,0]==-1,1], 'b+')
plt.plot(x_test[y_test_mean>=0.5,0], x_test[y_test_mean>=0.5,1], 'o', markerfacecolor='None',markeredgecolor='r');
plt.plot(x_test[y_test_mean<0.5,0], x_test[y_test_mean<0.5,1], 'o', markerfacecolor='None',markeredgecolor='b');
plt.title('Gaussian Process Classification Results')
plt.axis([-4, 4, -4, 4])
plt.show()
def make_PD(A):
# gets a matrix and turns it into positive definit
for i in range(100):
print i,
try:
L = cholesky(A, lower=True);
print('positive def at %d iters\n'%i);
return A
except:
A = A + np.finfo(float).eps*np.eye(A.shape[0]);#for stability
continue
return A
from numpy.linalg import svd, eig
from numpy.linalg import cholesky as chol
def nearestSPD(A):
r,c = A.shape;
B = (A + A.T)/2;
[U,Sigma,V] = svd(B);
Sigma = Sigma*np.eye(Sigma.size);
V = V.T
H = np.dot(V,np.dot(Sigma,V.T));
Ahat = (B+H)/2;
Ahat = (Ahat + Ahat.T)/2;
# test that Ahat is in fact PD. if it is not so, then tweak it just a bit.
k = 0;
while 1:
k = k + 1;
try:
cholesky(Ahat, lower=True);
return Ahat
except:
mineig = eig(Ahat)[0].min().real
Ahat = Ahat + (-1*mineig*k**2 + np.spacing(mineig))*np.eye(A.shape[0]);
return Ahat
#an implementation of doi:10.1016/0024-3795(88)90223-6
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will define the kernel function here
Step2: Predictive Gaussian parameter finding (with Gaussian cumulative likelihood)
|
12,835
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt
import numpy as np
from scipy.linalg import hadamard
from scipy.fftpack import dct
%matplotlib inline
n = 10 #dimension of data (rows in plot)
K = 3 #number of centroids
m = 4 #subsampling dimension
p = 6 #number of observations (columns in plot)
np.random.seed(0)
DPI = 300 #figure DPI for saving
def this_is_dumb(x):
Surely there's a better way but this works. Permute X
y = np.copy(x)
np.random.shuffle(y)
return y
## Preconditioning plot
# Unconditioned
vals_unconditioned = [i for i in range(-5,5)]
X_unconditioned = np.array([this_is_dumb(vals_unconditioned) for i in range(p)]).T
# Conditioned
D = np.diag(np.random.choice([-1,1],n))
X_conditioned = dct(np.dot(D,X_unconditioned), norm = 'ortho')
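# D is a random +/-1 diagonal (sign flips); followed by an orthonormal DCT this is the
# usual randomized-preconditioning trick of mixing coordinates so no single entry dominates.
# Note that scipy's dct transforms along the last axis by default.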
## Subsampling plots
# Define the entries to set X
vals = [1 for i in range(m)]
vals.extend([0 for i in range(n-m)])
# Define X by permuting the values.
X = np.array([this_is_dumb(vals) for i in range(p)]).T
# means matrix
U = np.zeros((n,K))
# This is used to plot the full data in X (before subsampling)
Z = np.zeros_like(X)
# Generate two copies of X, one to plot just the column in question (YC) and one to plot the others (YO)
def get_col_X(col):
YO = np.copy(X)
YO[:,col-1] = -1
YC = - np.ones_like(X)
YC[:,col-1] = X[:,col-1]
return [YO,YC]
# Generate a copy of U modified to plot the rows selected by the column we chose of X
def get_rows_U(col):
US = np.copy(U)
US[np.where(X[:,col-1]==1)[0],:]=1
return US
def read_colors(path_in):
    '''Crappy little function to read in the text file defining the colors.'''
mycolors = []
with open(path_in) as f_in:
lines = f_in.readlines()
for line in lines:
line = line.lstrip()
if line[0:5] == 'shade':
mycolors.append(line.split("=")[1].strip())
return mycolors
CM = read_colors('CM.txt')
CA = read_colors('CA.txt')
CD = ['#404040','#585858','#989898']
# Set the axes colors
mpl.rc('axes', edgecolor = CD[0], linewidth = 1.3)
# Set up the colormaps and bounds
cmapM = mpl.colors.ListedColormap(['none', CM[1], CM[3]])
cmapA = mpl.colors.ListedColormap(['none', CA[1], CA[4]])
bounds = [-1,0,1,2]
normM = mpl.colors.BoundaryNorm(bounds, cmapM.N)
normA = mpl.colors.BoundaryNorm(bounds, cmapA.N)
bounds_unconditioned = [i for i in range(-5,6)]
cmap_unconditioned = mpl.colors.ListedColormap(CA[::-1] + CM)
norm_unconditioned = mpl.colors.BoundaryNorm(bounds_unconditioned, cmap_unconditioned.N)
def drawbrackets(ax):
Way hacky. Draws the brackets around X.
ax.annotate(r'$n$ data points', xy=(0.502, 1.03), xytext=(0.502, 1.08), xycoords='axes fraction',
fontsize=14, ha='center', va='bottom',
arrowprops=dict(arrowstyle='-[, widthB=4.6, lengthB=0.35', lw=1.2))
ax.annotate(r'$p$ dimensions', xy=(-.060, 0.495), xytext=(-.22, 0.495), xycoords='axes fraction',
fontsize=16, ha='center', va='center', rotation = 90,
arrowprops=dict(arrowstyle='-[, widthB=6.7, lengthB=0.36', lw=1.2, color='k'))
def drawbracketsU(ax):
ax.annotate(r'$K$ centroids', xy=(0.505, 1.03), xytext=(0.505, 1.08), xycoords='axes fraction',
fontsize=14, ha='center', va='bottom',
arrowprops=dict(arrowstyle='-[, widthB=2.25, lengthB=0.35', lw=1.2))
def formatax(ax):
    '''Probably want to come up with a different way to do this. Sets a bunch of formatting options we want.'''
ax.tick_params(
axis='both', # changes apply to both axis
which='both', # both major and minor ticks are affected
bottom='off', # ticks along the bottom edge are off
top='off', # ticks along the top edge are off
left='off',
right='off',
labelbottom='off',
labelleft = 'off') # labels along the bottom edge are off
ax.set_xticks(np.arange(0.5, p-.5, 1))
ax.set_yticks(np.arange(0.5, n-.5, 1))
ax.grid(which='major', color = CD[0], axis = 'x', linestyle='-', linewidth=1.3)
ax.grid(which='major', color = CD[0], axis = 'y', linestyle='--', linewidth=.5)
def drawbox(ax,col):
    '''Draw the gray box around the column.'''
s = col-2
box_X = ax.get_xticks()[0:2]
box_Y = [ax.get_yticks()[0]-1, ax.get_yticks()[-1]+1]
box_X = [box_X[0]+s,box_X[1]+s,box_X[1]+s,box_X[0]+s, box_X[0]+s]
box_Y = [box_Y[0],box_Y[0],box_Y[1],box_Y[1], box_Y[0]]
ax.plot(box_X,box_Y, color = CD[0], linewidth = 3, clip_on = False)
def plot_column_X(ax,col):
    '''Draw data matrix with a single column highlighted.'''
formatax(ax)
drawbrackets(ax)
drawbox(ax,col)
YO,YC = get_col_X(col)
ax.imshow(YO,
interpolation = 'none',
cmap=cmapM,
alpha = 0.8,
norm=normM)
ax.imshow(YC,
interpolation = 'none',
cmap=cmapM,
norm=normM)
def plot_column_U(ax,col):
    '''Draw means matrix with rows corresponding to col highlighted.'''
formatax(ax)
drawbracketsU(ax)
US = get_rows_U(col)
ax.imshow(US,
interpolation = 'none',
cmap=cmapA,
norm=normA)
def plot_column_selection(col,fn,save=False):
    '''This one actually generates the plots. Wraps plot_column_X and plot_column_U,
    saves the fig if we want to.'''
fig = plt.figure()
gs = mpl.gridspec.GridSpec(1,2, height_ratios=[1])
ax0 = plt.subplot(gs[0])
ax1 = plt.subplot(gs[1])
plot_column_X(ax0,col)
plot_column_U(ax1,col)
if save == True:
fig.savefig(fn,dpi=DPI)
else:
plt.show()
fig = plt.figure()
gs = mpl.gridspec.GridSpec(1,2, height_ratios=[1])
ax0 = plt.subplot(gs[0])
formatax(ax0)
drawbrackets(ax0)
ax0.imshow(X_unconditioned,
interpolation = 'none',
cmap=cmap_unconditioned,
norm=norm_unconditioned)
ax1 = plt.subplot(gs[1])
formatax(ax1)
ax1.imshow(X_conditioned,
interpolation = 'none',
cmap=cmap_unconditioned,
norm=norm_unconditioned)
#ax1.imshow(X_unconditioned,
# interpolation = 'none',
# cmap=cmap_unconditioned,
# norm=norm_unconditioned)
plt.show()
# Make a plot showing the system before we subsample.
fig = plt.figure()
gs = mpl.gridspec.GridSpec(1,2, height_ratios=[1])
ax0 = plt.subplot(gs[0])
formatax(ax0)
drawbrackets(ax0)
ax0.imshow(Z,
interpolation = 'none',
cmap=cmapM,
norm=normM)
ax1 = plt.subplot(gs[1])
formatax(ax1)
drawbracketsU(ax1)
ax1.imshow(U,
interpolation = 'none',
cmap=cmapA,
norm=normA)
plt.show()
fig.savefig('mat0.png',dpi=DPI)
# Plot the subsampled system.
fig = plt.figure()
gs = mpl.gridspec.GridSpec(1,2, height_ratios=[1])
ax0 = plt.subplot(gs[0])
formatax(ax0)
drawbrackets(ax0)
ax0.imshow(X,
interpolation = 'none',
cmap=cmapM,
norm=normM)
ax1 = plt.subplot(gs[1])
formatax(ax1)
drawbracketsU(ax1)
ax1.imshow(U,
interpolation = 'none',
cmap=cmapA,
norm=normA)
plt.show()
fig.savefig('mat1.png',dpi=DPI)
# Pick out the first column.
fig = plt.figure()
gs = mpl.gridspec.GridSpec(1,2, height_ratios=[1])
ax0 = plt.subplot(gs[0])
formatax(ax0)
drawbrackets(ax0)
drawbox(ax0,1)
plot_column_X(ax0,1)
ax1 = plt.subplot(gs[1])
formatax(ax1)
drawbracketsU(ax1)
ax1.imshow(U,
interpolation = 'none',
cmap=cmapA,
norm=normA)
plt.show()
fig.savefig('mat2.png',dpi=DPI)
# make all 6 "final plots".
for i in range(1,p+1):
fn = 'col' + str(i) + '.png'
plot_column_selection(i,fn,save=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Data Matrices
Step4: Color Functions
Step11: Plotting Functions
Step12: Generate the Plots
|
12,836
|
<ASSISTANT_TASK:>
Python Code:
depth = -0.7
width = 5.0
gaussian_A_depth = depth
gaussian_A_alpha = np.array([width, width])
gaussian_A_center = np.array([-0.5, -0.5])
gaussian_B_depth = depth
gaussian_B_alpha = np.array([width, width])
gaussian_B_center = np.array([0.5, 0.5])
pes = (
toys.OuterWalls([1.0, 1.0], [0.0, 0.0]) +
toys.Gaussian(gaussian_A_depth, gaussian_A_alpha, gaussian_A_center) +
toys.Gaussian(gaussian_B_depth, gaussian_B_alpha, gaussian_B_center)
)
topology = toys.Topology(
n_atoms = 1,
n_spatial = 2,
masses = [1.0],
pes = pes
)
dt = 0.02
temperature = 0.1
gamma = 2.5
integ = toys.LangevinBAOABIntegrator(dt, temperature, gamma)
options = {
'integ' : integ,
'n_frames_max' : 5000,
'n_steps_per_frame' : 1
}
toy_eng = toys.Engine(
options = options,
topology = topology
)
toy_eng.initialized = True
paths.PathMover.engine = toy_eng
def circle2D(snapshot, center):
import math
return math.sqrt((snapshot.xyz[0][0]-center[0])**2 + (snapshot.xyz[0][1]-center[1])**2)
cv_A = paths.CoordinateFunctionCV(name="cv_A", f=circle2D, center=gaussian_A_center)
cv_B = paths.CoordinateFunctionCV(name="cv_B", f=circle2D, center=gaussian_B_center)
state_A = paths.CVDefinedVolume(cv_A, 0.0, 0.1)
state_B = paths.CVDefinedVolume(cv_B, 0.0, 0.1)
def flambda(snapshot, center_A, center_B):
import math
x = snapshot.xyz[0][0]
y = snapshot.xyz[0][1]
dist_A = math.sqrt((x - center_A[0])**2 + (y - center_A[1])**2)
dist_B = math.sqrt((x - center_B[0])**2 + (y - center_B[1])**2)
return dist_A / (dist_A + dist_B)
rc = paths.CoordinateFunctionCV(name="rc", f=flambda,
center_A=gaussian_A_center,
center_B=gaussian_B_center)
def Epot(pos):
toy_eng.positions = np.array([pos[0], pos[1]])
return pes.V(toy_eng)
# Increase this number to get better statistics!
num_points = 10
points = []
p_max = np.exp(-Epot([0.0, 0.0])/temperature)
while len(points) < num_points:
x = np.random.uniform(-1.0, 1.0)
p = np.random.uniform(0.0, p_max)
if p < np.exp(-Epot([x, -x])/temperature):
points += [np.array([[x, -x]])]
template = toys.Snapshot(
coordinates = np.array([[0.0, 0.0]]),
velocities = np.array([[0.0, 0.0]]),
engine = toy_eng
)
snapshots = [template.copy_with_replacement(coordinates=point) for point in points]
delta = 0.01
x = np.arange(-0.81, 0.811, delta)
y1 = [Epot([x[i], -x[i]]) for i in range(len(x))]
y2 = [np.exp(-Epot([x[i], -x[i]])/temperature) for i in range(len(x))]
fig, ax1 = plt.subplots()
ax1.plot(x, y1, 'b-', lw=3)
ax1.set_xlabel('x')
ax1.set_ylabel('Epot(x, -x)', color = 'b')
ax2 = ax1.twinx()
ax2.plot(x, y2, 'r-', lw=3)
ax2.set_ylabel('exp(-Epot(x, -x) / T)', color = 'r')
plot = ToyPlot()
plot.contour_range = np.arange(-1.5, 1.0, 0.1)
plot.add_pes(pes)
plot.add_states([state_A, state_B])
fig = plot.plot()
ax = fig.get_axes()[0]
ax.plot([p[0][0] for p in points], [p[0][1] for p in points],'or')
delta = 0.025
x = np.arange(-1.1, 1.1, delta)
y = np.arange(-1.1, 1.1, delta)
X, Y = np.meshgrid(x, y)
p_xy = [np.array([[i[0], i[1]]]) for i in np.array([X.flatten(), Y.flatten()]).T]
template = toys.Snapshot(
coordinates = np.array([[0.0, 0.0]]),
)
P = [template.copy_with_replacement(coordinates=xy) for xy in p_xy]
Z = np.array([flambda(p, gaussian_A_center, gaussian_B_center) for p in P]).reshape(len(X), len(X[0]))
levels = np.arange(0.0,1.0,0.1)
CS = ax.contour(X, Y, Z, levels=levels, cmap=cm.viridis)
manual_locations = [(-0.4, -0.4), (-0.3, -0.3), (-0.2, -0.2), (-0.1, -0.1), (0.6, -0.6),
(0.1, 0.1),(0.2, 0.2), (0.3, 0.3), (0.4, 0.4)]
ax.clabel(CS, inline=1, fontsize=12, manual=manual_locations)
randomizer = paths.RandomVelocities(1.0 / temperature)
storage = paths.Storage("rf-2d.nc", mode="w", template=template)
simulation = paths.ReactiveFluxSimulation(
storage = storage,
engine = toy_eng,
states = [state_A, state_B],
randomizer = randomizer,
initial_snapshots = snapshots,
rc = rc
)
%%time
# Increase this number to get better statistics!
n_per_snapshot = 10
simulation.run(n_per_snapshot=n_per_snapshot)
plot = ToyPlot()
plot.contour_range = np.arange(-1.5, 1.0, 0.1)
plot.add_pes(pes)
plot.add_states([state_A, state_B])
fig = plot.plot([s.change.trials[-1].trajectory for s in storage.steps[::n_per_snapshot]])
ax = fig.get_axes()[0]
ax.plot([p[0][0] for p in points], [p[0][1] for p in points],'oy')
max([len(s.change.trials[-1].trajectory) for s in storage.steps])
storage.close()
storage = paths.Storage("rf-2d.nc", "r")
def dflambda(snapshot, center_A, center_B):
import numpy as np
x = snapshot.xyz[0][0]
y = snapshot.xyz[0][1]
dist2_A = (x - center_A[0])**2 + (y - center_A[1])**2
return np.array([(center_B - center_A) / (4 * dist2_A)])
gradient = paths.CoordinateFunctionCV(name="gradient", f=dflambda,
center_A=gaussian_A_center, center_B=gaussian_B_center)
%%time
results = paths.ReactiveFluxAnalysis(steps=storage.steps, gradient=gradient)
flux, flux_dict = results.flux()
flux
snapshot = list(flux_dict.keys())[0]
results[snapshot]
flux_dict[snapshot]
hash1D = lambda snap: snap.xyz[0][0]
bins1D = [-0.525 + i * 0.05 for i in range(22)]
hist, bins_x = results.flux_histogram(hash1D, bins1D)
plt.bar(x=bins1D[:-1], height=hist, width=[bins1D[i+1]-bins1D[i] for i in range(len(bins1D)-1)], align="edge")
hash2D = lambda snap: (snap.xyz[0][0], snap.xyz[0][1])
bins2D = [-0.525 + i * 0.05 for i in range(22)]
hist, bins_x, bins_y = results.flux_histogram(hash2D, bins2D)
plt.pcolor(bins_x, bins_y, hist.T)
plt.clim(0.0, 0.06)
plt.colorbar();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Furthermore, we need a method to define states $A$ and $B$; we use circles around the respective Gaussian well centers
Step2: Given the symmetry of the system, the dividing surface is the line $g$ defined by $y = -x$
Step3: Next the initial snapshots are randomly chosen along the dividing surface according to their Boltzmann weight $e^{-\beta E(x_0)}$ (rejection sampling). Usually these configurations are available from a preceding simulation, e.g. umbrella sampling.
Step4: Let's plot the potential energy and the corresponding weight along the dividing surface
Step5: Here is a plot of the current setup with the potential energy surface, the two states A and B, the reaction coordinate and the starting points for the reactive flux shooting.
Step6: Finally, we need a randomizer for velocities and a storage object
Step7: Now all ingredients are combined to form the reactive flux simulation
Step8: Upon calling the run method the trajectories are harvested and saved.
Step9: Let's plot some of the obtained trajectories (one per initial snapshot)
Step10: Check whether the maximum trajectory length (5000) is never used
Step11: Example
Step12: To carry out the reactive flux analysis the gradient of the reaction coordinate is required. Given $\lambda(\vec{r})$ in the reactive flux simulation section we compute
Step13: Now start the reactive flux analysis and extract the results
Step14: The flux() method extracts the total flux
Step15: It's possible to extract raw data for each snapshot with this method (note
Step16: Here, the accepted and rejected counters show how often trajectories from this starting point were accepted and the sumflux value is
Step17: The flux_histogram method can be used to display the results with any bin selection
Step18: Also 2-dimensional histograms are possible
|
12,837
|
<ASSISTANT_TASK:>
Python Code:
# Useful Functions
def check_for_stationarity(X, cutoff=0.01):
# H_0 in adfuller is unit root exists (non-stationary)
# We must observe significant p-value to convince ourselves that the series is stationary
pvalue = adfuller(X)[1]
if pvalue < cutoff:
print 'p-value = ' + str(pvalue) + ' The series is likely stationary.'
return True
else:
print 'p-value = ' + str(pvalue) + ' The series is likely non-stationary.'
return False
def generate_datapoint(params):
mu = params[0]
sigma = params[1]
return np.random.normal(mu, sigma)
# Useful Libraries
import numpy as np
import pandas as pd
import statsmodels
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint, adfuller
import matplotlib.pyplot as plt
QQQ = get_pricing("QQQ", start_date='2014-1-1', end_date='2015-1-1', fields='price')
QQQ.name = QQQ.name.symbol
check_for_stationarity(QQQ)
from statsmodels.stats.stattools import jarque_bera
jarque_bera(QQQ)
X = np.random.normal(0, 1, 100)
check_for_stationarity(X)
# Set the number of datapoints
T = 100
B = pd.Series(index=range(T))
B.name = 'B'
for t in range(T):
# Now the parameters are dependent on time
# Specifically, the mean of the series changes over time
params = (np.power(t, 2), 1)
B[t] = generate_datapoint(params)
plt.plot(B);
check_for_stationarity(B)
QQQ = get_pricing("QQQ", start_date='2014-1-1', end_date='2015-1-1', fields='price')
QQQ.name = QQQ.name.symbol
# Write code to estimate the order of integration of QQQ.
# Feel free to sample from the code provided in the lecture.
QQQ = QQQ.diff()[1:]
QQQ.name = QQQ.name + ' Additive Returns'
check_for_stationarity(QQQ)
plt.plot(QQQ.index, QQQ.values)
plt.ylabel('Additive Returns')
plt.legend([QQQ.name]);
T = 500
X1 = pd.Series(index=range(T))
X1.name = 'X1'
for t in range(T):
# Now the parameters are dependent on time
# Specifically, the mean of the series changes over time
params = (t * 0.1, 1)
X1[t] = generate_datapoint(params)
X2 = np.power(X1, 2) + X1
X3 = np.power(X1, 3) + X1
X4 = np.sin(X1) + X1
# We now have 4 time series, X1, X2, X3, X4
# Determine a linear combination of the 4 that is stationary over the
# time period shown using the techniques in the lecture.
X1 = sm.add_constant(X1)
results = sm.OLS(X4, X1).fit()
# Get rid of the constant column
X1 = X1['X1']
results.params
plt.plot(X4-0.99 * X1);
check_for_stationarity(X4 - 0.99*X1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 1
Step2: b. Checking for Normality
Step3: c. Constructing Examples I
Step4: d. Constructing Examples II
Step5: Exercise 2
Step6: Exercise 3
|
12,838
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pnd
import matplotlib.pylab as plt
import matplotlib.patches as mpatches
from IPython.display import HTML
%matplotlib inline
img = plt.imread("rueildigital.jpg")
plt.axis('off')
plt.imshow(img);
HTML("<iframe src='http://datea.pe/NanterreDigital/nanterredigital?tab=map' width=600 height=400>")
df = pnd.read_csv("NanterreDigital_17-10-2015.csv")
df.head(10)
df.info()
HTML("<iframe src='http://www.nanterre.fr/1522-les-equipements.htm' width=600 height=400>")
df2 = pnd.read_csv("BDE-Equipements-Nanterre-Liste.csv",encoding="latin-1")
df2.head(10)
def display(what):
ax = df2.plot(x="X_WGS84",y="Y_WGS84",kind='scatter',edgecolor = 'none');
ax.set_title("Nanterre")
ax.set_xlabel("Longitude")
ax.set_ylabel("Latitude")
df3=df2[df2["TYPE"]==what]
ax.scatter(df3["X_WGS84"],df3["Y_WGS84"],c='r',edgecolor = 'none')
ax.scatter(df["longitude"],df["latitude"],c='g',edgecolor = 'none',s=30)
blue_patch = mpatches.Patch(color='blue', label='Equipements')
red_patch = mpatches.Patch(color='red', label=what)
black_patch = mpatches.Patch(color='green', label='Acteurs du numérique')
plt.legend(bbox_to_anchor=(1.45, 0.85), handles=[blue_patch, red_patch, black_patch])
df2["TYPE"].value_counts()
display("Vie Sociale")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lecture des données relatives aux acteurs du numérique (www.datea.pe)
Step2: Lecture des données relatives aux équipements de Nanterre (www.nanterre.fr)
Step3: Fonction d'affichage des data
Step4: Les différents types d'équipement
Step5: Cartographie des équipements et des acteurs du numérique
|
12,839
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
import tensorflow.compat.v1 as tf1
features = [[1., 1.5], [2., 2.5], [3., 3.5]]
labels = [[0.3], [0.5], [0.7]]
eval_features = [[4., 4.5], [5., 5.5], [6., 6.5]]
eval_labels = [[0.8], [0.9], [1.]]
def _input_fn():
return tf1.data.Dataset.from_tensor_slices((features, labels)).batch(1)
def _eval_input_fn():
return tf1.data.Dataset.from_tensor_slices(
(eval_features, eval_labels)).batch(1)
def _model_fn(features, labels, mode):
logits = tf1.layers.Dense(1)(features)
loss = tf1.losses.mean_squared_error(labels=labels, predictions=logits)
optimizer = tf1.train.AdagradOptimizer(0.05)
train_op = optimizer.minimize(loss, global_step=tf1.train.get_global_step())
return tf1.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
estimator = tf1.estimator.Estimator(model_fn=_model_fn)
estimator.train(_input_fn)
estimator.evaluate(_eval_input_fn)
dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(1)
eval_dataset = tf.data.Dataset.from_tensor_slices(
(eval_features, eval_labels)).batch(1)
model = tf.keras.models.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.Adagrad(learning_rate=0.05)
model.compile(optimizer=optimizer, loss="mse")
model.fit(dataset)
model.evaluate(eval_dataset, return_dict=True)
class CustomModel(tf.keras.Sequential):
A custom sequential model that overrides `Model.train_step`.
def train_step(self, data):
batch_data, labels = data
with tf.GradientTape() as tape:
predictions = self(batch_data, training=True)
# Compute the loss value (the loss function is configured
# in `Model.compile`).
loss = self.compiled_loss(labels, predictions)
# Compute the gradients of the parameters with respect to the loss.
gradients = tape.gradient(loss, self.trainable_variables)
# Perform gradient descent by updating the weights/parameters.
self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
# Update the metrics (includes the metric that tracks the loss).
self.compiled_metrics.update_state(labels, predictions)
# Return a dict mapping metric names to the current values.
return {m.name: m.result() for m in self.metrics}
dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(1)
eval_dataset = tf.data.Dataset.from_tensor_slices(
(eval_features, eval_labels)).batch(1)
model = CustomModel([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.Adagrad(learning_rate=0.05)
model.compile(optimizer=optimizer, loss="mse")
model.fit(dataset)
model.evaluate(eval_dataset, return_dict=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Migrate from Estimator to Keras APIs
Step2: TensorFlow 1
Step3: Instantiate your Estimator, and train the model
Step4: Evaluate the program with the evaluation set
Step5: TensorFlow 2
Step6: With that, you are ready to train the model by calling Model.fit
Step7: Finally, evaluate the model with Model.evaluate
Step9: TensorFlow 2
Step10: Next, as before
Step11: Call Model.fit to train the model
Step12: And, finally, evaluate the program with Model.evaluate
|
12,840
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
products = graphlab.SFrame('amazon_baby.gl/')
products.head()
products['word_count'] = graphlab.text_analytics.count_words(products['review'])
products.head()
graphlab.canvas.set_target('ipynb')
products['name'].show()
giraffe_reviews = products[products['name'] == 'Vulli Sophie the Giraffe Teether']
len(giraffe_reviews)
giraffe_reviews['rating'].show(view='Categorical')
products['rating'].show(view='Categorical')
#ignore all 3* reviews
products = products[products['rating'] != 3]
#positive sentiment = 4* or 5* reviews
products['sentiment'] = products['rating'] >=4
products.head()
train_data,test_data = products.random_split(.8, seed=0)
sentiment_model = graphlab.logistic_classifier.create(train_data,
target='sentiment',
features=['word_count'],
validation_set=test_data)
sentiment_model.evaluate(test_data)
sentiment_model.evaluate(test_data, metric='roc_curve')
sentiment_model.show(view='Evaluation')
giraffe_reviews['predicted_sentiment'] = sentiment_model.predict(giraffe_reviews, output_type='probability')
giraffe_reviews.head()
giraffe_reviews = giraffe_reviews.sort('predicted_sentiment', ascending=False)
giraffe_reviews.head()
giraffe_reviews[0]['review']
giraffe_reviews[1]['review']
giraffe_reviews[-1]['review']
giraffe_reviews[-2]['review']
selected_words = ['awesome', 'great', 'fantastic', 'amazing', 'love', 'horrible', 'bad', 'terrible', 'awful', 'wow', 'hate']
def awesome_count(word_count):
if 'awesome' in word_count:
return word_count['awesome']
return 0
products['awesome'] = products['word_count'].apply(awesome_count)
def great_count(word_count):
if 'great' in word_count:
return word_count['great']
return 0
products['great'] = products['word_count'].apply(great_count)
def fantastic_count(word_count):
if 'fantastic' in word_count:
return word_count['fantastic']
return 0
products['fantastic'] = products['word_count'].apply(fantastic_count)
def amazing_count(word_count):
if 'amazing' in word_count:
return word_count['amazing']
return 0
products['amazing'] = products['word_count'].apply(amazing_count)
def love_count(word_count):
if 'love' in word_count:
return word_count['love']
return 0
products['love'] = products['word_count'].apply(love_count)
def horrible_count(word_count):
if 'horrible' in word_count:
return word_count['horrible']
return 0
products['horrible'] = products['word_count'].apply(horrible_count)
def bad_count(word_count):
if 'bad' in word_count:
return word_count['bad']
return 0
products['bad'] = products['word_count'].apply(bad_count)
def terrible_count(word_count):
if 'terrible' in word_count:
return word_count['terrible']
return 0
products['terrible'] = products['word_count'].apply(terrible_count)
def awful_count(word_count):
if 'awful' in word_count:
return word_count['awful']
return 0
products['awful'] = products['word_count'].apply(awful_count)
def wow_count(word_count):
if 'wow' in word_count:
return word_count['wow']
return 0
products['wow'] = products['word_count'].apply(wow_count)
def hate_count(word_count):
if 'hate' in word_count:
return word_count['hate']
return 0
products['hate'] = products['word_count'].apply(hate_count)
# products['awesome'] = products['word_count'].apply(awesome_count)
# # Generalize function for apply
# def selected_words_count(word_count, word):
# if word in word_count:
# return word_count[word]
# return 0
# for word in selected_words:
# products[word] = products.apply(lambda x: selected_words_count(x['word_count'], word))
products.head()
print 'Word count value:'
for word in selected_words:
print '{0}: {1}'.format(word, products[word].sum())
# awesome: 2002
# great: 42420.0
# fantastic: 873
# amazing: 1305
# love: 40277.0
# horrible: 659
# bad: 3197
# terrible: 673
# awful: 345
# wow: 131
# hate: 1057
train_data,test_data = products.random_split(.8, seed=0)
selected_words_model = graphlab.logistic_classifier.create(train_data,
target='sentiment',
features=selected_words,
validation_set=test_data)
coef = selected_words_model['coefficients']
coef = coef.sort('value', ascending=False)
coef
coef.sort('value', ascending=True)
selected_words_model.evaluate(test_data)
sentiment_model.evaluate(test_data)
selected_words_model.evaluate(test_data, metric='roc_curve')
selected_words_model.show(view='Evaluation')
diaper_champ_reviews = products[products['name'] == 'Baby Trend Diaper Champ']
diaper_champ_reviews.head()
diaper_champ_reviews['predicted_sentiment'] = sentiment_model.predict(diaper_champ_reviews, output_type='probability')
diaper_champ_reviews = diaper_champ_reviews.sort('predicted_sentiment', ascending=False)
diaper_champ_reviews.head()
diaper_champ_reviews['predicted_sentiment'].max()
selected_words_model.predict(diaper_champ_reviews[0:1], output_type='probability')
# diaper_champ_reviews['predicted_sentiment_2'] = selected_words_model.predict(diaper_champ_reviews, output_type='probability')
diaper_champ_reviews.head()
diaper_champ_reviews[0]['review']
diaper_champ_reviews[0]['word_count']
diaper_champ_reviews[1]['review']
diaper_champ_reviews[-1]['review']
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read some product review data
Step2: Let's explore this data together
Step3: Build the word count vector for each review
Step4: Examining the reviews for most-sold product
Step5: Build a sentiment classifier
Step6: Define what's a positive and a negative sentiment
Step7: Let's train the sentiment classifier
Step8: Evaluate the sentiment model
Step9: Applying the learned model to understand sentiment for Giraffe
Step10: Sort the reviews based on the predicted sentiment and explore
Step11: Most positive reviews for the giraffe
Step12: Show most negative reviews for giraffe
Step13: Exercise
Step14: Using the .sum() method on each of the new columns you created, answer the following questions
Step15: 2. Create a new sentiment analysis model using only the selected_words as features
Step16: You will now examine the weights the learned classifier assigned to each of the 11 words in selected_words and gain intuition as to what the ML algorithm did for your data using these features. In GraphLab Create, a learned model, such as the selected_words_model, has a field 'coefficients', which lets you look at the learned coefficients. You can access it by using
Step17: Using this approach, sort the learned coefficients according to the ‘value’ column using .sort(). Out of the 11 words in selected_words, which one got the most positive weight? Which one got the most negative weight? Do these values make sense for you? Save these results to answer the quiz at the end.
Step18: 3. Comparing the accuracy of different sentiment analysis model
Step19: 4. Interpreting the difference in performance between the models
|
12,841
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
from __future__ import print_function
from orphics import io, maps, stats, cosmology
from enlib import enmap, resample
import numpy as np
shape, wcs = maps.rect_geometry(width_deg = 5.0, px_res_arcmin = 0.5)
cc = cosmology.Cosmology(lmax=2000,pickling=True,dimensionless=False)
ells = np.arange(0,2000,1)
cltt = cc.theory.lCl('TT',ells)
pl = io.Plotter(yscale='log')
pl.add(ells,cltt*ells**2.)
pl.done()
shape, wcs = maps.rect_geometry(width_deg = 5.0, px_res_arcmin = 0.5)
ps = cltt.reshape((1,1,ells.size))
generator = maps.MapGen(shape,wcs,ps)
random_map = generator.get_map()
random_map2 = generator.get_map()
io.plot_img(random_map)
io.plot_img(random_map2)
taper, w2 = maps.get_taper_deg(shape,wcs,taper_width_degrees=1.0)
io.plot_img(taper)
tapered_map = random_map * taper
tapered_map2 = random_map2 * taper
fc = maps.FourierCalc(shape,wcs)
auto_power, k1, _ = fc.power2d(tapered_map) # power from real map
cross_power, k2 = fc.f1power(tapered_map2,k1) # power from real map and fourier map
modlmap = enmap.modlmap(shape,wcs)
io.plot_img(np.fft.fftshift(modlmap))
io.plot_img(np.fft.fftshift(np.log10(auto_power)))
io.plot_img(np.fft.fftshift(np.log10(cross_power)))
bin_edges = np.arange(200,2000,40)
binner = stats.bin2D(modlmap,bin_edges)
cents, a1d = binner.bin(auto_power)
cents, c1d = binner.bin(cross_power)
pl = io.Plotter(yscale='log')
pl.add(ells,cltt*ells**2.)
pl.add(cents,a1d*cents**2.,ls="--",label="no window correction")
pl.add(cents,a1d*cents**2./w2,marker="o",ls="none",label="window corrected")
pl.add(cents,c1d*cents**2.,label="cross correlation")
pl.legend(loc="lower right")
pl.done()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We want to first define a geometry for our map, by obtaining a numpy array shape and a WCS from a physical geometry. We then create a Cosmology object, an interface to CAMB that lets us get CMB power spectra. We plot the TT spectrum.
Step2: We can now reshape the TT spectrum into an enlib polarization friendly form, and set up a gaussian random field generator for the specified geometry and power spectrum. We obtain two random maps from this and plot them.
Step3: We next define a taper that we will apply to this map before we take its fourier transform. We make a 1 deg wide cosine taper. The w2 factor will come in handy later.
Step4: We can now multiply our maps by this taper and set up a power spectrum calculator object. We calculate the auto of the first map and the cross spectra of the first with the second. In doing the latter, we reuse the fourier transform calculated earlier to not waste time!
Step5: We need to bin these 2D power spectra into annuli. We define bin edges in 1D multipole space and create a binning object based on the absolute wavenumbers in the map. We then apply the binning object to the 2D powers and plot our results.
|
12,842
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from jupyterthemes import jtplot ; jtplot.style()
np.random.seed(1969-7-20)
N = 51
b = np.linspace(0.5, 1.5, N) + (np.random.random(N)-0.5)/100
z = 0.035
D = 1/np.sqrt((1-b*b)**2+(2*z*b)**2) * (1 + (np.random.random(N)-0.5)/100)
print('max of curve', max(D), '\tmax approx.', 1/2/z, '\texact', 1/2/z/np.sqrt(1-z*z))
plt.plot(b, D) ; plt.ylim((0, 15)) ; plt.grid(1);
Dmax = max(D)
D2 = Dmax/np.sqrt(2)
plt.plot(b, D, 'k-*')
plt.yticks((D2, Dmax))
plt.xlim((0.9, 1.1))
plt.grid(1)
plt.plot(b, D)
plt.yticks((D2, Dmax))
plt.xlim((0.950, 0.965))
plt.grid(1)
plt.show()
plt.plot(b, D)
plt.yticks((D2, Dmax))
plt.xlim((1.025, 1.040))
plt.grid(1);
f1 = 0.962
f2 = 1.034
print(z, '%.6f'%((f2-f1)/(f2+f1)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We simulate a dynamic test, using a coarsely sampled, random-error-affected sequence of frequencies to compute a noisy sequence of dynamic amplification factors for $\zeta=3.5\%$.
Step2: We find the reference response value using the measured maximum value and plot a zone around the max value, using a reference line at $D_\text{max}/\sqrt2$
Step3: We plot 2 ranges around the crossings with the reference value
Step4: My estimates for the half-power frequencies are $f_1 = 0.962$ and $f_2 = 1.034$, and using these values in the half-power formula gives us our estimate of $\zeta$.
|
12,843
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import os
path = ""
data = pd.read_csv("https://github.com/JamesByers/GA-SEA-DAT2/raw/master/data/ozone.csv")
print data.head()
data.columns
print data.head(2)
print data.count()
print data.tail(2)
print data.loc[47:47,['Ozone']]
pd.isnull(data['Ozone']).sum()
#print misscnt
#cnt = data['Ozone'].count()
#print cnt
#np.count_nonzero(np.eye(4))
#cnt1 = np.count_nonzero(pd.isnull(data['Ozone']).values)
#np.count_nonzero(df.isnull())
#print cnt1
# Comparing with np.nan using == never matches (nan != nan), so flag missing values with isnull()
cnt = data['Ozone'].isnull()
print cnt.sum()
data['Ozone'].mean()
#df_posA[df_posA.A < 0] = -1*df_posA
newdf = data[(data.Ozone> 31 )& (data.Temp >90)]
newdf.mean()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Print the column names of the dataset to the screen, one column name per line.
Step2: Extract the first 2 rows of the data frame and print them to the console. Console in this case is output into the jupyter notebook.
Step3: How many observations (i.e. rows) are in this data frame?
Step4: Extract the last 2 rows of the data frame and print them to the console. What does the output
Step5: What is the value of Ozone in the 47th row?
Step6: How many missing values are in the Ozone column of this data frame?
Step7: What is the mean of the Ozone column in this dataset? Exclude missing values (coded as NA)
Step8: Extract the subset of rows of the data frame where Ozone values are above 31 and Temp values
|
12,844
|
<ASSISTANT_TASK:>
Python Code:
from numpy import random, array
#Create fake income/age clusters for N people in k clusters
def createClusteredData(N, k):
random.seed(10)
pointsPerCluster = float(N)/k
X = []
for i in range (k):
incomeCentroid = random.uniform(20000.0, 200000.0)
ageCentroid = random.uniform(20.0, 70.0)
for j in range(int(pointsPerCluster)):
X.append([random.normal(incomeCentroid, 10000.0), random.normal(ageCentroid, 2.0)])
X = array(X)
return X
%matplotlib inline
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
from sklearn.preprocessing import scale
from numpy import random, float
data = createClusteredData(100, 5)
model = KMeans(n_clusters=5)
# Note I'm scaling the data to normalize it! Important for good results.
model = model.fit(scale(data))
# We can look at the clusters each data point was assigned to
print(model.labels_)
# And we'll visualize it:
plt.figure(figsize=(8, 6))
plt.scatter(data[:,0], data[:,1], c=model.labels_.astype(float))
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We'll use k-means to rediscover these clusters in unsupervised learning
|
12,845
|
<ASSISTANT_TASK:>
Python Code:
import fst
# Let's see the input as a simple linear chain FSA
def make_input(srcstr, sigma = None):
converts a nonempty string into a linear chain acceptor
@param srcstr is a nonempty string
@param sigma is the source vocabulary
assert(srcstr.split())
return fst.linear_chain(srcstr.split(), sigma)
# this function will enumerate all paths in an automaton
def enumerate_paths(fsa):
paths = [[str(arc.ilabel) for arc in path] for path in fsa.paths()]
print len(paths), 'paths:'
for path in paths:
print ' '.join(path)
# I am going to start with a very simple wrapper for a python dictionary that
# will help us associate unique ids to items
# this wrapper simply offers one additional method (insert) similar to the insert method of an std::map
class ItemFactory(object):
def __init__(self):
self.nextid_ = 0
self.i2s_ = {}
def insert(self, item):
Inserts a previously unmapped item.
Returns the item's unique id and a flag with the result of the insertion.
uid = self.i2s_.get(item, None)
if uid is None:
uid = self.nextid_
self.nextid_ += 1
self.i2s_[item] = uid
return uid, True
return uid, False
def get(self, item):
Returns the item's unique id (assumes the item has been mapped before)
return self.i2s_[item]
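# Tiny usage illustration of ItemFactory (a sketch, not part of the original notebook):
# items get stable, consecutive ids; re-inserting an item returns its old id with False.
_demo = ItemFactory()
print _demo.insert('a'), _demo.insert('b'), _demo.insert('a')   # (0, True) (1, True) (0, False)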
# This program packs all permutations of an input sentence
def Permutations(sentence, sigma=None, delta=None):
from collections import deque
from itertools import takewhile
A = fst.Transducer(isyms=sigma, osyms=delta)
I = len(sentence)
axiom = tuple([False]*I)
ifactory = ItemFactory()
ifactory.insert(axiom)
Q = deque([axiom])
while Q:
ant = Q.popleft() # antecedent (coverage vector)
sfrom = ifactory.get(ant) # state id
if all(ant): # goal item
A[sfrom].final = True # is a final node
continue
for i in range(I):
if not ant[i]:
cons = list(ant)
cons[i] = True
cons = tuple(cons)
sto, new = ifactory.insert(cons)
if new:
Q.append(cons)
A.add_arc(sfrom, sto, str(i + 1), sentence[i], 0)
return A
# Let's define a model of translational equivalences that performs word replacement of arbitrary permutations of the input
# constrained to a window of length $d$ (see WLd in (Lopez, 2009))
# same strategy in Moses (for phrase-based models)
def WLdPermutations(sentence, d = 2, sigma = None, delta = None):
from collections import deque
from itertools import takewhile
A = fst.Transducer(isyms = sigma, osyms = delta)
I = len(sentence)
axiom = (1, tuple([False]*min(I - 1, d - 1)))
ifactory = ItemFactory()
ifactory.insert(axiom)
Q = deque([axiom])
while Q:
ant = Q.popleft() # antecedent
l, C = ant # signature
sfrom = ifactory.get(ant) # state id
if l == I + 1: # goal item
A[sfrom].final = True # is a final node
continue
# adjacent
n = 0 if (len(C) == 0 or not C[0]) else sum(takewhile(lambda b : b, C)) # leading ones
ll = l + n + 1
CC = list(C[n+1:])
maxlen = min(I - ll, d - 1)
if maxlen:
m = maxlen - len(CC) # missing positions
[CC.append(False) for _ in range(m)]
cons = (ll, tuple(CC))
sto, inserted = ifactory.insert(cons)
if inserted:
Q.append(cons)
A.add_arc(sfrom, sto, str(l), sentence[l-1], 0)
# non-adjacent
ll = l
for i in range(l + 1, I + 1):
if i - l + 1 > d: # beyond limit
break
if C[i - l - 1]: # already used
continue
# free position
CC = list(C)
CC[i-l-1] = True
cons = (ll, tuple(CC))
sto, inserted = ifactory.insert(cons)
if inserted:
Q.append(cons)
A.add_arc(sfrom, sto, str(i), sentence[i-1], 0)
return A
# Let's create a table for the input vocabulary $\Sigma$
sigma = fst.SymbolTable()
# and for the output vocabulary $\Delta$
delta = fst.SymbolTable()
# Let's have a look at the input as an automaton
# we call it F ('f' is the canonical source language)
ex1_F = make_input('nosso amigo comum', sigma)
ex1_F
ex1_all = Permutations('1 2 3 4'.split(), None, sigma)
ex1_all
enumerate_paths(ex1_all)
# these are the permutations of the input according to WL$2$
ex2_WLd2 = WLdPermutations('1 2 3 4'.split(), 2, None, sigma)
ex2_WLd2
enumerate_paths(ex2_WLd2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Helper code
Step5: All permutations
Step6: Window of length d
Step7: Examples
Step8: Input
Step9: All permutations
Step10: For a toy example we can enumerate the permutations
Step11: WLd
|
12,846
|
<ASSISTANT_TASK:>
Python Code:
import paver
import trigrid
import matplotlib.pyplot as plt
import numpy as np
import field
%matplotlib notebook
# Load and display a 25k cell grid of San Francisco Bay
p=paver.Paving(suntans_path='/home/rusty/models/suntans/spinupdated/rundata/original_grid')
fig,ax=plt.subplots()
p.tg_plot() ;
# create the refined density field:
dens=field.ApolloniusField(X=np.array( [[500e3,4.18e6]] ), # Farallons coords
F=np.array( [2000.0] ) ) # at 2km
# compare that to linear scale of existing edges
segments=p.points[p.edges[:,:2]] # [Nedge, endpoints, {x,y}]
edge_lengths=np.sqrt(np.sum(np.diff(segments,axis=1)**2,axis=-1)[:,0])
edge_centers=np.mean(segments,axis=1)
dens_at_edge=dens(edge_centers)
score=edge_lengths / dens_at_edge
# See what that looks like
fig,ax=plt.subplots()
coll=p.tg_plot(ax=ax,edge_values=score)
coll.set_cmap('seismic')
coll.set_clim([0.5,1.5])
cbar=plt.colorbar(coll,ticks=[0.5,1.5])
cbar.set_ticklabels(['Plenty short','Too long'])
# Load the grid again, since this step can require some iteration
p=paver.Paving(suntans_path='/home/rusty/models/suntans/spinupdated/rundata/original_grid')
# This approach is more awkward than testing edge length directly, but
# doesn't leave stranded edges.
# 0.6 approximates the sqrt(cell area) -> side length conversion; the 0.7 below adds
# some cushion on top
bad_cells=np.sqrt(p.areas())> 0.7*dens( p.vcenters() )
bad_edges= np.all( (p.edges[:,3:]!=trigrid.BOUNDARY)&bad_cells[p.edges[:,3:]],
axis=1)
orphan_nodes=set()
for e in np.nonzero(bad_edges)[0]:
orphan_nodes.add(p.edges[e,0])
orphan_nodes.add(p.edges[e,1])
p.delete_edge(e,handle_unpaved=1)
for n in orphan_nodes:
if len(p.pnt2edges(n))==0:
p.delete_node(n)
fig,ax=plt.subplots()
coll=p.tg_plot(ax=ax)
long_edges = np.nonzero((score>1.2) )[0] # i.e. edges 20% longer than desired
nodes_to_kill=np.unique(p.edges[long_edges,:2])
for n in nodes_to_kill:
on_boundary=False
local_edges=list(p.pnt2edges(n)) # copy, as it will change
for e in local_edges:
if p.edges[e,4]==trigrid.BOUNDARY:
on_boundary=True
else:
p.delete_edge(e,handle_unpaved=1)
if not on_boundary:
p.delete_node(n)
fig,ax=plt.subplots()
coll=p.tg_plot(ax=ax)
p.density=dens
p.pave_all(n_steps=np.inf)
fig,ax=plt.subplots()
coll=p.tg_plot(ax=ax)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Remove the nodes/edges where the refinement is needed. In this example,
Step2: Remove edges which are significantly longer than the target resolution. There
|
12,847
|
<ASSISTANT_TASK:>
Python Code:
from transformers import AutoTokenizer
checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
raw_inputs = [
"I've been waiting for a HuggingFace course my whole life.",
"I hate this so much!",
]
inputs = tokenizer(raw_inputs, padding=True, truncation=True, return_tensors='pt')
inputs
from transformers import AutoModel
checkpoint_model = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModel.from_pretrained(checkpoint_model)
outputs = model(**inputs)
outputs.last_hidden_state.shape
from transformers import AutoModelForSequenceClassification
checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
outputs = model(**inputs)
outputs.logits.shape
outputs.logits
import torch
predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
print(predictions)
model.config.id2label
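# A possible way to pair each probability with its label name through the config
# mapping (sketch; assumes the two-class SST-2 head loaded above):
for sentence, probs in zip(raw_inputs, predictions):
    print(sentence, {model.config.id2label[i]: float(p) for i, p in enumerate(probs)})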
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The output itself is a dictionary containing two keys, input_ids and attention_mask. input_ids contains two rows of integers (one for each sentence) that are the unique identifiers of the tokens in each sentence.
Step2: In this code snippet, we have downloaded the same checkpoint we used in our pipeline before (it should actually have been cached already) and instantiated a model with it.
Step3: Model heads
|
12,848
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import os
import sys
import pandas as pd
import numpy as np
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
import datetime
#set current working directory
os.chdir('D:/Practical Time Series')
#Read the dataset into a pandas.DataFrame
df = pd.read_csv('datasets/PRSA_data_2010.1.1-2014.12.31.csv')
print('Shape of the dataframe:', df.shape)
#Let's see the first five rows of the DataFrame
df.head()
df['datetime'] = df[['year', 'month', 'day', 'hour']].apply(lambda row: datetime.datetime(year=row['year'], month=row['month'], day=row['day'],
hour=row['hour']), axis=1)
df.sort_values('datetime', ascending=True, inplace=True)
#Let us draw a box plot to visualize the central tendency and dispersion of PRES
plt.figure(figsize=(5.5, 5.5))
g = sns.boxplot(df['PRES'])
g.set_title('Box plot of PRES')
plt.figure(figsize=(5.5, 5.5))
g = sns.tsplot(df['PRES'])
g.set_title('Time series of PRES')
g.set_xlabel('Index')
g.set_ylabel('PRES readings')
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(0, 1))
df['scaled_PRES'] = scaler.fit_transform(np.array(df['PRES']).reshape(-1, 1))
Let's start by splitting the dataset into train and validation. The dataset's time period is from
Jan 1st, 2010 to Dec 31st, 2014. The first four years - 2010 to 2013 - are used as train and
2014 is kept for validation.
split_date = datetime.datetime(year=2014, month=1, day=1, hour=0)
df_train = df.loc[df['datetime']<split_date]
df_val = df.loc[df['datetime']>=split_date]
print('Shape of train:', df_train.shape)
print('Shape of test:', df_val.shape)
#First five rows of train
df_train.head()
#First five rows of validation
df_val.head()
#Reset the indices of the validation set
df_val.reset_index(drop=True, inplace=True)
The train and validation time series of scaled PRES are also plotted.
plt.figure(figsize=(5.5, 5.5))
g = sns.tsplot(df_train['scaled_PRES'], color='b')
g.set_title('Time series of scaled PRES in train set')
g.set_xlabel('Index')
g.set_ylabel('Scaled PRES readings')
plt.figure(figsize=(5.5, 5.5))
g = sns.tsplot(df_val['scaled_PRES'], color='r')
g.set_title('Time series of scaled PRES in validation set')
g.set_xlabel('Index')
g.set_ylabel('Scaled PRES readings')
def makeXy(ts, nb_timesteps):
Input:
ts: original time series
nb_timesteps: number of time steps in the regressors
Output:
X: 2-D array of regressors
y: 1-D array of target
X = []
y = []
for i in range(nb_timesteps, ts.shape[0]):
X.append(list(ts.loc[i-nb_timesteps:i-1]))
y.append(ts.loc[i])
X, y = np.array(X), np.array(y)
return X, y
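# Tiny illustration of makeXy (a sketch, not part of the original text): with 7 timesteps,
# row i of X holds observations i-7 .. i-1 and y[i] is observation i.
_demo_X, _demo_y = makeXy(pd.Series(range(10)), 7)
print('makeXy demo:', _demo_X.shape, _demo_y)   # (3, 7) and [7 8 9]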
X_train, y_train = makeXy(df_train['scaled_PRES'], 7)
print('Shape of train arrays:', X_train.shape, y_train.shape)
X_val, y_val = makeXy(df_val['scaled_PRES'], 7)
print('Shape of validation arrays:', X_val.shape, y_val.shape)
#X_train and X_val are reshaped to 3D arrays
X_train, X_val = X_train.reshape((X_train.shape[0], X_train.shape[1], 1)),\
X_val.reshape((X_val.shape[0], X_val.shape[1], 1))
print('Shape of arrays after reshaping:', X_train.shape, X_val.shape)
from keras.layers import Dense
from keras.layers import Input
from keras.layers import Dropout
from keras.layers import Flatten
from keras.layers.convolutional import ZeroPadding1D
from keras.layers.convolutional import Conv1D
from keras.layers.pooling import AveragePooling1D
from keras.optimizers import SGD
from keras.models import Model
from keras.models import load_model
from keras.callbacks import ModelCheckpoint
#Define input layer which has shape (None, 7) and of type float32. None indicates the number of instances
input_layer = Input(shape=(7,1), dtype='float32')
#Add zero padding
zeropadding_layer = ZeroPadding1D(padding=1)(input_layer)
#Add 1D convolution layer
conv1D_layer = Conv1D(64, 3, strides=1, use_bias=True)(zeropadding_layer)
#Add AveragePooling1D layer
avgpooling_layer = AveragePooling1D(pool_size=3, strides=1)(conv1D_layer)
#Add Flatten layer
flatten_layer = Flatten()(avgpooling_layer)
dropout_layer = Dropout(0.2)(flatten_layer)
#Finally the output layer gives prediction for the next day's air pressure.
output_layer = Dense(1, activation='linear')(dropout_layer)
ts_model = Model(inputs=input_layer, outputs=output_layer)
ts_model.compile(loss='mean_absolute_error', optimizer='adam')#SGD(lr=0.001, decay=1e-5))
ts_model.summary()
save_weights_at = os.path.join('keras_models', 'PRSA_data_Air_Pressure_1DConv_weights.{epoch:02d}-{val_loss:.4f}.hdf5')
save_best = ModelCheckpoint(save_weights_at, monitor='val_loss', verbose=0,
save_best_only=True, save_weights_only=False, mode='min',
period=1)
ts_model.fit(x=X_train, y=y_train, batch_size=16, epochs=20,
verbose=1, callbacks=[save_best], validation_data=(X_val, y_val),
shuffle=True)
best_model = load_model(os.path.join('keras_models', 'PRSA_data_Air_Pressure_1DConv_weights.16-0.0097.hdf5'))
preds = best_model.predict(X_val)
pred_PRES = np.squeeze(scaler.inverse_transform(preds))
from sklearn.metrics import r2_score
r2 = r2_score(df_val['PRES'].loc[7:], pred_PRES)
print('R-squared for the validation set:', round(r2, 4))
#Let's plot the first 50 actual and predicted values of PRES.
plt.figure(figsize=(5.5, 5.5))
plt.plot(range(50), df_val['PRES'].loc[7:56], linestyle='-', marker='*', color='r')
plt.plot(range(50), pred_PRES[:50], linestyle='-', marker='.', color='b')
plt.legend(['Actual','Predicted'], loc=2)
plt.title('Actual vs Predicted PRES')
plt.ylabel('PRES')
plt.xlabel('Index')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To make sure that the rows are in the right order of date and time of observations,
Step2: Gradient descent algorithms perform better (for example, converge faster) if the variables are within the range [-1, 1]. Many sources relax the boundary to even [-3, 3]. The PRES variable is min-max scaled to bound the transformed variable within [0, 1].
Step5: Before training the model, the dataset is split in two parts - train set and validation set.
Step7: Now we need to generate regressors (X) and a target variable (y) for train and validation. A 2-D array of regressors and a 1-D array of targets are created from the original 1-D array of the column scaled_PRES in the DataFrames. For the time series forecasting model, the past seven days of observations are used to predict the next day. This is equivalent to an AR(7) model. We define a function which takes the original time series and the number of timesteps in the regressors as input to generate the arrays of X and y.
Step8: The input to convolution layers must be of shape (number of samples, number of timesteps, number of features per timestep). In this case we are modeling only PRES hence number of features per timestep is one. Number of timesteps is seven and number of samples is same as the number of samples in X_train and X_val, which are reshaped to 3D arrays.
Step9: Now we define the 1D convolutional network using the Keras Functional API. In this approach a layer can be declared as the input of the following layer at the time of defining the next layer.
Step10: A ZeroPadding1D layer is added next to add zeros at the beginning and end of each series. Zero padding ensures that the downstream convolution layer does not reduce the dimension of the output sequences. A pooling layer, added after the convolution layer, is used to downsample the input.
Step11: The first argument of Conv1D is the number of filters, which determines the number of features in the output. The second argument indicates the length of the 1D convolution window. The third argument is strides and represents the number of places to shift the convolution window. Lastly, setting use_bias as True adds a bias value during computation of an output feature. Here, the 1D convolution can be thought of as generating local AR models over rolling windows of three time units.
Step12: AveragePooling1D is added next to downsample the input by taking the average over a pool size of three with a stride of one timestep. The average pooling in this case can be thought of as taking moving averages over a rolling window of three time units. We have used average pooling instead of max pooling to generate the moving averages.
Step13: The preceding pooling layer returns a 3D output. Hence, before passing it to the output layer, a Flatten layer is added. The Flatten layer reshapes the input to (number of samples, number of timesteps*number of features per timestep), which is then fed to the output layer.
Step14: The input, dense and output layers will now be packed inside a Model, which is a wrapper class for training and making predictions.
Step15: The model is trained by calling the fit function on the model object and passing the X_train and y_train. The training
Step16: Predictions are made for PRES from the best saved model. The model's predictions, which are on the scaled PRES, are inverse transformed to get predictions of the original PRES.
|
12,849
|
<ASSISTANT_TASK:>
Python Code:
# Import all necessary libraries, this is a configuration step for the exercise.
# Please run it before the simulation code!
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
# Show the plots in the Notebook.
plt.switch_backend("nbagg")
# Initialization of setup
# --------------------------------------------------------------------------
nx = 800 # number of grid points
c = 2500 # acoustic velocity in m/s
ro = 2500 # density in kg/m^3
Z = ro*c # impedance
mu = ro*c**2 # shear modulus
xmax = 10000 # Length in m
eps = 0.5 # CFL
tmax = 2.0 # simulation time in s
isnap = 10 # plotting rate
sig = 200 # argument in the inital condition
x0 = 5000 # position of the initial condition
imethod = 'upwind' # 'Lax-Wendroff', 'upwind'
# Initialize Space
x, dx = np.linspace(0,xmax,nx,retstep=True)
# use wave based CFL criterion
dt = eps*dx/c # calculate time step from stability criterion
# Simulation time
nt = int(np.floor(tmax/dt))
# Initialize wave fields
Q = np.zeros((2,nx))
Qnew = np.zeros((2,nx))
Qa = np.zeros((2,nx))
#################################################################
# INITIALIZE THE SOURCE TIME FUNCTION HERE!
#################################################################
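# One possible completion of the placeholder above (a sketch, not necessarily the
# reference solution): a Gaussian stress pulse centred at x0, which is consistent with
# the analytical solution used in the animation below; the medium starts at rest.
Q[0,:] = np.exp(-1./sig**2 * (x - x0)**2)   # initial stress
Q[1,:] = 0.                                 # initial velocity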
#################################################################
# PLOT THE SOURCE TIME FUNCTION HERE!
#################################################################
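# Quick look at the initial condition (sketch):
plt.figure(figsize=(8, 3))
plt.plot(x, Q[0,:])
plt.title('Initial stress distribution')
plt.xlabel('x (m)')
plt.show()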
#################################################################
# INITIALIZE ALL MATRICES HERE!
#################################################################
# R =
# Rinv =
# Lp =
# Lm =
# Ap =
# Am =
# A =
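# A sketch of the missing matrices (assumed completion of the exercise) for the 1D
# elastic system Q_t + A Q_x = 0 with Q = (stress, velocity): A = [[0, -mu], [-1/ro, 0]]
# has eigenvalues -c and +c, and Ap/Am split A into right- and left-going parts.
A = np.array([[0, -mu], [-1./ro, 0]])
R = np.array([[Z, -Z], [1., 1.]])     # eigenvectors (columns) for eigenvalues -c, +c
Rinv = np.linalg.inv(R)
Lp = np.diag([0., c])                 # positive eigenvalues only
Lm = np.diag([-c, 0.])                # negative eigenvalues only
Ap = R @ Lp @ Rinv                    # A+ , used with the left (upwind) difference
Am = R @ Lm @ Rinv                    # A- , used with the right difference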
# Initialize animated plot
# ---------------------------------------------------------------
fig = plt.figure(figsize=(10,6))
ax1 = fig.add_subplot(2,1,1)
ax2 = fig.add_subplot(2,1,2)
line1 = ax1.plot(x, Q[0,:], 'k', x, Qa[0,:], 'r--')
line2 = ax2.plot(x, Q[1,:], 'k', x, Qa[1,:], 'r--')
ax1.set_ylabel('Stress')
ax2.set_ylabel('Velocity')
ax2.set_xlabel(' x ')
plt.suptitle('Homogeneous F. volume - %s method'%imethod, size=16)
plt.ion() # set interactive mode
plt.show()
# ---------------------------------------------------------------
# Time extrapolation
# ---------------------------------------------------------------
for i in range(nt):
if imethod =='Lax-Wendroff':
for j in range(1,nx-1):
dQ1 = Q[:,j+1] - Q[:,j-1]
dQ2 = Q[:,j-1] - 2*Q[:,j] + Q[:,j+1]
Qnew[:,j] = Q[:,j] - 0.5*dt/dx*(A @ dQ1)\
+ 1./2.*(dt/dx)**2 * (A @ A) @ dQ2 # Eq. 8.56
# Absorbing boundary conditions
Qnew[:,0] = Qnew[:,1]
Qnew[:,nx-1] = Qnew[:,nx-2]
elif imethod == 'upwind':
for j in range(1,nx-1):
dQl = Q[:,j] - Q[:,j-1]
dQr = Q[:,j+1] - Q[:,j]
Qnew[:,j] = Q[:,j] - dt/dx * (Ap @ dQl + Am @ dQr) # Eq. 8.54
# Absorbing boundary conditions
Qnew[:,0] = Qnew[:,1]
Qnew[:,nx-1] = Qnew[:,nx-2]
else:
raise NotImplementedError
Q, Qnew = Qnew, Q
# --------------------------------------
# Animation plot. Display solution
if not i % isnap:
for l in line1:
l.remove()
del l
for l in line2:
l.remove()
del l
# --------------------------------------
# Analytical solution (stress i.c.)
Qa[0,:] = 1./2.*(np.exp(-1./sig**2 * (x-x0 + c*i*dt)**2)\
+ np.exp(-1./sig**2 * (x-x0-c*i*dt)**2))
Qa[1,:] = 1/(2*Z)*(np.exp(-1./sig**2 * (x-x0+c*i*dt)**2)\
- np.exp(-1./sig**2 * (x-x0-c*i*dt)**2))
# --------------------------------------
# Display lines
line1 = ax1.plot(x, Q[0,:], 'k', x, Qa[0,:], 'r--', lw=1.5)
line2 = ax2.plot(x, Q[1,:], 'k', x, Qa[1,:], 'r--', lw=1.5)
plt.legend(iter(line2), ('F. Volume', 'Analytic'))
plt.gcf().canvas.draw()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Initialization of setup
Step2: 2. Initial condition
Step3: 3. Solution for the homogeneous problem
Step4: 4. Finite Volumes solution
|
12,850
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from scipy.interpolate import interp1d
with np.load('trajectory.npz') as data:
x = data['x']
y = data['y']
t = data['t']
assert isinstance(x, np.ndarray) and len(x)==40
assert isinstance(y, np.ndarray) and len(y)==40
assert isinstance(t, np.ndarray) and len(t)==40
newt = np.linspace(np.min(t),np.max(t),200)
xfunc = interp1d(t, x, kind='cubic')
yfunc = interp1d(t, y, kind='cubic')
newx = xfunc(newt)
newy = yfunc(newt)
assert newt[0]==t.min()
assert newt[-1]==t.max()
assert len(newt)==200
assert len(newx)==200
assert len(newy)==200
plt.figure(figsize=(9,9))
plt.plot(x,y, marker='o', linestyle='', label='original data');
plt.plot(newx,newy, label='interpolated');
plt.legend();
plt.grid(False)
plt.box(True)
plt.xlabel('x(t)')
plt.ylabel('y(t)')
assert True # leave this to grade the trajectory plot
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2D trajectory interpolation
Step2: Use these arrays to create interpolated functions $x(t)$ and $y(t)$. Then use those functions to create the following arrays
Step3: Make a parametric plot of ${x(t),y(t)}$ that shows the interpolated values and the original points
|
12,851
|
<ASSISTANT_TASK:>
Python Code:
cd /notebooks/exercise-08/
import yaml
txt =
{ "yaml": 'is', 'a superset': 'of json'}
ret = yaml.load(txt)
print(ret)
# Yoda loves dictionaries ;)
print(yaml.dump(ret))
# Customized dumper
print(yaml.dump(ret, default_flow_style=False))
txt =
# Yaml comments starts with hash
you: {'can':'use', 'brace':'syntax'}
ret = yaml.load(txt)
print(yaml.dump(ret))
print(yaml.dump(ret, default_flow_style=False))
# Yaml can describe list..
print(yaml.load(
- tasks:
- contains
- a
- list
- of
- modules
))
# .. and maps / dicts
print(yaml.load(
- tasks:
- name: "this dict has two keys: name and debug"
debug: msg="Welcome to Rimini!"
))
print(yaml.load(
this_works: http://no-spaces-after-colon:8080
))
print(yaml.load(this_no: spaces: after colon))
# Quoting is important!
print(yaml.load(
that: "works: though"
))
# This is fine
print(yaml.load(
this_is: fine={{in_yaml}} but
))
# but with ansible you should
print(yaml.load(
always: quote="{{moustaches}}"
))
text =
one_line: "Rimini is also tied with the great cinema, since it is representative of Federico Fellini's world of fantasy."
trimmed_one_line: >-
Rimini is also tied with the great cinema,
since it is representative of Federico Fellini's
world of fantasy.
always_one_line: >
Rimini is also tied with the great cinema,
since it is representative of Federico Fellini's
world of fantasy.
ret = yaml.load(text)
assert ret['one_line'] == ret['trimmed_one_line'] == ret['always_one_line']
text =
multi: "Rimini, or the ancient Ariminum,
is an art heritage city with over 22 centuries of history.
In 268 B.C., the Roman Senate sent six thousand settlers
who founded the city that was meant to be strategically central
and to develop to this day."
# Comments are ignored from parser.
preserves: |
Rimini, or the ancient Ariminum,
is an art heritage city with over 22 centuries of history.
In 268 B.C., the Roman Senate sent six thousand settlers
who founded the city that was meant to be strategically central
and to develop to this day.
trims: |-
Rimini, or the ancient Ariminum,
is an art heritage city with over 22 centuries of history.
In 268 B.C., the Roman Senate sent six thousand settlers
who founded the city that was meant to be strategically central
and to develop to this day.
ret = yaml.load(text)
print(yaml.dump(ret, default_flow_style=False))
# exercise
preserves = ret['preserves']
trims = ret['trims']
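# A possible check for the exercise (sketch): the literal block introduced with '|'
# keeps its trailing newline, while '|-' strips it, so the bodies only differ at the end.
assert preserves == trims + '\n'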
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step5: What's yaml?
Step11: Quoting
Step13: Long texts
Step15: Or write a multi_line string with proper carets
Step16: Exercise
|
12,852
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
def test_mul():
arr = np.array([0.0, 1.0, 1.1])
v, expected = 1.1, np.array([0.0, 1.1, 1.21])
assert arr * v == expected, 'bad multiplication'
test_mul()
np.array([1,2,3]) == np.array([1, 1, 3])
bool(np.array([1, 2, 3]))
np.all([True, True, True])
def test_mul():
arr = np.array([0.0, 1.0, 1.1])
v, expected = 1.1, np.array([0.0, 1.1, 1.21])
assert np.all(arr * v == expected), 'bad multiplication'
test_mul()
1.1 * 1.1
def test_mul():
arr = np.array([0.0, 1.0, 1.1])
v, expected = 1.1, np.array([0.0, 1.1, 1.21])
assert np.allclose(arr * v, expected), 'bad multiplication'
test_mul()
def test_div():
arr1, arr2 = np.array([1.0, np.inf, 2.0]), np.array([2.0, np.inf, 2.0])
expected = np.array([0.5, np.nan, 1.0])
assert np.allclose(arr1 / arr2, expected), 'bad nan'
test_div()
np.nan == np.nan
np.isnan(np.inf/np.inf)
def test_div():
arr1, arr2 = np.array([1.0, np.inf, 2.0]), np.array([2.0, np.inf, 2.0])
expected = np.array([0.5, np.nan, 1.0])
result = arr1 / arr2
result[np.isnan(result)] = 0.0
expected[np.isnan(expected)] = 0.0
assert np.allclose(result, expected), 'bad nan'
test_div()
def test_div():
arr1, arr2 = np.array([1.0, np.inf, 2.0]), np.array([2.0, np.inf, 2.0])
expected = np.array([0.5, np.nan, 1.0])
assert np.allclose(arr1 / arr2, expected, equal_nan=True), 'bad nan'
test_div()
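# An alternative worth knowing (a sketch, not from the original notebook): numpy's
# testing helpers raise AssertionError with an informative diff and handle nan for you.
def test_div_with_numpy_testing():
    arr1, arr2 = np.array([1.0, np.inf, 2.0]), np.array([2.0, np.inf, 2.0])
    expected = np.array([0.5, np.nan, 1.0])
    np.testing.assert_allclose(arr1 / arr2, expected, equal_nan=True)

test_div_with_numpy_testing()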
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Naive Approach
Step2: This is due to the fact that when we compare two numpy arrays with == we'll get an array of boolean values comparing each element.
Step3: And the truth value of an array (as the error says) is ambiguous.
Step4: We need to use np.all to check that all elements are equal.
Step5: Using np.all
Step6: This is due to the fact that floating points are not exact.
Step7: This is not a bug in Python but how floating points are implemented. You'll get the same result in C, Java, Go ...
Step8: Oh nan, Let Me Count the Ways ...
Step9: This is due to the fact that nan does not equal itself.
Step10: To check if a number is nan we need to use np.isnan
Step11: We have two options to solve this
Step12: Option 2
|
12,853
|
<ASSISTANT_TASK:>
Python Code:
%%bigquery df1
SELECT
team_code,
AVG(SAFE_DIVIDE(fgm + 0.5 * fgm3,fga)) AS offensive_shooting_efficiency,
AVG(SAFE_DIVIDE(opp_fgm + 0.5 * opp_fgm3,opp_fga)) AS opponents_shooting_efficiency,
AVG(win) AS win_rate,
COUNT(win) AS num_games
FROM lab_dev.team_box
WHERE fga IS NOT NULL
GROUP BY team_code
df1 = df1[df1['num_games'] > 100]
df1.plot(x='offensive_shooting_efficiency', y='win_rate', style='o');
df1.plot(x='opponents_shooting_efficiency', y='win_rate', style='o');
df1.corr()['win_rate']
%%bigquery df3
SELECT
team_code,
AVG(SAFE_DIVIDE(ftm,fga)) AS freethrows,
AVG(win) AS win_rate,
COUNT(win) AS num_games
FROM lab_dev.team_box
WHERE fga IS NOT NULL
GROUP BY team_code
HAVING num_games > 100
%%bigquery
SELECT
team_code,
is_home,
SAFE_DIVIDE(fgm + 0.5 * fgm3,fga) AS offensive_shooting_efficiency,
SAFE_DIVIDE(opp_fgm + 0.5 * opp_fgm3,opp_fga) AS opponents_shooting_efficiency,
SAFE_DIVIDE(tov,fga+0.475*fta+tov-oreb) AS turnover_percent,
SAFE_DIVIDE(opp_tov,opp_fga+0.475*opp_fta+opp_tov-opp_oreb) AS opponents_turnover_percent,
SAFE_DIVIDE(oreb,oreb + opp_dreb) AS rebounding,
SAFE_DIVIDE(opp_oreb,opp_oreb + dreb) AS opponents_rebounding,
SAFE_DIVIDE(ftm,fga) AS freethrows,
SAFE_DIVIDE(opp_ftm,opp_fga) AS opponents_freethrows,
win
FROM lab_dev.team_box
WHERE fga IS NOT NULL and win IS NOT NULL
LIMIT 10
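# A possible next step (a sketch, not part of the original lab): these "four factors"
# style features could feed a BigQuery ML logistic regression, e.g.
#
#   CREATE OR REPLACE MODEL `lab_dev.win_model`   -- hypothetical model name
#   OPTIONS(model_type='logistic_reg', input_label_cols=['win']) AS
#   SELECT ...  -- the feature columns above plus the win label
#
# keeping in mind, as the prompt asks, that in-game efficiency stats are not known
# before the game is played, so a real model would need pre-game aggregates.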
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's remove the entries corresponding to teams that played fewer than 100 games, and then plot it.
Step2: Does the relationship make sense? Do you think offensive and defensive efficiency are good predictors of a team's performance?
Step3: Turnover Percentage
Step4: Machine Learning
Step5: Is this correct, though? Will we know the offensive efficiency of the team before the game is played? How do we fix it?
|
12,854
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from collections import Counter
data = np.fromfile('/home/daniel/debian_testing_chroot/tmp/shockburst.u8', dtype = 'uint8').reshape((-1,34))
crc_table = [
0x0000, 0x1021, 0x2042, 0x3063, 0x4084, 0x50a5, 0x60c6, 0x70e7,
0x8108, 0x9129, 0xa14a, 0xb16b, 0xc18c, 0xd1ad, 0xe1ce, 0xf1ef,
0x1231, 0x0210, 0x3273, 0x2252, 0x52b5, 0x4294, 0x72f7, 0x62d6,
0x9339, 0x8318, 0xb37b, 0xa35a, 0xd3bd, 0xc39c, 0xf3ff, 0xe3de,
0x2462, 0x3443, 0x0420, 0x1401, 0x64e6, 0x74c7, 0x44a4, 0x5485,
0xa56a, 0xb54b, 0x8528, 0x9509, 0xe5ee, 0xf5cf, 0xc5ac, 0xd58d,
0x3653, 0x2672, 0x1611, 0x0630, 0x76d7, 0x66f6, 0x5695, 0x46b4,
0xb75b, 0xa77a, 0x9719, 0x8738, 0xf7df, 0xe7fe, 0xd79d, 0xc7bc,
0x48c4, 0x58e5, 0x6886, 0x78a7, 0x0840, 0x1861, 0x2802, 0x3823,
0xc9cc, 0xd9ed, 0xe98e, 0xf9af, 0x8948, 0x9969, 0xa90a, 0xb92b,
0x5af5, 0x4ad4, 0x7ab7, 0x6a96, 0x1a71, 0x0a50, 0x3a33, 0x2a12,
0xdbfd, 0xcbdc, 0xfbbf, 0xeb9e, 0x9b79, 0x8b58, 0xbb3b, 0xab1a,
0x6ca6, 0x7c87, 0x4ce4, 0x5cc5, 0x2c22, 0x3c03, 0x0c60, 0x1c41,
0xedae, 0xfd8f, 0xcdec, 0xddcd, 0xad2a, 0xbd0b, 0x8d68, 0x9d49,
0x7e97, 0x6eb6, 0x5ed5, 0x4ef4, 0x3e13, 0x2e32, 0x1e51, 0x0e70,
0xff9f, 0xefbe, 0xdfdd, 0xcffc, 0xbf1b, 0xaf3a, 0x9f59, 0x8f78,
0x9188, 0x81a9, 0xb1ca, 0xa1eb, 0xd10c, 0xc12d, 0xf14e, 0xe16f,
0x1080, 0x00a1, 0x30c2, 0x20e3, 0x5004, 0x4025, 0x7046, 0x6067,
0x83b9, 0x9398, 0xa3fb, 0xb3da, 0xc33d, 0xd31c, 0xe37f, 0xf35e,
0x02b1, 0x1290, 0x22f3, 0x32d2, 0x4235, 0x5214, 0x6277, 0x7256,
0xb5ea, 0xa5cb, 0x95a8, 0x8589, 0xf56e, 0xe54f, 0xd52c, 0xc50d,
0x34e2, 0x24c3, 0x14a0, 0x0481, 0x7466, 0x6447, 0x5424, 0x4405,
0xa7db, 0xb7fa, 0x8799, 0x97b8, 0xe75f, 0xf77e, 0xc71d, 0xd73c,
0x26d3, 0x36f2, 0x0691, 0x16b0, 0x6657, 0x7676, 0x4615, 0x5634,
0xd94c, 0xc96d, 0xf90e, 0xe92f, 0x99c8, 0x89e9, 0xb98a, 0xa9ab,
0x5844, 0x4865, 0x7806, 0x6827, 0x18c0, 0x08e1, 0x3882, 0x28a3,
0xcb7d, 0xdb5c, 0xeb3f, 0xfb1e, 0x8bf9, 0x9bd8, 0xabbb, 0xbb9a,
0x4a75, 0x5a54, 0x6a37, 0x7a16, 0x0af1, 0x1ad0, 0x2ab3, 0x3a92,
0xfd2e, 0xed0f, 0xdd6c, 0xcd4d, 0xbdaa, 0xad8b, 0x9de8, 0x8dc9,
0x7c26, 0x6c07, 0x5c64, 0x4c45, 0x3ca2, 0x2c83, 0x1ce0, 0x0cc1,
0xef1f, 0xff3e, 0xcf5d, 0xdf7c, 0xaf9b, 0xbfba, 0x8fd9, 0x9ff8,
0x6e17, 0x7e36, 0x4e55, 0x5e74, 0x2e93, 0x3eb2, 0x0ed1, 0x1ef0
]
def crc(frame):
c = 0xB95E # CRC of initial E7E7E7E7E7 address field
for b in frame:
tbl_idx = ((c >> 8) ^ b) & 0xff
c = (crc_table[tbl_idx] ^ (c << 8)) & 0xffff
return c & 0xffff
crc_ok = np.array([crc(d) == 0 for d in data])
frame_count = data[crc_ok,:2].ravel().view('uint16')
frame_count_unique = np.unique(frame_count)
np.sum(np.diff(frame_count_unique)-1)
np.where(np.diff(frame_count_unique)-1)
len(frame_count_unique)
plt.plot(np.diff(frame_count_unique)!=1)
frame_size = 30
with open('/tmp/file', 'wb') as f:
for count in frame_count_unique:
valid_frames = data[crc_ok][frame_count == count]
counter = Counter([bytes(frame[2:]) for frame in valid_frames])
f.seek(count * frame_size)
f.write(counter.most_common()[0][0])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The data shockburst.u8 contains ShockBurst frames without the 0xE7E7E7E7E7 address header (including frame counter, image payload and CRC). It has been obtained with nrf24.grc.
Step2: The CRC used in ShockBurst frames is CRC16_CCITT_FALSE from this online calculator. Since the 0xE7E7E7E7E7 address is included in the CRC calculation but is missing in our data, we take this into account by modifying the initial XOR value.
Step3: Number of skipped frames
Step4: Number of correct frames
Step5: Write frames to a file according to their frame number. We do a majority voting to select among different frames with the same frame number (there are corrupted frames with good CRC). The file has gaps with zeros where frames are missing.
|
12,855
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-3', 'sandbox-1', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
12,856
|
<ASSISTANT_TASK:>
Python Code:
from google.cloud import aiplatform
REGION = "us-central1"
PROJECT_ID = !(gcloud config get-value project)
PROJECT_ID = PROJECT_ID[0]
# Set `PATH` to include the directory containing KFP CLI
PATH = %env PATH
%env PATH=/home/jupyter/.local/bin:{PATH}
!cat trainer_image_vertex/Dockerfile
IMAGE_NAME = "trainer_image_covertype_vertex"
TAG = "latest"
TRAINING_CONTAINER_IMAGE_URI = f"gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{TAG}"
TRAINING_CONTAINER_IMAGE_URI
!gcloud builds submit --timeout 15m --tag $TRAINING_CONTAINER_IMAGE_URI trainer_image_vertex
SERVING_CONTAINER_IMAGE_URI = (
"us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-20:latest"
)
%%writefile ./pipeline_vertex/pipeline.py
# Copyright 2021 Google LLC
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this
# file except in compliance with the License. You may obtain a copy of the License at
# https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS"
# BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.
Kubeflow Covertype Pipeline.
import os
from kfp import dsl
from training_lightweight_component import train_and_deploy
from tuning_lightweight_component import tune_hyperparameters
PIPELINE_ROOT = os.getenv("PIPELINE_ROOT")
PROJECT_ID = os.getenv("PROJECT_ID")
REGION = os.getenv("REGION")
TRAINING_CONTAINER_IMAGE_URI = os.getenv("TRAINING_CONTAINER_IMAGE_URI")
SERVING_CONTAINER_IMAGE_URI = os.getenv("SERVING_CONTAINER_IMAGE_URI")
TRAINING_FILE_PATH = os.getenv("TRAINING_FILE_PATH")
VALIDATION_FILE_PATH = os.getenv("VALIDATION_FILE_PATH")
MAX_TRIAL_COUNT = int(os.getenv("MAX_TRIAL_COUNT", "5"))
PARALLEL_TRIAL_COUNT = int(os.getenv("PARALLEL_TRIAL_COUNT", "5"))
THRESHOLD = float(os.getenv("THRESHOLD", "0.6"))
@dsl.pipeline(
name="covertype-kfp-pipeline",
description="The pipeline training and deploying the Covertype classifier",
pipeline_root=PIPELINE_ROOT,
)
def covertype_train(
training_container_uri: str = TRAINING_CONTAINER_IMAGE_URI,
serving_container_uri: str = SERVING_CONTAINER_IMAGE_URI,
training_file_path: str = TRAINING_FILE_PATH,
validation_file_path: str = VALIDATION_FILE_PATH,
accuracy_deployment_threshold: float = THRESHOLD,
max_trial_count: int = MAX_TRIAL_COUNT,
parallel_trial_count: int = PARALLEL_TRIAL_COUNT,
pipeline_root: str = PIPELINE_ROOT,
):
staging_bucket = f"{pipeline_root}/staging"
tuning_op = # TODO
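    # One possible solution sketch (hypothetical): the exact keyword arguments depend on how
    # tune_hyperparameters() is defined in tuning_lightweight_component.py, e.g.
    #     tuning_op = tune_hyperparameters(
    #         project=PROJECT_ID,
    #         location=REGION,
    #         container_uri=training_container_uri,
    #         training_file_path=training_file_path,
    #         validation_file_path=validation_file_path,
    #         staging_bucket=staging_bucket,
    #         max_trial_count=max_trial_count,
    #         parallel_trial_count=parallel_trial_count,
    #     )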
accuracy = tuning_op.outputs["best_accuracy"]
with dsl.Condition(
accuracy >= accuracy_deployment_threshold, name="deploy_decision"
):
train_and_deploy_op = # TODO
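        # One possible solution sketch (hypothetical): the argument names and the tuning outputs
        # consumed here depend on how train_and_deploy() is defined in
        # training_lightweight_component.py, e.g.
        #     train_and_deploy_op = train_and_deploy(
        #         project=PROJECT_ID,
        #         location=REGION,
        #         container_uri=training_container_uri,
        #         serving_container_uri=serving_container_uri,
        #         training_file_path=training_file_path,
        #         validation_file_path=validation_file_path,
        #         staging_bucket=staging_bucket,
        #         alpha=tuning_op.outputs["best_alpha"],
        #         max_iter=tuning_op.outputs["best_max_iter"],
        #     )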
ARTIFACT_STORE = f"gs://{PROJECT_ID}-kfp-artifact-store"
PIPELINE_ROOT = f"{ARTIFACT_STORE}/pipeline"
DATA_ROOT = f"{ARTIFACT_STORE}/data"
TRAINING_FILE_PATH = f"{DATA_ROOT}/training/dataset.csv"
VALIDATION_FILE_PATH = f"{DATA_ROOT}/validation/dataset.csv"
%env PIPELINE_ROOT={PIPELINE_ROOT}
%env PROJECT_ID={PROJECT_ID}
%env REGION={REGION}
%env SERVING_CONTAINER_IMAGE_URI={SERVING_CONTAINER_IMAGE_URI}
%env TRAINING_CONTAINER_IMAGE_URI={TRAINING_CONTAINER_IMAGE_URI}
%env TRAINING_FILE_PATH={TRAINING_FILE_PATH}
%env VALIDATION_FILE_PATH={VALIDATION_FILE_PATH}
!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}
PIPELINE_JSON = "covertype_kfp_pipeline.json"
# TODO
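# One possible way to compile the pipeline into PIPELINE_JSON (sketch; assumes the KFP SDK's
# v2-compatible compiler CLI is installed and pipeline.py lives in ./pipeline_vertex):
#     !dsl-compile-v2 --py pipeline_vertex/pipeline.py --output $PIPELINE_JSON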
!head {PIPELINE_JSON}
# TODO
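# One possible way to submit the compiled pipeline to Vertex AI Pipelines (sketch; the display
# name is an arbitrary placeholder):
#     aiplatform.init(project=PROJECT_ID, location=REGION)
#     pipeline_job = aiplatform.PipelineJob(
#         display_name="covertype_kfp_pipeline",
#         template_path=PIPELINE_JSON,
#         enable_caching=False,
#     )
#     pipeline_job.run()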
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Understanding the pipeline design
Step2: Let's now build and push this trainer container to the container registry
Step3: To match the ML framework version used at training time when serving the model, we will have to supply the following serving container to the pipeline
Step5: Note
Step6: Compile the pipeline
Step7: Let us make sure that the ARTIFACT_STORE has been created, and let us create it if not
Step8: Note
Step9: Exercise
Step10: Note
Step11: Deploy the pipeline package
|
12,857
|
<ASSISTANT_TASK:>
Python Code:
# Ensure compatibility with Python 2 and 3
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from climlab import constants as const
from climlab.solar.insolation import daily_insolation
help(daily_insolation)
daily_insolation(45,1)
daily_insolation(45,181)
lat = np.linspace(-90., 90., 30)
Q = daily_insolation(lat, 80)
fig, ax = plt.subplots()
ax.plot(lat,Q)
ax.set_xlim(-90,90); ax.set_xticks([-90,-60,-30,-0,30,60,90])
ax.set_xlabel('Latitude')
ax.set_ylabel('W/m2')
ax.grid()
ax.set_title('Daily average insolation on March 21')
lat = np.linspace( -90., 90., 500)
days = np.linspace(0, const.days_per_year, 365 )
Q = daily_insolation( lat, days )
fig, ax = plt.subplots(figsize=(10,8))
CS = ax.contour( days, lat, Q , levels = np.arange(0., 600., 50.) )
ax.clabel(CS, CS.levels, inline=True, fmt='%r', fontsize=10)
ax.set_xlabel('Days since January 1', fontsize=16 )
ax.set_ylabel('Latitude', fontsize=16 )
ax.set_title('Daily average insolation', fontsize=24 )
ax.contourf ( days, lat, Q, levels=[-1000., 0.], colors='k' )
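# Weight the annual-mean insolation at each latitude by cos(latitude) (proportional to the area
# of each latitude band) to obtain the global annual-mean insolation, approximately S0/4.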
Qaverage = np.average(np.mean(Q, axis=1), weights=np.cos(np.deg2rad(lat)))
print( 'The annual, global average insolation is %.2f W/m2.' %Qaverage)
summer_solstice = 170
winter_solstice = 353
fig, ax = plt.subplots(figsize=(10,8))
ax.plot( lat, Q[:,(summer_solstice, winter_solstice)] );
ax.plot( lat, np.mean(Q, axis=1), linewidth=2 )
ax.set_xbound(-90, 90)
ax.set_xticks( range(-90,100,30) )
ax.set_xlabel('Latitude', fontsize=16 );
ax.set_ylabel('Insolation (W m$^{-2}$)', fontsize=16 );
ax.grid()
%load_ext version_information
%version_information numpy, matplotlib, climlab
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Contents
Step2: First, get a little help on using the daily_insolation function
Step3: Here are a few simple examples.
Step4: Same location, July 1
Step5: We could give an array of values. Let's calculate and plot insolation at all latitudes on the spring equinox = March 21 = Day 80
Step6: In-class exercises
Step7: And make a contour plot of Q as function of latitude and time of year.
Step8: Time and space averages
Step9: Also plot the zonally averaged insolation at a few different times of the year
Step10: <div class="alert alert-success">
|
12,858
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
d = {'l': ['left', 'right', 'left', 'right', 'left', 'right'],
'r': ['right', 'left', 'right', 'left', 'right', 'left'],
'v': [-1, 1, -1, 1, -1, np.nan]}
df = pd.DataFrame(d)
def g(df):
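    # Sum column 'v' within each 'l' group; skipna=False makes any group containing NaN sum to NaN.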
return df.groupby('l')['v'].apply(pd.Series.sum,skipna=False)
result = g(df.copy())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
12,859
|
<ASSISTANT_TASK:>
Python Code:
import control as ct
import numpy as np
import matplotlib.pyplot as plt
import math
saturation=ct.saturation_nonlinearity(0.75)
x = np.linspace(-2, 2, 50)
plt.plot(x, saturation(x))
plt.xlabel("Input, x")
plt.ylabel("Output, y = sat(x)")
plt.title("Input/output map for a saturation nonlinearity");
amp_range = np.linspace(0, 2, 50)
plt.plot(amp_range, ct.describing_function(saturation, amp_range))
plt.xlabel("Amplitude A")
plt.ylabel("Describing function, N(A)")
plt.title("Describing function for a saturation nonlinearity");
backlash = ct.friction_backlash_nonlinearity(0.5)
theta = np.linspace(0, 2*np.pi, 50)
x = np.sin(theta)
plt.plot(x, [backlash(z) for z in x])
plt.xlabel("Input, x")
plt.ylabel("Output, y = backlash(x)")
plt.title("Input/output map for a friction-dominated backlash nonlinearity");
amp_range = np.linspace(0, 2, 50)
N_a = ct.describing_function(backlash, amp_range)
plt.figure()
plt.plot(amp_range, abs(N_a))
plt.xlabel("Amplitude A")
plt.ylabel("Amplitude of describing function, N(A)")
plt.title("Describing function for a backlash nonlinearity")
plt.figure()
plt.plot(amp_range, np.angle(N_a))
plt.xlabel("Amplitude A")
plt.ylabel("Phase of describing function, N(A)")
plt.title("Describing function for a backlash nonlinearity");
# Define a saturation nonlinearity as a simple function
def my_saturation(x):
if abs(x) >= 1:
return math.copysign(1, x)
else:
return x
amp_range = np.linspace(0, 2, 50)
plt.plot(amp_range, ct.describing_function(my_saturation, amp_range).real)
plt.xlabel("Amplitude A")
plt.ylabel("Describing function, N(A)")
plt.title("Describing function for a saturation nonlinearity");
# Linear dynamics
H_simple = ct.tf([8], [1, 2, 2, 1])
omega = np.logspace(-3, 3, 500)
# Nonlinearity
F_saturation = ct.saturation_nonlinearity(1)
amp = np.linspace(0, 5, 50)
# Describing function plot (return value = amp, freq)
ct.describing_function_plot(H_simple, F_saturation, amp, omega)
# Create an I/O system simulation to see what happens
io_saturation = ct.NonlinearIOSystem(
None,
lambda t, x, u, params: F_saturation(u),
inputs=1, outputs=1
)
sys = ct.feedback(ct.tf2io(H_simple), io_saturation)
T = np.linspace(0, 30, 200)
t, y = ct.input_output_response(sys, T, 0.1, 0)
plt.plot(t, y);
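# The simulated closed-loop response should settle into a sustained oscillation, consistent with
# the limit cycle predicted by the describing function analysis above.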
# Linear dynamics
H_simple = ct.tf([1], [1, 2, 2, 1])
H_multiple = H_simple * ct.tf(*ct.pade(5, 4)) * 4
omega = np.logspace(-3, 3, 500)
# Nonlinearity
F_backlash = ct.friction_backlash_nonlinearity(1)
amp = np.linspace(0.6, 5, 50)
# Describing function plot
ct.describing_function_plot(H_multiple, F_backlash, amp, omega, mirror_style=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Built-in describing functions
Step2: Backlash nonlinearity
Step3: User-defined, static nonlinearities
Step4: Stability analysis using describing functions
Step5: The intersection occurs at amplitude 3.3 and frequency 1.4 rad/sec (= 0.2 Hz) and thus we predict a limit cycle with amplitude 3.3 and period of approximately 5 seconds.
Step6: Limit cycle prediction for a time-delay system with backlash
|
12,860
|
<ASSISTANT_TASK:>
Python Code:
from goatools.base import download_ncbi_associations
# fin -> Filename of input file (file to be read)
fin_gene2go = download_ncbi_associations()
from goatools.anno.genetogo_reader import Gene2GoReader
objanno_hsa = Gene2GoReader(fin_gene2go, taxids=[9606])
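# taxids=True loads the gene2go annotations for all species in the file, not just a single taxid.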
objanno_all = Gene2GoReader(fin_gene2go, taxids=True)
ns2assc_hsa1 = objanno_hsa.get_ns2assc()
from itertools import chain
def prt_assc_counts(ns2assc):
Print the number of genes and GO IDs in an association
for nspc, gene2goids in sorted(ns2assc.items()):
print("{NS} {N:6,} genes, {GOs:6,} GOs".format(
NS=nspc, N=len(gene2goids), GOs=len(set.union(*gene2goids.values()))))
prt_assc_counts(ns2assc_hsa1)
ns2assc_hsa2 = objanno_all.get_ns2assc(9606)
prt_assc_counts(ns2assc_hsa2)
ns2assc_mmu = objanno_all.get_ns2assc(10090)
prt_assc_counts(ns2assc_mmu)
ns2assc_two = objanno_all.get_ns2assc({9606, 10090})
prt_assc_counts(ns2assc_two)
ns2assc_all = objanno_all.get_ns2assc(True)
prt_assc_counts(ns2assc_all)
ns2assc_all = objanno_all.get_ns2assc()
print(ns2assc_all)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2) Read NCBI annotation file, "gene2go"
Step2: 2b) Read all taxids
Step4: 3) Get associations, split by namespace (Only human annotations loaded)
Step5: 4) Get associations, split by namespace (Many taxids loaded)
Step6: 4b) Get associations for one species (mouse)
Step7: 4c) Combine associations for multiple species (human and mouse)
Step8: 4d) Combine all associations
Step9: 4e) Try getting unspecified taxids
|
12,861
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
from eden.util import configure_logging
import logging
BABELDRAW=False
DEBUG=False
NJOBS=4
if DEBUG: NJOBS=1
configure_logging(logging.getLogger(),verbosity=1+DEBUG)
from IPython.core.display import HTML
HTML('<style>.container { width:95% !important; }</style>')
%matplotlib inline
# get data
from eden.converter.graph.gspan import gspan_to_eden
from itertools import islice
def get_graphs(dataset_fname='../toolsdata/bursi.pos.gspan', size=100):
return islice(gspan_to_eden(dataset_fname),size)
from graphlearn.utils import draw
import graphlearn.abstract_graphs.minortransform as transform
import graphlearn.abstract_graphs.minordecompose as decompose
from eden.graph import Vectorizer
from sklearn.cluster import MiniBatchKMeans
from sklearn.cluster import KMeans
import math
#preparing
v=Vectorizer(complexity=3)
make_decomposer = decompose.make_decomposergen(include_base=False, base_thickness_list=[2])
# nodes in all graphs get scored.
# the default functionality is to take all scores and cluster them
# such that nodes that get assigned the same cluster can be contracted in a minor graph.
# ShapeCluster is going the lazy route and uses the score of the node directly for the clusterid
class ShapeCluster:
def fit(self,li):
pass
def predict(self,i):
return [math.ceil(i)]
pp=transform.GraphMinorTransformer(#core_shape_cluster =KMeans(n_clusters=4),
core_shape_cluster =ShapeCluster(),
name_cluster =MiniBatchKMeans(n_clusters=6),
save_graphclusters =True,
shape_score_threshold=2.5,
shape_min_size=2)
pp.set_param(v)
# the magic happens here
decomposers=[make_decomposer(v,x) for x in pp.fit_transform(get_graphs(size=200))]
# lets look at some clusters
if False:
for cluster_id in pp.graphclusters:
print('cluster id: %d num: %d' % (cluster_id, len(pp.graphclusters[cluster_id])))
if cluster_id != -1:
draw.graphlearn(pp.graphclusters[cluster_id][:7], n_graphs_per_line=7,
size=6, vertex_color='_label_', prog='neato', colormap='Set3',
contract=False,edge_label='label')
#lets draw what we did there
for i in range(3):
draw.graphlearn([decomposers[i+5].pre_vectorizer_graph(nested=True),decomposers[i+5].base_graph(),decomposers[i+5].abstract_graph()],
size=10,
contract=True,
abstract_color='red',
vertex_label='label',nesting_edge_alpha=0.7)
#parameters
radius_list=[0,2]
thickness_list=[2,4]
base_thickness_list=[2]
#extract
cips=decomposers[0].all_core_interface_pairs(thickness_list=[2],radius_list=[0,1],hash_bitmask=2**20-1)
#draw
draw.graphlearn([cips[0][0].graph,cips[0][1].graph], contract=False)
%%time
from graphlearn.graphlearn import Sampler as graphlearn_sampler
graphs = get_graphs(size=1000)
sampler=graphlearn_sampler(radius_list=[0,1],
thickness_list=[1],
min_cip_count=2,
min_interface_count=2,
decomposergen=make_decomposer,
graphtransformer=transform.GraphMinorTransformer(
core_shape_cluster =ShapeCluster(),
name_cluster =MiniBatchKMeans(n_clusters=6),
                            save_graphclusters =True))
sampler.fit(graphs,grammar_n_jobs=NJOBS)
print 'done'
draw.draw_grammar(sampler.lsgg.productions,n_productions=5,n_graphs_per_production=5,
n_graphs_per_line=5, size=9, contract=False,
colormap='Paired', invert_colormap=False,node_border=1,
vertex_alpha=0.6, edge_alpha=0.5, node_size=450, abstract_interface=True)
%%time
import graphlearn.utils.draw as draw
import itertools
#parameters
graphs = get_graphs()
id_start=15
id_end=id_start+9
graphs = itertools.islice(graphs,id_start,id_end)
n_steps=50
# sampling with many arguments.
graphs = sampler.sample(graphs,
n_samples=5,
batch_size=1,
n_steps=n_steps,
n_jobs=1,
quick_skip_orig_cip=False,
probabilistic_core_choice=True,
burnin=0,
improving_threshold=0.5,
select_cip_max_tries=100,
keep_duplicates=True,
include_seed=True)
scores=[]
ids=range(id_start,id_end)
for i,path_graphs in enumerate(graphs):
# for each sampling path:
print 'Graph id: %d'%(ids[i])
#collect scores so that we can display the score graph later
scores.append(sampler.monitors[i].sampling_info['score_history'])
# show graphs
if not BABELDRAW:
draw.graphlearn(path_graphs,
n_graphs_per_line=5, size=10,
colormap='Paired', invert_colormap=False,node_border=0.5, vertex_color='_label_',
vertex_alpha=0.5, edge_alpha=0.7, node_size=450)
else:
from graphlearn.utils import openbabel
openbabel.draw(path_graphs)
%matplotlib inline
from itertools import islice
import numpy as np
import matplotlib.pyplot as plt
step=1
num_graphs_per_plot=3
num_plots = int(np.ceil(len(scores) / float(num_graphs_per_plot)))
for i in range(num_plots):
plt.figure(figsize=(10,5))
for j,score in enumerate(scores[i*num_graphs_per_plot:i*num_graphs_per_plot+num_graphs_per_plot]):
data = list(islice(score,None, None, step))
plt.plot(data, label='graph %d'%(j+i*num_graphs_per_plot+id_start))
plt.legend(loc='lower right')
plt.grid()
plt.ylim(-0.1,1.1)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Demonstration of the preprocessor learning the abstraction
Step2: Let's see if these wrappers give us CIPs, as this is their only purpose.
Step3: Train sampler
Step4: Inspect the induced grammar
Step5: sample molecules
Step6: plot score graph
|
12,862
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
import pandas as pd
class PCAForPandas(PCA):
This class is just a small wrapper around the PCA estimator of sklearn including normalization to make it
compatible with pandas DataFrames.
def __init__(self, **kwargs):
self._z_scaler = StandardScaler()
super(self.__class__, self).__init__(**kwargs)
self._X_columns = None
def fit(self, X, y=None):
Normalize X and call the fit method of the base class with numpy arrays instead of pandas data frames.
X = self._prepare(X)
self._z_scaler.fit(X.values, y)
z_data = self._z_scaler.transform(X.values, y)
return super(self.__class__, self).fit(z_data, y)
def fit_transform(self, X, y=None):
Call the fit and the transform method of this class.
X = self._prepare(X)
self.fit(X, y)
return self.transform(X, y)
def transform(self, X, y=None):
Normalize X and call the transform method of the base class with numpy arrays instead of pandas data frames.
X = self._prepare(X)
z_data = self._z_scaler.transform(X.values, y)
transformed_ndarray = super(self.__class__, self).transform(z_data)
pandas_df = pd.DataFrame(transformed_ndarray)
pandas_df.columns = ["pca_{}".format(i) for i in range(len(pandas_df.columns))]
return pandas_df
def _prepare(self, X):
Check if the data is a pandas DataFrame and sorts the column names.
:raise AttributeError: if pandas is not a DataFrame or the columns of the new X is not compatible with the
columns from the previous X data
if not isinstance(X, pd.DataFrame):
raise AttributeError("X is not a pandas DataFrame")
X.sort_index(axis=1, inplace=True)
if self._X_columns is not None:
if self._X_columns != list(X.columns):
raise AttributeError("The columns of the new X is not compatible with the columns from the previous X data")
else:
self._X_columns = list(X.columns)
return X
from tsfresh.examples.robot_execution_failures import download_robot_execution_failures, load_robot_execution_failures
from tsfresh.feature_extraction import extract_features
from tsfresh.feature_selection import select_features
from tsfresh.utilities.dataframe_functions import impute
from tsfresh.feature_extraction import ComprehensiveFCParameters, MinimalFCParameters, settings
download_robot_execution_failures()
df, y = load_robot_execution_failures()
df_train = df.iloc[(df.id <= 87).values]
y_train = y[0:-1]
df_test = df.iloc[(df.id >= 87).values]
y_test = y[-2:]
df.head()
X_train = extract_features(df_train, column_id='id', column_sort='time', default_fc_parameters=MinimalFCParameters(),
impute_function=impute)
X_train.head()
X_train_filtered = select_features(X_train, y_train)
X_train_filtered.tail()
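# Fit the PCA (and its internal z-scaler) on the training features only; the same fitted
# transformer is reused on the test features below to avoid information leakage.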
pca_train = PCAForPandas(n_components=4)
X_train_pca = pca_train.fit_transform(X_train_filtered)
# add index plus 1 to keep original index from robot example
X_train_pca.index += 1
X_train_pca.tail()
X_test_filtered = extract_features(df_test, column_id='id', column_sort='time',
kind_to_fc_parameters=settings.from_columns(X_train_filtered.columns),
impute_function=impute)
X_test_filtered
X_test_pca = pca_train.transform(X_test_filtered)
# reset index to keep original index from robot example
X_test_pca.index = [87, 88]
X_test_pca
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step5: tsfresh returns a great number of features. Depending on the dynamics of the inspected time series, some of them may be highly correlated.
Step6: Load robot failure example
Step7: Train
Step8: Select train features
Step9: Principal Component Analysis on train features
Step10: Test
Step11: Principal Component Analysis on test features
|
12,863
|
<ASSISTANT_TASK:>
Python Code:
import datetime
import pickle
import os
import pandas as pd
import xgboost as xgb
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import FeatureUnion, make_pipeline
from sklearn.utils import shuffle
from sklearn.base import clone
from sklearn.model_selection import train_test_split
from witwidget.notebook.visualization import WitWidget, WitConfigBuilder
import custom_transforms
import warnings
warnings.filterwarnings(action='ignore', category=DeprecationWarning)
os.environ['QWIKLABS_PROJECT_ID'] = ''
train_csv_path = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data'
COLUMNS = (
'age',
'workclass',
'fnlwgt',
'education',
'education-num',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'capital-gain',
'capital-loss',
'hours-per-week',
'native-country',
'income-level'
)
raw_train_data = pd.read_csv(train_csv_path, names=COLUMNS, skipinitialspace=True)
raw_train_data = shuffle(raw_train_data, random_state=4)
raw_train_data.head()
print(raw_train_data['income-level'].value_counts(normalize=True))
test_csv_path = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test'
raw_test_data = pd.read_csv(test_csv_path, names=COLUMNS, skipinitialspace=True, skiprows=1)
raw_test_data.head()
raw_train_features = raw_train_data.drop('income-level', axis=1).values
raw_test_features = raw_test_data.drop('income-level', axis=1).values
# Create training labels list
train_labels = (raw_train_data['income-level'] == '>50K').values.astype(int)
test_labels = (raw_test_data['income-level'] == '>50K.').values.astype(int)
numerical_indices = [0, 12]
categorical_indices = [1, 3, 5, 7]
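# Column positions in COLUMNS: 0 = age, 12 = hours-per-week (numerical);
# 1 = workclass, 3 = education, 5 = marital-status, 7 = relationship (categorical).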
p1 = make_pipeline(
custom_transforms.PositionalSelector(categorical_indices),
custom_transforms.StripString(),
custom_transforms.SimpleOneHotEncoder()
)
p2 = make_pipeline(
custom_transforms.PositionalSelector(numerical_indices),
StandardScaler()
)
p3 = FeatureUnion([
    ('categoricals', p1),
    ('numericals', p2),
])
pipeline = make_pipeline(
p3,
xgb.sklearn.XGBClassifier(max_depth=4)
)
pipeline.fit(raw_train_features, train_labels)
with open('model.pkl', 'wb') as model_file:
pickle.dump(pipeline, model_file)
!gsutil mb gs://$QWIKLABS_PROJECT_ID
%%bash
python setup.py sdist --formats=gztar
gsutil cp model.pkl gs://$QWIKLABS_PROJECT_ID/original/
gsutil cp dist/custom_transforms-0.1.tar.gz gs://$QWIKLABS_PROJECT_ID/
!gcloud ai-platform models create census_income_classifier --regions us-central1
%%bash
MODEL_NAME="census_income_classifier"
VERSION_NAME="original"
MODEL_DIR="gs://$QWIKLABS_PROJECT_ID/original/"
CUSTOM_CODE_PATH="gs://$QWIKLABS_PROJECT_ID/custom_transforms-0.1.tar.gz"
gcloud beta ai-platform versions create $VERSION_NAME \
--model $MODEL_NAME \
--runtime-version 1.15 \
--python-version 3.7 \
--origin $MODEL_DIR \
--package-uris $CUSTOM_CODE_PATH \
--prediction-class predictor.MyPredictor \
--region=global
%%writefile predictions.json
[25, "Private", 226802, "11th", 7, "Never-married", "Machine-op-inspct", "Own-child", "Black", "Male", 0, 0, 40, "United-States"]
!gcloud ai-platform predict --model=census_income_classifier --json-instances=predictions.json --version=original --region=global
num_datapoints = 2000
test_examples = np.hstack(
(raw_test_features[:num_datapoints],
test_labels[:num_datapoints].reshape(-1,1)
)
)
config_builder = (
WitConfigBuilder(test_examples.tolist(), COLUMNS)
.set_ai_platform_model(os.environ['QWIKLABS_PROJECT_ID'], 'census_income_classifier', 'original')
.set_target_feature('income-level')
.set_model_type('classification')
.set_label_vocab(['Under 50K', 'Over 50K'])
)
WitWidget(config_builder, height=800)
bal_data_path = 'https://storage.googleapis.com/cloud-training/dei/balanced_census_data.csv'
bal_data = pd.read_csv(bal_data_path, names=COLUMNS, skiprows=1)
bal_data.head()
bal_data['sex'].value_counts(normalize=True)
bal_data.groupby(['sex', 'income-level'])['sex'].count()
bal_data['income-level'] = bal_data['income-level'].isin(['>50K', '>50K.']).values.astype(int)
raw_bal_features = bal_data.drop('income-level', axis=1).values
bal_labels = bal_data['income-level'].values
pipeline_bal = clone(pipeline)
pipeline_bal.fit(raw_bal_features, bal_labels)
with open('model.pkl', 'wb') as model_file:
pickle.dump(pipeline_bal, model_file)
%%bash
gsutil cp model.pkl gs://$QWIKLABS_PROJECT_ID/balanced/
MODEL_NAME="census_income_classifier"
VERSION_NAME="balanced"
MODEL_DIR="gs://$QWIKLABS_PROJECT_ID/balanced/"
CUSTOM_CODE_PATH="gs://$QWIKLABS_PROJECT_ID/custom_transforms-0.1.tar.gz"
gcloud beta ai-platform versions create $VERSION_NAME \
--model $MODEL_NAME \
--runtime-version 1.15 \
--python-version 3.7 \
--origin $MODEL_DIR \
--package-uris $CUSTOM_CODE_PATH \
--prediction-class predictor.MyPredictor \
--region=global
bal_test_csv_path = 'https://storage.googleapis.com/cloud-training/dei/balanced_census_data_test.csv'
bal_test_data = pd.read_csv(bal_test_csv_path, names=COLUMNS, skipinitialspace=True)
bal_test_data['income-level'] = (bal_test_data['income-level'] == '>50K').values.astype(int)
config_builder = (
WitConfigBuilder(bal_test_data.to_numpy()[1:].tolist(), COLUMNS)
.set_ai_platform_model(os.environ['QWIKLABS_PROJECT_ID'], 'census_income_classifier', 'original')
.set_compare_ai_platform_model(os.environ['QWIKLABS_PROJECT_ID'], 'census_income_classifier', 'balanced')
.set_target_feature('income-level')
.set_model_type('classification')
.set_label_vocab(['Under 50K', 'Over 50K'])
)
WitWidget(config_builder, height=800)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Before we continue, note that we'll be using your Qwiklabs project id a lot in this notebook. For convenience, set it as an environment variable using the command below
Step2: Download and process data
Step3: data.head() lets us preview the first five rows of our dataset in Pandas.
Step4: The income-level column is the thing our model will predict. This is the binary outcome of whether the individual makes more than $50,000 per year. To see the distribution of income levels in the dataset, run the following
Step5: As explained in this paper, each entry in the dataset contains the following information
Step6: Since we don't want to train a model on our labels, we're going to separate them from the features in both the training and test datasets. Also, notice that income-level is a string datatype. For machine learning, it's better to convert this to a binary integer datatype. We do this in the next cell.
Step7: Now you're ready to build and train your first model!
Step8: To finalize the pipeline we attach an XGBoost classifier at the end. The complete pipeline object takes the raw data we loaded from csv files, processes the categorical features, processes the numerical features, concatenates the two, and then passes the result through the XGBoost classifier.
Step9: We train our model with one function call using the fit() method. We pass the fit() method our training data.
Step10: Let's go ahead and save our model as a pickle file. Executing the command below will save the trained model in the file model.pkl in the same directory as this notebook.
Step11: Save Trained Model to AI Platform
Step12: Package custom transform code
Step13: Create and Deploy Model
Step14: Now it's time to deploy the model. We can do that with this gcloud command
Step15: While this is running, check the models section of your AI Platform console. You should see your new version deploying there; when the deploy completes successfully, a green check mark replaces the loading spinner. The deploy should take 2-3 minutes, and you will need to click on the model name in order to see the spinner/check mark. In the command above, notice we specify prediction-class. The reason we must specify a prediction class is that, by default, AI Platform prediction calls a Scikit-Learn model's predict method, which in this case returns either 0 or 1. The What-If Tool, however, requires output in line with a Scikit-Learn model's predict_proba method, because it wants the probabilities of the negative and positive classes rather than just the final class assignment; that allows more fine-grained exploration of the model. Consequently, we must write a custom prediction routine that essentially renames predict_proba as predict. The custom prediction method can be found in the file predictor.py, which was packaged in the section Package custom transform code (a sketch of such a routine follows this list). By specifying prediction-class we're telling AI Platform to call our custom prediction method--basically, predict_proba--instead of the default predict method.
Step16: Test your model by running this code
Step17: You should see your model's prediction in the output. The first entry in the output is the model's probability that the individual makes under \$50K while the second entry is the model's confidence that the individual makes over \$50k. The two entries sum to 1.
Step18: Instantiating the What-if Tool is as simple as creating a WitConfigBuilder object and passing it the AI Platform model we built. Note that it'll take a minute to load the visualization.
Step19: The default view on the What-if Tool is the Datapoint editor tab. Here, you can click on any individual data point to see its features and even change feature values. Navigate to the Performance & Fairness tab in the What-if Tool. By slicing on a feature you can view the model error for individual feature values. Finally, navigate to the Features tab in the What-if Tool. This shows you the distribution of values for each feature in your dataset. You can use this tab to make sure your dataset is balanced. For example, if we only had Asians in a population, the model's predictions wouldn't necessarily reflect real world data. This tab gives us a good opportunity to see where our dataset might fall short, so that we can go back and collect more data to make it balanced.
Step20: Execute the command below to see the distribution of gender in the data.
Step21: Unlike the original dataset, this dataset has an equal number of rows for both males and females. Execute the command below to see the distribution of rows in the dataset across both sex and income-level.
Step22: We see that not only is the dataset balanced across gender, it's also balanced across income. Let's train a model on this data. We'll use exactly the same model pipeline as in the previous section. Scikit-Learn has a convenient utility function for copying model pipelines, clone. The clone function copies a pipeline architecture without saving learned parameter values.
Step23: As before, we save our trained model to a pickle file. Note that when we version this model in AI Platform, the file must be named model.pkl. It's ok to overwrite the existing model.pkl file since we'll be uploading it to Cloud Storage anyway.
Step24: Deploy the model to AI Platform using the following bash script
Step25: Now let's instantiate the What-if Tool by configuring a WitConfigBuilder. Here, we want to compare the original model we built with the one trained on the balanced census dataset. To achieve this we utilize the set_compare_ai_platform_model method. We want to compare the models on a balanced test set. The balanced test is loaded and then input to WitConfigBuilder.
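For reference, the predictor.py mentioned in Step15 is not reproduced in this record. Below is a minimal sketch of what such a routine might look like, assuming the standard AI Platform custom prediction interface (a predict method plus a from_path factory); the actual file may differ:
import os
import pickle
import numpy as np
class MyPredictor(object):
    def __init__(self, model):
        self._model = model
    def predict(self, instances, **kwargs):
        # Return class probabilities (predict_proba) instead of hard 0/1 labels,
        # which is the output format the What-If Tool expects
        inputs = np.asarray(instances)
        return self._model.predict_proba(inputs).tolist()
    @classmethod
    def from_path(cls, model_dir):
        # Load the pickled pipeline that was copied into the model directory
        with open(os.path.join(model_dir, 'model.pkl'), 'rb') as f:
            return cls(pickle.load(f))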
|
12,864
|
<ASSISTANT_TASK:>
Python Code:
from six.moves import range
sum_sq_diff = lambda n: sum(range(1, n+1))**2 - sum(i**2 for i in range(1, n+1))
sum_sq_diff(10)
sum_sq_diff(100)
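# Closed form: (n*(n+1)/2)**2 - n*(n+1)*(2*n+1)/6 simplifies to n*(3*n+2)*(n-1)*(n+1)/12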
sum_sq_diff = lambda n: n*(3*n+2)*(n-1)*(n+1)/12
sum_sq_diff(100)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <!-- TEASER_END -->
|
12,865
|
<ASSISTANT_TASK:>
Python Code:
import file_processor as fp #contains simple routines for sorting files and making directories
import processing_tools as pt #bulk of the processing
import int_plot as ip #allows for interactive plots
directory = './example'
fp.plot_defaults(directory, file_ending='.h5')
list_of_files = fp.directory_list('./example') #returns a list of files that
#can be sorted in your preferred method
fp.sort_nicely(list_of_files) # sorts in a nice way, but you ought to check
#in this case the files are too different to work, so I provide similar files
list_of_files = ['./example/noise_10kSI_MASP.h5',
'./example/noise_10kSI_MASP.h5',
'./example/noise_10kSI_MASP.h5']
from bokeh.plotting import show
from bokeh.io import output_notebook #To view on a notebook such as this
output_notebook() # allows Bokeh to output to the notebook
int_plot = ip.interactive_plot(list_of_files,'z_pos','CoM_y',num_slices=100, undulator_period=0.0275,k_fact=1)
#what to plot
show(int_plot)
import matplotlib.pyplot as plt
test = pt.ProcessedData('./example/example.h5',undulator_period=0.00275,num_slices=100)
panda_data = test.StatsFrame()
ax = panda_data.plot(x='z_pos',y='CoM_y')
panda_data.plot(ax=ax, x='z_pos',y='std_y',c='b') #first option allows shared axes, one can even mix different runs
#by plotting another dataset on the same axis
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plotting entire directories
Step2: Interactive plots
Step3: Quick plotting
|
12,866
|
<ASSISTANT_TASK:>
Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
data = pd.read_csv('data/glucose_insulin.csv', index_col='time');
params = Params(I0 = 360,
k = 0.25,
gamma = 0.004,
G_T = 80)
# Solution
def make_system(params, data):
# params might be a Params object or an array,
# so we have to unpack it like this
I0, k, gamma, G_T = params
init = State(I=I0)
t_0 = get_first_label(data)
t_end = get_last_label(data)
G=interpolate(data.glucose)
system = System(I0=I0, k=k, gamma=gamma, G_T=G_T, G=G,
init=init, t_0=t_0, t_end=t_end, dt=1)
return system
# Solution
system = make_system(params, data)
# Solution
def slope_func(state, t, system):
[I] = state
k, gamma = system.k, system.gamma
G, G_T = system.G, system.G_T
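    # Insulin minimal model: dI/dt = -k * I + gamma * (G(t) - G_T) * t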
dIdt = -k * I + gamma * (G(t) - G_T) * t
return [dIdt]
# Solution
slope_func(system.init, system.t_0, system)
# Solution
results, details = run_ode_solver(system, slope_func)
details
# Solution
results.tail()
# Solution
plot(results.I, 'g-', label='simulation')
plot(data.insulin, 'go', label='insulin data')
decorate(xlabel='Time (min)',
ylabel='Concentration ($\mu$U/mL)')
# Solution
def error_func(params, data):
Computes an array of errors to be minimized.
params: sequence of parameters
actual: array of values to be matched
returns: array of errors
print(params)
# make a System with the given parameters
system = make_system(params, data)
# solve the ODE
results, details = run_ode_solver(system, slope_func)
# compute the difference between the model
# results and actual data
errors = (results.I - data.insulin).dropna()
return TimeSeries(errors.loc[8:])
# Solution
error_func(params, data)
# Solution
best_params, details = leastsq(error_func, params, data)
print(details.mesg)
# Solution
system = make_system(best_params, data)
# Solution
results, details = run_ode_solver(system, slope_func, t_eval=data.index)
details
# Solution
plot(results.I, 'g-', label='simulation')
plot(data.insulin, 'go', label='insulin data')
decorate(xlabel='Time (min)',
ylabel='Concentration ($\mu$U/mL)')
# Solution
I0, k, gamma, G_T = best_params
# Solution
I_max = data.insulin.max()
Ib = data.insulin[0]
I_max, Ib
# Solution
# The value of G0 is the best estimate from the glucose model
G0 = 289
Gb = data.glucose[0]
G0, Gb
# Solution
phi_1 = (I_max - Ib) / k / (G0 - Gb)
phi_1
# Solution
phi_2 = gamma * 1e4
phi_2
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data
Step2: The insulin minimal model
Step3: Exercise
Step4: Exercise
Step6: Exercise
Step7: Exercise
Step8: Exercise
|
12,867
|
<ASSISTANT_TASK:>
Python Code:
import random,math
def fuc(i,a,b):
j=0
total_1=0
total_2=0
while j<i:
j=j+1
number=random.randint(a,b)
print(number)
total_1=total_1+math.ceil(math.log(number, 2))
total_2=total_2+1/math.ceil(math.log(number, 2))
    print('Sum of ceil(log2(random integer)) over all draws:', total_1)
    print('Sum of 1/ceil(log2(random integer)) over all draws:', total_2)
m=int(input('Enter the minimum value of the random integers: '))
k=int(input('Enter the maximum value of the random integers: '))
n=int(input('Enter how many random integers to draw: '))
fuc(n,m,k)
import math,random
def fuc(m):
a=random.randint(1,9)
print(a)
i=1
total=a
temp=a
while i<m:
temp=temp+a*(10**i)
total=total+temp
i=i+1
    print('Final value:', total)
n=int(input())
fuc(n)
import random, math
def win():
print('yeah! you won!')
def lost():
print('sorry! you lost! try again')
def game_over():
print('bye bye')
def show_team():
print('liupengyuan and his students!')
def show_instruction():
    print('Pick any integer and the computer will try to guess it within a limited number of attempts.')
def menu():
print('''
    ===== Game Menu =====
    1. Instructions
    2. Start game
    3. Quit
    4. Credits
    ===== Game Menu =====
''')
def guess_game():
    m=int(input('Enter a number: '))
    n = int(input('Enter an integer greater than 0 as the upper bound of the mystery number, then press Enter: '))
    max_times = math.ceil(math.log(m, 2))  # give the computer a number of attempts equal to log2 of the number entered
guess_times = 0
while guess_times < max_times:
number = random.randint(1, n)
guess_times += 1
        print('The computer has now guessed', guess_times, 'time(s)')
if number == m:
lost()
            print('The computer guessed:', number)
            print('The computer finished', max_times-guess_times, 'attempts under the limit')
break
elif number > m:
            print('The computer guessed:', number)
            print('The computer guessed too high')
else:
            print('The computer guessed:', number)
            print('The computer guessed too low')
else:
        print('The number you entered was:', m)
win()
def main():
    while True:  # keep running the main loop until the user chooses to quit
menu()
        choice = int(input('Enter your choice: '))
if choice == 1:
show_instruction()
elif (choice == 2):
guess_game()
elif (choice == 3):
game_over()
break
else:
show_team()
if __name__ == '__main__':  # this guard is optional here
main()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 3: write a function that computes s = a + aa + aaa + aaaa + ... + aa...a, where a is a random integer in [1, 9]. For example, 2+22+222+2222+22222 adds five numbers; how many numbers are added is entered from the keyboard (a compact alternative sketch follows this list).
Step2: Challenge exercise: turn the guessing game around so that the user picks an arbitrary integer and the computer has to guess it.
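For reference, the a + aa + aaa + ... sum from Exercise 3 can also be written compactly with string repetition. A small standalone sketch with fixed a and n (the notebook above uses a random a and keyboard input instead):
a, n = 2, 5
s = sum(int(str(a) * i) for i in range(1, n + 1))
print(s)  # 2 + 22 + 222 + 2222 + 22222 = 24690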
|
12,868
|
<ASSISTANT_TASK:>
Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.1
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
# Create SQL query using natality data after the year 2000
from google.cloud import bigquery
query =
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
# Call BigQuery but GROUP BY the hashmonth and see number of records for each group to enable us to get the correct train and evaluation percentages
df = bigquery.Client().query("SELECT hashmonth, COUNT(weight_pounds) AS num_babies FROM (" + query + ") GROUP BY hashmonth").to_dataframe()
print("There are {} unique hashmonths.".format(len(df)))
df.head()
# Added the RAND() so that we can now subsample from each of the hashmonths to get approximately the record counts we want
trainQuery = "SELECT * FROM (" + query + ") WHERE ABS(MOD(hashmonth, 4)) < 3 AND RAND() < 0.0005"
evalQuery = "SELECT * FROM (" + query + ") WHERE ABS(MOD(hashmonth, 4)) = 3 AND RAND() < 0.0005"
traindf = bigquery.Client().query(trainQuery).to_dataframe()
evaldf = bigquery.Client().query(evalQuery).to_dataframe()
print("There are {} examples in the train dataset and {} in the eval dataset".format(len(traindf), len(evaldf)))
traindf.head()
# Let's look at a small sample of the training data
traindf.describe()
# It is always crucial to clean raw data before using in ML, so we have a preprocessing step
import pandas as pd
def preprocess(df):
# clean up data we don't want to train on
# in other words, users will have to tell us the mother's age
# otherwise, our ML service won't work.
# these were chosen because they are such good predictors
# and because these are easy enough to collect
df = df[df.weight_pounds > 0]
df = df[df.mother_age > 0]
df = df[df.gestation_weeks > 0]
df = df[df.plurality > 0]
# modify plurality field to be a string
twins_etc = dict(zip([1,2,3,4,5],
['Single(1)', 'Twins(2)', 'Triplets(3)', 'Quadruplets(4)', 'Quintuplets(5)']))
df['plurality'].replace(twins_etc, inplace=True)
# now create extra rows to simulate lack of ultrasound
nous = df.copy(deep=True)
nous.loc[nous['plurality'] != 'Single(1)', 'plurality'] = 'Multiple(2+)'
nous['is_male'] = 'Unknown'
return pd.concat([df, nous])
traindf.head()# Let's see a small sample of the training data now after our preprocessing
traindf = preprocess(traindf)
evaldf = preprocess(evaldf)
traindf.head()
traindf.tail()
# Describe only does numeric columns, so you won't see plurality
traindf.describe()
traindf.to_csv('train.csv', index=False, header=False)
evaldf.to_csv('eval.csv', index=False, header=False)
%%bash
wc -l *.csv
head *.csv
tail *.csv
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: <h2> Create ML dataset by sampling using BigQuery </h2>
Step3: There are only a limited number of years and months in the dataset. Let's see what the hashmonths are.
Step4: Here's a way to get a well distributed portion of the data in such a way that the test and train sets do not overlap
Step5: <h2> Preprocess data using Pandas </h2>
Step6: Also notice that there are some very important numeric fields that are missing in some rows (the count in Pandas doesn't count missing data)
Step7: <h2> Write out </h2>
|
12,869
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division, print_function
from sklearn.cross_validation import train_test_split
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Dropout
from keras.utils import np_utils
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def read_dataset(filename):
Z = np.loadtxt(filename, delimiter=",")
y = Z[:, 0]
X = Z[:, 1:]
return X, y
def plot_dataset(X, y):
Xred = X[y==0]
Xblue = X[y==1]
plt.scatter(Xred[:, 0], Xred[:, 1], color='r', marker='o')
plt.scatter(Xblue[:, 0], Xblue[:, 1], color='b', marker='o')
plt.xlabel("X[0]")
plt.ylabel("X[1]")
plt.show()
X, y = read_dataset("../data/linear.csv")
X = X[y != 2]
y = y[y != 2].astype("int")
print(X.shape, y.shape)
plot_dataset(X, y)
Y = np_utils.to_categorical(y, 2)
Xtrain, Xtest, Ytrain, Ytest = train_test_split(X, Y, test_size=0.3, random_state=0)
model = Sequential()
model.add(Dense(2, input_shape=(2,)))
model.add(Activation("softmax"))
model.compile(loss="categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])
model.fit(Xtrain, Ytrain, batch_size=32, nb_epoch=50, validation_data=(Xtest, Ytest))
score = model.evaluate(Xtest, Ytest, verbose=0)
print("score: %.3f, accuracy: %.3f" % (score[0], score[1]))
Y_ = model.predict(X)
y_ = np_utils.categorical_probas_to_classes(Y_)
plot_dataset(X, y_)
X, y = read_dataset("../data/moons.csv")
y = y.astype("int")
print(X.shape, y.shape)
plot_dataset(X, y)
Y = np_utils.to_categorical(y, 2)
Xtrain, Xtest, Ytrain, Ytest = train_test_split(X, Y, test_size=0.3, random_state=0)
model = Sequential()
model.add(Dense(50, input_shape=(2,)))
model.add(Activation("relu"))
model.add(Dense(2))
model.add(Activation("softmax"))
model.compile(loss="categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])
model.fit(Xtrain, Ytrain, batch_size=32, nb_epoch=50, validation_data=(Xtest, Ytest))
score = model.evaluate(Xtest, Ytest, verbose=0)
print("score: %.3f, accuracy: %.3f" % (score[0], score[1]))
Y_ = model.predict(X)
y_ = np_utils.categorical_probas_to_classes(Y_)
plot_dataset(X, y_)
model = Sequential()
model.add(Dense(50, input_shape=(2,)))
model.add(Activation("relu"))
model.add(Dense(100))
model.add(Activation("relu"))
model.add(Dense(2))
model.add(Activation("softmax"))
model.compile(loss="categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])
model.fit(Xtrain, Ytrain, batch_size=32, nb_epoch=50, validation_data=(Xtest, Ytest))
score = model.evaluate(Xtest, Ytest, verbose=0)
print("score: %.3f, accuracy: %.3f" % (score[0], score[1]))
Y_ = model.predict(X)
y_ = np_utils.categorical_probas_to_classes(Y_)
plot_dataset(X, y_)
X, y = read_dataset("../data/saturn.csv")
y = y.astype("int")
print(X.shape, y.shape)
plot_dataset(X, y)
model = Sequential()
model.add(Dense(50, input_shape=(2,)))
model.add(Activation("relu"))
model.add(Dense(100))
model.add(Activation("relu"))
model.add(Dense(2))
model.add(Activation("softmax"))
model.compile(loss="categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])
model.fit(Xtrain, Ytrain, batch_size=32, nb_epoch=50, validation_data=(Xtest, Ytest))
score = model.evaluate(Xtest, Ytest, verbose=0)
print("score: %.3f, accuracy: %.3f" % (score[0], score[1]))
Y_ = model.predict(X)
y_ = np_utils.categorical_probas_to_classes(Y_)
plot_dataset(X, y_)
model = Sequential()
model.add(Dense(1024, input_shape=(2,)))
model.add(Activation("relu"))
model.add(Dropout(0.2))
model.add(Dense(512))
model.add(Activation("relu"))
model.add(Dropout(0.2))
model.add(Dense(128))
model.add(Activation("relu"))
model.add(Dense(2))
model.add(Activation("softmax"))
model.compile(loss="categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])
model.fit(Xtrain, Ytrain, batch_size=32, nb_epoch=50, validation_data=(Xtest, Ytest))
score = model.evaluate(Xtest, Ytest, verbose=0)
print("score: %.3f, accuracy: %.3f" % (score[0], score[1]))
Y_ = model.predict(X)
y_ = np_utils.categorical_probas_to_classes(Y_)
plot_dataset(X, y_)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Linearly Separable Data
Step2: Our y values need to be in sparse one-hot encoding format, so we convert the labels to this format. We then split the dataset 70% for training and 30% for testing.
Step3: Construct a model with an input layer which takes 2 inputs, and a softmax output layer. The softmax activation takes the scores from each output line and converts them to probabilities; there is no non-linear hidden activation in this network. The equation is given by $p_j = e^{z_j} / \sum_k e^{z_k}$, where $z_j$ are the output scores (a small numeric illustration follows this list).
Step4: Linearly non-separable data #1
Step5: A network with the same configuration as above produces an accuracy of 85.67% on the test set, as opposed to 92.7% on the linear dataset.
Step6: Let's add another layer. Layers produce non-linearity. We add another hidden layer with 100 units, also with a ReLU activation unit. This brings our accuracy up to 92%. The separation is still mostly linear, with just the beginnings of non-linearity.
Step7: Linearly non-separable data #2
Step8: Previous network (producing 90.5% accuracy on test data for the moon data) produces 90.3% accuracy on the Saturn data. You can see the boundary getting non-linear.
Step9: Let's add another hidden layer and make each layer much larger, with Rectified Linear Unit activations and Dropout. With this, our accuracy goes up to 98.8%. The separation boundary is now definitely non-linear.
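A standalone numeric illustration of the softmax mentioned in Step3, using made-up scores rather than the model's actual outputs:
import numpy as np
scores = np.array([2.0, 1.0])
probs = np.exp(scores) / np.sum(np.exp(scores))
print(probs)  # approximately [0.731 0.269]; the entries sum to 1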
|
12,870
|
<ASSISTANT_TASK:>
Python Code:
import helper
#get_ipython().magic('matplotlib notebook')
helper.create_show_p_curve()
helper.create_plot_new_np_curve()
# PCA analysis and plot
helper.plot_PCA_errors()
#Non-Planar Errors
helper.ae_with_pca_wt_np_errors()
helper.ae_with_pca_wt_p_errors()
# non-planar curves
helper.rdm_np_errors()
# planar curves
helper.rdm_p_errors()
# Random weights Auto-encoder Planar and Non-Planar mixed errors
helper.rdm_wt_ae_errors()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Non-Planar Curve Generation
Step2: Discriminate Planarity with PCA
Step3: Autoencoder model with PCA Weights
Step4: Autoencoder model with Random Weights
|
12,871
|
<ASSISTANT_TASK:>
Python Code:
# YOUR CODE HERE
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.html import widgets
from IPython.display import SVG, display
s =
<svg width="100" height="100">
<circle cx="50" cy="50" r="20" fill="aquamarine" />
</svg>
SVG(s)
def draw_circle(width=100, height=100, cx=25, cy=25, r=5, fill='red'):
Draw an SVG circle.
Parameters
----------
width : int
The width of the svg drawing area in px.
height : int
The height of the svg drawing area in px.
cx : int
The x position of the center of the circle in px.
cy : int
The y position of the center of the circle in px.
r : int
The radius of the circle in px.
fill : str
The fill color of the circle.
# YOUR CODE HERE
#I had the other Ed's help, take svg arguments in string and replace with given values.
o = '<svg width="%s" height="%s">\n<circle cx="%s" cy="%s" r="%s" fill="%s" />\n</svg>' % (width, height, cx, cy, r, fill)
display(SVG(o))
draw_circle(cx=10, cy=10, r=10, fill='blue')
assert True # leave this to grade the draw_circle function
# YOUR CODE HERE
w = interactive(draw_circle, width=fixed(300), height=fixed(300), cy=[0, 300], cx=[0, 300], r=[0, 50], fill='red')
c = w.children
assert c[0].min==0 and c[0].max==300
assert c[1].min==0 and c[1].max==300
assert c[2].min==0 and c[2].max==50
assert c[3].value=='red'
# YOUR CODE HERE
display(w)
assert True # leave this to grade the display of the widget
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Interact with SVG display
Step4: Write a function named draw_circle that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the IPython.display.SVG object and IPython.display.display function.
Step5: Use interactive to build a user interface for exploring the draw_circle function
Step6: Use the display function to show the widgets created by interactive
|
12,872
|
<ASSISTANT_TASK:>
Python Code:
# A bit of setup
import numpy as np
import matplotlib.pyplot as plt
from time import time
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
from skynet.utils.data_utils import load_tiny_imagenet
tiny_imagenet_a = '../skynet/datasets/tiny-imagenet-100-A'
data = load_tiny_imagenet(tiny_imagenet_a, subtract_mean=True)
# Zero-mean the data
# mean_img = np.mean(X_train, axis=0)
# X_train -= mean_img
# X_val -= mean_img
# X_test -= mean_img
mean_img = data['mean_image']
X_train = data['X_train']
X_val = data['X_val']
X_test = data['X_test']
y_train = data['y_train']
y_val = data['y_val']
y_test = data['y_test']
for names in data['class_names']:
print(' '.join('"%s"' % name for name in names))
# Visualize some examples of the training data
classes_to_show = 7
examples_per_class = 5
class_idxs = np.random.choice(len(data['class_names']), size=classes_to_show, replace=False)
for i, class_idx in enumerate(class_idxs):
train_idxs, = np.nonzero(data['y_train'] == class_idx)
train_idxs = np.random.choice(train_idxs, size=examples_per_class, replace=False)
for j, train_idx in enumerate(train_idxs):
img = data['X_train'][train_idx] + data['mean_image']
img = img.transpose(1, 2, 0).astype('uint8')
plt.subplot(examples_per_class, classes_to_show, 1 + i + classes_to_show * j)
if j == 0:
plt.title(data['class_names'][class_idx][0])
plt.imshow(img)
plt.gca().axis('off')
plt.show()
mode = 'train'
class_names = data['class_names']
name_to_label = {n.lower(): i for i, ns in enumerate(class_names) for n in ns}
if mode == 'train':
X, y = X_train, y_train
elif mode == 'val':
X, y = X_val, y_val
num_correct = 0
num_images = 10
for i in range(num_images):
idx = np.random.randint(X.shape[0])
img = (X[idx] + mean_img).transpose(1, 2, 0).astype('uint8')
plt.imshow(img)
plt.gca().axis('off')
plt.gcf().set_size_inches((2, 2))
plt.show()
got_name = False
while not got_name:
name = input('Guess the class for the above image (%d / %d) : ' % (i + 1, num_images))
name = name.lower()
got_name = name in name_to_label
if not got_name:
print('That is not a valid class name; try again')
guess = name_to_label[name]
if guess == y[idx]:
num_correct += 1
print('Correct!')
else:
print('Incorrect; it was actually %r' % data['class_names'][y[idx]])
acc = float(num_correct) / num_images
print('You got %d / %d correct for an accuracy of %f' % (num_correct, num_images, acc))
from skynet.utils.data_utils import load_models
models_dir = '../skynet/datasets/tiny-100-A-pretrained'
# models is a dictionary mappping model names to models.
# Like the previous assignment, each model is a dictionary mapping parameter
# names to parameter values.
models = load_models(models_dir)
from skynet.neural_network.classifiers.convnet import five_layer_convnet
# Dictionary mapping model names to their predicted class probabilities on the
# validation set. model_to_probs[model_name] is an array of shape (N_val, 100)
# where model_to_probs[model_name][i, j] = p indicates that models[model_name]
# predicts that X_val[i] has class i with probability p.
model_to_probs = {}
################################################################################
# TODO: Use each model to predict classification probabilities for all images #
# in the validation set. Store the predicted probabilities in the #
# model_to_probs dictionary as above. To compute forward passes and compute #
# probabilities, use the function five_layer_convnet in the file #
# cs231n/classifiers/convnet.py. #
# #
# HINT: Trying to predict on the entire validation set all at once will use a #
# ton of memory, so you should break the validation set into batches and run #
# each batch through each model separately. #
################################################################################
from skynet.neural_network.classifiers.convnet import five_layer_convnet
import math
batch_size = 100
for model_name, model in list(models.items()):
model_to_probs[model_name] = None
for i in range(int(math.ceil(data['X_val'].shape[0] / batch_size))):
for model_name, model in list(models.items()):
y_predict = five_layer_convnet(data['X_val'][i*batch_size: (i+1)*batch_size],
model,
                                               None,  # labels are not needed here; we only want predicted probabilities
return_probs=True)
try:
if model_to_probs[model_name] is None:
model_to_probs[model_name] = y_predict
else:
model_to_probs[model_name] = np.concatenate(
(model_to_probs[model_name], y_predict), axis=0)
except:
print((model_to_probs[model_name].shape, y_predict.shape))
raise
pass
pass
################################################################################
# END OF YOUR CODE #
################################################################################
# Compute and print the accuracy for each model.
for model_name, probs in model_to_probs.items():
acc = np.mean(np.argmax(probs, axis=1) == data['y_val'])
print('%s got accuracy %f' % (model_name, acc))
def compute_ensemble_preds(probs_list):
Use the predicted class probabilities from different models to implement
the ensembling method described above.
Inputs:
- probs_list: A list of numpy arrays, where each gives the predicted class
probabilities under some model. In other words,
probs_list[j][i, c] = p means that the jth model in the ensemble thinks
that the ith data point has class c with probability p.
Returns:
An array y_pred_ensemble of ensembled predictions, such that
y_pred_ensemble[i] = c means that ensemble predicts that the ith data point
is predicted to have class c.
y_pred_ensemble = None
############################################################################
# TODO: Implement this function. Store the ensemble predictions in #
# y_pred_ensemble. #
############################################################################
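    # Average the per-model class probabilities, then predict the argmax class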
probs_list_ensemble = np.mean(probs_list, axis=0)
y_pred_ensemble = np.argmax(probs_list_ensemble, axis=1)
pass
############################################################################
# END OF YOUR CODE #
############################################################################
return y_pred_ensemble
# Combine all models into an ensemble and make predictions on the validation set.
# This should be significantly better than the best individual model.
print(np.mean(compute_ensemble_preds(list(model_to_probs.values())) == data['y_val']))
################################################################################
# TODO: Create a plot comparing ensemble size with ensemble performance as #
# described above. #
# #
# HINT: Look up the function itertools.combinations. #
################################################################################
import itertools
ensemble_sizes = []
val_accs = []
for i in range(1, 11):
combinations = itertools.combinations(list(model_to_probs.values()), i)
for combination in combinations:
ensemble_sizes.append(i)
y_pred_ensemple = compute_ensemble_preds(combination)
val_accs.append(np.mean(y_pred_ensemple == data['y_val']))
pass
plt.scatter(ensemble_sizes, val_accs)
plt.title('Ensemble size vs Performance')
plt.xlabel('ensemble size')
plt.ylabel('validation set accuracy')
################################################################################
# END OF YOUR CODE #
################################################################################
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introducing TinyImageNet
Step2: TinyImageNet-100-A classes
Step3: Visualize Examples
Step4: Test human performance
Step5: Download pretrained models
Step6: Run models on the validation set
Step8: Use a model ensemble
Step9: Ensemble size vs Performance
|
12,873
|
<ASSISTANT_TASK:>
Python Code:
### Simulation
%matplotlib inline
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(1)
import math
N=1000
s=0
def R(x,y):
return math.sqrt(x*x+y*y)
for i in range(N):
r=-100
y=0
x=0
while R(x,y)>r:
S=np.random.uniform(size=2)
x=S[0]
y=S[1]
r=np.random.exponential(1)
s+=r
print 'Average radius: {}'.format(s/N)
k_a=2e-6
k_b=2e-6
k_p=5e-6
k_d=1e-5
ll = 1e6
P = np.matrix([[1-k_a-k_b, k_a ,k_b, 0, 0, 0],
[k_a, 1-k_a-k_b, 0, k_b, 0, 0],
[k_b, 0, 1-k_a-k_b, k_a, 0, 0],
[0, k_b, k_a, 1-k_a-k_b-k_p, k_p, 0],
[0, 0, 0, 0, 1-k_d, k_d],
[0, 0, 0, k_d, 0, 1-k_d]], dtype=np.float64)
Q = ll*(P-np.eye(6))
print(Q)
Qd= Q[:-1,:-1]
Qi = np.linalg.pinv(Qd)
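# Expected first-passage (hitting) times of the final state: u = -inv(Qd) @ ones, with Qd excluding that state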
u=(np.sum(Qi, axis=1)*-1)
u=u.tolist()
def h(x):
s=0
ht=0
cc=0
    for i in range(10000):
new_state=x
while new_state!=5:
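            # Simulate the jump chain of the CTMC: exponential holding time with mean -1/Q[i,i],
            # then jump to state j with probability Q[i,j] / (-Q[i,i])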
old_state=new_state
probs = Q[old_state,:]/-Q[old_state,old_state]
probs=probs.tolist()[0]
probs[old_state]=0
qaa = np.random.exponential(-1/Q[old_state,old_state])
z=np.random.choice(6, 1, p=probs)
new_state = z[0] #states[z[0]]
s+=qaa
return s/10000
print('From calculation: {}\t From Simulation: {}'.format(u[0][0],h(0)))
print('From calculation: {}\t From Simulation: {}'.format(u[1][0],h(1)))
print('From calculation: {}\t From Simulation: {}'.format(u[2][0],h(2)))
print('From calculation: {}\t From Simulation: {}'.format(u[3][0],h(3)))
print('From calculation: {}\t From Simulation: {}'.format(u[4][0],h(4)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The simulation results do not seem to be close to the expected results of 0.15
Step2: Starting state $\phi$
Step3: Starting state $\alpha$
Step4: Starting state $\beta$
Step5: Starting state $\alpha+\beta$
Step6: Starting state $pol$
|
12,874
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
time = [0, 0, 0, 1, 1, 2, 2]
x = [216, 218, 217, 280, 290, 130, 132]
y = [13, 12, 12, 110, 109, 3, 56]
car = [1, 2, 3, 1, 3, 4, 5]
df = pd.DataFrame({'time': time, 'x': x, 'y': y, 'car': car})
import numpy as np
def g(df):
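    # For each row, find the farthest other car observed at the same time stamp and its Euclidean distance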
time = df.time.tolist()
car = df.car.tolist()
farmost_neighbour = []
euclidean_distance = []
for i in range(len(df)):
n = 0
d = 0
for j in range(len(df)):
if df.loc[i, 'time'] == df.loc[j, 'time'] and df.loc[i, 'car'] != df.loc[j, 'car']:
t = np.sqrt(((df.loc[i, 'x'] - df.loc[j, 'x'])**2) + ((df.loc[i, 'y'] - df.loc[j, 'y'])**2))
if t >= d:
d = t
n = df.loc[j, 'car']
farmost_neighbour.append(n)
euclidean_distance.append(d)
return pd.DataFrame({'time': time, 'car': car, 'farmost_neighbour': farmost_neighbour, 'euclidean_distance': euclidean_distance})
df = g(df.copy())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
12,875
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import mxnet as mx
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from data import mnist_iterator
dev = mx.gpu()
batch_size = 100
train_iter, val_iter = mnist_iterator(batch_size=batch_size, input_shape = (1,28,28))
# input
data = mx.symbol.Variable('data')
# first conv
conv1 = mx.symbol.Convolution(data=data, kernel=(5,5), num_filter=20)
tanh1 = mx.symbol.Activation(data=conv1, act_type="tanh")
pool1 = mx.symbol.Pooling(data=tanh1, pool_type="max",
kernel=(2,2), stride=(2,2))
# second conv
conv2 = mx.symbol.Convolution(data=pool1, kernel=(5,5), num_filter=50)
tanh2 = mx.symbol.Activation(data=conv2, act_type="tanh")
pool2 = mx.symbol.Pooling(data=tanh2, pool_type="max",
kernel=(2,2), stride=(2,2))
# first fullc
flatten = mx.symbol.Flatten(data=pool2)
fc1 = mx.symbol.FullyConnected(data=flatten, num_hidden=500)
tanh3 = mx.symbol.Activation(data=fc1, act_type="tanh")
# second fullc
fc2 = mx.symbol.FullyConnected(data=tanh3, num_hidden=10)
def Softmax(theta):
max_val = np.max(theta, axis=1, keepdims=True)
tmp = theta - max_val
exp = np.exp(tmp)
norm = np.sum(exp, axis=1, keepdims=True)
return exp / norm
def LogLossGrad(alpha, label):
grad = np.copy(alpha)
for i in range(alpha.shape[0]):
grad[i, label[i]] -= 1.
return grad
data_shape = (batch_size, 1, 28, 28)
arg_names = fc2.list_arguments() # 'data'
arg_shapes, output_shapes, aux_shapes = fc2.infer_shape(data=data_shape)
arg_arrays = [mx.nd.zeros(shape, ctx=dev) for shape in arg_shapes]
grad_arrays = [mx.nd.zeros(shape, ctx=dev) for shape in arg_shapes]
reqs = ["write" for name in arg_names]
model = fc2.bind(ctx=dev, args=arg_arrays, args_grad = grad_arrays, grad_req=reqs)
arg_map = dict(zip(arg_names, arg_arrays))
grad_map = dict(zip(arg_names, grad_arrays))
data_grad = grad_map["data"]
out_grad = mx.nd.zeros(model.outputs[0].shape, ctx=dev)
for name in arg_names:
if "weight" in name:
arr = arg_map[name]
arr[:] = mx.rnd.uniform(-0.07, 0.07, arr.shape)
def SGD(weight, grad, lr=0.1, grad_norm=batch_size):
weight[:] -= lr * grad / batch_size
def CalAcc(pred_prob, label):
pred = np.argmax(pred_prob, axis=1)
return np.sum(pred == label) * 1.0
def CalLoss(pred_prob, label):
loss = 0.
for i in range(pred_prob.shape[0]):
loss += -np.log(max(pred_prob[i, label[i]], 1e-10))
return loss
num_round = 4
train_acc = 0.
nbatch = 0
for i in range(num_round):
train_loss = 0.
train_acc = 0.
nbatch = 0
train_iter.reset()
for batch in train_iter:
arg_map["data"][:] = batch.data[0]
model.forward(is_train=True)
theta = model.outputs[0].asnumpy()
alpha = Softmax(theta)
label = batch.label[0].asnumpy()
train_acc += CalAcc(alpha, label) / batch_size
train_loss += CalLoss(alpha, label) / batch_size
losGrad_theta = LogLossGrad(alpha, label)
out_grad[:] = losGrad_theta
model.backward([out_grad])
# data_grad[:] = grad_map["data"]
for name in arg_names:
if name != "data":
SGD(arg_map[name], grad_map[name])
nbatch += 1
#print(np.linalg.norm(data_grad.asnumpy(), 2))
train_acc /= nbatch
train_loss /= nbatch
print("Train Accuracy: %.2f\t Train Loss: %.5f" % (train_acc, train_loss))
val_iter.reset()
batch = val_iter.next()
data = batch.data[0]
label = batch.label[0]
arg_map["data"][:] = data
model.forward(is_train=True)
theta = model.outputs[0].asnumpy()
alpha = Softmax(theta)
print("Val Batch Accuracy: ", CalAcc(alpha, label.asnumpy()) / batch_size)
#########
grad = LogLossGrad(alpha, label.asnumpy())
out_grad[:] = grad
model.backward([out_grad])
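# Fast gradient sign method: perturb the inputs by epsilon * sign(dLoss/dInput), with epsilon = 0.15 below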
noise = np.sign(data_grad.asnumpy())
arg_map["data"][:] = data.asnumpy() + 0.15 * noise
model.forward(is_train=True)
raw_output = model.outputs[0].asnumpy()
pred = Softmax(raw_output)
print("Val Batch Accuracy after pertubation: ", CalAcc(pred, label.asnumpy()) / batch_size)
import random as rnd
idx = rnd.randint(0, 99)
images = data.asnumpy() + 0.15 * noise
plt.imshow(images[idx, :].reshape(28,28), cmap=cm.Greys_r)
print("true: %d" % label.asnumpy()[idx])
print("pred: %d" % np.argmax(pred, axis=1)[idx])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Build Network
Step2: Prepare useful data for the network
Step3: Init weight
Step4: Train a network
Step5: Get the perturbation using the fast gradient sign method and check how the validation accuracy changes.
Step6: Visualize an example after the perturbation.
|
12,876
|
<ASSISTANT_TASK:>
Python Code:
import os
testfolder = os.path.abspath(r'..\..\bifacial_radiance\TEMP\Demo1')
print ("Your simulation will be stored in %s" % testfolder)
from bifacial_radiance import *
import numpy as np
demo = RadianceObj('bifacial_example',testfolder)
albedo = 0.62
demo.setGround(albedo)
epwfile = demo.getEPW(lat = 37.5, lon = -77.6)
# Read in the weather data pulled in above.
metdata = demo.readWeatherFile(epwfile)
fullYear = True
if fullYear:
demo.genCumSky(demo.epwfile) # entire year.
else:
demo.gendaylit(metdata,4020) # Noon, June 17th (timepoint # 4020)
module_type = 'Prism Solar Bi60 landscape'
demo.makeModule(name=module_type,x=1.695, y=0.984)
availableModules = demo.printModules()
sceneDict = {'tilt':10,'pitch':3,'clearance_height':0.2,'azimuth':180, 'nMods': 3, 'nRows': 3}
scene = demo.makeScene(module_type,sceneDict)
octfile = demo.makeOct(demo.getfilelist())
demo.getfilelist()
analysis = AnalysisObj(octfile, demo.basename)
frontscan, backscan = analysis.moduleAnalysis(scene)
results = analysis.analysis(octfile, demo.basename, frontscan, backscan)
load.read1Result('results\irr_bifacial_example.csv')
bifacialityfactor = 0.9
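# Bifacial ratio = bifaciality factor * mean(rear irradiance) / mean(front irradiance)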
print('Annual bifacial ratio: %0.2f ' %( np.mean(analysis.Wm2Back) * bifacialityfactor / np.mean(analysis.Wm2Front)) )
modWanted=1
rowWanted=1
sensorsy=4
resultsfilename = demo.basename+"_Mod1Row1"
frontscan, backscan = analysis.moduleAnalysis(scene, modWanted = modWanted, rowWanted=rowWanted, sensorsy=sensorsy)
results = analysis.analysis(octfile, resultsfilename, frontscan, backscan)
load.read1Result('results\irr_bifacial_example_Mod1Row1.csv')
# Make a color render and falsecolor image of the scene.
analysis.makeImage('side.vp')
analysis.makeFalseColor('side.vp')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load bifacial_radiance
Step2: <a id='step2'></a>
Step3: This will create all the folder structure of the bifacial_radiance Scene in the designated testfolder in your computer, and it should look like this
Step4: To see more options of ground materials available (located on ground.rad), run this function without any input.
Step5: The downloaded EPW will be in the EPWs folder.
Step6: <a id='step5'></a>
Step7: The method gencumSky calculates the hourly radiance of the sky hemisphere by dividing it into 145 patches. Then it adds those hourly values to generate one single <b> cumulative sky</b>. Here is a visualization of this patched hemisphere for Richmond, VA, US. Can you deduce from the radiance values of each patch which way is North?
Step8: In case you want to use a pre-defined module or a module you've created previously, they are stored in a JSON format in data/module.json, and the options available can be called with printModules
Step9: <a id='step7'></a>
Step10: To make the scene we have to create a Scene Object through the method makeScene. This method will create a .rad file in the objects folder, with the parameters specified in sceneDict and the module created above.
Step11: <a id='step8'></a>
Step12: This is how the octfile looks like (** broke the first line so it would fit in the view, it's usually super long)
Step13: Then let's specify the sensor location. If no parameters are passed to moduleAnalysis, it will scan the center module of the center row
Step14: The frontscan and backscan include a linescan along a chord of the module, both on the front and back.
Step15: The results are also automatically saved in the results folder. Some of our input/output functions can be used to read the results and work with them, for example
Step16: As can be seen in the results for the Wm2Front and WM2Back, the irradiance values are quite high. This is because a cumulative sky simulation was performed on <b> step 5 </b>, so this is the total irradiance over all the hours of the year that the module at each sampling point will receive. Dividing the back irradiance average by the front irradiance average will give us the bifacial gain for the year
Step17: ANALYZE and get Results for another module
Step18: <a id='step10'></a>
|
12,877
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
rides[:24*10].plot(x='dteday', y='cnt')
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### DONE: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
        # as shown below.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
self.activation_function = lambda x : 1.0/(1.0+np.exp(-x))
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# DONE: Hidden layer - Replace these values with your calculations.
hidden_inputs = X # signals into hidden layer (vector of size i)
hidden_outputs = self.activation_function(np.dot(hidden_inputs,self.weights_input_to_hidden)) # signals from hidden layer
#print("hidden inputs",hidden_inputs.shape)
#print("hidden outputs",hidden_outputs.shape)
# hidden_inputs = (input_nodes,)
# hidden_outputs = (hidden_nodes,)
# DONE: Output layer - Replace these values with your calculations. Remember, use f(x)=x as activation !!
final_inputs = hidden_outputs # signals into final output layer
final_outputs = np.dot(final_inputs,self.weights_hidden_to_output) # signals from final output layer
#print("final Inputs",final_inputs.shape)
#print("final outputs",final_outputs.shape)
# final_inputs = (hidden_nodes,)
# final_outputs = (output_nodes,)
#### Implement the backward pass here ####
### Backward pass ###
# DONE: Output error - Replace this value with your calculations.
error = y-final_outputs # Output layer error is the difference between desired target and actual output.
output_error_term = error * 1.0 # derivative of f(x)=x is 1
#print("error",error.shape)
#print("output error term",output_error_term.shape)
# error = (output_nodes,)
# output_error_term = (output_nodes,)
# DONE: Calculate the hidden layer's contribution to the error
hidden_error = np.dot(self.weights_hidden_to_output,output_error_term)
#print("hidden error",hidden_error.shape)
# hidden_error = (hidden_nodes,)
# DONE: Backpropagated error terms - Replace these values with your calculations.
hidden_error_term = hidden_error * hidden_outputs * (1.0-hidden_outputs)
#print("hidden error term",hidden_error_term.shape)
# hidden_error_term = (hidden_nodes,)
# Weight step (hidden to output)
delta_weights_h_o += np.matmul(hidden_outputs[:,np.newaxis],output_error_term[np.newaxis,:])
#print("delta_weights_h_o",delta_weights_h_o.shape)
# Weight step (input to hidden)
delta_weights_i_h += np.matmul(hidden_inputs[:,np.newaxis],hidden_error_term[np.newaxis,:])
# Has to be (input_nodes,hidden_nodes)
#print("delta_weights_i_h",delta_weights_i_h.shape)
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = features # signals into hidden layer
hidden_outputs = self.activation_function(np.matmul(hidden_inputs,self.weights_input_to_hidden)) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = hidden_outputs # signals into final output layer
final_outputs = np.matmul(final_inputs,self.weights_hidden_to_output) # signals from final output layer
return final_outputs
# Testcode
#inputs = np.array([[0.5, -0.2, 0.1, 0.8]])
#targets = np.array([[0.4, 0.3]])
#network = NeuralNetwork(4, 3, 2, 0.5)
#network.train(inputs, targets)
def MSE(y, Y):
return np.mean((y-Y)**2)
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
#print(network.run(inputs))
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
import sys
### Set the hyperparameters here ###
iterations = 3000
learning_rate = 0.7
hidden_nodes = 10
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and prepare the data
Step2: Checking out the data
Step3: Dummy variables
Step4: Scaling target variables
Step5: Splitting the data into training, testing, and validation sets
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set). A sketch of such a chronological split is given just after this step list.
Step7: Time to build the network
Step8: Unit tests
Step9: Training the network
Step10: Check out your predictions
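A minimal sketch of the chronological split described in Step6 above: `features` and `targets` are assumed to be the full hourly feature/target DataFrames from the notebook, and the 21-day test / 60-day validation cut-offs are illustrative assumptions rather than the notebook's exact values.
# Hold out roughly the last 21 days as a test set, then the last 60 days of what
# remains as a validation set (cut-off lengths are assumptions for illustration).
test_features, test_targets = features[-21*24:], targets[-21*24:]
features, targets = features[:-21*24], targets[:-21*24]
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]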
|
12,878
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact
o='ahjshd'
list(o)
x,y=letter_prob(list(o))
dict(zip(x,y))
def letter_prob(data):
letter_dictionary={}
for i in data:
if i not in letter_dictionary:
letter_dictionary[i]=1
else:
letter_dictionary[i]=letter_dictionary[i]+1
x=list(letter_dictionary)
y=list(letter_dictionary.values())
for i in range(len(x)):
y[i]=y[i]/(len(data))
return x,y
def char_probs(s):
Find the probabilities of the unique characters in the string s.
Parameters
----------
s : str
A string of characters.
Returns
-------
probs : dict
A dictionary whose keys are the unique characters in s and whose values
are the probabilities of those characters.
S=list(s)
letter2, prob2 =letter_prob(S)
ans=dict(zip(letter2,prob2))
return ans
test1 = char_probs('aaaa')
assert np.allclose(test1['a'], 1.0)
test2 = char_probs('aabb')
assert np.allclose(test2['a'], 0.5)
assert np.allclose(test2['b'], 0.5)
test3 = char_probs('abcd')
assert np.allclose(test3['a'], 0.25)
assert np.allclose(test3['b'], 0.25)
assert np.allclose(test3['c'], 0.25)
assert np.allclose(test3['d'], 0.25)
entropy({'a': 0.5, 'b': 0.5})
def entropy(d):
Compute the entropy of a dict d whose values are probabilities.
x=np.array(list(char_probs(d)))
x
y=np.array(list(char_probs(d).values()))
y
z=list(zip(x,y))
H=-sum(y*np.log2(y))
return H
assert np.allclose(entropy({'a': 0.5, 'b': 0.5}), 1.0)
assert np.allclose(entropy({'a': 1.0}), 0.0)
interact?
d=interact(char_probs,s=(''))
assert True # use this for grading the pi digits histogram
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Character counting and entropy
Step4: The entropy is a quantiative measure of the disorder of a probability distribution. It is used extensively in Physics, Statistics, Machine Learning, Computer Science and Information Science. Given a set of probabilities $P_i$, the entropy is defined as
Step5: Use IPython's interact function to create a user interface that allows you to type a string into a text box and see the entropy of the character probabilities of the string.
|
12,879
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!wget --show-progress --continue -O /content/shakespeare.txt http://www.gutenberg.org/files/100/100-0.txt
import numpy as np
import six
import tensorflow as tf
import time
import os
# This address identifies the TPU we'll use when configuring TensorFlow.
TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']
SHAKESPEARE_TXT = '/content/shakespeare.txt'
tf.logging.set_verbosity(tf.logging.INFO)
def transform(txt, pad_to=None):
# drop any non-ascii characters
output = np.asarray([ord(c) for c in txt if ord(c) < 255], dtype=np.int32)
if pad_to is not None:
output = output[:pad_to]
output = np.concatenate([
np.zeros([pad_to - len(txt)], dtype=np.int32),
output,
])
return output
def training_generator(seq_len=100, batch_size=1024):
A generator yields (source, target) arrays for training.
with tf.gfile.GFile(SHAKESPEARE_TXT, 'r') as f:
txt = f.read()
tf.logging.info('Input text [%d] %s', len(txt), txt[:50])
source = transform(txt)
while True:
offsets = np.random.randint(0, len(source) - seq_len, batch_size)
# Our model uses sparse crossentropy loss, but Keras requires labels
# to have the same rank as the input logits. We add an empty final
# dimension to account for this.
yield (
np.stack([source[idx:idx + seq_len] for idx in offsets]),
np.expand_dims(
np.stack([source[idx + 1:idx + seq_len + 1] for idx in offsets]),
-1),
)
six.next(training_generator(seq_len=10, batch_size=1))
EMBEDDING_DIM = 512
def lstm_model(seq_len=100, batch_size=None, stateful=True):
Language model: predict the next word given the current word.
source = tf.keras.Input(
name='seed', shape=(seq_len,), batch_size=batch_size, dtype=tf.int32)
embedding = tf.keras.layers.Embedding(input_dim=256, output_dim=EMBEDDING_DIM)(source)
lstm_1 = tf.keras.layers.LSTM(EMBEDDING_DIM, stateful=stateful, return_sequences=True)(embedding)
lstm_2 = tf.keras.layers.LSTM(EMBEDDING_DIM, stateful=stateful, return_sequences=True)(lstm_1)
predicted_char = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(256, activation='softmax'))(lstm_2)
model = tf.keras.Model(inputs=[source], outputs=[predicted_char])
model.compile(
optimizer=tf.train.RMSPropOptimizer(learning_rate=0.01),
loss='sparse_categorical_crossentropy',
metrics=['sparse_categorical_accuracy'])
return model
tf.keras.backend.clear_session()
training_model = lstm_model(seq_len=100, batch_size=128, stateful=False)
tpu_model = tf.contrib.tpu.keras_to_tpu_model(
training_model,
strategy=tf.contrib.tpu.TPUDistributionStrategy(
tf.contrib.cluster_resolver.TPUClusterResolver(TPU_WORKER)))
tpu_model.fit_generator(
training_generator(seq_len=100, batch_size=1024),
steps_per_epoch=100,
epochs=10,
)
tpu_model.save_weights('/tmp/bard.h5', overwrite=True)
BATCH_SIZE = 5
PREDICT_LEN = 250
# Keras requires the batch size be specified ahead of time for stateful models.
# We use a sequence length of 1, as we will be feeding in one character at a
# time and predicting the next character.
prediction_model = lstm_model(seq_len=1, batch_size=BATCH_SIZE, stateful=True)
prediction_model.load_weights('/tmp/bard.h5')
# We seed the model with our initial string, copied BATCH_SIZE times
seed_txt = 'Looks it not like the king? Verily, we must go! '
seed = transform(seed_txt)
seed = np.repeat(np.expand_dims(seed, 0), BATCH_SIZE, axis=0)
# First, run the seed forward to prime the state of the model.
prediction_model.reset_states()
for i in range(len(seed_txt) - 1):
prediction_model.predict(seed[:, i:i + 1])
# Now we can accumulate predictions!
predictions = [seed[:, -1:]]
for i in range(PREDICT_LEN):
last_word = predictions[-1]
next_probits = prediction_model.predict(last_word)[:, 0, :]
# sample from our output distribution
next_idx = [
np.random.choice(256, p=next_probits[i])
for i in range(BATCH_SIZE)
]
predictions.append(np.asarray(next_idx, dtype=np.int32))
for i in range(BATCH_SIZE):
print('PREDICTION %d\n\n' % i)
p = [predictions[j][i] for j in range(PREDICT_LEN)]
generated = ''.join([chr(c) for c in p])
print(generated)
print()
assert len(generated) == PREDICT_LEN, 'Generated text too short'
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Predict Shakespeare with Cloud TPUs and Keras
Step3: Build the data generator
Step5: Build the model
Step6: Train the model
Step7: Make predictions with the model
|
12,880
|
<ASSISTANT_TASK:>
Python Code:
# Ignore
%load_ext sql
%sql sqlite://
%config SqlMagic.feedback = False
%%sql
-- Create a table of criminals
CREATE TABLE criminals (pid, name, age, sex, city, minor);
INSERT INTO criminals VALUES (412, 'James Smith', 15, 'M', 'Santa Rosa', 1);
INSERT INTO criminals VALUES (234, NULL, 22, 'M', 'Santa Rosa', 0);
INSERT INTO criminals VALUES (632, NULL, 23, 'F', 'San Francisco', 0);
INSERT INTO criminals VALUES (901, 'Gordon Ado', 32, 'F', 'San Francisco', 0);
INSERT INTO criminals VALUES (512, 'Bill Byson', 21, 'M', 'Petaluma', 0);
%%sql
-- Select name and average age,
SELECT name, age
-- from the table 'criminals',
FROM criminals
-- if age is not a null value
WHERE name IS NOT NULL
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Data
Step2: Select Name And Ages Only When The Name Is Known
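As an illustrative counterpart (an assumption, not part of the original exercise), the opposite filter counts how many records the query above drops:
%%sql
-- Count the criminals whose name is unknown
SELECT COUNT(*)
FROM criminals
WHERE name IS NULL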
|
12,881
|
<ASSISTANT_TASK:>
Python Code:
import vcsn
a0 = vcsn.B.expression('ab*c').standard()
a0
a1 = a0.lift()
a1
a2 = a1.eliminate_state(2)
a2
a1
a3 = a2.eliminate_state(1)
a3
a4 = a3.eliminate_state(0)
a4
a5 = a4.eliminate_state(1)
a5
a1.eliminate_state()
a1.eliminate_state().eliminate_state().eliminate_state().eliminate_state()
from IPython.html import widgets
from IPython.display import display
from IPython.utils import traitlets
from vcsn.ipython import interact_h
def slider_eliminate_state(aut):
''' Create the list of automata while applying the eliminate_state algorithm.'''
count = aut.state_number()
auts = {}
auts[0] = aut
for i in range(count):
aut = aut.eliminate_state()
auts[i + 1] = aut
return auts, count
def update_svg(name, value, new):
interact_h(lambda: display(auts[new]))
class SliderWidget(widgets.IntSlider):
def __init__(self, auths, count):
self.auths = auths
self.value = 0
self._widget = widgets.IntSlider(description='Algorithm step(s)', min=0, max=count, step=1, value=0)
self._widget.on_trait_change(update_svg,'value')
def show(self):
display(self._widget)
interact_h(lambda: display(auts[0]))
# Call on the automaton to show.
auts, count = slider_eliminate_state(a1 ** 2)
slider = SliderWidget(auts, count)
slider.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The following examples with be using this simple automaton as input.
Step2: We first need to convert this automaton into a spontaneous automaton labeled with expressions. That's the purpose of automaton.lift.
Step3: Explicit state elimination
Step4: Note that the result is a fresh automaton
Step5: Let's eliminate state 1.
Step6: We can also remove the initial and final states.
Step7: Eventually, when all the states have been removed, you get a broken automaton, with no states, but a "lone transition" that bears the answer.
Step8: Rest assured that such automata (no states but with one transition) never occur in the normal course of use of Vcsn.
Step9: Interactive Examples
|
12,882
|
<ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import requests
# get some CSV data from the SDSS SQL server
URL = "http://skyserver.sdss.org/dr12/en/tools/search/x_sql.aspx"
cmd = """
SELECT TOP 1000
p.u, p.g, p.r, p.i, p.z, s.class, s.z, s.zerr
FROM
PhotoObj AS p
JOIN
SpecObj AS s ON s.bestobjid = p.objid
WHERE
p.u BETWEEN 0 AND 19.6 AND
p.g BETWEEN 0 AND 20 AND
s.class = 'GALAXY'
"""
if not os.path.exists('galaxy_colors.csv'):
cmd = ' '.join(map(lambda x: x.strip(), cmd.split('\n')))
response = requests.get(URL, params={'cmd': cmd, 'format':'csv'})
with open('galaxy_colors.csv', 'w') as f:
f.write(response.text)
!ls -lh galaxy_colors.csv
!more galaxy_colors.csv
dtype=[('u', 'f8'),
('g', 'f8'),
('r', 'f8'),
('i', 'f8'),
('z', 'f8'),
('class', 'S10'),
('redshift', 'f8'),
('redshift_err', 'f8')]
data = np.loadtxt('galaxy_colors.csv', skiprows=2, delimiter=',', dtype=dtype)
data[:10]
from astropy.io import ascii
data = ascii.read('galaxy_colors.csv', format='csv', comment='#')
type(data)
data[:10]
import pandas
data = pandas.read_csv('galaxy_colors.csv', comment='#')
type(data)
data.head()
data.describe()
# Pandas reads from *lots* of different data sources
# pandas.read_<TAB>   # tab-complete here to list the many readers: read_csv, read_excel, read_json, read_sql, ...
# get some data from CDS
prefix = "http://cdsarc.u-strasbg.fr/vizier/ftp/cats/J/ApJ/686/749/"
for fname in ["ReadMe", "table10.dat"]:
if not os.path.exists(fname):
response = requests.get(prefix + fname)
with open(fname, 'w') as f:
f.write(response.text)
!cat table10.dat
!cat ReadMe
# must specify the "readme" here.
data = ascii.read("table10.dat", format='cds', readme="ReadMe")
data
# get an SDSS image (can search for images from http://dr12.sdss3.org/fields/)
if not os.path.exists("frame-g-006728-4-0121.fits.bz2"):
!wget http://dr12.sdss3.org/sas/dr12/boss/photoObj/frames/301/6728/4/frame-g-006728-4-0121.fits.bz2
if not os.path.exists("frame-g-006728-4-0121.fits"):
!bunzip2 frame-g-006728-4-0121.fits.bz2
from astropy.io import fits
hdulist = fits.open("frame-g-006728-4-0121.fits")
hdulist
hdulist.info()
hdulist[0].data
hdulist[0].header
import fitsio
f = fitsio.FITS("frame-g-006728-4-0121.fits")
# summary of file HDUs
f
# summary of first HDU
f[0]
# Summary of 3rd HDU
f[2]
# Actually read the data.
data = f[0].read()
data
from scipy.io import readsav
# Note: won't work unless you have this sav file!
data = readsav("150623434_det8_8100keV.sav")
data
len(data.events)
!rm galaxy_colors.csv
!rm ReadMe
!rm table10.dat
!rm frame-g-006728-4-0121.fits.bz2
!rm frame-g-006728-4-0121.fits
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting data into Python
Step2: Using numpy.loadtxt
Step3: Using astropy.io.ascii
Step4: Using pandas
Step5: Specialized text formats
Step6: See http
Step7: astropy.io.fits
Step8: fitsio
Step9: Salvaging data from IDL
Step10: Clean up downloaded files
|
12,883
|
<ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
import phoebe
import numpy as np
b = phoebe.default_binary()
b.set_value('q', value=0.7)
b.set_value('period', component='binary', value=10)
b.set_value('sma', component='binary', value=25)
b.set_value('incl', component='binary', value=0)
b.set_value('ecc', component='binary', value=0.9)
print(b.filter(qualifier='requiv*', context='component'))
b.set_value('requiv', component='primary', value=1.1)
b.set_value('requiv', component='secondary', value=0.9)
b.add_dataset('lc',
compute_times=phoebe.linspace(-2, 2, 201),
dataset='lc01')
b.add_dataset('orb', compute_times=phoebe.linspace(-2, 2, 201))
anim_times = phoebe.linspace(-2, 2, 101)
b.add_dataset('mesh',
compute_times=anim_times,
coordinates='uvw',
dataset='mesh01')
b.run_compute(irrad_method='none')
afig, mplfig = b.plot(kind='lc', x='phases', t0='t0_perpass', show=True)
afig, mplfig = b.plot(time=0.0,
z={'orb': 'ws'},
c={'primary': 'blue', 'secondary': 'red'},
fc={'primary': 'blue', 'secondary': 'red'},
ec='face',
uncover={'orb': True},
trail={'orb': 0.1},
highlight={'orb': False},
tight_layout=True,
show=True)
afig, mplfig = b.plot(times=anim_times,
z={'orb': 'ws'},
c={'primary': 'blue', 'secondary': 'red'},
fc={'primary': 'blue', 'secondary': 'red'},
ec='face',
uncover={'orb': True},
trail={'orb': 0.1},
highlight={'orb': False},
tight_layout=True, pad_aspect=False,
animate=True,
save='eccentric_ellipsoidal.gif',
save_kwargs={'writer': 'imagemagick'})
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle.
Step2: Now we need a highly eccentric system that nearly overflows at periastron and is slightly eclipsing.
Step3: Adding Datasets
Step4: Running Compute
Step5: Plotting
Step6: Now let's make a nice figure.
Step7: Now let's animate the same figure in time. We'll use the same arguments as the static plot above, with the following exceptions
|
12,884
|
<ASSISTANT_TASK:>
Python Code:
%config InlineBackend.figure_format='retina'
%matplotlib inline
# Silence warnings
import warnings
warnings.simplefilter(action="ignore", category=FutureWarning)
warnings.simplefilter(action="ignore", category=UserWarning)
warnings.simplefilter(action="ignore", category=RuntimeWarning)
import numpy as np
np.random.seed(123)
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (8, 8)
plt.rcParams["font.size"] = 14
from sklearn.model_selection import train_test_split
from sklearn import datasets
from sklearn import svm
iris = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target,
test_size=0.3, random_state=1)
clf = svm.SVC(kernel='linear', C=1).fit(X_train, y_train)
clf.score(X_test, y_test)
from sklearn.model_selection import cross_val_score
scores = cross_val_score(clf, X_train, y_train, cv=5)
print(scores)
# as you have several estimates you can also compute a measure of the spread
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std()))
from sklearn.model_selection import GridSearchCV
parameter_grid = [{'kernel': ['rbf'], 'gamma': [1e-1, 1e-2, 1e-3, 1e-4],
'C': [1, 10, 20, 30, 40, 50, 100, 1000]},
{'kernel': ['linear'], 'C': [1, 10, 20, 30, 40, 50, 100, 1000]}]
clf = GridSearchCV(svm.SVC(C=1), parameter_grid, cv=5)
clf.fit(X_train, y_train)
# best parameters and score. Should we report this as our performance?
print(clf.best_score_, clf.best_params_)
# test scores for the whole grid
means = clf.cv_results_['mean_test_score']
stds = clf.cv_results_['std_test_score']
for mean, std, params in zip(means, stds, clf.cv_results_['params']):
print("%0.3f (+/-%0.03f) for %r" % (mean, std, params))
# score on the dataset we had in our vault
clf.score(X_test, y_test)
from sklearn.model_selection import RandomizedSearchCV
import scipy
param_dist = {'C': scipy.stats.expon(scale=100),
'gamma': scipy.stats.expon(scale=.1),
'kernel': ['rbf', 'linear']}
# n_iter controls how many points will be sampled
clf = RandomizedSearchCV(svm.SVC(), param_dist, cv=5, n_iter=12)
clf.fit(X_train, y_train)
# best parameters
print(clf.best_params_)
# test scores for all points that were evaluated
means = clf.cv_results_['mean_test_score']
stds = clf.cv_results_['std_test_score']
for mean, std, params in zip(means, stds, clf.cv_results_['params']):
print("%0.3f (+/-%0.03f) for %r" % (mean, std, params))
def f(x):
return (0.5-x[0])**2
X1 = np.linspace(-5, 5, 5)
X2 = np.linspace(-5, 5, 5)
param_grid = []
for x1 in X1:
for x2 in X2:
param_grid.append((x1, x2))
param_grid = np.array(param_grid)
plt.scatter(param_grid[:, 0], param_grid[:, 1], c=[f(x) for x in param_grid], s=260)
plt.colorbar()
plt.xlabel("X1")
plt.ylabel("X2");
vals = []
for x1 in X1:
for x2 in X2:
vals.append((x1, x2, f((x1, x2))))
vals = np.array(vals)
plt.plot(vals[:, 0], vals[:, 2], 'o')
plt.xlabel("X1")
plt.ylabel("f(X)");
vals = []
for n in range(25):
x1 = np.random.uniform(-5, 5)
x2 = np.random.uniform(-5, 5)
vals.append((x1, x2, f((x1, x2))))
vals = np.array(vals)
plt.plot(vals[:, 0], vals[:, 2], 'o')
plt.xlabel("X1")
plt.ylabel("f(X)");
plt.scatter(vals[:, 0], vals[:, 1], c=vals[:, 2], s=260)
plt.colorbar()
plt.xlabel("X1")
plt.ylabel("X2");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Each model we have met so far has several parameters that need tuning - so-called hyper-parameters. This lecture will discuss methods for systematically searching for the best hyper-parameter values.
Step2: As we explore different values of C, are we going to find a setting which does the equivalent of "memorising" the dataset? Is the score obtained on the testing dataset still an honest estimate of how the model will perform on unseen data?
Step3: Tune up
Step4: Some noteworthy points
Step5: Why is random better?
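One way to see it (a small illustration consistent with the toy example in the code above, not taken from the original lecture): when only one of the two parameters actually matters, a 5x5 grid only ever tries 5 distinct values of that parameter, while 25 random draws try 25 distinct values.
import numpy as np
X1 = np.linspace(-5, 5, 5)
grid_x1 = np.repeat(X1, 5)                 # the 25 grid points reuse only 5 distinct x1 values
random_x1 = np.random.uniform(-5, 5, 25)   # 25 random points give 25 distinct x1 values
print(len(np.unique(grid_x1)), len(np.unique(random_x1)))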
|
12,885
|
<ASSISTANT_TASK:>
Python Code:
import json  # needed for json.dumps below (assumed to be imported in an earlier cell of the original notebook)
# 'mdg' is assumed to be a multi-indexed DataFrame of flight records built in earlier cells (not shown here)
flights={}
minn=1.0
for i in mdg.index.get_level_values(0).unique():
#2 weeks downloaded. want to get weekly freq. but multi by 2 dept+arrv
d=4.0
if i not in flights:flights[i]={}
for j in mdg.loc[i].index.get_level_values(0).unique():
if len(mdg.loc[i].loc[j])>minn: #minimum 1 flights required in this period once every 2 weeks
if j not in flights[i]:flights[i][j]={'airports':{},'7freq':0}
flights[i][j]['7freq']=len(mdg.loc[i].loc[j])/d
for k in mdg.loc[i].loc[j].index.get_level_values(0).unique():
if len(mdg.loc[i].loc[j].loc[k])>minn:
if k not in flights[i][j]['airports']:flights[i][j]['airports'][k]={'airlines':{},'7freq':0}
flights[i][j]['airports'][k]['7freq']=len(mdg.loc[i].loc[j].loc[k])/d
for l in mdg.loc[i].loc[j].loc[k].index.get_level_values(0).unique():
if len(mdg.loc[i].loc[j].loc[k].loc[l])>minn:
if l not in flights[i][j]['airports'][k]['airlines']:flights[i][j]['airports'][k]['airlines'][l]={'7freq':0}
flights[i][j]['airports'][k]['airlines'][l]['7freq']=len(mdg.loc[i].loc[j].loc[k].loc[l])/d
flights['TGM']['Budapest']=flights['CLJ']['Budapest']
for j in flights['TGM']:
if flights['CLJ'][j]['7freq']-flights['TGM'][j]['7freq']>0:
flights['CLJ'][j]['7freq']-=flights['TGM'][j]['7freq']
ap=list(flights['TGM'][j]['airports'].keys())[0]
flights['CLJ'][j]['airports'][ap]['7freq']-=flights['TGM'][j]['7freq']
flights['CLJ'][j]['airports'][ap]['airlines'][u'Wizz Air']['7freq']-=flights['TGM'][j]['7freq']
else: flights['CLJ'].pop(j)
file("flights_ro.json",'w').write(json.dumps(flights))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Manual fix for TGM - all of its flights actually depart from CLJ (so they would otherwise be double-counted), and the BUD (Budapest) route is not represented.
|
12,886
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
# use seaborn plotting defaults
import seaborn as sns; sns.set()
from sklearn.cluster import KMeans
from sklearn.datasets.samples_generator import make_blobs, make_circles
from sklearn.utils import shuffle
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.cluster import SpectralClustering
# For the graph representation
import networkx as nx
N = 300
nc = 4
Xs, ys = make_blobs(n_samples=N, centers=nc,
random_state=6, cluster_std=0.60, shuffle = False)
X, y = shuffle(Xs, ys, random_state=0)
plt.scatter(X[:, 0], X[:, 1], s=30);
plt.axis('equal')
plt.show()
X2s, y2s = make_circles(n_samples=N, factor=.5, noise=.05, shuffle=False)
X2, y2 = shuffle(X2s, y2s, random_state=0)
plt.scatter(X2[:, 0], X2[:, 1], s=30)
plt.axis('equal')
plt.show()
# <SOL>
# </SOL>
gamma = 0.5
K = rbf_kernel(X, X, gamma=gamma)
plt.imshow(K, cmap='hot')
plt.colorbar()
plt.title('RBF Affinity Matrix for gamma = ' + str(gamma))
plt.grid('off')
plt.show()
Ks = rbf_kernel(Xs, Xs, gamma=gamma)
plt.imshow(Ks, cmap='hot')
plt.colorbar()
plt.title('RBF Affinity Matrix for gamma = ' + str(gamma))
plt.grid('off')
plt.show()
t = 0.001
# Kt = <FILL IN> # Truncated affinity matrix
# Kst = <FILL IN> # Truncated and sorted affinity matrix
G = nx.from_numpy_matrix(Kt)
graphplot = nx.draw(G, X, node_size=40, width=0.5,)
plt.axis('equal')
plt.show()
Dst = np.diag(np.sum(Kst, axis=1))
Lst = Dst - Kst
# Next, we compute the eigenvalues of the matrix
w = np.linalg.eigvalsh(Lst)
plt.figure()
plt.plot(w, marker='.');
plt.title('Eigenvalues of the matrix')
plt.show()
# <SOL>
# </SOL>
# <SOL>
# </SOL>
wst, vst = np.linalg.eigh(Lst)
for n in range(nc):
plt.plot(vst[:,n], '.-')
plt.imshow(vst[:,:nc], aspect='auto')
plt.grid(False)
plt.title('Display of first 4 eigenvectors of Ks')
# <SOL>
# </SOL>
# <SOL>
# </SOL>
# <SOL>
# </SOL>
est = KMeans(n_clusters=2)
clusters = est.fit_predict(Z2t)
plt.scatter(X2[:, 0], X2[:, 1], c=clusters, s=50, cmap='rainbow')
plt.axis('equal')
plt.show()
n_clusters = 4
gamma = .1 # Warning do not exceed gamma=100
SpClus = SpectralClustering(n_clusters=n_clusters,affinity='rbf',
gamma=gamma)
SpClus.fit(X)
plt.scatter(X[:, 0], X[:, 1], c=SpClus.labels_.astype(np.int), s=50,
cmap='rainbow')
plt.axis('equal')
plt.show()
nc = 2
gamma = 50 #Warning do not exceed gamma=300
SpClus = SpectralClustering(n_clusters=nc, affinity='rbf', gamma=gamma)
SpClus.fit(X2)
plt.scatter(X2[:, 0], X2[:, 1], c=SpClus.labels_.astype(np.int), s=50,
cmap='rainbow')
plt.axis('equal')
plt.show()
nc = 5
SpClus = SpectralClustering(n_clusters=nc, affinity='nearest_neighbors')
SpClus.fit(X2)
plt.scatter(X2[:, 0], X2[:, 1], c=SpClus.labels_.astype(np.int), s=50,
cmap='rainbow')
plt.axis('equal')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Introduction
Step2: Note that we have computed two data matrices
Step3: Note, again, that we have computed both the sorted (${\bf X}_{2s}$) and the shuffled (${\bf X}_2$) versions of the dataset in the code above.
Step4: Spectral clustering algorithms are focused on connectivity
Step5: 2.3. Visualization
Step6: Despite the apparent randomness of the affinity matrix, it contains some hidden structure, that we can uncover by visualizing the affinity matrix computed with the sorted data matrix, ${\bf X}_s$.
Step7: Note that, despite their completely different appearance, both affinity matrices contain the same values, but with a different order of rows and columns.
Step8: 3. Affinity matrix and data graph
Step9: Note that, for this dataset, the graph connects edges from the same cluster only. Therefore, the number of diagonal blocks in $\overline{\bf K}_s$ is equal to the number of connected components in the graph.
Step10: Exercise 4
Step11: Exercise 5
Step12: Note that the position of 1's in eigenvectors ${\bf v}_i$ points out the samples in the $i$-th connected component. This suggests the following tentative clustering algorithm (a compact sketch of the full pipeline is given just after this list)
Step13: 4.3. Non block diagonal matrices.
Step14: Note that, although the eigenvector components cannot be used as a straightforward cluster indicator, they are strongly informative of the clustering structure.
Step15: Complete step 2, 3 and 4, and draw a scatter plot of the samples in ${\bf Z}$
Step16: Complete step 5
Step17: Finally, complete step 6 and show, in a scatter plot, the result of the clustering algorithm
Step18: 5.2. Scikit-learn implementation.
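A minimal end-to-end sketch of the pipeline referenced above (affinity matrix -> Laplacian -> eigenvectors -> K-means) on the two-circles data. It mirrors the variable names used in the notebook's code but is not the solution to the <SOL> exercise cells; the gamma value and the use of 2 eigenvectors are illustrative choices that may need tuning for a clean separation.
gamma = 50
K2 = rbf_kernel(X2, X2, gamma=gamma)       # affinity matrix
D2 = np.diag(np.sum(K2, axis=1))           # degree matrix
L2 = D2 - K2                               # unnormalized graph Laplacian
w2, v2 = np.linalg.eigh(L2)                # eigenvalues/eigenvectors in ascending order
Z2t = v2[:, :2]                            # spectral embedding: first 2 eigenvectors
labels = KMeans(n_clusters=2).fit_predict(Z2t)
plt.scatter(X2[:, 0], X2[:, 1], c=labels, s=50, cmap='rainbow')
plt.axis('equal')
plt.show()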
|
12,887
|
<ASSISTANT_TASK:>
Python Code:
# convention recommended in documentation
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
#enable inline plotting in notebook
%matplotlib inline
df = pd.read_csv("../data/iris.data")
df = df.sample(frac=0.2) # only use 20% of the data so the results aren't so long
type(df)
# Columns can have different types.
# you can check the data types of the values
df.dtypes
# you can access the dataframe with a single column name
df["petal.width"]
# this leaves the original unmodified
# then the returned type is a Series, the second major concept in Pandas
type(df["petal.width"])
#
# alternately you can index a dataframe with a list of column names
df[["sepal.length", "petal.width", "class"]]
# the comparison operator returns a list of boolean
matching = df["sepal.width"] > df["petal.length"]
matching
# which can also be used to query the dataframe
df[matching]
# or more idiomatically
df[df["sepal.width"] > df["petal.length"]]
# one can get aggregates of single dimensions
df["sepal.width"].var() # try min, max, mean, median, sum, var
# or of the whole thing
df.sum() # same operations as above
# it's also possible to plot simple graphs using a simpleish syntax
df["sepal.width"].plot.box()
df["sepal.width"].plot.hist()
df.boxplot(column="sepal.width", by="class")
df.plot.scatter(x="sepal.length", y="sepal.width")
df.groupby("class").mean()
# creating a grouped by plot requires a loop
fig, ax = plt.subplots(figsize=(8,6))
for label, df_ in df.groupby('class'):
df_["sepal.length"].plot(kind="kde", ax=ax, label=label)
plt.legend()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's start by reading in a dataset. This dataset is about different subclasses of the iris flower.
Step2: DataFrame is the basic building block of Pandas. It represents two-dimensional data with labeled rows and columns.
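A tiny illustration of that idea (not from the original notebook): a DataFrame is a table of labeled columns, and selecting a single column gives a Series.
import pandas as pd
toy = pd.DataFrame({"sepal.length": [5.1, 4.9], "class": ["Setosa", "Setosa"]})
print(toy)
print(type(toy["class"]))   # a single column is a pandas.Series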
|
12,888
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image(filename='minimize.png', width=500, height=500)
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
x= np.array([0.,1.,2.,3.])
data = np.array([1.3,1.8,5.,10.7])
plt.scatter(x,data)
xarray=np.arange(-1,4,0.1)
plt.plot(xarray, xarray**2,'r-') # Not the best fit
plt.plot(xarray, xarray**2+1,'g-')
def get_residual(vars,x, data):
a= vars[0]
b=vars[1]
model =a* x**2 +b
return data-model
vars=[1.,0.]
print get_residual(vars,x,data)
print sum(get_residual(vars,x,data))
vars=[1.,1.]
print sum(get_residual(vars,x,data))
vars=[2.,0.]
print sum(get_residual(vars,x,data))
from scipy.optimize import leastsq
vars = [0.,0.]
out = leastsq(get_residual, vars, args=(x, data))
print out
vars=[1.06734694, 0.96428571]
print sum(get_residual(vars,x,data)**2)
vars=[1.06734694, 0.96428571]
plt.scatter(x,data)
xarray=np.arange(-1,4,0.1)
plt.plot(xarray, xarray**2,'r-')
plt.plot(xarray, xarray**2+1,'g-')
fitted = vars[0]* xarray**2 +vars[1]
plt.plot(xarray, fitted,'b-')
from lmfit import minimize, Parameters
params = Parameters()
params.add('amp', value=0.)
params.add('offset', value=0.)
def get_residual(params,x, data):
amp= params['amp'].value
offset=params['offset'].value
model =amp* x**2 +offset
return data-model
out = minimize(get_residual, params, args=(x, data))
dir(out)
out.params
dir(out.params)
out.params.values
out.__dict__
out.params['amp'].__dict__
params['amp'].vary = False
out = minimize(get_residual, params, args=(x, data))
print out.params
print out.chisqr
params['amp'].value = 1.0673469387778385
out = minimize(get_residual, params, args=(x, data))
print out.chisqr
def get_residual(params,x, data):
#amp= params['amp'].value
#offset=params['offset'].value
#xoffset=params['xoffset'].value
parvals = params.valuesdict()
amp = parvals['amp']
offset = parvals['offset']
model =amp* x**2 +offset
return data-model
params = Parameters()
params.add('amp', value=0.)
#params['amp'] = Parameter(value=..., min=...)
params.add('offset', value=0.)
params.add('xoffset', value=0.0, vary=False)
out = minimize(get_residual, params, args=(x, data))
print out.params
Image(filename='output.png', width=500, height=500)
params['offset'].min = -10.
params['offset'].max = 10.
out = minimize(get_residual, params, args=(x, data))
print out.params
print out.params['amp'].stderr
print out.params['amp'].correl
from lmfit import minimize, Parameters, Parameter, report_fit
result = minimize(get_residual, params, args=(x, data))
help(report_fit)
# write error report
report_fit(result.params)
Image(filename='fitting.png', width=500, height=500)
result2 = minimize(get_residual, params, args=(x, data), method='tnc')
report_fit(result2.params)
result3 = minimize(get_residual, params, args=(x, data), method='powell')
report_fit(result3.params)
print(report_fit(result3))
params.add('amp2', expr='(amp-offset)**2')
def get_residual(params,x, data):
#amp= params['amp'].value
#offset=params['offset'].value
#xoffset=params['xoffset'].value
parvals = params.valuesdict()
amp = parvals['amp']
offset = parvals['offset']
amp2 = parvals['amp2']
model =amp* x**2 + amp2*x**4 +offset
return data-model
result4 = minimize(get_residual, params, args=(x, data))
report_fit(result4.params)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: LMFIT package
Step2: Lets visualize how a quadratic curve fits to it
Step3: Lets build a general quadratic model
Step4: Questions ?
Step5: LMFIT
Step6: Fit values are the same as before !
Step7: Questions ?
Step8: Another way of defining the parameters
Step9: Other manipulations
Step10: Challenge
Step11: stderr
Step12: correl
Step13: report_fit
Step14: Choosing Different Fitting Methods
Step15: Challenge
Step16: Complete report
Step17: Using expressions
|
12,889
|
<ASSISTANT_TASK:>
Python Code:
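# Hedged sketches of the helper functions this exercise cell relies on. They are
# assumptions inferred from how the functions are called below and from the task
# descriptions, not the original solutions.
def double(x):
    # Return the value doubled (used by the for-loop task).
    return x * 2

def km_rechner(miles):
    # Assumption: "km_rechner" converts miles to kilometres (1 mile ~ 1.60934 km);
    # the exact specification is not shown in this excerpt.
    return miles * 1.60934

def m_converter(entry):
    # Convert a {'measurement': ..., 'scale': ...} dict into meters (task 5).
    factors = {'kilometer': 1000.0, 'mile': 1609.34, 'meter': 1.0, 'inches': 0.0254}
    return entry['measurement'] * factors[entry['scale']]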
double(5)
lst = list(range(1,5))
km_rechner(5)
km_rechner(123)
km_rechner(53)
#Unsere Formate
var_first = { 'measurement': 3.4, 'scale': 'kilometer' }
var_second = { 'measurement': 9.1, 'scale': 'mile' }
var_third = { 'measurement': 2.0, 'scale': 'meter' }
var_fourth = { 'measurement': 9.0, 'scale': 'inches' }
print(m_converter(var_first))
print(m_converter(var_second))
print(m_converter(var_third))
print(m_converter(var_fourth))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2.Baue einen for-loop, der durch vordefinierte Zahlen-list geht, und mithilfe der eben kreierten eigenen Funktion, alle Resultate verdoppelt ausdruckt.
Step2: 3.Entwickle einen Code, der den Nutzer nach der Länge seinem Namen fragt, und ihm dann sagt, wieviele Zeichen sein Name hat.
Step3: 5.Wir haben in einem Dictionary mit Massen, die mit ganz unterschiedlichen Formaten daherkommen. Entwickle eine Funktion namens m_converter, die diese Formate berücksichtigt, und in Meter umwandelt.
|
12,890
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
a = np.array([1, 2, 3, 4])
a
type(a)
2*a # multiple ndarray by number
b = np.array([2, 3, 4, 5])
print(a)
print(b)
a+b # two array summation
a*b
np.log(a) # apply functions to array
a
a[1]
a[1:3]
# omitted boundaries are assumed to be the beginning (or end) of the array
a[:3] # grap first three elements
a[::2] # element with step size 2
c = np.array([[0, 1, 2, 3],
[10, 11, 12, 13]])
c
c[0, 3] # get the first row and fourth number
c[1] # get the second row
c[0, 2:4] # get the first rown and slicing the first row
print(a)
a.shape
b = a.reshape((2, 2))
b
b.T
c[:, 2:4] # slice the row
print(c)
c.shape
a.size
c.size
a = np.array([1, 2, 3])
b = np.array([2, 3, 4])
# Stack arrays in sequence vertically (row wise).
np.vstack([a, b])
# Stack arrays in sequence horizontally (column wise).
np.hstack([a, b])
a = np.array([[1, 2, 3],
[4, 5, 6]])
a
np.sum(a)
np.sum(a, axis=0)
np.sum(a, axis=1)
np.max(a)
np.min(a, axis=0)
np.prod(a)
np.arange(4) # arange(start, stop=None, step=1)
np.arange(1, 5, 2)
np.ones(4) # ones(shape)
np.ones((2, 3))
np.zeros((2, 3)) # zeros(shape)
np.linspace(0, 1, 5) # Generate N evenly spaced elements between start and stop values
from IPython.core.display import HTML
HTML("<iframe src=http://www.numpy.org/ width=800 height=350></iframe>")
%matplotlib inline
import matplotlib.pyplot as plt
x = np.linspace(0, 2*np.pi, 20)
y = np.sin(x)
plt.xlabel('x label') # set label for x coordinate
plt.ylabel('y label') # set label for y coordinates
plt.title('Sin Function')
plt.plot(x, y)
x = np.linspace(0, 2*np.pi, 20)
y1 = np.sin(x)
y2 = np.cos(x)
plt.plot(x, y1)
plt.plot(x, y2)
x = np.linspace(0, 2*np.pi, 20)
y1 = np.sin(x)
y2 = np.cos(x)
fig = plt.figure()
ax1 = plt.subplot(211)
ax1.plot(x, y1)
ax2 = plt.subplot(212)
ax2.plot(x, y2)
x = np.linspace(0, 2*np.pi, 20)
y1 = np.sin(x)
y2 = np.cos(x)
fig = plt.figure()
ax1 = plt.subplot(211)
ax1.plot(x, y1, label='Sin', color='b')
ax1.legend() # show the label of plots on ax1
ax2 = plt.subplot(212)
ax2.plot(x, y2, label='Cos', color='r')
ax2.legend() # show the label of plotson ax2
x = np.linspace(0, 2*np.pi, 20)
y1 = np.sin(x)
y2 = np.cos(x)
fig = plt.figure()
ax1 = plt.subplot(211)
ax1.plot(x, y1, label='Sin', color='b')
ax1.legend() # show the label on ax1
ax1.set_xlim(-10, 10) # set the boundary of x coordinate
ax2 = plt.subplot(212)
ax2.plot(x, y2, label='Cos', color='r')
ax2.legend() # show the label on ax2
ax2.set_ylim(-2, 2) # set the boundary of y coordinate
r1 = np.random.normal(0, 10, 1000)
_ = plt.hist(r1, bins=50, normed=True)
x = np.random.normal(0, 10, 100)
y = np.random.normal(5, 3, 100)
plt.scatter(x, y)
r1 = np.random.normal(0, 10, 100)
r2 = np.random.gamma(1, 2, 100)
r3 = r1 + r2
_ = plt.boxplot(r3)
HTML("<iframe src=https://matplotlib.org/index.html width=800 height=350></iframe>")
from scipy import linalg # import linear algebra operations
arr = np.array([[1, 2],
[3, 4]])
linalg.det(arr) # compute the determinant of a square matrix
linalg.inv(arr) # compute the inverse of a squar matrix
from scipy import optimize
def f(x):
return x**2 + 10*np.sin(x)
x = np.arange(-10, 10, 0.1)
plt.plot(x, f(x))
result = optimize.minimize(f, x0=0)
result.x
plt.plot(x, f(x))
plt.axvline(result.x) # plot a vertical lines at result x
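# Illustration (assumed starting point, not from the original notebook): because f has
# several local minima, the answer returned by optimize.minimize depends on x0.
# Starting further to the right typically converges to the local minimum near x ~ 3.8
# instead of the global one found above.
result_local = optimize.minimize(f, x0=5)
print(result_local.x)
plt.plot(x, f(x))
plt.axvline(result.x, color='g', label='x0=0')
plt.axvline(result_local.x, color='r', label='x0=5')
plt.legend()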
from scipy import stats
samples = np.random.normal(size=100)
plt.hist(samples, bins = 20, normed=True, label='Histogram of samples')
x_space = np.linspace(*plt.xlim())
pdf = stats.norm.pdf(x_space)
plt.plot(x_space, pdf, label='PDF')
plt.legend()
HTML("<iframe src=https://www.scipy.org/scipylib/index.html width=800 height=350></iframe>")
a = np.array([1, 3, 5, 7, 8, 9])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: index and slicing
Step2: Multi-Dimensional Arrays
Step3: Basic info of array
Step4: Joining arrays
Step5: Array Calculation Methods
Step6: Array creation Functions
Step7: For more detail of numpy please refer to Numpy.org
Step8: matplotlib
Step9: Draw two plots on one coordinate
Step10: Working with multiple figures and axes
Step11: Show the labels of each plot
Step12: Set boundary of each coordinate
Step13: Histogram
Step14: Scatterplot
Step15: Boxplots
Step16: For more detail of matplotlib please refer to matplotlib.org
Step17: SciPy
Step18: Example of optimization algorithm
Step19: Example of probability distributions method
Step20: For more details on SciPy please refer to scipy.org
Step21: Exercises 1
|
12,891
|
<ASSISTANT_TASK:>
Python Code:
# Imports needed by this snippet (assumed to have been in an earlier cell of the original notebook)
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
from mpl_toolkits.axes_grid1 import make_axes_locatable

# Read data
in_xlsx = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015'
r'\Trends_Maps\heleen_toc_trends_data.xlsx')
df = pd.read_excel(in_xlsx)
df.head()
def shiftedColorMap(cmap, start=0, midpoint=0.5, stop=1.0, name='shiftedcmap'):
'''
From here:
https://stackoverflow.com/questions/7404116/defining-the-midpoint-of-a-colormap-in-matplotlib
Function to offset the "center" of a colormap. Useful for
data with a negative min and positive max and you want the
middle of the colormap's dynamic range to be at zero
Input
-----
cmap : The matplotlib colormap to be altered
start : Offset from lowest point in the colormap's range.
Defaults to 0.0 (no lower ofset). Should be between
0.0 and `midpoint`.
midpoint : The new center of the colormap. Defaults to
0.5 (no shift). Should be between 0.0 and 1.0. In
general, this should be 1 - vmax/(vmax + abs(vmin))
For example if your data range from -15.0 to +5.0 and
you want the center of the colormap at 0.0, `midpoint`
should be set to 1 - 5/(5 + 15)) or 0.75
stop : Offset from highets point in the colormap's range.
Defaults to 1.0 (no upper ofset). Should be between
`midpoint` and 1.0.
'''
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import AxesGrid
cdict = {
'red': [],
'green': [],
'blue': [],
'alpha': []
}
# regular index to compute the colors
reg_index = np.linspace(start, stop, 257)
# shifted index to match the data
shift_index = np.hstack([
np.linspace(0.0, midpoint, 128, endpoint=False),
np.linspace(midpoint, 1.0, 129, endpoint=True)
])
for ri, si in zip(reg_index, shift_index):
r, g, b, a = cmap(ri)
cdict['red'].append((si, r, r))
cdict['green'].append((si, g, g))
cdict['blue'].append((si, b, b))
cdict['alpha'].append((si, a, a))
newcmap = matplotlib.colors.LinearSegmentedColormap(name, cdict)
plt.register_cmap(cmap=newcmap)
return newcmap
# Get max and min slopes for colour scale
vmin = df['slp'].min()
vmax = df['slp'].max()
# Build colourmap for later
orig_cmap = matplotlib.cm.coolwarm
shifted_cmap = shiftedColorMap(orig_cmap,
#start=min_slp,
midpoint=(1 - (vmax/(vmax + abs(vmin)))),
#stop=max_slp,
name='shifted')
# Dict for marker styles and region names
mark_dict = {'1_Bo_NA':['o', 'Boreal North America'],
'2_Temp_NA':['s', 'Temperate North America'],
'3_Atl_NA':['^', 'Atlantic North America'],
'4_Atl_EUR':['^', 'Atlantic Europe'],
'5_Bo_Eur':['o', 'Boreal Europe'],
'6_Temp_Eur':['s', 'Temperate Europe']}
# Setup map
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111)
ax.set_title('North America', fontsize=20)
# Use a Lambert Conformal Conic projection
m = Basemap(projection='lcc', resolution='i',
lon_0=-73.8, lat_0=45, lat_1=40, lat_2=50,
width=3E6, height=2E6)
m.shadedrelief()
m.drawcountries(linewidth=0.5)
# Loop over dataets
for reg in mark_dict.keys():
for tr in ['increasing', 'decreasing', 'no trend']:
# Get data
df1 = df.query('(region==@reg) and (trend==@tr)')
# Map (long, lat) to (x, y) for plotting
x, y = m(df1['lon'].values, df1['lat'].values)
if tr == 'no trend':
plt.scatter(x, y,
c='white',
marker=mark_dict[reg][0],
s=200,
lw=2,
label=mark_dict[reg][1])
else:
plt.scatter(x, y,
c=df1['slp'].values,
marker=mark_dict[reg][0],
s=200,
lw=2,
cmap=shifted_cmap,
vmin=vmin,
vmax=vmax,
label=mark_dict[reg][1])
# Add colourbar
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="2%", pad=0.05)
#plt.colorbar(cax=cax)
#plt.legend(loc='lower right', frameon=True, fontsize=14)
vmax
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Absolute trends
|
12,892
|
<ASSISTANT_TASK:>
Python Code:
# Standard Python libraries
from __future__ import absolute_import, division, print_function, unicode_literals
from typing import Any, Iterator, Mapping, NamedTuple, Sequence, Tuple
import os
import time
import numpy as np
import glob
import matplotlib.pyplot as plt
import PIL
import imageio
from IPython import display
import sklearn
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
import tensorflow_datasets as tfds
print("tf version {}".format(tf.__version__))
import jax
from typing import Any, Callable, Sequence, Optional, Dict, Tuple
import jax.numpy as jnp
rng = jax.random.PRNGKey(0)
# Useful type aliases
Array = jnp.ndarray
PRNGKey = Array
Batch = Mapping[str, np.ndarray]
OptState = Any
import sklearn
import sklearn.datasets
from sklearn.model_selection import train_test_split
def get_datasets_iris():
iris = sklearn.datasets.load_iris()
X = iris["data"]
y = iris["target"]
N, D = X.shape # 150, 4
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
train_ds = {"X": X_train, "y": y_train}
test_ds = {"X": X_test, "y": y_test}
return train_ds, test_ds
train_ds, test_ds = get_datasets_iris()
print(train_ds["X"].shape)
print(train_ds["y"].shape)
iris = sklearn.datasets.load_iris()
print(iris.feature_names)
print(iris.target_names)
def extract_batch(ds, ndx):
batch = {k: v[ndx, ...] for k, v in ds.items()}
# batch = {'X': ds['X'][ndx,:], 'y': ds['y'][ndx]}
return batch
def process_epoch(train_ds, batch_size, rng):
train_ds_size = len(train_ds["X"])
steps_per_epoch = train_ds_size // batch_size
perms = jax.random.permutation(rng, len(train_ds["X"]))
perms = perms[: steps_per_epoch * batch_size] # skip incomplete batch
perms = perms.reshape((steps_per_epoch, batch_size)) # perms[i,:] is list of data indices for step i
for step, perm in enumerate(perms):
batch = extract_batch(train_ds, perm)
print("processing batch {} X shape {}, y shape {}".format(step, batch["X"].shape, batch["y"].shape))
batch_size = 30
process_epoch(train_ds, batch_size, rng)
def load_dataset_iris(split: str, batch_size: int) -> Iterator[Batch]:
train_ds, test_ds = get_datasets_iris()
if split == tfds.Split.TRAIN:
ds = tf.data.Dataset.from_tensor_slices({"X": train_ds["X"], "y": train_ds["y"]})
elif split == tfds.Split.TEST:
ds = tf.data.Dataset.from_tensor_slices({"X": test_ds["X"], "y": test_ds["y"]})
ds = ds.shuffle(buffer_size=1 * batch_size)
ds = ds.batch(batch_size)
ds = ds.prefetch(buffer_size=5)
ds = ds.repeat() # make infinite stream of epochs
return iter(tfds.as_numpy(ds)) # python iterator
batch_size = 30
train_ds = load_dataset_iris(tfds.Split.TRAIN, batch_size)
valid_ds = load_dataset_iris(tfds.Split.TEST, batch_size)
print(train_ds)
training_steps = 5
for step in range(training_steps):
batch = next(train_ds)
print("processing batch {} X shape {}, y shape {}".format(step, batch["X"].shape, batch["y"].shape))
ds, info = tfds.load("binarized_mnist", split=tfds.Split.TRAIN, shuffle_files=True, with_info=True)
print(ds)
print(info)
train_ds, info = tfds.load("mnist", split=tfds.Split.TRAIN, shuffle_files=True, with_info=True)
print(train_ds)
print(info)
ds = tfds.load("mnist", split="train")
print(type(ds))
ds = ds.take(1) # Only take a single example
print(type(ds))
for example in ds: # example is `{'image': tf.Tensor, 'label': tf.Tensor}`
print(list(example.keys()))
image = example["image"]
label = example["label"]
print(image.shape, label)
def rename(batch):
d = {"inputs": batch["image"], "outputs": batch["label"]}
return d
ds = tfds.load("mnist", split="train")
ds = ds.map(rename)
i = 0
for d in ds:
print(d["inputs"].shape)
i += 1
if i > 2:
break
ds = tfds.as_numpy(train_ds)
print(ds)
for i, batch in enumerate(ds):
print(type(batch))
X = batch["image"]
y = batch["label"]
print(X.shape)
print(y.shape)
i += 1
if i > 2:
break
ds = tfds.load("mnist", split="train")
ds = ds.take(100)
# ds = tfds.as_numpy(ds)
batches = ds.repeat(2).batch(batch_size)
print(type(batches))
print(batches)
batch_stream = batches.as_numpy_iterator()
print(type(batch_stream))
print(batch_stream)
b = next(batch_stream)
print(type(b))
print(b["image"].shape)
b = batch_stream.next()
print(type(b))
print(b["image"].shape)
ds = tfds.load("mnist", split="train")
batches = ds.repeat().batch(batch_size)
batch_stream = batches.as_numpy_iterator()
def process_stream(stream):
    # Wrap the raw batch stream, renaming fields to the canonical X/y keys.
    for b in stream:
        X = b["image"]
        y = b["label"]
        d = {"X": X, "y": y}
        yield d
my_stream = process_stream(batch_stream)
b = next(my_stream)
print(type(b))
print(b["X"].shape)
b = next(my_stream)
print(type(b))
print(b["X"].shape)
def load_dataset_mnist(split: tfds.Split, batch_size: int) -> Iterator[Batch]:
ds, ds_info = tfds.load("mnist", split=split, with_info=True)
# For true randomness, we set the shuffle buffer to the full dataset size.
ds = ds.shuffle(ds_info.splits[split].num_examples)
# ds = ds.shuffle(buffer_size=10 * batch_size)
ds = ds.batch(batch_size)
ds = ds.prefetch(tf.data.experimental.AUTOTUNE)
ds = ds.repeat()
return iter(tfds.as_numpy(ds))
def preprocess_batch(batch: Batch, prng_key=None) -> Batch:
# Convert to X,y field names, optionally dequantize X, and convert to float
X = batch["image"].astype(np.float32)
y = batch["label"]
if prng_key is not None:
# Dequantize pixel values {0, 1, ..., 255} with uniform noise [0, 1).
X += jax.random.uniform(prng_key, X.shape)
X = X / 256.0 # Normalize pixel values from [0, 256) to [0, 1)
d = {"X": X, "y": y}
return d
batch_size = 30
train_ds = load_dataset_mnist(tfds.Split.TRAIN, batch_size)
print(type(train_ds))
training_steps = 5
for step in range(training_steps):
batch = next(train_ds)
batch = preprocess_batch(batch, rng)
X = batch["X"]
y = batch["y"]
print("processing batch {} X shape {}, y shape {}".format(step, X.shape, y.shape))
import pandas as pd
pd.set_option("precision", 2) # 2 decimal places
pd.set_option("display.max_rows", 20)
pd.set_option("display.max_columns", 30)
pd.set_option("display.width", 100) # wide windows
# ds, info = tfds.load('mnist', split='train', with_info=True)
ds, info = tfds.load("iris", split="train", with_info=True)
print(info)
df = tfds.as_dataframe(ds.take(4), info)
print(type(df))
print(df)
df.head()
ds, info = tfds.load("mnist", split="train", with_info=True)
fig = tfds.show_examples(ds, info, rows=2, cols=5)
# This function is not well documented. But source code for show_examples is here:
# https://github.com/tensorflow/datasets/blob/v4.2.0/tensorflow_datasets/core/visualization/image_visualizer.py
ds, info = tfds.load("cifar10", split="train", with_info=True)
fig = tfds.show_examples(ds, info, rows=2, cols=5)
import tensorflow_data_validation
tfds.show_statistics(info)
def get_datasets_mnist():
ds_builder = tfds.builder("mnist")
ds_builder.download_and_prepare()
train_ds_all = tfds.as_numpy(ds_builder.as_dataset(split="train", batch_size=-1))
test_ds_all = tfds.as_numpy(ds_builder.as_dataset(split="test", batch_size=-1))
    num_train = len(train_ds_all["image"])
    train_ds = {}
    train_ds["X"] = jnp.reshape(jnp.float32(train_ds_all["image"]) / 255.0, (num_train, -1))
    train_ds["y"] = train_ds_all["label"]
    num_test = len(test_ds_all["image"])
    test_ds = {}
    test_ds["X"] = jnp.reshape(jnp.float32(test_ds_all["image"]) / 255.0, (num_test, -1))
    test_ds["y"] = test_ds_all["label"]
return train_ds, test_ds
dataset = load_dataset_iris(tfds.Split.TRAIN, 30)
batches = dataset.repeat().batch(batch_size)
step = 0
num_minibatches = 5
for batch in batches:
if step >= num_minibatches:
break
X, y = batch["image"], batch["label"]
print("processing batch {} X shape {}, y shape {}".format(step, X.shape, y.shape))
step = step + 1
print("batchified version v2")
batch_stream = batches.as_numpy_iterator()
for step in range(num_minibatches):
batch = batch_stream.next()
X, y = batch["image"], batch["label"] # convert to canonical names
print("processing batch {} X shape {}, y shape {}".format(step, X.shape, y.shape))
step = step + 1
def sample_categorical(N, C):
p = (1 / C) * np.ones(C)
y = np.random.choice(C, size=N, p=p)
return y
def get_datasets_rnd():
Ntrain = 1000
Ntest = 1000
D = 5
C = 10
train_ds = {"X": np.random.randn(Ntrain, D), "y": sample_categorical(Ntrain, C)}
test_ds = {"X": np.random.randn(Ntest, D), "y": sample_categorical(Ntest, C)}
return train_ds, test_ds
def get_datasets_logreg(key):
Ntrain = 1000
Ntest = 1000
D = 5
C = 10
W = jax.random.normal(key, (D, C))
Xtrain = jax.random.normal(key, (Ntrain, D))
logits = jnp.dot(Xtrain, W)
ytrain = jax.random.categorical(key, logits)
Xtest = jax.random.normal(key, (Ntest, D))
logits = jnp.dot(Xtest, W)
ytest = jax.random.categorical(key, logits)
train_ds = {"X": Xtrain, "y": ytrain}
test_ds = {"X": Xtest, "y": ytest}
return train_ds, test_ds
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Manipulating data without using TFDS
Step2: Now we make one pass (epoch) over the data, computing random minibatches of size 30. There are 100 examples total, but with a batch size of 30, only 3 full minibatches fit in one epoch, so the remaining examples are skipped (the code drops the incomplete batch).
Step3: Using TFDS
Step4: Using pre-packaged datasets
Step5: Streams and iterators
Step6: Worked example
Step7: Data visualization
Step8: Graveyard
|
12,893
|
<ASSISTANT_TASK:>
Python Code:
from k2datascience import yelp
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
%matplotlib inline
ydc = yelp.YDC()
ydc.load_data()
business = ydc.file_data['business']
business.shape
business.head()
business.tail()
ydc.get_zip_codes()
business.head()
ydc.get_restaurant_type()
business.head()
business.restaurant_type.ix[0]
business.attributes = yelp.convert_boolean(business.attributes)
business.head()
business.attributes.ix[0]
ydc.calc_open_hours()
business.head(6)
review = ydc.file_data['review']
review.shape
review.head()
review.tail()
ydc.get_avg_stars()
ydc.file_data['business'].head()
mask = ['name', 'restaurant_type', 'Friday', 'Saturday',
'attributes', 'zip_code', 'stars_avg']
ydc.file_data['business'].loc[:, mask]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Data
Step2: Exercise 1
Step3: Exercise 2
Step4: Exercise 3
Step5: Exercise 4
Step6: Exercise 5
Step7: Exercise 6
|
12,894
|
<ASSISTANT_TASK:>
Python Code:
# As usual, a bit of setup
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
returns relative error
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print '%s: ' % k, v.shape
x = np.random.randn(500, 500) + 10
for p in [0.3, 0.6, 0.75]:
out, _ = dropout_forward(x, {'mode': 'train', 'p': p})
out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p})
print 'Running tests with p = ', p
print 'Mean of input: ', x.mean()
print 'Mean of train-time output: ', out.mean()
print 'Mean of test-time output: ', out_test.mean()
print 'Fraction of train-time output set to zero: ', (out == 0).mean()
print 'Fraction of test-time output set to zero: ', (out_test == 0).mean()
print
# p = probability of dropping neron.
# so, bigger p -> more dropout, smaller p -> less dropout
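# For reference, a hedged sketch of what an (inverted) dropout forward/backward pair
# could look like. The actual implementation lives in cs231n/layers.py (not shown);
# this sketch follows the convention in the comment above, i.e. p = probability of
# DROPPING a unit, and is not necessarily the convention used by the assignment code.
def dropout_forward_sketch(x, dropout_param):
    p, mode = dropout_param['p'], dropout_param['mode']
    if 'seed' in dropout_param:
        np.random.seed(dropout_param['seed'])
    if mode == 'train':
        mask = (np.random.rand(*x.shape) >= p) / (1.0 - p)  # inverted dropout: scale at train time
        out = x * mask
    else:
        mask = None
        out = x                                             # test time: identity, no scaling needed
    return out, (dropout_param, mask)

def dropout_backward_sketch(dout, cache):
    dropout_param, mask = cache
    if dropout_param['mode'] == 'train':
        return dout * mask                                  # gradient flows only through kept units
    return dout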
x = np.random.randn(10, 10) + 10
dout = np.random.randn(*x.shape)
dropout_param = {'mode': 'train', 'p': 0.8, 'seed': 123}
out, cache = dropout_forward(x, dropout_param)
dx = dropout_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda xx: dropout_forward(xx, dropout_param)[0], x, dout)
print 'dx relative error: ', rel_error(dx, dx_num)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for dropout in [0, 0.25, 0.5]:
print 'Running check with dropout = ', dropout
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
weight_scale=5e-2, dtype=np.float64,
dropout=dropout, seed=123)
loss, grads = model.loss(X, y)
print 'Initial loss: ', loss
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))
print
# Train two identical nets, one with dropout and one without
num_train = 500
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
dropout_choices = [0, 0.75]
for dropout in dropout_choices:
model = FullyConnectedNet([500], dropout=dropout)
print dropout
solver = Solver(model, small_data,
num_epochs=25, batch_size=100,
update_rule='adam',
optim_config={
'learning_rate': 5e-4,
},
verbose=True, print_every=100)
solver.train()
solvers[dropout] = solver
# Plot train and validation accuracies of the two models
train_accs = []
val_accs = []
for dropout in dropout_choices:
solver = solvers[dropout]
train_accs.append(solver.train_acc_history[-1])
val_accs.append(solver.val_acc_history[-1])
plt.subplot(3, 1, 1)
for dropout in dropout_choices:
plt.plot(solvers[dropout].train_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Train accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
for dropout in dropout_choices:
plt.plot(solvers[dropout].val_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Val accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.gcf().set_size_inches(15, 15)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dropout
Step2: Dropout forward pass
Step3: Dropout backward pass
Step4: Fully-connected nets with Dropout
Step5: Regularization experiment
|
12,895
|
<ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
import scipy.misc # for image resizing
#import scipy.io.wavfile
# pip install soundfile
import soundfile
from IPython.display import Audio as audio_playback_widget
f = './data/raw-from-phone.wav'
#f = './data/num_phone_en-UK_m_Martin15.wav'
# Read in the original file
samples, sample_rate = soundfile.read(f)
def show_waveform(sound):
n_samples = sound.shape[0]
plt.figure(figsize=(12,2))
plt.plot(np.arange(0.0, n_samples)/sample_rate, sound)
plt.xticks( np.arange(0.0, n_samples/sample_rate, 0.5), rotation=90 )
plt.grid(True)
plt.show()
show_waveform(samples)
audio_playback_widget(f)
crop = (3.25, 16.25) # in seconds (from waveform graph above)
cropped = samples[ int(crop[0]*sample_rate):int(crop[1]*sample_rate) ]
show_waveform(cropped)
#Only do this (set it to 1) if you want to replace the file with the cropped version...
if 1:
f = './data/cropped-raw-from-phone.wav'
soundfile.write(f, cropped, samplerate=sample_rate)
print("Wrote '%s'" % (f,))
f = './data/num_phone_en-UK_m_Martin00.wav'
#f = './data/num_Bing_en-UK_f_Susan.wav'
#f = './data/animals_phone_en-UK_m_Martin02.wav'
#f = './data/num_phone_en-UK_m_Martin00.ogg'
#f = './data/num_Bing_en-UK_f_Susan.ogg'
def spectrogram(wav_filepath):
samples, sample_rate = soundfile.read(wav_filepath)
# Original code from :
# https://mail.python.org/pipermail/chicago/2010-December/007314.html
# Rescale so that max/min are ~ +/- 1 around 0
data_av = np.mean(samples)
data_max = np.max(np.absolute(samples-data_av))
sound_data = (samples - data_av)/data_max
## Parameters: 10ms step, 30ms window
nstep = int(sample_rate * 0.01)
nwin = int(sample_rate * 0.03)
nfft = 2*int(nwin/2)
window = np.hamming(nwin)
# will take windows x[n1:n2]. generate and loop over
# n2 such that all frames fit within the waveform
nn = range(nwin, len(sound_data), nstep)
X = np.zeros( (len(nn), nfft//2) )
for i,n in enumerate(nn):
segment = sound_data[ n-nwin:n ]
z = np.fft.fft(window * segment, nfft)
X[i,:] = np.log(np.absolute(z[:nfft//2]))
return X
# This is a function that smooths a time-series
# which enables us to segment the input into words by looking at the 'energy' profile
def smooth(x, window_len=31): # , window='hanning'
# http://scipy-cookbook.readthedocs.io/items/SignalSmooth.html
#s = np.r_[ x[window_len-1:0:-1], x, x[-1:-window_len:-1]]
s = np.r_[ np.zeros( ((window_len-1)//2,) ), x, np.zeros( ((window_len-1)//2,) ) ]
w=np.hamming(window_len)
return np.convolve(w/w.sum(), s, mode='valid') #[window_len-1 : -(window_len-1) ]
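# Note: the kernel w/w.sum() sums to 1 and the input is zero-padded by
# (window_len-1)//2 samples on each side, so for the default odd window_len
# smooth() returns an array of the same length as its input -- which is what
# the energy-based word segmentation below relies on.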
X = spectrogram(f)
print("X.shape=", X.shape)
#Y = np.std(X, axis=1)
Y = np.max(X, axis=1)
Y_min = np.min(Y)
Y_range = Y.max()-Y_min
Y = (Y - Y_min)/Y_range
print("Y.shape=", Y.shape)
Y_crop = np.where(Y>0.25, 1.0, 0.0)
# Apply some smoothing
Y_crop = smooth(Y_crop)
Y_crop = np.where(Y_crop>0.01, 1.0, 0.0)
print("Y_crop.shape=", Y_crop.shape)
plt.figure(figsize=(12,3))
plt.imshow(X.T, interpolation='nearest', origin='lower', aspect='auto')
plt.xlim(xmin=0)
plt.ylim(ymin=0)
plt.plot(Y * X.shape[1])
plt.plot(Y_crop * X.shape[1])
plt.show()
#Y.min(), Y.max()
#X[100,:]
print( np.argmin(X)/248, np.argmax(X)/248 )
audio_playback_widget(f)
#http://stackoverflow.com/questions/4494404/find-large-number-of-consecutive-values-fulfilling-condition-in-a-numpy-array
def contiguous_regions(condition):
idx = []
i = 0
while i < len(condition):
x1 = i + condition[i:].argmax()
try:
x2 = x1 + condition[x1:].argmin()
except:
x2 = x1 + 1
if x1 == x2:
if condition[x1] == True:
x2 = len(condition)
else:
break
idx.append( [x1,x2] )
i = x2
return idx
contiguous_regions(Y_crop>0.5)
import re
remove_punc = re.compile(r'[,.?!]')
squash_spaces = re.compile(r'\s+')
def words(s):
s = remove_punc.sub(' ', s)
s = squash_spaces.sub(' ', s)
return s.strip().lower()
sentences=dict(
num=words("zero one two three four five six seven eight nine."),
animals=words("cat dog fox bird."),
# https://www.quora.com/Is-there-a-text-that-covers-the-entire-English-phonetic-range/
qbf=words("That quick beige fox jumped in the air over each thin dog. "+
"Look out, I shout, for he's foiled you again, creating chaos."),
shy=words("Are those shy Eurasian footwear, cowboy chaps, "+
"or jolly earthmoving headgear?"),
ate=words("The hungry purple dinosaur ate the kind, zingy fox, the jabbering crab, "+
"and the mad whale and started vending and quacking."),
suz=words("With tenure, Suzie'd have all the more leisure for yachting, "+
"but her publications are no good."),
tbh=words("Shaw, those twelve beige hooks are joined if I patch a young, gooey mouth."),
# https://en.wikipedia.org/wiki/The_North_Wind_and_the_Sun #594
# http://videoweb.nie.edu.sg/phonetic/courses/aae103-web/wolf.html #1111
)
sentences['num']
def for_msft(prefixes): # comma separated
return ' '.join([sentences[a] for a in prefixes.split(',')]).replace(' ', '\n')
This is the SSML that will be sent to the service:
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
xmlns:mstts="http://www.w3.org/2001/mstts" xml:lang="en-GB">
<voice xml:lang="en-GB" name="Microsoft Server Speech Text to Speech Voice (en-GB, Susan, Apollo)">
zero
one
two
three
four
five
six
seven
eight
nine
</voice>
</speak>
# https://www.microsoft.com/cognitive-services/en-us/Speech-api/documentation/API-Reference-REST/BingVoiceOutput
a=for_msft('num') # 49 long...
#a=for_msft('qbf,shy,ate,suz,tbh') # 474 long...
print("length_in_chars=%d\n%s" % (len(a),a,))
# sox_ogg_param='--rate 16000 --channels 1'
# sox_wav_param="${sox_ogg_param} --encoding signed-integer"
# sox english.au ${sox_wav_param} english.wav norm -3
# sox english.au ${sox_ogg_param} english.ogg norm -3
# pip install python_speech_features
import python_speech_features
sample_window_step = 0.01 # in seconds (10ms)
def get_sample_features(samples, sample_rate):
#sample_feat = python_speech_features.mfcc(samples, sample_rate, numcep=13, nfilt=26, appendEnergy=True)
#sample_feat = python_speech_features.mfcc(samples, sample_rate, numcep=28, nfilt=56, appendEnergy=True)
#sample_feat, e = python_speech_features.fbank(samples,samplerate=sample_rate,
# winlen=0.025,winstep=0.01,nfilt=26,nfft=512,
# lowfreq=0,highfreq=None,preemph=0.97, winfunc=lambda x:np.ones((x,)))
features, energy = python_speech_features.fbank(samples, samplerate=sample_rate,
winlen=0.025, winstep=sample_window_step,
nfilt=32,nfft=512,
lowfreq=0,highfreq=None,preemph=0.25,
winfunc=lambda x:np.hamming( x ))
return features, energy
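# Note: python_speech_features.fbank returns a (num_frames, nfilt) array of
# filterbank energies (here nfilt=32, one frame every 10 ms) together with a
# per-frame total-energy vector; the word segmentation below thresholds on
# that energy vector.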
def get_sample_isolated_words(energy, plot=False):
log_e = np.log(energy)
if plot: plt.plot(log_e-5)
#log_e = smooth(log_e)
#if plot: plt.plot(log_e)
log_e_hurdle = (log_e.max() - log_e.min())*0.25 + log_e.min()
log_e_crop = np.where(log_e>log_e_hurdle, 1.0, 0.0)
if plot: plt.plot(log_e_crop * 25 - 2.5)
# By smoothing, and applying a very low hurdle, we expand the crop area safely
log_e_crop_expanded = np.where( smooth(log_e_crop, )>0.01, 1.0, 0.0)
if plot: plt.plot(log_e_crop_expanded * 30 -5)
return contiguous_regions(log_e_crop_expanded>0.5)
samples, sample_rate = soundfile.read(f)
sample_feat, energy = get_sample_features(samples, sample_rate)
plt.figure(figsize=(12,3))
plt.imshow(np.log(sample_feat.T), interpolation='nearest', origin='lower', aspect='auto')
plt.xlim(xmin=0)
word_ranges = get_sample_isolated_words(energy, plot=True)
plt.show()
print(sample_feat.shape, energy.shape, energy[10])
audio_playback_widget(f)
def split_combined_file_into_wavs(f, prefix='num'):
# f ~ './data/num_Bing_en-UK_f_Susan.wav'
f_base_orig = os.path.basename( f )
if not f_base_orig.startswith(prefix+"_"):
print("Wrong prefix for '%s'" % (f_base_orig,))
return
# Here's the new filename (directory to be calculated per-word)
f_base = os.path.splitext(f_base_orig)[0][len(prefix)+1:] + '.wav'
samples, sample_rate = soundfile.read(f)
sample_feat, energy = get_sample_features(samples, sample_rate)
word_ranges = get_sample_isolated_words(energy, plot=False)
#print(word_ranges)
words = sentences[prefix].split(' ')
if len(word_ranges) != len(words):
print("Found %d segments, rather than %d, in '%s'" % (len(word_ranges), len(words), f,))
return
for i, word in enumerate(words):
word_path = os.path.join('data', prefix, word)
os.makedirs(word_path, exist_ok=True)
wr = word_ranges[i]
fac = int(sample_window_step*sample_rate)
soundfile.write(os.path.join(word_path, f_base), samples[ wr[0]*fac:wr[1]*fac ], samplerate=sample_rate)
split_combined_file_into_wavs('./data/num_Bing_en-UK_f_Susan.wav')
#split_combined_file_into_wavs('./data/num_phone_en-UK_m_Martin00.wav')
def split_all_combined_files_into_wavs(prefix='num'):
for audio_file in sorted(os.listdir( './data' )):
filename_stub, ext = os.path.splitext(audio_file)
if not (ext=='.wav' or ext=='.ogg'): continue
if not filename_stub.startswith( prefix+'_'): continue
print("Splitting %s" % (audio_file,))
split_combined_file_into_wavs( './data/'+audio_file, prefix=prefix)
split_all_combined_files_into_wavs(prefix='num')
# Convert a given (isolated word) WAV into a 'stamp' - using a helper function
def samples_to_stamp(samples, sample_rate):
sample_feat, energy = get_sample_features(samples, sample_rate)
data = np.log(sample_feat)
# Now normalize each vertical slice so that the minimum energy is ==0
data_mins = np.min(data, axis=1)
data_min0 = data - data_mins[:, np.newaxis]
# Force the data into the 'stamp size' as an image (implicit range normalization occurs)
stamp = scipy.misc.imresize(data_min0, (64, 32), 'bilinear')
# https://github.com/scipy/scipy/issues/4458 :: The stamps are stored as uint8...
return stamp
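# Note: scipy.misc.imresize was deprecated and removed in SciPy >= 1.3. On a
# recent SciPy, one possible (untested here) replacement is
# skimage.transform.resize, e.g.
#   from skimage.transform import resize
#   stamp = (resize(data_min0 / data_min0.max(), (64, 32), order=1) * 255).astype(np.uint8)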
def wav_to_stamp(prefix, word, wav):
samples, sample_rate = soundfile.read( os.path.join('data', prefix, word, wav) )
return samples_to_stamp(samples, sample_rate)
# Show what the 'visual stamp' for a given word looks like
stamp = wav_to_stamp('num', 'six', 'phone_en-UK_m_Martin00.wav')
plt.imshow(stamp.T, interpolation='nearest', origin='lower', aspect='auto')
plt.show()
print( np.min(stamp), np.max(stamp) )
audio_playback_widget( os.path.join('data', 'num', 'six', 'phone_en-UK_m_Martin00.wav') )
# combine all words from a given prefix into a dataset of 'stamps'
import pickle
def create_dataset_from_folders(prefix, save_as='.pkl', seed=13):
words = sentences[prefix].split(' ')
stamps, labels = [], []
for label_i, word in enumerate( words ):
# Find all the files for this word
for stamp_file in os.listdir( os.path.join('data', prefix, word )):
if not stamp_file.endswith('.wav'): continue
#print(stamp_file)
stamp = wav_to_stamp(prefix, word, stamp_file)
stamps.append(stamp)
labels.append(label_i)
if save_as is None: # Return the data directly
return stamps, labels, words
np.random.seed(seed)
data_dictionary = dict(
stamp=stamps, label=labels,
rand=np.random.rand( len(labels) ), # This is to enable us to sample the data (based on hurdles)
words=words,
)
ds_file = os.path.join('data', prefix+save_as)
pickle.dump(data_dictionary, open(ds_file, 'wb'), protocol=pickle.HIGHEST_PROTOCOL)
print("Created dataset : %s" % (ds_file, ))
#if not os.path.exists('data/num.pkl'):
if True:
create_dataset_from_folders('num')
# Read in the dataset
dataset = pickle.load(open(os.path.join('data', 'num.pkl'), 'rb'))
# Plot all of a given 'word'
indices = [ i for i,label in enumerate(dataset['label'])
if dataset['words'][label]=='four']
plt.figure(figsize=(12, 2))
for pos, i in enumerate(indices[0:16]): # at most 16
plt.subplot(2, 8, pos+1) # nrows, ncols, subplot#
plt.imshow(dataset['stamp'][i].T, cmap='gray', origin='lower', interpolation='nearest')
plt.axis('off')
plt.show()
# Now do something similar for 'test files', create a dataset for all the audio files in the given folder
def create_dataset_from_adhoc_wavs(prefix, save_as='.pkl', seed=13):
stamps, labels, words = [], [], []
for audio_file in sorted(os.listdir( os.path.join('data', prefix) )):
filename_stub, ext = os.path.splitext(audio_file)
if not (ext=='.wav' or ext=='.ogg'): continue
samples, sample_rate = soundfile.read( os.path.join('data', prefix, audio_file) )
sample_feat, energy = get_sample_features(samples, sample_rate)
word_ranges = get_sample_isolated_words(energy, plot=False)
for i, wr in enumerate(word_ranges):
wr = word_ranges[i]
fac = int(sample_window_step*sample_rate)
segment = samples[ wr[0]*fac:wr[1]*fac ]
stamp = samples_to_stamp(segment, sample_rate)
print("Adding : %s #%2d : (%d,%d)" % (filename_stub, i, wr[0], wr[1],))
stamps.append(stamp)
labels.append(-1)
words.append("%s_%d" % (filename_stub, i))
np.random.seed(seed)
data_dictionary = dict(
stamp=stamps, label=labels,
rand=np.random.rand( len(labels) ),
words=words,
)
ds_file = os.path.join('data', prefix+save_as)
pickle.dump(data_dictionary, open(ds_file, 'wb'), protocol=pickle.HIGHEST_PROTOCOL)
print("Created dataset : %s" % (ds_file, ))
test_prefix = 'num' +'-test'
create_dataset_from_adhoc_wavs(test_prefix)
# Read in the ad-hoc test dataset
dataset = pickle.load(open(os.path.join('data', 'num-test.pkl'), 'rb'))
plt.figure(figsize=(12,2))
for pos in range(len(dataset['stamp'][0:16])): # at most 16
plt.subplot(2, 8, pos+1) # nrows, ncols, subplot#
plt.imshow(dataset['stamp'][pos].T, cmap='gray', origin='lower', interpolation='nearest')
plt.axis('off')
plt.show()
# First a training set
split_all_combined_files_into_wavs(prefix='animals')
create_dataset_from_folders('animals')
# And then some ad-hoc test cases
test_prefix = 'animals' +'-test'
create_dataset_from_adhoc_wavs(test_prefix)
audio_playback_widget( os.path.join('data', test_prefix, 'cat_dog_fox_bird.wav') )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Normally an audio file needs clipping
Step2: Now, let's select the region of interest
Step3: When satisfied, write the file to disk - and update the name as appropriate (it's also possible to over-write the existing file).
Step4: Now look at the audio spectrograms
Step5: The following defines a function that does the spectrogram (FFT, etc), and then we define a smoothing function that will help us segment the audio into words later.
Step6: Work out the contiguous regions of high energy (== sound) so that we can split the file into voiced segments.
Step7: Next
Step9: We can also generate voices synthetically - and Bing has a nice interface for that at https
Step10: If you want to do some manipulations on raw audio in Linux, sox is the perfect tool.
Step11: Now use 'proper' audio tools for segmentation
Step12: Redo the calculation above, but using the 'proper' tools. Notice how the scaling, contrast, etc. look better.
Step13: Building the dataset
Step14: Iterate through all the audio files with a given prefix, and unfold them
Step15: Convert WAVs to 'stamps'
Step16: Collect the WAVs into a 'stamp' dataset
Step17: Test that the dataset can be read back
Step18: Enable 'ad-hoc' look-see testing
Step19: All done
|
12,896
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from modules.helpers import plot_images
from functools import partial
from sklearn.metrics import (roc_auc_score, roc_curve)
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
imshow = partial(plt.imshow, cmap='gray', interpolation='nearest', aspect='auto')
sns.set(style='white')
V = 25
K = 10
N = 100
D = 1000
topics = []
topic_base = np.concatenate((np.ones((1, 5)) * 0.2, np.zeros((4, 5))), axis=0).ravel()
for i in range(5):
topics.append(np.roll(topic_base, i * 5))
topic_base = np.concatenate((np.ones((5, 1)) * 0.2, np.zeros((5, 4))), axis=1).ravel()
for i in range(5):
topics.append(np.roll(topic_base, i))
topics = np.array(topics)
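# Note: this builds the classic 'bars' toy corpus (a la Griffiths & Steyvers):
# on a 5x5 word grid the first five topics are horizontal bars and the last
# five are vertical bars, each spreading probability 0.2 over five words.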
plt.figure(figsize=(10, 5))
plot_images(plt, topics, (5, 5), layout=(2, 5), figsize=(10, 5))
alpha = np.repeat(1., K)
np.random.seed(42)
thetas = np.random.dirichlet(alpha, size=D)
topic_assignments = np.array([np.random.choice(range(K), size=100, p=theta)
for theta in thetas])
word_assignments = np.array([[np.random.choice(range(V), size=1, p=topics[topic_assignments[d, n]])[0]
for n in range(N)] for d in range(D)])
doc_term_matrix = np.array([np.histogram(word_assignments[d], bins=V, range=(0, V - 1))[0] for d in range(D)])
imshow(doc_term_matrix)
# choose parameter values
mu = 0.
nu2 = 1.
np.random.seed(4)
eta = np.random.normal(loc=mu, scale=nu2, size=K)
print(eta)
# plot histogram of pre-responses
zeta = np.array([np.dot(eta, thetas[i]) for i in range(D)])
_ = plt.hist(zeta, bins=50)
# choose parameter values
y = (zeta >= 0).astype(int)
# plot histogram of responses
print('positive examples {} ({:.1f}%)'.format(y.sum(), y.sum() / D * 100))
_ = plt.hist(y)
from slda.topic_models import BLSLDA
_K = 10
_alpha = alpha
_beta = np.repeat(0.01, V)
_mu = mu
_nu2 = nu2
_b = 7.25
n_iter = 500
blslda = BLSLDA(_K, _alpha, _beta, _mu, _nu2, _b, n_iter, seed=42)
%%time
blslda.fit(doc_term_matrix, y)
plot_images(plt, blslda.phi, (5, 5), (2, 5), figsize=(10, 5))
topic_order = [1, 7, 0, 3, 6, 4, 9, 5, 2, 8]
plot_images(plt, blslda.phi[topic_order], (5, 5), (2, 5), figsize=(10, 5))
imshow(blslda.theta)
plt.plot(blslda.loglikelihoods)
burn_in = 300 #max(n_iter - 100, int(n_iter / 2))
blslda.loglikelihoods[burn_in:].mean()
eta_pred = blslda.eta[burn_in:].mean(axis=0)
print(eta_pred)
print(eta_pred[topic_order])
print(eta)
np.random.seed(42^2)
thetas_test = np.random.dirichlet(alpha, size=D)
topic_assignments_test = np.array([np.random.choice(range(K), size=100, p=theta)
for theta in thetas_test])
word_assignments_test = np.array([[np.random.choice(range(V), size=1, p=topics[topic_assignments_test[d, n]])[0]
for n in range(N)] for d in range(D)])
doc_term_matrix_test = np.array([np.histogram(word_assignments_test[d], bins=V, range=(0, V - 1))[0] for d in range(D)])
y_test = np.array([np.dot(eta, thetas_test[i]) >= 0 for i in range(D)], dtype=int)
imshow(doc_term_matrix_test)
def bern_param(eta, theta):
return np.exp(np.dot(eta, theta)) / (1 + np.exp(np.dot(eta, theta)))
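# Note: bern_param is just the logistic sigmoid of the linear predictor,
# sigma(eta . theta) = 1 / (1 + exp(-eta . theta)), i.e. the Bernoulli success
# probability for a document with topic proportions theta.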
thetas_test_blslda = blslda.transform(doc_term_matrix_test)
y_blslda = [bern_param(eta_pred, thetas_test_blslda[i]) for i in range(D)]
_ = plt.hist(y_blslda, bins=30)
fpr, tpr, _ = roc_curve(y_test, y_blslda)
plt.plot(fpr, tpr, label=('AUC = {:.3f}'.format(roc_auc_score(y_test, y_blslda))))
_ = plt.legend(loc='best')
_b_grid = np.arange(1, 10, 1)
roc_auc_scores = []
for _b in _b_grid:
print('Training BLSLDA with b = {}'.format(_b))
_blslda = BLSLDA(_K, _alpha, _beta, _mu, _nu2, _b, n_iter, seed=42, verbose=False)
_blslda.fit(doc_term_matrix, y)
_thetas_test_blslda = _blslda.transform(doc_term_matrix_test)
_eta_pred = _blslda.eta[burn_in:].mean(axis=0)
_y_blslda = [bern_param(_eta_pred, _thetas_test_blslda[i]) for i in range(D)]
roc_auc_scores.append(roc_auc_score(y_test, _y_blslda))
print(' roc_auc_score = {}'.format(roc_auc_scores[-1]))
print(_b_grid[np.argmax(roc_auc_scores)], np.max(roc_auc_scores))
plt.plot(_b_grid, roc_auc_scores)
_b_grid = np.arange(7, 8, .1)
roc_auc_scores = []
for _b in _b_grid:
print('Training BLSLDA with b = {}'.format(_b))
_blslda = BLSLDA(_K, _alpha, _beta, _mu, _nu2, _b, n_iter, seed=42, verbose=False)
_blslda.fit(doc_term_matrix, y)
_thetas_test_blslda = _blslda.transform(doc_term_matrix_test)
_eta_pred = _blslda.eta[burn_in:].mean(axis=0)
_y_blslda = [bern_param(_eta_pred, _thetas_test_blslda[i]) for i in range(D)]
roc_auc_scores.append(roc_auc_score(y_test, _y_blslda))
print(' roc_auc_score = {}'.format(roc_auc_scores[-1]))
print(_b_grid[np.argmax(roc_auc_scores)], np.max(roc_auc_scores))
plt.plot(_b_grid, roc_auc_scores)
from slda.topic_models import LDA
lda = LDA(_K, _alpha, _beta, n_iter, seed=42)
%%time
lda.fit(doc_term_matrix)
plot_images(plt, lda.phi, (5, 5), (2, 5), figsize=(10, 5))
imshow(lda.theta)
plt.plot(lda.loglikelihoods)
thetas_test_lda = lda.transform(doc_term_matrix_test)
from sklearn.linear_model import LogisticRegression
_C_grid = np.arange(0.001, 100, 1)
roc_auc_scores = []
for _C in _C_grid:
print('Training Logistic Regression with C = {}'.format(_C))
_lr = LogisticRegression(fit_intercept=False, C=_C)
_lr.fit(lda.theta, y)
_y_lr = _lr.predict_proba(thetas_test_lda)[:, 1]
roc_auc_scores.append(roc_auc_score(y_test, _y_lr))
print(' roc_auc_score = {}'.format(roc_auc_scores[-1]))
print(_C_grid[np.argmax(roc_auc_scores)], np.max(roc_auc_scores))
plt.plot(_C_grid, roc_auc_scores)
lr = LogisticRegression(fit_intercept=False, C=39)
lr.fit(lda.theta, y)
y_lr = lr.predict_proba(thetas_test_lda)[:, 1]
fpr, tpr, _ = roc_curve(y_test, y_lr)
plt.plot(fpr, tpr, label=('AUC = {:.3f}'.format(roc_auc_score(y_test, y_lr))))
_ = plt.legend(loc='best')
from sklearn.ensemble import GradientBoostingClassifier
gbr = GradientBoostingClassifier()
gbr.fit(lda.theta, y)
y_gbr = gbr.predict_proba(thetas_test_lda)[:, 1]
fpr, tpr, _ = roc_curve(y_test, y_gbr)
plt.plot(fpr, tpr, label=('AUC = {:.3f}'.format(roc_auc_score(y_test, y_gbr))))
_ = plt.legend(loc='best')
_K = 5
_alpha = np.repeat(1., _K)
_b_grid = np.arange(1, 10, 1)
roc_auc_scores = []
for _b in _b_grid:
print('Training BLSLDA with b = {}'.format(_b))
_blslda = BLSLDA(_K, _alpha, _beta, _mu, _nu2, _b, n_iter, seed=42, verbose=False)
_blslda.fit(doc_term_matrix, y)
_thetas_test_blslda = _blslda.transform(doc_term_matrix_test)
_eta_pred = _blslda.eta[burn_in:].mean(axis=0)
_y_blslda = [bern_param(_eta_pred, _thetas_test_blslda[i]) for i in range(D)]
roc_auc_scores.append(roc_auc_score(y_test, _y_blslda))
print(' roc_auc_score = {}'.format(roc_auc_scores[-1]))
print(_b_grid[np.argmax(roc_auc_scores)], np.max(roc_auc_scores))
plt.plot(_b_grid, roc_auc_scores)
_b_grid = np.arange(9, 11, 0.2)
roc_auc_scores = []
for _b in _b_grid:
print('Training BLSLDA with b = {}'.format(_b))
_blslda = BLSLDA(_K, _alpha, _beta, _mu, _nu2, _b, n_iter, seed=42, verbose=False)
_blslda.fit(doc_term_matrix, y)
_thetas_test_blslda = _blslda.transform(doc_term_matrix_test)
_eta_pred = _blslda.eta[burn_in:].mean(axis=0)
_y_blslda = [bern_param(_eta_pred, _thetas_test_blslda[i]) for i in range(D)]
roc_auc_scores.append(roc_auc_score(y_test, _y_blslda))
print(' roc_auc_score = {}'.format(roc_auc_scores[-1]))
print(_b_grid[np.argmax(roc_auc_scores)], np.max(roc_auc_scores))
plt.plot(_b_grid, roc_auc_scores)
_b = 9
n_iter = 1000
blslda1 = BLSLDA(_K, _alpha, _beta, _mu, _nu2, _b, n_iter, seed=42)
%%time
blslda1.fit(doc_term_matrix, y)
plot_images(plt, blslda1.phi, (5, 5), (1, 5), figsize=(10, 5))
imshow(blslda1.theta)
plt.plot(blslda1.loglikelihoods)
burn_in1 = 600
blslda1.loglikelihoods[burn_in1:].mean()
eta_pred1 = blslda1.eta[burn_in1:].mean(axis=0)
eta_pred1
thetas_test_blslda1 = blslda1.transform(doc_term_matrix_test)
y_blslda1 = [bern_param(eta_pred1, thetas_test_blslda1[i]) for i in range(D)]
fpr, tpr, _ = roc_curve(y_test, y_blslda1)
plt.plot(fpr, tpr, label=('AUC = {:.3f}'.format(roc_auc_score(y_test, y_blslda1))))
_ = plt.legend(loc='best')
lda1 = LDA(_K, _alpha, _beta, n_iter, seed=42)
%%time
lda1.fit(doc_term_matrix)
plot_images(plt, lda1.phi, (5, 5), (1, 5), figsize=(10, 5))
plot_images(plt, blslda1.phi, (5, 5), (1, 5), figsize=(10, 5))
imshow(lda1.theta)
plt.plot(lda1.loglikelihoods)
thetas_test_lda1 = lda1.transform(doc_term_matrix_test)
_C_grid = np.arange(0.001, 100, 1)
roc_auc_scores = []
for _C in _C_grid:
print('Training Logistic Regression with C = {}'.format(_C))
_lr = LogisticRegression(fit_intercept=False, C=_C)
_lr.fit(lda1.theta, y)
_y_lr = _lr.predict_proba(thetas_test_lda1)[:, 1]
roc_auc_scores.append(roc_auc_score(y_test, _y_lr))
print(' roc_auc_score = {}'.format(roc_auc_scores[-1]))
print(_C_grid[np.argmax(roc_auc_scores)], np.max(roc_auc_scores))
plt.plot(_C_grid, roc_auc_scores)
lr1 = LogisticRegression(fit_intercept=False, C=93)
lr1.fit(lda1.theta, y)
y_lr1 = lr1.predict_proba(thetas_test_lda1)[:, 1]
fpr, tpr, _ = roc_curve(y_test, y_lr1)
plt.plot(fpr, tpr, label=('AUC = {:.3f}'.format(roc_auc_score(y_test, y_lr1))))
_ = plt.legend(loc='best')
gbr1 = GradientBoostingClassifier()
gbr1.fit(lda1.theta, y)
y_gbr1 = gbr1.predict_proba(thetas_test_lda1)[:, 1]
fpr, tpr, _ = roc_curve(y_test, y_gbr1)
plt.plot(fpr, tpr, label=('AUC = {:.3f}'.format(roc_auc_score(y_test, y_gbr1))))
_ = plt.legend(loc='best')
_C_grid = np.arange(0.001, 100, 1)
roc_auc_scores = []
for _C in _C_grid:
print('Training Logistic Regression with C = {}'.format(_C))
_lr = LogisticRegression(fit_intercept=False, C=_C)
_lr.fit(blslda1.theta, y)
_y_lr = _lr.predict_proba(thetas_test_blslda1)[:, 1]
roc_auc_scores.append(roc_auc_score(y_test, _y_lr))
print(' roc_auc_score = {}'.format(roc_auc_scores[-1]))
print(_C_grid[np.argmax(roc_auc_scores)], np.max(roc_auc_scores))
plt.plot(_C_grid, roc_auc_scores)
lr1_0 = LogisticRegression(fit_intercept=False, C=34)
lr1_0.fit(blslda1.theta, y)
y_lr1_0 = lr1_0.predict_proba(thetas_test_blslda1)[:, 1]
fpr, tpr, _ = roc_curve(y_test, y_lr1_0)
plt.plot(fpr, tpr, label=('AUC = {:.3f}'.format(roc_auc_score(y_test, y_lr1_0))))
_ = plt.legend(loc='best')
gbr1_0 = GradientBoostingClassifier()
gbr1_0.fit(blslda1.theta, y)
y_gbr1_0 = gbr1_0.predict_proba(thetas_test_blslda1)[:, 1]
fpr, tpr, _ = roc_curve(y_test, y_gbr1_0)
plt.plot(fpr, tpr, label=('AUC = {:.3f}'.format(roc_auc_score(y_test, y_gbr1_0))))
_ = plt.legend(loc='best')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate topics
Step2: Generate documents from topics
Step3: Generate responses
Step4: Estimate parameters
Step5: Predict response of test documents
Step6: Estimate their topic distributions using the trained model, then calculate the predicted responses using the mean of our samples of $\eta$ after burn-in as an estimate for $\eta$.
Step7: Measure the goodness of our prediction using area under the ROC curve.
Step8: Find best b
Step9: Two-step learning
Step10: L2-regularized logistic regression
Step11: Gradient boosted trees
Step12: Conclusion
Step13: We plot the SLDA topics again and note that they are indeed different!
Step14: L2-regularized logistic regression
Step15: Gradient boosted trees
Step16: L2-regularized logistic regression with SLDA topics
Step17: Gradient boosted trees with SLDA topics
|
12,897
|
<ASSISTANT_TASK:>
Python Code:
import json
import copy
from functools import reduce
import numpy as np # contains helpful math functions like numpy.exp()
import numpy.random as random # see numpy.random module
# import random # alternative to numpy.random module
from typing import Tuple, List, Any
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
Read input data and define helper functions for visualization.
# Map services and data available from U.S. Geological Survey, National Geospatial Program.
# Please go to http://www.usgs.gov/visual-id/credit_usgs.html for further information
map = mpimg.imread('img/map.png') # US States & Capitals map
# List of 30 US state capitals and corresponding coordinates on the map
with open('data/capitals.json', 'r') as capitals_file:
capitals = json.load(capitals_file)
capitals_list = list(capitals.items())
def show_path(path, starting_city, w=12, h=8):
Plot a TSP path overlaid on a map of the US States & their capitals.
x, y = list(zip(*path))
_, (x0, y0) = starting_city
plt.imshow(map)
plt.plot(x0, y0, 'y*', markersize=15) # y* = yellow star for starting point
plt.plot(x + x[:1], y + y[:1]) # include the starting point at the end of path
plt.axis("off")
fig = plt.gcf()
fig.set_size_inches([w, h])
def simulated_annealing(problem, schedule, rand=False):
The simulated annealing algorithm, a version of stochastic hill climbing
where some downhill moves are allowed. Downhill moves are accepted readily
early in the annealing schedule and then less often as time goes on. The
schedule input determines the value of the temperature T as a function of
time. [Norvig, AIMA Chapter 3]
Parameters
----------
problem : Problem
An optimization problem, already initialized to a random starting state.
The Problem class interface must implement a callable method
"successors()" which returns states in the neighborhood of the current
state, and a callable function "get_value()" which returns a fitness
score for the state. (See the `TravelingSalesmanProblem` class below
for details.)
schedule : callable
A function mapping time to "temperature". "Time" is equivalent in this
case to the number of loop iterations.
Returns
-------
Problem
An approximate solution state of the optimization problem
Notes
-----
(1) DO NOT include the MAKE-NODE line from the AIMA pseudocode
(2) Modify the termination condition to return when the temperature
falls below some reasonable minimum value (e.g., 1e-10) rather than
testing for exact equality to zero
See Also
--------
AIMA simulated_annealing() pseudocode
https://github.com/aimacode/aima-pseudocode/blob/master/md/Simulated-Annealing.md
function SIMULATED-ANNEALING(problem,schedule) returns a solution state
inputs: problem, a problem
schedule, a mapping from time to "temperature"
current ← MAKE-NODE(problem.INITIAL-STATE)
for t = 1 to ∞ do
T ← schedule(t)
if T = 0 then return current
next ← a randomly selected successor of current
ΔE ← next.VALUE - current.VALUE
if ΔE > 0 then current ← next
else current ← next only with probability e^(ΔE/T)
(modified per note (2) above: the loop instead returns current once T falls below ~1e-10)
time = 0
current = problem
while True:
temp = schedule(time)
time = time + 1
if temp < 1e-10:
return current
if rand:
# random switch successor is better
successor = current.rand_successor()
else:
successor = random.choice(current.successors())
delta = successor.get_value() - current.get_value()
if delta > 0 or np.exp(delta / temp) > random.uniform(0.0, 1.0):
current = successor
Node = Tuple[str, Tuple[int, int]]
class TravelingSalesmanProblem:
Representation of a traveling salesman optimization problem. The goal
is to find the shortest path that visits every city in a closed loop path.
Students should only need to implement or modify the successors() and
get_values() methods.
Parameters
----------
cities : list
A list of cities specified by a tuple containing the name and the x, y
location of the city on a grid. e.g., ("Atlanta", (585.6, 376.8))
Attributes
----------
names
coords
path : list
The current path between cities as specified by the order of the city
tuples in the list.
def __init__(self, cities) -> None:
self.path = copy.deepcopy(cities)
def copy(self) -> Any:
Return a copy of the current board state.
new_tsp = TravelingSalesmanProblem(self.path)
return new_tsp
@property
def names(self) -> Tuple:
Strip and return only the city name from each element of the
path list. For example,
[("Atlanta", (585.6, 376.8)), ...] -> ["Atlanta", ...]
names, _ = zip(*self.path)
return names
@property
def coords(self) -> Tuple:
Strip the city name from each element of the path list and return
a list of tuples containing only pairs of xy coordinates for the
cities. For example,
[("Atlanta", (585.6, 376.8)), ...] -> [(585.6, 376.8), ...]
_, coords = zip(*self.path)
return coords
def successors(self) -> List[Any]:
Return a list of states in the neighborhood of the current state by
switching the order in which any adjacent pair of cities is visited.
For example, if the current list of cities (i.e., the path) is [A, B, C, D]
then the neighbors will include [A, B, D, C], [A, C, B, D], [B, A, C, D],
and [D, B, C, A]. (The order of successors does not matter.)
In general, a path of N cities will have N neighbors (note that path wraps
around the end of the list between the first and last cities).
Returns
-------
list<Problem>
A list of TravelingSalesmanProblem instances initialized with their list
of cities set to one of the neighboring permutations of cities in the
present state
problems = []
for i in range(len(self.path)):
new_path = copy.deepcopy(self.path)
j = i - 1
if i == 0:
j = len(new_path) - 1
new_path[i], new_path[j] = new_path[j], new_path[i]
problems.append(TravelingSalesmanProblem(new_path))
return problems
def rand_successor(self) -> List[Any]:
new_path = copy.deepcopy(self.path)
i = random.randint(1, len(new_path) - 1)
j = i
while j == i:
j = random.randint(1, len(new_path) - 1)
new_path[i], new_path[j] = new_path[j], new_path[i]
return TravelingSalesmanProblem(new_path)
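# Note: unlike successors(), which swaps adjacent pairs of cities,
# rand_successor() swaps two randomly chosen cities. Because np.random.randint
# excludes its upper bound, index 0 and the last index are never selected, so
# the first and last cities in the list stay fixed under this move.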
def get_value(self) -> float:
Calculate the total length of the closed-circuit path of the current
state by summing the distance between every pair of adjacent cities. Since
the default simulated annealing algorithm seeks to maximize the objective
function, return -1x the path length. (Multiplying by -1 makes the smallest
path the smallest negative number, which is the maximum value.)
Returns
-------
float
A floating point value with the total cost of the path given by visiting
the cities in the order according to the self.cities list
Notes
-----
(1) Remember to include the edge from the last city back to the
first city
(2) Remember to multiply the path length by -1 so that simulated
annealing finds the shortest path
distances = []
coords = self.coords
for i in range(len(coords)):
j = i - 1
if i == 0:
j = len(coords) - 1
# pythagoras theorem Euclidean distance
a2 = (coords[i][0] - coords[j][0]) ** 2
b2 = (coords[i][1] - coords[j][1]) ** 2
c = np.sqrt(a2 + b2)
distances.append(c)
return sum(distances) * -1.
# Construct an instance of the TravelingSalesmanProblem
test_cities = [('DC', (11, 1)), ('SF', (0, 0)), ('PHX', (2, -3)), ('LA', (0, -4))]
test_names = ('DC', 'SF', 'PHX', 'LA')
test_coords = ((11, 1), (0, 0), (2, -3), (0, -4))
tsp = TravelingSalesmanProblem(test_cities)
assert(tsp.path == test_cities)
assert(tsp.names == test_names)
assert(tsp.coords == test_coords)
# Test the successors() method -- no output means the test passed
successor_paths = [x.path for x in tsp.successors()]
assert(all(x in [[('LA', (0, -4)), ('SF', (0, 0)), ('PHX', (2, -3)), ('DC', (11, 1))],
[('SF', (0, 0)), ('DC', (11, 1)), ('PHX', (2, -3)), ('LA', (0, -4))],
[('DC', (11, 1)), ('PHX', (2, -3)), ('SF', (0, 0)), ('LA', (0, -4))],
[('DC', (11, 1)), ('SF', (0, 0)), ('LA', (0, -4)), ('PHX', (2, -3))]]
for x in successor_paths))
# Test the get_value() method -- no output means the test passed
assert(np.allclose(tsp.get_value(), -28.97, atol=1e-3))
# These are presented as globals so that the signature of schedule()
# matches what is shown in the AIMA textbook; you could alternatively
# define them within the schedule function, use a closure to limit
# their scope, or define an object if you would prefer not to use
# global variables
alpha = 0.95
temperature=1e4
def schedule(time):
# T(t) =α^t x T0
return np.power(alpha, time) * temperature
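# Illustration: with T(t) = alpha**t * T0 the temperature decays geometrically,
# so the annealing loop terminates after roughly log(T_min / T0) / log(alpha)
# iterations once T drops below the 1e-10 cutoff used above.
approx_iterations = int(np.ceil(np.log(1e-10 / temperature) / np.log(alpha)))
print("Approximate iterations until T < 1e-10: {}".format(approx_iterations))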
# test the schedule() function -- no output means that the tests passed
assert(np.allclose(alpha, 0.95, atol=1e-3))
assert(np.allclose(schedule(0), temperature, atol=1e-3))
assert(np.allclose(schedule(10), 5987.3694, atol=1e-3))
# Failure implies that the initial path of the test case has been changed
assert(tsp.path == [('DC', (11, 1)), ('SF', (0, 0)), ('PHX', (2, -3)), ('LA', (0, -4))])
result = simulated_annealing(tsp, schedule)
print("Initial score: {}\nStarting Path: {!s}".format(tsp.get_value(), tsp.path))
print("Final score: {}\nFinal Path: {!s}".format(result.get_value(), result.path))
assert(tsp.path != result.path)
assert(result.get_value() > tsp.get_value())
# Create the problem instance and plot the initial state
num_cities = 30
capitals_tsp = TravelingSalesmanProblem(capitals_list[:num_cities])
starting_city = capitals_list[0]
print("Initial path value: {:.2f}".format(-capitals_tsp.get_value()))
print(capitals_list[:num_cities]) # The start/end point is indicated with a yellow star
show_path(capitals_tsp.coords, starting_city)
# set the decay rate and initial temperature parameters, then run simulated annealing to solve the TSP
alpha = 0.99
temperature=1e20
result = simulated_annealing(capitals_tsp, schedule, True)
print("Final path length: {:.2f}".format(-result.get_value()))
print(result.path)
show_path(result.coords, starting_city)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: I. Introduction
Step5: II. Simulated Annealing -- Main Loop
Step12: III. Representing the Problem
Step13: Testing TravelingSalesmanProblem
Step14: IV. Define the Temperature Schedule
Step15: Testing the Temperature Schedule
Step16: V. Run Simulated Annealing on a Larger TSP
|
12,898
|
<ASSISTANT_TASK:>
Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
import numpy as np
import pandas as pd
class BayesTable(pd.DataFrame):
def __init__(self, hypo, prior=1):
columns = ['hypo', 'prior', 'likelihood', 'unnorm', 'posterior']
super().__init__(columns=columns)
self.hypo = hypo
self.prior = prior
def mult(self):
self.unnorm = self.prior * self.likelihood
def norm(self):
nc = np.sum(self.unnorm)
self.posterior = self.unnorm / nc
return nc
def update(self):
self.mult()
return self.norm()
def reset(self):
return BayesTable(self.hypo, self.posterior)
table = BayesTable(['Bowl 1', 'Bowl 2'])
table.likelihood = [3/4, 1/2]
table
table.mult()
table
table.norm()
table
table2 = table.reset()
table2.likelihood = [1/4, 1/2]
table2.update()
table2
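# Cross-check: the same posterior can be computed analytically. After the two
# updates above (likelihoods 3/4, 1/2 and then 1/4, 1/2 under a uniform prior),
# P(Bowl 1 | data) = (1/2 * 3/4 * 1/4) / (1/2 * 3/4 * 1/4 + 1/2 * 1/2 * 1/2) = 3/7.
priors = np.array([1/2, 1/2])
likelihoods = np.array([3/4, 1/2]) * np.array([1/4, 1/2])
posterior_check = priors * likelihoods / np.sum(priors * likelihoods)
posterior_check  # should match table2.posterior: [3/7, 4/7]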
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As an example, I'll use the "cookie problem", which is a version of a classic probability "urn problem".
Step2: Here's an instance that represents the two hypotheses
Step3: Since we didn't specify prior probabilities, the default value is equal priors for all hypotheses.
Step4: The next step is to multiply the priors by the likelihoods, which yields the unnormalized posteriors.
Step5: Now we can compute the normalized posteriors; norm returns the normalization constant.
Step6: We can read the posterior probabilities from the last column
Step7: Here are the likelihoods for the second update.
Step8: We could run mult and norm again, or run update, which does both steps.
Step9: Here are the results.
|
12,899
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
import numpy as np
import IPython.display as display
# The following functions can be used to convert a value to a type compatible
# with tf.Example.
def _bytes_feature(value):
Returns a bytes_list from a string / byte.
if isinstance(value, type(tf.constant(0))):
value = value.numpy() # BytesList won't unpack a string from an EagerTensor.
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _float_feature(value):
Returns a float_list from a float / double.
return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))
def _int64_feature(value):
Returns an int64_list from a bool / enum / int / uint.
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
print(_bytes_feature(b'test_string'))
print(_bytes_feature(u'test_bytes'.encode('utf-8')))
print(_float_feature(np.exp(1)))
print(_int64_feature(True))
print(_int64_feature(1))
feature = _float_feature(np.exp(1))
feature.SerializeToString()
# The number of observations in the dataset.
n_observations = int(1e4)
# Boolean feature, encoded as False or True.
feature0 = np.random.choice([False, True], n_observations)
# Integer feature, random from 0 to 4.
feature1 = np.random.randint(0, 5, n_observations)
# String feature
strings = np.array([b'cat', b'dog', b'chicken', b'horse', b'goat'])
feature2 = strings[feature1]
# Float feature, from a standard normal distribution
feature3 = np.random.randn(n_observations)
def serialize_example(feature0, feature1, feature2, feature3):
Creates a tf.Example message ready to be written to a file.
# Create a dictionary mapping the feature name to the tf.Example-compatible
# data type.
feature = {
'feature0': _int64_feature(feature0),
'feature1': _int64_feature(feature1),
'feature2': _bytes_feature(feature2),
'feature3': _float_feature(feature3),
}
# Create a Features message using tf.train.Example.
example_proto = tf.train.Example(features=tf.train.Features(feature=feature))
return example_proto.SerializeToString()
# This is an example observation from the dataset.
example_observation = []
serialized_example = serialize_example(False, 4, b'goat', 0.9876)
serialized_example
example_proto = tf.train.Example.FromString(serialized_example)
example_proto
tf.data.Dataset.from_tensor_slices(feature1)
features_dataset = tf.data.Dataset.from_tensor_slices((feature0, feature1, feature2, feature3))
features_dataset
# Use `take(1)` to only pull one example from the dataset.
for f0,f1,f2,f3 in features_dataset.take(1):
print(f0)
print(f1)
print(f2)
print(f3)
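# Note: serialize_example above is a plain Python function, so it cannot be
# traced directly inside a tf.data pipeline. The wrapper below uses
# tf.py_function to call it eagerly and then reshapes the result to a scalar
# string tensor so the dataset can be written out with TFRecordWriter.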
def tf_serialize_example(f0,f1,f2,f3):
tf_string = tf.py_function(
serialize_example,
(f0,f1,f2,f3), # pass these args to the above function.
tf.string) # the return type is `tf.string`.
return tf.reshape(tf_string, ()) # The result is a scalar
tf_serialize_example(f0,f1,f2,f3)
serialized_features_dataset = features_dataset.map(tf_serialize_example)
serialized_features_dataset
def generator():
for features in features_dataset:
yield serialize_example(*features)
serialized_features_dataset = tf.data.Dataset.from_generator(
generator, output_types=tf.string, output_shapes=())
serialized_features_dataset
filename = 'test.tfrecord'
writer = tf.data.experimental.TFRecordWriter(filename)
writer.write(serialized_features_dataset)
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
for raw_record in raw_dataset.take(10):
print(repr(raw_record))
# Create a description of the features.
feature_description = {
'feature0': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature1': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature2': tf.io.FixedLenFeature([], tf.string, default_value=''),
'feature3': tf.io.FixedLenFeature([], tf.float32, default_value=0.0),
}
def _parse_function(example_proto):
# Parse the input `tf.Example` proto using the dictionary above.
return tf.io.parse_single_example(example_proto, feature_description)
parsed_dataset = raw_dataset.map(_parse_function)
parsed_dataset
for parsed_record in parsed_dataset.take(10):
print(repr(parsed_record))
# Write the `tf.Example` observations to the file.
with tf.io.TFRecordWriter(filename) as writer:
for i in range(n_observations):
example = serialize_example(feature0[i], feature1[i], feature2[i], feature3[i])
writer.write(example)
!du -sh {filename}
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
for raw_record in raw_dataset.take(1):
example = tf.train.Example()
example.ParseFromString(raw_record.numpy())
print(example)
cat_in_snow = tf.keras.utils.get_file('320px-Felis_catus-cat_on_snow.jpg', 'https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg')
williamsburg_bridge = tf.keras.utils.get_file('194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg')
display.display(display.Image(filename=cat_in_snow))
display.display(display.HTML('Image cc-by: <a href="https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg">Von.grzanka</a>'))
display.display(display.Image(filename=williamsburg_bridge))
display.display(display.HTML('<a href="https://commons.wikimedia.org/wiki/File:New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg">From Wikimedia</a>'))
image_labels = {
cat_in_snow : 0,
williamsburg_bridge : 1,
}
# This is an example, just using the cat image.
image_string = open(cat_in_snow, 'rb').read()
label = image_labels[cat_in_snow]
# Create a dictionary with features that may be relevant.
def image_example(image_string, label):
image_shape = tf.image.decode_jpeg(image_string).shape
feature = {
'height': _int64_feature(image_shape[0]),
'width': _int64_feature(image_shape[1]),
'depth': _int64_feature(image_shape[2]),
'label': _int64_feature(label),
'image_raw': _bytes_feature(image_string),
}
return tf.train.Example(features=tf.train.Features(feature=feature))
for line in str(image_example(image_string, label)).split('\n')[:15]:
print(line)
print('...')
# Write the raw image files to `images.tfrecords`.
# First, process the two images into `tf.Example` messages.
# Then, write to a `.tfrecords` file.
record_file = 'images.tfrecords'
with tf.io.TFRecordWriter(record_file) as writer:
for filename, label in image_labels.items():
image_string = open(filename, 'rb').read()
tf_example = image_example(image_string, label)
writer.write(tf_example.SerializeToString())
!du -sh {record_file}
raw_image_dataset = tf.data.TFRecordDataset('images.tfrecords')
# Create a dictionary describing the features.
image_feature_description = {
'height': tf.io.FixedLenFeature([], tf.int64),
'width': tf.io.FixedLenFeature([], tf.int64),
'depth': tf.io.FixedLenFeature([], tf.int64),
'label': tf.io.FixedLenFeature([], tf.int64),
'image_raw': tf.io.FixedLenFeature([], tf.string),
}
def _parse_image_function(example_proto):
# Parse the input tf.Example proto using the dictionary above.
return tf.io.parse_single_example(example_proto, image_feature_description)
parsed_image_dataset = raw_image_dataset.map(_parse_image_function)
parsed_image_dataset
for image_features in parsed_image_dataset:
image_raw = image_features['image_raw'].numpy()
display.display(display.Image(data=image_raw))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TFRecord and tf.Example
Step5: tf.Example
Step6: Note: for simplicity, this example only uses scalar inputs. The simplest way to handle non-scalar features is to use tf.io.serialize_tensor to convert the tensor to a binary string (strings are scalars in TensorFlow), and tf.io.parse_tensor to convert the binary string back to a tensor.
Step7: All of these proto messages can be serialized to a binary string using the .SerializeToString method:
Step8: Creating a tf.Example message
Step10: Each of the features below can be coerced into a tf.Example-compatible type using _bytes_feature, _float_feature, or _int64_feature. A tf.Example message can then be created from these encoded features:
Step11: For example, suppose you have a single observation from the dataset, [False, 4, bytes('goat'), 0.9876]. You can create and print the tf.Example message for this observation using create_message(). As described above, each observation is written as a Features message. Note that the tf.Example message is just a wrapper around the Features message:
Step12: To decode the message, use the tf.train.Example.FromString method.
Step13: TFRecords format details
Step14: Applied to a tuple of arrays, it returns a dataset of tuples:
Step15: Use the tf.data.Dataset.map method to apply a function to each element of the Dataset.
Step16: Apply this function to each element in the dataset:
Step17: And write them to a TFRecord file:
Step18: Reading a TFRecord file
Step19: At this point the dataset contains serialized tf.train.Example messages. When iterated over, it returns these as scalar string tensors.
Step20: These tensors can be parsed using the function below. Note that the feature_description is necessary here because datasets use graph execution and need this description to build their shape and type signature:
Step21: Alternatively, use tf.parse_example to parse the whole batch at once. Apply this function to each item in the dataset using the tf.data.Dataset.map method:
Step22: Use eager execution to display the observations in the dataset. There are 10,000 observations in this dataset, but only the first 10 are displayed. The data is shown as a dictionary of features. Each item is a tf.Tensor, and the numpy element of this tensor displays the value of the feature:
Step23: Here, the tf.parse_example function unpacks the tf.Example fields into standard tensors.
Step24: Reading a TFRecord file
Step25: Walkthrough: reading and writing image data
Step26: Writing the TFRecord file
Step27: Notice that all of the features are now stored in the tf.Example message. Next, functionalize the code above and write the example messages to a file named images.tfrecords:
Step28: Reading the TFRecord file
Step29: Recover the images from the TFRecord file: