| text_prompt (string, lengths 168 to 30.3k) | code_prompt (string, lengths 67 to 124k) |
|---|---|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Random Data Generation
Step2: Basic Line Chart
Step3: The x attribute refers to the data represented horizontally, while the y attribute refers to the data represented vertically.
Step4: In a similar way, we can also change any attribute after the plot has been displayed, and the plot will update. Run each of the cells below, and try changing the attributes to explore the different features and how they affect the plot.
Step5: To switch to an area chart, set the fill attribute, and control the look with fill_opacities and fill_colors.
Step6: While a Lines plot allows the user to extract the general shape of the data being plotted, there may be a need to visualize discrete data points along with this shape. This is where the marker attribute comes in.
Step7: The marker attribute accepts the values circle, cross, diamond, square, triangle-down, triangle-up, arrow, rectangle, ellipse. Try changing the string above and re-running the cell to see how each marker type looks.
Step8: Plotting Multiple Sets of Data with Lines
Step9: We pass each data set as an element of a list. The colors attribute allows us to pass a specific color for each line.
Step10: Similarly, we can also pass multiple x-values for multiple sets of y-values
Step11: Coloring Lines according to data
Step12: We can also reset the colors of the Line to their defaults by setting the color attribute to None.
Step13: Patches
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np # For numerical programming and multi-dimensional arrays
from pandas import date_range # For date-range generation
from bqplot import LinearScale, Lines, Axis, Figure, DateScale, ColorScale
security_1 = np.cumsum(np.random.randn(150)) + 100.0
security_2 = np.cumsum(np.random.randn(150)) + 100.0
sc_x = LinearScale()
sc_y = LinearScale()
line = Lines(x=np.arange(len(security_1)), y=security_1, scales={"x": sc_x, "y": sc_y})
ax_x = Axis(scale=sc_x, label="Index")
ax_y = Axis(scale=sc_y, orientation="vertical", label="y-values of Security 1")
Figure(marks=[line], axes=[ax_x, ax_y], title="Security 1")
line.colors = ["DarkOrange"]
# The opacity allows us to display the Line while featuring other Marks that may be on the Figure
line.opacities = [0.5]
line.stroke_width = 2.5
line.fill = "bottom"
line.fill_opacities = [0.2]
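# (Illustrative addition, not in the original notebook) the description also mentions
# fill_colors, which controls the color of the filled area:
line.fill_colors = ["LightGreen"]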
line.line_style = "dashed"
line.interpolation = "basis"
line.marker = "triangle-down"
# Here we define the dates we would like to use
dates = date_range(start="01-01-2007", periods=150)
dt_x = DateScale()
sc_y = LinearScale()
time_series = Lines(x=dates, y=security_1, scales={"x": dt_x, "y": sc_y})
ax_x = Axis(scale=dt_x, label="Date")
ax_y = Axis(scale=sc_y, orientation="vertical", label="Security 1")
Figure(marks=[time_series], axes=[ax_x, ax_y], title="A Time Series Plot")
x_dt = DateScale()
y_sc = LinearScale()
dates_new = date_range(start="06-01-2007", periods=150)
securities = np.cumsum(np.random.randn(150, 10), axis=0)
positions = np.random.randint(0, 2, size=10)
# We plot multiple y-series on one mark by passing them as a list
line = Lines(
x=dates,
y=[security_1, security_2],
scales={"x": x_dt, "y": y_sc},
labels=["Security 1", "Security 2"],
)
ax_x = Axis(scale=x_dt, label="Date")
ax_y = Axis(scale=y_sc, orientation="vertical", label="Security 1")
Figure(marks=[line], axes=[ax_x, ax_y], legend_location="top-left")
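# (Illustrative addition, not in the original notebook) as the description notes, the colors
# attribute assigns a specific color to each line in the list:
line.colors = ["DodgerBlue", "DarkOrange"]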
line.x, line.y = [dates, dates_new], [security_1, security_2]
x_dt = DateScale()
y_sc = LinearScale()
col_sc = ColorScale(colors=["Red", "Green"])
dates_color = date_range(start="06-01-2007", periods=150)
securities = 100.0 + np.cumsum(np.random.randn(150, 10), axis=0)
positions = np.random.randint(0, 2, size=10)
# Here we generate 10 random price series and 10 random positions
# We pass the color scale and the color data to the lines
line = Lines(
x=dates_color,
y=securities.T,
scales={"x": x_dt, "y": y_sc, "color": col_sc},
color=positions,
labels=["Security 1", "Security 2"],
)
ax_x = Axis(scale=x_dt, label="Date")
ax_y = Axis(scale=y_sc, orientation="vertical", label="Security 1")
Figure(marks=[line], axes=[ax_x, ax_y], legend_location="top-left")
line.color = None
sc_x = LinearScale()
sc_y = LinearScale()
patch = Lines(
x=[
[0, 2, 1.2, np.nan, np.nan, np.nan, np.nan],
[0.5, 2.5, 1.7, np.nan, np.nan, np.nan, np.nan],
[4, 5, 6, 6, 5, 4, 3],
],
y=[
[0, 0, 1, np.nan, np.nan, np.nan, np.nan],
[0.5, 0.5, -0.5, np.nan, np.nan, np.nan, np.nan],
[1, 1.1, 1.2, 2.3, 2.2, 2.7, 1.0],
],
fill_colors=["orange", "blue", "red"],
fill="inside",
stroke_width=10,
close_path=True,
scales={"x": sc_x, "y": sc_y},
display_legend=True,
)
Figure(marks=[patch], animation_duration=1000)
patch.opacities = [0.1, 0.2]
patch.x = [
[2, 3, 3.2, np.nan, np.nan, np.nan, np.nan],
[0.5, 2.5, 1.7, np.nan, np.nan, np.nan, np.nan],
[4, 5, 6, 6, 5, 4, 3],
]
patch.close_path = False
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What about the "data" content of the products?
Step2: Calculations
Step3: What does the LCL look like for that?
Step4: Given those conditions, what does the profile of a parcel look like?
Step5: Plots
Step6: Siphon
Step7: Ok, let's get data using NCSS
Step8: "With your powers combined..."
Step9: Ideas for the future
|
<ASSISTANT_TASK:>
Python Code:
# Level 3 example with multiple products
import numpy as np
import matplotlib.pyplot as plt
from numpy import ma
from metpy.cbook import get_test_data
from metpy.io.nexrad import Level3File
from metpy.plots import ctables
# Helper code for making sense of these products. This is hidden from the slideshow
# and eventually, in some form, will make its way into MetPy proper.
def print_tab_pages(prod):
print(('\n' + '-'*80 + '\n').join(prod.tab_pages))
def print_graph_pages(prod):
colors = {0:'white', 3:'red', 4:'cyan'}
for page in prod.graph_pages:
fig, ax = plt.subplots(1, 1, figsize=(10,10))
ax.axesPatch.set_facecolor('black')
for line in page:
if 'color' in line:
c = colors[line['color']]
if 'text' in line:
ax.text(line['x'], line['y'], line['text'], color=c,
transform=ax.transData, verticalalignment='top',
horizontalalignment='left', fontdict={'family':'monospace'},
fontsize=8)
else:
vecs = np.array(line['vectors'])
ax.plot(vecs[:, ::2], vecs[:, 1::2], color=c)
ax.set_xlim(0, 639)
ax.set_ylim(511, 0)
ax.set_aspect('equal', 'box')
ax.xaxis.set_major_formatter(plt.NullFormatter())
ax.xaxis.set_major_locator(plt.NullLocator())
ax.yaxis.set_major_formatter(plt.NullFormatter())
ax.yaxis.set_major_locator(plt.NullLocator())
for s in ax.spines: ax.spines[s].set_color('none')
def plot_prod(prod, cmap, norm, ax=None):
if ax is None:
ax = plt.gca()
data_block = prod.sym_block[0][0]
data = np.array(data_block['data'])
data = prod.map_data(data)
data = np.ma.array(data, mask=np.isnan(data))
if 'start_az' in data_block:
az = np.array(data_block['start_az'] + [data_block['end_az'][-1]])
rng = np.linspace(0, prod.max_range, data.shape[-1] + 1)
x = rng * np.sin(np.deg2rad(az[:, None]))
y = rng * np.cos(np.deg2rad(az[:, None]))
else:
x = np.linspace(-prod.max_range, prod.max_range, data.shape[1] + 1)
y = np.linspace(-prod.max_range, prod.max_range, data.shape[0] + 1)
data = data[::-1]
pc = ax.pcolormesh(x, y, data, cmap=cmap, norm=norm)
plt.colorbar(pc, extend='both')
ax.set_aspect('equal', 'datalim')
ax.set_xlim(-100, 100)
ax.set_ylim(-100, 100)
return pc, data
def plot_points(prod, ax=None):
if ax is None:
ax = plt.gca()
data_block = prod.sym_block[0]
styles = {'MDA': dict(marker='o', markerfacecolor='None', markeredgewidth=2, size='radius'),
'MDA (Elev.)': dict(marker='s', markerfacecolor='None', markeredgewidth=2, size='radius'),
'TVS': dict(marker='v', markerfacecolor='red', markersize=10),
'Storm ID': dict(text='id'),
'HDA': dict(marker='o', markersize=10, markerfacecolor='blue', alpha=0.5)}
artists = []
for point in data_block:
if 'type' in point:
info = styles.get(point['type'], {}).copy()
x,y = point['x'], point['y']
text_key = info.pop('text', None)
if text_key:
artists.append(ax.text(x, y, point[text_key], transform=ax.transData, clip_box=ax.bbox, **info))
artists[-1].set_clip_on(True)
else:
size_key = info.pop('size', None)
if size_key:
info['markersize'] = np.pi * point[size_key]**2
artists.append(ax.plot(x, y, **info))
def plot_tracks(prod, ax=None):
if ax is None:
ax = plt.gca()
data_block = prod.sym_block[0]
for track in data_block:
if 'marker' in track:
pass
if 'track' in track:
x,y = np.array(track['track']).T
ax.plot(x, y, color='k')
# Read in a bunch of NIDS products
tvs = Level3File(get_test_data('nids/KOUN_SDUS64_NTVTLX_201305202016'))
nmd = Level3File(get_test_data('nids/KOUN_SDUS34_NMDTLX_201305202016'))
nhi = Level3File(get_test_data('nids/KOUN_SDUS64_NHITLX_201305202016'))
n0q = Level3File(get_test_data('nids/KOUN_SDUS54_N0QTLX_201305202016'))
nst = Level3File(get_test_data('nids/KOUN_SDUS34_NSTTLX_201305202016'))
# What happens when we print one out
tvs
# Can print tabular (ASCII) information in the product
print_tab_pages(tvs)
fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(1, 1, 1)
norm, cmap = ctables.registry.get_with_boundaries('NWSReflectivity', np.arange(0, 85, 5))
pc, data = plot_prod(n0q, cmap, norm, ax)
plot_points(tvs)
plot_points(nmd)
plot_points(nhi)
plot_tracks(nst)
ax.set_ylim(-20, 40)
ax.set_xlim(-50, 20)
import metpy.calc as mpcalc
from metpy.units import units
temp = 86 * units.degF
press = 860. * units.mbar
humidity = 40 / 100.
dewpt = mpcalc.dewpoint_rh(temp, humidity).to('degF')
dewpt
mpcalc.lcl(press, temp, dewpt)
import numpy as np
pressure_levels = np.array([860., 850., 700., 500., 300.]) * units.mbar
mpcalc.parcel_profile(pressure_levels, temp, dewpt)
import matplotlib.pyplot as plt
import numpy as np
from metpy.cbook import get_test_data
from metpy.calc import get_wind_components
from metpy.plots import SkewT
# Parse the data
p, T, Td, direc, spd = np.loadtxt(get_test_data('sounding_data.txt'),
usecols=(0, 2, 3, 6, 7), skiprows=4, unpack=True)
u, v = get_wind_components(spd, np.deg2rad(direc))
# Create a skewT using matplotlib's default figure size
fig = plt.figure(figsize=(8, 8))
skew = SkewT(fig)
# Plot the data using normal plotting functions, in this case using
# log scaling in Y, as dictated by the typical meteorological plot
skew.plot(p, T, 'r')
skew.plot(p, Td, 'g')
skew.plot_barbs(p, u, v)
# Add the relevant special lines
skew.plot_dry_adiabats()
skew.plot_moist_adiabats()
skew.plot_mixing_lines()
skew.ax.set_ylim(1000, 100)
fig
from siphon.catalog import TDSCatalog
best_gfs = TDSCatalog('http://thredds.ucar.edu/thredds/catalog/grib/NCEP/GFS/Global_0p5deg/catalog.xml?dataset=grib/NCEP/GFS/Global_0p5deg/Best')
best_ds = list(best_gfs.datasets.values())[0]
best_ds.access_urls
# Set up a class to access
from siphon.ncss import NCSS
ncss = NCSS(best_ds.access_urls['NetcdfSubset'])
# Get today's date
from datetime import datetime, timedelta
now = datetime.utcnow()
# Get a query object and set to get temperature for Boulder for the next 7 days
query = ncss.query()
query.lonlat_point(-105, 40).vertical_level(100000).time_range(now, now + timedelta(days=7))
query.variables('Temperature_isobaric').accept('netcdf4')
# Get the Data
data = ncss.get_data(query)
list(data.variables)
# Pull the variables we want from the NetCDF file
temp = data.variables['Temperature_isobaric']
time = data.variables['time']
# Convert time values from numbers to datetime
from netCDF4 import num2date
time_vals = num2date(time[:].squeeze(), time.units)
# Plot
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1, figsize=(9, 8))
ax.plot(time_vals, temp[:].squeeze(), 'r', linewidth=2)
ax.set_ylabel(temp.standard_name + ' (%s)' % temp.units)
ax.set_xlabel('Forecast Time (UTC)')
ax.grid(True)
# Re-use the NCSS access from earlier, but make a new query
query = ncss.query()
query.lonlat_point(-105, 40).time(now + timedelta(hours=12)).accept('csv')
query.variables('Temperature_isobaric', 'Relative_humidity_isobaric',
'u-component_of_wind_isobaric', 'v-component_of_wind_isobaric')
data = ncss.get_data(query)
# Pull out data with some units
p = (data['vertCoord'] * units('Pa')).to('mbar')
T = data['Temperature_isobaric'] * units('kelvin')
Td = mpcalc.dewpoint_rh(T, data['Relative_humidity_isobaric'] / 100.)
u = data['ucomponent_of_wind_isobaric'] * units('m/s')
v = data['vcomponent_of_wind_isobaric'] * units('m/s')
# Create a skewT using matplotlib's default figure size
fig = plt.figure(figsize=(8, 8))
skew = SkewT(fig)
# Plot the data using normal plotting functions, in this case using
# log scaling in Y, as dictated by the typical meteorological plot
skew.plot(p, T.to('degC'), 'r')
skew.plot(p, Td.to('degC'), 'g')
skew.plot_barbs(p[p>=100 * units.mbar], u.to('knots')[p>=100 * units.mbar],
v.to('knots')[p>=100 * units.mbar])
skew.ax.set_ylim(1000, 100)
skew.ax.set_title(data['date'][0]);
# This code isn't actually in a MetPy release yet.....
import importlib
import metpy.plots.station_plot
importlib.reload(metpy.plots.station_plot)
station_plot = metpy.plots.station_plot.station_plot
# Set up query for point data
ncss = NCSS('http://thredds.ucar.edu/thredds/ncss/nws/metar/ncdecoded/Metar_Station_Data_fc.cdmr/dataset.xml')
query = ncss.query()
query.lonlat_box(44, 37, -98, -108).time(now - timedelta(days=2)).accept('csv')
query.variables('air_temperature', 'dew_point_temperature',
'wind_from_direction', 'wind_speed')
data = ncss.get_data(query)
# Some unit conversions
speed = data['wind_speed'] * units('m/s')
data['u'],data['v'] = mpcalc.get_wind_components(speed.to('knots'),
data['wind_from_direction'] * units('degree'))
# Plot using basemap for now
from mpl_toolkits.basemap import Basemap
fig = plt.figure(figsize=(9, 9))
ax = fig.add_subplot(1, 1, 1)
m = Basemap(lon_0=-105, lat_0=40, lat_ts=40, resolution='i',
projection='stere', urcrnrlat=44, urcrnrlon=-98, llcrnrlat=37,
llcrnrlon=-108, ax=ax)
m.bluemarble()
# Just an early prototype...
station_plot(data, proj=m, ax=ax, layout={'NW': 'air_temperature', 'SW': 'dew_point_temperature'},
styles={'air_temperature': dict(color='r'), 'dew_point_temperature': dict(color='lightgreen')},
zorder=1);
fig
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup
Step2: Sometimes it's useful to have a zero value in the dataset, e.g. for padding
Step3: Map from chars to indices and back again
Step4: idx will be the data we use from now on -- it simply converts all the characters to their index (based on the mapping above)
Step5: 3 Char Model
Step6: Our inputs
Step7: Our output
Step8: The first 4 inputs and ouputs
Step9: Will try to predict 30 from 40, 42, 29, 29 from 30, 25, 27, & etc. That's our data format.
Step10: The number of latent factors to create (ie. the size of our 3 character inputs)
Step11: Create and train model
Step12: This is the 'green arrow' from our diagram - the layer operation from input to hidden.
Step13: Our first hidden activation is simpmly this function applied to the result of the embedding of the first character.
Step14: This is the 'orange arrow' from our diagram - the layer operation from hidden to hidden.
Step15: Our second and third hidden activations sum up the previous hidden state (after applying dense_hidden) to the new input state.
Step16: This is the 'blue arrow' from our diagram - the layer operation from hidden to hidden.
Step17: The third hidden state is the input to our output layer.
Step18: Test model
Step19: Our first RNN
Step20: For each 0 thru 7, create a list of every 8th character with that starting point. These will be the 8 inputs to our model.
Step21: ^ creates an array with 8 elements; each element contains a list of the 0th, 8th, 16th, 24th chars, the 1st, 9th, 17th, 25th chars, etc., just as before - a sequence of inputs where each one is offset by 1 from the previous one.
Step22: So each column below is one series of 8 characters from the text.
Step23: The first column in each row is the 1st 8 characters of our text.
Step24: NOTE
Step25: The first character of each sequence goes through dense_in(), to create our first hidden activations.
Step26: Then for each successive layer we combine the output of dense_in() on the next character with the output of dense_hidden() on the current hidden state, to create the new hidden state.
Step27: Putting the final hidden state through dense_out() gives us our output
Step28: So now we can create our model
Step29: With 8 pieces of context instead of 3, we'd expect it to do better; and we see a loss of ~1.8 instead of ~2.0
Step30: Returning Sequences
Step31: Now our y dataset looks exactly like our x dataset did before, but everything's shifted over by 1 character.
Step32: We're going to pass a vector of all zeros as our starting point - here's our input layers for that
Step33: Now when we fit, we add the array of zeros to the start of our inputs; our outputs are going to be those lists of 8, offset by 1. We get 8 losses instead of 1 because each one of those 8 outputs has its own loss. You'll see the model's ability to predict the 1st character from a bunch of zeros is very limited and flattens out; but predicting the 8th char with the context of 7 is much better and keeps improving.
Step34: This is what a sequence model looks like. We pass in a sequence and after every character, it returns a guess.
Step35: Sequence model with Keras
Step36: To convert our previous Keras model into a sequence model, simply add the return_sequences=True parameter, and TimeDistributed() around our dense layer.
Step37: Note the 8 outputs. TimeDistributed applies the same Dense layer (with shared weights) to each of the 8 timestep outputs.
Step38: Stateful model with Keras
Step39: Since we're using a fixed batch shape, we have to ensure our inputs and outputs are an even multiple of the batch size.
Step40: The LSTM model takes much longer to run than the regular RNN because it isn't in parallel
Step41: One-Hot Sequence Model with Keras
Step42: The 86 is the one-hot dimension: the number of character classes.
Step43: Theano RNN
Step44: Using raw Theano, we have to create our weight matrices and bias vectors ourselves - here are the functions we'll use to do so (using Glorot initialization).
Step45: We return the weights and biases together as a tuple. For the hidden weights, we'll use an identity initialization (as recommended by Hinton).
Step46: Unlike plain Python, Theano requires us to build up a computation graph first. shared(..) basically tells Theano to keep track of something to send to the GPU later. Once you wrap something in shared, it belongs to Theano.
Step47: Now we're ready to create our initial weight matrices.
Step48: We now need to tell Theano what happens each time we take a single step of this RNN.
Step49: Now we can provide everything necessary for the scan operation, so we can set that up - we have to pass in the function to call at each step, the sequence to step through, the initial values of the outputs, and any other arguments to pass to the step function.
Step50: You get this error if you accidentally define step as
Step51: We can now calculate our loss function, and all of our gradients, with just a couple lines of code!
Step52: We even have to show Theano how to do SGD - so we set up this dictionary of updates to complete after every forward pass, which apply the standard SGD update rule to every weight.
Step53: To use it, we simply loop through our input data, calling the function compiled above, and printing our progress from time to time.
|
<ASSISTANT_TASK:>
Python Code:
import theano
%matplotlib inline
import sys, os
sys.path.insert(1, os.path.join('../utils'))
import utils; reload(utils)
from utils import *
from __future__ import division, print_function
path = get_file('nietzsche.txt', origin="https://s3.amazonaws.com/text-datasets/nietzsche.txt")
text = open(path).read()
print('corpus length:', len(text))
chars = sorted(list(set(text)))
vocab_size = len(chars) + 1
print('total chars:', vocab_size)
chars.insert(0, "\0")
''.join(chars[1:-6])
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
idx = [char_indices[c] for c in text]
# the 1st 10 characters:
idx[:10]
''.join(indices_char[i] for i in idx[:70])
cs = 3
c1_dat = [idx[i] for i in xrange(0, len(idx)-1-cs, cs)]
c2_dat = [idx[i+1] for i in xrange(0, len(idx)-1-cs, cs)]
c3_dat = [idx[i+2] for i in xrange(0, len(idx)-1-cs, cs)]
c4_dat = [idx[i+3] for i in xrange(0, len(idx)-1-cs, cs)] # <-- gonna predict this
# we can turn these into Numpy arrays just by stacking them up together
x1 = np.stack(c1_dat[:-2]) # 1st chars
x2 = np.stack(c2_dat[:-2]) # 2nd chars
x3 = np.stack(c3_dat[:-2]) # 3rd chars
# for every 4 character peice of this - collected works
# labels will just be the 4th characters
y = np.stack(c4_dat[:-2])
# 1st, 2nd, 3rd chars of text
x1[:4], x2[:4], x3[:4]
# 4th char of text
y[:3]
x1.shape, y.shape
# we're going to turn these into embeddings
n_fac = 42
# by creating an embedding matrix
def embedding_input(name, n_in, n_out):
inp = Input(shape=(1,), dtype='int64', name=name)
emb = Embedding(n_in, n_out, input_length=1)(inp)
return inp, Flatten()(emb)
c1_in, c1 = embedding_input('c1', vocab_size, n_fac)
c2_in, c2 = embedding_input('c2', vocab_size, n_fac)
c3_in, c3 = embedding_input('c3', vocab_size, n_fac)
# c1, c2, c3 represent result of putting each char through the embedding &
# getting out 42 latent vectors. <-- those are input to greenarrow.
n_hidden = 256
dense_in = Dense(n_hidden, activation='relu')
c1_hidden = dense_in(c1)
dense_hidden = Dense(n_hidden, activation='tanh')
c2_dense = dense_in(c2) # char-2 embedding thru greenarrow
hidden_2 = dense_hidden(c1_hidden) # output of char-1's hidden state thru orangearrow
c2_hidden = merge([c2_dense, hidden_2]) # merge the two together (default: sum)
c3_dense = dense_in(c3)
hidden_3 = dense_hidden(c2_hidden)
c3_hidden = merge([c3_dense, hidden_3])
dense_out = Dense(vocab_size, activation='softmax') #output size: 86 <-- vocab_size
c4_out = dense_out(c3_hidden)
# passing in our 3 inputs & 1 output
model = Model([c1_in, c2_in, c3_in], c4_out)
model.summary()
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
model.optimizer.lr=0.001
model.fit([x1, x2, x3], y, batch_size=64, nb_epoch=10)
def get_next(inp):
idxs = [char_indices[c] for c in inp]
arrs = [np.array(i)[np.newaxis] for i in idxs]
p = model.predict(arrs)
i = np.argmax(p)
return chars[i]
get_next('phi')
get_next(' th')
get_next(' an')
cs = 8 # use 8 characters to predict the 9th
c_in_dat = [[idx[i+n] for i in xrange(0, len(idx)-1-cs, cs)] for n in range(cs)]
c_out_dat = [idx[i+cs] for i in xrange(0, len(idx)-1-cs,cs)]
# go thru every one of those input lists and turn into Numpy array:
xs = [np.stack(c[:-2]) for c in c_in_dat]
len(xs), xs[0].shape
y = np.stack(c_out_dat[:-2])
# visualizing xs:
[xs[n][:cs] for n in range(cs)]
y[:cs]
n_fac = 42
def embedding_input(name, n_in, n_out):
inp = Input(shape=(1,), dtype='int64', name=name+'_in')
emb = Embedding(n_in, n_out, input_length=1, name=name+'_emb')(inp)
return inp, Flatten()(emb)
c_ins = [embedding_input('c'+str(n), vocab_size, n_fac) for n in range(cs)]
n_hidden = 256
dense_in = Dense(n_hidden, activation='relu')
dense_hidden = Dense(n_hidden, activation='relu', init='identity')
dense_out = Dense(vocab_size, activation='softmax')
hidden = dense_in(c_ins[0][1])
for i in range(1,cs):
c_dense = dense_in(c_ins[i][1]) #green arrow
hidden = dense_hidden(hidden) #orange arrow
hidden = merge([c_dense, hidden]) #merge the two together
c_out = dense_out(hidden)
model = Model([c[0] for c in c_ins], c_out)
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
model.fit(xs, y, batch_size=64, nb_epoch=12)
def get_next(inp):
idxs = [np.array(char_indices[c])[np.newaxis] for c in inp]
p = model.predict(idxs)
return chars[np.argmax(p)]
get_next('for thos')
get_next('part of ')
get_next('queens a')
# c_in_dat = [[idx[i+n] for i in xrange(0, len(idx)-1-cs, cs)] for n in range(cs)]
c_out_dat = [[idx[i+n] for i in xrange(1, len(idx)-cs, cs)] for n in range(cs)]
ys = [np.stack(c[:-2]) for c in c_out_dat]
[xs[n][:cs] for n in range(cs)]
[ys[n][:cs] for n in range(cs)]
dense_in = Dense(n_hidden, activation='relu')
dense_out = Dense(vocab_size, activation='softmax', name='output')
# our char1 input is moved within the diagram's loop-box; so now need
# initialized input (zeros)
inp1 = Input(shape=(n_fac,), name='zeros')
hidden = dense_in(inp1)
outs = []
for i in range(cs):
c_dense = dense_in(c_ins[i][1])
hidden = dense_hidden(hidden)
hidden = merge([c_dense, hidden], mode='sum')
# every layer now has an output
outs.append(dense_out(hidden))
# our loop is identical to before, except at the end of every loop,
# we're going to append this output; so now we're going to have
# 8 outputs for every sequence instead of just 1.
# model now has vector of 0s: [inp1], and array of outputs: outs
model = Model([inp1] + [c[0] for c in c_ins], outs)
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
zeros = np.tile(np.zeros(n_fac), (len(xs[0]),1))
zeros.shape
model.fit([zeros]+xs, ys, batch_size=64, nb_epoch=12)
def get_nexts(inp):
idxs = [char_indices[c] for c in inp]
arrs = [np.array(i)[np.newaxis] for i in idxs]
p = model.predict([np.zeros(n_fac)[np.newaxis,:]] + arrs)
print(list(inp))
return [chars[np.argmax(o)] for o in p]
get_nexts(' this is')
get_nexts(' part of')
n_hidden, n_fac, cs, vocab_size
model = Sequential([
Embedding(vocab_size, n_fac, input_length=cs),
SimpleRNN(n_hidden, return_sequences=True, activation='relu', inner_init='identity'),
TimeDistributed(Dense(vocab_size, activation='softmax')),
])
model.summary()
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
# just some dimensionality changes required; otherwise same
x_rnn = np.stack(np.squeeze(xs), axis=1)
y_rnn = np.stack(ys, axis=1)
x_rnn.shape, y_rnn.shape
model.fit(x_rnn, y_rnn, batch_size=64, nb_epoch=8)
def get_nexts_keras(inp):
idxs = [char_indices[c] for c in inp]
arr = np.array(idxs)[np.newaxis,:]
p = model.predict(arr)[0]
print(list(inp))
return [chars[np.argmax(o)] for o in p]
get_nexts_keras(' this is')
bs = 64
model = Sequential([
Embedding(vocab_size, n_fac, input_length=cs, batch_input_shape=(bs,8)),
BatchNormalization(),
LSTM(n_hidden, return_sequences=True, stateful=True),
TimeDistributed(Dense(vocab_size, activation='softmax')),
])
# don't forget to compile (accidentally hit `M` in JNB)
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
mx = len(x_rnn)//bs*bs
model.fit(x_rnn[:mx], y_rnn[:mx], batch_size=bs, nb_epoch=4, shuffle=False)
model.optimizer.lr=1e-4
model.fit(x_rnn[:mx], y_rnn[:mx], batch_size=bs, nb_epoch=4, shuffle=False)
model = Sequential([
SimpleRNN(n_hidden, return_sequences=True, input_shape=(cs, vocab_size),
activation='relu', inner_init='identity'),
TimeDistributed(Dense(vocab_size, activation='softmax')),
])
model.compile(loss='categorical_crossentropy', optimizer=Adam())
# no embedding layer, so inputs must be one-hotted too.
oh_ys = [to_categorical(o, vocab_size) for o in ys]
oh_y_rnn = np.stack(oh_ys, axis=1)
oh_xs = [to_categorical(o, vocab_size) for o in xs]
oh_x_rnn = np.stack(oh_xs, axis=1)
oh_x_rnn.shape, oh_y_rnn.shape
model.fit(oh_x_rnn, oh_y_rnn, batch_size=64, nb_epoch=8)
def get_nexts_oh(inp):
idxs = np.array([char_indices[c] for c in inp])
arr = to_categorical(idxs, vocab_size)
p = model.predict(arr[np.newaxis,:])[0]
print(list(inp))
return [chars[np.argmax(o)] for o in p]
get_nexts_oh(' this is')
n_input = vocab_size
n_output = vocab_size
def init_wgts(rows, cols):
scale = math.sqrt(2/rows) # 1st calc Glorot number to scale weights
return shared(normal(scale=scale, size=(rows, cols)).astype(np.float32))
def init_bias(rows):
return shared(np.zeros(rows, dtype=np.float32))
def wgts_and_bias(n_in, n_out):
return init_wgts(n_in, n_out), init_bias(n_out)
def id_and_bias(n):
return shared(np.eye(n, dtype=np.float32)), init_bias(n)
# Theano variables
t_inp = T.matrix('inp')
t_outp = T.matrix('outp')
t_h0 = T.vector('h0')
lr = T.scalar('lr')
all_args = [t_h0, t_inp, t_outp, lr]
W_h = id_and_bias(n_hidden)
W_x = wgts_and_bias(n_input, n_hidden)
W_y = wgts_and_bias(n_hidden, n_output)
w_all = list(chain.from_iterable([W_h, W_x, W_y]))
def step(x, h, W_h, b_h, W_x, b_x, W_y, b_y):
# Calculate the hidden activations
h = nnet.relu(T.dot(x, W_x) + b_x + T.dot(h, W_h) + b_h)
# Calculate the output activations
y = nnet.softmax(T.dot(h, W_y) + b_y)
# Return both (the 'Flatten()' is to work around a theano bug)
return h, T.flatten(y, 1)
[v_h, v_y], _ = theano.scan(step, sequences=t_inp,
outputs_info=[t_h0, None], non_sequences=w_all)
error = nnet.categorical_crossentropy(v_y, t_outp).sum()
g_all = T.grad(error, w_all)
def upd_dict(wgts, grads, lr):
return OrderedDict({w: w-g*lr for (w,g) in zip(wgts,grads)})
upd = upd_dict(w_all, g_all, lr)
# we're finally ready to compile the function!:
fn = theano.function(all_args, error, updates=upd, allow_input_downcast=True)
X = oh_x_rnn
Y = oh_y_rnn
X.shape, Y.shape
err=0.0; l_rate=0.01
for i in xrange(len(X)):
err += fn(np.zeros(n_hidden), X[i], Y[i], l_rate)
if i % 1000 == 999:
print ("Error:{:.3f}".format(err/1000))
err=0.0
f_y = theano.function([t_h0, t_inp], v_y, allow_input_downcast=True)
pred = np.argmax(f_y(np.zeros(n_hidden), X[6]), axis=1)
act = np.argmax(X[6], axis=1)
[indices_char[o] for o in act]
[indices_char[o] for o in pred]
# looking at how to use Python debugger
import numpy as np
import pdb
err=0.; lrate=0.01
for i in range(len(np.zeros(10))):
err += np.sin(lrate+np.e**i)
pdb.set_trace()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We'll setup a distributed.Client locally. In the real world you could connect to a cluster of dask-workers.
Step2: For demonstration, we'll use the perennial NYC taxi cab dataset.
Step3: I happen to know that some of the values in this dataset are suspect, so let's drop them.
Step4: Now, we'll split our DataFrame into a train and test set, and select our feature matrix and target column (whether the passenger tipped).
Step5: With our training data in hand, we fit our logistic regression.
Step6: Again, following the lead of scikit-learn we can measure the performance of the estimator on the training dataset using the .score method.
Step7: and on the test dataset
Step8: Pipelines
Step10: First let's write a little transformer to convert columns to Categoricals.
Step11: We'll also want a daskified version of scikit-learn's StandardScaler, one that won't eagerly convert the dask DataFrame to an in-memory array.
Step12: Finally, I've written a dummy encoder transformer that converts categoricals to dummy (indicator) columns.
Step13: So that's our pipeline.
Step14: And we can score it as well. The Pipeline ensures that all of the necessary transformations take place before calling the estimator's score method.
Step15: Grid Search
Step16: We'll search over two hyperparameters
Step17: Now we have access to the usual attributes like cv_results_ learned by the grid search object
Step18: And we can do our usual checks on model fit for the training set
Step19: And the test set
|
<ASSISTANT_TASK:>
Python Code:
import os
import s3fs
import pandas as pd
import dask.array as da
import dask.dataframe as dd
from distributed import Client
from dask import persist, compute
from dask_glm.estimators import LogisticRegression
client = Client()
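# (Illustrative, hypothetical address) to connect to a real cluster instead of a local one,
# point the Client at a running dask-scheduler, e.g.:
# client = Client('tcp://scheduler-address:8786')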
if not os.path.exists('trip.csv'):
s3 = s3fs.S3FileSystem(anon=True)
s3.get("dask-data/nyc-taxi/2015/yellow_tripdata_2015-01.csv", "trip.csv")
ddf = dd.read_csv("trip.csv")
ddf = ddf.repartition(npartitions=8)
# these filter out less than 1% of the observations
ddf = ddf[(ddf.trip_distance < 20) &
(ddf.fare_amount < 150)]
ddf = ddf.repartition(npartitions=8)
df_train, df_test = ddf.random_split([0.8, 0.2], random_state=2)
columns = ['VendorID', 'passenger_count', 'trip_distance', 'payment_type', 'fare_amount']
X_train, y_train = df_train[columns], df_train['tip_amount'] > 0
X_test, y_test = df_test[columns], df_test['tip_amount'] > 0
X_train, y_train, X_test, y_test = persist(
X_train, y_train, X_test, y_test
)
X_train.head()
y_train.head()
print(f"{len(X_train):,d} observations")
%%time
# this is a *dask-glm* LogisticRegresion, not scikit-learn
lm = LogisticRegression(fit_intercept=False)
lm.fit(X_train.values, y_train.values)
%%time
lm.score(X_train.values, y_train.values).compute()
%%time
lm.score(X_test.values, y_test.values).compute()
from sklearn.base import TransformerMixin, BaseEstimator
from sklearn.pipeline import make_pipeline
class CategoricalEncoder(BaseEstimator, TransformerMixin):
    """Encode `categories` as pandas `Categorical`.

    Parameters
    ----------
    categories : Dict[str, list]
        Mapping from column name to list of possible values
    """
def __init__(self, categories):
self.categories = categories
def fit(self, X, y=None):
# "stateless" transformer. Don't have anything to learn here
return self
def transform(self, X, y=None):
X = X.copy()
for column, categories in self.categories.items():
X[column] = X[column].astype('category').cat.set_categories(categories)
return X
class StandardScaler(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, with_mean=True, with_std=True):
self.columns = columns
self.with_mean = with_mean
self.with_std = with_std
def fit(self, X, y=None):
if self.columns is None:
self.columns_ = X.columns
else:
self.columns_ = self.columns
if self.with_mean:
self.mean_ = X[self.columns_].mean(0)
if self.with_std:
self.scale_ = X[self.columns_].std(0)
return self
def transform(self, X, y=None):
X = X.copy()
if self.with_mean:
X[self.columns_] = X[self.columns_] - self.mean_
if self.with_std:
X[self.columns_] = X[self.columns_] / self.scale_
return X.values
from dummy_encoder import DummyEncoder
pipe = make_pipeline(
CategoricalEncoder({"VendorID": [1, 2],
"payment_type": [1, 2, 3, 4, 5]}),
DummyEncoder(),
StandardScaler(columns=['passenger_count', 'trip_distance', 'fare_amount']),
LogisticRegression(fit_intercept=False)
)
%%time
pipe.fit(X_train, y_train.values)
pipe.score(X_train, y_train.values).compute()
pipe.score(X_test, y_test.values).compute()
from sklearn.model_selection import GridSearchCV
import dask_searchcv as dcv
param_grid = {
'standardscaler__with_std': [True, False],
'logisticregression__lamduh': [.001, .01, .1, 1],
}
pipe = make_pipeline(
CategoricalEncoder({"VendorID": [1, 2],
"payment_type": [1, 2, 3, 4, 5]}),
DummyEncoder(),
StandardScaler(columns=['passenger_count', 'trip_distance', 'fare_amount']),
LogisticRegression(fit_intercept=False)
)
gs = dcv.GridSearchCV(pipe, param_grid)
%%time
gs.fit(X_train, y_train.values)
pd.DataFrame(gs.cv_results_)
gs.score(X_train, y_train.values).compute()
gs.score(X_test, y_test.values).compute()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Trapezoidal rule
Step3: Now use scipy.integrate.quad to integrate the f and g functions and see how the result compares with your trapz function. Print the results and errors.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy import integrate
def trapz(f, a, b, N):
    """Integrate the function f(x) over the range [a,b] with N points."""
h=(b-a)/N
k=np.arange(1,N)
return h*(0.5*f(a)+0.5*f(b)+f(a+k*h).sum())
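# For reference, trapz implements the composite trapezoidal rule:
#     integral_a^b f(x) dx ~= h * (f(a)/2 + f(b)/2 + sum_{k=1}^{N-1} f(a + k*h)),  with h = (b - a)/N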
f = lambda x: x**2
g = lambda x: np.sin(x)
I = trapz(f, 0, 1, 1000)
assert np.allclose(I, 0.33333349999999995)
J = trapz(g, 0, np.pi, 1000)
assert np.allclose(J, 1.9999983550656628)
x,err1=integrate.quad(f,0.0,1.0)
y,err2=integrate.quad(g,0.0,np.pi)
print("Integral result for f: ", x)
print("Trapezoidal result for f: ", I)
print("Error for integral of f: ", err1)
print("Integral result for g: ", y)
print("Trapezoidal result for g: ", J)
print("Error for integral of g: ", err2)
assert True # leave this cell to grade the previous one
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As the name suggest cart2pol converts a pair of cartesian coordinates [x, y] to polar coordinates [r, phi]
Step2: All well and good. However, what if we want to convert a list of cartesian coordinates to polar coordinates?
Step3: These coordinates make a circle centered at [0,0]
Step4: This is a bit time consuming to type out though, surely there is a better way to make our functions work for lists of inputs?
Step forward vectorise
Step5: Like magic! We can assure ourselves that these two methods produce the same answers
Step6: But how do they perform?
Step7: It is significantly faster, both for code writing and at runtime, to use vectorsie rather than manually looping through lists
Step8: Note that you can recover results stored in the task list with get(). This list will be in the same order as that which you used to spawn the processes
Step9: The structure of a multiproccess call is
Step10: Why can't we multithread in Python?
|
<ASSISTANT_TASK:>
Python Code:
import math
import numpy as np
from datetime import datetime
def cart2pol(x, y):
r = np.sqrt(x**2 + y**2)
phi = np.arctan2(y, x)
return(r, phi)
from IPython.core.display import Image
Image(url='https://upload.wikimedia.org/wikipedia/commons/thumb/7/78/Polar_to_cartesian.svg/1024px-Polar_to_cartesian.svg.png',width=400)
x = 3
y = 4
r, phi = cart2pol(x,y)
print(r,phi)
def cart2pol_list(list_x, list_y):
# Prepare empty lists for r and phi values
r = np.empty(len(list_x))
phi = np.empty(len(list_x))
# Loop through the lists of x and y, calculating the r and phi values
for i in range(len(list_x)):
r[i] = np.sqrt(list_x[i]**2 + list_y[i]**2)
phi[i] = np.arctan2(list_y[i], list_x[i])
return(r, phi)
x_list = np.sin(np.arange(0,2*np.pi,0.1))
y_list = np.cos(np.arange(0,2*np.pi,0.1))
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1, figsize=(6,6))
ax.scatter(x_list,y_list)
r_list, phi_list = cart2pol_list(x_list,y_list)
print(r_list)
print(phi_list)
cart2pol_vec = np.vectorize(cart2pol)
r_list_vec, phi_list_vec = cart2pol_vec(x_list, y_list)
print(r_list == r_list_vec)
print(phi_list == phi_list_vec)
%timeit cart2pol_list(x_list, y_list)
%timeit cart2pol_vec(x_list, y_list)
def do_maths(start=0, num=10):
pos = start
big = 1000 * 1000
ave = 0
while pos < num:
pos += 1
val = math.sqrt((pos - big) * (pos - big))
ave += val / num
return int(ave)
t0 = datetime.now()
do_maths(num=30000000)
dt = datetime.now() - t0
print("Done in {:,.2f} sec.".format(dt.total_seconds()))
import multiprocessing
t0 = datetime.now()
pool = multiprocessing.Pool()
processor_count = multiprocessing.cpu_count()
# processor_count = 2 # we can tell Python to use a specific number of cores if desired
print(f"Computing with {processor_count} processor(s)")
tasks = []
for n in range(1, processor_count + 1):
task = pool.apply_async(do_maths, (30000000 * (n - 1) / processor_count,
30000000 * n / processor_count))
tasks.append(task)
pool.close()
pool.join()
dt = datetime.now() - t0
print("Done in {:,.2f} sec.".format(dt.total_seconds()))
for t in tasks:
print(t.get())
pool = multiprocessing.Pool() # Make a pool ready to receive tasks
results = [] # empty list for results
for n in range(1, processor_count + 1): # Loop for assigning a number of tasks
result = pool.apply_async(function, (arguments)) # make a task by passing it a function and arguments
results.append(result) # append the result(s) of this task to the list
pool.close() # tell async there are no more tasks coming
pool.join() # start running the tasks concurrently
for t in results:
t.get() # retrieve your results; you could print or assign each result to a variable
HTML(html)
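# Illustrative sketch (not part of the original notebook): because of the Global Interpreter
# Lock (GIL), only one thread executes Python bytecode at a time, so CPU-bound work such as
# do_maths gains little or nothing from threads -- compare this timing with the
# multiprocessing run above.
import threading

t0 = datetime.now()
threads = [threading.Thread(target=do_maths,
                            args=(30000000 * n / 4, 30000000 * (n + 1) / 4))
           for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
dt = datetime.now() - t0
print("Threaded version done in {:,.2f} sec.".format(dt.total_seconds()))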
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Use range() to print all the even numbers from 0 to 10.
Step2: Use List comprehension to create a list of all numbers between 1 and 50 that are divisible by 3.
Step3: Go through the string below and if the length of a word is even print "even!"
Step4: Write a program that prints the integers from 1 to 100. But for multiples of three print "Fizz" instead of the number, and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz".
Step5: Use List Comprehension to create a list of the first letters of every word in the string below
|
<ASSISTANT_TASK:>
Python Code:
st = 'Print only the words that start with s in this sentence'
#Code here
st = 'Print only the words that start with s in this sentence'
for word in st.split():
if word[0] == 's':
print(word )
#Code Here
for number in range(0,11):
if number % 2 == 0:
print(number)
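# (Alternative, added for illustration) range's step argument produces the even numbers directly:
for number in range(0, 11, 2):
    print(number)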
#Code in this cell
l = [number for number in range(1,51) if number % 3 == 0]
print(l)
st = 'Print every word in this sentence that has an even number of letters'
#Code in this cell
st = 'Print every word in this sentence that has an even number of letters'
for word in st.split():
if len(word) % 2 == 0:
print(word)
#Code in this cell
l = range(1,101)
for val in l:
if val % 3 == 0 and val % 5 == 0:
print ('FizzBuzz num ' + str(val))
elif val % 3 == 0:
print('Fizz num ' + str(val))
elif val % 5 ==0 :
print('Buzz num ' + str(val))
st = 'Create a list of the first letters of every word in this string'
#Code in this cell
st = 'Create a list of the first letters of every word in this string'
l = []
for word in st.split():
l.append(word[0])
print(l)
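# (Added for completeness) the prompt asks for a list comprehension; the loop above is
# equivalent to:
l = [word[0] for word in st.split()]
print(l)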
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note
Step2: Stock and Watson (1991) report that for their datasets, they could not reject the null hypothesis of a unit root in each series (so the series are integrated), but they did not find strong evidence that the series were co-integrated.
Step3: Dynamic factors
Step4: Estimates
Step5: Estimated factors
Step6: Post-estimation
Step7: Coincident Index
Step8: Below we plot the calculated coincident index along with the US recessions and the comparison coincident index USPHCI.
Step9: Appendix 1
Step10: So what did we just do?
Step11: Although this model increases the likelihood, it is not preferred by the AIC and BIC mesaures which penalize the additional three parameters.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
np.set_printoptions(precision=4, suppress=True, linewidth=120)
from pandas_datareader.data import DataReader
# Get the datasets from FRED
start = '1979-01-01'
end = '2014-12-01'
indprod = DataReader('IPMAN', 'fred', start=start, end=end)
income = DataReader('W875RX1', 'fred', start=start, end=end)
sales = DataReader('CMRMTSPL', 'fred', start=start, end=end)
emp = DataReader('PAYEMS', 'fred', start=start, end=end)
# dta = pd.concat((indprod, income, sales, emp), axis=1)
# dta.columns = ['indprod', 'income', 'sales', 'emp']
# HMRMT = DataReader('HMRMT', 'fred', start='1967-01-01', end=end)
# CMRMT = DataReader('CMRMT', 'fred', start='1997-01-01', end=end)
# HMRMT_growth = HMRMT.diff() / HMRMT.shift()
# sales = pd.Series(np.zeros(emp.shape[0]), index=emp.index)
# # Fill in the recent entries (1997 onwards)
# sales[CMRMT.index] = CMRMT
# # Backfill the previous entries (pre 1997)
# idx = sales.ix[:'1997-01-01'].index
# for t in range(len(idx)-1, 0, -1):
# month = idx[t]
# prev_month = idx[t-1]
# sales.ix[prev_month] = sales.ix[month] / (1 + HMRMT_growth.ix[prev_month].values)
dta = pd.concat((indprod, income, sales, emp), axis=1)
dta.columns = ['indprod', 'income', 'sales', 'emp']
dta.ix[:, 'indprod':'emp'].plot(subplots=True, layout=(2, 2), figsize=(15, 6));
# Create log-differenced series
dta['dln_indprod'] = (np.log(dta.indprod)).diff() * 100
dta['dln_income'] = (np.log(dta.income)).diff() * 100
dta['dln_sales'] = (np.log(dta.sales)).diff() * 100
dta['dln_emp'] = (np.log(dta.emp)).diff() * 100
# De-mean and standardize
dta['std_indprod'] = (dta['dln_indprod'] - dta['dln_indprod'].mean()) / dta['dln_indprod'].std()
dta['std_income'] = (dta['dln_income'] - dta['dln_income'].mean()) / dta['dln_income'].std()
dta['std_sales'] = (dta['dln_sales'] - dta['dln_sales'].mean()) / dta['dln_sales'].std()
dta['std_emp'] = (dta['dln_emp'] - dta['dln_emp'].mean()) / dta['dln_emp'].std()
# Get the endogenous data
endog = dta.ix['1979-02-01':, 'std_indprod':'std_emp']
# Create the model
mod = sm.tsa.DynamicFactor(endog, k_factors=1, factor_order=2, error_order=2)
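# (Added note) With k_factors=1, factor_order=2 and error_order=2, the estimated model is roughly:
#     y_t = Lambda * f_t + e_t, where each e_{i,t} follows an AR(2) process
#     f_t = a_1 * f_{t-1} + a_2 * f_{t-2} + u_t
# with y_t the four standardized series and f_t the single unobserved factor.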
initial_res = mod.fit(method='powell', disp=False)
res = mod.fit(initial_res.params)
print(res.summary(separate_params=False))
fig, ax = plt.subplots(figsize=(13,3))
# Plot the factor
dates = endog.index._mpl_repr()
ax.plot(dates, res.factors.filtered[0], label='Factor')
ax.legend()
# Retrieve and also plot the NBER recession indicators
rec = DataReader('USREC', 'fred', start=start, end=end)
ylim = ax.get_ylim()
ax.fill_between(dates[:-3], ylim[0], ylim[1], rec.values[:-4,0], facecolor='k', alpha=0.1);
res.plot_coefficients_of_determination(figsize=(8,2));
usphci = DataReader('USPHCI', 'fred', start='1979-01-01', end='2014-12-01')['USPHCI']
usphci.plot(figsize=(13,3));
dusphci = usphci.diff()[1:].values
def compute_coincident_index(mod, res):
# Estimate W(1)
spec = res.specification
design = mod.ssm['design']
transition = mod.ssm['transition']
ss_kalman_gain = res.filter_results.kalman_gain[:,:,-1]
k_states = ss_kalman_gain.shape[0]
W1 = np.linalg.inv(np.eye(k_states) - np.dot(
np.eye(k_states) - np.dot(ss_kalman_gain, design),
transition
)).dot(ss_kalman_gain)[0]
# Compute the factor mean vector
factor_mean = np.dot(W1, dta.ix['1972-02-01':, 'dln_indprod':'dln_emp'].mean())
# Normalize the factors
factor = res.factors.filtered[0]
factor *= np.std(usphci.diff()[1:]) / np.std(factor)
# Compute the coincident index
coincident_index = np.zeros(mod.nobs+1)
# The initial value is arbitrary; here it is set to
# facilitate comparison
coincident_index[0] = usphci.iloc[0] * factor_mean / dusphci.mean()
for t in range(0, mod.nobs):
coincident_index[t+1] = coincident_index[t] + factor[t] + factor_mean
# Attach dates
coincident_index = pd.Series(coincident_index, index=dta.index).iloc[1:]
# Normalize to use the same base year as USPHCI
coincident_index *= (usphci.ix['1992-07-01'] / coincident_index.ix['1992-07-01'])
return coincident_index
fig, ax = plt.subplots(figsize=(13,3))
# Compute the index
coincident_index = compute_coincident_index(mod, res)
# Plot the factor
dates = endog.index._mpl_repr()
ax.plot(dates, coincident_index, label='Coincident index')
ax.plot(usphci.index._mpl_repr(), usphci, label='USPHCI')
ax.legend(loc='lower right')
# Retrieve and also plot the NBER recession indicators
ylim = ax.get_ylim()
ax.fill_between(dates[:-3], ylim[0], ylim[1], rec.values[:-4,0], facecolor='k', alpha=0.1);
from statsmodels.tsa.statespace import tools
class ExtendedDFM(sm.tsa.DynamicFactor):
def __init__(self, endog, **kwargs):
# Setup the model as if we had a factor order of 4
super(ExtendedDFM, self).__init__(
endog, k_factors=1, factor_order=4, error_order=2,
**kwargs)
# Note: `self.parameters` is an ordered dict with the
# keys corresponding to parameter types, and the values
# the number of parameters of that type.
# Add the new parameters
self.parameters['new_loadings'] = 3
# Cache a slice for the location of the 4 factor AR
# parameters (a_1, ..., a_4) in the full parameter vector
offset = (self.parameters['factor_loadings'] +
self.parameters['exog'] +
self.parameters['error_cov'])
self._params_factor_ar = np.s_[offset:offset+2]
self._params_factor_zero = np.s_[offset+2:offset+4]
@property
def start_params(self):
# Add three new loading parameters to the end of the parameter
# vector, initialized to zeros (for simplicity; they could
# be initialized any way you like)
return np.r_[super(ExtendedDFM, self).start_params, 0, 0, 0]
@property
def param_names(self):
# Add the corresponding names for the new loading parameters
# (the name can be anything you like)
return super(ExtendedDFM, self).param_names + [
'loading.L%d.f1.%s' % (i, self.endog_names[3]) for i in range(1,4)]
def transform_params(self, unconstrained):
# Perform the typical DFM transformation (w/o the new parameters)
constrained = super(ExtendedDFM, self).transform_params(
unconstrained[:-3])
# Redo the factor AR constraint, since we only want an AR(2),
# and the previous constraint was for an AR(4)
ar_params = unconstrained[self._params_factor_ar]
constrained[self._params_factor_ar] = (
tools.constrain_stationary_univariate(ar_params))
# Return all the parameters
return np.r_[constrained, unconstrained[-3:]]
def untransform_params(self, constrained):
# Perform the typical DFM untransformation (w/o the new parameters)
unconstrained = super(ExtendedDFM, self).untransform_params(
constrained[:-3])
# Redo the factor AR unconstraint, since we only want an AR(2),
# and the previous unconstraint was for an AR(4)
ar_params = constrained[self._params_factor_ar]
unconstrained[self._params_factor_ar] = (
tools.unconstrain_stationary_univariate(ar_params))
# Return all the parameters
return np.r_[unconstrained, constrained[-3:]]
def update(self, params, transformed=True, complex_step=False):
# Peform the transformation, if required
if not transformed:
params = self.transform_params(params)
params[self._params_factor_zero] = 0
# Now perform the usual DFM update, but exclude our new parameters
super(ExtendedDFM, self).update(params[:-3], transformed=True, complex_step=complex_step)
# Finally, set our new parameters in the design matrix
self.ssm['design', 3, 1:4] = params[-3:]
# Create the model
extended_mod = ExtendedDFM(endog)
initial_extended_res = extended_mod.fit(maxiter=1000, disp=False)
extended_res = extended_mod.fit(initial_extended_res.params, method='nm', maxiter=1000)
print(extended_res.summary(separate_params=False))
extended_res.plot_coefficients_of_determination(figsize=(8,2));
fig, ax = plt.subplots(figsize=(13,3))
# Compute the index
extended_coincident_index = compute_coincident_index(extended_mod, extended_res)
# Plot the factor
dates = endog.index._mpl_repr()
ax.plot(dates, coincident_index, '-', linewidth=1, label='Basic model')
ax.plot(dates, extended_coincident_index, '--', linewidth=3, label='Extended model')
ax.plot(usphci.index._mpl_repr(), usphci, label='USPHCI')
ax.legend(loc='lower right')
ax.set(title='Coincident indices, comparison')
# Retrieve and also plot the NBER recession indicators
ylim = ax.get_ylim()
ax.fill_between(dates[:-3], ylim[0], ylim[1], rec.values[:-4,0], facecolor='k', alpha=0.1);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we need to define materials that will be used in the problem. We'll create three distinct materials for water, clad and fuel.
Step2: With our materials, we can now create a Materials object that can be exported to an actual XML file.
Step3: Now let's move on to the geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces -- in this case two cylinders and six reflective planes.
Step4: With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces.
Step5: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
Step6: We now must create a geometry that is assigned a root universe and export it to XML.
Step7: Next, we must define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 10,000 particles.
Step8: Now we are finally ready to make use of the openmc.mgxs module to generate multi-group cross sections! First, let's define "coarse" 2-group and "fine" 8-group structures using the built-in EnergyGroups class.
Step9: Now we will instantiate a variety of MGXS objects needed to run an OpenMOC simulation to verify the accuracy of our cross sections. In particular, we define transport, fission, nu-fission, nu-scatter and chi cross sections for each of the three cells in the fuel pin with the 8-group structure as our energy groups.
Step10: Next, we showcase the use of OpenMC's tally precision trigger feature in conjunction with the openmc.mgxs module. In particular, we will assign a tally trigger of 1E-2 on the standard deviation for each of the tallies used to compute multi-group cross sections.
Step11: Now, we must loop over all cells to set the cross section domains to the various cells - fuel, clad and moderator - included in the geometry. In addition, we will set each cross section to tally cross sections on a per-nuclide basis through the use of the MGXS class' boolean by_nuclide instance attribute.
Step12: Now we a have a complete set of inputs, so we can go ahead and run our simulation.
Step13: Tally Data Processing
Step14: The statepoint is now ready to be analyzed by our multi-group cross sections. We simply have to load the tallies from the StatePoint into each object as follows and our MGXS objects will compute the cross sections for us under-the-hood.
Step15: That's it! Our multi-group cross sections are now ready for the big spotlight. This time we have cross sections in three distinct spatial zones - fuel, clad and moderator - on a per-nuclide basis.
Step16: Our multi-group cross sections are capable of summing across all nuclides to provide us with macroscopic cross sections as well.
Step17: Although a printed report is nice, it is not scalable or flexible. Let's extract the microscopic cross section data for the moderator as a Pandas DataFrame.
Step18: Next, we illustrate how one can easily take multi-group cross sections and condense them down to a coarser energy group structure. The MGXS class includes a get_condensed_xs(...) method which takes an EnergyGroups parameter with a coarse(r) group structure and returns a new MGXS condensed to the coarse groups. We illustrate this process below using the 2-group structure created earlier.
Step19: Group condensation is as simple as that! We now have a new coarse 2-group TransportXS in addition to our original 8-group TransportXS. Let's inspect the 2-group TransportXS by printing it to the screen and extracting a Pandas DataFrame as we have already learned how to do.
Step20: Verification with OpenMOC
Step21: Next, we can inject the multi-group cross sections into the equivalent fuel pin cell OpenMOC geometry.
Step22: We are now ready to run OpenMOC to verify our cross-sections from OpenMC.
Step23: We report the eigenvalues computed by OpenMC and OpenMOC here together to summarize our results.
Step24: As a sanity check, let's run a simulation with the coarse 2-group cross sections to ensure that they also produce a reasonable result.
Step25: There is a non-trivial bias in both the 2-group and 8-group cases. In the case of a pin cell, one can show that these biases do not converge to <100 pcm with more particle histories. For heterogeneous geometries, additional measures must be taken to address the following three sources of bias
Step26: Another useful type of illustration is scattering matrix sparsity structures. First, we extract Pandas DataFrames for the H-1 and O-16 scattering matrices.
Step27: Matplotlib's imshow routine can be used to plot the matrices to illustrate their sparsity structures.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-dark')
import openmoc
import openmc
import openmc.mgxs as mgxs
import openmc.data
from openmc.openmoc_compatible import get_openmoc_geometry
%matplotlib inline
# 1.6% enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_nuclide('U235', 3.7503e-4)
fuel.add_nuclide('U238', 2.2625e-2)
fuel.add_nuclide('O16', 4.6007e-2)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide('H1', 4.9457e-2)
water.add_nuclide('O16', 2.4732e-2)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_nuclide('Zr90', 7.2758e-3)
# Instantiate a Materials collection
materials_file = openmc.Materials([fuel, water, zircaloy])
# Export to "materials.xml"
materials_file.export_to_xml()
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.45720)
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-0.63, boundary_type='reflective')
max_x = openmc.XPlane(x0=+0.63, boundary_type='reflective')
min_y = openmc.YPlane(y0=-0.63, boundary_type='reflective')
max_y = openmc.YPlane(y0=+0.63, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-0.63, boundary_type='reflective')
max_z = openmc.ZPlane(z0=+0.63, boundary_type='reflective')
# Create a Universe to encapsulate a fuel pin
pin_cell_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
pin_cell_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
pin_cell_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
pin_cell_universe.add_cell(moderator_cell)
# Create root Cell
root_cell = openmc.Cell(name='root cell')
root_cell.region = +min_x & -max_x & +min_y & -max_y
root_cell.fill = pin_cell_universe
# Create root Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(root_cell)
# Create Geometry and set root Universe
openmc_geometry = openmc.Geometry(root_universe)
# Export to "geometry.xml"
openmc_geometry.export_to_xml()
# OpenMC simulation parameters
batches = 50
inactive = 10
particles = 10000
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': True}
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.source.Source(space=uniform_dist)
# Activate tally precision triggers
settings_file.trigger_active = True
settings_file.trigger_max_batches = settings_file.batches * 4
# Export to "settings.xml"
settings_file.export_to_xml()
# Instantiate a "coarse" 2-group EnergyGroups object
coarse_groups = mgxs.EnergyGroups([0., 0.625, 20.0e6])
# Instantiate a "fine" 8-group EnergyGroups object
fine_groups = mgxs.EnergyGroups([0., 0.058, 0.14, 0.28,
0.625, 4.0, 5.53e3, 821.0e3, 20.0e6])
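# Quick sanity check on the group structures defined above; num_groups is the same
# attribute relied on further down when sizing the OpenMOC materials.
print('coarse groups:', coarse_groups.num_groups)   # expected: 2
print('fine groups:', fine_groups.num_groups)       # expected: 8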
# Extract all Cells filled by Materials
openmc_cells = openmc_geometry.get_all_material_cells().values()
# Create dictionary to store multi-group cross sections for all cells
xs_library = {}
# Instantiate 8-group cross sections for each cell
for cell in openmc_cells:
xs_library[cell.id] = {}
xs_library[cell.id]['transport'] = mgxs.TransportXS(groups=fine_groups)
xs_library[cell.id]['fission'] = mgxs.FissionXS(groups=fine_groups)
xs_library[cell.id]['nu-fission'] = mgxs.FissionXS(groups=fine_groups, nu=True)
xs_library[cell.id]['nu-scatter'] = mgxs.ScatterMatrixXS(groups=fine_groups, nu=True)
xs_library[cell.id]['chi'] = mgxs.Chi(groups=fine_groups)
# Create a tally trigger for +/- 0.01 on each tally used to compute the multi-group cross sections
tally_trigger = openmc.Trigger('std_dev', 1E-2)
# Add the tally trigger to each of the multi-group cross section tallies
for cell in openmc_cells:
for mgxs_type in xs_library[cell.id]:
xs_library[cell.id][mgxs_type].tally_trigger = tally_trigger
# Instantiate an empty Tallies object
tallies_file = openmc.Tallies()
# Iterate over all cells and cross section types
for cell in openmc_cells:
for rxn_type in xs_library[cell.id]:
# Set the cross sections domain to the cell
xs_library[cell.id][rxn_type].domain = cell
# Tally cross sections by nuclide
xs_library[cell.id][rxn_type].by_nuclide = True
# Add OpenMC tallies to the tallies file for XML generation
for tally in xs_library[cell.id][rxn_type].tallies.values():
tallies_file.append(tally, merge=True)
# Export to "tallies.xml"
tallies_file.export_to_xml()
# Run OpenMC
openmc.run()
# Load the last statepoint file
sp = openmc.StatePoint('statepoint.082.h5')
# Iterate over all cells and cross section types
for cell in openmc_cells:
for rxn_type in xs_library[cell.id]:
xs_library[cell.id][rxn_type].load_from_statepoint(sp)
nufission = xs_library[fuel_cell.id]['nu-fission']
nufission.print_xs(xs_type='micro', nuclides=['U235', 'U238'])
nufission = xs_library[fuel_cell.id]['nu-fission']
nufission.print_xs(xs_type='macro', nuclides='sum')
nuscatter = xs_library[moderator_cell.id]['nu-scatter']
df = nuscatter.get_pandas_dataframe(xs_type='micro')
df.head(10)
# Extract the 8-group transport cross section for the fuel
fine_xs = xs_library[fuel_cell.id]['transport']
# Condense to the 2-group structure
condensed_xs = fine_xs.get_condensed_xs(coarse_groups)
condensed_xs.print_xs()
df = condensed_xs.get_pandas_dataframe(xs_type='micro')
df
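# Extra cross-check (not in the original notebook): each condensed value is a flux-weighted
# average of the fine-group values it replaces, so the 2-group numbers should fall within
# the range of the corresponding 8-group numbers.
print(fine_xs.get_xs(nuclides='sum').flatten())
print(condensed_xs.get_xs(nuclides='sum').flatten())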
# Create an OpenMOC Geometry from the OpenMC Geometry
openmoc_geometry = get_openmoc_geometry(sp.summary.geometry)
# Get all OpenMOC cells in the geometry
openmoc_cells = openmoc_geometry.getRootUniverse().getAllCells()
# Inject multi-group cross sections into OpenMOC Materials
for cell_id, cell in openmoc_cells.items():
# Ignore the root cell
if cell.getName() == 'root cell':
continue
# Get a reference to the Material filling this Cell
openmoc_material = cell.getFillMaterial()
# Set the number of energy groups for the Material
openmoc_material.setNumEnergyGroups(fine_groups.num_groups)
# Extract the appropriate cross section objects for this cell
transport = xs_library[cell_id]['transport']
nufission = xs_library[cell_id]['nu-fission']
nuscatter = xs_library[cell_id]['nu-scatter']
chi = xs_library[cell_id]['chi']
# Inject NumPy arrays of cross section data into the Material
# NOTE: Sum across nuclides to get macro cross sections needed by OpenMOC
openmoc_material.setSigmaT(transport.get_xs(nuclides='sum').flatten())
openmoc_material.setNuSigmaF(nufission.get_xs(nuclides='sum').flatten())
openmoc_material.setSigmaS(nuscatter.get_xs(nuclides='sum').flatten())
openmoc_material.setChi(chi.get_xs(nuclides='sum').flatten())
# Generate tracks for OpenMOC
track_generator = openmoc.TrackGenerator(openmoc_geometry, num_azim=128, azim_spacing=0.1)
track_generator.generateTracks()
# Run OpenMOC
solver = openmoc.CPUSolver(track_generator)
solver.computeEigenvalue()
# Print report of keff and bias with OpenMC
openmoc_keff = solver.getKeff()
openmc_keff = sp.k_combined[0]
bias = (openmoc_keff - openmc_keff) * 1e5
print('openmc keff = {0:1.6f}'.format(openmc_keff))
print('openmoc keff = {0:1.6f}'.format(openmoc_keff))
print('bias [pcm]: {0:1.1f}'.format(bias))
openmoc_geometry = get_openmoc_geometry(sp.summary.geometry)
openmoc_cells = openmoc_geometry.getRootUniverse().getAllCells()
# Inject multi-group cross sections into OpenMOC Materials
for cell_id, cell in openmoc_cells.items():
# Ignore the root cell
if cell.getName() == 'root cell':
continue
openmoc_material = cell.getFillMaterial()
openmoc_material.setNumEnergyGroups(coarse_groups.num_groups)
# Extract the appropriate cross section objects for this cell
transport = xs_library[cell_id]['transport']
nufission = xs_library[cell_id]['nu-fission']
nuscatter = xs_library[cell_id]['nu-scatter']
chi = xs_library[cell_id]['chi']
# Perform group condensation
transport = transport.get_condensed_xs(coarse_groups)
nufission = nufission.get_condensed_xs(coarse_groups)
nuscatter = nuscatter.get_condensed_xs(coarse_groups)
chi = chi.get_condensed_xs(coarse_groups)
# Inject NumPy arrays of cross section data into the Material
openmoc_material.setSigmaT(transport.get_xs(nuclides='sum').flatten())
openmoc_material.setNuSigmaF(nufission.get_xs(nuclides='sum').flatten())
openmoc_material.setSigmaS(nuscatter.get_xs(nuclides='sum').flatten())
openmoc_material.setChi(chi.get_xs(nuclides='sum').flatten())
# Generate tracks for OpenMOC
track_generator = openmoc.TrackGenerator(openmoc_geometry, num_azim=128, azim_spacing=0.1)
track_generator.generateTracks()
# Run OpenMOC
solver = openmoc.CPUSolver(track_generator)
solver.computeEigenvalue()
# Print report of keff and bias with OpenMC
openmoc_keff = solver.getKeff()
openmc_keff = sp.k_combined[0]
bias = (openmoc_keff - openmc_keff) * 1e5
print('openmc keff = {0:1.6f}'.format(openmc_keff))
print('openmoc keff = {0:1.6f}'.format(openmoc_keff))
print('bias [pcm]: {0:1.1f}'.format(bias))
# Create a figure of the U-235 continuous-energy fission cross section
fig = openmc.plot_xs('U235', ['fission'])
# Get the axis to use for plotting the MGXS
ax = fig.gca()
# Extract energy group bounds and MGXS values to plot
fission = xs_library[fuel_cell.id]['fission']
energy_groups = fission.energy_groups
x = energy_groups.group_edges
y = fission.get_xs(nuclides=['U235'], order_groups='decreasing', xs_type='micro')
y = np.squeeze(y)
# Fix low energy bound
x[0] = 1.e-5
# Extend the mgxs values array for matplotlib's step plot
y = np.insert(y, 0, y[0])
# Create a step plot for the MGXS
ax.plot(x, y, drawstyle='steps', color='r', linewidth=3)
ax.set_title('U-235 Fission Cross Section')
ax.legend(['Continuous', 'Multi-Group'])
ax.set_xlim((x.min(), x.max()))
# Construct a Pandas DataFrame for the microscopic nu-scattering matrix
nuscatter = xs_library[moderator_cell.id]['nu-scatter']
df = nuscatter.get_pandas_dataframe(xs_type='micro')
# Slice DataFrame in two for each nuclide's mean values
h1 = df[df['nuclide'] == 'H1']['mean']
o16 = df[df['nuclide'] == 'O16']['mean']
# Cast DataFrames as NumPy arrays
h1 = h1.values
o16 = o16.values
# Reshape arrays to 2D matrix for plotting
h1.shape = (fine_groups.num_groups, fine_groups.num_groups)
o16.shape = (fine_groups.num_groups, fine_groups.num_groups)
# Create plot of the H-1 scattering matrix
fig = plt.subplot(121)
fig.imshow(h1, interpolation='nearest', cmap='jet')
plt.title('H-1 Scattering Matrix')
plt.xlabel('Group Out')
plt.ylabel('Group In')
# Create plot of the O-16 scattering matrix
fig2 = plt.subplot(122)
fig2.imshow(o16, interpolation='nearest', cmap='jet')
plt.title('O-16 Scattering Matrix')
plt.xlabel('Group Out')
plt.ylabel('Group In')
# Show the plot on screen
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We have ~800 MB of data. The server where the site's backend will run has only 1 GB of memory, which creates a technical challenge. Since the site's purpose is only to count words or expressions that occur more often in certain sessions, rather than in all of them ('enfermeiro' vs 'deputado'), we can strip out those most common words
Step2: Counting the most frequent words that still remain
Step3: And estimating the size reduction
Step4: 536 MB. Not bad. Thanks to this reduction, a site query now runs in ~4 s instead of 30 s, because the data fits in memory. Note that the word order is preserved, but counting certain expressions becomes slightly problematic ('porto de mar' is now 'porto mar', and counting 'porto mar' also counts occurrences of '(...)Porto. Mar(...)', since we removed the periods and collapsed consecutive spaces into a single one). Even so, the dataset is perfectly usable for identifying in which sessions a given topic was discussed.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pylab
import matplotlib
import pandas
import numpy
dateparse = lambda x: pandas.datetime.strptime(x, '%Y-%m-%d')
sessoes = pandas.read_csv('sessoes_democratica_org.csv',index_col=0,parse_dates=['data'], date_parser=dateparse)
del sessoes['tamanho']
total0 = numpy.sum(sessoes['sessao'].map(len))
print(total0)
def substitui_palavras_comuns(texto):
t = texto.replace('.',' ').replace('\n',' ').replace(',',' ').replace(')',' ').replace('(',' ').replace('!',' ').replace('?',' ').replace(':',' ').replace(';',' ')
t = t.replace(' de ',' ').replace(' que ',' ').replace(' do ',' ').replace(' da ',' ').replace(' sr ',' ').replace(' não ',' ').replace(' em ',' ').replace(' se ','').replace(' para',' ').replace(' os ',' ').replace(' dos ',' ').replace(' uma ',' ').replace(' um ',' ').replace(' as ',' ').replace(' dos ',' ').replace(' no ',' ').replace(' dos ',' ').replace('presidente','').replace(' na ',' ').replace(' por ','').replace('presidente','').replace(' com ',' ').replace(' ao ',' ').replace('deputado','').replace(' das ',' ').replace(' como ','').replace('governo','').replace(' ou ','').replace(' mais ',' ').replace(' assembleia ','').replace(' ser ',' ').replace(' tem ',' ')
t = t.replace(' srs ','').replace(' pelo ','').replace(' mas ','').replace(' foi ','').replace('srs.','').replace('palavra','').replace(' que ','').replace(' sua ','').replace(' artigo ','').replace(' nos ','').replace(' eu ','').replace('muito','').replace('sobre ','').replace('também','').replace('proposta','').replace(' aos ',' ').replace(' esta ',' ').replace(' já ',' ')
t = t.replace(' vamos ',' ').replace(' nesta ',' ').replace(' lhe ',' ').replace(' meu ',' ').replace(' eu ',' ').replace(' vai ',' ')
t = t.replace(' isso ',' ').replace(' dia ',' ').replace(' discussão ',' ').replace(' dizer ',' ').replace(' seus ',' ').replace(' apenas ',' ').replace(' agora ',' ')
t = t.replace(' ª ',' ').replace(' foram ',' ').replace(' pois ',' ').replace(' nem ',' ').replace(' suas ',' ').replace(' deste ',' ').replace(' quer ',' ').replace(' desta ',' ').replace(' qual ',' ')
t = t.replace(' o ',' ').replace(' a ',' ').replace(' e ',' ').replace(' é ',' ').replace(' à ',' ').replace(' s ',' ')
t = t.replace(' - ','').replace(' º ',' ').replace(' n ',' ').replace(' . ',' ').replace(' são ',' ').replace(' está ',' ').replace(' seu ',' ').replace(' há ',' ').replace('orador',' ').replace(' este ',' ').replace(' pela ',' ').replace(' bem ',' ').replace(' nós ',' ').replace('porque','').replace('aqui','').replace(' às ',' ').replace('ainda','').replace('todos','').replace(' só ',' ').replace('fazer',' ').replace(' sem ',' ').replace(' qualquer ',' ').replace(' quanto ',' ').replace(' pode ',' ').replace(' nosso ',' ').replace(' neste ',' ').replace(' ter ',' ').replace(' mesmo ',' ').replace(' essa ',' ').replace(' até ',' ').replace(' me ',' ').replace(' nossa ',' ').replace(' entre ',' ').replace(' nas ',' ').replace(' esse ',' ').replace(' será ',' ').replace(' isto ',' ').replace(' quando ',' ').replace(' seja ',' ').replace(' assim ',' ').replace(' quanto ',' ').replace(' pode ',' ').replace(' é ',' ')
t = t.replace(' ',' ').replace(' ',' ').replace(' ',' ')
return t
sessoes['sessao'] = sessoes['sessao'].map(substitui_palavras_comuns)
import re
from collections import Counter
def agrupa_palavras(texto):
texto = texto.lower() #processa tudo em minusculas
palavras = re.split(';|,|\n| |\(|\)|\?|\!|:',texto) # separa as palavras
palavras = [x.title() for x in palavras if len(x)>0] # organiza e remove as palavras com menos de 5 caracteres
return palavras
def conta_palavras(sessoes):
lista = sessoes['sessao'].map(agrupa_palavras) # cria uma lista de 'lista de palavras', um elemento por sessao
palavras = []
for l in lista:
palavras.extend(l) # junta as 'listas de palavras' todas na mesma lista
return Counter(palavras).most_common(100) # conta as palavras mais frequentes
x = conta_palavras(sessoes[1:100])
for (y,z) in x:
print(str(str(z)+' x '+y))
total = numpy.sum(sessoes['sessao'].map(len))
print(str(total/total0*100)+' %')
print(total)
sessoes.to_csv('sessoes_democratica_clipped.csv')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1st Step
Step2: Question 2
Step3: Question 3
Step4: Question 4
Step5: Question 5
Step6: Question 6
Step7: Question 7
Step8: Question 8
Step9: 2nd Step
|
<ASSISTANT_TASK:>
Python Code:
# Import libraries
import tensorflow as tf
import numpy as np
import time
import collections
import os
# Import MNIST data with TensorFlow
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets(os.path.join('datasets', 'mnist'), one_hot=True) # load data in local folder
train_data = mnist.train.images.astype(np.float32)
train_labels = mnist.train.labels
test_data = mnist.test.images.astype(np.float32)
test_labels = mnist.test.labels
print(train_data.shape)
print(train_labels.shape)
print(test_data.shape)
print(test_labels.shape)
# computational graph inputs
batch_size = 100
d = train_data.shape[1]
nc = 10
x = tf.placeholder(tf.float32,[batch_size,d]); print('x=',x,x.get_shape())
# NOTE: the original notebook left several lines as "YOUR CODE HERE" placeholders; the
# completions below are one reasonable choice (a softmax classifier with L2 regularization).
y_label = tf.placeholder(tf.float32,[batch_size,nc]); print('y_label=',y_label,y_label.get_shape())
# computational graph variables
initial = tf.truncated_normal([d,nc], stddev=0.1); W = tf.Variable(initial); print('W=',W.get_shape())
b = tf.Variable(tf.zeros([nc])); print('b=',b.get_shape())
# Construct CG / output value
y = tf.matmul(x, W); print('y1=',y,y.get_shape())
y += b; print('y2=',y,y.get_shape())
y = tf.nn.softmax(y); print('y3=',y,y.get_shape())
# Loss
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_label * tf.log(y), 1))
reg_loss = tf.nn.l2_loss(W)
reg_par = 1e-3
total_loss = cross_entropy + reg_par*reg_loss
# Update CG variables / backward pass
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(total_loss) # learning rate 0.5 is an arbitrary but common choice here
# Accuracy
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_label,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Create test set
idx = np.random.permutation(test_data.shape[0]) # rand permutation
idx = idx[:batch_size]
test_x, test_y = test_data[idx,:], test_labels[idx]
n = train_data.shape[0]
indices = collections.deque()
# Running Computational Graph
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for i in range(50):
# Batch extraction
if len(indices) < batch_size:
indices.extend(np.random.permutation(n)) # rand permutation
idx = [indices.popleft() for i in range(batch_size)] # extract n_batch data
batch_x, batch_y = train_data[idx,:], train_labels[idx]
# Run CG for variable training
_,acc_train,total_loss_o = sess.run([train_step,accuracy,total_loss], feed_dict={x: batch_x, y_label: batch_y})
print('\nIteration i=',i,', train accuracy=',acc_train,', loss=',total_loss_o)
# Run CG for testset
acc_test = sess.run(accuracy, feed_dict={x: test_x, y_label: test_y})
print('test accuracy=',acc_test)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Create a dataframe with Pandas
Step2: 2. GeoPandas
Step3: 3 Cities?
Step4: Let's compare some data?
Step5: Crime?
Step6: ---
|
<ASSISTANT_TASK:>
Python Code:
import json, shapely, fiona, os
import seaborn as sns
import pandas as pd
import geopandas as gpd
import networkx as nx
import matplotlib.pyplot as plt
%matplotlib inline
# load the sample dataset (iris)
iris = sns.load_dataset('iris')
#look at it:
print("Rows: ",len(iris))
iris.head()
iris.species.value_counts()
#Now let's plot it:
ax = iris[iris.species=='setosa'].plot(kind='scatter', x='sepal_length', y='petal_length', color='steelblue', figsize=(10,6))
iris[iris.species=='virginica'].plot(kind='scatter', x='sepal_length', y='petal_length', ax=ax, color='red')
iris[iris.species=='versicolor'].plot(kind='scatter', x='sepal_length', y='petal_length', ax=ax, color='orange')
ax.legend(['Setosa','Virginica','Versicolor'])
# First we're going to read in the default data
path = gpd.datasets.get_path('naturalearth_lowres')
df = gpd.read_file(path)
df.head()
print("Average Population per country: {0}".format(df.pop_est.mean().round()))
print("Median Population per country: {0}".format(df.pop_est.median().round()))
#Now let's plot it
df.plot(figsize=(10,8))
# Get fancier with other libraries
import geoplot
fig, axes = plt.subplots(1,2)
fig.set_size_inches(15,8)
geoplot.cartogram(df[df['continent'] == 'Africa'],
scale='pop_est', limits=(0.2, 1), figsize=(7, 8), ax=axes[0])
geoplot.cartogram(df[df['continent'] == 'South America'],
scale='pop_est', limits=(0.2, 1), figsize=(7, 8), ax=axes[1])
df = gpd.read_file('data/HydrologyLine.shp')
df.columns = ['layer','length','geometry']
df.head()
df.plot(figsize=(15,8))
df.crs
mercator = df.head(10).to_crs({'init': 'epsg:4326'})
osmp_lands = gpd.read_file('data/OSMPLands.shp')
osmp_lands.head(3)
ax = osmp_lands[osmp_lands.TYPE=='Conservation Easement'].plot(figsize=(15,8))
osmp_lands[osmp_lands.TYPE!='Conservation Easement'].plot(ax=ax,color='green')
# geoplot.choropleth(osmp_lands, hue='PropertyID', cmap='Greens', figsize=(8, 4))
crime = gpd.read_file('data/Target_Crime_Locations.shp')
crime['date'] = crime.REPORTDATE.apply(lambda x: pd.Timestamp(x).date())
crime['year'] = crime.date.apply(lambda x: x.year)
crime.head()
ax = crime.groupby('date').aggregate({"REPORTNUM":"count"}).cumsum().plot()
ax.set_title("Amount of crime reports filed over time");
# Looks like crime is not going up in Boulder?
crime.OFFENSE.value_counts()
crime[crime.OFFENSE=='Trespassing'].groupby('date').aggregate('count')['OBJECTID'].cumsum().plot(figsize=(15,8))
conservation_easement = osmp_lands[osmp_lands.TYPE=='Conservation Easement'].to_crs(crime.crs)
ax = conservation_easement.plot(figsize=(15,8))
crime[(crime.OFFENSE=='Trespassing') & (crime.year==2018)].plot(ax=ax, color='red', linewidth=0.01)
crimes_on_osmplands = gpd.sjoin(osmp_lands.to_crs(crime.crs),crime) #Spatial join Points to polygons
crimes_on_osmplands.head(2)
# What crimes happpen on OSMP lands?
crimes_on_osmplands.OFFENSE.value_counts()
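# Extra sketch, not part of the original walkthrough: break the joined reports down by
# OSMP land TYPE to see which land categories attract the most reports.
crimes_on_osmplands.groupby('TYPE')['OFFENSE'].count().sort_values(ascending=False)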
prairie = gpd.read_file('data/prairies.shp')
print("Found {0} colonies".format(len(prairie)))
prairie.head()
prairie.head().buffer(0.001).plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Please note
Step2: Import the re module to use regular expressions.
Step3: Read the genetic code file into a list of lines
Step4: Build the genetic code dictionary
Step5: Build the dictionary with the dict() function.
Step6: Read the EMBL file into a single string
Step7: Extract the unique identifier and the organism for the entry.
Step8: Extract the organism.
Step9: Extract the nucleotide sequence
Step10: From the list just built, extract the list containing the length-10 pieces of the nucleotide sequence, so that the i-th element is a nested list containing the six sequence pieces belonging to the i-th record of the list just built.
Step11: Concatenate the pieces of the list into a single string.
Step12: Extract the protein sequence
Step13: Get the list of the remaining protein records
Step14: Prepend the prefix found earlier to the list and concatenate all the elements of the list to obtain the protein sequence as a single string.
Step15: Coding sequence (CDS) in FASTA format
Step16: Get the CDS sequence.
Step17: Produce the CDS sequence in FASTA format with the following header
Step18: Frequency distribution of the codons of the CDS
Step19: Alternatively (by Andrea Vignati), use the range() function to create the list of start positions 0, 3, 6, 9, 12, ... of the codons and use a list comprehension that extracts the codons based on those positions.
Step20: Build the list of (codon, frequency) tuples listed in decreasing order of frequency.
Step21: Frequency distribution of the amino acids of the protein read from the EMBL file
Step22: Build the list of (amino acid, frequency) tuples listed in decreasing order of frequency.
Step23: Validation of the protein sequence read from the EMBL file
Step24: Alternative 2
Step25: Check that the two alternatives lead to the same translation.
Step26: Finally, check that the protein read from the EMBL file is equal to the one obtained by translating the CDS.
|
<ASSISTANT_TASK:>
Python Code:
def format_fasta(header, sequence):
    return header + '\n' + '\n'.join(re.findall('\w{1,80}', sequence))
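# A minimal sanity check of the helper above (made-up header and sequence, just to show
# the 80-characters-per-line wrapping); `re` is also imported again further down.
import re
print(format_fasta('>demo header', 'acgt' * 30))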
genetic_code_name = './genetic-code.txt'
input_file_name = './M10051.txt'
import re
with open(genetic_code_name, 'r') as input_file:
genetic_code_rows = input_file.readlines()
genetic_code_rows
splitted_genetic_code = [row.rstrip().split(',') for row in genetic_code_rows]
tuple_list = [(codon, ammino_list[0]) for ammino_list in splitted_genetic_code for codon in ammino_list[1:]]
tuple_list
genetic_code_dict = dict(tuple_list)
genetic_code_dict
with open(input_file_name, 'r') as input_file:
embl_str = input_file.read()
print(embl_str)
s = re.search('^ID\s+(\w+)', embl_str, re.M)
identifier = s.group(1)
identifier
s = re.search('^ID.+\s+(\w+);', embl_str, re.M)
organism = s.group(1)
organism
seq_row_list = re.findall('^(\s+\D+)\d+', embl_str, re.M)
#seq_row_list
seq_chunk_list = [re.findall('\w+', row) for row in seq_row_list]
#seq_chunk_list
nucleotide_sequence = ''.join([''.join(six_chunk_list) for six_chunk_list in seq_chunk_list])
nucleotide_sequence
s = re.search('^FT\s+\/translation=\"(\w+)$', embl_str, re.M)
protein_prefix = s.group(1)
#protein_prefix
protein_list = re.findall('^FT\s+([A-Z]+)"?$', embl_str, re.M)
#protein_list
protein_list[:0] = [protein_prefix]
protein_sequence = ''.join([''.join(chunk) for chunk in protein_list])
protein_sequence
s = re.search('^FT\s+CDS\s+(\d+)..(\d+)$', embl_str, re.M)
cds_start = s.group(1)
cds_end = s.group(2)
cds_start
cds_end
cds_sequence = nucleotide_sequence[int(cds_start)-1:int(cds_end)]
cds_sequence
header = '>' + identifier + '-' + organism + '; len = ' + str(len(cds_sequence)) + ';'
exist_codon_start = 'no'
exist_codon_end = 'no'
if cds_sequence[:3] == 'atg':
exist_codon_start = 'yes'
if cds_sequence[-3:] in ['taa', 'tga', 'tag']:
exist_codon_end = 'yes'
header = header + ' start = ' + exist_codon_start + ';'
header = header + ' end = ' + exist_codon_end
cds_sequence_fasta = format_fasta(header, cds_sequence)
print(cds_sequence_fasta)
codon_list = re.findall('\w{1,3}', cds_sequence)
codon_list = [cds_sequence[i:i+3] for i in list(range(0, len(cds_sequence), 3))]
codon_list
from collections import Counter
codon_frequency = Counter(codon_list).most_common()
codon_frequency
ammino_list = [ammino for ammino in protein_sequence]
ammino_list = list(protein_sequence)
ammino_list = re.findall('\w', protein_sequence)
ammino_list
ammino_frequency = Counter(ammino_list).most_common()
ammino_frequency
cds_translation1 = ''.join([genetic_code_dict[codon] for codon in codon_list[:-1]])
cds_translation1
cds_translation2 = re.sub('\w{3}', lambda x : genetic_code_dict[x.group()], cds_sequence[:-3])
cds_translation2
cds_translation1 == cds_translation2
cds_translation1 == protein_sequence
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Here we will quickly demonstrate that slice sampling is able to cope with very high-dimensional problems without the use of gradients. Our target will in this case be a 200-D uncorrelated multivariate normal distribution with an identical prior.
Step5: We will use "Hamiltonian" Slice Sampling ('hslice') with our gradients to sample in high dimensions.
Step6: Now let's see how our sampling went.
Step7: That looks good! Obviously we can't plot the full 200x200 plot, but 5x5 subplots should do.
|
<ASSISTANT_TASK:>
Python Code:
# system functions that are always useful to have
import time, sys, os
import pickle
# basic numeric setup
import numpy as np
from numpy import linalg
from scipy import stats
# inline plotting
%matplotlib inline
# plotting
import matplotlib
from matplotlib import pyplot as plt
# seed the random number generator
rstate = np.random.default_rng(520)
# re-defining plotting defaults
from matplotlib import rcParams
rcParams.update({'xtick.major.pad': '7.0'})
rcParams.update({'xtick.major.size': '7.5'})
rcParams.update({'xtick.major.width': '1.5'})
rcParams.update({'xtick.minor.pad': '7.0'})
rcParams.update({'xtick.minor.size': '3.5'})
rcParams.update({'xtick.minor.width': '1.0'})
rcParams.update({'ytick.major.pad': '7.0'})
rcParams.update({'ytick.major.size': '7.5'})
rcParams.update({'ytick.major.width': '1.5'})
rcParams.update({'ytick.minor.pad': '7.0'})
rcParams.update({'ytick.minor.size': '3.5'})
rcParams.update({'ytick.minor.width': '1.0'})
rcParams.update({'font.size': 30})
import dynesty
ndim = 200 # number of dimensions
C = np.identity(ndim) # set covariance to identity matrix
Cinv = linalg.inv(C) # precision matrix
lnorm = -0.5 * (np.log(2 * np.pi) * ndim + np.log(linalg.det(C))) # ln(normalization)
# 200-D iid standard normal log-likelihood (dimensionality set by ndim above)
def loglikelihood(x):
    """Multivariate normal log-likelihood."""
    return -0.5 * np.dot(x, np.dot(Cinv, x)) + lnorm
# gradient of log-likelihood *with respect to u*
# i.e. d(lnl(v))/dv * dv/du where
# dv/du = 1. / prior(v)
def gradient(x):
    """Gradient of multivariate normal log-likelihood."""
    return -np.dot(Cinv, x) / stats.norm.pdf(x)
# prior transform (iid standard normal prior)
def prior_transform(u):
    """Transforms our unit cube samples `u` to a standard normal prior."""
    return stats.norm.ppf(u)
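# Optional self-check, not in the original example: compare the analytic gradient
# (which is d(lnL)/du via the chain rule) against a one-sided finite difference in the
# first unit-cube coordinate. The two printed values should agree to several digits.
u0 = np.full(ndim, 0.6)
v0 = prior_transform(u0)
eps = 1e-6
u1 = u0.copy()
u1[0] += eps
fd = (loglikelihood(prior_transform(u1)) - loglikelihood(v0)) / eps
print(fd, gradient(v0)[0])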
# ln(evidence)
lnz_truth = lnorm - 0.5 * ndim * np.log(2)
print(lnz_truth)
# hamiltonian slice sampling ('hslice')
sampler = dynesty.NestedSampler(loglikelihood, prior_transform, ndim, nlive=50,
bound='none', sample='hslice',
slices=10, gradient=gradient, rstate=rstate)
sampler.run_nested(dlogz=0.01)
res = sampler.results
from dynesty import plotting as dyplot
# evidence check
fig, axes = dyplot.runplot(res, color='red', lnz_truth=lnz_truth, truth_color='black', logplot=True)
fig.tight_layout()
# posterior check
from dynesty.results import Results
dims = [-1, -2, -3, -4, -5]
fig, ax = plt.subplots(5, 5, figsize=(25, 25))
samps, samps_t = res.samples, res.samples[:,dims]
dres = res.asdict()
dres['samples'] = samps_t
res = Results(dres)
fg, ax = dyplot.cornerplot(res, color='red', truths=np.zeros(ndim), truth_color='black',
span=[(-3.5, 3.5) for i in range(len(dims))],
show_titles=True, title_kwargs={'y': 1.05},
quantiles=None, fig=(fig, ax))
dres = res.asdict()
dres['samples'] = samps
res = Results(dres)
print(1.96 / np.sqrt(2))
# let's confirm we actually got the entire distribution
from dynesty import utils
weights = np.exp(res.logwt - res.logz[-1])
mu, cov = utils.mean_and_cov(samps, weights)
# plot residuals
from scipy.stats.kde import gaussian_kde
mu_kde = gaussian_kde(mu)
xgrid = np.linspace(-0.5, 0.5, 1000)
mu_pdf = mu_kde.pdf(xgrid)
cov_kde = gaussian_kde((cov - C).flatten())
xgrid2 = np.linspace(-0.3, 0.3, 1000)
cov_pdf = cov_kde.pdf(xgrid2)
plt.figure(figsize=(16, 6))
plt.subplot(1, 2, 1)
plt.plot(xgrid, mu_pdf, lw=3, color='black')
plt.xlabel('Mean Offset')
plt.ylabel('PDF')
plt.subplot(1, 2, 2)
plt.plot(xgrid2, cov_pdf, lw=3, color='red')
plt.xlabel('Covariance Offset')
plt.ylabel('PDF')
# print values
print('Means (0.):', np.mean(mu), '+/-', np.std(mu))
print('Variance (0.5):', np.mean(np.diag(cov)), '+/-', np.std(np.diag(cov)))
cov_up = np.triu(cov, k=1).flatten()
cov_low = np.tril(cov,k=-1).flatten()
cov_offdiag = np.append(cov_up[abs(cov_up) != 0.], cov_low[cov_low != 0.])
print('Covariance (0.):', np.mean(cov_offdiag), '+/-', np.std(cov_offdiag))
plt.tight_layout()
# plot individual values
plt.figure(figsize=(20,6))
plt.subplot(1, 3, 1)
plt.plot(mu, 'k.')
plt.ylabel(r'$\Delta$ Mean')
plt.xlabel('Dimension')
plt.ylim([-np.max(np.abs(mu)) - 0.05,
np.max(np.abs(mu)) + 0.05])
plt.tight_layout()
plt.subplot(1, 3, 2)
dcov = np.diag(cov) - 0.5
plt.plot(dcov, 'r.')
plt.ylabel(r' $\Delta$ Variance')
plt.xlabel('Dimension')
plt.ylim([-np.max(np.abs(dcov)) - 0.02,
np.max(np.abs(dcov)) + 0.02])
plt.tight_layout()
plt.subplot(1, 3, 3)
dcovlow = cov_low[cov_low != 0.]
dcovup = cov_up[cov_up != 0.]
dcovoff = np.append(dcovlow, dcovup)
plt.plot(dcovlow, 'b.', ms=1, alpha=0.3)
plt.plot(dcovup, 'b.', ms=1, alpha=0.3)
plt.ylabel(r' $\Delta$ Covariance')
plt.xlabel('Cross-Term')
plt.ylim([-np.max(np.abs(dcovoff)) - 0.02,
np.max(np.abs(dcovoff)) + 0.02])
plt.tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: scikit-learn Training on AI Platform
Step2: The data
Step3: Part 2
Step4: Part 3
Step5: Submit the training job.
Step6: [Optional] StackDriver Logging
|
<ASSISTANT_TASK:>
Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
%env PROJECT_ID <PROJECT_ID>
%env BUCKET_NAME <BUCKET_NAME>
%env REGION us-central1
%env TRAINER_PACKAGE_PATH ./census_training
%env MAIN_TRAINER_MODULE census_training.train
%env JOB_DIR gs://<BUCKET_NAME>/scikit_learn_job_dir
%env RUNTIME_VERSION 1.9
%env PYTHON_VERSION 3.5
! mkdir census_training
%%writefile ./census_training/train.py
# [START setup]
import datetime
import pandas as pd
from google.cloud import storage
from sklearn.ensemble import RandomForestClassifier
from sklearn.externals import joblib
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import FeatureUnion
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import LabelBinarizer
# TODO: REPLACE '<BUCKET_NAME>' with your GCS BUCKET_NAME
BUCKET_NAME = '<BUCKET_NAME>'
# [END setup]
# ---------------------------------------
# 1. Add code to download the data from GCS (in this case, using the publicly hosted data).
# AI Platform will then be able to use the data when training your model.
# ---------------------------------------
# [START download-data]
# Public bucket holding the census data
bucket = storage.Client().bucket('cloud-samples-data')
# Path to the data inside the public bucket
blob = bucket.blob('ml-engine/sklearn/census_data/adult.data')
# Download the data
blob.download_to_filename('adult.data')
# [END download-data]
# ---------------------------------------
# This is where your model code would go. Below is an example model using the census dataset.
# ---------------------------------------
# [START define-and-load-data]
# Define the format of your input data including unused columns (These are the columns from the census data files)
COLUMNS = (
'age',
'workclass',
'fnlwgt',
'education',
'education-num',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'capital-gain',
'capital-loss',
'hours-per-week',
'native-country',
'income-level'
)
# Categorical columns are columns that need to be turned into a numerical value to be used by scikit-learn
CATEGORICAL_COLUMNS = (
'workclass',
'education',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'native-country'
)
# Load the training census dataset
with open('./adult.data', 'r') as train_data:
raw_training_data = pd.read_csv(train_data, header=None, names=COLUMNS)
# Remove the column we are trying to predict ('income-level') from our features list
# Convert the Dataframe to a lists of lists
train_features = raw_training_data.drop('income-level', axis=1).values.tolist()
# Create our training labels list, convert the Dataframe to a lists of lists
train_labels = (raw_training_data['income-level'] == ' >50K').values.tolist()
# [END define-and-load-data]
# [START categorical-feature-conversion]
# Since the census data set has categorical features, we need to convert
# them to numerical values. We'll use a list of pipelines to convert each
# categorical column and then use FeatureUnion to combine them before calling
# the RandomForestClassifier.
categorical_pipelines = []
# Each categorical column needs to be extracted individually and converted to a numerical value.
# To do this, each categorical column will use a pipeline that extracts one feature column via
# SelectKBest(k=1) and a LabelBinarizer() to convert the categorical value to a numerical one.
# A scores array (created below) will select and extract the feature column. The scores array is
# created by iterating over the COLUMNS and checking if it is a CATEGORICAL_COLUMN.
for i, col in enumerate(COLUMNS[:-1]):
if col in CATEGORICAL_COLUMNS:
# Create a scores array to get the individual categorical column.
# Example:
# data = [39, 'State-gov', 77516, 'Bachelors', 13, 'Never-married', 'Adm-clerical',
# 'Not-in-family', 'White', 'Male', 2174, 0, 40, 'United-States']
# scores = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
#
# Returns: [['State-gov']]
# Build the scores array
scores = [0] * len(COLUMNS[:-1])
# This column is the categorical column we want to extract.
scores[i] = 1
skb = SelectKBest(k=1)
skb.scores_ = scores
# Convert the categorical column to a numerical value
lbn = LabelBinarizer()
r = skb.transform(train_features)
lbn.fit(r)
# Create the pipeline to extract the categorical feature
categorical_pipelines.append(
('categorical-{}'.format(i), Pipeline([
('SKB-{}'.format(i), skb),
('LBN-{}'.format(i), lbn)])))
# [END categorical-feature-conversion]
# [START create-pipeline]
# Create pipeline to extract the numerical features
skb = SelectKBest(k=6)
# From COLUMNS use the features that are numerical
skb.scores_ = [1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0]
categorical_pipelines.append(('numerical', skb))
# Combine all the features using FeatureUnion
preprocess = FeatureUnion(categorical_pipelines)
# Create the classifier
classifier = RandomForestClassifier()
# Transform the features and fit them to the classifier
classifier.fit(preprocess.transform(train_features), train_labels)
# Create the overall model as a single pipeline
pipeline = Pipeline([
('union', preprocess),
('classifier', classifier)
])
# [END create-pipeline]
# ---------------------------------------
# 2. Export and save the model to GCS
# ---------------------------------------
# [START export-to-gcs]
# Export the model to a file
model = 'model.joblib'
joblib.dump(pipeline, model)
# Upload the model to GCS
bucket = storage.Client().bucket(BUCKET_NAME)
blob = bucket.blob('{}/{}'.format(
datetime.datetime.now().strftime('census_%Y%m%d_%H%M%S'),
model))
blob.upload_from_filename(model)
# [END export-to-gcs]
%%writefile ./census_training/__init__.py
# Note that __init__.py can be an empty file.
! gcloud config set project $PROJECT_ID
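# (Optional, shown for illustration only) the trainer package can be smoke-tested locally
# before submitting, using gcloud's local runner; flags mirror the submit command below.
# ! gcloud ml-engine local train --package-path $TRAINER_PACKAGE_PATH --module-name $MAIN_TRAINER_MODULE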
! gcloud ml-engine jobs submit training census_training_$(date +"%Y%m%d_%H%M%S") \
--job-dir $JOB_DIR \
--package-path $TRAINER_PACKAGE_PATH \
--module-name $MAIN_TRAINER_MODULE \
--region $REGION \
--runtime-version=$RUNTIME_VERSION \
--python-version=$PYTHON_VERSION \
--scale-tier BASIC
! gsutil ls gs://$BUCKET_NAME/census_*
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We need to specify an encoding because the data set has some characters that aren't in Python's default utf-8 encoding. You can read more about character encodings on developer Joel Spolsky's blog.
Step2: Some columns are currently string types, because the main values they contain are Yes and No. We can make the data a bit easier to analyze down the road by converting each column to a Boolean having only the values True, False, and NaN.
Step3: Change column names and boolean values
Step4: Now that we've cleaned up the ranking columns, we can find the highest-ranked movie more quickly
Step5: The 5th movie (Episode V The Empire Strikes Back) has the best ranking (in this survey 1 = best, 6 = worst), while "Episode III Revenge of the Sith" has the worst ranking.
Step6: We can figure out how many people have seen each movie just by taking the sum of each column. The earlier movies are more popular - this corresponds to the rankings above (the earlier movies also have better rankings).
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
star_wars = pd.read_csv('star_wars.csv', encoding='ISO-8859-1')
star_wars.head(10)
star_wars.columns
star_wars = star_wars[pd.notnull(star_wars['RespondentID'])]
star_wars.head()
bool_type = {
'Yes': True,
'No': False
}
star_wars['Have you seen any of the 6 films in the Star Wars franchise?'] = star_wars['Have you seen any of the 6 films in the Star Wars franchise?'].map(bool_type)
star_wars['Do you consider yourself to be a fan of the Star Wars film franchise?'] = star_wars['Do you consider yourself to be a fan of the Star Wars film franchise?'].map(bool_type)
star_wars['Do you consider yourself to be a fan of the Star Wars film franchise?'].head()
import numpy as np
bool_type1 = {
"Star Wars: Episode I The Phantom Menace": True,
np.nan: False,
"Star Wars: Episode II Attack of the Clones": True,
"Star Wars: Episode III Revenge of the Sith": True,
"Star Wars: Episode IV A New Hope": True,
"Star Wars: Episode V The Empire Strikes Back": True,
"Star Wars: Episode VI Return of the Jedi": True
}
for col in star_wars.columns[3:9]:
star_wars[col] = star_wars[col].map(bool_type1)
star_wars = star_wars.rename(columns={
'Star Wars: Episode I The Phantom Menace': "seen_1",
'Unnamed: 4': 'seen_2',
'Unnamed: 5': 'seen_3',
'Unnamed: 6': 'seen_4',
'Unnamed: 7': 'seen_5',
'Unnamed: 8': 'seen_6'
})
star_wars.head()
star_wars[star_wars.columns[9:15]] = star_wars[star_wars.columns[9:15]].astype(float)
star_wars.columns[9:15]
star_wars = star_wars.rename(columns={
'Please rank the Star Wars films in order of preference with 1 being your favorite film in the franchise and 6 being your least favorite film.': "ranking_1",
'Unnamed: 10': 'ranking_2',
'Unnamed: 11': 'ranking_3',
'Unnamed: 12': 'ranking_4',
'Unnamed: 13': 'ranking_5',
'Unnamed: 14': 'ranking_6'
})
star_wars.columns[9:15]
%matplotlib inline
import matplotlib.pyplot as plt
plt.bar(range(6), star_wars[star_wars.columns[9:15]].mean())
plt.bar(range(6), star_wars[star_wars.columns[3:9]].sum())
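# Optional touch-up, not in the original analysis: the same "seen" counts with episode
# labels on the x-axis so the bars are easier to read.
plt.figure()
plt.bar(range(6), star_wars[star_wars.columns[3:9]].sum(),
        tick_label=['Ep. I', 'Ep. II', 'Ep. III', 'Ep. IV', 'Ep. V', 'Ep. VI'])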
males = star_wars[star_wars["Gender"] == "Male"]
females = star_wars[star_wars["Gender"] == "Female"]
## Redo the two previous analyses (find the most viewed movie and the highest-ranked movie) separately for each group
## find highest-ranked movie (lower is better)
plt.bar(range(6), females[females.columns[9:15]].mean())
plt.show()
plt.bar(range(6), males[males.columns[9:15]].mean())
plt.show()
## find most viewed movie (higher is better)
plt.bar(range(6), females[females.columns[3:9]].mean())
plt.show()
plt.bar(range(6), males[males.columns[3:9]].mean())
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preliminary Analysis
Step2: Preliminary Report
Step3: Report statistical significance for α = .01
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import bokeh.plotting as bkp
from mpl_toolkits.axes_grid1 import make_axes_locatable
# read in readmissions data provided
hospital_read_df = pd.read_csv('data/cms_hospital_readmissions.csv')
# deal with missing and inconvenient portions of data
clean_hospital_read_df = hospital_read_df[hospital_read_df['Number of Discharges'] != 'Not Available']
clean_hospital_read_df.loc[:, 'Number of Discharges'] = clean_hospital_read_df['Number of Discharges'].astype(int)
clean_hospital_read_df = clean_hospital_read_df.sort_values('Number of Discharges')
# generate a scatterplot for number of discharges vs. excess rate of readmissions
# lists work better with matplotlib scatterplot function
x = [a for a in clean_hospital_read_df['Number of Discharges'][81:-3]]
y = list(clean_hospital_read_df['Excess Readmission Ratio'][81:-3])
fig, ax = plt.subplots(figsize=(8,5))
ax.scatter(x, y,alpha=0.2)
ax.fill_between([0,350], 1.15, 2, facecolor='red', alpha = .15, interpolate=True)
ax.fill_between([800,2500], .5, .95, facecolor='green', alpha = .15, interpolate=True)
ax.set_xlim([0, max(x)])
ax.set_xlabel('Number of discharges', fontsize=12)
ax.set_ylabel('Excess rate of readmissions', fontsize=12)
ax.set_title('Scatterplot of number of discharges vs. excess rate of readmissions', fontsize=14)
ax.grid(True)
fig.tight_layout()
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import bokeh.plotting as bkp
import scipy.stats as st
from mpl_toolkits.axes_grid1 import make_axes_locatable
# read in readmissions data provided
hospital_read_df = pd.read_csv('data/cms_hospital_readmissions.csv')
# Set-up the hypothesis test.
# Get the two groups of hospitals, one with < 100 discharges and the other with > 1000 discharges.
# Get the hospitals with small discharges first.
# First statement deals with missing data.
clean_hospital_read_df = hospital_read_df[(hospital_read_df['Number of Discharges'] != 'Not Available')]
hosp_with_small_discharges = clean_hospital_read_df[clean_hospital_read_df['Number of Discharges'].astype(int) < 100]
hosp_with_small_discharges = hosp_with_small_discharges[hosp_with_small_discharges['Number of Discharges'].astype(int) != 0]
hosp_with_small_discharges.sort_values(by = 'Number of Discharges', ascending = False)
# Now get the hospitals with relatively large discharges.
hosp_with_large_discharges = clean_hospital_read_df[clean_hospital_read_df['Number of Discharges'].astype(int) > 1000]
hosp_with_large_discharges = hosp_with_large_discharges[hosp_with_large_discharges['Number of Discharges'].astype(int) != 0]
hosp_with_large_discharges.sort_values(by = 'Number of Discharges', ascending = False)
# Now calculate the statistical significance and p-value
small_hospitals = hosp_with_small_discharges['Excess Readmission Ratio']
large_hospitals = hosp_with_large_discharges['Excess Readmission Ratio']
result = st.ttest_ind(small_hospitals,large_hospitals, equal_var=False)
print("Statistical significance is equal to : %6.4F, P-value is equal to: %5.14F" % (result[0],result[1]))
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import bokeh.plotting as bkp
from mpl_toolkits.axes_grid1 import make_axes_locatable
# read in readmissions data provided
hospital_read_df = pd.read_csv('data/cms_hospital_readmissions.csv')
# deal with missing and inconvenient portions of data
clean_hospital_read_df = hospital_read_df[hospital_read_df['Number of Discharges'] != 'Not Available']
clean_hospital_read_df = clean_hospital_read_df.sort_values('Number of Discharges')
# generate a scatterplot for number of discharges vs. excess rate of readmissions
# lists work better with matplotlib scatterplot function
x = [a for a in clean_hospital_read_df['Number of Discharges'][81:-3]]
y = list(clean_hospital_read_df['Excess Readmission Ratio'][81:-3])
fig, ax = plt.subplots(figsize=(8,5))
im = ax.hexbin(x, y,gridsize=20)
fig.colorbar(im, ax=ax)
ax.fill_between([0,350], 1.15, 2, facecolor='red', alpha = .15, interpolate=True)
ax.fill_between([800,2500], .5, .95, facecolor='green', alpha = .15, interpolate=True)
ax.set_xlabel('Number of discharges', fontsize=10)
ax.set_ylabel('Excess rate of readmissions', fontsize=10)
ax.set_title('Hexagon Bin Plot of number of discharges vs. excess rate of readmissions', fontsize=12, fontweight='bold')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Batch Normalization using tf.layers.batch_normalization
Step6: We'll use the following function to create convolutional layers in our network. They are very basic.
Step8: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
Step10: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Step12: TODO
Step13: TODO
Step15: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output, which scores 100 test images one at a time - that check only passes if batch normalization uses the population statistics during inference.
Step17: TODO
Step18: TODO
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False)
# DO NOT MODIFY THIS CELL
def fully_connected(prev_layer, num_units):
    """
    Create a fully connected layer with the given layer as input and the given number of neurons.

    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param num_units: int
        The size of the layer. That is, the number of units, nodes, or neurons.
    :returns Tensor
        A new fully connected layer
    """
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
# DO NOT MODIFY THIS CELL
def conv_layer(prev_layer, layer_depth):
    """
    Create a convolutional layer with the given layer as input.

    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param layer_depth: int
        We'll set the strides and number of feature maps based on the layer's depth in the network.
        This is *not* a good way to make a CNN, but it helps us create this example with very little code.
    :returns Tensor
        A new convolutional layer
    """
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)
return conv_layer
# DO NOT MODIFY THIS CELL
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
def fully_connected(prev_layer, num_units, is_training):
    """
    Create a fully connected layer with the given layer as input and the given number of neurons.

    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param num_units: int
        The size of the layer. That is, the number of units, nodes, or neurons.
    :param is_training: bool or Tensor
        Whether the network is currently training (controls batch normalization behavior)
    :returns Tensor
        A new fully connected layer
    """
layer = tf.layers.dense(prev_layer, num_units, use_bias = False, activation=None)
layer = tf.layers.batch_normalization(layer, training=is_training)
layer = tf.nn.relu(layer)
return layer
def conv_layer(prev_layer, layer_depth, is_training):
    """
    Create a convolutional layer with the given layer as input.

    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param layer_depth: int
        We'll set the strides and number of feature maps based on the layer's depth in the network.
        This is *not* a good way to make a CNN, but it helps us create this example with very little code.
    :param is_training: bool or Tensor
        Whether the network is currently training (controls batch normalization behavior)
    :returns Tensor
        A new convolutional layer
    """
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', use_bias=False, activation=None)
conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
is_training = tf.placeholder(tf.bool)
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training)
# Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
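    # Batch normalization registers moving-average update ops for the population mean and
    # variance in tf.GraphKeys.UPDATE_OPS; the control_dependencies wrapper below forces
    # those updates to run at every training step so that inference with is_training=False
    # has sensible statistics to use.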
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels, is_training:False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training:False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
is_training:False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training:False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
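# --- Aside: what batch normalization computes (an illustrative numpy sketch,
# --- added here for clarity; it is not part of the original notebook and the
# --- underscore-prefixed names are made up). For each feature in a mini-batch,
# --- the activations are normalized to zero mean and unit variance, then
# --- scaled and shifted by the learned gamma and beta.
import numpy as np
_batch = np.random.randn(4, 3) * 5.0 + 2.0    # fake activations: (batch_size, features)
_gamma, _beta, _eps = 1.0, 0.0, 1e-5          # learned scale/shift (left at their defaults here)
_mean, _var = _batch.mean(axis=0), _batch.var(axis=0)
_bn_out = _gamma * (_batch - _mean) / np.sqrt(_var + _eps) + _beta
print(_bn_out.mean(axis=0), _bn_out.var(axis=0))  # ~0 and ~1 per feature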
def fully_connected(prev_layer, num_units):
Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
def conv_layer(prev_layer, layer_depth):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
in_channels = prev_layer.get_shape().as_list()[3]
out_channels = layer_depth*4
weights = tf.Variable(
tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))
bias = tf.Variable(tf.zeros(out_channels))
conv_layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')
conv_layer = tf.nn.bias_add(conv_layer, bias)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: So, show me how to align two vector spaces for myself!
Step4: Now we load the French and Russian word vectors, and evaluate the similarity of "chat" and "кот"
Step5: "chat" and "кот" both mean "cat", so they should be highly similar; clearly the two word vector spaces are not yet aligned. To align them, we need a bilingual dictionary of French and Russian translation pairs. As it happens, this is a great opportunity to show you something truly amazing...
Step6: Let's align the French vectors to the Russian vectors, using only this "free" dictionary that we acquired without any bilingual expert knowledge.
Step7: Finally, we re-evaluate the similarity of "chat" and "кот"
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from fasttext import FastVector
# from https://stackoverflow.com/questions/21030391/how-to-normalize-array-numpy
def normalized(a, axis=-1, order=2):
Utility function to normalize the rows of a numpy array.
l2 = np.atleast_1d(np.linalg.norm(a, order, axis))
l2[l2==0] = 1
return a / np.expand_dims(l2, axis)
def make_training_matrices(source_dictionary, target_dictionary, bilingual_dictionary):
Source and target dictionaries are the FastVector objects of
source/target languages. bilingual_dictionary is a list of
translation pair tuples [(source_word, target_word), ...].
source_matrix = []
target_matrix = []
for (source, target) in bilingual_dictionary:
if source in source_dictionary and target in target_dictionary:
source_matrix.append(source_dictionary[source])
target_matrix.append(target_dictionary[target])
# return training matrices
return np.array(source_matrix), np.array(target_matrix)
def learn_transformation(source_matrix, target_matrix, normalize_vectors=True):
Source and target matrices are numpy arrays, shape
(dictionary_length, embedding_dimension). These contain paired
word vectors from the bilingual dictionary.
# optionally normalize the training vectors
if normalize_vectors:
source_matrix = normalized(source_matrix)
target_matrix = normalized(target_matrix)
# perform the SVD
product = np.matmul(source_matrix.transpose(), target_matrix)
U, s, V = np.linalg.svd(product)
# return orthogonal transformation which aligns source language to the target
return np.matmul(U, V)
fr_dictionary = FastVector(vector_file='wiki.fr.vec')
ru_dictionary = FastVector(vector_file='wiki.ru.vec')
fr_vector = fr_dictionary["chat"]
ru_vector = ru_dictionary["кот"]
print(FastVector.cosine_similarity(fr_vector, ru_vector))
ru_words = set(ru_dictionary.word2id.keys())
fr_words = set(fr_dictionary.word2id.keys())
overlap = list(ru_words & fr_words)
bilingual_dictionary = [(entry, entry) for entry in overlap]
# form the training matrices
source_matrix, target_matrix = make_training_matrices(
fr_dictionary, ru_dictionary, bilingual_dictionary)
# learn and apply the transformation
transform = learn_transformation(source_matrix, target_matrix)
fr_dictionary.apply_transform(transform)
fr_vector = fr_dictionary["chat"]
ru_vector = ru_dictionary["кот"]
print(FastVector.cosine_similarity(fr_vector, ru_vector))
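# Optional sanity check (a sketch added for illustration, not in the original
# text): the learned alignment comes from an SVD of the paired-vector product,
# so it should be an orthogonal matrix, i.e. transform @ transform.T is
# (numerically) the identity.
print(np.allclose(np.matmul(transform, transform.transpose()),
                  np.eye(transform.shape[0]), atol=1e-6))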
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Change Log
Step2: Login()
Step3: Get data
Step4: Data inspection (Login)
Step5: Helper function
Step6: Usage
Step7: Client function
|
<ASSISTANT_TASK:>
Python Code:
%run ../../code/version_check.py
%run ../../code/eoddata.py
from getpass import getpass
import requests as r
import xml.etree.cElementTree as etree
ws = 'http://ws.eoddata.com/data.asmx'
ns='http://ws.eoddata.com/Data'
session = r.Session()
username = getpass()
password = getpass()
call = 'Login'
url = '/'.join((ws, call))
payload = {'Username': username, 'Password': password}
response = session.get(url, params=payload, stream=True)
if response.status_code == 200:
root = etree.parse(response.raw).getroot()
token = root.get('Token')
token
dir(root)
for item in root.items():
print (item)
for key in root.keys():
print (key)
print(root.get('Message'))
print(root.get('Token'))
print(root.get('DataFormat'))
print(root.get('Header'))
print(root.get('Suffix'))
def Login(session, username, password):
call = 'Login'
url = '/'.join((ws, call))
payload = {'Username': username, 'Password': password}
response = session.get(url, params=payload, stream=True)
if response.status_code == 200:
root = etree.parse(response.raw).getroot()
return root.get('Token')
token = Login(session, username, password)
token
# pass in username and password
eoddata = Client(username, password)
token = eoddata.get_token()
eoddata.close_session()
print('token: {}'.format(token))
# initialise using secure credentials file
eoddata = Client()
token = eoddata.get_token()
eoddata.close_session()
print('token: {}'.format(token))
# no need to manually close the session when using a with block
with (Client()) as eoddata:
print('token: {}'.format(eoddata.get_token()))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To illustrate, we will use the Consumer Price Index for Apparel, which has a time-varying level and a strong seasonal component.
Step2: It is well known (e.g. Harvey and Jaeger [1993]) that the HP filter output can be generated by an unobserved components model given certain restrictions on the parameters.
Step3: The parameters of the unobserved components model (UCM) are the variances of the irregular, level ($\mu_t$), and trend components; to reproduce the HP filter with $\lambda = 129600$ we fix sigma2.irregular = 1, sigma2.level = 0, and sigma2.trend = $1/\lambda = 1/129600$.
Step4: The estimate that corresponds to the HP filter's trend estimate is given by the smoothed estimate of the level (which is $\mu_t$ in the notation above)
Step5: It is easy to see that the estimate of the smoothed level from the UCM is equal to the output of the HP filter
Step6: Adding a seasonal component
Step7: In this case, we will continue to restrict the first three parameters as described above, but we want to estimate the value of sigma2.seasonal by maximum likelihood. Therefore, we will use the fit method along with the fix_params context manager.
Step8: Alternatively, we could have simply used the fit_constrained method, which also accepts a dictionary of constraints
Step9: The summary output includes all parameters, but indicates that the first three were fixed (and so were not estimated).
Step10: For comparison, we construct the unrestricted maximum likelihood estimates (MLE). In this case, the estimate of the level will no longer correspond to the HP filter concept.
Step11: Finally, we can retrieve the smoothed estimates of the trend and seasonal components.
Step12: Comparing the estimated level, it is clear that the seasonal UCM with fixed parameters still produces a trend that corresponds very closely (although no longer exactly) to the HP filter output.
Step13: Finally, the UCM with the parameter restrictions is still able to pick up the time-varying seasonal component quite well.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from importlib import reload
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
from pandas_datareader.data import DataReader
endog = DataReader('CPIAPPNS', 'fred', start='1980').asfreq('MS')
endog.plot(figsize=(15, 3));
# Run the HP filter with lambda = 129600
hp_cycle, hp_trend = sm.tsa.filters.hpfilter(endog, lamb=129600)
# The unobserved components model above is the local linear trend, or "lltrend", specification
mod = sm.tsa.UnobservedComponents(endog, 'lltrend')
print(mod.param_names)
res = mod.smooth([1., 0, 1. / 129600])
print(res.summary())
ucm_trend = pd.Series(res.level.smoothed, index=endog.index)
fig, ax = plt.subplots(figsize=(15, 3))
ax.plot(hp_trend, label='HP estimate')
ax.plot(ucm_trend, label='UCM estimate')
ax.legend();
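# Numerical check (an added sketch): the smoothed UCM level should match the
# HP filter trend up to tiny numerical differences.
print(np.max(np.abs(np.asarray(ucm_trend).squeeze() - np.asarray(hp_trend).squeeze())))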
# Construct a local linear trend model with a stochastic seasonal component of period 1 year
mod = sm.tsa.UnobservedComponents(endog, 'lltrend', seasonal=12, stochastic_seasonal=True)
print(mod.param_names)
# Here we restrict the first three parameters to specific values
with mod.fix_params({'sigma2.irregular': 1, 'sigma2.level': 0, 'sigma2.trend': 1. / 129600}):
# Now we fit any remaining parameters, which in this case
# is just `sigma2.seasonal`
res_restricted = mod.fit()
res_restricted = mod.fit_constrained({'sigma2.irregular': 1, 'sigma2.level': 0, 'sigma2.trend': 1. / 129600})
print(res_restricted.summary())
res_unrestricted = mod.fit()
# Construct the smoothed level estimates
unrestricted_trend = pd.Series(res_unrestricted.level.smoothed, index=endog.index)
restricted_trend = pd.Series(res_restricted.level.smoothed, index=endog.index)
# Construct the smoothed estimates of the seasonal pattern
unrestricted_seasonal = pd.Series(res_unrestricted.seasonal.smoothed, index=endog.index)
restricted_seasonal = pd.Series(res_restricted.seasonal.smoothed, index=endog.index)
fig, ax = plt.subplots(figsize=(15, 3))
ax.plot(unrestricted_trend, label='MLE, with seasonal')
ax.plot(restricted_trend, label='Fixed parameters, with seasonal')
ax.plot(hp_trend, label='HP filter, no seasonal')
ax.legend();
fig, ax = plt.subplots(figsize=(15, 3))
ax.plot(unrestricted_seasonal, label='MLE')
ax.plot(restricted_seasonal, label='Fixed parameters')
ax.legend();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import packages
Step2: Create lists to store results for different thresholds from 0 to 25%
Step3: Parse the file and populate the lists
|
<ASSISTANT_TASK:>
Python Code:
with open('./jeter.tsv', 'r') as file:
for i in range (10):
print (next(file))
%pylab inline
import csv
import matplotlib.pyplot as plt
res_s1_sup_s2 = [0 for i in range (26)]
res_s2_sup_s1 = [0 for i in range (26)]
res_s1_sup_other = [0 for i in range (26)]
res_s2_sup_other = [0 for i in range (26)]
with open('./jeter.tsv', 'r') as csvfile:
reader = csv.reader(csvfile, delimiter='\t')
# remove header
next(reader)
# iterate over rows
for R in reader:
s1_sup_s2 = float(R[6])*100/(float(R[5])+float(R[6])+float(R[7]))
s2_sup_s1 = float(R[9])*100/(float(R[8])+float(R[9])+float(R[10]))
s1_sup_other = float(R[7])*100/(float(R[5])+float(R[6])+float(R[7]))
s2_sup_other = float(R[10])*100/(float(R[8])+float(R[9])+float(R[10]))
for seuil in range (26):
if s1_sup_s2 >= seuil:
res_s1_sup_s2[seuil]+=1
if s2_sup_s1 >= seuil:
res_s2_sup_s1[seuil]+=1
if s1_sup_other >= seuil:
res_s1_sup_other[seuil]+=1
if s2_sup_other >= seuil:
res_s2_sup_other[seuil]+=1
print (res_s1_sup_s2)
print (res_s2_sup_s1)
print (res_s1_sup_other)
print (res_s2_sup_other)
plt.figure(figsize=(20, 10))
plt.title("percentage of samples with a number of read with incorect an genotype corresponding to another sample from the same lane or a randon error")
plt.xlabel("Percentage of sample")
plt.ylabel("Number of read with incorrect genotype")
plt.ylim(1,77869)
line1 = plt.semilogy(res_s1_sup_s2, 'b', label = 'reads_sample1_supporting_sample2')
line2 = plt.semilogy(res_s2_sup_s1, 'g', label = 'reads_sample2_supporting_sample1')
line3 = plt.semilogy(res_s1_sup_other, 'r', label = 'reads_sample1_supporting_others')
line4 = plt.semilogy(res_s2_sup_other, 'm', label = 'reads_sample2_supporting_others')
plt.legend(loc='best')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then, we read the stc from file
Step2: This is a SourceEstimate object.
Step3: The SourceEstimate object is in fact a surface source estimate. MNE also supports volume-based and vector source estimates, shown below.
Step4: Note that here we used initial_time=0.1, but we can also browse through
Step5: Volume Source Estimates
Step6: Then, we can load the precomputed inverse operator from a file.
Step7: The source estimate is computed using the inverse operator and the
Step8: This time, we have a different container
Step9: This too comes with a convenient plot method.
Step10: For this visualization, nilearn must be installed.
Step11: Vector Source Estimates
Step12: Dipole fits
Step13: Dipoles are fit independently for each time point, so let us crop our evoked data to a single time point before fitting.
Step14: Finally, we can visualize the dipole.
|
<ASSISTANT_TASK:>
Python Code:
import os
import mne
from mne.datasets import sample
from mne.minimum_norm import apply_inverse, read_inverse_operator
from mne import read_evokeds
data_path = sample.data_path()
sample_dir = os.path.join(data_path, 'MEG', 'sample')
subjects_dir = os.path.join(data_path, 'subjects')
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
fname_stc = os.path.join(sample_dir, 'sample_audvis-meg')
stc = mne.read_source_estimate(fname_stc, subject='sample')
print(stc)
initial_time = 0.1
brain = stc.plot(subjects_dir=subjects_dir, initial_time=initial_time)
mpl_fig = stc.plot(subjects_dir=subjects_dir, initial_time=initial_time,
backend='matplotlib', verbose='error')
evoked = read_evokeds(fname_evoked, condition=0, baseline=(None, 0))
evoked.pick_types(meg=True, eeg=False)
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-vol-7-meg-inv.fif'
inv = read_inverse_operator(fname_inv)
src = inv['src']
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
stc = apply_inverse(evoked, inv, lambda2, method)
stc.crop(0.0, 0.2)
print(stc)
stc.plot(src, subject='sample', subjects_dir=subjects_dir)
stc.plot(src, subject='sample', subjects_dir=subjects_dir, mode='glass_brain')
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
inv = read_inverse_operator(fname_inv)
stc = apply_inverse(evoked, inv, lambda2, 'dSPM', pick_ori='vector')
stc.plot(subject='sample', subjects_dir=subjects_dir,
initial_time=initial_time)
fname_cov = os.path.join(data_path, 'MEG', 'sample', 'sample_audvis-cov.fif')
fname_bem = os.path.join(subjects_dir, 'sample', 'bem',
'sample-5120-bem-sol.fif')
fname_trans = os.path.join(data_path, 'MEG', 'sample',
'sample_audvis_raw-trans.fif')
evoked.crop(0.1, 0.1)
dip = mne.fit_dipole(evoked, fname_cov, fname_bem, fname_trans)[0]
dip.plot_locations(fname_trans, 'sample', subjects_dir)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Imagine I want a list of my friends with the amount of money I lent to each of them, together with their names and surnames.
Step2: To merge two dataframes we can use the merge function
Step3: Approximate String Matching
Step4: The closeness of a match is often measured in terms of edit distance, which is the number of primitive operations necessary to convert the string into an exact match.
Step5: Partial String Similarity
Step6: In fact we can have the following situation
Step7: partial_ratio seeks the best-matching substring and returns the similarity ratio of that substring
Step8: Out of Order
Step9: FuzzyWuzzy provides two ways to deal with this situation
Step10: Token Set
Step11: Example
Step12: Step 2
Step13: Step 3
Step14: Step 4
Step15: Step 5
Step16: Step 6
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
names = pd.DataFrame({"name" : ["Alice","Bob","Charlie","Dennis"],
"surname" : ["Doe","Smith","Sheen","Quaid"]})
names
names.name.str.match("A\w+")
debts = pd.DataFrame({"debtor":["D.Quaid","C.Sheen"],
"amount":[100,10000]})
debts
debts["surname"] = debts.debtor.str.extract("\w+\.(\w+)")
debts
names.merge(debts, left_on="surname", right_on="surname", how="left")
names.merge(debts, left_on="surname", right_on="surname", how="right")
names.merge(debts, left_on="surname", right_on="surname", how="inner")
names.merge(debts, left_on="surname", right_on="surname", how="outer")
names.merge(debts, left_index=True, right_index=True, how="left")
lat_lon_mun = pd.read_excel("lat_lon_municipalities.xls", skiprows=2)
lat_lon_mun.head()
mun_codes = pd.read_excel("11codmun.xls", encoding="latin1", skiprows=1)
mun_codes.head()
lat_lon_mun[lat_lon_mun["Población"].str.match(".*anaria.*")]
mun_codes[mun_codes["NOMBRE"].str.match(".*anaria.*")]
"Valsequillo de Gran Canaria" == "Valsequillo de Gran Canaria"
"Palmas de Gran Canaria (Las)" == "Palmas de Gran Canaria, Las"
from fuzzywuzzy import fuzz
fuzz.ratio("NEW YORK METS","NEW YORK MEATS")
fuzz.ratio("Palmas de Gran Canaria (Las)","Palmas de Gran Canaria, Las")
"San Millán de Yécora" == "Millán de Yécora"
fuzz.ratio("San Millán de Yécora", "Millán de Yécora")
fuzz.ratio("YANKEES", "NEW YORK YANKEES")
fuzz.ratio("NEW YORK METS", "NEW YORK YANKEES")
fuzz.partial_ratio("San Millán de Yécora", "Millán de Yécora")
fuzz.partial_ratio("YANKEES", "NEW YORK YANKEES")
fuzz.partial_ratio("NEW YORK METS", "NEW YORK YANKEES")
s1 = "Las Palmas de Gran Canaria"
s2 = "Gran Canaria, Las Palmas de"
s3 = "Palmas de Gran Canaria, Las"
s4 = "Palmas de Gran Canaria, (Las)"
fuzz.token_sort_ratio("Las Palmas de Gran Canaria", "Palmas de Gran Canaria Las")
fuzz.ratio("Las Palmas de Gran Canaria", "Palmas de Gran Canaria Las")
t0 = ["Canaria,","de","Gran", "Palmas"]
t1 = ["Canaria,","de","Gran", "Palmas"] + ["Las"]
t2 = ["Canaria,","de","Gran", "Palmas"] + ["(Las)"]
fuzz.token_sort_ratio("Palmas de Gran Canaria, Las", "Palmas de Gran Canaria, (Las)")
mun_codes.shape
mun_codes.head()
lat_lon_mun.shape
lat_lon_mun.head()
df1 = mun_codes.merge(lat_lon_mun, left_on="NOMBRE", right_on="Población",how="inner")
df1.head()
df1["match_ratio"] = 100
df2 = mun_codes.merge(df1, left_on="NOMBRE", right_on="NOMBRE", how="left")
df2.head()
df3 = df2.loc[: ,["CPRO_x","CMUN_x","DC_x","NOMBRE","match_ratio"]]
df3.rename(columns={"CPRO_x": "CPRO", "CMUN_x":"CMUN","DC_x":"DC"},inplace=True)
df3.head()
df3.loc[df3.match_ratio.isnull(),:].head()
mun_names = lat_lon_mun["Población"].tolist()
def approx_str_compare(x):
ratio = [fuzz.ratio(x,m) for m in mun_names]
res = pd.DataFrame({"ratio" : ratio,
"name": mun_names})
return res.sort_values(by="ratio",ascending=False).iloc[0,:]
df4 = df3.loc[df3.match_ratio.isnull(),"NOMBRE"].map(approx_str_compare)
df4.map(lambda x: x["name"])
df4.map(lambda x: x["ratio"])
df6 = df3.loc[df3.match_ratio.isnull(),:]
df6["match_ratio"] = df4.map(lambda x: x["ratio"])
df6["NOMBRE"] = df4.map(lambda x: x["name"])
df6.head()
df7 = pd.concat([df3,df6])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a id='sim'></a>
Step2: Caution
Step3: <a id='get'></a>
Step4: <a id='apply'></a>
Step5: User defined functions can also be applied.
Step6: <a id='tabulate'></a>
Step7: Now sum the dice, and approximate the distribution of the sum using the normalize option.
Step8: Individual entries of the table can be referenced using .tabulate()[label] where label represents the value of interest.
Step9: By default, tabulate only tabulates those outcomes which are among the simulated values, rather than all possible outcomes. An argument can be passed to .tabulate() to tabulate all outcomes in a given list.
Step10: By default, the outcomes in the table produced by .tabulate() are in alphanumeric order. A list can be passed to .tabulate() to achieve a specified order.
Step11: <a id='filter'></a>
Step12: Using len (length) with the filter functions is one way to count the simulated occurrences of outcomes which satisfy some criteria.
Step13: In addition to .filter_eq(), the following filter functions can be used when the values are numerical.
Step14: You can also define your own custom filter function. Define a function that returns True for the outcomes you want to keep, and pass the function into .filter(). For example, the following code is equivalent to using .filter_geq(10).
Step15: <a id='count'></a>
Step16: In addition to .count_eq(), the following count functions can be used when the values are numerical.
Step17: You can also count the number of outcomes which satisfy some criteria specified by a user defined function. Define a function that returns True for the outcomes you want to keep, and pass the function into .count(). For example, the following code is equivalent to using .count_geq(10).
Step18: Counting can also be accomplished by creating logical (True = 1, False = 0) values according to whether an outcome satisfies some criteria and then summing.
Step19: Since a mean (average) is the sum of values divided by the number of values, changing sum to mean in the above method returns the relative frequency directly (without having to divide by the number of values).
Step20: <a id='recap'></a>
|
<ASSISTANT_TASK:>
Python Code:
from symbulate import *
%matplotlib inline
die = list(range(1, 6+1)) # this is just a list of the number 1 through 6
roll = BoxModel(die, size = 2)
roll.sim(100)
def spam_sim():
email_type = BoxModel(["spam", "not spam"], probs=[.1, .9]).draw()
if email_type == "spam":
has_money = BoxModel(["money", "no money"], probs=[.3, .7]).draw()
else:
has_money = BoxModel(["money", "no money"], probs=[.02, .98]).draw()
return email_type, has_money
P = ProbabilitySpace(spam_sim)
sims = P.sim(1000)
sims
die = list(range(1, 6+1)) # this is just a list of the number 1 through 6
roll = BoxModel(die, size = 2)
sims = roll.sim(4)
sims
sims.get(0)
sims.get(2)
die = list(range(1, 6+1)) # this is just a list of the number 1 through 6
roll = BoxModel(die, size = 2)
roll.sim(1000).apply(sum)
n = 10
labels = list(range(n)) # remember, Python starts the index at 0, so the cards are labebeled 0, ..., 9
P = BoxModel(labels, size = n, replace = False)
sims = P.sim(10000)
sims
def is_match(x):
for i in range(n):
if x[i] == labels[i]:
return 'At least one match'
return 'No match'
sims.apply(is_match)
die = list(range(1, 6+1, 1)) # this is just a list of the number 1 through 6
roll = BoxModel(die, size=2)
rolls = roll.sim(10000)
rolls.tabulate()
rolls.apply(sum).tabulate(normalize=True)
rolls.tabulate()[(2,4)]
roll_sum = rolls.apply(sum).tabulate(normalize=True)
roll_sum[10] + roll_sum[11] + roll_sum[12]
die = list(range(1, 6+1, 1)) # this is just a list of the number 1 through 6
rolls = BoxModel(die).sim(5)
rolls.tabulate(die)
# Compare with
rolls.tabulate()
BoxModel(['a', 'b', 1, 2, 3]).sim(10).tabulate([3, 'a', 2, 'b', 1])
Heads = BoxModel(['H','T']).sim(10000).filter_eq('H')
Heads
len(Heads)
die = list(range(1, 1+6)) # this is just a list of the number 1 through 6
sims = BoxModel(die, size=2).sim(1000).apply(sum)
len(sims.filter_geq(10)) / 1000
def greater_than_or_equal_to_10(x):
return x >= 10
len(sims.filter(greater_than_or_equal_to_10)) / 1000
BoxModel(['H','T']).sim(10000).count_eq('H')
die = list(range(1, 6+1, 1)) # this is just a list of the number 1 through 6
roll = BoxModel(die, size = 2)
rolls = roll.sim(10000)
rolls.count_eq((2,4))
rolls.apply(sum).count_geq(10) / 10000
def greater_than_or_equal_to_10(x):
return x >= 10
rolls.apply(sum).count(greater_than_or_equal_to_10) / 10000
rollsums = rolls.apply(sum)
sum(rollsums >= 10) / 10000
mean(rollsums >= 10)
die = list(range(1, 6+1, 1)) # this is just a list of the number 1 through 6
roll = BoxModel(die, size = 2)
rolls = roll.sim(10000)
rollsums = rolls.apply(sum)
(rollsums.tabulate()[10] + rollsums.tabulate()[11] + rollsums.tabulate()[12]) /10000
(rollsums.tabulate(normalize=True)[10] + rollsums.tabulate(normalize=True)[11] + rollsums.tabulate(normalize=True)[12])
len(rollsums.filter_geq(10)) / 10000
rollsums.count_geq(10) / 10000
sum(rollsums >= 10) / 10000
mean(rollsums >= 10)
rollsums = RV(roll, sum)
(rollsums >= 10).sim(10000).tabulate(normalize=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, create a Polygon variable poly from the x and y coordinates
Step2: Finally
Step3: Problem 2
Step4: Next
Step5: Finally
Step6: Problem 3
Step7: Next
Step8: Finally
Step9: Now you should be able to print the answers to the following questions
|
<ASSISTANT_TASK:>
Python Code:
import geopandas as gpd
import matplotlib.pyplot as plt
from shapely.geometry import Polygon
# X -coordinates
xcoords = [29.99671173095703, 31.58196258544922, 27.738052368164062, 26.50013542175293, 26.652359008789062, 25.921663284301758, 22.90027618408203, 23.257217407226562,
23.335693359375, 22.87444305419922, 23.08465003967285, 22.565473556518555, 21.452774047851562, 21.66388702392578, 21.065969467163086, 21.67659568786621,
21.496871948242188, 22.339998245239258, 22.288192749023438, 24.539581298828125, 25.444232940673828, 25.303749084472656, 24.669166564941406, 24.689163208007812,
24.174999237060547, 23.68471908569336, 24.000761032104492, 23.57332992553711, 23.76513671875, 23.430830001831055, 23.6597900390625, 20.580928802490234, 21.320831298828125,
22.398330688476562, 23.97638702392578, 24.934917449951172, 25.7611083984375, 25.95930290222168, 26.476804733276367, 27.91069221496582, 29.1027774810791, 29.29846954345703,
28.4355525970459, 28.817358016967773, 28.459857940673828, 30.028610229492188, 29.075136184692383, 30.13492774963379, 29.818885803222656, 29.640830993652344, 30.57735824584961,
29.99671173095703]
# Y -coordinates
ycoords = [63.748023986816406, 62.90789794921875, 60.511383056640625, 60.44499588012695, 60.646385192871094, 60.243743896484375, 59.806800842285156, 59.91944122314453,
60.02395248413086, 60.14555358886719, 60.3452033996582, 60.211936950683594, 60.56249237060547, 61.54027557373047, 62.59798049926758, 63.02013397216797,
63.20353698730469, 63.27652359008789, 63.525691986083984, 64.79915618896484, 64.9533920288086, 65.51513671875, 65.65470886230469, 65.89610290527344, 65.79151916503906,
66.26332092285156, 66.80228424072266, 67.1570053100586, 67.4168701171875, 67.47978210449219, 67.94589233398438, 69.060302734375, 69.32611083984375, 68.71110534667969,
68.83248901367188, 68.580810546875, 68.98916625976562, 69.68568420410156, 69.9363784790039, 70.08860778808594, 69.70597076416016, 69.48533630371094, 68.90263366699219,
68.84700012207031, 68.53485107421875, 67.69471740722656, 66.90360260009766, 65.70887756347656, 65.6533203125, 64.92096710205078, 64.22373962402344, 63.748023986816406]
## CODE HERE ##
# Check the content of the GeoDataFrame:
print(geo.head())
## CODE HERE ##
# Plot a map of the polygon
# REPLACE THE ERROR BELOW WITH YOUR OWN CODE
## CODE HERE ##
## CODE HERE ##
# Check the dataframe head
print(data.head())
## CODE HERE ##
# Check the geodataframe head
print(geo.head())
## CODE HERE ##
## CODE HERE ##
# Check that the crs is correct (should be epsg:32735)
print(data.crs)
## CODE HERE ##
## CODE HERE ##
## CODE HERE ##
## CODE HERE ##
## CODE HERE ##
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate brownian or directed trajectories
Step2: Get sample microscopy stack
Step3: Get sample H5 file stored by sktacker.io.ObjectsIO
Step4: See also
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
from sktracker import data
from sktracker.trajectories import Trajectories
from sktracker.io import TiffFile
trajs = Trajectories(data.brownian_trajectories_generator())
trajs.show(groupby_args={'by': 'true_label'})
trajs = Trajectories(data.directed_trajectories_generator())
trajs.show(groupby_args={'by': 'true_label'})
tf = TiffFile(data.CZT_peaks())
arr = tf.asarray()
print("Shape is :", arr.shape)
a = arr[0, 0, 0]
plt.imshow(a, interpolation='none', cmap='gray')
df = pd.HDFStore(data.sample_h5())
print(df.keys())
print(df['metadata'])
# Run this cell first.
%load_ext autoreload
%autoreload 2
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Given a data sample the de facto standard method to infer the parameters is the expectation maximisation (EM) algorithm that, in alternating so-called E and M steps, maximises the log-likelihood of the data.
Step2: Once Pymanopt has finished the optimisation we can obtain the inferred parameters as follows
Step3: And convince ourselves that the inferred parameters are close to the ground truth parameters.
Step4: And the inferred parameters $\hat{\mathbf{\mu}}_1, \hat{\mathbf{\Sigma}}_1, \hat{\mathbf{\mu}}_2, \hat{\mathbf{\Sigma}}_2, \hat{\mathbf{\mu}}_3, \hat{\mathbf{\Sigma}}_3, \hat{\pi}_1, \hat{\pi}_2, \hat{\pi}_3$
Step7: Et voilà – this was a brief demonstration of how to do inference for MoG models by performing Manifold optimisation using Pymanopt.
|
<ASSISTANT_TASK:>
Python Code:
import autograd.numpy as np
np.set_printoptions(precision=2)
import matplotlib.pyplot as plt
%matplotlib inline
# Number of data points
N = 1000
# Dimension of each data point
D = 2
# Number of clusters
K = 3
pi = [0.1, 0.6, 0.3]
mu = [np.array([-4, 1]), np.array([0, 0]), np.array([2, -1])]
Sigma = [
np.array([[3, 0], [0, 1]]),
np.array([[1, 1.0], [1, 3]]),
0.5 * np.eye(2),
]
components = np.random.choice(K, size=N, p=pi)
samples = np.zeros((N, D))
# For each component, generate all needed samples
for k in range(K):
# indices of current component in X
indices = k == components
# number of those occurrences
n_k = indices.sum()
if n_k > 0:
samples[indices, :] = np.random.multivariate_normal(
mu[k], Sigma[k], n_k
)
colors = ["r", "g", "b", "c", "m"]
for k in range(K):
indices = k == components
plt.scatter(
samples[indices, 0],
samples[indices, 1],
alpha=0.4,
color=colors[k % K],
)
plt.axis("equal")
plt.show()
import sys
sys.path.insert(0, "../..")
from autograd.scipy.special import logsumexp
import pymanopt
from pymanopt import Problem
from pymanopt.manifolds import Euclidean, Product, SymmetricPositiveDefinite
from pymanopt.optimizers import SteepestDescent
# (1) Instantiate the manifold
manifold = Product([SymmetricPositiveDefinite(D + 1, k=K), Euclidean(K - 1)])
# (2) Define cost function
# The parameters must be contained in a list theta.
@pymanopt.function.autograd(manifold)
def cost(S, v):
# Unpack parameters
nu = np.append(v, 0)
logdetS = np.expand_dims(np.linalg.slogdet(S)[1], 1)
y = np.concatenate([samples.T, np.ones((1, N))], axis=0)
# Calculate log_q
y = np.expand_dims(y, 0)
# 'Probability' of y belonging to each cluster
log_q = -0.5 * (np.sum(y * np.linalg.solve(S, y), axis=1) + logdetS)
alpha = np.exp(nu)
alpha = alpha / np.sum(alpha)
alpha = np.expand_dims(alpha, 1)
loglikvec = logsumexp(np.log(alpha) + log_q, axis=0)
return -np.sum(loglikvec)
problem = Problem(manifold, cost)
# (3) Instantiate a Pymanopt optimizer
optimizer = SteepestDescent(verbosity=1)
# let Pymanopt do the rest
Xopt = optimizer.run(problem).point
mu1hat = Xopt[0][0][0:2, 2:3]
Sigma1hat = Xopt[0][0][:2, :2] - mu1hat @ mu1hat.T
mu2hat = Xopt[0][1][0:2, 2:3]
Sigma2hat = Xopt[0][1][:2, :2] - mu2hat @ mu2hat.T
mu3hat = Xopt[0][2][0:2, 2:3]
Sigma3hat = Xopt[0][2][:2, :2] - mu3hat @ mu3hat.T
pihat = np.exp(np.concatenate([Xopt[1], [0]], axis=0))
pihat = pihat / np.sum(pihat)
print(mu[0])
print(Sigma[0])
print(mu[1])
print(Sigma[1])
print(mu[2])
print(Sigma[2])
print(pi[0])
print(pi[1])
print(pi[2])
print(mu1hat)
print(Sigma1hat)
print(mu2hat)
print(Sigma2hat)
print(mu3hat)
print(Sigma3hat)
print(pihat[0])
print(pihat[1])
print(pihat[2])
class LineSearchMoG:
Back-tracking line-search that checks for close to singular matrices.
def __init__(
self,
contraction_factor=0.5,
optimism=2,
sufficient_decrease=1e-4,
max_iterations=25,
initial_step_size=1,
):
self.contraction_factor = contraction_factor
self.optimism = optimism
self.sufficient_decrease = sufficient_decrease
self.max_iterations = max_iterations
self.initial_step_size = initial_step_size
self._oldf0 = None
def search(self, objective, manifold, x, d, f0, df0):
Function to perform backtracking line-search.
Arguments:
- objective
objective function to optimise
- manifold
manifold to optimise over
- x
starting point on the manifold
- d
tangent vector at x (descent direction)
- df0
directional derivative at x along d
Returns:
- step_size
norm of the vector retracted to reach newx from x
- newx
next iterate suggested by the line-search
# Compute the norm of the search direction
norm_d = manifold.norm(x, d)
if self._oldf0 is not None:
# Pick initial step size based on where we were last time.
alpha = 2 * (f0 - self._oldf0) / df0
# Look a little further
alpha *= self.optimism
else:
alpha = self.initial_step_size / norm_d
alpha = float(alpha)
# Make the chosen step and compute the cost there.
newx, newf, reset = self._newxnewf(x, alpha * d, objective, manifold)
step_count = 1
# Backtrack while the Armijo criterion is not satisfied
while (
newf > f0 + self.sufficient_decrease * alpha * df0
and step_count <= self.max_iterations
and not reset
):
# Reduce the step size
alpha = self.contraction_factor * alpha
# and look closer down the line
newx, newf, reset = self._newxnewf(
x, alpha * d, objective, manifold
)
step_count = step_count + 1
# If we got here without obtaining a decrease, we reject the step.
if newf > f0 and not reset:
alpha = 0
newx = x
step_size = alpha * norm_d
self._oldf0 = f0
return step_size, newx
def _newxnewf(self, x, d, objective, manifold):
newx = manifold.retraction(x, d)
try:
newf = objective(newx)
except np.linalg.LinAlgError:
replace = np.asarray(
[
np.linalg.matrix_rank(newx[0][k, :, :])
!= newx[0][0, :, :].shape[0]
for k in range(newx[0].shape[0])
]
)
x[0][replace, :, :] = manifold.random_point()[0][replace, :, :]
return x, objective(x), True
return newx, newf, False
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model parameters
Step2: Create and run the MODFLOW-USG model
Step3: Read the simulated MODFLOW-USG model results
Step4: Plot MODFLOW-USG results
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import sys
import os
import platform
import numpy as np
import matplotlib.pyplot as plt
import flopy
import flopy.utils as fputl
#Set name of MODFLOW exe
# assumes executable is in users path statement
exe_name = 'mfusg'
if platform.system() == 'Windows':
exe_name = 'mfusg.exe'
mfexe = exe_name
modelpth = os.path.join('data')
modelname = 'zaidel'
#make sure modelpth directory exists
if not os.path.exists(modelpth):
os.makedirs(modelpth)
# model dimensions
nlay, nrow, ncol = 1, 1, 200
delr = 50.
delc = 1.
# boundary heads
h1 = 23.
h2 = 5.
# cell centroid locations
x = np.arange(0., float(ncol)*delr, delr) + delr / 2.
# ibound
ibound = np.ones((nlay, nrow, ncol), dtype=np.int)
ibound[:, :, 0] = -1
ibound[:, :, -1] = -1
# bottom of the model
botm = 25 * np.ones((nlay + 1, nrow, ncol), dtype=np.float)
base = 20.
for j in range(ncol):
botm[1, :, j] = base
#if j > 0 and j % 40 == 0:
if j+1 in [40,80,120,160]:
base -= 5
# starting heads
strt = h1 * np.ones((nlay, nrow, ncol), dtype=np.float)
strt[:, :, -1] = h2
#make the flopy model
mf = flopy.modflow.Modflow(modelname=modelname, exe_name=mfexe, model_ws=modelpth)
dis = flopy.modflow.ModflowDis(mf, nlay, nrow, ncol,
delr=delr, delc=delc,
top=botm[0, :, :], botm=botm[1:, :, :],
perlen=1, nstp=1, steady=True)
bas = flopy.modflow.ModflowBas(mf, ibound=ibound, strt=strt)
lpf = flopy.modflow.ModflowLpf(mf, hk=0.0001, laytyp=4)
oc = flopy.modflow.ModflowOc(mf,
stress_period_data={(0,0): ['print budget', 'print head',
'save head', 'save budget']})
sms = flopy.modflow.ModflowSms(mf, nonlinmeth=1, linmeth=1,
numtrack=50, btol=1.1, breduc=0.70, reslim = 0.0,
theta=0.85, akappa=0.0001, gamma=0., amomentum=0.1,
iacl=2, norder=0, level=5, north=7, iredsys=0, rrctol=0.,
idroptol=1, epsrn=1.e-5,
mxiter=500, hclose=1.e-3, hiclose=1.e-3, iter1=50)
mf.write_input()
# remove any existing head files
try:
os.remove(os.path.join(modelpth, '{0}.hds'.format(modelname)))
except:
pass
# run the model
mf.run_model()
# Create the mfusg headfile object
headfile = os.path.join(modelpth, '{0}.hds'.format(modelname))
headobj = fputl.HeadFile(headfile, precision='single')
times = headobj.get_times()
mfusghead = headobj.get_data(totim=times[-1])
fig = plt.figure(figsize=(8,6))
fig.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=0.25, hspace=0.25)
ax = fig.add_subplot(1, 1, 1)
ax.plot(x, mfusghead[0, 0, :], linewidth=0.75, color='blue', label='MODFLOW-USG')
ax.fill_between(x, y1=botm[1, 0, :], y2=-5, color='0.5', alpha=0.5)
leg = ax.legend(loc='upper right')
leg.draw_frame(False)
ax.set_xlabel('Horizontal distance, in m')
ax.set_ylabel('Head, in m')
ax.set_ylim(-5,25);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Check the masking
Step2: All DUPs should be in an LSLGA blob.
Step3: 1) Find all bright Gaia stars.
Step4: Make sure the MASKBITS values are set correctly.
|
<ASSISTANT_TASK:>
Python Code:
import os, time
import numpy as np
import fitsio
from glob import glob
import matplotlib.pyplot as plt
from astropy.table import vstack, Table, hstack
MASKBITS = dict(
NPRIMARY = 0x1, # not PRIMARY
BRIGHT = 0x2,
SATUR_G = 0x4,
SATUR_R = 0x8,
SATUR_Z = 0x10,
ALLMASK_G = 0x20,
ALLMASK_R = 0x40,
ALLMASK_Z = 0x80,
WISEM1 = 0x100, # WISE masked
WISEM2 = 0x200,
BAILOUT = 0x400, # bailed out of processing
MEDIUM = 0x800, # medium-bright star
GALAXY = 0x1000, # LSLGA large galaxy
CLUSTER = 0x2000, # Cluster catalog source
)
# Bits in the "brightblob" bitmask
IN_BLOB = dict(
BRIGHT = 0x1,
MEDIUM = 0x2,
CLUSTER = 0x4,
GALAXY = 0x8,
)
def gather_gaia(camera='decam'):
#dr8dir = '/global/project/projectdirs/cosmo/work/legacysurvey/dr8b'
dr8dir = '/Users/ioannis/work/legacysurvey/dr8c'
#outdir = os.getenv('HOME')
outdir = dr8dir
for cam in np.atleast_1d(camera):
outfile = os.path.join(outdir, 'check-gaia-{}.fits'.format(cam))
if os.path.isfile(outfile):
gaia = Table.read(outfile)
else:
out = []
catfile = glob(os.path.join(dr8dir, cam, 'tractor', '???', 'tractor*.fits'))
for ii, ff in enumerate(catfile[1:]):
if ii % 100 == 0:
print('{} / {}'.format(ii, len(catfile)))
cc = Table(fitsio.read(ff, upper=True, columns=['BRICK_PRIMARY', 'BRICKNAME', 'BX', 'BY',
'REF_CAT', 'REF_ID', 'RA', 'DEC', 'TYPE',
'FLUX_G', 'FLUX_R', 'FLUX_Z',
'FLUX_IVAR_G', 'FLUX_IVAR_R', 'FLUX_IVAR_Z',
'BRIGHTBLOB', 'MASKBITS', 'GAIA_PHOT_G_MEAN_MAG']))
cc = cc[cc['BRICK_PRIMARY']]
out.append(cc)
gaia = vstack(out)
gaia.write(outfile, overwrite=True)
return gaia
%time gaia = gather_gaia(camera='decam')
idup = gaia['TYPE'] == 'DUP'
assert(np.all(gaia[idup]['MASKBITS'] & MASKBITS['GALAXY'] != 0))
assert(np.all(gaia[idup]['FLUX_G'] == 0))
for band in ('G', 'R', 'Z'):
assert(np.all(gaia[idup]['FLUX_{}'.format(band)] == 0))
assert(np.all(gaia[idup]['FLUX_IVAR_{}'.format(band)] == 0))
gaia[idup]
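# Small helper (an added sketch) to list which MASKBITS flags are set in a
# given bitmask value; handy when eyeballing individual sources.
def decode_maskbits(value):
    return [name for name, bit in MASKBITS.items() if value & bit]

print(decode_maskbits(MASKBITS['BRIGHT'] | MASKBITS['MEDIUM']))  # -> ['BRIGHT', 'MEDIUM']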
ibright = np.where(((gaia['MASKBITS'] & MASKBITS['BRIGHT']) != 0) * (gaia['REF_CAT'] == 'G2') * (gaia['TYPE'] != 'DUP'))[0]
#bb = (gaia['BRIGHTBLOB'][ibright] & IN_BLOB['BRIGHT'] != 0) == False
#gaia[ibright][bb]
#gaia[ibright]
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 6))
_ = ax1.hist(gaia[ibright]['GAIA_PHOT_G_MEAN_MAG'], bins=100)
ax1.set_xlabel('Gaia G')
ax1.set_title('MASKBITS & BRIGHT, REF_CAT==G2, TYPE!=DUP', fontsize=14)
isb = np.where(gaia[ibright]['GAIA_PHOT_G_MEAN_MAG'] < 13.0)[0]
isf = np.where(gaia[ibright]['GAIA_PHOT_G_MEAN_MAG'] >= 13.0)[0]
print(len(isb), len(isf))
ax2.scatter(gaia['RA'][ibright][isb], gaia['DEC'][ibright][isb], s=10, color='green', label='G<13')
ax2.scatter(gaia['RA'][ibright][isf], gaia['DEC'][ibright][isf], s=10, color='red', alpha=0.5, label='G>=13')
ax2.legend(fontsize=14, frameon=True)
ax2.set_title('MASKBITS & BRIGHT, REF_CAT==G2, TYPE!=DUP', fontsize=14)
#ax.set_xlim(136.8, 137.2)
#ax.set_ylim(32.4, 32.8)
print(np.sum(gaia['BRIGHTBLOB'][ibright][isf] & IN_BLOB['BRIGHT'] != 0))
check = np.where(gaia['BRIGHTBLOB'][ibright][isf] & IN_BLOB['BRIGHT'] == 0)[0] # no bright targeting bit set
for key in MASKBITS.keys():
print(key, np.sum(gaia['MASKBITS'][ibright][isf][check] & MASKBITS[key] != 0))
gaia[ibright][isf][check]
mask = fitsio.read('decam/coadd/132/1325p325/legacysurvey-1325p325-maskbits.fits.fz')
#print(mask.max())
c = plt.imshow(mask > 0, origin='lower')
#plt.colorbar(c)
ww = gaia['BRICKNAME'] == '1325p325'
eq = []
for obj in gaia[ww]:
eq.append(mask[int(obj['BY']), int(obj['BX'])] == obj['MASKBITS'])
assert(np.all(eq))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Iris Flower Dataset
Step2: Standardize Features
Step3: Conduct k-Means Clustering
|
<ASSISTANT_TASK:>
Python Code:
# Load libraries
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import MiniBatchKMeans
# Load data
iris = datasets.load_iris()
X = iris.data
# Standardize features
scaler = StandardScaler()
X_std = scaler.fit_transform(X)
# Create k-mean object
clustering = MiniBatchKMeans(n_clusters=3, random_state=0, batch_size=100)
# Train model
model = clustering.fit(X_std)
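# Follow-up (an added illustration): inspect the fitted model. `labels_` gives
# the cluster assignment of each observation and `cluster_centers_` the
# centroids in the standardized feature space.
print(model.labels_[:10])
print(model.cluster_centers_)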
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <font color='red'>Please put your datahub API key into a file called APIKEY and place it in the notebook folder, or assign your API key directly to the variable API_key!</font>
Step2: We use the simple dh_py_access library to handle the REST requests.
Step3: dh_py_access provides some easy to use functions to list dataset metadata
Step4: Initialize coordinates and reftime. Please note that more recent reftime, than computed below, may be available for particular dataset. We just use conservative example for demo
Step5: Fetch data and convert to Pandas dataframe
Step6: Filter out necessary data from the DataFrames
Step7: We can easily see that the differences between models can be larger than the difference between the 10 m and 80 m winds within the same model.
Step8: Now let's see how the energy production looks compared to the wind speed
Step9: Finally, let's analyse how much the density changes vary over the whole domain during one forecast. For this purpose, we download the whole density field using the package API
Step10: The maximum relative change of air density at a single location is
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
import urllib.request
import numpy as np
import simplejson as json
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import warnings
import datetime
import dateutil.parser
import matplotlib.cbook
warnings.filterwarnings("ignore",category=matplotlib.cbook.mplDeprecation)
import requests
from netCDF4 import Dataset
from dh_py_access.lib.dataset import dataset as dataset
import dh_py_access.package_api as package_api
import dh_py_access.lib.datahub as datahub
np.warnings.filterwarnings('ignore')
API_key = open("APIKEY").read().strip()
dh=datahub.datahub_main(API_key)
fmi_hirlam_surface = dataset('fmi_hirlam_surface',dh)
metno_harmonie_metcoop = dataset('metno_harmonie_metcoop',dh)
metno_harmonie_wind = dataset('metno_harmonie_wind_det',dh)
gfs = dataset('noaa_gfs_pgrb2_global_forecast_recompute_0.25degree',dh)
## Does not look good in github
##gfs.variable_names()
lon = 22
lat = 59+6./60
today = datetime.datetime.today()
reft = datetime.datetime(today.year,today.month,today.day,int(today.hour/6)*6) - datetime.timedelta(hours=12)
reft = reft.isoformat()
##reft = "2018-02-11T18:00:00"
arg_dict = {'lon':lon,'lat':lat,'reftime_start':reft,'reftime_end':reft,'count':250}
arg_dict_metno_wind_det = dict(arg_dict, **{'vars':'wind_u_z,wind_v_z,air_density_z'})
arg_dict_metno_harm_metcoop = dict(arg_dict, **{'vars':'u_wind_10m,v_wind_10m'})
arg_dict_hirlam = dict(arg_dict, **{'vars':'u-component_of_wind_height_above_ground,v-component_of_wind_height_above_ground'})
arg_dict_gfs = dict(arg_dict, **{'vars':'ugrd_m,vgrd_m','count':450})
dmw = metno_harmonie_wind.get_json_data_in_pandas(**arg_dict_metno_wind_det)
dmm = metno_harmonie_metcoop.get_json_data_in_pandas(**arg_dict_metno_harm_metcoop)
dhs = fmi_hirlam_surface.get_json_data_in_pandas(**arg_dict_hirlam)
dgfs = gfs.get_json_data_in_pandas(**arg_dict_gfs)
## show how to filter Pandas
## dgfs[dgfs['z']==80]
vel80_metno = np.array(np.sqrt(dmw[dmw['z']==80]['wind_u_z']**2 + dmw[dmw['z']==80]['wind_v_z']**2))
vel10_metno = np.array(np.sqrt(dmm['u_wind_10m']**2 + dmm['v_wind_10m']**2))
vel10_hirlam = np.array(np.sqrt(dhs['u-component_of_wind_height_above_ground']**2 +
dhs['v-component_of_wind_height_above_ground']**2))
vel10_gfs = np.sqrt(dgfs[dgfs['z']==10]['ugrd_m']**2+dgfs[dgfs['z']==10]['vgrd_m']**2)
vel80_gfs = np.sqrt(dgfs[dgfs['z']==80]['ugrd_m']**2+dgfs[dgfs['z']==80]['vgrd_m']**2)
t_metno = [dateutil.parser.parse(i) for i in dmw[dmw['z']==80]['time']]
t_metno_10 = [dateutil.parser.parse(i) for i in dmm['time']]
t_hirlam = [dateutil.parser.parse(i) for i in dhs['time']]
t_gfs_10 = [dateutil.parser.parse(i) for i in dgfs[dgfs['z']==10]['time']]
t_gfs_80 = [dateutil.parser.parse(i) for i in dgfs[dgfs['z']==80]['time']]
fig, ax = plt.subplots()
days = mdates.DayLocator()
daysFmt = mdates.DateFormatter('%Y-%m-%d')
hours = mdates.HourLocator()
ax.set_ylabel("wind speed")
ax.plot(t_metno, vel80_metno, label='Metno 80m')
ax.plot(t_metno_10, vel10_metno, label='Metno 10m')
ax.plot(t_hirlam, vel10_hirlam, label='HIRLAM 10m')
gfs_lim=67
ax.plot(t_gfs_10[:gfs_lim], vel10_gfs[:gfs_lim], label='GFS 10m')
ax.plot(t_gfs_80[:gfs_lim], vel80_gfs[:gfs_lim], label='GFS 80m')
ax.xaxis.set_major_locator(days)
ax.xaxis.set_major_formatter(daysFmt)
ax.xaxis.set_minor_locator(hours)
#fig.autofmt_xdate()
plt.legend()
plt.grid()
plt.savefig("model_comp")
fig, ax = plt.subplots()
ax2 = ax.twinx()
days = mdates.DayLocator()
daysFmt = mdates.DateFormatter('%Y-%m-%d')
hours = mdates.HourLocator()
ax.plot(t_metno,vel80_metno)
aird80 = dmw[dmw['z']==80]['air_density_z']
ax2.plot(t_metno,aird80,c='g')
ax.xaxis.set_major_locator(days)
ax.xaxis.set_major_formatter(daysFmt)
ax.xaxis.set_minor_locator(hours)
ax.set_ylabel("wind speed")
ax2.set_ylabel("air density")
fig.tight_layout()
#fig.autofmt_xdate()
plt.savefig("density_80m")
fig, ax = plt.subplots()
ax2 = ax.twinx()
ax2.set_ylabel("energy production")
ax.set_ylabel("wind speed")
ax.plot(t_metno,vel80_metno, c='b', label='wind speed')
ax2.plot(t_metno,aird80*vel80_metno**3, c='r', label='energy prod w.dens')
ax.xaxis.set_major_locator(days)
ax.xaxis.set_major_formatter(daysFmt)
ax.xaxis.set_minor_locator(hours)
fig.autofmt_xdate()
lines, labels = ax.get_legend_handles_labels()
lines2, labels2 = ax2.get_legend_handles_labels()
ax2.legend(lines + lines2, labels + labels2, loc=0)
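# Note (an added aside): the curve above is a proxy for wind power, which
# scales as 0.5 * rho * A * v**3 for rotor swept area A; the constant factors
# are dropped since only the shape of the curve matters here.
_relative_power = 0.5 * aird80 * vel80_metno**3
print(float(_relative_power.max() / _relative_power.min()))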
density = package_api.package_api(dh,'metno_harmonie_wind_det','air_density_z',-20,60,10,80,'full_domain_harmonie')
density.make_package()
density.download_package()
density_data = Dataset(density.get_local_file_name())
## biggest change of density in one location during forecast period
maxval = np.nanmax(density_data.variables['air_density_z'],axis=0)
minval = np.nanmin(density_data.variables['air_density_z'],axis=0)
print(np.nanmax(maxval-minval),np.nanmean(maxval-minval))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: N_DAYS corresponds to the number of days of recording and SAMPLING_FREQUENCY corresponds to the sampling rate of the tetrodes recording neural activity
Step2: Data Processing Module
Step3: Often we use the epoch as a key to access more information about that epoch (information about the tetrodes, neuron, etc). Epoch keys are tuples with the following format
Step4: This is useful if we want to filter the epoch dataframe by a particular attribute (Say we only want sessions where the animal is asleep) and use the keys to access the data for that epoch.
Step5: make_tetrode_dataframe
Step6: If we want to access a particular epoch we can just pass the corresponding epoch tuple. In this case, we want animal HPa, day 6, epoch 2. This returns a dataframe where each row corresponds to a tetrode for that epoch.
Step7: Remember that the epoch dataframe index can be used as keys, which can be useful if we want to access a particular epoch.
Step8: make_neuron_dataframe
Step9: We can access the neuron dataframe for a particular epoch in the same way as the tetrodes
Step10: get_spike_indicator_dataframe
Step11: This information can be obtained from the neuron dataframe. The index of the neuron dataframe gives the key.
Step12: Like the epoch and tetrode dataframe, this allows us to filter for certain attributes (like if we want neurons in a certain brain area) and select only those neurons. For example, if we want only CA1 neurons
Step13: We can get the keys for CA1 neurons only
Step14: And then get the spike indicator data for those neurons
Step15: If we want the numpy array, use .values
Step16: get_interpolated_position_dataframe
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd  # needed for pd.concat below
from src.parameters import ANIMALS
ANIMALS
from src.parameters import N_DAYS, SAMPLING_FREQUENCY
print('Days: {0}'.format(N_DAYS))
print('Sampling Frequency: {0}'.format(SAMPLING_FREQUENCY))
from src.data_processing import make_epochs_dataframe
days = range(1, N_DAYS + 1)
epoch_info = make_epochs_dataframe(ANIMALS, days)
epoch_info
epoch_info.index.tolist()
epoch_info.loc[epoch_info.type == 'run']
epoch_info.loc[epoch_info.type == 'run'].index.tolist()
from src.data_processing import make_tetrode_dataframe
tetrode_info = make_tetrode_dataframe(ANIMALS)
list(tetrode_info.keys())
epoch_key = ('HPa', 6, 2)
tetrode_info[epoch_key]
[tetrode_info[epoch_key]
for epoch_key in epoch_info.loc[
(epoch_info.type == 'sleep') & (epoch_info.day == 8)].index]
from src.data_processing import make_neuron_dataframe
neuron_info = make_neuron_dataframe(ANIMALS)
list(neuron_info.keys())
epoch_key = ('HPa', 6, 2)
neuron_info[epoch_key]
from src.data_processing import get_spike_indicator_dataframe
neuron_key = ('HPa', 6, 2, 1, 4)
get_spike_indicator_dataframe(neuron_key, ANIMALS)
neuron_info[epoch_key].index.tolist()
neuron_info[epoch_key].query('area == "CA1"')
neuron_info[epoch_key].query('area == "CA1"').index.tolist()
pd.concat(
[get_spike_indicator_dataframe(neuron_key, ANIMALS)
for neuron_key in neuron_info[epoch_key].query('area == "CA1"').index], axis=1)
pd.concat(
[get_spike_indicator_dataframe(neuron_key, ANIMALS)
for neuron_key in neuron_info[epoch_key].query('area == "CA1"').index], axis=1).values
from src.data_processing import get_interpolated_position_dataframe
epoch_key = ('HPa', 6, 2)
get_interpolated_position_dataframe(epoch_key, ANIMALS)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a data fetcher
Step2: Access data
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 150
from skdaccess.framework.param_class import *
from skdaccess.geo.wyoming_sounding.cache import DataFetcher
sdf = DataFetcher(station_number='72493', year=2014, month=5, day_start=30, day_end=30, start_hour=12, end_hour=12)
dw = sdf.output()
label, data = next(dw.getIterator())
data.head()
plt.figure(figsize=(5,3.75))
plt.plot(data['TEMP'],data['HGHT']);
plt.ylabel('Height');
plt.xlabel('Temperature');
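# Optional follow-up (an added sketch; it assumes the sounding table also
# carries a 'DWPT' dew-point column, as in the Wyoming sounding text format).
if 'DWPT' in data.columns:
    plt.figure(figsize=(5, 3.75))
    plt.plot(data['DWPT'], data['HGHT'])
    plt.ylabel('Height')
    plt.xlabel('Dew point')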
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Joblib for Daniel
Step3: Parallel over both loops with memmapping
Step5: Direct copy of the Joblib memmapping example
|
<ASSISTANT_TASK:>
Python Code:
import time
import tempfile
import shutil
import os
import numpy as np
from joblib import Parallel, delayed
from joblib import load, dump
def griddata(gridpoints, tlayer, teff_logg_feh, method='linear', rescale=True):
Do whatever it does
# put a short wait.
time.sleep(0.5)
return np.sum(tlayer) * teff_logg_feh[0] + teff_logg_feh[1] + teff_logg_feh[2] # thing to test inputs
def inside_loop(newatm, models, layer, column, gridpoints, teff_logg_feh):
tlayer = np.zeros( len(models))
for indx, model in enumerate(models):
tlayer[indx] = model[layer, column]
# print(" for layer = {0}, column = {1}".format(layer, column))
print("[Worker %d] Layer %d and Column %d is about to griddata" % (os.getpid(), layer, column))
newatm[layer, column] = griddata(gridpoints, tlayer, teff_logg_feh, method='linear', rescale=True)
layers = range(3)
columns = range(2)
gridpoints = 5
teff = 1000
logg = 1
feh = -0.01
model1 = np.array([[1, 2], [3, 4], [5, 6]])
model2 = np.array([[7, 8], [9, 10], [11, 12]])
models = [model1, model2, model1*2, model2*2] # random models
# %%timeit
newatm = np.zeros([len(layers), len(columns)])
generator = (inside_loop(newatm, models, layer, column, gridpoints, (teff, logg, feh)) for layer in layers for column in columns)
for i in generator:
# print(newatm)
pass
print(newatm)
# %%timeit
# Turning parallel
newatm = np.zeros([len(layers), len(columns)])
print("newatm before parallel", newatm)
Parallel(n_jobs=-1, verbose=1) (delayed(inside_loop)(newatm, models, layer, column, gridpoints, (teff, logg, feh)) for layer in layers for column in columns)
time.sleep(0.5)
print("newatm after parallel", newatm)
# This runs in parallel but it does not return any data yet.
# Need to memmap the results
def inside_loop(newatm, models, layer, column, gridpoints, teff_logg_feh):
tlayer = np.zeros( len(models))
    for indx, model in enumerate(models):
tlayer[indx] = model[layer, column]
newatm[layer, column] = griddata(gridpoints, tlayer, teff_logg_feh, method='linear', rescale=True)
def griddata(gridpoints, tlayer, teff_logg_feh, method='linear', rescale=True):
    """Do whatever it does."""
time.sleep(0.5)
return True # thing to test inputs
folder = tempfile.mkdtemp()
newatm_name = os.path.join(folder, 'newatm')
try:
# Pre-allocate a writeable shared memory map as a container for the
# results of the parallel computation
    newatm = np.memmap(newatm_name, dtype=np.float64, shape=(len(layers), len(columns)), mode='w+')  # shape matches the (layer, column) result grid
print("newatm before parallel", newatm)
Parallel(n_jobs=-1, verbose=1) (delayed(inside_loop)(newatm, models, layer, column, gridpoints, (teff, logg, feh)) for layer in layers for column in columns)
time.sleep(0.5)
print("newatm after parallel", newatm)
finally:
    # deleting temp files after testing the result in the example
try:
shutil.rmtree(folder)
except:
print("Failed to delete: " + folder)
def sum_row(input, output, i):
    """Compute the sum of a row in input and store it in output."""
sum_ = input[i, :].sum()
print("[Worker %d] Sum for row %d is %f" % (os.getpid(), i, sum_))
output[i] = sum_
if __name__ == "__main__":
rng = np.random.RandomState(42)
folder = tempfile.mkdtemp()
samples_name = os.path.join(folder, 'samples')
sums_name = os.path.join(folder, 'sums')
try:
# Generate some data and an allocate an output buffer
samples = rng.normal(size=(10, int(1e6)))
# Pre-allocate a writeable shared memory map as a container for the
# results of the parallel computation
sums = np.memmap(sums_name, dtype=samples.dtype,
shape=samples.shape[0], mode='w+')
print("samples shape", samples.shape)
# Dump the input data to disk to free the memory
dump(samples, samples_name)
# Release the reference on the original in memory array and replace it
# by a reference to the memmap array so that the garbage collector can
# release the memory before forking. gc.collect() is internally called
# in Parallel just before forking.
samples = load(samples_name, mmap_mode='r')
# Fork the worker processes to perform computation concurrently
Parallel(n_jobs=4)(delayed(sum_row)(samples, sums, i)
for i in range(samples.shape[0]))
# Compare the results from the output buffer with the ground truth
print("Expected sums computed in the parent process:")
expected_result = samples.sum(axis=1)
print(expected_result)
print("Actual sums computed by the worker processes:")
print(sums)
assert np.allclose(expected_result, sums)
finally:
try:
shutil.rmtree(folder)
except:
print("Failed to delete: " + folder)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Linear SVM
Step2: A simple SVM experiment
Step3: Q
|
<ASSISTANT_TASK:>
Python Code:
from PIL import Image
import numpy as np
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
from sklearn import datasets, svm, linear_model
matplotlib.style.use('bmh')
matplotlib.rcParams['figure.figsize']=(10,7)
X = np.random.normal(5, 5, size=(50,1))
y0 = X[:,0]>0
y = y0.ravel()*2-1
# linear regression
regr = linear_model.LinearRegression().fit(X, y)
test_X=np.linspace(-10,10,100).reshape(-1,1)
plt.plot(test_X, regr.predict(test_X), alpha=0.5, c='r')
plt.plot([-regr.intercept_/regr.coef_[0]]*2, [-1.5,1.5], 'r--', alpha=0.2)
# linear svm
clf = svm.SVC(kernel="linear", C=1000)
clf.fit(X,y)
plt.ylim(-1.5, 1.5)
plt.xlim(-5, 5)
# points where the SVM decision function crosses 0, +1 and -1
x0 = -clf.intercept_[0]/clf.coef_[0,0]
x1 = (1-clf.intercept_[0])/clf.coef_[0,0]
x2 = (-1-clf.intercept_[0])/clf.coef_[0,0]
# or
#assert (clf.n_support_ == [1,1]).all()
#x1, x2 = clf.support_vectors_.ravel()
plt.plot(test_X, clf.coef_[0]*test_X+clf.intercept_, 'g', alpha=0.5);
plt.plot([x0]*2, [-1.5,1.5], 'g--', alpha=0.5)
for x in [x1, x2]:
plt.plot([x]*2, [-1.5,1.5], 'g--', alpha=0.2);
plt.fill_betweenx([-1.5,1.5], [x1]*2, [x2]*2,alpha=0.1, zorder=-1, color="g")
plt.plot(X, y, 'bx');
# Iris dataset
X, y = datasets.load_iris(return_X_y=True)
# keep only y = 0, 2 and the first two features of X
X = X[y!=1, :2]
y = y[y!=1]
clf=svm.SVC(kernel='rbf')
clf.fit(X, y)
# plot boundaries
x_min, y_min = X.min(axis=0)-1
x_max, y_max = X.max(axis=0)+1
# grid of coordinate points
grid = np.mgrid[x_min:x_max:200j, y_min:y_max:200j]
# grid.shape = (2, 200, 200)
# evaluate the SVM decision function at the grid points
Z = clf.decision_function(grid.reshape(2, -1).T)
Z = Z.reshape(grid.shape[1:])
# draw the colored regions and the decision boundary
plt.pcolormesh(grid[0], grid[1], Z > 0, cmap=plt.cm.rainbow, alpha=0.02)
plt.contour(grid[0], grid[1], Z, colors=['k', 'k', 'k'], linestyles=['--', '-', '--'],
levels=[-.5, 0, .5])
# mark the sample points
plt.scatter(X[:,0], X[:, 1], c=y, cmap=plt.cm.rainbow, zorder=10, s=50);
import gzip
import pickle
with gzip.open('mnist.pkl.gz', 'rb') as f:
train_set, validation_set, test_set = pickle.load(f, encoding='latin1')
train_X, train_y = train_set
test_X, test_y = test_set
#PCA
from sklearn.decomposition import PCA
pca = PCA(n_components=60)
train_X = pca.fit_transform(train_set[0])
test_X = pca.transform(test_set[0])
# use only first 10000 samples
idx = np.random.choice(np.arange(train_X.shape[0]), 30000, replace=False)
train_X = train_X[idx]
train_y = train_y[idx]
clf = svm.SVC(decision_function_shape='ovo')
%%timeit -n 1 -r 1
clf.fit(train_X, train_y)
%%timeit -n 1 -r 1
print(np.mean(clf.predict(train_X) == train_y))
%%timeit -n 1 -r 1
print(np.mean(clf.predict(test_X) == test_y))
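# Sketch (an addition, not part of the original exercise): a confusion matrix shows which
# digits the RBF-SVM confuses most often on the PCA-reduced test set.
from sklearn.metrics import confusion_matrix
print(confusion_matrix(test_y, clf.predict(test_X)))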
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle.
Step2: Changing Meshing Options
Step3: Adding Datasets
Step4: Running Compute
Step5: Plotting
Step6: Now let's zoom in so we can see the layout of the triangles. Note that Wilson-Devinney uses trapezoids, but since PHOEBE uses triangles, we take each of the trapezoids and split it into two triangles.
Step7: And now looking down from above. Here you can see the gaps between the surface elements (and you can also see some of the subdivision that's taking place along the limb).
Step8: And see which elements are visible at the current time. This defaults to use the 'RdYlGn' colormap which will make visible elements green, partially hidden elements yellow, and hidden elements red. Note that the observer is in the positive w-direction.
|
<ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
phoebe.devel_on() # CURRENTLY REQUIRED FOR WD-STYLE MESHING (WHICH IS EXPERIMENTAL)
logger = phoebe.logger()
b = phoebe.default_binary()
b.set_value_all('mesh_method', 'wd')
b.set_value_all('eclipse_method', 'graham')
b.add_dataset('mesh', compute_times=[0, 0.5], dataset='mesh01', columns=['visibilities'])
b.run_compute(irrad_method='none')
afig, mplfig = b['mesh01@model'].plot(time=0.5, x='us', y='vs',
show=True)
afig, mplfig = b['primary@mesh01@model'].plot(time=0.0, x='us', y='vs',
ec='blue', fc='gray',
xlim=(-0.2,0.2), ylim=(-0.2,0.2),
show=True)
afig, mplfig = b['primary@mesh01@model'].plot(time=0.0, x='us', y='ws',
ec='blue', fc='gray',
xlim=(-0.1,0.1), ylim=(-2.75,-2.55),
show=True)
afig, mplfig = b['secondary@mesh01@model'].plot(time=0.0, x='us', y='ws',
ec='face', fc='visibilities',
show=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Add cyclic data. Set the minimum and maximum contour values as well as the contour interval.
Step2: Open a workstation; here a PNG workstation is used.
Step3: Set resources.
Step4: Draw the plot.
Step5: Show the plot in this notebook.
|
<ASSISTANT_TASK:>
Python Code:
import Ngl,Nio
#-- define variables
fname = "/Users/k204045/NCL/general/data/new_data/rectilinear_grid_2D.nc" #-- data file name
#-- open file and read variables
f = Nio.open_file(fname,"r") #-- open data file
temp = f.variables["tsurf"][0,::-1,:] #-- first time step, reverse latitude
lat = f.variables["lat"][::-1] #-- reverse latitudes
lon = f.variables["lon"][:] #-- all longitudes
tempac = Ngl.add_cyclic(temp[:,:])
minval = 250. #-- minimum contour level
maxval = 315 #-- maximum contour level
inc = 5. #-- contour level spacing
ncn = (maxval-minval)/inc + 1 #-- number of contour levels.
wkres = Ngl.Resources() #-- generate a resources object for the workstation
wkres.wkColorMap = "rainbow" #-- choose colormap
wks_type = "png" #-- graphics output type
wks = Ngl.open_wks(wks_type,"plot_contour_PyNGL",wkres) #-- open workstation
res = Ngl.Resources() #-- generate a resources object for the plot
if hasattr(f.variables["tsurf"],"long_name"):
res.tiMainString = f.variables["tsurf"].long_name #-- set main title
res.vpXF = 0.1 #-- start x-position of viewport
res.vpYF = 0.9 #-- start y-position of viewport
res.vpWidthF = 0.7 #-- width of viewport
res.vpHeightF = 0.7 #-- height of viewport
res.cnFillOn = True #-- turn on contour fill.
res.cnLinesOn = False #-- turn off contour lines
res.cnLineLabelsOn = False #-- turn off line labels.
res.cnInfoLabelOn = False #-- turn off info label.
res.cnLevelSelectionMode = "ManualLevels" #-- select manual level selection mode
res.cnMinLevelValF = minval #-- minimum contour value
res.cnMaxLevelValF = maxval #-- maximum contour value
res.cnLevelSpacingF = inc #-- contour increment
res.mpGridSpacingF = 30. #-- map grid spacing
res.sfXCStartV = float(min(lon)) #-- x-axis location of 1st element lon
res.sfXCEndV = float(max(lon)) #-- x-axis location of last element lon
res.sfYCStartV = float(min(lat)) #-- y-axis location of 1st element lat
res.sfYCEndV = float(max(lat)) #-- y-axis location of last element lat
res.pmLabelBarDisplayMode = "Always" #-- turn on the label bar.
res.lbOrientation = "Horizontal" #-- labelbar orientation
map = Ngl.contour_map(wks,tempac,res) #-- draw contours over a map.
#-- end
Ngl.delete_wks(wks) #-- this need to be done to close the graphics output file
Ngl.end()
from IPython.display import Image
Image(filename='plot_contour_PyNGL.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load an example dataset, the preload flag loads the data into memory now
Step2: Signal processing
Step3: In addition, there are functions for applying the Hilbert transform, which is
Step4: Finally, it is possible to apply arbitrary functions to your data to do
|
<ASSISTANT_TASK:>
Python Code:
import mne
import os.path as op
import numpy as np
from matplotlib import pyplot as plt
data_path = op.join(mne.datasets.sample.data_path(), 'MEG',
'sample', 'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(data_path, preload=True)
raw = raw.crop(0, 10)
print(raw)
filt_bands = [(1, 3), (3, 10), (10, 20), (20, 60)]
_, (ax, ax2) = plt.subplots(2, 1, figsize=(15, 10))
data, times = raw[0]
_ = ax.plot(data[0])
for fmin, fmax in filt_bands:
raw_filt = raw.copy()
raw_filt.filter(fmin, fmax, fir_design='firwin')
_ = ax2.plot(raw_filt[0][0][0])
ax2.legend(filt_bands)
ax.set_title('Raw data')
ax2.set_title('Band-pass filtered data')
# Filter signal with a fairly steep filter, then take hilbert transform
raw_band = raw.copy()
raw_band.filter(12, 18, l_trans_bandwidth=2., h_trans_bandwidth=2.,
fir_design='firwin')
raw_hilb = raw_band.copy()
hilb_picks = mne.pick_types(raw_band.info, meg=False, eeg=True)
raw_hilb.apply_hilbert(hilb_picks)
print(raw_hilb[0][0].dtype)
# Take the amplitude and phase
raw_amp = raw_hilb.copy()
raw_amp.apply_function(np.abs, hilb_picks)
raw_phase = raw_hilb.copy()
raw_phase.apply_function(np.angle, hilb_picks)
_, (a1, a2) = plt.subplots(2, 1, figsize=(15, 10))
a1.plot(raw_band[hilb_picks[0]][0][0].real)
a1.plot(raw_amp[hilb_picks[0]][0][0].real)
a2.plot(raw_phase[hilb_picks[0]][0][0].real)
a1.set_title('Amplitude of frequency band')
a2.set_title('Phase of frequency band')
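# Sketch (an addition to the tutorial): apply_function accepts arbitrary callables as well,
# e.g. z-scoring each channel of the band-passed copy. The lambda below is purely illustrative.
raw_zscore = raw_band.copy()
raw_zscore.apply_function(lambda x: (x - x.mean()) / x.std(), hilb_picks)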
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Genetic Algorithm Workshop
Step11: The optimization problem
Step12: Great. Now that the class and its basic methods are defined, we move on to code up the GA.
Step13: Crossover
Step14: Mutation
Step16: Fitness Evaluation
Step17: Fitness and Elitism
Step18: Putting it all together and making the GA
Step19: Visualize
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
# All the imports
from __future__ import print_function, division
from math import *
import random
import sys
import matplotlib.pyplot as plt
# TODO 1: Enter your unity ID here
__author__ = "<unity-id>"
class O:
    """Basic Class which
    - Helps dynamic updates
    - Pretty Prints
    """
def __init__(self, **kwargs):
self.has().update(**kwargs)
def has(self):
return self.__dict__
def update(self, **kwargs):
self.has().update(kwargs)
return self
def __repr__(self):
show = [':%s %s' % (k, self.has()[k])
for k in sorted(self.has().keys())
                if k[0] != "_"]
txt = ' '.join(show)
if len(txt) > 60:
show = map(lambda x: '\t' + x + '\n', show)
return '{' + ' '.join(show) + '}'
print("Unity ID: ", __author__)
# Few Utility functions
def say(*lst):
    """Print without going to a new line."""
print(*lst, end="")
sys.stdout.flush()
def random_value(low, high, decimals=2):
    """Generate a random number between low and high.
    decimals indicates the number of decimal places.
    """
return round(random.uniform(low, high),decimals)
def gt(a, b): return a > b
def lt(a, b): return a < b
def shuffle(lst):
    """Shuffle a list."""
random.shuffle(lst)
return lst
class Decision(O):
    """Class indicating Decision of a problem"""
def __init__(self, name, low, high):
        """
        @param name: Name of the decision
        @param low: minimum value
        @param high: maximum value
        """
O.__init__(self, name=name, low=low, high=high)
class Objective(O):
    """Class indicating Objective of a problem"""
def __init__(self, name, do_minimize=True):
        """
        @param name: Name of the objective
        @param do_minimize: Flag indicating if objective has to be minimized or maximized
        """
O.__init__(self, name=name, do_minimize=do_minimize)
class Point(O):
    """Represents a member of the population"""
def __init__(self, decisions):
O.__init__(self)
self.decisions = decisions
self.objectives = None
def __hash__(self):
return hash(tuple(self.decisions))
def __eq__(self, other):
return self.decisions == other.decisions
def clone(self):
new = Point(self.decisions)
new.objectives = self.objectives
return new
class Problem(O):
    """Class representing the cone problem."""
def __init__(self):
O.__init__(self)
# TODO 2: Code up decisions and objectives below for the problem
# using the auxilary classes provided above.
self.decisions = [Decision('r', 0, 10), Decision('h', 0, 20)]
self.objectives = [Objective('S'), Objective('T')]
@staticmethod
def evaluate(point):
[r, h] = point.decisions
# TODO 3: Evaluate the objectives S and T for the point.
return point.objectives
@staticmethod
def is_valid(point):
[r, h] = point.decisions
# TODO 4: Check if the point has valid decisions
return True
def generate_one(self):
# TODO 5: Generate a valid instance of Point.
return None
cone = Problem()
point = cone.generate_one()
cone.evaluate(point)
print(point)
def populate(problem, size):
population = []
# TODO 6: Create a list of points of length 'size'
return population
# or if ur python OBSESSED
# return [problem.generate_one() for _ in xrange(size)]
print(populate(cone, 5))
def crossover(mom, dad):
# TODO 7: Create a new point which contains decisions from
# the first half of mom and second half of dad
return None
pop = populate(cone,5)
crossover(pop[0], pop[1])
def mutate(problem, point, mutation_rate=0.01):
# TODO 8: Iterate through all the decisions in the problem
# and if the probability is less than mutation rate
# change the decision(randomly set it between its max and min).
return None
def bdom(problem, one, two):
    """Return if one dominates two"""
objs_one = problem.evaluate(one)
objs_two = problem.evaluate(two)
dominates = False
# TODO 9: Return True/False based on the definition
# of bdom above.
return dominates
def fitness(problem, population, point):
dominates = 0
# TODO 10: Evaluate fitness of a point.
# For this workshop define fitness of a point
# as the number of points dominated by it.
# For example point dominates 5 members of population,
# then fitness of point is 5.
return dominates
def elitism(problem, population, retain_size):
# TODO 11: Sort the population with respect to the fitness
# of the points and return the top 'retain_size' points of the population
return population[:retain_size]
def ga(pop_size = 100, gens = 250):
problem = Problem()
population = populate(problem, pop_size)
[problem.evaluate(point) for point in population]
initial_population = [point.clone() for point in population]
gen = 0
while gen < gens:
say(".")
children = []
for _ in range(pop_size):
mom = random.choice(population)
dad = random.choice(population)
while (mom == dad):
dad = random.choice(population)
child = mutate(problem, crossover(mom, dad))
if problem.is_valid(child) and child not in population+children:
children.append(child)
population += children
population = elitism(problem, population, pop_size)
gen += 1
print("")
return initial_population, population
def plot_pareto(initial, final):
initial_objs = [point.objectives for point in initial]
final_objs = [point.objectives for point in final]
initial_x = [i[0] for i in initial_objs]
initial_y = [i[1] for i in initial_objs]
final_x = [i[0] for i in final_objs]
final_y = [i[1] for i in final_objs]
plt.scatter(initial_x, initial_y, color='b', marker='+', label='initial')
plt.scatter(final_x, final_y, color='r', marker='o', label='final')
plt.title("Scatter Plot between initial and final population of GA")
plt.ylabel("Total Surface Area(T)")
plt.xlabel("Curved Surface Area(S)")
plt.legend(loc=9, bbox_to_anchor=(0.5, -0.175), ncol=2)
plt.show()
initial, final = ga()
plot_pareto(initial, final)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Imports
Step3: TPU/GPU detection
Step4: Colab-only auth for this notebook and the TPU
Step5: tf.data.Dataset
Step6: Let's have a look at the data
Step7: Keras model
Step8: Train and validate the model
Step9: Visualize predictions
|
<ASSISTANT_TASK:>
Python Code:
BATCH_SIZE = 64
LEARNING_RATE = 0.002
# GCS bucket for training logs and for saving the trained model
# You can leave this empty for local saving, unless you are using a TPU.
# TPUs do not have access to your local instance and can only write to GCS.
BUCKET="" # a valid bucket name must start with gs://
training_images_file = 'gs://mnist-public/train-images-idx3-ubyte'
training_labels_file = 'gs://mnist-public/train-labels-idx1-ubyte'
validation_images_file = 'gs://mnist-public/t10k-images-idx3-ubyte'
validation_labels_file = 'gs://mnist-public/t10k-labels-idx1-ubyte'
import os, re, math, json, time
import PIL.Image, PIL.ImageFont, PIL.ImageDraw
import numpy as np
import tensorflow as tf
from matplotlib import pyplot as plt
from tensorflow.python.platform import tf_logging
print("Tensorflow version " + tf.__version__)
try: # detect TPUs
tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.experimental.TPUStrategy(tpu)
except ValueError: # detect GPUs
strategy = tf.distribute.MirroredStrategy() # for GPU or multi-GPU machines
#strategy = tf.distribute.get_strategy() # default strategy that works on CPU and single GPU
#strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() # for clusters of multi-GPU machines
print("Number of accelerators: ", strategy.num_replicas_in_sync)
# adjust batch size and learning rate for distributed computing
global_batch_size = BATCH_SIZE * strategy.num_replicas_in_sync # num replicas is 8 on a single TPU or N when running on N GPUs.
learning_rate = LEARNING_RATE * strategy.num_replicas_in_sync
#@title visualization utilities [RUN ME]
"""
This cell contains helper functions used for visualization
and downloads only. You can skip reading it. There is very
little useful Keras/Tensorflow code here.
"""
# Matplotlib config
plt.rc('image', cmap='gray_r')
plt.rc('grid', linewidth=0)
plt.rc('xtick', top=False, bottom=False, labelsize='large')
plt.rc('ytick', left=False, right=False, labelsize='large')
plt.rc('axes', facecolor='F8F8F8', titlesize="large", edgecolor='white')
plt.rc('text', color='a8151a')
plt.rc('figure', facecolor='F0F0F0')# Matplotlib fonts
MATPLOTLIB_FONT_DIR = os.path.join(os.path.dirname(plt.__file__), "mpl-data/fonts/ttf")
# pull a batch from the datasets. This code is not very nice, it gets much better in eager mode (TODO)
def dataset_to_numpy_util(training_dataset, validation_dataset, N):
# get one batch from each: 10000 validation digits, N training digits
unbatched_train_ds = training_dataset.unbatch()
# This is the TF 2.0 "eager execution" way of iterating through a tf.data.Dataset
for v_images, v_labels in validation_dataset:
break
for t_images, t_labels in unbatched_train_ds.batch(N):
break
validation_digits = v_images.numpy()
validation_labels = v_labels.numpy()
training_digits = t_images.numpy()
training_labels = t_labels.numpy()
# these were one-hot encoded in the dataset
validation_labels = np.argmax(validation_labels, axis=1)
training_labels = np.argmax(training_labels, axis=1)
return (training_digits, training_labels,
validation_digits, validation_labels)
# create digits from local fonts for testing
def create_digits_from_local_fonts(n):
font_labels = []
img = PIL.Image.new('LA', (28*n, 28), color = (0,255)) # format 'LA': black in channel 0, alpha in channel 1
font1 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'DejaVuSansMono-Oblique.ttf'), 25)
font2 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'STIXGeneral.ttf'), 25)
d = PIL.ImageDraw.Draw(img)
for i in range(n):
font_labels.append(i%10)
d.text((7+i*28,0 if i<10 else -4), str(i%10), fill=(255,255), font=font1 if i<10 else font2)
font_digits = np.array(img.getdata(), np.float32)[:,0] / 255.0 # black in channel 0, alpha in channel 1 (discarded)
font_digits = np.reshape(np.stack(np.split(np.reshape(font_digits, [28, 28*n]), n, axis=1), axis=0), [n, 28*28])
return font_digits, font_labels
# utility to display a row of digits with their predictions
def display_digits(digits, predictions, labels, title, n):
plt.figure(figsize=(13,3))
digits = np.reshape(digits, [n, 28, 28])
digits = np.swapaxes(digits, 0, 1)
digits = np.reshape(digits, [28, 28*n])
plt.yticks([])
plt.xticks([28*x+14 for x in range(n)], predictions)
for i,t in enumerate(plt.gca().xaxis.get_ticklabels()):
if predictions[i] != labels[i]: t.set_color('red') # bad predictions in red
plt.imshow(digits)
plt.grid(None)
plt.title(title)
# utility to display multiple rows of digits, sorted by unrecognized/recognized status
def display_top_unrecognized(digits, predictions, labels, n, lines):
idx = np.argsort(predictions==labels) # sort order: unrecognized first
for i in range(lines):
display_digits(digits[idx][i*n:(i+1)*n], predictions[idx][i*n:(i+1)*n], labels[idx][i*n:(i+1)*n],
"{} sample validation digits out of {} with bad predictions in red and sorted first".format(n*lines, len(digits)) if i==0 else "", n)
#IS_COLAB_BACKEND = 'COLAB_GPU' in os.environ # this is always set on Colab, the value is 0 or 1 depending on GPU presence
#if IS_COLAB_BACKEND:
# from google.colab import auth
# auth.authenticate_user() # Authenticates the backend and also the TPU using your credentials so that they can access your private GCS buckets
def read_label(tf_bytestring):
label = tf.io.decode_raw(tf_bytestring, tf.uint8)
label = tf.reshape(label, [])
label = tf.one_hot(label, 10)
return label
def read_image(tf_bytestring):
image = tf.io.decode_raw(tf_bytestring, tf.uint8)
image = tf.cast(image, tf.float32)/256.0
image = tf.reshape(image, [28*28])
return image
def load_dataset(image_file, label_file):
imagedataset = tf.data.FixedLengthRecordDataset(image_file, 28*28, header_bytes=16)
imagedataset = imagedataset.map(read_image, num_parallel_calls=16)
labelsdataset = tf.data.FixedLengthRecordDataset(label_file, 1, header_bytes=8)
labelsdataset = labelsdataset.map(read_label, num_parallel_calls=16)
dataset = tf.data.Dataset.zip((imagedataset, labelsdataset))
return dataset
def get_training_dataset(image_file, label_file, batch_size):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
dataset = dataset.shuffle(5000, reshuffle_each_iteration=True)
dataset = dataset.repeat() # Mandatory for Keras for now
dataset = dataset.batch(batch_size, drop_remainder=True) # drop_remainder is important on TPU, batch size must be fixed
dataset = dataset.prefetch(-1) # fetch next batches while training on the current one (-1: autotune prefetch buffer size)
return dataset
def get_validation_dataset(image_file, label_file):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
dataset = dataset.repeat() # Mandatory for Keras for now
dataset = dataset.batch(10000, drop_remainder=True) # 10000 items in eval dataset, all in one batch
return dataset
# instantiate the datasets
training_dataset = get_training_dataset(training_images_file, training_labels_file, global_batch_size)
validation_dataset = get_validation_dataset(validation_images_file, validation_labels_file)
N = 24
(training_digits, training_labels,
validation_digits, validation_labels) = dataset_to_numpy_util(training_dataset, validation_dataset, N)
display_digits(training_digits, training_labels, training_labels, "training digits and their labels", N)
display_digits(validation_digits[:N], validation_labels[:N], validation_labels[:N], "validation digits and their labels", N)
font_digits, font_labels = create_digits_from_local_fonts(N)
# This model trains to 99.4% (sometimes 99.5%) accuracy in 10 epochs (with a batch size of 64)
def make_model():
model = tf.keras.Sequential(
[
tf.keras.layers.Reshape(input_shape=(28*28,), target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(filters=6, kernel_size=3, padding='same', use_bias=True, activation='relu'),
tf.keras.layers.Conv2D(filters=12, kernel_size=6, padding='same', use_bias=True, activation='relu', strides=2),
tf.keras.layers.Conv2D(filters=24, kernel_size=6, padding='same', use_bias=True, activation='relu', strides=2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(200, use_bias=True, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', # learning rate will be set by LearningRateScheduler
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
with strategy.scope(): # the new way of handling distribution strategies in Tensorflow 1.14+
model = make_model()
# print model layers
model.summary()
# set up Tensorboard logs
timestamp = time.strftime("%Y-%m-%d-%H-%M-%S")
log_dir=os.path.join(BUCKET, 'mnist-logs', timestamp)
tb_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, update_freq=50*global_batch_size)
print("Tensorboard loggs written to: ", log_dir)
EPOCHS = 10
steps_per_epoch = 60000//global_batch_size # 60,000 items in this dataset
print("Step (batches) per epoch: ", steps_per_epoch)
history = model.fit(training_dataset, steps_per_epoch=steps_per_epoch, epochs=EPOCHS,
validation_data=validation_dataset, validation_steps=1,
callbacks=[tb_callback], verbose=1)
# recognize digits from local fonts
probabilities = model.predict(font_digits, steps=1)
predicted_labels = np.argmax(probabilities, axis=1)
display_digits(font_digits, predicted_labels, font_labels, "predictions from local fonts (bad predictions in red)", N)
# recognize validation digits
probabilities = model.predict(validation_digits, steps=1)
predicted_labels = np.argmax(probabilities, axis=1)
display_top_unrecognized(validation_digits, predicted_labels, validation_labels, N, 7)
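# Sketch (an addition to the notebook): export the trained model for later reuse. The
# destination path is an assumption; when training on a TPU it should point at a GCS bucket,
# which is why BUCKET is prepended when it is set.
export_dir = os.path.join(BUCKET, "mnist-model", timestamp) if BUCKET else os.path.join("mnist-model", timestamp)
model.save(export_dir)  # SavedModel format
print("Model exported to", export_dir)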
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Corpus
Step2: Make Topic Model
Step3: Evaluate/Visualize Topic Model
Step4: Check the topics in documents
Step5: Visualize words in topics
|
<ASSISTANT_TASK:>
Python Code:
# enable showing matplotlib image inline
%matplotlib inline
# autoreload module
%load_ext autoreload
%autoreload 2
PROJECT_ROOT = "/"
def load_local_package():
import os
import sys
root = os.path.join(os.getcwd(), "./")
sys.path.append(root) # load project root
return root
PROJECT_ROOT = load_local_package()
prefix = "salons"
def load_corpus(p):
import os
import json
from gensim import corpora
s_path = os.path.join(PROJECT_ROOT, "./data/{0}.json".format(p))
d_path = os.path.join(PROJECT_ROOT, "./data/{0}_dict.dict".format(p))
c_path = os.path.join(PROJECT_ROOT, "./data/{0}_corpus.mm".format(p))
s = []
with open(s_path, "r", encoding="utf-8") as f:
s = json.load(f)
d = corpora.Dictionary.load(d_path)
c = corpora.MmCorpus(c_path)
return s, d, c
salons, dictionary, corpus = load_corpus(prefix)
print(dictionary)
print(corpus)
from gensim import models
topic_range = range(2, 5)
test_rate = 0.2
def split_corpus(c, rate_or_size):
import math
size = 0
if isinstance(rate_or_size, float):
size = math.floor(len(c) * rate_or_size)
else:
size = rate_or_size
# simple split, not take sample randomly
left = c[:-size]
right = c[-size:]
return left, right
def calc_perplexity(m, c):
import numpy as np
return np.exp(-m.log_perplexity(c))
def search_model(c, rate_or_size):
most = [1.0e6, None]
training, test = split_corpus(c, rate_or_size)
print("dataset: training/test = {0}/{1}".format(len(training), len(test)))
for t in topic_range:
m = models.LdaModel(corpus=training, id2word=dictionary, num_topics=t, iterations=250, passes=5)
p1 = calc_perplexity(m, training)
p2 = calc_perplexity(m, test)
print("{0}: perplexity is {1}/{2}".format(t, p1, p2))
if p2 < most[0]:
most[0] = p2
most[1] = m
return most[0], most[1]
perplexity, model = search_model(corpus, test_rate)
print("Best model: topics={0}, perplexity={1}".format(model.num_topics, perplexity))
def calc_topic_distances(m, topic):
import numpy as np
def kldiv(p, q):
distance = np.sum(p * np.log(p / q))
return distance
# get probability of each words
# https://github.com/piskvorky/gensim/blob/develop/gensim/models/ldamodel.py#L733
t = m.state.get_lambda()
for i, p in enumerate(t):
t[i] = t[i] / t[i].sum()
base = t[topic]
distances = [(i_p[0], kldiv(base, i_p[1])) for i_p in enumerate(t) if i_p[0] != topic]
return distances
def plot_distance_matrix(m):
import numpy as np
import matplotlib.pylab as plt
# make distance matrix
mt = []
for i in range(m.num_topics):
d = calc_topic_distances(m, i)
d.insert(i, (i, 0)) # distance between same topic
d = [_d[1] for _d in d]
mt.append(d)
mt = np.array(mt)
# plot matrix
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.set_aspect("equal")
plt.imshow(mt, interpolation="nearest", cmap=plt.cm.ocean)
plt.yticks(range(mt.shape[0]))
plt.xticks(range(mt.shape[1]))
plt.colorbar()
plt.show()
plot_distance_matrix(model)
def show_document_topics(c, m, sample_size=200, width=1):
import random
import numpy as np
import matplotlib.pylab as plt
# make document/topics matrix
d_topics = []
t_documents = {}
samples = random.sample(range(len(c)), sample_size)
for s in samples:
ts = m.__getitem__(corpus[s], -1)
d_topics.append([v[1] for v in ts])
max_topic = max(ts, key=lambda x: x[1])
if max_topic[0] not in t_documents:
t_documents[max_topic[0]] = []
t_documents[max_topic[0]] += [(s, max_topic[1])]
d_topics = np.array(d_topics)
for t in t_documents:
t_documents[t] = sorted(t_documents[t], key=lambda x: x[1], reverse=True)
# draw cumulative bar chart
fig = plt.figure(figsize=(20, 3))
N, K = d_topics.shape
indices = np.arange(N)
height = np.zeros(N)
bar = []
for k in range(K):
color = plt.cm.coolwarm(k / K, 1)
p = plt.bar(indices, d_topics[:, k], width, bottom=None if k == 0 else height, color=color)
height += d_topics[:, k]
bar.append(p)
plt.ylim((0, 1))
plt.xlim((0, d_topics.shape[0]))
topic_labels = ['Topic #{}'.format(k) for k in range(K)]
plt.legend([b[0] for b in bar], topic_labels)
plt.show(bar)
return d_topics, t_documents
document_topics, topic_documents = show_document_topics(corpus, model)
num_show_ranks = 5
for t in topic_documents:
print("Topic #{0} salons".format(t) + " " + "*" * 100)
for i, v in topic_documents[t][:num_show_ranks]:
print("{0}({1}):{2}".format(salons[i]["name"], v, salons[i]["urls"]["pc"]))
def visualize_topic(m, word_count=10, fontsize_base=10):
import matplotlib.pylab as plt
from matplotlib.font_manager import FontProperties
font = lambda s: FontProperties(fname=r'C:\Windows\Fonts\meiryo.ttc', size=s)
# get words in topic
topic_words = []
for t in range(m.num_topics):
words = m.show_topic(t, topn=word_count)
topic_words.append(words)
# plot words
fig = plt.figure(figsize=(8, 5))
for i, ws in enumerate(topic_words):
sub = fig.add_subplot(1, m.num_topics, i + 1)
plt.ylim(0, word_count + 0.5)
plt.xticks([])
plt.yticks([])
plt.title("Topic #{}".format(i))
for j, (share, word) in enumerate(ws):
size = fontsize_base + (fontsize_base * share * 2)
w = "%s(%1.3f)" % (word, share)
plt.text(0.1, word_count-j-0.5, w, ha="left", fontproperties=font(size))
plt.tight_layout()
plt.show()
visualize_topic(model)
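# Persistence sketch (an addition): save the trained LDA model so the topics can be reloaded
# without re-training. The file name is an illustrative assumption.
import os
model_path = os.path.join(PROJECT_ROOT, "./data/{0}_lda.model".format(prefix))
model.save(model_path)
# reloaded = models.LdaModel.load(model_path)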
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example
Step2: To illustrate the water filling principle, we will plot $\alpha_i + x_i$ and check that this level is flat where power has been allocated
|
<ASSISTANT_TASK:>
Python Code:
#!/usr/bin/env python3
# @author: R. Gowers, S. Al-Izzi, T. Pollington, R. Hill & K. Briggs
import numpy as np
import cvxpy as cvx
def water_filling(n,a,sum_x=1):
'''
Boyd and Vandenberghe, Convex Optimization, example 5.2 page 145
Water-filling.
This problem arises in information theory, in allocating power to a set of
n communication channels in order to maximise the total channel capacity.
The variable x_i represents the transmitter power allocated to the ith channel,
and log(α_i+x_i) gives the capacity or maximum communication rate of the channel.
The objective is to minimise -∑log(α_i+x_i) subject to the constraint ∑x_i = 1
'''
# Declare variables and parameters
x = cvx.Variable(n)
alpha = cvx.Parameter(n,sign='positive')
alpha.value = a
# Choose objective function. Interpret as maximising the total communication rate of all the channels
obj = cvx.Maximize(cvx.sum_entries(cvx.log(alpha + x)))
# Declare constraints
constraints = [x >= 0, cvx.sum_entries(x) - sum_x == 0]
# Solve
prob = cvx.Problem(obj, constraints)
prob.solve()
if(prob.status=='optimal'):
return prob.status,prob.value,x.value
else:
return prob.status,np.nan,np.nan
# As an example, we will solve the water filling problem with 3 buckets, each with different α
np.set_printoptions(precision=3)
buckets=3
alpha = np.array([0.8,1.0,1.2])
stat,prob,x=water_filling(buckets,alpha)
print('Problem status: ',stat)
print('Optimal communication rate = %.4g '%prob)
print('Transmitter powers:\n', x)
import matplotlib
import matplotlib.pylab as plt
%matplotlib inline
matplotlib.rcParams.update({'font.size': 14})
axis = np.arange(0.5,buckets+1.5,1)
index = axis+0.5
X = np.asarray(x).flatten()
Y = alpha + X
# to include the last data point as a step, we need to repeat it
A = np.concatenate((alpha,[alpha[-1]]))
X = np.concatenate((X,[X[-1]]))
Y = np.concatenate((Y,[Y[-1]]))
plt.xticks(index)
plt.xlim(0.5,buckets+0.5)
plt.ylim(0,1.5)
plt.step(axis,A,where='post',label =r'$\alpha$',lw=2)
plt.step(axis,Y,where='post',label=r'$\alpha + x$',lw=2)
plt.legend(loc='lower right')
plt.xlabel('Bucket Number')
plt.ylabel('Power Level')
plt.title('Water Filling Solution')
plt.show()
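# Optional check (a sketch, not part of the original example): by the KKT conditions the
# optimal powers are x_i = max(0, w - alpha_i) for a common "water level" w chosen so that
# the powers sum to 1. A simple bisection on w reproduces the CVXPY solution.
def water_level_solution(a, total=1.0, iters=100):
    lo, hi = 0.0, a.max() + total  # bracket for the water level
    for _ in range(iters):
        w = 0.5 * (lo + hi)
        if np.maximum(0.0, w - a).sum() < total:
            lo = w
        else:
            hi = w
    return np.maximum(0.0, w - a)

print('Analytic water-filling powers:', water_level_solution(alpha))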
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here we do some things in the name of speed, such as crop (which will
Step2: Now we band-pass filter our data and create epochs.
Step3: Compute the forward and inverse
Step4: Compute label time series and do envelope correlation
Step5: Compute the degree and plot it
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Eric Larson <larson.eric.d@gmail.com>
# Sheraz Khan <sheraz@khansheraz.com>
# Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
import mne
from mne.beamformer import make_lcmv, apply_lcmv_epochs
from mne.connectivity import envelope_correlation
from mne.preprocessing import compute_proj_ecg, compute_proj_eog
data_path = mne.datasets.brainstorm.bst_resting.data_path()
subjects_dir = op.join(data_path, 'subjects')
subject = 'bst_resting'
trans = op.join(data_path, 'MEG', 'bst_resting', 'bst_resting-trans.fif')
bem = op.join(subjects_dir, subject, 'bem', subject + '-5120-bem-sol.fif')
raw_fname = op.join(data_path, 'MEG', 'bst_resting',
'subj002_spontaneous_20111102_01_AUX.ds')
crop_to = 60.
raw = mne.io.read_raw_ctf(raw_fname, verbose='error')
raw.crop(0, crop_to).pick_types(meg=True, eeg=False).load_data().resample(80)
raw.apply_gradient_compensation(3)
projs_ecg, _ = compute_proj_ecg(raw, n_grad=1, n_mag=2)
projs_eog, _ = compute_proj_eog(raw, n_grad=1, n_mag=2, ch_name='MLT31-4407')
raw.info['projs'] += projs_ecg
raw.info['projs'] += projs_eog
raw.apply_proj()
cov = mne.compute_raw_covariance(raw) # compute before band-pass of interest
raw.filter(14, 30)
events = mne.make_fixed_length_events(raw, duration=5.)
epochs = mne.Epochs(raw, events=events, tmin=0, tmax=5.,
baseline=None, reject=dict(mag=8e-13), preload=True)
del raw
# This source space is really far too coarse, but we do this for speed
# considerations here
pos = 15. # 1.5 cm is very broad, done here for speed!
src = mne.setup_volume_source_space('bst_resting', pos, bem=bem,
subjects_dir=subjects_dir, verbose=True)
fwd = mne.make_forward_solution(epochs.info, trans, src, bem)
data_cov = mne.compute_covariance(epochs)
filters = make_lcmv(epochs.info, fwd, data_cov, 0.05, cov,
pick_ori='max-power', weight_norm='nai')
del fwd
epochs.apply_hilbert() # faster to do in sensor space
stcs = apply_lcmv_epochs(epochs, filters, return_generator=True)
corr = envelope_correlation(stcs, verbose=True)
degree = mne.connectivity.degree(corr, 0.15)
stc = mne.VolSourceEstimate(degree, [src[0]['vertno']], 0, 1, 'bst_resting')
brain = stc.plot(
src, clim=dict(kind='percent', lims=[75, 85, 95]), colormap='gnuplot',
subjects_dir=subjects_dir, mode='glass_brain')
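# Optional sketch (an addition): the all-to-all envelope correlation matrix itself can be
# inspected directly, assuming corr is the usual (n_sources, n_sources) array.
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(4, 4))
im = ax.imshow(corr, cmap='viridis')
ax.set(title='Pairwise envelope correlation', xlabel='Source', ylabel='Source')
fig.colorbar(im)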
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: 1. Newton's method for functions of complex variables - stability and basins of attraction. (30 points)
Step7: 2. Ill-conditioned linear problems. (20 points)
Step8: Condition numbers are the ratio of the largest to the smallest singular value
Step9: One way to think about the condition number is in terms of the ratio of the largest singular value to the smallest one - so a measure of the disproportionate stretching effect of the linear transform in one direction versus another. When this is very big, it means that errors in one or more directions will be amplified greatly. This often occurs because one or more columns are "almost" dependent - i.e. they can be approximated by a linear combination of the other columns.
Step13: 4. One of the goals of the course is that you will be able to implement novel algorithms from the literature. (30 points)
|
<ASSISTANT_TASK:>
Python Code:
from sympy import Symbol, exp, I, pi, N, expand
from sympy import init_printing
init_printing()
expand(exp(2*pi*I/3), complex=True)
expand(exp(4*pi*I/3), complex=True)
plt.figure(figsize=(4,4))
roots = np.array([[1,0], [-0.5, np.sqrt(3)/2], [-0.5, -np.sqrt(3)/2]])
plt.scatter(roots[:,0], roots[:,1], s=50, c='red')
xp = np.linspace(0, 2*np.pi, 100)
plt.plot(np.cos(xp), np.sin(xp), c='blue');
def newton(z, f, fprime, max_iter=100, tol=1e-6):
    """The Newton-Raphson method."""
for i in range(max_iter):
step = f(z)/fprime(z)
if abs(step) < tol:
return i, z
z -= step
return i, z
def plot_newton_iters(p, pprime, n=200, extent=[-1,1,-1,1], cmap='hsv'):
    """Shows how long it takes to converge to a root using the Newton-Raphson method."""
m = np.zeros((n,n))
xmin, xmax, ymin, ymax = extent
for r, x in enumerate(np.linspace(xmin, xmax, n)):
for s, y in enumerate(np.linspace(ymin, ymax, n)):
z = x + y*1j
m[s, r] = newton(z, p, pprime)[0]
plt.imshow(m, cmap=cmap, extent=extent)
def plot_newton_basins(p, pprime, n=200, extent=[-1,1,-1,1], cmap='jet'):
    """Shows basin of attraction for convergence to each root using the Newton-Raphson method."""
root_count = 0
roots = {}
m = np.zeros((n,n))
xmin, xmax, ymin, ymax = extent
for r, x in enumerate(np.linspace(xmin, xmax, n)):
for s, y in enumerate(np.linspace(ymin, ymax, n)):
z = x + y*1j
root = np.round(newton(z, p, pprime)[1], 1)
if not root in roots:
roots[root] = root_count
root_count += 1
m[s, r] = roots[root]
plt.imshow(m, cmap=cmap, extent=extent)
plt.grid('off')
plot_newton_iters(lambda x: x**3 - 1, lambda x: 3*x**2)
plt.grid('off')
m = plot_newton_basins(lambda x: x**3 - 1, lambda x: 3*x**2)
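# The same machinery applies to other polynomials. As a quick sketch (an addition to the
# original exercise), z**4 - 1 has four roots, so four basins of attraction appear.
plt.grid('off')
plot_newton_basins(lambda x: x**4 - 1, lambda x: 4*x**3)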
X = np.load('x.npy')
y = np.load('y.npy')
beta = np.load('b.npy')
def f1(X, y):
    """Direct translation of normal equations to code."""
return np.dot(np.linalg.inv(np.dot(X.T, X)), np.dot(X.T, y))
def f2(X, y):
    """Solving normal equations without matrix inversion."""
return np.linalg.solve(np.dot(X.T, X), np.dot(X.T, y))
%precision 3
print("X = ")
print(X)
# counting from 0 (so column 5 is the last column)
# we can see that column 5 is a multiple of column 3
# so one approach is to simply remove this (dependent) column
print("True solution\t\t", beta)
print("Library function\t", np.linalg.lstsq(X, y)[0])
print("Using f1\t\t", f1(X[:, :5], y))
print("Using f2\t\t", f2(X[:, :5], y))
np.linalg.svd(X)[1]
np.linalg.svd(X[:, :-1])[1]
np.linalg.cond(X[:, :-1])
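# Alternative sketch (an addition, not required by the exercise): instead of dropping the
# dependent column, a small ridge penalty makes X.T @ X + lam*I well conditioned. Note that
# the weight of the collinear columns is then shared between them, so individual coefficients
# differ from the true beta even though the solve is stable. lam = 0.01 is arbitrary.
lam = 0.01
XtX = np.dot(X.T, X)
beta_ridge = np.linalg.solve(XtX + lam * np.eye(XtX.shape[0]), np.dot(X.T, y))
print("Ridge estimate\t", beta_ridge)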
import sympy as sym
from sympy import Matrix
from numpy import linalg as la
x1, x2 = sym.symbols('x1 x2')
def f(x1, x2):
return sym.Matrix([-x1*x2*sym.exp(-(x1**2 + x2**2)/2)])
def h(x1,x2):
return x1**2+x2**2
def g(x1,x2):
return 2*x1+3*x2
def characterize_cp(H):
l,v = la.eig(H)
if(np.all(np.greater(l,np.zeros(2)))):
return("minimum")
elif(np.all(np.less(l,np.zeros(2)))):
return("maximum")
else:
return("saddle")
sym.init_printing()
fun = f(x1,x2)
X = sym.Matrix([x1,x2])
gradf = fun.jacobian(X)
sym.simplify(gradf)
hessianf = gradf.jacobian(X)
sym.simplify(hessianf)
fcritical = sym.solve(gradf,X)
for i in range(4):
H = np.array(hessianf.subs([(x1,fcritical[i][0]),(x2,fcritical[i][1])])).astype(float)
print(fcritical[i], characterize_cp(H))
import scipy.optimize as opt
def f(x):
return -x[0] * x[1] * np.exp(-(x[0]**2+x[1]**2)/2)
cons = ({'type': 'eq',
'fun' : lambda x: np.array([2.0*x[0] + 3.0*x[1] - 5.0]),
'jac' : lambda x: np.array([2.0,3.0])},
{'type': 'ineq',
'fun' : lambda x: np.array([-x[0]**2.0 - x[1]**2.0 + 10.0])})
x0 = [1.5,1.5]
cx = opt.minimize(f, x0, constraints=cons)
x = np.linspace(-5, 5, 200)
y = np.linspace(-5, 5, 200)
X, Y = np.meshgrid(x, y)
Z = f(np.vstack([X.ravel(), Y.ravel()])).reshape((200,200))
plt.contour(X, Y, Z)
plt.plot(x, (5-2*x)/3, 'k:', linewidth=1)
plt.plot(x, (10.0-x**2)**0.5, 'k:', linewidth=1)
plt.plot(x, -(10.0-x**2)**0.5, 'k:', linewidth=1)
plt.fill_between(x,(10-x**2)**0.5,-(10-x**2)**0.5,alpha=0.15)
plt.text(cx['x'][0], cx['x'][1], 'x', va='center', ha='center', size=20, color='red')
plt.axis([-5,5,-5,5])
plt.title('Contour plot of f(x) subject to constraints g(x) and h(x)')
plt.xlabel('x1')
plt.ylabel('x2')
pass
def gaussian_kernel(xs, x, h=1.0):
    """Gaussian kernel for a shifting window centered at x."""
X = xs - x
try:
d = xs.shape[1]
except:
d = 1
k = np.array([(2*np.pi*h**d)**-0.5*np.exp(-(np.dot(_.T, _)/h)**2) for _ in X])
if d != 1:
k = k[:, np.newaxis]
return k
def flat_kernel(xs, x, h=1.0):
    """Flat kernel for a shifting window centered at x."""
X = xs - x
try:
d = xs.shape[1]
except:
d = 1
k = np.array([1 if np.dot(_.T, _) < h else 0 for _ in X])
if d != 1:
k = k[:, np.newaxis]
return k
def mean_shift(xs, x, kernel, max_iters=100, tol=1e-6, trace=False):
    """Finds the local mode using the mean shift algorithm."""
    x = np.array(x, dtype=float)  # coerce so list starting points work and updates broadcast
    record = []
    for i in range(max_iters):
        if trace:
            record.append(x.copy())  # copy so the recorded path is not overwritten in place
        m = (kernel(xs, x)*xs).sum(axis=0)/kernel(xs, x).sum(axis=0) - x
        if np.sum(m**2) < tol:
            break
        x = x + m
    return i, x, np.array(record)
x1 = np.load('x1d.npy')
# choose kernel to evaluate
kernel = flat_kernel
# kernel = gaussian_kernel
i1, m1, path = mean_shift(x1, 1, kernel)
print(i1, m1)
i2, m2, path = mean_shift(x1, -7, kernel)
print(i2, m2)
i3, m3, path = mean_shift(x1, 7 ,kernel)
print(i3, m3)
xp = np.linspace(0, 1.0, 100)
plt.hist(x1, 50, histtype='step', normed=True);
plt.axvline(m1, c='blue')
plt.axvline(m2, c='blue')
plt.axvline(m3, c='blue');
x2 = np.load('x2d.npy')
# choose kernel to evaluate
# kernel = flat_kernel (also OK if they use the Epanachnikov kernel since the flat is a shadow of that)
kernel = gaussian_kernel
i1, m1, path1 = mean_shift(x2, [0,0], kernel, trace=True)
print(i1, m1)
i2, m2, path2 = mean_shift(x2, [-4,5], kernel, trace=True)
print(i2, m2)
i3, m3, path3 = mean_shift(x2, [10,10] ,kernel, trace=True)
print(i3, m3)
import scipy.stats as stats
# size of marekr at starting position
base = 40
# set up for estimating density using gaussian_kde
xmin, xmax = -6, 12
ymin,ymax = -5, 15
X, Y = np.mgrid[xmin:xmax:100j, ymin:ymax:100j]
positions = np.vstack([X.ravel(), Y.ravel()])
kde = stats.gaussian_kde(x2.T)
Z = np.reshape(kde(positions).T, X.shape)
plt.contour(X, Y, Z)
# plot data in background
plt.scatter(x2[:, 0], x2[:, 1], c='grey', alpha=0.2, edgecolors='none')
# path from [0,0]
plt.scatter(path1[:, 0], path1[:, 1], s=np.arange(base, base+len(path1)),
c='red', edgecolors='red', marker='x', linewidth=1.5)
# path from [-4,5]
plt.scatter(path2[:, 0], path2[:, 1], s=np.arange(base, base+len(path2)),
c='blue', edgecolors='blue', marker='x', linewidth=1.5)
# path from [10,10]
plt.scatter(path3[:, 0], path3[:, 1], s=np.arange(base, base+len(path3)),
c='green', edgecolors='green',marker='x', linewidth=1.5)
plt.axis([xmin, xmax, ymin, ymax]);
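# Sketch (an addition to the exercise): mean shift is typically used for clustering by
# assigning each sample to the mode it converges to. Re-running from every point is slow,
# so here each sample is simply assigned to the nearest of the three modes found above.
modes = np.array([m1, m2, m3])
labels = np.argmin(((x2[:, None, :] - modes[None, :, :]) ** 2).sum(axis=-1), axis=1)
plt.figure()
plt.scatter(x2[:, 0], x2[:, 1], c=labels, cmap=plt.cm.rainbow, alpha=0.4, edgecolors='none')
plt.scatter(modes[:, 0], modes[:, 1], c='black', marker='x', s=100);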
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set the camera to take a bit smaller photos (lower resolution). This will make our image processing a bit faster.
Step2: One can see all the available effects by listing the camera EFFECTS
Step3: Let's just loop through some interesting ones
Step4: Let's grab a few more photos to do some work on. Try to move in the frame as these are taken.
Step5: Now we'll clean up our camera object...
|
<ASSISTANT_TASK:>
Python Code:
from picamera import PiCamera, Color
from time import sleep
camera = PiCamera()
camera.resolution = (480, 320)
camera.vflip = True
camera.hflip = True
camera.start_preview()
camera.annotate_foreground = Color('white')
camera.annotate_text = "Colorswap Effect"
camera.annotate_text_size = 10
camera.image_effect = 'colorswap'
sleep(2)
camera.capture('img/colorswap.jpg')
camera.stop_preview()
camera.IMAGE_EFFECTS
camera.start_preview()
sleep(2)
cool_effects = ['none', 'watercolor', 'negative', 'emboss', 'washedout', 'solarize', 'oilpaint', 'sketch']
for effect in cool_effects:
camera.image_effect = effect
camera.annotate_text = "Effect: %s" % effect
camera.capture('img/effect-%s.jpg' % effect)
sleep(1)
camera.stop_preview()
camera.start_preview()
sleep(2)
camera.image_effect = 'none'
camera.annotate_text = ""
for i in range(3):
camera.capture('img/frame-%s.jpg' % i)
sleep(2)
camera.stop_preview()
camera.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Thinc provides a variety of layers, functions that create Model instances. Thinc tries to avoid inheritance, preferring function composition. The Linear function gives you a model that computes Y = X @ W.T + b (the function is defined in thinc.layers.linear.forward).
Step2: Models support dimension inference from data. You can defer some or all of the dimensions.
Step3: The chain function wires two model instances together, with a feed-forward relationship. Dimension inference is especially helpful here.
Step4: We call functions like chain combinators. Combinators take one or more models as arguments, and return another model instance, without introducing any new weight parameters. Another useful combinator is concatenate
Step5: The concatenate function produces a layer that runs the child layers separately, and then concatenates their outputs together. This is often useful for combining features from different sources. For instance, we use this all the time to build spaCy's embedding layers.
Step6: We can apply clone to model instances that have child layers, making it easy to define more complex architectures. For instance, we often want to attach an activation function and dropout to a linear layer, and then repeat that substructure a number of times. Of course, you can make whatever intermediate functions you find helpful.
Step7: Some combinators are unary functions
Step8: The combinator system makes it easy to wire together complex models very concisely. A concise notation is a huge advantage, because it lets you read and review your model with less clutter – making it easy to spot mistakes, and easy to make changes. For the ultimate in concise notation, you can also take advantage of Thinc's operator overloading, which lets you use an infix notation. Operator overloading can lead to unexpected results, so you have to enable the overloading explicitly in a contextmanager. This also lets you control how the operators are bound, making it easy to use the feature with your own combinators. For instance, here is a definition for a text classification network
Step9: The network above will expect a list of arrays as input, where each array should have two columns with different numeric identifier features. The two features will be embedded using separate embedding tables, and the two vectors added and passed through a Maxout layer with layer normalization and dropout. The sequences then pass through two pooling functions, and the concatenated results are passed through 2 Relu layers with dropout and residual connections. Finally, the sequence vectors are passed through an output layer, which has a Softmax activation.
Step10: Initialize the model with a sample of the data
Step11: Run the model over some data
Step12: Get a callback to backpropagate
Step13: Run the callback to calculate the gradient with respect to the inputs. If the model has trainable parameters, gradients for the parameters are accumulated internally, as a side-effect.
Step14: The backprop() callback only increments the parameter gradients, it doesn't actually change the weights. To increment the weights, call model.finish_update(), passing it an optimizer
Step15: You can get and set dimensions, parameters and attributes by name
Step16: You can also retrieve parameter gradients, and increment them explicitly
Step17: Finally, you can serialize models using the model.to_bytes and model.to_disk methods, and load them back with from_bytes and from_disk.
|
<ASSISTANT_TASK:>
Python Code:
!pip install "thinc>=8.0.0a0"
import numpy
from thinc.api import Linear, zero_init
n_in = numpy.zeros((128, 16), dtype="f")
n_out = numpy.zeros((128, 10), dtype="f")
model = Linear(nI=n_in.shape[1], nO=n_out.shape[1], init_W=zero_init)
nI = model.get_dim("nI")
nO = model.get_dim("nO")
print(f"Initialized model with input dimension nI={nI} and output dimension nO={nO}.")
model = Linear(init_W=zero_init)
print(f"Initialized model with no input/ouput dimensions.")
X = numpy.zeros((128, 16), dtype="f")
Y = numpy.zeros((128, 10), dtype="f")
model.initialize(X=X, Y=Y)
nI = model.get_dim("nI")
nO = model.get_dim("nO")
print(f"Initialized model with input dimension nI={nI} and output dimension nO={nO}.")
from thinc.api import chain, glorot_uniform_init
n_hidden = 128
X = numpy.zeros((128, 16), dtype="f")
Y = numpy.zeros((128, 10), dtype="f")
model = chain(Linear(n_hidden, init_W=glorot_uniform_init), Linear(init_W=zero_init),)
model.initialize(X=X, Y=Y)
nI = model.get_dim("nI")
nO = model.get_dim("nO")
nO_hidden = model.layers[0].get_dim("nO")
print(f"Initialized model with input dimension nI={nI} and output dimension nO={nO}.")
print(f"The size of the hidden layer is {nO_hidden}.")
from thinc.api import concatenate
model = concatenate(Linear(n_hidden), Linear(n_hidden))
model.initialize(X=X)
nO = model.get_dim("nO")
print(f"Initialized model with output dimension nO={nO}.")
from thinc.api import clone
model = clone(Linear(), 5)
model.layers[0].set_dim("nO", n_hidden)
model.initialize(X=X, Y=Y)
nI = model.get_dim("nI")
nO = model.get_dim("nO")
print(f"Initialized model with input dimension nI={nI} and output dimension nO={nO}.")
from thinc.api import Relu, Dropout
def Hidden(dropout=0.2):
return chain(Linear(), Relu(), Dropout(dropout))
model = clone(Hidden(0.2), 5)
from thinc.api import with_array
model = with_array(Linear(4, 2))
Xs = [model.ops.alloc2f(10, 2, dtype="f")]
model.initialize(X=Xs)
Ys = model.predict(Xs)
print(f"Prediction shape: {Ys[0].shape}.")
from thinc.api import add, chain, concatenate, clone
from thinc.api import with_array, reduce_max, reduce_mean, residual
from thinc.api import Model, Embed, Maxout, Softmax
nH = 5
with Model.define_operators({">>": chain, "|": concatenate, "+": add, "**": clone}):
model = (
with_array(
(Embed(128, column=0) + Embed(64, column=1))
>> Maxout(nH, normalize=True, dropout=0.2)
)
>> (reduce_max() | reduce_mean())
>> residual(Relu() >> Dropout(0.2)) ** 2
>> Softmax()
)
from thinc.api import Linear, Adam
import numpy
X = numpy.zeros((128, 10), dtype="f")
dY = numpy.zeros((128, 10), dtype="f")
model = Linear(10, 10)
model.initialize(X=X, Y=dY)
Y = model.predict(X)
Y
Y, backprop = model.begin_update(X)
Y, backprop
dX = backprop(dY)
dX
optimizer = Adam()
model.finish_update(optimizer)
dim = model.get_dim("nO")
W = model.get_param("W")
model.attrs["hello"] = "world"
model.attrs.get("foo", "bar")
dW = model.get_grad("W")
model.inc_grad("W", dW * 0.1)
model_bytes = model.to_bytes()
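# Round-trip sketch (an addition): a model with the same architecture can load the serialized
# weights back with from_bytes; to_disk / from_disk work analogously with file paths.
model_loaded = Linear(10, 10)
model_loaded.initialize(X=X, Y=dY)
model_loaded.from_bytes(model_bytes)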
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Step 1
Step5: Step 2
Step6: Step 3
Step7: Step 4
Step8: Step 5
Step9: Step 6
Step10: Step 7
Step11: TODO
Step12: Deployment Option 1
Step13: Deployment Option 2
|
<ASSISTANT_TASK:>
Python Code:
df = spark.read.format("csv") \
.option("inferSchema", "true").option("header", "true") \
.load("s3a://datapalooza/airbnb/airbnb.csv.bz2")
df.registerTempTable("df")
print(df.head())
print(df.count())
df_filtered = df.filter("price >= 50 AND price <= 750 AND bathrooms > 0.0 AND bedrooms is not null")
df_filtered.registerTempTable("df_filtered")
df_final = spark.sql("""
select
id,
city,
case when state in('NY', 'CA', 'London', 'Berlin', 'TX' ,'IL', 'OR', 'DC', 'WA')
then state
else 'Other'
end as state,
space,
cast(price as double) as price,
cast(bathrooms as double) as bathrooms,
cast(bedrooms as double) as bedrooms,
room_type,
host_is_super_host,
cancellation_policy,
cast(case when security_deposit is null
then 0.0
else security_deposit
end as double) as security_deposit,
price_per_bedroom,
cast(case when number_of_reviews is null
then 0.0
else number_of_reviews
end as double) as number_of_reviews,
cast(case when extra_people is null
then 0.0
else extra_people
end as double) as extra_people,
instant_bookable,
cast(case when cleaning_fee is null
then 0.0
else cleaning_fee
end as double) as cleaning_fee,
cast(case when review_scores_rating is null
then 80.0
else review_scores_rating
end as double) as review_scores_rating,
cast(case when square_feet is not null and square_feet > 100
then square_feet
when (square_feet is null or square_feet <=100) and (bedrooms is null or bedrooms = 0)
then 350.0
else 380 * bedrooms
end as double) as square_feet
from df_filtered
""").persist()
df_final.registerTempTable("df_final")
df_final.select("square_feet", "price", "bedrooms", "bathrooms", "cleaning_fee").describe().show()
print(df_final.count())
print(df_final.schema)
# Most popular cities
spark.sql("""
select
state,
count(*) as ct,
avg(price) as avg_price,
max(price) as max_price
from df_final
group by state
order by count(*) desc
""").show()
# Most expensive popular cities
spark.sql("""
select
city,
count(*) as ct,
avg(price) as avg_price,
max(price) as max_price
from df_final
group by city
order by avg(price) desc
""").filter("ct > 25").show()
continuous_features = ["bathrooms", \
"bedrooms", \
"security_deposit", \
"cleaning_fee", \
"extra_people", \
"number_of_reviews", \
"square_feet", \
"review_scores_rating"]
categorical_features = ["room_type", \
"host_is_super_host", \
"cancellation_policy", \
"instant_bookable", \
"state"]
[training_dataset, validation_dataset] = df_final.randomSplit([0.8, 0.2])
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler, StandardScaler, StringIndexer, OneHotEncoder
from pyspark.ml.regression import LinearRegression
continuous_feature_assembler = VectorAssembler(inputCols=continuous_features, outputCol="unscaled_continuous_features")
continuous_feature_scaler = StandardScaler(inputCol="unscaled_continuous_features", outputCol="scaled_continuous_features", \
withStd=True, withMean=False)
categorical_feature_indexers = [StringIndexer(inputCol=x, \
outputCol="{}_index".format(x)) \
for x in categorical_features]
categorical_feature_one_hot_encoders = [OneHotEncoder(inputCol=x.getOutputCol(), \
outputCol="oh_encoder_{}".format(x.getOutputCol() )) \
for x in categorical_feature_indexers]
feature_cols_lr = [x.getOutputCol() \
for x in categorical_feature_one_hot_encoders]
feature_cols_lr.append("scaled_continuous_features")
feature_assembler_lr = VectorAssembler(inputCols=feature_cols_lr, \
outputCol="features_lr")
linear_regression = LinearRegression(featuresCol="features_lr", \
labelCol="price", \
predictionCol="price_prediction", \
maxIter=10, \
regParam=0.3, \
elasticNetParam=0.8)
estimators_lr = \
[continuous_feature_assembler, continuous_feature_scaler] \
+ categorical_feature_indexers + categorical_feature_one_hot_encoders \
+ [feature_assembler_lr] + [linear_regression]
pipeline = Pipeline(stages=estimators_lr)
pipeline_model = pipeline.fit(training_dataset)
print(pipeline_model)
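# Optional sanity check (an addition, not part of the original flow): score the fitted
# pipeline on the held-out validation split. Column names follow the LinearRegression
# definition above ("price" label, "price_prediction" output).
from pyspark.ml.evaluation import RegressionEvaluator
validation_predictions = pipeline_model.transform(validation_dataset)
evaluator = RegressionEvaluator(labelCol="price", predictionCol="price_prediction", metricName="rmse")
print("Validation RMSE:", evaluator.evaluate(validation_predictions))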
from jpmml import toPMMLBytes
pmmlBytes = toPMMLBytes(spark, training_dataset, pipeline_model)
print(pmmlBytes.decode("utf-8"))
import urllib.request
update_url = 'http://prediction-pmml-aws.demo.pipeline.io/update-pmml/pmml_airbnb'
update_headers = {}
update_headers['Content-type'] = 'application/xml'
req = urllib.request.Request(update_url, \
headers=update_headers, \
data=pmmlBytes)
resp = urllib.request.urlopen(req)
print(resp.status) # Should return Http Status 200
import urllib.request
update_url = 'http://prediction-pmml-gcp.demo.pipeline.io/update-pmml/pmml_airbnb'
update_headers = {}
update_headers['Content-type'] = 'application/xml'
req = urllib.request.Request(update_url, \
headers=update_headers, \
data=pmmlBytes)
resp = urllib.request.urlopen(req)
print(resp.status) # Should return Http Status 200
import urllib.parse
import json
evaluate_url = 'http://prediction-pmml-aws.demo.pipeline.io/evaluate-pmml/pmml_airbnb'
evaluate_headers = {}
evaluate_headers['Content-type'] = 'application/json'
input_params = '{"bathrooms":2.0, \
"bedrooms":2.0, \
"security_deposit":175.00, \
"cleaning_fee":25.0, \
"extra_people":1.0, \
"number_of_reviews": 2.0, \
"square_feet": 250.0, \
"review_scores_rating": 2.0, \
"room_type": "Entire home/apt", \
"host_is_super_host": "0.0", \
"cancellation_policy": "flexible", \
"instant_bookable": "1.0", \
"state": "CA"}'
encoded_input_params = input_params.encode('utf-8')
req = urllib.request.Request(evaluate_url, \
headers=evaluate_headers, \
data=encoded_input_params)
resp = urllib.request.urlopen(req)
print(resp.read())
import urllib.parse
import json
evaluate_url = 'http://prediction-pmml-gcp.demo.pipeline.io/evaluate-pmml/pmml_airbnb'
evaluate_headers = {}
evaluate_headers['Content-type'] = 'application/json'
input_params = '{"bathrooms":2.0, \
"bedrooms":2.0, \
"security_deposit":175.00, \
"cleaning_fee":25.0, \
"extra_people":1.0, \
"number_of_reviews": 2.0, \
"square_feet": 250.0, \
"review_scores_rating": 2.0, \
"room_type": "Entire home/apt", \
"host_is_super_host": "0.0", \
"cancellation_policy": "flexible", \
"instant_bookable": "1.0", \
"state": "CA"}'
encoded_input_params = input_params.encode('utf-8')
req = urllib.request.Request(evaluate_url, \
headers=evaluate_headers, \
data=encoded_input_params)
resp = urllib.request.urlopen(req)
print(resp.read())
with open('/root/pipeline/prediction.ml/pmml/data/pmml_airbnb/pmml_airbnb.pmml', 'wb') as f:
f.write(pmmlBytes)
!cat /root/pipeline/prediction.ml/pmml/data/pmml_airbnb/pmml_airbnb.pmml
!git
!git status
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
Step2: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Step3: Training
Step4: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Step5: Checking out the results
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[200]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
len(mnist.train.images)
type(mnist.train.images[0])
inputs_ = [[image.flatten()] for image in mnist.train.images]
inputs_[1]
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
image_size = 784
inputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, image_size), name='targets')
# Output of hidden layer
encoded = tf.contrib.layers.fully_connected(inputs_, encoding_dim, activation_fn = tf.nn.relu)
# Output layer logits
logits = tf.contrib.layers.fully_connected(encoded, image_size, activation_fn = None)
# Sigmoid output from logits
decoded = tf.nn.sigmoid(logits)
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_,logits = logits)
# Mean of the loss
cost = tf.reduce_mean(loss)
# Adam optimizer
opt = tf.train.AdamOptimizer().minimize(cost)
# Create the session
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: We have to make sure all conditions have the same counts, as the ANOVA
Step3: Create TFR representations for all conditions
Step4: Setup repeated measures ANOVA
Step5: Now we'll assemble the data matrix and swap axes so the trial replications
Step6: While the iteration scheme used above for assembling the data matrix
Step7: Account for multiple comparisons using FDR versus permutation clustering test
Step8: A stat_fun must deal with a variable number of input arguments.
Step9: Create new stats image with only significant clusters
Step10: Now using FDR
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Denis Engemann <denis.engemann@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import tfr_morlet
from mne.stats import f_threshold_mway_rm, f_mway_rm, fdr_correction
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
tmin, tmax = -0.2, 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
include = []
raw.info['bads'] += ['MEG 2443'] # bads
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
stim=False, include=include, exclude='bads')
ch_name = 'MEG 1332'
# Load conditions
reject = dict(grad=4000e-13, eog=150e-6)
event_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
picks=picks, baseline=(None, 0), preload=True,
reject=reject)
epochs.pick_channels([ch_name]) # restrict example to one channel
epochs.equalize_event_counts(event_id)
# Factor to down-sample the temporal dimension of the TFR computed by
# tfr_morlet.
decim = 2
freqs = np.arange(7, 30, 3) # define frequencies of interest
n_cycles = freqs / freqs[0]
zero_mean = False # don't correct morlet wavelet to be of mean zero
# To have a true wavelet zero_mean should be True but here for illustration
# purposes it helps to spot the evoked response.
epochs_power = list()
for condition in [epochs[k] for k in event_id]:
this_tfr = tfr_morlet(condition, freqs, n_cycles=n_cycles,
decim=decim, average=False, zero_mean=zero_mean,
return_itc=False)
this_tfr.apply_baseline(mode='ratio', baseline=(None, 0))
this_power = this_tfr.data[:, 0, :, :] # we only have one channel.
epochs_power.append(this_power)
n_conditions = len(epochs.event_id)
n_replications = epochs.events.shape[0] // n_conditions
factor_levels = [2, 2] # number of levels in each factor
effects = 'A*B' # this is the default signature for computing all effects
# Other possible options are 'A' or 'B' for the corresponding main effects
# or 'A:B' for the interaction effect only (this notation is borrowed from the
# R formula language)
n_freqs = len(freqs)
times = 1e3 * epochs.times[::decim]
n_times = len(times)
data = np.swapaxes(np.asarray(epochs_power), 1, 0)
# reshape last two dimensions in one mass-univariate observation-vector
data = data.reshape(n_replications, n_conditions, n_freqs * n_times)
# so we have replications * conditions * observations:
print(data.shape)
fvals, pvals = f_mway_rm(data, factor_levels, effects=effects)
effect_labels = ['modality', 'location', 'modality by location']
# let's visualize our effects by computing f-images
for effect, sig, effect_label in zip(fvals, pvals, effect_labels):
plt.figure()
# show naive F-values in gray
plt.imshow(effect.reshape(8, 211), cmap=plt.cm.gray, extent=[times[0],
times[-1], freqs[0], freqs[-1]], aspect='auto',
origin='lower')
# create mask for significant Time-frequency locations
effect = np.ma.masked_array(effect, [sig > .05])
plt.imshow(effect.reshape(8, 211), cmap='RdBu_r', extent=[times[0],
times[-1], freqs[0], freqs[-1]], aspect='auto',
origin='lower')
plt.colorbar()
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title(r"Time-locked response for '%s' (%s)" % (effect_label, ch_name))
plt.show()
effects = 'A:B'
def stat_fun(*args):
return f_mway_rm(np.swapaxes(args, 1, 0), factor_levels=factor_levels,
effects=effects, return_pvals=False)[0]
# The ANOVA returns a tuple f-values and p-values, we will pick the former.
pthresh = 0.00001 # set threshold rather high to save some time
f_thresh = f_threshold_mway_rm(n_replications, factor_levels, effects,
pthresh)
tail = 1 # f-test, so tail > 0
n_permutations = 256 # Save some time (the test won't be too sensitive ...)
T_obs, clusters, cluster_p_values, h0 = mne.stats.permutation_cluster_test(
epochs_power, stat_fun=stat_fun, threshold=f_thresh, tail=tail, n_jobs=1,
n_permutations=n_permutations, buffer_size=None)
good_clusters = np.where(cluster_p_values < .05)[0]
T_obs_plot = np.ma.masked_array(T_obs,
                                np.invert(clusters[np.squeeze(good_clusters)]))
plt.figure()
for f_image, cmap in zip([T_obs, T_obs_plot], [plt.cm.gray, 'RdBu_r']):
plt.imshow(f_image, cmap=cmap, extent=[times[0], times[-1],
freqs[0], freqs[-1]], aspect='auto',
origin='lower')
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title("Time-locked response for 'modality by location' (%s)\n"
" cluster-level corrected (p <= 0.05)" % ch_name)
plt.show()
mask, _ = fdr_correction(pvals[2])
T_obs_plot2 = np.ma.masked_array(T_obs, np.invert(mask))
plt.figure()
for f_image, cmap in zip([T_obs, T_obs_plot2], [plt.cm.gray, 'RdBu_r']):
plt.imshow(f_image, cmap=cmap, extent=[times[0], times[-1],
freqs[0], freqs[-1]], aspect='auto',
origin='lower')
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title("Time-locked response for 'modality by location' (%s)\n"
" FDR corrected (p <= 0.05)" % ch_name)
plt.show()
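# Small addition: report how many time-frequency points survive the FDR threshold.
n_sig = int(mask.sum())
print("%d of %d time-frequency points survive FDR correction at p <= 0.05" % (n_sig, mask.size))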
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 2
Step2: We'll use the distribution strategy when we create our neural network model. Then, TensorFlow will distribute the training among the eight TPU cores by creating eight different replicas of the model, one for each core.
Step 3
Step3: You can use data from any public dataset here on Kaggle in just the same way. If you'd like to use data from one of your private datasets, see here.
Step4: Create Data Pipelines
Step5: This next cell will create the datasets that we'll use with Keras during training and inference. Notice how we scale the size of the batches to the number of TPU cores.
Step6: These datasets are tf.data.Dataset objects. You can think about a dataset in TensorFlow as a stream of data records. The training and validation sets are streams of (image, label) pairs.
Step7: The test set is a stream of (image, idnum) pairs; idnum here is the unique identifier given to the image that we'll use later when we make our submission as a csv file.
Step9: Step 4
Step10: You can display a single batch of images from a dataset with another of our helper functions. The next cell will turn the dataset into an iterator of batches of 20 images.
Step11: Use the Python next function to pop out the next batch in the stream and display it with the helper function.
Step12: By defining ds_iter and one_batch in separate cells, you only need to rerun the cell above to see a new batch of images.
Step 5
Step13: The 'sparse_categorical' versions of the loss and metrics are appropriate for a classification task with more than two labels, like this one.
Step14: Step 6
Step15: Fit Model
Step16: This next cell shows how the loss and metrics progressed during training. Thankfully, it converges!
Step17: Step 7
Step18: Confusion Matrix
Step19: You might be familiar with metrics like F1-score or precision and recall. This cell will compute these metrics and display them with a plot of the confusion matrix. (These metrics are defined in the Scikit-learn module sklearn.metrics; we've imported them in the helper script for you.)
Step20: Visual Validation
Step21: And here is a set of flowers with their predicted species. Run the cell again to see another set.
Step22: Step 8
Step23: We'll generate a file submission.csv. This file is what you'll submit to get your score on the leaderboard.
|
<ASSISTANT_TASK:>
Python Code:
import math, re, os
import numpy as np
import tensorflow as tf
print("Tensorflow version " + tf.__version__)
# Detect TPU, return appropriate distribution strategy
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
print('Running on TPU ', tpu.master())
except ValueError:
tpu = None
if tpu:
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.experimental.TPUStrategy(tpu)
else:
strategy = tf.distribute.get_strategy()
print("REPLICAS: ", strategy.num_replicas_in_sync)
from kaggle_datasets import KaggleDatasets
GCS_DS_PATH = KaggleDatasets().get_gcs_path('tpu-getting-started')
print(GCS_DS_PATH) # what do gcs paths look like?
#$HIDE_INPUT$
IMAGE_SIZE = [512, 512]
GCS_PATH = GCS_DS_PATH + '/tfrecords-jpeg-512x512'
AUTO = tf.data.experimental.AUTOTUNE
TRAINING_FILENAMES = tf.io.gfile.glob(GCS_PATH + '/train/*.tfrec')
VALIDATION_FILENAMES = tf.io.gfile.glob(GCS_PATH + '/val/*.tfrec')
TEST_FILENAMES = tf.io.gfile.glob(GCS_PATH + '/test/*.tfrec')
CLASSES = ['pink primrose', 'hard-leaved pocket orchid', 'canterbury bells', 'sweet pea', 'wild geranium', 'tiger lily', 'moon orchid', 'bird of paradise', 'monkshood', 'globe thistle', # 00 - 09
'snapdragon', "colt's foot", 'king protea', 'spear thistle', 'yellow iris', 'globe-flower', 'purple coneflower', 'peruvian lily', 'balloon flower', 'giant white arum lily', # 10 - 19
'fire lily', 'pincushion flower', 'fritillary', 'red ginger', 'grape hyacinth', 'corn poppy', 'prince of wales feathers', 'stemless gentian', 'artichoke', 'sweet william', # 20 - 29
'carnation', 'garden phlox', 'love in the mist', 'cosmos', 'alpine sea holly', 'ruby-lipped cattleya', 'cape flower', 'great masterwort', 'siam tulip', 'lenten rose', # 30 - 39
'barberton daisy', 'daffodil', 'sword lily', 'poinsettia', 'bolero deep blue', 'wallflower', 'marigold', 'buttercup', 'daisy', 'common dandelion', # 40 - 49
'petunia', 'wild pansy', 'primula', 'sunflower', 'lilac hibiscus', 'bishop of llandaff', 'gaura', 'geranium', 'orange dahlia', 'pink-yellow dahlia', # 50 - 59
'cautleya spicata', 'japanese anemone', 'black-eyed susan', 'silverbush', 'californian poppy', 'osteospermum', 'spring crocus', 'iris', 'windflower', 'tree poppy', # 60 - 69
'gazania', 'azalea', 'water lily', 'rose', 'thorn apple', 'morning glory', 'passion flower', 'lotus', 'toad lily', 'anthurium', # 70 - 79
'frangipani', 'clematis', 'hibiscus', 'columbine', 'desert-rose', 'tree mallow', 'magnolia', 'cyclamen ', 'watercress', 'canna lily', # 80 - 89
'hippeastrum ', 'bee balm', 'pink quill', 'foxglove', 'bougainvillea', 'camellia', 'mallow', 'mexican petunia', 'bromelia', 'blanket flower', # 90 - 99
'trumpet creeper', 'blackberry lily', 'common tulip', 'wild rose'] # 100 - 102
def decode_image(image_data):
image = tf.image.decode_jpeg(image_data, channels=3)
image = tf.cast(image, tf.float32) / 255.0 # convert image to floats in [0, 1] range
image = tf.reshape(image, [*IMAGE_SIZE, 3]) # explicit size needed for TPU
return image
def read_labeled_tfrecord(example):
LABELED_TFREC_FORMAT = {
"image": tf.io.FixedLenFeature([], tf.string), # tf.string means bytestring
"class": tf.io.FixedLenFeature([], tf.int64), # shape [] means single element
}
example = tf.io.parse_single_example(example, LABELED_TFREC_FORMAT)
image = decode_image(example['image'])
label = tf.cast(example['class'], tf.int32)
return image, label # returns a dataset of (image, label) pairs
def read_unlabeled_tfrecord(example):
UNLABELED_TFREC_FORMAT = {
"image": tf.io.FixedLenFeature([], tf.string), # tf.string means bytestring
"id": tf.io.FixedLenFeature([], tf.string), # shape [] means single element
# class is missing, this competitions's challenge is to predict flower classes for the test dataset
}
example = tf.io.parse_single_example(example, UNLABELED_TFREC_FORMAT)
image = decode_image(example['image'])
idnum = example['id']
return image, idnum # returns a dataset of image(s)
def load_dataset(filenames, labeled=True, ordered=False):
# Read from TFRecords. For optimal performance, reading from multiple files at once and
# disregarding data order. Order does not matter since we will be shuffling the data anyway.
ignore_order = tf.data.Options()
if not ordered:
ignore_order.experimental_deterministic = False # disable order, increase speed
dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=AUTO) # automatically interleaves reads from multiple files
dataset = dataset.with_options(ignore_order) # uses data as soon as it streams in, rather than in its original order
dataset = dataset.map(read_labeled_tfrecord if labeled else read_unlabeled_tfrecord, num_parallel_calls=AUTO)
# returns a dataset of (image, label) pairs if labeled=True or (image, id) pairs if labeled=False
return dataset
#$HIDE_INPUT$
def data_augment(image, label):
# Thanks to the dataset.prefetch(AUTO)
# statement in the next function (below), this happens essentially
# for free on TPU. Data pipeline code is executed on the "CPU"
# part of the TPU while the TPU itself is computing gradients.
image = tf.image.random_flip_left_right(image)
#image = tf.image.random_saturation(image, 0, 2)
return image, label
def get_training_dataset():
dataset = load_dataset(TRAINING_FILENAMES, labeled=True)
dataset = dataset.map(data_augment, num_parallel_calls=AUTO)
dataset = dataset.repeat() # the training dataset must repeat for several epochs
dataset = dataset.shuffle(2048)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(AUTO) # prefetch next batch while training (autotune prefetch buffer size)
return dataset
def get_validation_dataset(ordered=False):
dataset = load_dataset(VALIDATION_FILENAMES, labeled=True, ordered=ordered)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.cache()
dataset = dataset.prefetch(AUTO)
return dataset
def get_test_dataset(ordered=False):
dataset = load_dataset(TEST_FILENAMES, labeled=False, ordered=ordered)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(AUTO)
return dataset
def count_data_items(filenames):
# the number of data items is written in the name of the .tfrec
# files, i.e. flowers00-230.tfrec = 230 data items
n = [int(re.compile(r"-([0-9]*)\.").search(filename).group(1)) for filename in filenames]
return np.sum(n)
NUM_TRAINING_IMAGES = count_data_items(TRAINING_FILENAMES)
NUM_VALIDATION_IMAGES = count_data_items(VALIDATION_FILENAMES)
NUM_TEST_IMAGES = count_data_items(TEST_FILENAMES)
print('Dataset: {} training images, {} validation images, {} unlabeled test images'.format(NUM_TRAINING_IMAGES, NUM_VALIDATION_IMAGES, NUM_TEST_IMAGES))
# Define the batch size. This will be 16 with TPU off and 128 (=16*8) with TPU on
BATCH_SIZE = 16 * strategy.num_replicas_in_sync
ds_train = get_training_dataset()
ds_valid = get_validation_dataset()
ds_test = get_test_dataset()
print("Training:", ds_train)
print ("Validation:", ds_valid)
print("Test:", ds_test)
np.set_printoptions(threshold=15, linewidth=80)
print("Training data shapes:")
for image, label in ds_train.take(3):
print(image.numpy().shape, label.numpy().shape)
print("Training data label examples:", label.numpy())
print("Test data shapes:")
for image, idnum in ds_test.take(3):
print(image.numpy().shape, idnum.numpy().shape)
print("Test data IDs:", idnum.numpy().astype('U')) # U=unicode string
#$HIDE_INPUT$
from matplotlib import pyplot as plt
def batch_to_numpy_images_and_labels(data):
images, labels = data
numpy_images = images.numpy()
numpy_labels = labels.numpy()
if numpy_labels.dtype == object: # binary string in this case,
# these are image ID strings
numpy_labels = [None for _ in enumerate(numpy_images)]
# If no labels, only image IDs, return None for labels (this is
# the case for test data)
return numpy_images, numpy_labels
def title_from_label_and_target(label, correct_label):
if correct_label is None:
return CLASSES[label], True
correct = (label == correct_label)
return "{} [{}{}{}]".format(CLASSES[label], 'OK' if correct else 'NO', u"\u2192" if not correct else '',
CLASSES[correct_label] if not correct else ''), correct
def display_one_flower(image, title, subplot, red=False, titlesize=16):
plt.subplot(*subplot)
plt.axis('off')
plt.imshow(image)
if len(title) > 0:
plt.title(title, fontsize=int(titlesize) if not red else int(titlesize/1.2), color='red' if red else 'black', fontdict={'verticalalignment':'center'}, pad=int(titlesize/1.5))
return (subplot[0], subplot[1], subplot[2]+1)
def display_batch_of_images(databatch, predictions=None):
    """This will work with:
    display_batch_of_images(images)
    display_batch_of_images(images, predictions)
    display_batch_of_images((images, labels))
    display_batch_of_images((images, labels), predictions)
    """
# data
images, labels = batch_to_numpy_images_and_labels(databatch)
if labels is None:
labels = [None for _ in enumerate(images)]
# auto-squaring: this will drop data that does not fit into square
# or square-ish rectangle
rows = int(math.sqrt(len(images)))
cols = len(images)//rows
# size and spacing
FIGSIZE = 13.0
SPACING = 0.1
subplot=(rows,cols,1)
if rows < cols:
plt.figure(figsize=(FIGSIZE,FIGSIZE/cols*rows))
else:
plt.figure(figsize=(FIGSIZE/rows*cols,FIGSIZE))
# display
for i, (image, label) in enumerate(zip(images[:rows*cols], labels[:rows*cols])):
title = '' if label is None else CLASSES[label]
correct = True
if predictions is not None:
title, correct = title_from_label_and_target(predictions[i], label)
dynamic_titlesize = FIGSIZE*SPACING/max(rows,cols)*40+3 # magic formula tested to work from 1x1 to 10x10 images
subplot = display_one_flower(image, title, subplot, not correct, titlesize=dynamic_titlesize)
#layout
plt.tight_layout()
if label is None and predictions is None:
plt.subplots_adjust(wspace=0, hspace=0)
else:
plt.subplots_adjust(wspace=SPACING, hspace=SPACING)
plt.show()
def display_training_curves(training, validation, title, subplot):
if subplot%10==1: # set up the subplots on the first call
plt.subplots(figsize=(10,10), facecolor='#F0F0F0')
plt.tight_layout()
ax = plt.subplot(subplot)
ax.set_facecolor('#F8F8F8')
ax.plot(training)
ax.plot(validation)
ax.set_title('model '+ title)
ax.set_ylabel(title)
#ax.set_ylim(0.28,1.05)
ax.set_xlabel('epoch')
ax.legend(['train', 'valid.'])
ds_iter = iter(ds_train.unbatch().batch(20))
one_batch = next(ds_iter)
display_batch_of_images(one_batch)
EPOCHS = 12
with strategy.scope():
pretrained_model = tf.keras.applications.VGG16(
weights='imagenet',
include_top=False ,
input_shape=[*IMAGE_SIZE, 3]
)
pretrained_model.trainable = False
model = tf.keras.Sequential([
# To a base pretrained on ImageNet to extract features from images...
pretrained_model,
# ... attach a new head to act as a classifier.
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(len(CLASSES), activation='softmax')
])
model.compile(
optimizer='adam',
loss = 'sparse_categorical_crossentropy',
metrics=['sparse_categorical_accuracy'],
)
model.summary()
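# Possible fine-tuning step (a sketch, not part of the original notebook): once the new head
# has converged, the VGG16 base could be unfrozen and the model recompiled with a much lower
# learning rate. Left commented out so the training below is unchanged.
# pretrained_model.trainable = True
# model.compile(
#     optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
#     loss='sparse_categorical_crossentropy',
#     metrics=['sparse_categorical_accuracy'],
# )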
#$HIDE_INPUT$
# Learning Rate Schedule for Fine Tuning #
def exponential_lr(epoch,
start_lr = 0.00001, min_lr = 0.00001, max_lr = 0.00005,
rampup_epochs = 5, sustain_epochs = 0,
exp_decay = 0.8):
def lr(epoch, start_lr, min_lr, max_lr, rampup_epochs, sustain_epochs, exp_decay):
# linear increase from start to rampup_epochs
if epoch < rampup_epochs:
lr = ((max_lr - start_lr) /
rampup_epochs * epoch + start_lr)
# constant max_lr during sustain_epochs
elif epoch < rampup_epochs + sustain_epochs:
lr = max_lr
# exponential decay towards min_lr
else:
lr = ((max_lr - min_lr) *
exp_decay**(epoch - rampup_epochs - sustain_epochs) +
min_lr)
return lr
return lr(epoch,
start_lr,
min_lr,
max_lr,
rampup_epochs,
sustain_epochs,
exp_decay)
lr_callback = tf.keras.callbacks.LearningRateScheduler(exponential_lr, verbose=True)
rng = [i for i in range(EPOCHS)]
y = [exponential_lr(x) for x in rng]
plt.plot(rng, y)
print("Learning rate schedule: {:.3g} to {:.3g} to {:.3g}".format(y[0], max(y), y[-1]))
# Define training epochs
EPOCHS = 12
STEPS_PER_EPOCH = NUM_TRAINING_IMAGES // BATCH_SIZE
history = model.fit(
ds_train,
validation_data=ds_valid,
epochs=EPOCHS,
steps_per_epoch=STEPS_PER_EPOCH,
callbacks=[lr_callback],
)
display_training_curves(
history.history['loss'],
history.history['val_loss'],
'loss',
211,
)
display_training_curves(
history.history['sparse_categorical_accuracy'],
history.history['val_sparse_categorical_accuracy'],
'accuracy',
212,
)
#$HIDE_INPUT$
import matplotlib.pyplot as plt
from sklearn.metrics import f1_score, precision_score, recall_score, confusion_matrix
def display_confusion_matrix(cmat, score, precision, recall):
plt.figure(figsize=(15,15))
ax = plt.gca()
ax.matshow(cmat, cmap='Reds')
ax.set_xticks(range(len(CLASSES)))
ax.set_xticklabels(CLASSES, fontdict={'fontsize': 7})
plt.setp(ax.get_xticklabels(), rotation=45, ha="left", rotation_mode="anchor")
ax.set_yticks(range(len(CLASSES)))
ax.set_yticklabels(CLASSES, fontdict={'fontsize': 7})
plt.setp(ax.get_yticklabels(), rotation=45, ha="right", rotation_mode="anchor")
titlestring = ""
if score is not None:
titlestring += 'f1 = {:.3f} '.format(score)
if precision is not None:
titlestring += '\nprecision = {:.3f} '.format(precision)
if recall is not None:
titlestring += '\nrecall = {:.3f} '.format(recall)
if len(titlestring) > 0:
ax.text(101, 1, titlestring, fontdict={'fontsize': 18, 'horizontalalignment':'right', 'verticalalignment':'top', 'color':'#804040'})
plt.show()
def display_training_curves(training, validation, title, subplot):
if subplot%10==1: # set up the subplots on the first call
plt.subplots(figsize=(10,10), facecolor='#F0F0F0')
plt.tight_layout()
ax = plt.subplot(subplot)
ax.set_facecolor('#F8F8F8')
ax.plot(training)
ax.plot(validation)
ax.set_title('model '+ title)
ax.set_ylabel(title)
#ax.set_ylim(0.28,1.05)
ax.set_xlabel('epoch')
ax.legend(['train', 'valid.'])
cmdataset = get_validation_dataset(ordered=True)
images_ds = cmdataset.map(lambda image, label: image)
labels_ds = cmdataset.map(lambda image, label: label).unbatch()
cm_correct_labels = next(iter(labels_ds.batch(NUM_VALIDATION_IMAGES))).numpy()
cm_probabilities = model.predict(images_ds)
cm_predictions = np.argmax(cm_probabilities, axis=-1)
labels = range(len(CLASSES))
cmat = confusion_matrix(
cm_correct_labels,
cm_predictions,
labels=labels,
)
cmat = (cmat.T / cmat.sum(axis=1)).T # normalize
score = f1_score(
cm_correct_labels,
cm_predictions,
labels=labels,
average='macro',
)
precision = precision_score(
cm_correct_labels,
cm_predictions,
labels=labels,
average='macro',
)
recall = recall_score(
cm_correct_labels,
cm_predictions,
labels=labels,
average='macro',
)
display_confusion_matrix(cmat, score, precision, recall)
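# Complementary per-class breakdown (an addition): scikit-learn's classification_report
# prints precision/recall/F1 for each flower class.
from sklearn.metrics import classification_report
print(classification_report(cm_correct_labels, cm_predictions, labels=labels, target_names=CLASSES))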
dataset = get_validation_dataset()
dataset = dataset.unbatch().batch(20)
batch = iter(dataset)
images, labels = next(batch)
probabilities = model.predict(images)
predictions = np.argmax(probabilities, axis=-1)
display_batch_of_images((images, labels), predictions)
test_ds = get_test_dataset(ordered=True)
print('Computing predictions...')
test_images_ds = test_ds.map(lambda image, idnum: image)
probabilities = model.predict(test_images_ds)
predictions = np.argmax(probabilities, axis=-1)
print(predictions)
print('Generating submission.csv file...')
# Get image ids from test set and convert to unicode
test_ids_ds = test_ds.map(lambda image, idnum: idnum).unbatch()
test_ids = next(iter(test_ids_ds.batch(NUM_TEST_IMAGES))).numpy().astype('U')
# Write the submission file
np.savetxt(
'submission.csv',
np.rec.fromarrays([test_ids, predictions]),
fmt=['%s', '%d'],
delimiter=',',
header='id,label',
comments='',
)
# Look at the first few predictions
!head submission.csv
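# Optional check (an addition): read the file back to confirm it has the expected two
# columns and one row per test image.
import pandas as pd
print(pd.read_csv('submission.csv').shape)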
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate some example data
Step2: Now, run three-cornered hat phase calculation
Step3: Plot results
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy
import matplotlib.pyplot as plt
import allantools
from allantools import noise
def plotallan_phase(plt,y,rate,taus, style):
(t2, ad, ade,adn) = allantools.mdev(y,rate=rate,taus=taus)
plt.loglog(t2, ad, style)
# plot a line with the slope alpha
def plotline(plt, alpha, taus,style):
y = [ pow(tt,alpha) for tt in taus]
plt.loglog(taus,y,style)
t = numpy.logspace( 0 ,4,50) # tau values from 1 to 1000
N=10000
rate = 1.0
# white phase noise => 1/tau ADEV
d = numpy.random.randn(4*N)
phaseA = d[0:N] # numpy.random.randn(N) #pink(N)
phaseA = [1*x for x in phaseA]
phaseB = d[N:2*N] #numpy.random.randn(N) #noise.pink(N)
phaseB = [5*x for x in phaseB]
phaseC = d[2*N:3*N] #numpy.random.randn(N) #noise.pink(N)
phaseC = [5*x for x in phaseC]
phaseAB = [a-b for (a,b) in zip(phaseA,phaseB)]
phaseBC = [b-c for (b,c) in zip(phaseB,phaseC)]
phaseCA = [c-a for (c,a) in zip(phaseC,phaseA)]
(taus,devA,err_a,ns_ab) = allantools.three_cornered_hat_phase(phaseAB,phaseBC,phaseCA,rate,t, allantools.mdev)
plt.subplot(111, xscale="log", yscale="log")
plotallan_phase(plt, phaseA, 1, t, 'ro')
plotallan_phase(plt, phaseB, 1, t, 'go')
plotallan_phase(plt, phaseC, 1, t, 'bo')
plotallan_phase(plt, phaseAB, 1, t, 'r.')
plotallan_phase(plt, phaseBC, 1, t, 'g.')
plotallan_phase(plt, phaseCA, 1, t, 'b.')
plt.loglog(taus, devA, 'rv')
plt.grid()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Extract columns to be used for prediction. Pitcher and year are probably not predictive, so I am leaving them out.
Step2: Relabel pitch types using the scikit-learn label encoder (XGboost requires sequential labels).
Step3: XGBoost runs faster using its own binary data structures
Step4: We can specify the hyperparameters of the model, to use as a starting point for the analysis.
Step5: Using arbitrarily-chosen hyperparemeters, the model achieves nearly perfect accuracy on the training set.
Step6: However, this model may be overfit to the training dataset, so I am going to use 5-fold cross-validation to select hyperparameters that result in the best fit.
Step7: I can select the best parameters from the cross-validation procedure to use to predict on the test data (I could do a more refined search of the hyperparameter space, but the multiclass errors appear to be broadly similar across the range of values that I used, so I will stick with these).
Step8: The accuracy score for the training data is nominally lower than the original model, but not much; moreover this model should perform better in out-of-sample prediction.
Step9: Below is a plot of feature importances, using the F-score. This quantifies how many times a particular variable is used as a splitting variable across all the trees. This ranking makes intuitive sense, with movement and velocity being the most relevant factors.
Step10: Generate predictions on the test set using fitted classifier.
Step11: Back-transform label encoding, and export to predicted_pitches_fonnesbeck.csv.
|
<ASSISTANT_TASK:>
Python Code:
from pybaseball import statcast
import numpy as np
import pandas as pd
pitch_data = statcast(start_dt='2017-04-01', end_dt='2017-04-30')
pitch_data.shape
pitch_data.pitch_type.value_counts()
pitch_type = pitch_data.pop('pitch_type')
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
pitch_data, pitch_type, test_size=0.33, random_state=42)
prediction_cols = ['p_throws', 'release_spin_rate', 'effective_speed', 'release_extension',
'vx0', 'vy0', 'vz0', 'ax', 'ay', 'az']
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder().fit(y_train)
y_train_encoded = encoder.transform(y_train)
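# Peek at the label mapping (an addition): encoded label i corresponds to encoder.classes_[i],
# which is the ordering XGBoost's class indices will follow.
print(list(encoder.classes_))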
import xgboost as xgb
dtrain = xgb.DMatrix(X_train[prediction_cols], label=y_train_encoded)
dtest = xgb.DMatrix(X_test[prediction_cols])
xgb_params = {
'max_depth': 12,
'eta': 0.2,
'nthread': 4, # Use 4 cores for multiprocessing
'num_class': 6, # pitch types
'objective': 'multi:softmax' # use softmax multi-class classification
}
boosted_classifier = xgb.train(xgb_params, dtrain, num_boost_round=30)
from sklearn.metrics import accuracy_score
predictions = boosted_classifier.predict(dtrain)
accuracy_score(y_train_encoded, predictions)
param_grid = [(max_depth, min_child_weight, eta) for max_depth in (6, 8, 10, 12)
for min_child_weight in (7, 9, 11, 13)
for eta in (0.15, 0.2, 0.25)]
min_merror = np.inf
best_params = None
for max_depth, min_child_weight, eta in param_grid:
print("CV with max_depth={}, min_child_weight={}, eta={}".format(
max_depth,
min_child_weight,
eta))
# Update our parameters
xgb_params['max_depth'] = max_depth
xgb_params['min_child_weight'] = min_child_weight
xgb_params['eta'] = eta
# Run CV
cv_results = xgb.cv(
xgb_params,
dtrain,
num_boost_round=50,
nfold=5,
metrics={'merror'},
early_stopping_rounds=3
)
# Update best score
mean_merror = cv_results['test-merror-mean'].min()
boost_rounds = cv_results['test-merror-mean'].idxmin()
print("\tmerror {} for {} rounds".format(mean_merror, boost_rounds))
if mean_merror < min_merror:
min_merror = mean_merror
best_params = (max_depth, min_child_weight, eta)
print("Best params: {}, {}, {}, merror: {}".format(best_params[0], best_params[1], best_params[2], min_merror))
xgb_params = {
'max_depth': 10,
'eta': 0.2,
'min_child_weight': 9, # minimum sum of instance weight needed in a child
'nthread': 4, # use 4 cores for multiprocessing
'num_class': 6, # pitch types
'objective': 'multi:softmax' # use softmax multi-class classification
}
boosted_classifier = xgb.train(xgb_params, dtrain, num_boost_round=30)
from sklearn.metrics import accuracy_score
predictions = boosted_classifier.predict(dtrain)
accuracy_score(y_train_encoded, predictions)
xgb.plot_importance(boosted_classifier)
test_predictions = boosted_classifier.predict(dtest)
test_predictions
predicted_pitches = pd.Series(encoder.inverse_transform(test_predictions.astype(int)), index=X_test.index)
predicted_pitches.name = 'pitch_type'
predicted_pitches.index.name = 'pitchid'
predicted_pitches.to_csv('predicted_pitches_fonnesbeck.csv', header=True)
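# Rough sanity check (an addition; assumes row order is preserved through the DMatrix):
# compare the back-transformed predictions against the held-out labels.
print("Held-out accuracy:", accuracy_score(y_test, predicted_pitches))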
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preprocess Data
Step2: Step 1
Step3: Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include
Step4: Step 2
Step5: Features and Labels
Step6: Training Pipeline
Step7: Model Evaluation
Step8: Model Training
Step9: Model Evaluation
Step10: Question 1
Step11: Question 6
|
<ASSISTANT_TASK:>
Python Code:
# Load pickled data
import pickle
import csv
import cv2
import numpy as np
import math
import matplotlib.pyplot as plt
signnames = []
with open("signnames.csv", 'r') as f:
next(f)
reader = csv.reader(f)
signnames = list(reader)
n_classes = len(signnames)
training_file = "./train.p"
testing_file = "./test.p"
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
from sklearn import cross_validation
X_train, X_test = [], []
y_train, y_test = [], test['labels']
for i, img in enumerate(train['features']):
img = cv2.resize(img,(48, 48), interpolation = cv2.INTER_CUBIC)
X_train.append(img)
y_train.append(train['labels'][i])
# Adaptive Histogram (CLAHE)
imgLab = cv2.cvtColor(img, cv2.COLOR_RGB2Lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
l, a, b = cv2.split(imgLab)
l = clahe.apply(l)
imgLab = cv2.merge((l, a, b))
imgLab = cv2.cvtColor(imgLab, cv2.COLOR_Lab2RGB)
X_train.append(imgLab)
y_train.append(train['labels'][i])
# Rotate -15
M = cv2.getRotationMatrix2D((24, 24), -15.0, 1)
imgL = cv2.warpAffine(img, M, (48, 48))
X_train.append(imgL)
y_train.append(train['labels'][i])
# Rotate 15
M = cv2.getRotationMatrix2D((24, 24), 15.0, 1)
imgR = cv2.warpAffine(img, M, (48, 48))
X_train.append(imgR)
y_train.append(train['labels'][i])
for img in test['features']:
X_test.append(cv2.resize(img,(48, 48), interpolation = cv2.INTER_CUBIC))
X_train, X_validation, y_train, y_validation = cross_validation.train_test_split(X_train, y_train, test_size=0.2, random_state=7)
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
n_train = len(X_train)
n_test = len(X_test)
image_shape = X_train[0].shape
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
print("Number of X_train = ", len(X_train))
print("Number of X_validation = ", len(X_validation))
print("Number of y_train = ", len(y_train))
print("Number of y_validation = ", len(y_validation))
import random
# Visualizations will be shown in the notebook.
%matplotlib inline
index = random.randint(0, len(X_train))
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
plt.imshow(image)
print(y_train[index], signnames[y_train[index]][1])
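# Extra visualization (an addition): the class distribution of the augmented training set,
# which makes any class imbalance easy to spot.
plt.figure(figsize=(10, 4))
plt.hist(y_train, bins=n_classes)
plt.xlabel('class id')
plt.ylabel('count')
plt.show()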
import tensorflow as tf
from tensorflow.contrib.layers import flatten
EPOCHS = 10
BATCH_SIZE = 128
def ConvNet(x):
mu = 0
sigma = 0.1
# Layer 1: Convolutional. Input = 48x48x3. Output = 42x42x100.
c1_W = tf.Variable(tf.truncated_normal([7, 7, 3, 100], mean=mu, stddev=sigma))
c1_b = tf.Variable(tf.zeros(100))
c1 = tf.nn.conv2d(x, c1_W, strides=[1, 1, 1, 1], padding='VALID')
c1 = tf.nn.bias_add(c1, c1_b)
c1 = tf.nn.relu(c1)
# Layer 2: Max Pooling. Input = 42x42x100. Output = 21x21x100.
s2 = tf.nn.max_pool(c1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# Layer 3: Convolutional. Input = 21x21x100. Output = 18x18x150.
c3_W = tf.Variable(tf.truncated_normal([4, 4, 100, 150], mean=mu, stddev=sigma))
c3_b = tf.Variable(tf.zeros(150))
c3 = tf.nn.conv2d(s2, c3_W, strides=[1, 1, 1, 1], padding='VALID')
c3 = tf.nn.bias_add(c3, c3_b)
c3 = tf.nn.relu(c3)
# Layer 4: Max Pooling. Input = 18x18x150. Output = 9x9x150
s4 = tf.nn.max_pool(c3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# Layer 5: Convolutional. Input = 9x9x150. Output = 6x6x250.
c5_W = tf.Variable(tf.truncated_normal([4, 4, 150, 250], mean=mu, stddev=sigma))
c5_b = tf.Variable(tf.zeros(250))
c5 = tf.nn.conv2d(s4, c5_W, strides=[1, 1, 1, 1], padding='VALID')
c5 = tf.nn.bias_add(c5, c5_b)
c5 = tf.nn.relu(c5)
# Layer 6: Max Pooling. Input = 6x6x250. Output = 3x3x250.
s6 = tf.nn.max_pool(c5, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# Layer 6: Flatten. Input = 3x3x250. Output = 2250
s6 = flatten(s6)
# Layer 7: Fully Connected. Input = 2250. Output = 300.
fc7_W = tf.Variable(tf.truncated_normal([2250, 300], mean=mu, stddev=sigma))
fc7_b = tf.Variable(tf.zeros(300))
fc7 = tf.add(tf.matmul(s6, fc7_W), fc7_b)
fc7 = tf.nn.relu(fc7)
# Layer 8: Fully Connected. Input = 300. Output = 43.
fc8_W = tf.Variable(tf.truncated_normal([300, 43], mean=mu, stddev=sigma))
fc8_b = tf.Variable(tf.zeros(43))
fc8 = tf.add(tf.matmul(fc7, fc8_W), fc8_b)
return fc8
x = tf.placeholder(tf.float32, (None, 48, 48, 3))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, n_classes)
rate = 0.001
logits = ConvNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=one_hot_y)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_validation, y_validation)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
try:
saver
except NameError:
saver = tf.train.Saver()
saver.save(sess, 'convnet')
print("Model saved")
with tf.Session() as sess:
loader = tf.train.import_meta_graph("convnet.meta")
loader.restore(sess, tf.train.latest_checkpoint('./'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
from PIL import Image
# Visualizations will be shown in the notebook.
%matplotlib inline
new_images = []
new_labels = np.array([4, 17, 26, 28, 14])
fig = plt.figure()
for i in range(1, 6):
subplot = fig.add_subplot(2,3,i)
img = cv2.imread("./dataset/{}.png".format(i))
img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
img = cv2.resize(img,(48, 48), interpolation = cv2.INTER_CUBIC)
subplot.set_title(signnames[new_labels[i-1]][1],fontsize=8)
subplot.imshow(img)
new_images.append(img)
with tf.Session() as sess:
loader = tf.train.import_meta_graph("convnet.meta")
loader.restore(sess, tf.train.latest_checkpoint('./'))
new_pics_classes = sess.run(logits, feed_dict={x: new_images})
test_accuracy = evaluate(new_images, new_labels)
print("Test Accuracy = {:.3f}".format(test_accuracy))
top3 = sess.run(tf.nn.top_k(new_pics_classes, k=3, sorted=True))
for i in range(len(top3[0])):
labels = list(map(lambda x: signnames[x][1], top3[1][i]))
print("Image {} predicted labels: {} with probabilities: {}".format(i+1, labels, top3[0][i]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: NB
Step2: Scientific
Step3: Collective Knowledge
Step4: Define helper functions
Step5: Plot experimental data
Step6: Access experimental data
Step7: Print
Step8: <a id="table"></a>
Step9: <a id="plot"></a>
|
<ASSISTANT_TASK:>
Python Code:
repo_uoa = 'explore-matrix-size-gemm-libs-dvdt-prof-firefly-rk3399-001'
import os
import sys
import json
import re
import IPython as ip
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib as mp
print ('IPython version: %s' % ip.__version__)
print ('Pandas version: %s' % pd.__version__)
print ('NumPy version: %s' % np.__version__)
print ('Seaborn version: %s' % sns.__version__) # apt install python-tk
print ('Matplotlib version: %s' % mp.__version__)
import matplotlib.pyplot as plt
from matplotlib import cm
%matplotlib inline
from IPython.display import Image, display
def display_in_full(df):
pd.options.display.max_columns = len(df.columns)
pd.options.display.max_rows = len(df.index)
display(df)
import ck.kernel as ck
print ('CK version: %s' % ck.__version__)
# client: 'acl-sgemm-opencl-example' or 'clblast-tune'
def get_mnk(characteristics, client):
# dim: 'm', 'n', 'k'
def get_dim_int(characteristics, client, dim):
if client == 'clblast-tune':
dim_str = characteristics['run'][dim][0]
dim_int = np.int64(dim_str)
else:
dim_str = characteristics['run'][dim]
dim_int = np.int64(dim_str)
return dim_int
m = get_dim_int(characteristics, client, 'm')
n = get_dim_int(characteristics, client, 'n')
k = get_dim_int(characteristics, client, 'k')
return ('(%d, %d, %d)' % (m, n, k))
def get_GFLOPS(characteristics, client):
if client == 'acl-sgemm-opencl-example':
GFLOPS_str = characteristics['run']['GFLOPS_1']
else:
GFLOPS_str = characteristics['run']['GFLOPS_1'][0]
GFLOPS = np.float(GFLOPS_str)
return GFLOPS
def get_TimeMS(characteristics,client):
    time_execution = characteristics['run'].get('ms_1')
    return time_execution

# Leftover timing snippet (unreachable after the return above; it needs a `profiling` dict
# and `from datetime import datetime`), kept here for reference:
# print(profiling)
# start = datetime.strptime(profiling['timestamp']['start'], '%Y-%m-%dT%H:%M:%S.%f')
# end = datetime.strptime(profiling['timestamp']['end'], '%Y-%m-%dT%H:%M:%S.%f')
# print(start.timestamp() * 1000)
# print(end.timestamp() * 1000)
# elapsed = (end.timestamp() * 1000) - (start.timestamp() * 1000)
# return elapsed
default_colormap = cm.autumn
default_figsize = [20, 12]
default_dpi = 200
default_fontsize = 20
default_legend_fontsize = 'medium'
if mp.__version__[0]=='2': mp.style.use('classic')
mp.rcParams['figure.figsize'] = default_figsize
mp.rcParams['figure.dpi'] = default_dpi
mp.rcParams['font.size'] = default_fontsize
mp.rcParams['legend.fontsize'] = default_legend_fontsize
def plot(df_mean, df_std, rot=90, patch_fontsize=default_fontsize):
ax = df_mean.plot(yerr=df_std,
kind='bar', ylim=[0, 20], rot=rot, width=0.9, grid=True, legend=True,
figsize=default_figsize, colormap=default_colormap, fontsize=default_fontsize)
ax.set_title('ARM Compute Library vs CLBlast (dv/dt)', fontsize=default_fontsize)
ax.set_ylabel('SGEMM GFLOPS', fontsize=default_fontsize)
ax.legend(loc='upper right')
for patch in ax.patches:
text = '{0:2.1f}'.format(patch.get_height())
ax.annotate(text, (patch.get_x()*1.00, patch.get_height()*1.01), fontsize=patch_fontsize)
def get_experimental_results(repo_uoa='explore-matrix-size-gemm-libs-dvdt-prof-firefly-rk3399', tags='explore-matrix-size-libs-sgemm, acl-sgemm-opencl-example'):
module_uoa = 'experiment'
r = ck.access({'action':'search', 'repo_uoa':repo_uoa, 'module_uoa':module_uoa, 'tags':tags})
if r['return']>0:
print ("Error: %s" % r['error'])
exit(1)
experiments = r['lst']
dfs = []
for experiment in experiments:
data_uoa = experiment['data_uoa']
r = ck.access({'action':'list_points', 'repo_uoa':repo_uoa, 'module_uoa':module_uoa, 'data_uoa':data_uoa})
if r['return']>0:
print ("Error: %s" % r['error'])
exit(1)
for point in r['points']:
with open(os.path.join(r['path'], 'ckp-%s.0001.json' % point)) as point_file:
point_data_raw = json.load(point_file)
characteristics_list = point_data_raw['characteristics_list']
num_repetitions = len(characteristics_list)
client = data_uoa[len('explore-matrix-size-gemm-libs-'):]
# Obtain column data.
data = [
{
'client': client,
'(m, n, k)': get_mnk(characteristics, client),
'GFLOPS': get_GFLOPS(characteristics, client),
'dvdt_prof_info': characteristics['run'].get('dvdt_prof',[]),
'time (ms)' : get_TimeMS(characteristics,client),
'repetition_id': repetition_id
}
for (characteristics, repetition_id) in zip(characteristics_list, range(num_repetitions))
]
#Construct a DataFrame.
df = pd.DataFrame(data)
# Set columns and index names.
df.columns.name = 'characteristics'
df.index.name = 'index'
df = df.set_index(['client', '(m, n, k)', 'repetition_id','GFLOPS','time (ms)'])
# Append to the list of similarly constructed DataFrames.
dfs.append(df)
# Concatenate all constructed DataFrames (i.e. stack on top of each other).
result = pd.concat(dfs).unstack('client').swaplevel(axis=1)
return result.sort_index(level=result.index.names)
df = get_experimental_results(repo_uoa=repo_uoa)
display_in_full(df)
df_min = df \
.ix[df.groupby(level=df.index.names[:-1])['time (ms)'].idxmin()] \
.reset_index('repetition_id', drop=True)
df_min
batch_size = 1
df_model_lib = df_min[['dvdt_prof_info']] \
.reset_index('platform', drop=True) \
.reorder_levels([ 'batch_size', 'model', 'lib']) \
.loc[batch_size] \
.sortlevel()
df_model_lib
models = df_model_lib.index.levels[0]
libs = df_model_lib.index.levels[1]
def concat(model, lib):
return '%s:%s' % (model, lib)
def analyse_model_lib(df_model_lib, model, lib, min_pc=1.0):
trace = pw.index_calls(df_model_lib.loc[model].loc[lib]['dvdt_prof_info'])
# All kernel enqueues.
df_kernel_enqueues = pw.df_kernel_enqueues(pw.filter_calls(trace, ['clEnqueueNDRangeKernel']), unit='ms')
# Kernel enqueues that take at least 'min_pc' % of the execution time.
df_kernel_enqueues_cum_time_num = pw.df_kernel_enqueues_cumulative_time_num(df_kernel_enqueues, unit)
df_kernel_enqueues_cum_time_num.columns.name = concat(model, lib)
return df_kernel_enqueues_cum_time_num[df_kernel_enqueues_cum_time_num['** Execution time (%) **'] > min_pc]
def analyse_xgemm_kernel(df_model_lib, model, lib, kernel):
# Get trace for lib and model.
trace = pw.index_calls(df_model_lib.loc[model].loc[lib]['dvdt_prof_info'])
# All calls to set kernel args.
set_args = pw.filter_calls(trace, ['clSetKernelArg'])
# All kernel enqueues.
nqs = pw.filter_calls(trace, ['clEnqueueNDRangeKernel'])
# Construct a DataFrame with info about kernel enqueues.
df = pw.df_kernel_enqueues(nqs, unit='ms').swaplevel().ix[kernel]
df = df[['p3 - p2 (ms)', 'gws2']]
# As gws2 is always 1, we can use it to count the number of enqueues.
df.columns = [ '** Execution time (ms) **', '** Number of enqueues **' ]
df.columns.name = kernel
# Augment the DataFrame with columns for the (M, N, K) triples.
df['kSizeM'] = 'M'; df['bSizeM'] = 'MM'
df['kSizeN'] = 'N'; df['bSizeN'] = 'NN'
df['kSizeK'] = 'K'; df['bSizeK'] = 'KK'
# Initialise buckets.
buckets = init_buckets()
# Augment the DataFrame with the actual (M, N, K) triples.
mnk_triples = []; mmnnkk_triples = []
for nq in nqs:
if nq['name'] == kernel:
prof = nq['profiling']
(M, N, K) = ('M', 'N', 'K'); (MM, NN, KK) = ('MM', 'NN', 'KK')
for set_arg in set_args:
if (set_arg['call_index'] > nq['call_index']): break
if (set_arg['kernel'] != nq['kernel']): continue
arg_value = pc.hex_str_as_int(set_arg['arg_value'])
if (set_arg['arg_index'] == 0): M = arg_value; MM = arg_value
if (set_arg['arg_index'] == 1): N = arg_value; NN = arg_value
if (set_arg['arg_index'] == 2): K = arg_value; KK = arg_value
mnk_triples.append((M, N, K))
mmnnkk_triples.append(get_nearest_bucket(buckets, (M, N, K)))
df[['kSizeM', 'kSizeN', 'kSizeK']] = mnk_triples
df[['bSizeM', 'bSizeN', 'bSizeK']] = mmnnkk_triples
# Calculate Gflops and GFLOPS (Gflops/s).
df['** Gflops **'] = 2*df['kSizeM']*df['kSizeN']*df['kSizeK']*1e-9
df['** GFLOPS **'] = df['** Gflops **'] / (df['** Execution time (ms) **']*1e-3)
return df
model_lib_kernel_analysis = {}
for model in models:
for lib in libs:
title = concat(model, lib)
print('== %s ==' % title)
try:
analysis = model_lib_analysis[title]
except:
print(' ... missing ...'); print(''); continue
for kernel in analysis.index:
if kernel.lower().find('xgemm') == -1: continue
analysis_xgemm = analyse_xgemm_kernel(df_model_lib, model, lib, kernel)
pd.options.display.max_columns = analysis_xgemm.columns.size
pd.options.display.max_rows = analysis_xgemm.index.size
display(analysis_xgemm)
analysis_xgemm_stats = analysis_xgemm.describe()
pd.options.display.max_columns = analysis_xgemm_stats.columns.size
pd.options.display.max_rows = analysis_xgemm_stats.index.size
display(analysis_xgemm_stats)
model_lib_kernel_analysis[concat(title, kernel)] = analysis_xgemm
print('')
print('')
df = get_experimental_results(repo_uoa=repo_uoa)
display_in_full(df)
df_mean = df.groupby(level=df.index.names[:-1]).mean()
df_std = df.groupby(level=df.index.names[:-1]).std()
plot(df_mean, df_std)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: The parameter file for Funwave-TVD is called "input.txt". Here we create it using writefile. You don't need to understand the details of this file for this tutorial.
Step4: Here is a simple bash script to run the code....
Step5: FOLLOWING ALONG AT HOME
Step6: Here's how the average developer might run the code when doing their research.
Step7: Here's how they might retrieve the data.
Step8: Now let's use Python to graph the output
|
<ASSISTANT_TASK:>
Python Code:
!mkdir -p ~/agave
%cd ~/agave
!pip3 install --upgrade setvar
import re
import os
import sys
from setvar import *
from time import sleep
# This cell enables inline plotting in the notebook
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
writefile("input.txt",
!INPUT FILE FOR FUNWAVE_TVD
! NOTE: all input parameter are capital sensitive
! --------------------TITLE-------------------------------------
! title only for log file
TITLE = VESSEL
! -------------------HOT START---------------------------------
HOT_START = F
FileNumber_HOTSTART = 1
! -------------------PARALLEL INFO-----------------------------
!
! PX,PY - processor numbers in X and Y
! NOTE: make sure consistency with mpirun -np n (px*py)
!
PX = 2
PY = 1
! --------------------DEPTH-------------------------------------
! Depth types, DEPTH_TYPE=DATA: from depth file
! DEPTH_TYPE=FLAT: idealized flat, need depth_flat
! DEPTH_TYPE=SLOPE: idealized slope,
! need slope,SLP starting point, Xslp
! and depth_flat
DEPTH_TYPE = FLAT
DEPTH_FLAT = 10.0
! -------------------PRINT---------------------------------
! PRINT*,
! result folder
RESULT_FOLDER = output/
! ------------------DIMENSION-----------------------------
! global grid dimension
Mglob = 500
Nglob = 100
! ----------------- TIME----------------------------------
! time: total computational time/ plot time / screen interval
! all in seconds
TOTAL_TIME = 3.0
PLOT_INTV = 1.0
PLOT_INTV_STATION = 50000.0
SCREEN_INTV = 1.0
HOTSTART_INTV = 360000000000.0
WAVEMAKER = INI_GAU
AMP = 3.0
Xc = 250.0
Yc = 50.0
WID = 20.0
! -----------------GRID----------------------------------
! if use spherical grid, in decimal degrees
! cartesian grid sizes
DX = 1.0
DY = 1.0
! ----------------SHIP WAKES ----------------------------
VESSEL_FOLDER = ./
NumVessel = 2
! -----------------OUTPUT-----------------------------
ETA = T
U = T
V = T
""")
writefile("run.sh",
#!/bin/bash
export LD_LIBRARY_PATH=/usr/local/lib
mkdir -p rundir
cd ./rundir
cp ../input.txt .
mpirun -np 2 ~/FUNWAVE-TVD/src/funwave_vessel
""")
if os.environ.get('USE_TUNNEL') == 'True':
# fetch the hostname and port of the reverse tunnel running in the sandbox
# so Agave can connect to our local sandbox
!echo $(ssh -q -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null sandbox 'curl -s http://localhost:4040/api/tunnels | jq -r '.tunnels[0].public_url'') > ngrok_url.txt
!cat ngrok_url.txt | sed 's|^tcp://||'
!cat ngrok_url.txt | sed 's|^tcp://||' | sed -r 's#(.*):(.*)#\1#' > ngrok_host.txt
!cat ngrok_url.txt | sed 's|^tcp://||' | sed -r 's#(.*):(.*)#\2#' > ngrok_port.txt
# set the environment variables otherwise set when running in a training cluster
os.environ['VM_PORT'] = readfile('ngrok_port.txt').strip()
os.environ['VM_MACHINE'] = readfile('ngrok_host.txt').strip()
os.environ['AGAVE_SYSTEM_HOST'] = readfile('ngrok_host.txt').strip()
os.environ['AGAVE_SYSTEM_PORT'] = readfile('ngrok_port.txt').strip()
!echo "VM_PORT=$VM_PORT"
!echo "VM_MACHINE=$VM_MACHINE"
setvar("VM_IPADDRESS=$(getent hosts ${VM_MACHINE}|cut -d' ' -f1)")
!scp -o "StrictHostKeyChecking=no" -P $VM_PORT input.txt run.sh $VM_IPADDRESS:.
!ssh -p $VM_PORT $VM_IPADDRESS bash run.sh
!ssh -p $VM_PORT $VM_IPADDRESS tar -C rundir -czf output.tgz output
!rm -fr output.tgz output
!scp -q -r -P $VM_PORT $VM_IPADDRESS:output.tgz .
!tar xzf output.tgz
!ls output
data = np.genfromtxt("output/v_00003")
fig = plt.figure(figsize=(12,12))
pltres = plt.imshow(data[::-1,:])
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Select a dataset
Step2: The end result of running both lines of code above is that we can now access the dataset by using spotify_data.
Step3: Check now that the first five rows agree with the image of the dataset (from when we saw what it would look like in Excel) above.
Step4: Thankfully, everything looks about right, with millions of daily global streams for each song, and we can proceed to plotting the data!
Step5: As you can see above, the line of code is relatively short and has two main components
Step6: The first line of code sets the size of the figure to 14 inches (in width) by 6 inches (in height). To set the size of any figure, you need only copy the same line of code as it appears. Then, if you'd like to use a custom size, change the provided values of 14 and 6 to the desired width and height.
Step7: In the next code cell, we plot the lines corresponding to the first two columns in the dataset.
|
<ASSISTANT_TASK:>
Python Code:
#$HIDE$
import pandas as pd
pd.plotting.register_matplotlib_converters()
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
print("Setup Complete")
# Path of the file to read
spotify_filepath = "../input/spotify.csv"
# Read the file into a variable spotify_data
spotify_data = pd.read_csv(spotify_filepath, index_col="Date", parse_dates=True)
# Print the first 5 rows of the data
spotify_data.head()
# Print the last five rows of the data
spotify_data.tail()
# Line chart showing daily global streams of each song
sns.lineplot(data=spotify_data)
# Set the width and height of the figure
plt.figure(figsize=(14,6))
# Add title
plt.title("Daily Global Streams of Popular Songs in 2017-2018")
# Line chart showing daily global streams of each song
sns.lineplot(data=spotify_data)
list(spotify_data.columns)
# Set the width and height of the figure
plt.figure(figsize=(14,6))
# Add title
plt.title("Daily Global Streams of Popular Songs in 2017-2018")
# Line chart showing daily global streams of 'Shape of You'
sns.lineplot(data=spotify_data['Shape of You'], label="Shape of You")
# Line chart showing daily global streams of 'Despacito'
sns.lineplot(data=spotify_data['Despacito'], label="Despacito")
# Add label for horizontal axis
plt.xlabel("Date")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Budget (orçamento)
Step2: Expense commitments (empenhos)
Step3: The API returns only one page per query. The script below checks the number of pages in the query metadata and iterates as many times as needed to fetch every page
Step4: With the steps above, we requested every page and converted the JSON data into a DataFrame. Now we can analyse this data with Pandas. To check how many records exist, let's look at the end of the list
Step5: Application modalities
Step6: Largest expenses of 2017
Step7: Funding sources
Step8: Step 4. Want to save a CSV?
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import requests
import json
import numpy as np
TOKEN = '198f959a5f39a1c441c7c863423264'
base_url = "https://gatewayapi.prodam.sp.gov.br:443/financas/orcamento/sof/v2.1.0"
headers={'Authorization' : str('Bearer ' + TOKEN)}
url_orcado = '{base_url}/consultarDespesas?anoDotacao=2017&mesDotacao=08&codOrgao=84'.format(base_url=base_url)
request_orcado = requests.get(url_orcado,
headers=headers,
verify=True).json()
df_orcado = pd.DataFrame(request_orcado['lstDespesas'])
df_resumo_orcado = df_orcado[['valOrcadoInicial', 'valOrcadoAtualizado', 'valCongelado', 'valDisponivel', 'valEmpenhadoLiquido', 'valLiquidado']]
df_resumo_orcado
url_empenho = '{base_url}/consultaEmpenhos?anoEmpenho=2017&mesEmpenho=08&codOrgao=84'.format(base_url=base_url)
pagination = '&numPagina={PAGE}'
request_empenhos = requests.get(url_empenho,
headers=headers,
verify=True).json()
number_of_pages = request_empenhos['metadados']['qtdPaginas']
todos_empenhos = []
todos_empenhos = todos_empenhos + request_empenhos['lstEmpenhos']
if number_of_pages>1:
for p in range(2, number_of_pages+1):
request_empenhos = requests.get(url_empenho + pagination.format(PAGE=p), headers=headers, verify=True).json()
todos_empenhos = todos_empenhos + request_empenhos['lstEmpenhos']
df_empenhos = pd.DataFrame(todos_empenhos)
df_empenhos.tail()
modalidades = df_empenhos.groupby('txtModalidadeAplicacao')['valTotalEmpenhado', 'valLiquidado'].sum()
modalidades
# Another way to perform the same operation:
#pd.pivot_table(df_empenhos, values='valTotalEmpenhado', index=['txtModalidadeAplicacao'], aggfunc=np.sum)
despesas = pd.pivot_table(df_empenhos,
values=['valLiquidado', 'valPagoExercicio'],
index=['numCpfCnpj', 'txtRazaoSocial', 'txtDescricaoPrograma'],
aggfunc=np.sum).sort_values('valPagoExercicio', axis=0, ascending=False, inplace=False, kind='quicksort', na_position='last')
despesas.head(15)
fonte = pd.pivot_table(df_empenhos,
values=['valLiquidado', 'valPagoExercicio'],
index=['txtDescricaoFonteRecurso'],
aggfunc=np.sum).sort_values('valPagoExercicio', axis=0, ascending=False, inplace=False, kind='quicksort', na_position='last')
fonte
df_empenhos.to_csv('empenhos.csv')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Replace the Type column with a categorical (numeric) value
Step2: Separate the target column from the rest of the predictor variables
Step3: Mutual Information
Step4: Chi-Square
Step5: Principal Component Analysis (PCA)
Step6: PCA without normalization
Step7: Plot the projection onto the first two principal components
Step8: PCA with Normalization
Step9: Linear Discriminant Analysis (LDA)
Step10: LDA without normalization
Step11: LDA with normalization
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import sklearn.feature_selection as FS
data = pd.read_csv("./wine_dataset.csv", delimiter=";")
data.head()
data["Type"] = pd.Categorical.from_array(data["Type"]).codes
data["Type"].replace("A",0)
data["Type"].replace("B",1)
data["Type"].replace("C",2)
data.head()
data.describe()
data_y = data["Type"]
data_X = data.drop("Type", 1)
data_X.head()
mi = FS.mutual_info_classif(data_X, data_y)
print(mi)
data_X.head(0)
names=data_X.axes[1]
names
indice=np.argsort(mi)[::-1]
print(indice)
print(names[indice])
plt.figure(figsize=(8,6))
plt.subplot(121)
plt.scatter(data[data.Type==1].Flavanoids,data[data.Type==1].Color_Intensity, color='red')
plt.scatter(data[data.Type==2].Flavanoids,data[data.Type==2].Color_Intensity, color='blue')
plt.scatter(data[data.Type==0].Flavanoids,data[data.Type==0].Color_Intensity, color='green')
plt.title('Good Predictor Variables \n Flavanoids vs Color_Intensity')
plt.xlabel('Flavanoids')
plt.ylabel('Color_Intensity')
plt.legend(['A','B','C'])
plt.subplot(122)
plt.scatter(data[data.Type==1].Ash,data[data.Type==1].Nonflavanoid_Phenols, color='red')
plt.scatter(data[data.Type==2].Ash,data[data.Type==2].Nonflavanoid_Phenols, color='blue')
plt.scatter(data[data.Type==0].Ash,data[data.Type==0].Nonflavanoid_Phenols, color='green')
plt.title('Ash vs Nonflavanoid_Phenols')
plt.xlabel('Ash')
plt.ylabel('Nonflavanoid_Phenols')
plt.legend(['A','B','C'])
plt.show()
chi = FS.chi2(X = data_X, y = data["Type"])[0]
print(chi)
indice_chi=np.argsort(chi)[::-1]
print(indice_chi)
print(names[indice_chi])
plt.figure()
plt.scatter(data[data.Type==1].Proline,data[data.Type==1].Color_Intensity, color='red')
plt.scatter(data[data.Type==2].Proline,data[data.Type==2].Color_Intensity, color='blue')
plt.scatter(data[data.Type==0].Proline,data[data.Type==0].Color_Intensity, color='green')
plt.title('Good Predictor Variables Chi-Square \n Proline vs Color_Intensity')
plt.xlabel('Proline')
plt.ylabel('Color_Intensity')
plt.legend(['A','B','C'])
plt.show()
from sklearn.decomposition.pca import PCA
pca = PCA()
pca.fit(data_X)
plt.plot(pca.explained_variance_)
plt.ylabel("eigenvalues")
plt.xlabel("position")
plt.show()
print ("Eigenvalues\n",pca.explained_variance_)
# Percentage of variance explained for each components
print('\nExplained variance ratio (first two components):\n %s'
% str(pca.explained_variance_ratio_))
pca = PCA(n_components=2)
X_pca = pd.DataFrame(pca.fit_transform(data_X))
pca_A = X_pca[data_y == 0]
pca_B = X_pca[data_y == 1]
pca_C = X_pca[data_y == 2]
#plot
plt.scatter(x = pca_A[0], y = pca_A[1], c="blue")
plt.scatter(x = pca_B[0], y = pca_B[1], c="turquoise")
plt.scatter(x = pca_C[0], y = pca_C[1], c="darkorange")
plt.xlabel("First Component")
plt.ylabel("Second Component")
plt.legend(["A","B","C"])
plt.show()
from sklearn import preprocessing
X_scaled = preprocessing.scale(data_X)
pca = PCA()
pca.fit(X_scaled)
plt.plot(pca.explained_variance_)
plt.ylabel("eigenvalues")
plt.xlabel("position")
plt.show()
print ("Eigenvalues\n",pca.explained_variance_)
# Percentage of variance explained for each components
print('\nExplained variance ratio (first two components):\n %s'
% str(pca.explained_variance_ratio_))
pca = PCA(n_components=2)
X_pca = pd.DataFrame(pca.fit_transform(X_scaled))
pca_A = X_pca[data_y == 0]
pca_B = X_pca[data_y == 1]
pca_C = X_pca[data_y == 2]
#plot
plt.scatter(x = pca_A[0], y = pca_A[1], c="blue")
plt.scatter(x = pca_B[0], y = pca_B[1], c="turquoise")
plt.scatter(x = pca_C[0], y = pca_C[1], c="darkorange")
plt.xlabel("First Component")
plt.ylabel("Second Component")
plt.legend(["A","B","C"])
plt.show()
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA()
lda.fit(data_X,data_y)
print("Porcentaje explicado:", lda.explained_variance_ratio_)
X_lda = pd.DataFrame(lda.fit_transform(data_X, data_y))
# Split the data into the 3 classes to give them different colors
lda_A = X_lda[data_y == 0]
lda_B = X_lda[data_y == 1]
lda_C = X_lda[data_y == 2]
#plot
plt.scatter(x = lda_A[0], y = lda_A[1], c="blue")
plt.scatter(x = lda_B[0], y = lda_B[1], c="turquoise")
plt.scatter(x = lda_C[0], y = lda_C[1], c="darkorange")
plt.title("LDA without normalization")
plt.xlabel("First LDA Component")
plt.ylabel("Second LDA Component")
plt.legend((["A","B","C"]), loc="lower right")
plt.show()
lda = LDA(n_components=2)
lda.fit(X_scaled,data_y)
print("Porcentaje explicado:", lda.explained_variance_ratio_)
X_lda = pd.DataFrame(lda.fit_transform(data_X, data_y))
# Split the data into the 3 classes to give them different colors
lda_A = X_lda[data_y == 0]
lda_B = X_lda[data_y == 1]
lda_C = X_lda[data_y == 2]
#plot
plt.scatter(x = lda_A[0], y = lda_A[1], c="blue")
plt.scatter(x = lda_B[0], y = lda_B[1], c="turquoise")
plt.scatter(x = lda_C[0], y = lda_C[1], c="darkorange")
plt.xlabel("First LDA Component")
plt.ylabel("Second LDA Component")
plt.legend(["A","B","C"],loc="lower right")
plt.title("LDA with normalization")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: NumPy
Step2: shape
Step3: np.ones
Step4: np.empty
Step5: np.arange
|
<ASSISTANT_TASK:>
Python Code:
test = "Hello World"
print ("test: " + test)
#
# The function np.zeros creates an array full of zeros,
# np.ones creates an array full of ones, and
# np.empty creates an array whose initial content is random and depends on the state of the memory.
# To create sequences of numbers, NumPy provides np.arange, a function analogous to range that returns arrays instead of lists.
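# A minimal demonstration of the constructors described above (the shapes and step
# values below are arbitrary choices for illustration):
import numpy as np
print(np.zeros((2, 3)))     # 2x3 array filled with zeros
print(np.ones((2, 3)))      # 2x3 array filled with ones
print(np.empty(4))          # uninitialised; contents depend on the state of memory
print(np.arange(0, 10, 2))  # like range(0, 10, 2) but returns array([0, 2, 4, 6, 8])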
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Snooping on execution
Step2: Snooping on referenced functions
Step3: pp - pretty print
Step4: Shortcut
Step5: How to use in Jupyter
|
<ASSISTANT_TASK:>
Python Code:
ROMAN = [
(1000, "M"),
( 900, "CM"),
( 500, "D"),
( 400, "CD"),
( 100, "C"),
( 90, "XC"),
( 50, "L"),
( 40, "XL"),
( 10, "X"),
( 9, "IX"),
( 5, "V"),
( 4, "IV"),
( 1, "I"),
]
def to_roman(number: int):
result = ""
for (arabic, roman) in ROMAN:
(factor, number) = divmod(number, arabic)
result += roman * factor
return result
print(to_roman(2021))
print(to_roman(8))
import snoop
@snoop
def to_roman2(number: int):
result = ""
for (arabic, roman) in ROMAN:
(factor, number) = divmod(number, arabic)
result += roman * factor
return result
print(to_roman2(2021))
from statistics import stdev
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(f"numbers={numbers}: stdev={stdev(numbers)}")
def mystddev(max: int) -> float:
my_numbers = list(range(max))
with snoop(depth=2):
return stdev(my_numbers)
print(mystddev(5))
from statistics import median
print(median(numbers) + 2 * stdev(numbers))
from snoop import pp
pp(pp(median(numbers)) + pp(2 * pp(stdev(numbers))))
# print(median(numbers) + 2 * stdev(numbers))
pp.deep(lambda: median(numbers) + 2 * stdev(numbers))
users = {
'user1': { 'is_admin': True, 'email': 'one@exmple.com'},
'user2': { 'is_admin': True, 'phone': '281-555-5555' },
'user3': { 'is_admin': False, 'email': 'three@example.com' },
}
def email_user(*user_names) -> None:
global users
for user in user_names:
print("Emailing %s at %s", (user, users[user]['email']))
email_user('user1', 'user2')
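# A hedged sketch of how snoop can help here: 'user2' has no 'email' key, so the call
# above raises a KeyError; wrapping it in a snoop context (the same pattern as the
# stdev example earlier) traces each line and the local variables up to the failure.
with snoop(depth=2):
    email_user('user1', 'user2')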
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine.
Step4: Extract the dataset from the compressed .tar.gz file.
Step6: Problem 1
Step7: Problem 2
Step8: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
Step9: Problem 4
|
<ASSISTANT_TASK:>
Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import tarfile
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
# Config the matplotlib backend as plotting inline in IPython
%matplotlib inline
url = 'http://commondatastorage.googleapis.com/books1000/'
last_percent_reported = None
def download_progress_hook(count, blockSize, totalSize):
"""A hook to report the progress of a download. This is mostly intended for users with
slow internet connections. Reports every 1% change in download progress."""
global last_percent_reported
percent = int(count * blockSize * 100 / totalSize)
if last_percent_reported != percent:
if percent % 5 == 0:
sys.stdout.write("%s%%" % percent)
sys.stdout.flush()
else:
sys.stdout.write(".")
sys.stdout.flush()
last_percent_reported = percent
def maybe_download(filename, expected_bytes, force=False):
"""Download a file if not present, and make sure it's the right size."""
if force or not os.path.exists(filename):
print('Attempting to download:', filename)
filename, _ = urlretrieve(url + filename, filename, reporthook=download_progress_hook)
print('\nDownload Complete!')
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', filename)
else:
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
num_classes = 10
np.random.seed(133)
def maybe_extract(filename, force=False):
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
if os.path.isdir(root) and not force:
# You may override by setting force=True.
print('%s already present - Skipping extraction of %s.' % (root, filename))
else:
print('Extracting data for %s. This may take a while. Please wait.' % root)
tar = tarfile.open(filename)
sys.stdout.flush()
tar.extractall()
tar.close()
data_folders = [
os.path.join(root, d) for d in sorted(os.listdir(root))
if os.path.isdir(os.path.join(root, d))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
num_classes, len(data_folders)))
print(data_folders)
return data_folders
train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load_letter(folder, min_num_images):
"""Load the data for a single letter label."""
image_files = os.listdir(folder)
dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
dtype=np.float32)
print(folder)
num_images = 0
for image in image_files:
image_file = os.path.join(folder, image)
try:
image_data = (ndimage.imread(image_file).astype(float) -
pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[num_images, :, :] = image_data
num_images = num_images + 1
except IOError as e:
print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')
dataset = dataset[0:num_images, :, :]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' %
(num_images, min_num_images))
print('Full dataset tensor:', dataset.shape)
print('Mean:', np.mean(dataset))
print('Standard deviation:', np.std(dataset))
return dataset
def maybe_pickle(data_folders, min_num_images_per_class, force=False):
dataset_names = []
for folder in data_folders:
set_filename = folder + '.pickle'
dataset_names.append(set_filename)
if os.path.exists(set_filename) and not force:
# You may override by setting force=True.
print('%s already present - Skipping pickling.' % set_filename)
else:
print('Pickling %s.' % set_filename)
dataset = load_letter(folder, min_num_images_per_class)
try:
with open(set_filename, 'wb') as f:
pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', set_filename, ':', e)
return dataset_names
train_datasets = maybe_pickle(train_folders, 45000)
test_datasets = maybe_pickle(test_folders, 1800)
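# The "Problem" prompts in the description leave the sanity checks to the reader; as a
# hedged example, one way to spot-check a pickled letter set is to load it and display
# a single 28x28 image (the file and index chosen here are arbitrary):
with open(train_datasets[0], 'rb') as f:
    sample_letter_set = pickle.load(f)
plt.imshow(sample_letter_set[0], cmap='gray')
plt.show()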
def make_arrays(nb_rows, img_size):
if nb_rows:
dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
labels = np.ndarray(nb_rows, dtype=np.int32)
else:
dataset, labels = None, None
return dataset, labels
def merge_datasets(pickle_files, train_size, valid_size=0):
num_classes = len(pickle_files)
valid_dataset, valid_labels = make_arrays(valid_size, image_size)
train_dataset, train_labels = make_arrays(train_size, image_size)
vsize_per_class = valid_size // num_classes
tsize_per_class = train_size // num_classes
start_v, start_t = 0, 0
end_v, end_t = vsize_per_class, tsize_per_class
end_l = vsize_per_class+tsize_per_class
for label, pickle_file in enumerate(pickle_files):
try:
with open(pickle_file, 'rb') as f:
letter_set = pickle.load(f)
# let's shuffle the letters to have random validation and training set
np.random.shuffle(letter_set)
if valid_dataset is not None:
valid_letter = letter_set[:vsize_per_class, :, :]
valid_dataset[start_v:end_v, :, :] = valid_letter
valid_labels[start_v:end_v] = label
start_v += vsize_per_class
end_v += vsize_per_class
train_letter = letter_set[vsize_per_class:end_l, :, :]
train_dataset[start_t:end_t, :, :] = train_letter
train_labels[start_t:end_t] = label
start_t += tsize_per_class
end_t += tsize_per_class
except Exception as e:
print('Unable to process data from', pickle_file, ':', e)
raise
return valid_dataset, valid_labels, train_dataset, train_labels
train_size = 200000
valid_size = 10000
test_size = 10000
valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(
train_datasets, train_size, valid_size)
_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)
print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)
pickle_file = 'notMNIST.pickle'
try:
f = open(pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The ROOT Python module is the entry point for all the ROOT C++ functionality.
Step3: Calling user-defined C++ code via PyROOT
Step4: and use it right away from Python
Step5: What about code in C++ libraries?
Step6: Of course not every conversion is allowed!
Step8: An example of a useful allowed conversion is Python list to std::vector
|
<ASSISTANT_TASK:>
Python Code:
import ROOT
h = ROOT.TH1F("my_histo", "Example histogram", 100, -4, 4)
ROOT.gInterpreter.ProcessLine("""
double add(double a, double b) {
return a + b;
}
""")
ROOT.add(3.14, 100)
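# The description also asks about code that lives in compiled C++ libraries; a hedged
# sketch (the library, header and function names below are hypothetical, not part of
# this tutorial):
# ROOT.gSystem.Load("libMyAnalysis")                    # load the shared library
# ROOT.gInterpreter.Declare('#include "MyAnalysis.h"')  # expose its declarations to cling
# ROOT.my_analysis_function(42)                         # then call it like any other binding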
ROOT.gInterpreter.ProcessLine("void print_integer(int i) { std::cout << i << std::endl; }")
ROOT.print_integer(7)
ROOT.print_integer([]) # fails with TypeError
ROOT.gInterpreter.ProcessLine("""
void print_vector(const std::vector<std::string> &v) {
for (auto &s : v) {
std::cout << s << std::endl;
}
}
""")
ROOT.print_vector(['Two', 'Words'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: PRE-PROCESSING!
Step2: TF-IDF
Step3: RAKE
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.feature_extraction import stop_words
from nltk.corpus import stopwords
import math
from textblob import TextBlob as tb
with open("scripts/script.txt", "r") as f:
data = f.read()
#with open("scripts/script.txt", "r") as f:
# data2 = f.readlines()
#for line in data:
# words = data.split()
with open("scripts/transcript_1.txt", "r") as t1:
t1 = t1.read()
with open("scripts/transcript_2.txt", "r") as t2:
t2 = t2.read()
with open("scripts/transcript_3.txt", "r") as t3:
t3 = t3.read()
from spacy.en import English
import nltk
from sklearn.feature_extraction.stop_words import ENGLISH_STOP_WORDS
parser = English()
parsedData = parser(data)
# All you have to do is iterate through the parsedData
# Each token is an object with lots of different properties
# A property with an underscore at the end returns the string representation
# while a property without the underscore returns an index (int) into spaCy's vocabulary
# The probability estimate is based on counts from a 3 billion word
# corpus, smoothed using the Simple Good-Turing method.
for i, token in enumerate(parsedData[0:2]):
print("original:", token.orth, token.orth_)
print("lowercased:", token.lower, token.lower_)
print("lemma:", token.lemma, token.lemma_)
print("shape:", token.shape, token.shape_)
print("prefix:", token.prefix, token.prefix_)
print("suffix:", token.suffix, token.suffix_)
print("log probability:", token.prob)
print("Brown cluster id:", token.cluster)
print("----------------------------------------")
from sklearn.metrics import accuracy_score
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
def tf(word, blob):
return blob.words.count(word) / len(blob.words)
def n_containing(word, bloblist):
return sum(1 for blob in bloblist if word in blob.words)
def idf(word, bloblist):
return math.log(len(bloblist) / (1 + n_containing(word, bloblist)))
def tfidf(word, blob, bloblist):
return tf(word, blob) * idf(word, bloblist)
bloblist = []
[bloblist.append(tb(doc)) for doc in [data, t1, t2, t3]]
for i, blob in enumerate(bloblist):
print("Top words in document {}".format(i + 1))
scores = {word: tfidf(word, blob, bloblist) for word in blob.words}
sorted_words = sorted(scores.items(), key=lambda x: x[1], reverse=True)
for word, score in sorted_words[:3]:
print("Word: {}, TF-IDF: {}".format(word, round(score, 5)))
# CountVectorizer(data)  # stray call, not used below
# `data2` (one document per line) is needed here, but the readlines() cell above is
# commented out, so read the lines again:
data2 = open("scripts/script.txt", "r").readlines()
tf = TfidfVectorizer(analyzer='word', ngram_range=(1,3), min_df = 0, stop_words = 'english')
tfidf_matrix = tf.fit_transform(data2)
feature_names = tf.get_feature_names()
tfidf_matrix.shape, len(feature_names)
dense = tfidf_matrix.todense()
episode = dense[0].tolist()[0]
phrase_scores = [pair for pair in zip(range(0, len(episode)), episode) if pair[1] > 0]
sorted_phrase_scores = sorted(phrase_scores, key=lambda t: t[1] * -1)
for phrase, score in [(feature_names[word_id], score) for (word_id, score) in sorted_phrase_scores][:20]:
print('{0: <20} {1}'.format(phrase, score))
def freq(word, tokens):
return tokens.count(word)
#Compute the frequency for each term.
vocabulary = []
docs = {}
all_tips = []
# NOTE: `venue`, `tokenizer`, `bigrams`, `trigrams` and `stopwords` are not defined in
# this notebook; this block was adapted from a venue-tips example and is kept as a
# sketch of per-document term-frequency counting.
for tip in (venue.tips()):
tokens = tokenizer.tokenize(tip.text)
bi_tokens = bigrams(tokens)
tri_tokens = trigrams(tokens)
tokens = [token.lower() for token in tokens if len(token) > 2]
tokens = [token for token in tokens if token not in stopwords]
bi_tokens = [' '.join(token).lower() for token in bi_tokens]
bi_tokens = [token for token in bi_tokens if token not in stopwords]
tri_tokens = [' '.join(token).lower() for token in tri_tokens]
tri_tokens = [token for token in tri_tokens if token not in stopwords]
final_tokens = []
final_tokens.extend(tokens)
final_tokens.extend(bi_tokens)
final_tokens.extend(tri_tokens)
docs[tip.text] = {'freq': {}}
for token in final_tokens:
docs[tip.text]['freq'][token] = freq(token, final_tokens)
print(docs)
from rake_nltk import Rake
r = Rake() # Uses stopwords for english from NLTK, and all puntuation characters.
# If you want to provide your own set of stop words and punctuations to
# r = Rake(<list of stopwords>, <string of puntuations to ignore>
r.extract_keywords_from_text(data)
r.get_ranked_phrases_with_scores() # To get keyword phrases ranked highest to lowest.
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the lending club dataset
Step2: Like the previous assignment, we reassign the labels to have +1 for a safe loan, and -1 for a risky (bad) loan.
Step3: Unlike the previous assignment where we used several features, in this assignment, we will just be using 4 categorical features.
Step4: Let's explore what the dataset looks like.
Step5: Subsample dataset to make sure classes are balanced
Step6: Note
Step7: Let's see what the feature columns look like now
Step8: Let's explore what one of these columns looks like
Step9: This column is set to 1 if the loan grade is A and 0 otherwise.
Step10: Train-test split
Step11: Decision tree implementation
Step12: Because there are several steps in this assignment, we have introduced some stopping points where you can check your code and make sure it is correct before proceeding. To test your intermediate_node_num_mistakes function, run the following code until you get a Test passed!, then you should proceed. Otherwise, you should spend some time figuring out where things went wrong.
Step13: Function to pick best feature to split on
Step14: To test your best_splitting_feature function, run the following code
Step15: Building the tree
Step16: We have provided a function that learns the decision tree recursively and implements 3 stopping conditions
Step17: Here is a recursive function to count the nodes in your tree
Step18: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
Step19: Build the tree!
Step20: Making predictions with a decision tree
Step21: Now, let's consider the first example of the test set and see what my_decision_tree model predicts for this data point.
Step22: Let's add some annotations to our prediction to see what the prediction path was that lead to this predicted class
Step23: Quiz question
Step24: Now, let's use this function to evaluate the classification error on the test set.
Step25: Quiz Question
Step26: Quiz Question
Step27: Exploring the left subtree of the left subtree
Step28: Quiz question
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
loans = graphlab.SFrame('lending-club-data.gl/')
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.remove_column('bad_loans')
features = ['grade', # grade of the loan
'term', # the term of the loan
'home_ownership', # home_ownership status: own, mortgage or rent
'emp_length', # number of years of employment
]
target = 'safe_loans'
loans = loans[features + [target]]
loans
safe_loans_raw = loans[loans[target] == 1]
risky_loans_raw = loans[loans[target] == -1]
# Since there are less risky loans than safe loans, find the ratio of the sizes
# and use that percentage to undersample the safe loans.
percentage = len(risky_loans_raw)/float(len(safe_loans_raw))
safe_loans = safe_loans_raw.sample(percentage, seed = 1)
risky_loans = risky_loans_raw
loans_data = risky_loans.append(safe_loans)
print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data))
print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data))
print "Total number of loans in our new dataset :", len(loans_data)
loans_data = risky_loans.append(safe_loans)
for feature in features:
loans_data_one_hot_encoded = loans_data[feature].apply(lambda x: {x: 1})
loans_data_unpacked = loans_data_one_hot_encoded.unpack(column_name_prefix=feature)
# Change None's to 0's
for column in loans_data_unpacked.column_names():
loans_data_unpacked[column] = loans_data_unpacked[column].fillna(0)
loans_data.remove_column(feature)
loans_data.add_columns(loans_data_unpacked)
features = loans_data.column_names()
features.remove('safe_loans') # Remove the response variable
features
print "Number of features (after binarizing categorical variables) = %s" % len(features)
loans_data['grade.A']
print "Total number of grade.A loans : %s" % loans_data['grade.A'].sum()
print "Expexted answer : 6422"
train_data, test_data = loans_data.random_split(.8, seed=1)
def intermediate_node_num_mistakes(labels_in_node):
# Corner case: If labels_in_node is empty, return 0
if len(labels_in_node) == 0:
return 0
# Count the number of 1's (safe loans)
## YOUR CODE HERE
nr_of_safe_loans = 0
for e in labels_in_node:
if e == 1:
nr_of_safe_loans += 1
# Count the number of -1's (risky loans)
## YOUR CODE HERE
nr_of_risky_loans = 0
for e in labels_in_node:
if e == -1:
nr_of_risky_loans += 1
# Return the number of mistakes that the majority classifier makes.
## YOUR CODE HERE
if nr_of_safe_loans > nr_of_risky_loans:
return nr_of_risky_loans
return nr_of_safe_loans
# Test case 1
example_labels = graphlab.SArray([-1, -1, 1, 1, 1])
if intermediate_node_num_mistakes(example_labels) == 2:
print 'Test passed!'
else:
print 'Test 1 failed... try again!'
# Test case 2
example_labels = graphlab.SArray([-1, -1, 1, 1, 1, 1, 1])
if intermediate_node_num_mistakes(example_labels) == 2:
print 'Test passed!'
else:
print 'Test 2 failed... try again!'
# Test case 3
example_labels = graphlab.SArray([-1, -1, -1, -1, -1, 1, 1])
if intermediate_node_num_mistakes(example_labels) == 2:
print 'Test passed!'
else:
print 'Test 3 failed... try again!'
def best_splitting_feature(data, features, target):
best_feature = None # Keep track of the best feature
best_error = 10 # Keep track of the best error so far
# Note: Since error is always <= 1, we should intialize it with something larger than 1.
# Convert to float to make sure error gets computed correctly.
num_data_points = float(len(data))
# Loop through each feature to consider splitting on that feature
for feature in features:
# The left split will have all data points where the feature value is 0
left_split = data[data[feature] == 0]
# The right split will have all data points where the feature value is 1
## YOUR CODE HERE
right_split = data[data[feature] == 1]
# Calculate the number of misclassified examples in the left split.
# Remember that we implemented a function for this! (It was called intermediate_node_num_mistakes)
# YOUR CODE HERE
left_mistakes = intermediate_node_num_mistakes(left_split[target])
# Calculate the number of misclassified examples in the right split.
## YOUR CODE HERE
right_mistakes = intermediate_node_num_mistakes(right_split[target])
# Compute the classification error of this split.
# Error = (# of mistakes (left) + # of mistakes (right)) / (# of data points)
## YOUR CODE HERE
error = (left_mistakes + right_mistakes) / num_data_points
# If this is the best error we have found so far, store the feature as best_feature and the error as best_error
## YOUR CODE HERE
if error < best_error:
best_error = error
best_feature = feature
return best_feature # Return the best feature we found
if best_splitting_feature(train_data, features, 'safe_loans') == 'term. 36 months':
print 'Test passed!'
else:
print 'Test failed... try again!'
def create_leaf(target_values):
# Create a leaf node
leaf = {'splitting_feature' : None,
'left' : None,
'right' : None,
'is_leaf': True} ## YOUR CODE HERE
# Count the number of data points that are +1 and -1 in this node.
num_ones = len(target_values[target_values == +1])
num_minus_ones = len(target_values[target_values == -1])
# For the leaf node, set the prediction to be the majority class.
# Store the predicted class (1 or -1) in leaf['prediction']
if num_ones > num_minus_ones:
leaf['prediction'] = 1 ## YOUR CODE HERE
else:
leaf['prediction'] = -1 ## YOUR CODE HERE
# Return the leaf node
return leaf
def decision_tree_create(data, features, target, current_depth = 0, max_depth = 10):
remaining_features = features[:] # Make a copy of the features.
target_values = data[target]
print "--------------------------------------------------------------------"
print "Subtree, depth = %s (%s data points)." % (current_depth, len(target_values))
# Stopping condition 1
# (Check if there are mistakes at current node.
# Recall you wrote a function intermediate_node_num_mistakes to compute this.)
if intermediate_node_num_mistakes(target_values) == 0: ## YOUR CODE HERE
print "Stopping condition 1 reached."
# If not mistakes at current node, make current node a leaf node
return create_leaf(target_values)
# Stopping condition 2 (check if there are remaining features to consider splitting on)
if not remaining_features: ## YOUR CODE HERE
print "Stopping condition 2 reached."
# If there are no remaining features to consider, make current node a leaf node
return create_leaf(target_values)
# Additional stopping condition (limit tree depth)
if current_depth >= max_depth: ## YOUR CODE HERE
print "Reached maximum depth. Stopping for now."
# If the max tree depth has been reached, make current node a leaf node
return create_leaf(target_values)
# Find the best splitting feature (recall the function best_splitting_feature implemented above)
## YOUR CODE HERE
splitting_feature = best_splitting_feature(data, remaining_features, target)
# Split on the best feature that we found.
left_split = data[data[splitting_feature] == 0]
right_split = data[data[splitting_feature] == 1] ## YOUR CODE HERE
remaining_features.remove(splitting_feature)
print "Split on feature %s. (%s, %s)" % (\
splitting_feature, len(left_split), len(right_split))
# Create a leaf node if the split is "perfect"
if len(left_split) == len(data):
print "Creating leaf node."
return create_leaf(left_split[target])
if len(right_split) == len(data):
print "Creating leaf node."
## YOUR CODE HERE
return create_leaf(right_split[target])
# Repeat (recurse) on left and right subtrees
left_tree = decision_tree_create(left_split, remaining_features, target, current_depth + 1, max_depth)
## YOUR CODE HERE
right_tree = decision_tree_create(right_split, remaining_features, target, current_depth + 1, max_depth)
return {'is_leaf' : False,
'prediction' : None,
'splitting_feature': splitting_feature,
'left' : left_tree,
'right' : right_tree}
def count_nodes(tree):
if tree['is_leaf']:
return 1
return 1 + count_nodes(tree['left']) + count_nodes(tree['right'])
small_data_decision_tree = decision_tree_create(train_data, features, 'safe_loans', max_depth = 3)
if count_nodes(small_data_decision_tree) == 13:
print 'Test passed!'
else:
print 'Test failed... try again!'
print 'Number of nodes found :', count_nodes(small_data_decision_tree)
print 'Number of nodes that should be there : 13'
# Make sure to cap the depth at 6 by using max_depth = 6
my_decision_tree = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6)
def classify(tree, x, annotate = False):
# if the node is a leaf node.
if tree['is_leaf']:
if annotate:
print "At leaf, predicting %s" % tree['prediction']
return tree['prediction']
else:
# split on feature.
split_feature_value = x[tree['splitting_feature']]
if annotate:
print "Split on %s = %s" % (tree['splitting_feature'], split_feature_value)
if split_feature_value == 0:
return classify(tree['left'], x, annotate)
else:
return classify(tree['right'], x, annotate)
### YOUR CODE HERE
test_data[0]
print 'Predicted class: %s ' % classify(my_decision_tree, test_data[0])
classify(my_decision_tree, test_data[0], annotate=True)
def evaluate_classification_error(tree, data):
# Apply the classify(tree, x) to each row in your data
prediction = data.apply(lambda x: classify(tree, x))
# Once you've made the predictions, calculate the classification error and return it
## YOUR CODE HERE
nr_of_mistakes = data[data[target] != prediction]
return len(nr_of_mistakes) / float(len(data))
evaluate_classification_error(my_decision_tree, test_data)
def print_stump(tree, name = 'root'):
split_name = tree['splitting_feature'] # split_name is something like 'term. 36 months'
if split_name is None:
print "(leaf, label: %s)" % tree['prediction']
return None
split_feature, split_value = split_name.split('.')
print ' %s' % name
print ' |---------------|----------------|'
print ' | |'
print ' | |'
print ' | |'
print ' [{0} == 0] [{0} == 1] '.format(split_name)
print ' | |'
print ' | |'
print ' | |'
print ' (%s) (%s)' \
% (('leaf, label: ' + str(tree['left']['prediction']) if tree['left']['is_leaf'] else 'subtree'),
('leaf, label: ' + str(tree['right']['prediction']) if tree['right']['is_leaf'] else 'subtree'))
print_stump(my_decision_tree)
print_stump(my_decision_tree['left'], my_decision_tree['splitting_feature'])
print_stump(my_decision_tree['left']['left']['left'], my_decision_tree['left']['left']['splitting_feature'])
print_stump(my_decision_tree['left']['left']['left'], my_decision_tree['left']['splitting_feature'])
print_stump(my_decision_tree['right'], my_decision_tree['right']['splitting_feature'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Vectorised
Step2: Creating an array from a list
Step3: The core class of NumPy is the ndarray (homogeneous n-dimensional array).
Step4: Functions for creating arrays
Step5: linspace(start, stop, num=50, endpoint=True, retstep=False, dtype=None)
Step6: Filled with zeros
Step7: By default, the dtype of the created array is float64 but other dtypes can be used
Step8: Filled with ones
Step9: Create array with random numbers
Step10: Grid generation
Step11: Transpose arrays
Step12: Array attributes
Step13: ndarray.ndim
Step14: ndarray.shape
Step15: This is a tuple of integers indicating the size of the array in each dimension. For a matrix with n rows and m columns, shape will be (n,m). The length of the shape tuple is therefore the rank, or number of dimensions, ndim.
Step16: Note that size is not equal to len(). The latter returns the length of the first dimension.
Step17: Statistical methods of arrays
Step18: Operations over a given axis
Step19: Vectorisation
Step20: Vectorization is generally faster than a for loop.
Step21: Using indexes
Step22: Exercise 1
Step23: Can you guess what the following slices are equal to? Print them to check your understanding.
Step24: Fancy indexing
Step25: Exercise 2
Step26: Suppose you want to return an array result, which has the squared value when an element in array a is greater than -90 and less than -40, and is 1 otherwise.
Step27: But a more less verbose and quicker approach would be
Step28: A one-liner using np.where
Step29: Masked arrays - how to handle (propagating) missing values
Step30: Often, a task is to mask array depending on a criterion.
Step31: Exercise 3
Step32: Shape manipulation
Step33: Add a dimension
Step34: Exercise 4
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
np.lookfor('weighted average')
a1d = np.array([3, 4, 5, 6])
a1d
a2d = np.array([[10., 20, 30], [9, 8, 7]])
a2d
print( type( a1d[0] ) )
print( type( a2d[0,0] ) )
type(a1d)
try:
a = np.array(1,2,3,4) # WRONG, only 2 non-keyword arguments accepted
except (ValueError, TypeError) as err: # newer NumPy raises TypeError here; older versions raised ValueError
print(err)
a = np.array([1,2,3,4]) # RIGHT
np.ndarray([1,2,3,4]) # ndarray is is a low level method. Use np.array() instead
np.arange(1, 9, 2)
# for integers, np.arange is same as range but returns an array insted of a list
np.array( range(1,9,2) )
np.linspace(0, 1, 10) # start, end, num-points
np.zeros((2, 3))
np.zeros((2,2),dtype=int)
np.ones((2, 3))
np.random.rand(4) # uniform in [0, 1]
np.random.normal(0,1,4) # Gaussian (mean,std dev, num samples)
np.random.gamma(1,1,(2,2)) # Gamma (shape, scale , num samples)
x = np.linspace(-5, 5, 3)
y = np.linspace(10, 40, 4)
x2d, y2d = np.meshgrid(x, y)
print(x2d)
print(y2d)
print(np.transpose(y2d)) # or equivalentely
print(y2d.transpose()) # using the method of y2d
print(y2d.T) # using the property of y2d
a2d
a2d.ndim
a2d.shape
a2d.size
len(a2d)
print('array a1d :',a1d)
print('Minimum and maximum :', a1d.min(), a1d.max())
print('Sum and product of all elements :', a1d.sum(), a1d.prod())
print('Mean and standard deviation :', a1d.mean(), a1d.std())
print(a2d)
print('sum :',a2d.sum())
print('sum :',a2d.sum(axis=0))
print('sum :',a2d.sum(axis=1))
np.exp(a/100.)/a
# Non-vectorised
r=np.zeros(a.shape) # create empty array for results
for i in range(len(a)):
r[i] = np.exp(a[i]/100.)/a[i]
r
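# A hedged timing comparison (the array size is an arbitrary choice): the vectorised
# expression is usually much faster than the explicit Python loop above.
import timeit
big = np.linspace(1, 10, 100000)
def loop_version():
    out = np.zeros(big.shape)
    for i in range(len(big)):
        out[i] = np.exp(big[i] / 100.) / big[i]
    return out
print("loop       :", timeit.timeit(loop_version, number=10))
print("vectorised :", timeit.timeit(lambda: np.exp(big / 100.) / big, number=10))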
a = np.arange(10, 100, 10)
a
a[2:9:3] # [start:end:step]
a[:3] # last is not included
a[-2] # negative index counts from the end
x = np.random.rand(6)
x
xs = np.sort(x)
xs
xs[1:] - xs[:-1]
# [[2, 3.2, 5.5, -6.4, -2.2, 2.4],
# [1, 22, 4, 0.1, 5.3, -9],
# [3, 1, 2.1, 21, 1.1, -2]]
# a[:, 3]
# a[1:4, 0:4]
# a[1:, 2]
a = np.random.randint(1, 100, 6) # array of 6 random integers between 1 and 100
a
mask = ( a % 3 == 0 ) # Where divisible by 3 (% is the modulus operator).
mask
a[mask]
a = np.arange(-100, 0, 5).reshape(4, 5)
a
result = np.zeros(a.shape, dtype=a.dtype)
for i in range(a.shape[0]):
for j in range(a.shape[1]):
if a[i, j] > -90 and a[i, j] < -40:
result[i, j] = a[i, j]**2
else:
result[i, j] = 1
result
condition = (a > -90) & (a < -40)
condition
result[condition] = a[condition]**2
result[~condition] = 1
print(result)
result = np.where(condition, a**2, 1)
print(result)
a = np.ma.masked_array(data=[1, 2, 3],
mask=[True, True, False],
fill_value=-999)
a
a = np.linspace(1, 15, 15)
masked_a = np.ma.masked_greater_equal(a, 11)
masked_a
# Your code:
# arr =
# Your code:
# condition =
# masked_arr = np.ma.masked_where(condition, arr)
# print(masked_arr)
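# A possible solution sketch for the stub above (the exercise criterion is not spelled
# out here, so masking negative values is an arbitrary choice):
arr = np.array([3, -1, 4, -1, 5, -9, 2, 6])
condition = arr < 0
masked_arr = np.ma.masked_where(condition, arr)
print(masked_arr)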
a = np.array([[1, 2, 3], [4, 5, 6]])
print('{} <-- array'.format(a))
print('{} <-- its shape'.format(a.shape))
a.flatten()
a.repeat(4)
a.reshape((3, 2))
print('Old shape: {}'.format(a.shape))
print('New shape: {}'.format(a.reshape((3, 2)).shape))
a[..., np.newaxis].shape
e4 = np.arange(0.,2.5,.1)
e4
e4.reshape([5,5])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.DataFrame({'codes':[[71020], [77085], [36415], [99213, 99287], [99233, 99233, 99233]]})
def g(df):
return df.codes.apply(pd.Series).add_prefix('code_')
result = g(df.copy())
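# Quick check of the expansion: each list in `codes` becomes columns code_0..code_2,
# padded with NaN where a row has fewer codes.
print(result)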
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What are the earliest two films listed in the titles dataframe?
Step2: How many movies have the title "Hamlet"?
Step3: How many movies are titled "North by Northwest"?
Step4: When was the first movie titled "Hamlet" made?
Step5: List all of the "Treasure Island" movies from earliest to most recent.
Step6: How many movies were made in the year 1950?
Step7: How many movies were made in the year 1960?
Step8: How many movies were made from 1950 through 1959?
Step9: In what years has a movie titled "Batman" been released?
|
<ASSISTANT_TASK:>
Python Code:
titles.tail()
len(titles)
titles.sort_values('year', ascending=True).head(2)
titles[titles['title'].str.contains('Hamlet')].sort_values('year')
len(titles[titles.title == 'North by Northwest'])
titles[titles.title == 'Hamlet'].sort_values('year').head(1)
titles[titles.title == 'Treasure Island'].sort_values('year')
len(titles[titles.year == 1950])
movies_of_1960 = titles[titles.year == 1960]
len(movies_of_1960)
moviesOf1950And1959 = titles[(titles.year >= 1950) & (titles.year <= 1959)]
len(moviesOf1950And1959)
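# The last question in the description (the years in which a movie titled "Batman"
# was released) has no matching cell above; a straightforward query answers it:
titles[titles.title == 'Batman'].sort_values('year')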
titles.year.value_counts().sort_index().plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: De Pie
Step3: The data
Step4: Scrape JobsAggregator
Step5: Load OES Data
Step6: Lightning-viz plots for inline D3.js in IPython
|
<ASSISTANT_TASK:>
Python Code:
###
# There is now a 'SparkContext' instance available as the named variable 'sc'
# and there is a HiveContext instance (for SQL-like queries) available as 'sqlCtx'
#
## Check that this simple code runs without error:
sc.parallelize([1,2,3,4,5]).take(2)
###
# Inspect the SparkContext [sc] or the HiveContext [sqlCtx]
#help(sc)
help(sqlCtx)
from random import random
from operator import add
def monte_carlo(_):
"""4 * (area of 1 quadrant of a unit circle) = pi"""
x = random()
y = random()
return 4.0 if pow(x, 2) + pow(y, 2) < 1 else 0
N = 1000
parts = 2
sc.parallelize(xrange(N), parts).map(monte_carlo).reduce(add) / N
### ------------------------------------------------- AMAZON ----- ###
# ⇒ These files identify columns that will be common to the job
# board data and the BLS datasets.
#
# To use S3 buckets add `--copy-aws-credentials` to the ec2 launch command.
#
# Create a Resilient Distributed Dataset with the
# list of occupations in the BLS dataset:
# https://s3.amazonaws.com/tts-wwmm/occupations.txt
from pyspark.sql import Row
# Load the occupations lookups and convert each line to a Row.
lines = sc.textFile('s3n://tts-wwmm/occupations.txt')
Occupation = Row('OCC_CODE', 'OCC_TITLE')
occ = lines.map(lambda l: Occupation( *l.split('\t') ))
# Do the same for the areas lookups.
lines = sc.textFile('s3n://tts-wwmm/areas.txt')
Area = Row('AREA', 'AREA_NAME')
area = lines.map(lambda l: Area( *l.split('\t') ))
area_df = sqlCtx.createDataFrame(area)
area_df.registerTempTable('area')
# Just to show how sqlCtx.sql works
states = sqlCtx.sql("SELECT AREA_NAME, AREA FROM area WHERE AREA RLIKE '^S.*'")
print states.take(2)
# Same as above, but result is another Resilient Distributed Dataset
states = area.filter(lambda a: a.AREA.startswith('S'))
# Create every combination of occupation, state
occ_by_states = occ.cartesian(states)
# Broadcast makes a static copy of the variable available to all nodes
#broadcast_state_names = sc.broadcast(broadcast_state_names)
#
#print broadcast_state_names.take(2)
### ----------------------------------------- JOBS_AGGREGATOR ----- ###
#
# Make `jobs_aggregator_scraper.py` available on all nodes
# and iteratively get the top 5 jobs from each poster in each state for
# each occupation via JobsAggregator.com
sc.addPyFile('s3n://tts-wwmm/jobsaggregator_scraper.py')
def scrape_occupation(occ_state):
from jobsaggregator_scraper import scrape
occ_row, state_entry = occ_state
return [Row(**job)
for job in scrape(state=state_entry[1], occupation=occ_row.OCC_TITLE)]
jobs = occ_by_states.flatMap(scrape_occupation).distinct()
jobs_df = sqlCtx.inferSchema(jobs)
jobs_df.registerTempTable('jobs')
jobs_df.toJSON().saveAsTextFile('wwmm/jobsaggregator_json')
jobs.saveAsTextFile('wwmm/jobsaggregator_df')
jobs.take(2)
### -------------------------------------------- BLS OES DATA ----- ###
#
# The OES data were loaded to a mongolabs database. Read the URI
# (which has a user name and password) from an environment variable
# and create a connection. The pymongo API is very simple.
#
# Datasets are stored one entry per Occupation ID (OCC_ID)
# per area (00-0000)
from pymongo import MongoClient
import os
MONGO_URI = os.getenv('MONGO_URI')
client = MongoClient(MONGO_URI) # connection
oe = client.oe # database
# Confirm we can get data from each collection
oo = oe['nat'].find(filter={'OCC_CODE':'00-0000'},
projection={'_id':False,
'OCC_CODE':True, 'OCC_TITLE':True,
'ANNUAL':{'$slice':-5}, 'OVERALL':{'$slice':-2}})
for o in oo:
print o
# Which OCC contains software-type people?
occ_df = sqlCtx.createDataFrame(occ)
occ_df.registerTempTable('occ')
computer_jobs = sqlCtx.sql((
"SELECT OCC_CODE, OCC_TITLE "
"FROM occ "
"WHERE OCC_TITLE RLIKE 'omputer'"
)).collect()
for row in computer_jobs:
print "{OCC_CODE}: {OCC_TITLE}".format(**row.asDict())
# Want Chicago's area code
chicago = sqlCtx.sql((
"SELECT AREA, AREA_NAME "
"FROM area "
"WHERE AREA_NAME RLIKE 'icago' or AREA_NAME RLIKE 'llinois'"
)).collect()
print "\n".join("{}: {}".format(c.AREA, c.AREA_NAME) for c in chicago)
# Now get the data:
## -------------------------------------- National
desired_data = {'_id':False,
'ANNUAL':{'$slice':-5}, 'OVERALL':{'$slice':-2}}
nat = oe['nat'].find(filter={'OCC_CODE':'15-1131'},
projection=desired_data)
nat = [n for n in nat]
len(nat)
## -------------------------------------- State
il = oe['st'].find(filter={'OCC_CODE':'15-1131',
'AREA':'17'},
projection=desired_data)
il = [i for i in il]
len(il)
## -------------------------------------- Municipal Areas
## The lookup for chicago didn't work...
## ... so I am looking through all of the municipal areas...
chi = oe['ma'].find(filter={'OCC_CODE':'15-1131'},
projection=desired_data)
chi = [c for c in chi if 'IL' in c['AREA_NAME']]
len(chi)
# Get the mean
import tablib
nat_annual = tablib.Dataset()
nat_annual.dict = nat[0]['ANNUAL']
il_annual = tablib.Dataset()
il_annual.dict = il[0]['ANNUAL']
chi_annual = tablib.Dataset()
chi_annual.dict = chi[1]['ANNUAL']
from lightning import Lightning
lgn = Lightning(host="https://tts-lightning.herokuapp.com",
ipython=True,
auth=("tanya@tickel.net", "password"))
# Median salaries
lgn.line(series=[nat_annual['pct50'], il_annual['pct50'], chi_annual['pct50']],
index=nat_annual['YEAR'],
color=[[0,0,0],[255,0,0],[0,155,0]],
size=[5,2,2],
xaxis="Year",
yaxis="Median annual salary")
# How about regionally?
all_states = oe['st'].find(filter={'OCC_CODE':'15-1131'},
projection={'$_id': False,
'OVERALL':{'$slice':-2}})
all_states = [a for a in all_states]
len(all_states)
state_abbrs = [a['ST'] for a in all_states]
mean_salaries = [a['OVERALL'][0]['A_MEAN'] for a in all_states]
num_employed = [a['OVERALL'][0]['TOT_EMP'] for a in all_states]
# Mean salaries
print "max average salary:", max(mean_salaries)
print "Illinois:", mean_salaries[state_abbrs.index('IL')]
lgn.map(regions=state_abbrs, values=mean_salaries)
# Employees
print "Most programmers:", max(num_employed)
print "Illinois:", num_employed[state_abbrs.index('IL')]
lgn.map(regions=state_abbrs, values=num_employed)
salaries = tablib.Dataset(*zip(state_abbrs, mean_salaries),
headers=('State', 'Salary'))
employees= tablib.Dataset(*zip(state_abbrs, num_employed),
headers=('State', 'Employees'))
salaries = salaries.sort("Salary", reverse=True)
print "\n".join("{s[0]}: {s[1]:0,.0f}".format(s=s) for s in salaries[:5])
employees = employees.sort("Employees", reverse=True)
print "\n".join("{e[0]}: {e[1]:0,.0f}".format(e=e) for e in employees[:5])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import torch
softmax_output = load_data()
def solve(softmax_output):
# def solve(softmax_output):
### BEGIN SOLUTION
y = torch.argmin(softmax_output, dim=1).detach()
### END SOLUTION
# return y
# y = solve(softmax_output)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exponential Distribution
Step2: Discrete Distributions
Step3: Geometric Distribution
Step4: Discrete Uniform Distribution
|
<ASSISTANT_TASK:>
Python Code:
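# funcc: inverse-transform sampling for a Cauchy(x0, g) distribution via the quantile function x0 + g*tan(pi*(U - 1/2))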
def funcc(x,xo,g):
pr=[]
pi=[]
pr=gena(x)
for i in range(x):
pi.append(xo+g*math.tan(math.pi*(pr[i]-(1/2))))
return pi
fcc=funcc(x,0,1)
for i in range(len(fcc)):
print "{0:.2f}".format(fcc[i])
lambda_=1
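# funexp evaluates the exponential density lambda*exp(-lambda*x) at the integer points 0..x-1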
def funexp(l,x):
lmda=[]
for i in range(x):
lmda.append(l*math.exp(-l*i))
return lmda
fprobe=funexp(lambda_,x)
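# funcexp: inverse-transform sampling for the exponential distribution, x = -ln(1-U)/lambda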
def funcexp(x,l):
p=[]
pi=[]
p=gena(x)
for i in range(x):
pi.append(-math.log(1-p[i])/l)
return pi
fcue=funcexp(x,lambda_)
for i in range(len(fcue)):
print "{0:.2f}".format(fcue[i])
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.integrate import quad
import math
import numpy as np
import scipy as sp
x=10
s1=17
s2=27
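# gen: one multiplicative-congruential step (multiplier 75, modulus 2**16 + 1) that maps a seed to a single uniform in (0,1)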
def gen(x):
x0=x*10
x0=((75*x0))%((2**16)+1)
Ux=float(x0)/((2**16)+1)
return Ux
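# gena: mixed linear congruential generator (x -> (5x + 3) mod 607) returning a list of N uniforms in (0,1)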
def gena(N):
Ux=[]
x=0
x0=7
while x<N:
x0=((5*x0)+3)%607
x=x+1
Ux.append(float(x0)/607)
return Ux
p=gen(s1)
# binomial probability mass function
def funpb(x,p):
y=[]
for i in range(x):
y.append((math.factorial(x)/(math.factorial(i)*math.factorial(x-i)))*(p**i)*(1-p)**(x-i))
return y
fpb=funpb(x,p)
# binomial cumulative distribution
def funab():
y=[]
y.append(fpb[0])
for i in range(x-1):
y.append(fpb[i+1]+y[i])
return y
fab=funab()
plt.plot(fab)
# inverse (quantile) lookup for the binomial
def fib(x):
p=[]
p=gena(x)
pi=[]
for i in range(x):
for j in range(len(fab)):
if p[i]<fab[j]:
pi.append(j)
break
return pi
finb=fib(x)
print finb
p=gen(s2)
def funpg(x,p):
y=[]
for i in range(x+1):
y.append(p*(1-p)**(i))
return y
fpg=funpg(x,p)
def funag():
y=[]
y.append(fpg[0])
for i in range(x-1):
y.append(fpg[i+1]+y[i])
return y
fag=funag()
plt.plot(fag)
# inverse (quantile) lookup for the geometric
def fig(x):
p=[]
p=gena(x)
pi=[]
for i in range(x):
for j in range(len(fag)):
if p[i]<fag[j]:
pi.append(j)
break
return pi
fing=fig(x)
print fing
# discrete uniform probability mass function
def funpun(x):
fpu=[]
for i in range(x):
fpu.append(1/float(x))
return fpu
fpu=funpun(x)
def funaun(x):
facu=[]
facu.append(fpu[0])
for i in range(x-1):
facu.append(fpu[i+1]+facu[i])
return facu
facu=funaun(x)
plt.plot(facu)
# inverse (quantile) lookup for the discrete uniform
def fiun(x):
p=[]
p=gena(x)
pi=[]
for i in range(x):
for j in range(len(facu)):
if p[i]<facu[j]:
pi.append(j)
break
return pi
fing=fiun(x)
print fing
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Test Connection to Bedrock Server
Step2: Check for Spreadsheet Opal
Step3: Check for STAN GLM Opal
Step4: Check for select-from-dataframe Opal
Step5: Check for summarize Opal
Step6: Step 2
Step7: Now Upload the source file to the Bedrock Server
Step8: Check available data sources for the CSV file
Step9: Create a Bedrock Matrix from the CSV Source
Step10: Look at basic statistics on the source data
Step11: Step 3
Step12: Check that Matrix is filtered
Step13: Step 4
Step14: Visualize the output of the analysis
|
<ASSISTANT_TASK:>
Python Code:
from bedrock.client.client import BedrockAPI
import requests
import pandas
import pprint
SERVER = "http://localhost:81/"
api = BedrockAPI(SERVER)
resp = api.ingest("opals.spreadsheet.Spreadsheet.Spreadsheet")
if resp.json():
print("Spreadsheet Opal Installed!")
else:
print("Spreadsheet Opal Not Installed!")
resp = api.analytic('opals.stan.Stan.Stan_GLM')
if resp.json():
print("Stan_GLM Opal Installed!")
else:
print("Stan_GLM Opal Not Installed!")
resp = api.analytic('opals.select-from-dataframe.SelectByCondition.SelectByCondition')
if resp.json():
print("Select-from-dataframe Opal Installed!")
else:
print("Select-from-dataframe Opal Not Installed!")
resp = api.analytic('opals.summarize.Summarize.Summarize')
if resp.json():
print("Summarize Opal Installed!")
else:
print("Summarize Opal Not Installed!")
filepath = 'Rand2011PNAS_cooperation_data.csv'
datafile = pandas.read_csv('Rand2011PNAS_cooperation_data.csv')
datafile.head(10)
ingest_id = 'opals.spreadsheet.Spreadsheet.Spreadsheet'
resp = api.put_source('Rand2011', ingest_id, 'default', {'file': open(filepath, "rb")})
if resp.status_code == 201:
source_id = resp.json()['src_id']
print('Source {0} successfully uploaded'.format(filepath))
else:
try:
print("Error in Upload: {}".format(resp.json()['msg']))
except Exception:
pass
try:
source_id = resp.json()['src_id']
print("Using existing source. If this is not the desired behavior, upload with a different name.")
except Exception:
print("No existing source id provided")
available_sources = api.list("dataloader", "sources").json()
s = next(filter(lambda source: source['src_id'] == source_id, available_sources),'None')
if s != 'None':
pp = pprint.PrettyPrinter()
pp.pprint(s)
else:
print("Could not find source")
resp = api.create_matrix(source_id, 'rand_mtx')
mtx = resp[0]
matrix_id = mtx['id']
print(mtx)
resp
analytic_id = "opals.summarize.Summarize.Summarize"
inputData = {
'matrix.csv': mtx,
'features.txt': mtx
}
paramsData = []
summary_mtx = api.run_analytic(analytic_id, mtx, 'rand_mtx_summary', input_data=inputData, parameter_data=paramsData)
output = api.download_results_matrix(matrix_id, summary_mtx['id'], 'matrix.csv')
output
analytic_id = "opals.select-from-dataframe.SelectByCondition.SelectByCondition"
inputData = {
'matrix.csv': mtx,
'features.txt': mtx
}
paramsData = [
{"attrname":"colname","value":"condition"},
{"attrname":"comparator","value":"=="},
{"attrname":"value","value":"Static"}
]
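# keep only rows where condition == 'Static'; this filtered matrix feeds the Stan GLM below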
filtered_mtx = api.run_analytic(analytic_id, mtx, 'rand_static_only', input_data=inputData, parameter_data=paramsData)
filtered_mtx
output = api.download_results_matrix('rand_mtx', 'rand_static_only', 'matrix.csv', remote_header_file='features.txt')
output
analytic_id = "opals.stan.Stan.Stan_GLM"
inputData = {
'matrix.csv': filtered_mtx,
'features.txt': filtered_mtx
}
paramsData = [
{"attrname":"formula","value":"decision0d1c ~ round_num"},
{"attrname":"family","value":'logit'},
{"attrname":"chains","value":"3"},
{"attrname":"iter","value":"3000"}
]
result_mtx = api.run_analytic(analytic_id, mtx, 'rand_bayesian1', input_data=inputData, parameter_data=paramsData)
result_mtx
summary_table = api.download_results_matrix('rand_mtx', 'rand_bayesian1', 'matrix.csv')
summary_table
prior_summary = api.download_results_matrix('rand_mtx', 'rand_bayesian1', 'prior_summary.txt')
print(prior_summary)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then load in our data. We're actually going to define a generator to load our data in on-demand; this way we'll avoid having all our data sitting around in memory when we don't need it.
Step2: Before we go any further, we need to map the words in our corpus to numbers, so that we have a consistent way of referring to them. First we'll fit a tokenizer to the corpus
Step3: Now the tokenizer knows what tokens (words) are in our corpus and has mapped them to numbers. The keras tokenizer also indexes them in order of frequency (most common first, i.e. index 1 is usually a word like "the"), which will come in handy later.
Step4: Now let's define the model. When I described the skip-gram task, I mentioned two inputs
Step5: Finally, we can train the model.
Step6: With any luck, the model should finish training without a hitch.
Step7: We also want to set aside the tokenizer's word index for later use (so we can get indices for words) and also create a reverse word index (so we can get words from indices)
Step8: That's it for learning the embeddings. Now we can try using them.
Step9: Then we can define a function to get a most similar word for an input word
Step10: Now let's give it a try (you may get different results)
Step11: For the most part, we seem to be getting related words!
Step12: t-SNE
Step13: And now let's plot it out
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from keras.models import Sequential
from keras.layers.embeddings import Embedding
from keras.layers import Flatten, Activation, Merge
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import skipgrams, make_sampling_table
from glob import glob
text_files = glob('../data/sotu/*.txt')
def text_generator():
for path in text_files:
with open(path, 'r') as f:
yield f.read()
len(text_files)
# our corpus is small enough where we
# don't need to worry about this, but good practice
max_vocab_size = 50000
# `filters` specify what characters to get rid of
tokenizer = Tokenizer(nb_words=max_vocab_size,
filters='!"#$%&()*+,-./:;<=>?@[\\]^_{|}~\t\n\'`“”–')
# fit the tokenizer
tokenizer.fit_on_texts(text_generator())
# we also want to keep track of the actual vocab size
# we'll need this later
# note: we add one because `0` is a reserved index in keras' tokenizer
vocab_size = len(tokenizer.word_index) + 1
embedding_dim = 256
pivot_model = Sequential()
pivot_model.add(Embedding(vocab_size, embedding_dim, input_length=1))
context_model = Sequential()
context_model.add(Embedding(vocab_size, embedding_dim, input_length=1))
# merge the pivot and context models
model = Sequential()
model.add(Merge([pivot_model, context_model], mode='dot', dot_axes=2))
model.add(Flatten())
# the task as we've framed it here is
# just binary classification,
# so we want the output to be in [0,1],
# and we can use binary crossentropy as our loss
model.add(Activation('sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')
n_epochs = 60
# used to sample words (indices)
sampling_table = make_sampling_table(vocab_size)
for i in range(n_epochs):
loss = 0
for seq in tokenizer.texts_to_sequences_generator(text_generator()):
# generate skip-gram training examples
# - `couples` consists of the pivots (i.e. target words) and surrounding contexts
# - `labels` represent if the context is true or not
# - `window_size` determines how far to look between words
# - `negative_samples` specifies the ratio of negative couples
# (i.e. couples where the context is false)
# to generate with respect to the positive couples;
# i.e. `negative_samples=4` means "generate 4 times as many negative samples"
couples, labels = skipgrams(seq, vocab_size, window_size=5, negative_samples=4, sampling_table=sampling_table)
if couples:
pivot, context = zip(*couples)
pivot = np.array(pivot, dtype='int32')
context = np.array(context, dtype='int32')
labels = np.array(labels, dtype='int32')
loss += model.train_on_batch([pivot, context], labels)
print('epoch %d, %0.02f'%(i, loss))
embeddings = model.get_weights()[0]
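# embeddings is the pivot Embedding weight matrix, shape (vocab_size, embedding_dim): one row per token index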
word_index = tokenizer.word_index
reverse_word_index = {v: k for k, v in word_index.items()}
def get_embedding(word):
idx = word_index[word]
# make it 2d
return embeddings[idx][:,np.newaxis].T
from scipy.spatial.distance import cdist
ignore_n_most_common = 50
def get_closest(word):
embedding = get_embedding(word)
# get the distance from the embedding
# to every other embedding
distances = cdist(embedding, embeddings)[0]
# pair each embedding index and its distance
distances = list(enumerate(distances))
# sort from closest to furthest
distances = sorted(distances, key=lambda d: d[1])
# skip the first one; it's the target word
for idx, dist in distances[1:]:
# ignore the n most common words;
# they can get in the way.
# because the tokenizer organized indices
# from most common to least, we can just do this
if idx > ignore_n_most_common:
return reverse_word_index[idx]
print(get_closest('freedom'))
print(get_closest('justice'))
print(get_closest('america'))
print(get_closest('citizens'))
print(get_closest('citizen'))
from gensim.models.doc2vec import Word2Vec
with open('embeddings.dat', 'w') as f:
f.write('{} {}'.format(len(word_index), embedding_dim))  # header count must match the number of vectors written below
for word, idx in word_index.items():
embedding = ' '.join(str(d) for d in embeddings[idx])
f.write('\n{} {}'.format(word, embedding))
w2v = Word2Vec.load_word2vec_format('embeddings.dat', binary=False)
print(w2v.most_similar(positive=['freedom']))
from sklearn.manifold import TSNE
# `n_components` is the number of dimensions to reduce to
tsne = TSNE(n_components=2)
# apply the dimensionality reduction
# to our embeddings to get our 2d points
points = tsne.fit_transform(embeddings)
print(points)
import matplotlib
matplotlib.use('Agg') # for pngs
import matplotlib.pyplot as plt
# plot our results
# make it quite big so we can see everything
fig, ax = plt.subplots(figsize=(40, 20))
# extract x and y values separately
xs = points[:,0]
ys = points[:,1]
# plot the points
# we don't actually care about the point markers,
# just want to automatically set the bounds of the plot
ax.scatter(xs, ys, alpha=0)
# annotate each point with its word
for i, point in enumerate(points):
ax.annotate(reverse_word_index.get(i),
(xs[i], ys[i]),
fontsize=8)
plt.savefig('tsne.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Flower power
Step2: ConvNet Codes
Step3: Below I'm running images through the VGG network in batches.
Step4: Building the Classifier
Step5: Data prep
Step6: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
Step7: If you did it right, you should see these sizes for the training sets
Step9: Batches!
Step10: Training
Step11: Testing
Step12: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
|
<ASSISTANT_TASK:>
Python Code:
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
classes
classes
t = [[1,1], [2,2], [3,3], [4,4]]
np.concatenate(t)
next(enumerate(os.listdir(data_dir+'roses')), 1)
# Set the batch size higher if you can fit in in your GPU memory
batch_size = 16
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
# TODO: Build the vgg network here
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
# Image batch to pass to VGG network
images = np.concatenate(batch)
# TODO: Get the values from the relu6 layer of the VGG network
feed_dict = {input_: images}
codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'w') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
codes[:5]
from sklearn import preprocessing as pp
lb = pp.LabelBinarizer()
lb.fit(labels)
labels_vecs = lb.transform(labels) # Your one-hot encoded labels array here
from sklearn.model_selection import StratifiedShuffleSplit
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
splitter = ss.split(codes, labels_vecs)
train_index, test_index = next(splitter)
print(len(train_index), len(test_index))
train_x, train_y = codes[train_index], labels_vecs[train_index]
other_x, other_y = codes[test_index], labels_vecs[test_index]
mid_other = len(other_x) // 2
val_x, val_y = other_x[:mid_other], other_y[:mid_other]
test_x, test_y = other_x[mid_other:], other_y[mid_other:]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
# TODO: Classifier layers and operations
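# a single 256-unit fully-connected layer (ReLU by default) on top of the VGG codes, then a linear layer sized to the number of classes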
fc = tf.contrib.layers.fully_connected(inputs_, 256)
logits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_fn=None)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels_))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
def get_batches(x, y, n_batches=10):
Return a generator that yields batches from arrays x and y.
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
for x,y in get_batches(codes, labels_vecs, n_batches=2935):
print(x[0])
print(y[0])
# Hyperparameters
epochs = 20
iteration = 0
number_of_batches = 10
saver = tf.train.Saver()
with tf.Session() as sess:
# TODO: Your training code here
sess.run(tf.global_variables_initializer())
for epoch in range(epochs):
# Loop over all batches
for batch_x, batch_y in get_batches(train_x, train_y, n_batches=number_of_batches):
feed = {inputs_: batch_x, labels_: batch_y}
_, train_loss = sess.run([optimizer, cost], feed_dict=feed)
iteration += 1
print("Epoch: {}/{}".format(epoch+1, epochs),
"Iteration: {}".format(iteration),
"Training loss: {:.5f}".format(cost))
if iteration % 5 == 0:
feed = {inputs_: val_x,
labels_: val_y}
val_acc = sess.run(accuracy, feed_dict=feed)
print("Epoch: {}/{}".format(epoch+1, epochs),
"Iteration: {}".format(iteration),
"Validation Acc: {:.4f}".format(val_acc))
# Save Model
saver.save(sess, "checkpoints/flowers.ckpt")
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
print('"vgg" object already exists. Will not create again.')
else:
#create vgg
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Multilingual Universal Sentence Encoder Q&A and retrieval
Step2: Run the following code block to download and extract the SQuAD dataset as follows.
Step3: The next code block sets up the TensorFlow graph g and session using the question_encoder and response_encoder signatures of the Universal Encoder Multilingual Q&A model.
Step4: The next code block uses the response_encoder to compute embeddings for all of the text/context tuples and stores them in a simpleneighbors index.
Step5: At retrieval time, the question is encoded with the question_encoder, and the question embedding is used to query the simpleneighbors index.
|
<ASSISTANT_TASK:>
Python Code:
# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
%%capture
#@title Setup Environment
# Install the latest Tensorflow version.
!pip install -q tensorflow_text
!pip install -q simpleneighbors[annoy]
!pip install -q nltk
!pip install -q tqdm
#@title Setup common imports and functions
import json
import nltk
import os
import pprint
import random
import simpleneighbors
import urllib
from IPython.display import HTML, display
from tqdm.notebook import tqdm
import tensorflow.compat.v2 as tf
import tensorflow_hub as hub
from tensorflow_text import SentencepieceTokenizer
nltk.download('punkt')
def download_squad(url):
return json.load(urllib.request.urlopen(url))
def extract_sentences_from_squad_json(squad):
all_sentences = []
for data in squad['data']:
for paragraph in data['paragraphs']:
sentences = nltk.tokenize.sent_tokenize(paragraph['context'])
all_sentences.extend(zip(sentences, [paragraph['context']] * len(sentences)))
return list(set(all_sentences)) # remove duplicates
def extract_questions_from_squad_json(squad):
questions = []
for data in squad['data']:
for paragraph in data['paragraphs']:
for qas in paragraph['qas']:
if qas['answers']:
questions.append((qas['question'], qas['answers'][0]['text']))
return list(set(questions))
def output_with_highlight(text, highlight):
output = "<li> "
i = text.find(highlight)
while True:
if i == -1:
output += text
break
output += text[0:i]
output += '<b>'+text[i:i+len(highlight)]+'</b>'
text = text[i+len(highlight):]
i = text.find(highlight)
return output + "</li>\n"
def display_nearest_neighbors(query_text, answer_text=None):
query_embedding = model.signatures['question_encoder'](tf.constant([query_text]))['outputs'][0]
search_results = index.nearest(query_embedding, n=num_results)
if answer_text:
result_md = '''
<p>Random Question from SQuAD:</p>
<p> <b>%s</b></p>
<p>Answer:</p>
<p> <b>%s</b></p>
''' % (query_text , answer_text)
else:
result_md = '''
<p>Question:</p>
<p> <b>%s</b></p>
''' % query_text
result_md += '''
<p>Retrieved sentences :
<ol>
'''
if answer_text:
for s in search_results:
result_md += output_with_highlight(s, answer_text)
else:
for s in search_results:
result_md += '<li>' + s + '</li>\n'
result_md += "</ol>"
display(HTML(result_md))
#@title Download and extract SQuAD data
squad_url = 'https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json' #@param ["https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json", "https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json", "https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json", "https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json"]
squad_json = download_squad(squad_url)
sentences = extract_sentences_from_squad_json(squad_json)
questions = extract_questions_from_squad_json(squad_json)
print("%s sentences, %s questions extracted from SQuAD %s" % (len(sentences), len(questions), squad_url))
print("\nExample sentence and context:\n")
sentence = random.choice(sentences)
print("sentence:\n")
pprint.pprint(sentence[0])
print("\ncontext:\n")
pprint.pprint(sentence[1])
print()
#@title Load model from tensorflow hub
module_url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual-qa/3" #@param ["https://tfhub.dev/google/universal-sentence-encoder-multilingual-qa/3", "https://tfhub.dev/google/universal-sentence-encoder-qa/3"]
model = hub.load(module_url)
#@title Compute embeddings and build simpleneighbors index
batch_size = 100
encodings = model.signatures['response_encoder'](
input=tf.constant([sentences[0][0]]),
context=tf.constant([sentences[0][1]]))
index = simpleneighbors.SimpleNeighbors(
len(encodings['outputs'][0]), metric='angular')
print('Computing embeddings for %s sentences' % len(sentences))
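# group sentences into batches of batch_size (a trailing partial batch, if any, is dropped by zip)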
slices = zip(*(iter(sentences),) * batch_size)
num_batches = int(len(sentences) / batch_size)
for s in tqdm(slices, total=num_batches):
response_batch = list([r for r, c in s])
context_batch = list([c for r, c in s])
encodings = model.signatures['response_encoder'](
input=tf.constant(response_batch),
context=tf.constant(context_batch)
)
for batch_index, batch in enumerate(response_batch):
index.add_one(batch, encodings['outputs'][batch_index])
index.build()
print('simpleneighbors index for %s sentences built.' % len(sentences))
#@title Retrieve nearest neighbors for a random question from SQuAD
num_results = 25 #@param {type:"slider", min:5, max:40, step:1}
query = random.choice(questions)
display_nearest_neighbors(query[0], query[1])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mri', 'sandbox-1', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Export -
|
<ASSISTANT_TASK:>
Python Code:
#|export
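# keeps dropout layers in train mode during validation, so repeated get_preds calls give stochastic (Monte Carlo dropout) predictions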
class MCDropoutCallback(Callback):
def before_validate(self):
for m in [m for m in flatten_model(self.model) if 'dropout' in m.__class__.__name__.lower()]:
m.train()
def after_validate(self):
for m in [m for m in flatten_model(self.model) if 'dropout' in m.__class__.__name__.lower()]:
m.eval()
learn = synth_learner()
# Call get_preds 10 times, then stack the predictions, yielding a tensor with shape [# of samples, batch_size, ...]
dist_preds = []
for i in range(10):
preds, targs = learn.get_preds(cbs=[MCDropoutCallback()])
dist_preds += [preds]
torch.stack(dist_preds).shape
#|hide
from nbdev.export import notebook2script
notebook2script()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Make a grid and set boundary conditions
Step2: Set the initial and run conditions
Step3: Instantiate the components
Step4: Run the components for 2 Myr and trace an East-West cross-section of the topography every 100 kyr
Step5: And plot final topography
Step6: This behaviour corresponds to the evolution observed using a classical non-linear diffusion model.
Step7: Set the run conditions
Step8: Instantiate the components
Step9: Run for 1 Myr, plotting the cross-section regularly
Step10: The material is diffused from the top and along the slope and it accumulates at the bottom, where the topography flattens.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from matplotlib.pyplot import figure, plot, show, title, xlabel, ylabel
from landlab import RasterModelGrid
from landlab.components import FlowDirectorSteepest, TransportLengthHillslopeDiffuser
from landlab.plot import imshow_grid
# to plot figures in the notebook:
%matplotlib inline
mg = RasterModelGrid(
(20, 20),
xy_spacing=50.0) # raster grid with 20 rows, 20 columns and dx=50m
z = np.random.rand(mg.size('node')) # random noise for initial topography
mg.add_field("topographic__elevation", z, at="node")
mg.set_closed_boundaries_at_grid_edges(
False, True, False,
True) # N and S boundaries are closed, E and W are open
total_t = 2000000.0 # total run time (yr)
dt = 1000.0 # time step (yr)
nt = int(total_t // dt) # number of time steps
uplift_rate = 0.0001 # uplift rate (m/yr)
kappa = 0.001 # erodibility (m/yr)
Sc = 0.6 # critical slope
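# FlowDirectorSteepest supplies the steepest-descent flow directions that the transport-length diffuser uses to route sediment downslope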
fdir = FlowDirectorSteepest(mg)
tl_diff = TransportLengthHillslopeDiffuser(mg,
erodibility=kappa,
slope_crit=Sc)
for t in range(nt):
fdir.run_one_step()
tl_diff.run_one_step(dt)
z[mg.core_nodes] += uplift_rate * dt # add the uplift
# add some output to let us see we aren't hanging:
if t % 100 == 0:
print(t * dt)
# plot east-west cross-section of topography:
x_plot = range(0, 1000, 50)
z_plot = z[100:120]
figure("cross-section")
plot(x_plot, z_plot)
figure("cross-section")
title("East-West cross section")
xlabel("x (m)")
ylabel("z (m)")
figure("final topography")
im = imshow_grid(mg,
"topographic__elevation",
grid_units=["m", "m"],
var_name="Elevation (m)")
# Create grid and topographic elevation field:
mg2 = RasterModelGrid((20, 20), xy_spacing=50.0)
z = np.zeros(mg2.number_of_nodes)
z[mg2.node_x > 500] = mg2.node_x[mg2.node_x > 500] / 10
mg2.add_field("topographic__elevation", z, at="node")
# Set boundary conditions:
mg2.set_closed_boundaries_at_grid_edges(False, True, False, True)
# Show initial topography:
im = imshow_grid(mg2,
"topographic__elevation",
grid_units=["m", "m"],
var_name="Elevation (m)")
# Plot an east-west cross-section of the initial topography:
z_plot = z[100:120]
x_plot = range(0, 1000, 50)
figure(2)
plot(x_plot, z_plot)
title("East-West cross section")
xlabel("x (m)")
ylabel("z (m)")
total_t = 1000000.0 # total run time (yr)
dt = 1000.0 # time step (yr)
nt = int(total_t // dt) # number of time steps
fdir = FlowDirectorSteepest(mg2)
tl_diff = TransportLengthHillslopeDiffuser(mg2,
erodibility=0.001,
slope_crit=0.6)
for t in range(nt):
fdir.run_one_step()
tl_diff.run_one_step(dt)
# add some output to let us see we aren't hanging:
if t % 100 == 0:
print(t * dt)
z_plot = z[100:120]
figure(2)
plot(x_plot, z_plot)
# Import Linear diffuser:
from landlab.components import LinearDiffuser
# Create grid and topographic elevation field:
mg3 = RasterModelGrid((20, 20), xy_spacing=50.0)
z = np.ones(mg3.number_of_nodes)
z[mg3.node_x > 500] = mg3.node_x[mg3.node_x > 500] / 10
mg3.add_field("topographic__elevation", z, at="node")
# Set boundary conditions:
mg3.set_closed_boundaries_at_grid_edges(False, True, False, True)
# Instantiate components:
fdir = FlowDirectorSteepest(mg3)
diff = LinearDiffuser(mg3, linear_diffusivity=0.1)
# Set run conditions:
total_t = 1000000.0
dt = 1000.0
nt = int(total_t // dt)
# Run for 1 Myr, plotting east-west cross-section regularly:
for t in range(nt):
fdir.run_one_step()
diff.run_one_step(dt)
# add some output to let us see we aren't hanging:
if t % 100 == 0:
print(t * dt)
z_plot = z[100:120]
figure(2)
plot(x_plot, z_plot)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The multiple regression model describes the response as a weighted sum of the predictors
Step2: You can also use the formulaic interface of statsmodels to compute regression with multiple predictors. You just need append the predictors to the formula via a '+' symbol.
Step3: Handling Categorical Variables
Step4: The variable famhist holds if the patient has a family history of coronary artery disease. The percentage of the response chd (chronic heart disease ) for patients with absent/present family history of coronary artery disease is
Step5: These two levels (absent/present) have a natural ordering to them, so we can perform linear regression on them, after we convert them to numeric. This can be done using pd.Categorical.
Step6: There are several possible approaches to encode categorical values, and statsmodels has built-in support for many of them. In general these work by splitting a categorical variable into many different binary variables. The simplest way to encode categoricals is "dummy-encoding" which encodes a k-level categorical variable into k-1 binary variables. In statsmodels this is done easily using the C() function.
Step7: Because hlthp is a binary variable we can visualize the linear regression model by plotting two lines
Step8: Notice that the two lines are parallel. This is because the categorical variable affects only the intercept and not the slope (which is a function of logincome).
Step9: The * in the formula means that we want the interaction term in addition to each term separately (called main-effects). If you want to include just the interaction, use the ':' operator instead (e.g. hlthp : logincome).
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt
%matplotlib inline
df_adv = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)
X = df_adv[['TV', 'Radio']]
y = df_adv['Sales']
df_adv.head()
X = df_adv[['TV', 'Radio']]
y = df_adv['Sales']
## fit a OLS model with intercept on TV and Radio
X = sm.add_constant(X)
est = sm.OLS(y, X).fit()
est.summary()
# import formula api as alias smf
import statsmodels.formula.api as smf
# formula: response ~ predictor + predictor
est = smf.ols(formula='Sales ~ TV + Radio', data=df_adv).fit()
import pandas as pd
df = pd.read_csv('http://statweb.stanford.edu/~tibs/ElemStatLearn/datasets/SAheart.data', index_col=0)
# copy data and separate predictors and response
X = df.copy()
y = X.pop('chd')
df.head()
# compute percentage of chronic heart disease for famhist
y.groupby(X.famhist).mean()
import statsmodels.formula.api as smf
# encode df.famhist as a numeric via pd.Factor
df['famhist_ord'] = pd.Categorical(df.famhist).codes  # .labels was removed from pandas; .codes is the current attribute
est = smf.ols(formula="chd ~ famhist_ord", data=df).fit()
df = pd.read_csv('https://raw.githubusercontent.com/statsmodels/statsmodels/master/statsmodels/datasets/randhie/src/randhie.csv')
df["logincome"] = np.log1p(df.income)
df[['mdvis', 'logincome', 'hlthp']].tail()
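# NOTE: short_summary() is called in the cells below but its definition is not shown
# in this excerpt; this minimal stand-in (an assumption on my part) just displays the
# coefficient table of a fitted model:
def short_summary(est):
    return est.summary().tables[1]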
plt.scatter(df.logincome, df.mdvis, alpha=0.3)
plt.xlabel('Log income')
plt.ylabel('Number of visits')
income_linspace = np.linspace(df.logincome.min(), df.logincome.max(), 100)
est = smf.ols(formula='mdvis ~ logincome + hlthp', data=df).fit()
plt.plot(income_linspace, est.params[0] + est.params[1] * income_linspace + est.params[2] * 0, 'r')
plt.plot(income_linspace, est.params[0] + est.params[1] * income_linspace + est.params[2] * 1, 'g')
short_summary(est)
plt.scatter(df.logincome, df.mdvis, alpha=0.3)
plt.xlabel('Log income')
plt.ylabel('Number of visits')
est = smf.ols(formula='mdvis ~ hlthp * logincome', data=df).fit()
plt.plot(income_linspace, est.params[0] + est.params[1] * 0 + est.params[2] * income_linspace +
est.params[3] * 0 * income_linspace, 'r')
plt.plot(income_linspace, est.params[0] + est.params[1] * 1 + est.params[2] * income_linspace +
est.params[3] * 1 * income_linspace, 'g')
short_summary(est)
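# Sketch of an interaction-only model using the ':' formula operator
# (unlike '*', ':' does not add the main effects):
est_interaction_only = smf.ols(formula='mdvis ~ hlthp : logincome', data=df).fit()
est_interaction_only.params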
# load the boston housing dataset - median house values in the Boston area
df = pd.read_csv('http://vincentarelbundock.github.io/Rdatasets/csv/MASS/Boston.csv')
# plot lstat (% lower status of the population) against median value
plt.figure(figsize=(6 * 1.618, 6))
plt.scatter(df.lstat, df.medv, s=10, alpha=0.3)
plt.xlabel('lstat')
plt.ylabel('medv')
# points linearlyd space on lstats
x = pd.DataFrame({'lstat': np.linspace(df.lstat.min(), df.lstat.max(), 100)})
# 1-st order polynomial
poly_1 = smf.ols(formula='medv ~ 1 + lstat', data=df).fit()
plt.plot(x.lstat, poly_1.predict(x), 'b-', label='Poly n=1 $R^2$=%.2f' % poly_1.rsquared,
alpha=0.9)
# 2-nd order polynomial
poly_2 = smf.ols(formula='medv ~ 1 + lstat + I(lstat ** 2.0)', data=df).fit()
plt.plot(x.lstat, poly_2.predict(x), 'g-', label='Poly n=2 $R^2$=%.2f' % poly_2.rsquared,
alpha=0.9)
# 3-rd order polynomial
poly_3 = smf.ols(formula='medv ~ 1 + lstat + I(lstat ** 2.0) + I(lstat ** 3.0)', data=df).fit()
plt.plot(x.lstat, poly_3.predict(x), 'r-', alpha=0.9,
label='Poly n=3 $R^2$=%.2f' % poly_3.rsquared)
plt.legend()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we'll import isochrones for each cluster, adopting nominal ages and metallicities (Pleiades: 120 Myr at [Fe/H] = +0.00; Praesepe: 600 Myr at [Fe/H] = +0.15, matching the isochrone files loaded below).
Step2: Load data for Pleiades and Praesepe stars to demonstrate that models fit data.
Step3: Define cluster distance moduli and reddening. Pleiades reddening is adopted from Taylor (2008, AJ, 136, 3) while for Praesepe we adopt the value from Taylor (2006, AJ, 132, 6).
Step4: Start with just plotting $(B-V)$ CMD for both clusters.
Step5: Already we see a hint that Pleiades stars are as blue as one might expect (on average) given the properties of the cluster. Similarly, the stars in Praesepe are just as red as expected. However, issues arise once one begins to directly compare K-dwarf stars in the two clusters. The empirical isochrone from Kamai et al. (2014) reproduces well the Pleiades locus and is barely visible under the model isochrone, lending credence to the quality of the model color predictions above $M_V \sim 8$. However, if we compare the Kamai empirical isochrone to stars in Praesepe, the Kamai isochrone is noticeably bluer between $6 < M_V < 8$, or the magnitude range covered largely by K-dwarf stars. Yet, in both cases the model isochrones reproduce the morphology of the CMDs, differing only by their ages (insignificant) and their metallicity.
Step6: Unfortunately, for Praesepe, there is not enough NIR data for G and K dwarfs beyond the MgH bump. We can see from the Pleiades cluster that there are some issues with the observed K band magnitudes in the small region between the MgH bump and the formation of strong water absorption.
Step7: Now for a (V - K) diagram for the Pleiades only.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
iso_120_gas07 = np.genfromtxt('data/dmestar_00120.0myr_z+0.00_a+0.00_marcs.iso')
iso_600_gas07 = np.genfromtxt('data/isochrone_600.0myr_z+0.15_a+0.00_marcs.iso')
iso_120_mixed = np.genfromtxt('data/dmestar_00120.0myr_z+0.00_a+0.00_mixed.iso')
iso_emp_k14 = np.genfromtxt('data/Kamai_Pleiades_emp.iso')
pleiades_s07 = np.genfromtxt('data/Stauffer_Pleiades_litPhot.txt', usecols=(2, 3, 5, 6, 8, 9, 13, 14, 15))
pleiades_k14 = np.genfromtxt('data/Kamai_Pleiades_cmd.dat', usecols=(0, 1, 2, 3, 4, 5))
praesepe_u79 = np.genfromtxt('data/Upgren_Praesepe.txt', usecols=(3, 4))
praesepe_w81 = np.genfromtxt('data/Weis_Praesepe.txt', usecols=(3, 4))
praesepe_s82 = np.genfromtxt('data/Stauffer1982_Praesepe.txt', usecols=(3, 4))
praesepe_m90 = np.genfromtxt('data/Mermilliod_Praesepe.txt', usecols=(2, 3))
praesepe_j11 = np.genfromtxt('data/Joner_Praesepe.txt', usecols=(2, 4, 6, 8, 10))
pl_dis = 5.61
pl_ebv = 0.034
pl_evi = 1.25*pl_ebv
pl_evk = 2.78*pl_ebv
pl_ejk = 0.50*pl_ebv
pl_av = 3.12*pl_ebv
pl_ak = 0.34*pl_ebv
pr_dis = 6.26
pr_ebv = 0.027
pr_evi = 1.25*pr_ebv
pr_evk = 2.78*pr_ebv
pr_ejk = 0.50*pr_ebv
pr_av = 3.12*pr_ebv
pr_ak = 0.34*pr_ebv
fig, ax = plt.subplots(1, 2, figsize=(12., 8.), sharex=True, sharey=True)
for axis in ax:
axis.grid(True)
axis.tick_params(which='major', axis='both', length=15., labelsize=16.)
axis.set_ylim(12., 3.)
axis.set_xlim(0.0, 2.0)
axis.set_xlabel('$(B-V)$', fontsize=20.)
# PLEIADES
ax[0].set_title('Pleiades', fontsize=20., family='serif')
ax[0].set_ylabel('$M_V$', fontsize=20.)
ax[0].plot(pleiades_k14[:, 2] - pl_ebv, pleiades_k14[:, 0] - pl_av - pl_dis,
'o', c='#555555', markersize=4.0, alpha=0.2)
ax[0].plot(pleiades_s07[:, 6] - pleiades_s07[:, 7] - pl_ebv, pleiades_s07[:, 7] - pl_av - pl_dis,
'o', c='#555555', markersize=4.0, alpha=0.2)
ax[0].plot(iso_emp_k14[:, 1] - pl_ebv, iso_emp_k14[:, 0] - pl_av - pl_dis,
dashes=(20., 5.), lw=3, c='#B22222', alpha=0.8)
ax[0].plot(iso_120_gas07[:, 7] - iso_120_gas07[:, 8], iso_120_gas07[:, 8], lw=3, c='#0094b2')
ax[0].plot(iso_120_mixed[:, 6] - iso_120_mixed[:, 7], iso_120_mixed[:, 7], lw=3, c='#555555')
# PRAESEPE
ax[1].set_title('Praesepe', fontsize=20., family='serif')
ax[1].plot(praesepe_u79[:, 1] - pr_ebv, praesepe_u79[:, 0] - pr_av - pr_dis,
'o', c='#555555', markersize=4.0, alpha=0.2)
ax[1].plot(praesepe_w81[:, 1] - pr_ebv, praesepe_w81[:, 0] - pr_av - pr_dis,
'o', c='#555555', markersize=4.0, alpha=0.2)
ax[1].plot(praesepe_s82[:, 1] - pr_ebv, praesepe_s82[:, 0] - pr_av - pr_dis,
'o', c='#555555', markersize=4.0, alpha=0.2)
ax[1].plot(praesepe_m90[:, 1] - pr_ebv, praesepe_m90[:, 0] - pr_av - pr_dis,
'o', c='#555555', markersize=4.0, alpha=0.2)
ax[1].plot(praesepe_j11[:, 1] - pr_ebv, praesepe_j11[:, 0] - pr_av - pr_dis,
'o', c='#555555', markersize=4.0, alpha=0.2)
ax[1].plot(iso_emp_k14[:, 1] - pl_ebv, iso_emp_k14[:, 0] - pl_av - pl_dis,
dashes=(
20., 5.), lw=3, c='#B22222', alpha=0.8)
ax[1].plot(iso_600_gas07[:, 7] - iso_600_gas07[:, 8], iso_600_gas07[:, 8], lw=3, c='#0094b2')
praesepe_a02 = np.genfromtxt('data/Adams_Praesepe.txt', usecols=(3, 4, 5, 6, 7, 8)) # 2MASS NIR photometry
praesepe_h99 = np.genfromtxt('data/Hodgkin_Praesepe.txt') # JHK (UKIRT?) photometry
praesepe_t08 = np.genfromtxt('data/Taylor_Praesepe.txt') # VRI (Cousins) photometry
fig, ax = plt.subplots(1, 2, figsize=(12., 8.), sharex=True, sharey=True)
for axis in ax:
axis.grid(True)
axis.tick_params(which='major', axis='both', length=15., labelsize=16.)
axis.set_ylim(10., 1.)
axis.set_xlim(0.0, 1.5)
axis.set_xlabel('$(J - K)$', fontsize=20.)
# PLEIADES
ax[0].set_title('Pleiades', fontsize=20., family='serif')
ax[0].set_ylabel('$M_K$', fontsize=20.)
ax[0].plot(pleiades_s07[:, 0] - pleiades_s07[:, 4] - pl_ejk, pleiades_s07[:, 4] - pl_ak - pl_dis,
'o', c='#555555', markersize=4.0, alpha=0.2)
ax[0].plot(iso_120_gas07[:, 11] - iso_120_gas07[:, 13], iso_120_gas07[:, 13], lw=3, c='#0094b2')
# PRAESEPE
ax[1].set_title('Praesepe', fontsize=20., family='serif')
ax[1].plot(praesepe_a02[:, 0] - praesepe_a02[:, 4] - pr_ejk, praesepe_a02[:, 4] - pr_ak - pr_dis,
'o', c='#555555', markersize=4.0, alpha=0.2)
ax[1].plot(praesepe_h99[:, 7] - pr_ejk, praesepe_h99[:, 4] - pr_ak - pr_dis,
'o', c='#555555', markersize=4.0, alpha=0.2)
ax[1].plot(iso_600_gas07[:, 11] - iso_600_gas07[:, 13], iso_600_gas07[:, 13], lw=3, c='#0094b2')
fig, ax = plt.subplots(1, 2, figsize=(12., 8.), sharex=True, sharey=True)
for axis in ax:
axis.grid(True)
axis.tick_params(which='major', axis='both', length=15., labelsize=16.)
axis.set_ylim(10., 1.)
axis.set_xlim(0.0, 2.5)
axis.set_xlabel('$(V - I_C)$', fontsize=20.)
# PLEIADES
ax[0].set_title('Pleiades', fontsize=20., family='serif')
ax[0].set_ylabel('$M_V$', fontsize=20.)
ax[0].plot(pleiades_s07[:, 7] - pleiades_s07[:, 8] - pl_evi, pleiades_s07[:, 7] - pl_av - pl_dis,
'o', c='#555555', markersize=4.0, alpha=0.2)
ax[0].plot(pleiades_k14[:, 4] - pl_evi, pleiades_k14[:, 0] - pl_av - pl_dis,
'o', c='#555555', markersize=4.0, alpha=0.2)
ax[0].plot(iso_emp_k14[:, 2] - pl_ebv, iso_emp_k14[:, 0] - pl_av - pl_dis,
dashes=(20., 5.), lw=3, c='#B22222', alpha=0.8)
ax[0].plot(iso_120_gas07[:, 8] - iso_120_gas07[:, 10], iso_120_gas07[:, 8], lw=3, c='#0094b2')
ax[0].plot(iso_120_mixed[:, 7] - iso_120_mixed[:, 9], iso_120_mixed[:, 7], lw=3, c='#555555')
# PRAESEPE
ax[1].set_title('Praesepe', fontsize=20., family='serif')
ax[1].plot(praesepe_t08[:, 5] + praesepe_t08[:, 7] - pr_evi, praesepe_t08[:, 3] - pr_av - pr_dis,
'o', c='#555555', markersize=4.0, alpha=0.2)
ax[1].plot(praesepe_j11[:, -1] - pr_evi, praesepe_j11[:, 0] - pr_av - pr_dis,
'o', c='#555555', markersize=4.0, alpha=0.2)
ax[1].plot(iso_600_gas07[:, 8] - iso_600_gas07[:, 10], iso_600_gas07[:, 8], lw=3, c='#0094b2')
fig, ax = plt.subplots(1, 2, figsize=(12., 8.), sharex=True, sharey=True)
for axis in ax:
axis.grid(True)
axis.tick_params(which='major', axis='both', length=15., labelsize=16.)
axis.set_ylim(10., 1.)
axis.set_xlim(0.0, 5.0)
axis.set_xlabel('$(V - K)$', fontsize=20.)
# PLEIADES
ax[0].set_title('Pleiades', fontsize=20., family='serif')
ax[0].set_ylabel('$M_V$', fontsize=20.)
ax[0].plot(pleiades_s07[:, 7] - pleiades_s07[:, 4] - pl_evk, pleiades_s07[:, 7] - pl_av - pl_dis,
'o', c='#555555', markersize=4.0, alpha=0.2)
ax[0].plot(iso_emp_k14[:, 3] - pl_evk, iso_emp_k14[:, 0] - pl_av - pl_dis,
dashes=(20., 5.), lw=3, c='#B22222', alpha=0.8)
ax[0].plot(iso_120_gas07[:, 8] - iso_120_gas07[:, 13], iso_120_gas07[:, 8], lw=3, c='#0094b2')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note that the first class is linearly separable from the other two classes but the second and third classes are not linearly separable from each other.
Step2: Visualize the working of the algorithm
Step3: Let's visualize the point and its K=5 nearest neighbors.
Step4: 2. Principal Components Analysis
Step5: Shuffle the data randomly and make train and test splits
Step7: Make a function for visualization of the images as an album
Step8: Visualize some faces from the training set
Step9: Calculate a set of eigen-faces
Step10: Visualize the eigen faces
Step11: Transform the data to the vector space spanned by the eigen faces
Step12: Use a KNN-Classifier in this transformed space to identify the faces
|
<ASSISTANT_TASK:>
Python Code:
# Imports assumed by this notebook (not shown in this excerpt). Note that RandomizedPCA
# comes from older scikit-learn; on recent versions use
# sklearn.decomposition.PCA(..., svd_solver='randomized') instead.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris, fetch_olivetti_faces
from sklearn import neighbors
from sklearn.neighbors import KNeighborsClassifier
from sklearn.decomposition import RandomizedPCA
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix
iris = load_iris()
X = iris.data[:,:2] #Choosing only the first two input-features
Y = iris.target
number_of_samples = len(Y)
#Splitting into training and test sets
random_indices = np.random.permutation(number_of_samples)
#Training set
num_training_samples = int(number_of_samples*0.75)
x_train = X[random_indices[:num_training_samples]]
y_train = Y[random_indices[:num_training_samples]]
#Test set
x_test = X[random_indices[num_training_samples:]]
y_test = Y[random_indices[num_training_samples:]]
#Visualizing the training data
X_class0 = np.asmatrix([x_train[i] for i in range(len(x_train)) if y_train[i]==0]) #Picking only the first two classes
Y_class0 = np.zeros((X_class0.shape[0]),dtype=np.int)
X_class1 = np.asmatrix([x_train[i] for i in range(len(x_train)) if y_train[i]==1])
Y_class1 = np.ones((X_class1.shape[0]),dtype=np.int)
X_class2 = np.asmatrix([x_train[i] for i in range(len(x_train)) if y_train[i]==2])
Y_class2 = np.full((X_class2.shape[0]),fill_value=2,dtype=np.int)
plt.scatter(X_class0[:,0], X_class0[:,1],color='red')
plt.scatter(X_class1[:,0], X_class1[:,1],color='blue')
plt.scatter(X_class2[:,0], X_class2[:,1],color='green')
plt.xlabel('sepal length')
plt.ylabel('sepal width')
plt.legend(['class 0','class 1','class 2'])
plt.title('Fig 3: Visualization of training data')
plt.show()
model = neighbors.KNeighborsClassifier(n_neighbors = 5) # K = 5
model.fit(x_train, y_train)
query_point = np.array([5.9,2.9])
true_class_of_query_point = 1
predicted_class_for_query_point = model.predict([query_point])
print("Query point: {}".format(query_point))
print("True class of query point: {}".format(true_class_of_query_point))
query_point.shape
neighbors_object = neighbors.NearestNeighbors(n_neighbors=5)
neighbors_object.fit(x_train)
distances_of_nearest_neighbors, indices_of_nearest_neighbors_of_query_point = neighbors_object.kneighbors([query_point])
nearest_neighbors_of_query_point = x_train[indices_of_nearest_neighbors_of_query_point[0]]
print("The query point is: {}\n".format(query_point))
print("The nearest neighbors of the query point are:\n {}\n".format(nearest_neighbors_of_query_point))
print("The classes of the nearest neighbors are: {}\n".format(y_train[indices_of_nearest_neighbors_of_query_point[0]]))
print("Predicted class for query point: {}".format(predicted_class_for_query_point[0]))
plt.scatter(X_class0[:,0], X_class0[:,1],color='red')
plt.scatter(X_class1[:,0], X_class1[:,1],color='blue')
plt.scatter(X_class2[:,0], X_class2[:,1],color='green')
plt.scatter(query_point[0], query_point[1],marker='^',s=75,color='black')
plt.scatter(nearest_neighbors_of_query_point[:,0], nearest_neighbors_of_query_point[:,1],marker='s',s=150,color='yellow',alpha=0.30)
plt.xlabel('sepal length')
plt.ylabel('sepal width')
plt.legend(['class 0','class 1','class 2'])
plt.title('Fig 3: Working of the K-NN classification algorithm')
plt.show()
def evaluate_performance(model, x_test, y_test):
test_set_predictions = [model.predict(x_test[i].reshape((1,len(x_test[i]))))[0] for i in range(x_test.shape[0])]
test_misclassification_percentage = 0
for i in range(len(test_set_predictions)):
if test_set_predictions[i]!=y_test[i]:
test_misclassification_percentage+=1
    test_misclassification_percentage *= 100.0/len(y_test)  # float division so the percentage is not truncated
return test_misclassification_percentage
#Evaluate the performances on the validation and test sets
print("Evaluating K-NN classifier:")
test_err = evaluate_performance(model, x_test, y_test)
print('test misclassification percentage = {}%'.format(test_err))
faces_data = fetch_olivetti_faces()
n_samples, height, width = faces_data.images.shape
X = faces_data.data
n_features = X.shape[1]
y = faces_data.target
n_classes = int(max(y)+1)
print("Number of samples: {}, \nHeight of each image: {}, \nWidth of each image: {}, \nNumber of input features: {},\nNumber of output classes: {}\n".format(n_samples,height,
width,n_features,n_classes))
# Split into a training set (75%) and a test set (25%)
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.25, random_state=42)
mean_image = np.mean(X_train,axis=0)
plt.figure()
plt.imshow(mean_image.reshape((64,64)), cmap=plt.cm.gray)
plt.xticks(())
plt.yticks(())
plt.show()
def plot_gallery(images, h, w, titles=None, n_row=3, n_col=4):
    """Helper function to plot a gallery of portraits.
    Taken from: http://scikit-learn.org/stable/auto_examples/applications/face_recognition.html
    """
plt.figure(figsize=(1.8 * n_col, 2.4 * n_row))
plt.subplots_adjust(bottom=0, left=.01, right=.99, top=.90, hspace=.35)
for i in range(n_row * n_col):
plt.subplot(n_row, n_col, i + 1)
plt.imshow(images[i].reshape((h, w)), cmap=plt.cm.gray)
if titles != None:
plt.title(titles[i], size=12)
plt.xticks(())
plt.yticks(())
chosen_images = X_train[:12]
chosen_labels = y_train[:12]
titles = ['Person #'+str(i) for i in chosen_labels]
plot_gallery(chosen_images, height, width, titles)
#Reduce the dimensionality of the feature space
n_components = 150
#Finding the top n_components principal components in the data
pca = RandomizedPCA(n_components=n_components, whiten=True).fit(X_train)
#Find the eigen-vectors of the feature space
eigenfaces = pca.components_.reshape((n_components, height, width))
titles = ['eigen-face #'+str(i) for i in range(12)]
plot_gallery(eigenfaces, height, width, titles)
#Projecting the data onto the eigenspace
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)
print("Current shape of input data matrix: ", X_train_pca.shape)
knn_classifier = KNeighborsClassifier(n_neighbors = 5)
knn_classifier.fit(X_train_pca, y_train)
#Detect faces in the test set
y_pred_test = knn_classifier.predict(X_test_pca)
correct_count = 0.0
for i in range(len(y_test)):
if y_pred_test[i] == y_test[i]:
correct_count += 1.0
accuracy = correct_count/float(len(y_test))
print("Accuracy:", accuracy)
print(classification_report(y_test, y_pred_test))
print(confusion_matrix(y_test, y_pred_test, labels=range(n_classes)))
def title(y_pred, y_test, target_names, i):
pred_name = target_names[y_pred[i]].rsplit(' ', 1)[-1]
true_name = target_names[y_test[i]].rsplit(' ', 1)[-1]
return 'predicted: %s\ntrue: %s' % (pred_name, true_name)
target_names = [str(element) for element in np.arange(40)+1]
prediction_titles = [title(y_pred_test, y_test, target_names, i)
for i in range(y_pred_test.shape[0])]
plot_gallery(X_test, height, width, prediction_titles, n_row=2, n_col=6)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Input Parameter
Step2: Preparation
Step3: Create space and time vector
Step4: Source signal - Ricker-wavelet
Step5: Time stepping
Step6: Save seismograms
Step7: Plotting
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# Discretization
c1=30 # Number of grid points per dominant wavelength
c2=0.2 # CFL-Number
nx=300 # Number of grid points in X
ny=300 # Number of grid points in Y
T=1 # Total propagation time
# Source Signal
f0= 5 # Center frequency Ricker-wavelet
q0= 100 # Maximum amplitude Ricker-Wavelet
xscr = 150 # Source position (in grid points) in X
yscr = 150 # Source position (in grid points) in Y
# Receiver
xrec1=150; yrec1=120; # Position Reciever 1 (in grid points)
xrec2=150; yrec2=150; # Position Reciever 2 (in grid points)
xrec3=150; yrec3=180;# Position Reciever 3 (in grid points)
# Velocity and density
modell_v = 3000*np.ones((ny,nx))
rho=2.2*np.ones((ny,nx))
# Init wavefields
vx=np.zeros(shape = (ny,nx))
vy=np.zeros(shape = (ny,nx))
p=np.zeros(shape = (ny,nx))
vx_x=np.zeros(shape = (ny,nx))
vy_y=np.zeros(shape = (ny,nx))
p_x=np.zeros(shape = (ny,nx))
p_y=np.zeros(shape = (ny,nx))
# Calculate first Lame-Paramter
l=rho * modell_v * modell_v
cmin=min(modell_v.flatten()) # Lowest P-wave velocity
cmax=max(modell_v.flatten()) # Highest P-wave velocity
fmax=2*f0 # Maximum frequency
dx=cmin/(fmax*c1) # Spatial discretization (in m)
dy=dx # Spatial discretization (in m)
dt=dx/(cmax)*c2 # Temporal discretization (in s)
lampda_min=cmin/fmax # Smallest wavelength
# Output model parameter:
print("Model size: x:",dx*nx,"in m, y:",dy*ny,"in m")
print("Temporal discretization: ",dt," s")
print("Spatial discretization: ",dx," m")
print("Number of gridpoints per minimum wavelength: ",lampda_min/dx)
x=np.arange(0,dx*nx,dx) # Space vector in X
y=np.arange(0,dy*ny,dy) # Space vector in Y
t=np.arange(0,T,dt) # Time vector
nt=np.size(t) # Number of time steps
# Plotting model
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.subplots_adjust(wspace=0.4,right=1.6)
ax1.plot(x,modell_v)
ax1.set_ylabel('VP in m/s')
ax1.set_xlabel('Depth in m')
ax1.set_title('P-wave velocity')
ax2.plot(x,rho)
ax2.set_ylabel('Density in g/cm^3')
ax2.set_xlabel('Depth in m')
ax2.set_title('Density');
tau=np.pi*f0*(t-1.5/f0)
q=q0*(1.0-2.0*tau**2.0)*np.exp(-tau**2)
# Plotting source signal
plt.figure(3)
plt.plot(t,q)
plt.title('Source signal Ricker-Wavelet')
plt.ylabel('Amplitude')
plt.xlabel('Time in s')
plt.draw()
# Init Seismograms
Seismogramm=np.zeros((3,nt)); # Three seismograms
# Calculation of some coefficients
i_dx=1.0/(dx)
i_dy=1.0/(dy)
c1=9.0/(8.0*dx)
c2=1.0/(24.0*dx)
c3=9.0/(8.0*dy)
c4=1.0/(24.0*dy)
c5=1.0/np.power(dx,3)
c6=1.0/np.power(dy,3)
c7=1.0/np.power(dx,2)
c8=1.0/np.power(dy,2)
c9=np.power(dt,3)/24.0
# Prepare slicing parameter:
kxM2=slice(5-2,nx-4-2)
kxM1=slice(5-1,nx-4-1)
kx=slice(5,nx-4)
kxP1=slice(5+1,nx-4+1)
kxP2=slice(5+2,nx-4+2)
kyM2=slice(5-2,ny-4-2)
kyM1=slice(5-1,ny-4-1)
ky=slice(5,ny-4)
kyP1=slice(5+1,ny-4+1)
kyP2=slice(5+2,ny-4+2)
## Time stepping
print("Starting time stepping...")
for n in range(2,nt):
# Inject source wavelet
p[yscr,xscr]=p[yscr,xscr]+q[n]
# Update velocity
p_x[ky,kx]=c1*(p[ky,kxP1]-p[ky,kx])-c2*(p[ky,kxP2]-p[ky,kxM1])
p_y[ky,kx]=c3*(p[kyP1,kx]-p[ky,kx])-c4*(p[kyP2,kx]-p[kyM1,kx])
vx=vx-dt/rho*p_x
vy=vy-dt/rho*p_y
# Update pressure
vx_x[ky,kx]=c1*(vx[ky,kx]-vx[ky,kxM1])-c2*(vx[ky,kxP1]-vx[ky,kxM2])
vy_y[ky,kx]=c3*(vy[ky,kx]-vy[kyM1,kx])-c4*(vy[kyP1,kx]-vy[kyM2,kx])
p=p-l*dt*(vx_x+vy_y)
# Save seismograms
Seismogramm[0,n]=p[yrec1,xrec1]
Seismogramm[1,n]=p[yrec2,xrec2]
Seismogramm[2,n]=p[yrec3,xrec3]
print("Finished time stepping!")
## Save seismograms
np.save("Seismograms/FD_2D_DX4_DT2_fast",Seismogramm)
## Image plot
fig, ax = plt.subplots(1,1)
img = ax.imshow(p);
ax.set_title('P-Wavefield')
ax.set_xticks(range(0,nx+1,int(nx/5)))
ax.set_yticks(range(0,ny+1,int(ny/5)))
ax.set_xlabel('Grid-points in X')
ax.set_ylabel('Grid-points in Y')
fig.colorbar(img)
## Plot seismograms
fig, (ax1, ax2, ax3) = plt.subplots(3, 1)
fig.subplots_adjust(hspace=0.4,right=1.6, top = 2 )
ax1.plot(t,Seismogramm[0,:])
ax1.set_title('Seismogram 1')
ax1.set_ylabel('Amplitude')
ax1.set_xlabel('Time in s')
ax1.set_xlim(0, T)
ax2.plot(t,Seismogramm[1,:])
ax2.set_title('Seismogram 2')
ax2.set_ylabel('Amplitude')
ax2.set_xlabel('Time in s')
ax2.set_xlim(0, T)
ax3.plot(t,Seismogramm[2,:])
ax3.set_title('Seismogram 3')
ax3.set_ylabel('Amplitude')
ax3.set_xlabel('Time in s')
ax3.set_xlim(0, T);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1. Environment setup
Step2: Modify the PATH environment variable so that skaffold is available
Step3: Environment variable setup
Step4: We also need to access your KFP cluster. You can access it in your Google Cloud Console under "AI Platform > Pipeline" menu.
Step5: Set the image name as tfx-pipeline under the current GCP project
Step6: Step 2. Copy the predefined template to your project directory.
Step7: TFX includes the taxi template with the TFX python package.
Step8: Step 3. Browse your copied source files
Step9: Let's quickly go over the structure of a test file to test Tensorflow code
Step10: First of all, notice that you start by importing the code you want to test by importing the corresponding module. Here we want to test the code in features.py so we import the module features
Step11: Let's upload our sample data to GCS bucket so that we can use it in our pipeline later.
Step12: Let's create a TFX pipeline using the tfx pipeline create command.
Step13: While creating a pipeline, Dockerfile and build.yaml will be generated to build a Docker image.
Step14: Or, you can also run the pipeline in the KFP Dashboard. The new execution run will be listed
Step 5. Add components for data validation.
Step15: Check pipeline outputs
Step16: Step 6. Add components for training
Step17: When this execution run finishes successfully, you have now created and run your first TFX pipeline in AI Platform Pipelines!
Step 7. Try BigQueryExampleGen
Step18: Step 8. Try Dataflow with KFP
Step19: You can find your Dataflow jobs in Dataflow in Cloud Console.
Step 9. Try Cloud AI Platform Training and Prediction with KFP
|
<ASSISTANT_TASK:>
Python Code:
import os
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
%%bash
LOCAL_BIN="/home/jupyter/.local/bin"
SKAFFOLD_URI="https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64"
test -d $LOCAL_BIN || mkdir -p $LOCAL_BIN
which skaffold || (
curl -Lo skaffold $SKAFFOLD_URI &&
chmod +x skaffold &&
mv skaffold $LOCAL_BIN
)
!which skaffold
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
GOOGLE_CLOUD_PROJECT=shell_output[0]
%env GOOGLE_CLOUD_PROJECT={GOOGLE_CLOUD_PROJECT}
ENDPOINT = ''  # Enter your ENDPOINT here.
# Docker image name for the pipeline image.
CUSTOM_TFX_IMAGE = 'gcr.io/' + GOOGLE_CLOUD_PROJECT + '/tfx-pipeline'
CUSTOM_TFX_IMAGE
PIPELINE_NAME = "guided_project_1"
PROJECT_DIR = os.path.join(os.path.expanduser("."), PIPELINE_NAME)
PROJECT_DIR
!tfx template copy \
--pipeline-name={PIPELINE_NAME} \
--destination-path={PROJECT_DIR} \
--model=taxi
%cd {PROJECT_DIR}
!python -m models.features_test
!python -m models.keras.model_test
!tail -26 models/features_test.py
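# A rough sketch of how such a test file is typically structured (the names below are
# illustrative assumptions, not the template's exact contents): import the module under
# test, subclass tf.test.TestCase, and let tf.test.main() run the cases.
#
#   import tensorflow as tf
#   from models import features          # import the code you want to test
#
#   class FeaturesTest(tf.test.TestCase):
#       def testFeatureKeysDefined(self):
#           self.assertGreater(len(features.FEATURE_KEYS), 0)
#
#   if __name__ == '__main__':
#       tf.test.main()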
GCS_BUCKET_NAME = GOOGLE_CLOUD_PROJECT + '-kubeflowpipelines-default'
GCS_BUCKET_NAME
!gsutil mb gs://{GCS_BUCKET_NAME}
!gsutil cp data/data.csv gs://{GCS_BUCKET_NAME}/tfx-template/data/data.csv
!tfx pipeline create \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT} \
--build-target-image={CUSTOM_TFX_IMAGE}
!tfx run create --pipeline-name={PIPELINE_NAME} --endpoint={ENDPOINT}
# Update the pipeline
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
# You can run the pipeline the same way.
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
print('https://' + ENDPOINT)
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
print("https://" + ENDPOINT)
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Merge multiple columns
Step2: Then we create a new DataFrame from the obtained RDD.
Step3: Split one column
|
<ASSISTANT_TASK:>
Python Code:
mtcars = spark.read.csv(path='../../data/mtcars.csv',
sep=',',
encoding='UTF-8',
comment=None,
header=True,
inferSchema=True)
mtcars.show(n=5)
# adjust first column name
colnames = mtcars.columns
colnames[0] = 'model'
mtcars = mtcars.rdd.toDF(colnames)
mtcars.show(5)
from pyspark.sql import Row
mtcars_rdd = mtcars.rdd.map(lambda x: Row(model=x[0], values=x[1:]))
mtcars_rdd.take(5)
mtcars_df = spark.createDataFrame(mtcars_rdd)
mtcars_df.show(5, truncate=False)
mtcars_rdd_2 = mtcars_df.rdd.map(lambda x: Row(model=x[0], x1=x[1][:5], x2=x[1][5:]))
# convert RDD back to DataFrame
mtcars_df_2 = spark.createDataFrame(mtcars_rdd_2)
mtcars_df_2.show(5, truncate=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Kernel
|
<ASSISTANT_TASK:>
Python Code:
import math
#given an array of Y values at consecutive integral x abscissas,
#return array of corresponding derivatives to make a natural cubic spline
def naturalSpline(ys):
vs = [0.0] * len(ys)
if (len(ys) < 2):
return vs
DECAY = math.sqrt(3)-2;
endi = len(ys)-1
# make convolutional spline
S = 0.0;E = 0.0
    for i in range(len(ys)):  # iterate over the argument ys, not the global Y
vs[i]+=S;vs[endi-i]+=E;
S=(S+3.0*ys[i])*DECAY;
E=(E-3.0*ys[endi-i])*DECAY;
#Natural Boundaries
S2 = 6.0*(ys[1]-ys[0]) - 4.0*vs[0] - 2.0*vs[1]
E2 = 6.0*(ys[endi-1]-ys[endi]) + 4.0*vs[endi] + 2.0*vs[endi-1]
# A = dE2/dE = -dS2/dS, B = dE2/dS = -dS2/dS
A = 4.0+2.0*DECAY
B = (4.0*DECAY+2.0)*(DECAY**(len(ys)-2))
DEN = A*A - B*B
S = (A*S2 + B*E2) / DEN
E = (-A*E2 - B*S2) / DEN
for i in range(len(ys)):
vs[i]+=S;vs[endi-i]+=E
S*=DECAY;E*=DECAY
return vs
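# Quick sanity check (added): with unit knot spacing, C2 continuity of a cubic Hermite
# spline requires v[i-1] + 4*v[i] + v[i+1] == 3*(y[i+1] - y[i-1]) at interior knots, and
# the natural boundary conditions are 2*v[0] + v[1] == 3*(y[1]-y[0]) and
# v[-2] + 2*v[-1] == 3*(y[-1]-y[-2]); all residuals below should be ~0.
ys_chk = [2.0, 5.0, 3.0, 8.0, 4.0]
vs_chk = naturalSpline(ys_chk)
residuals = [vs_chk[i-1] + 4.0*vs_chk[i] + vs_chk[i+1] - 3.0*(ys_chk[i+1]-ys_chk[i-1])
             for i in range(1, len(ys_chk)-1)]
residuals += [2.0*vs_chk[0] + vs_chk[1] - 3.0*(ys_chk[1]-ys_chk[0]),
              vs_chk[-2] + 2.0*vs_chk[-1] - 3.0*(ys_chk[-1]-ys_chk[-2])]
print(residuals)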
#
#Plot a different natural spline, along with its 1st and 2nd derivatives, each time you run this
#
%run plothelp.py
%matplotlib inline
import random
import numpy
Y = [random.random()*10.0+2 for _ in range(5)]
V = naturalSpline(Y)
xs = numpy.linspace(0,len(Y)-1, 1000)
plt.figure(0, figsize=(12.0,4.0))
plt.plot(xs,[hermite_interp(Y,V,x) for x in xs])
plt.plot(range(0,len(Y)),[Y[x] for x in range(0,len(Y))], "bo")
plt.figure(1, figsize=(12.0,4.0));plt.grid(True)
plt.plot(xs,[hermite_interp1(Y,V,x) for x in xs])
plt.plot(xs,[hermite_interp2(Y,V,x) for x in xs])
#
# Plot the kernel
#
DECAY = math.sqrt(3)-2;
vs = [3*(DECAY**x) for x in range(1,7)]
ys = [0]*len(vs) + [1] + [0]*len(vs)
vs = [-v for v in vs[::-1]] + [0.0] + vs
xs = numpy.linspace(0,len(ys)-1, 1000)
plt.figure(0, figsize=(12.0,4.0));plt.grid(True);plt.ylim([-0.2,1.1]);plt.xticks(range(-5,6))
plt.plot([x-6.0 for x in xs],[hermite_interp(ys,vs,x) for x in xs])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If the function takes more than one parameter, then each argument can be an array, and they are applied together one element at a time
Step2: Python Reduce Function
Step3: If there is only one value in a sequence then that element is returned; if the sequence is empty, an exception is raised.
Step4: MapReduce using MRJOB
Step5: MRJOB Word count function
Step6: Another way to implement this is to use SPARK on your local box
|
<ASSISTANT_TASK:>
Python Code:
def cube(x): return x*x*x
map(cube,range(1,11))
seq = range(8)
def add(x,y): return x+y
map(add, seq,seq)
result = map(add, seq,seq)
reduce(add, result) # adding each element of the result together
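# The reduce behaviour described above, shown directly: a one-element sequence returns
# that element, and an empty sequence raises an exception.
print(reduce(add, [7]))   # -> 7
try:
    reduce(add, [])
except TypeError as err:
    print(err)            # reduce() of empty sequence with no initial value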
import re
import pandas as pd
import numpy as np
aliceFile = open('data/canterbury/alice29.txt','r')
map1=[]
WORD_RE = re.compile(r"[\w']+")
# Create the map of words with prelminary counts
for line in aliceFile:
for w in WORD_RE.findall(line):
map1.append([w,1])
#sort the map
map2 = sorted(map1)
#Separate the map into groups by the key values
df = pd.DataFrame(map2)
uniquewords = df[0].unique()
DataFrameDict = {elem : pd.DataFrame for elem in uniquewords}
for key in DataFrameDict.keys():
DataFrameDict[key] = df[:][df[0] == key]
def wordcount(x,y):
x[1] = x[1] + y[1]
return x
#Add up the counts using reduce
for uw in uniquewords:
uarray = np.array(DataFrameDict[uw])
print reduce(wordcount,uarray)
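# For comparison, the same word counts can be obtained directly with collections.Counter:
from collections import Counter
with open('data/canterbury/alice29.txt') as f:
    counts = Counter(w for line in f for w in WORD_RE.findall(line))
print(counts.most_common(10))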
! pip install mrjob
# %load code/MRWordFrequencyCount.py
# Do not run this cell it is just displaying the content of the code
from mrjob.job import MRJob
class MRWordFrequencyCount(MRJob):
def mapper(self, _ , line):
yield "chars", len(line)
yield "words", len(line.split())
yield "lines", 1
def reducer(self, key, values):
yield key, sum(values)
if __name__ == '__main__':
MRWordFrequencyCount.run()
%run -G code/MRWordFrequencyCount.py ./.mrjob.conf data/canterbury/alice29.txt
# %load code\MRWordFreqCount.py
# Do not run this block of code just a display of the stored program
from mrjob.job import MRJob
import re
WORD_RE = re.compile(r"[\w']+")
class MRWordFreqCount(MRJob):
def mapper(self, _, line):
for word in WORD_RE.findall(line):
yield word.lower(), 1
def combiner(self, word, counts):
yield word, sum(counts)
def reducer(self, word, counts):
yield word, sum(counts)
if __name__ == '__main__':
MRWordFreqCount.run()
%run code/MRWordFreqCount.py data/canterbury/alice29.txt
!pip install pyspark
# %load code\pysparkcount.py
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#https://github.com/apache/spark/blob/master/examples/src/main/python/wordcount.py
from __future__ import print_function
import sys
from operator import add
from pyspark import SparkContext
if __name__ == "__main__":
if len(sys.argv) != 2:
print("Usage: wordcount <file>", file=sys.stderr)
exit(-1)
sc = SparkContext(appName="PythonWordCount")
lines = sc.textFile(sys.argv[1], 1)
counts = lines.flatMap(lambda x: x.split(' ')) \
.map(lambda x: (x, 1)) \
.reduceByKey(add)
output = counts.collect()
for (word, count) in output:
print("%s: %i" % (word, count))
sc.stop()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Figure 4b
Step2: Figure 4c
Step3: Figure 4bc
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd  # needed for read_csv / ExcelWriter below (import not shown in this excerpt)
# read in exported table for genus
fig4a_genus = pd.read_csv('../../../data/07-entropy-and-covariation/genus-level-distribution.csv', header=0)
# read in exported table for otu
fig4a_otu = pd.read_csv('../../../data/07-entropy-and-covariation/otu-level-distribution-400.csv', header=0)
# read in exported table
fig4b = pd.read_csv('../../../data/07-entropy-and-covariation/entropy_by_phylogeny_c20.csv', header=0)
# read in exported table
fig4c = pd.read_csv('../../../data/07-entropy-and-covariation/entropy_by_taxonomy_c20.csv', header=0)
# read in exported table
fig4bc = pd.read_csv('../../../data/07-entropy-and-covariation/entropy_per_tag_sequence_s10.csv', header=0)
fig4 = pd.ExcelWriter('Figure4_data.xlsx')
fig4a_genus.to_excel(fig4,'Fig-4a_genus')
fig4a_otu.to_excel(fig4,'Fig-4a_sequence')
fig4b.to_excel(fig4,'Fig-4b')
fig4c.to_excel(fig4,'Fig-4c')
fig4bc.to_excel(fig4,'Fig-4bc_violin')
fig4.save()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Random number generation
Step2: The tf.random.Generator class
Step3: There are several ways to create a generator object. The simplest is Generator.from_seed, shown above, which creates a generator from a seed. A seed is any non-negative integer. from_seed also takes an optional argument alg, which is the RNG algorithm that this generator will use.
Step4: See the "Algorithms" section below for more details.
Step5: There are other ways to create generators, but they are not covered in this guide.
Step6: Creating independent random-number streams
Step7: Like RNG methods such as normal, split changes the state of the generator it is called on (g in the example above). In addition to being independent of each other, the new generators (new_gs) are also guaranteed to be independent of the old one (g).
Step8: Note
Step9: The user needs to make sure that the generator object is still alive (not garbage-collected) when the function is called.
Step10: Passing a generator as an argument to tf.function
Step11: This retracing behavior is the same as for tf.Variable.
Step12: Interaction with distribution strategies
Step13: With this usage, performance problems may arise because the generator's device is different from the replicas.
Step14: Note
Step15: Passing a tf.random.Generator as an argument to Strategy.run is no longer recommended, because Strategy.run generally expects tensor arguments, not generators.
Step16: Saving and restoring can also be done within a distribution strategy.
Step17: Before saving, you should make sure the replicas have not diverged in their RNG call history (e.g. one replica making one RNG call while another makes two). Otherwise their internal RNG states will diverge, and tf.train.Checkpoint, which only saves the first replica's state, will not properly restore all the replicas.
Step18: g1 and cp1 are different objects from g2 and cp2, but they are linked via the common checkpoint file filename and object name my_generator. Overlapping replicas between the strategies (cpu:0 and cpu:1 above) will have their RNG streams properly restored, as in the earlier examples.
Step19: Loading a SavedModel that contains a tf.random.Generator into a distribution strategy is not recommended, because the replicas will all generate the same random-number stream (this is because the replica ID is frozen in the SavedModel's graph).
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
# Creates some virtual devices (cpu:0, cpu:1, etc.) for using distribution strategy
physical_devices = tf.config.list_physical_devices("CPU")
tf.config.experimental.set_virtual_device_configuration(
physical_devices[0], [
tf.config.experimental.VirtualDeviceConfiguration(),
tf.config.experimental.VirtualDeviceConfiguration(),
tf.config.experimental.VirtualDeviceConfiguration()
])
g1 = tf.random.Generator.from_seed(1)
print(g1.normal(shape=[2, 3]))
g2 = tf.random.get_global_generator()
print(g2.normal(shape=[2, 3]))
g1 = tf.random.Generator.from_seed(1, alg='philox')
print(g1.normal(shape=[2, 3]))
g = tf.random.Generator.from_non_deterministic_state()
print(g.normal(shape=[2, 3]))
g = tf.random.Generator.from_seed(1)
print(g.normal([]))
print(g.normal([]))
g.reset_from_seed(1)
print(g.normal([]))
g = tf.random.Generator.from_seed(1)
print(g.normal([]))
new_gs = g.split(3)
for new_g in new_gs:
print(new_g.normal([]))
print(g.normal([]))
with tf.device("cpu"): # change "cpu" to the device you want
g = tf.random.get_global_generator().split(1)[0]
print(g.normal([])) # use of g won't cause cross-device copy, unlike the global generator
g = tf.random.Generator.from_seed(1)
@tf.function
def foo():
return g.normal([])
print(foo())
g = None
@tf.function
def foo():
global g
if g is None:
g = tf.random.Generator.from_seed(1)
return g.normal([])
print(foo())
print(foo())
num_traces = 0
@tf.function
def foo(g):
global num_traces
num_traces += 1
return g.normal([])
foo(tf.random.Generator.from_seed(1))
foo(tf.random.Generator.from_seed(2))
print(num_traces)
num_traces = 0
@tf.function
def foo(v):
global num_traces
num_traces += 1
return v.read_value()
foo(tf.Variable(1))
foo(tf.Variable(2))
print(num_traces)
g = tf.random.Generator.from_seed(1)
strat = tf.distribute.MirroredStrategy(devices=["cpu:0", "cpu:1"])
with strat.scope():
def f():
print(g.normal([]))
results = strat.run(f)
strat = tf.distribute.MirroredStrategy(devices=["cpu:0", "cpu:1"])
with strat.scope():
g = tf.random.Generator.from_seed(1)
print(strat.run(lambda: g.normal([])))
print(strat.run(lambda: g.normal([])))
strat = tf.distribute.MirroredStrategy(devices=["cpu:0", "cpu:1"])
with strat.scope():
def f():
g = tf.random.Generator.from_seed(1)
a = g.normal([])
b = g.normal([])
return tf.stack([a, b])
print(strat.run(f))
print(strat.run(f))
filename = "./checkpoint"
g = tf.random.Generator.from_seed(1)
cp = tf.train.Checkpoint(generator=g)
print(g.normal([]))
cp.write(filename)
print("RNG stream from saving point:")
print(g.normal([]))
print(g.normal([]))
cp.restore(filename)
print("RNG stream from restoring point:")
print(g.normal([]))
print(g.normal([]))
filename = "./checkpoint"
strat = tf.distribute.MirroredStrategy(devices=["cpu:0", "cpu:1"])
with strat.scope():
g = tf.random.Generator.from_seed(1)
cp = tf.train.Checkpoint(my_generator=g)
print(strat.run(lambda: g.normal([])))
with strat.scope():
cp.write(filename)
print("RNG stream from saving point:")
print(strat.run(lambda: g.normal([])))
print(strat.run(lambda: g.normal([])))
with strat.scope():
cp.restore(filename)
print("RNG stream from restoring point:")
print(strat.run(lambda: g.normal([])))
print(strat.run(lambda: g.normal([])))
filename = "./checkpoint"
strat1 = tf.distribute.MirroredStrategy(devices=["cpu:0", "cpu:1"])
with strat1.scope():
g1 = tf.random.Generator.from_seed(1)
cp1 = tf.train.Checkpoint(my_generator=g1)
print(strat1.run(lambda: g1.normal([])))
with strat1.scope():
cp1.write(filename)
print("RNG stream from saving point:")
print(strat1.run(lambda: g1.normal([])))
print(strat1.run(lambda: g1.normal([])))
strat2 = tf.distribute.MirroredStrategy(devices=["cpu:0", "cpu:1", "cpu:2"])
with strat2.scope():
g2 = tf.random.Generator.from_seed(1)
cp2 = tf.train.Checkpoint(my_generator=g2)
cp2.restore(filename)
print("RNG stream from restoring point:")
print(strat2.run(lambda: g2.normal([])))
print(strat2.run(lambda: g2.normal([])))
filename = "./saved_model"
class MyModule(tf.Module):
def __init__(self):
super(MyModule, self).__init__()
self.g = tf.random.Generator.from_seed(0)
@tf.function
def __call__(self):
return self.g.normal([])
@tf.function
def state(self):
return self.g.state
strat = tf.distribute.MirroredStrategy(devices=["cpu:0", "cpu:1"])
with strat.scope():
m = MyModule()
print(strat.run(m))
print("state:", m.state())
with strat.scope():
tf.saved_model.save(m, filename)
print("RNG stream from saving point:")
print(strat.run(m))
print("state:", m.state())
print(strat.run(m))
print("state:", m.state())
imported = tf.saved_model.load(filename)
print("RNG stream from loading point:")
print("state:", imported.state())
print(imported())
print("state:", imported.state())
print(imported())
print("state:", imported.state())
print(tf.random.stateless_normal(shape=[2, 3], seed=[1, 2]))
print(tf.random.stateless_normal(shape=[2, 3], seed=[1, 2]))
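# Stateless RNGs are pure functions of their seed: the two identical calls above return
# exactly the same values, unlike the stateful Generator used earlier in this notebook.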
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Exploratory Data Analysis <a class="anchor" id="1"></a>
Step2: Next, we'll choose 200 observations to be part of our train set, and 1500 to be part of our test set.
Step4: 2. Prepare 6 different candidate models <a class="anchor" id="2"></a>
Step5: 2.2 Training <a class="anchor" id="2.2"></a>
Step6: 2.3 Estimate leave-one-out cross-validated score for each training point <a class="anchor" id="2.3"></a>
Step7: 3. Bayesian Hierarchical Stacking <a class="anchor" id="3"></a>
Step9: 3.2 Define stacking model <a class="anchor" id="3.2"></a>
Step10: We can now extract the weights with which to weight the different models from the posterior, and then visualise how they vary across the training set.
Step11: 4. Evaluate on test set <a class="anchor" id="4"></a>
Step12: 4.2 Compare methods <a class="anchor" id="4.2"></a>
|
<ASSISTANT_TASK:>
Python Code:
!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro
import os
from IPython.display import set_matplotlib_formats
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy.interpolate import BSpline
import seaborn as sns
import jax
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist
plt.style.use("seaborn")
if "NUMPYRO_SPHINXBUILD" in os.environ:
set_matplotlib_formats("svg")
numpyro.set_host_device_count(4)
assert numpyro.__version__.startswith("0.9.2")
%matplotlib inline
wells = pd.read_csv(
"http://stat.columbia.edu/~gelman/arm/examples/arsenic/wells.dat", sep=" "
)
wells.head()
fig, ax = plt.subplots(2, 2, figsize=(12, 6))
fig.suptitle("Target variable plotted against various predictors")
sns.scatterplot(data=wells, x="arsenic", y="switch", ax=ax[0][0])
sns.scatterplot(data=wells, x="dist", y="switch", ax=ax[0][1])
sns.barplot(
data=wells.groupby("assoc")["switch"].mean().reset_index(),
x="assoc",
y="switch",
ax=ax[1][0],
)
ax[1][0].set_ylabel("Proportion switch")
sns.barplot(
data=wells.groupby("educ")["switch"].mean().reset_index(),
x="educ",
y="switch",
ax=ax[1][1],
)
ax[1][1].set_ylabel("Proportion switch");
np.random.seed(1)
train_id = wells.sample(n=200).index
test_id = wells.loc[~wells.index.isin(train_id)].sample(n=1500).index
y_train = wells.loc[train_id, "switch"].to_numpy()
y_test = wells.loc[test_id, "switch"].to_numpy()
wells["edu0"] = wells["educ"].isin(np.arange(0, 1)).astype(int)
wells["edu1"] = wells["educ"].isin(np.arange(1, 6)).astype(int)
wells["edu2"] = wells["educ"].isin(np.arange(6, 12)).astype(int)
wells["edu3"] = wells["educ"].isin(np.arange(12, 18)).astype(int)
wells["logarsenic"] = np.log(wells["arsenic"])
wells["assoc_half"] = wells["assoc"] / 2.0
wells["as_square"] = wells["logarsenic"] ** 2
wells["as_third"] = wells["logarsenic"] ** 3
wells["dist100"] = wells["dist"] / 100.0
wells["intercept"] = 1
def bs(x, knots, degree):
    """Generate the B-spline basis matrix for a polynomial spline.
Parameters
----------
x
predictor variable.
knots
locations of internal breakpoints (not padded).
degree
degree of the piecewise polynomial.
Returns
-------
pd.DataFrame
Spline basis matrix.
Notes
-----
    This mirrors ``bs`` from splines package in R.
    """
padded_knots = np.hstack(
[[x.min()] * (degree + 1), knots, [x.max()] * (degree + 1)]
)
return pd.DataFrame(
BSpline(padded_knots, np.eye(len(padded_knots) - degree - 1), degree)(x)[:, 1:],
index=x.index,
)
knots = np.quantile(wells.loc[train_id, "logarsenic"], np.linspace(0.1, 0.9, num=10))
spline_arsenic = bs(wells["logarsenic"], knots=knots, degree=3)
knots = np.quantile(wells.loc[train_id, "dist100"], np.linspace(0.1, 0.9, num=10))
spline_dist = bs(wells["dist100"], knots=knots, degree=3)
features_0 = ["intercept", "dist100", "arsenic", "assoc", "edu1", "edu2", "edu3"]
features_1 = ["intercept", "dist100", "logarsenic", "assoc", "edu1", "edu2", "edu3"]
features_2 = [
"intercept",
"dist100",
"arsenic",
"as_third",
"as_square",
"assoc",
"edu1",
"edu2",
"edu3",
]
features_3 = ["intercept", "dist100", "assoc", "edu1", "edu2", "edu3"]
features_4 = ["intercept", "logarsenic", "assoc", "edu1", "edu2", "edu3"]
features_5 = ["intercept", "dist100", "logarsenic", "assoc", "educ"]
X0 = wells.loc[train_id, features_0].to_numpy()
X1 = wells.loc[train_id, features_1].to_numpy()
X2 = wells.loc[train_id, features_2].to_numpy()
X3 = (
pd.concat([wells.loc[:, features_3], spline_arsenic], axis=1)
.loc[train_id]
.to_numpy()
)
X4 = pd.concat([wells.loc[:, features_4], spline_dist], axis=1).loc[train_id].to_numpy()
X5 = wells.loc[train_id, features_5].to_numpy()
X0_test = wells.loc[test_id, features_0].to_numpy()
X1_test = wells.loc[test_id, features_1].to_numpy()
X2_test = wells.loc[test_id, features_2].to_numpy()
X3_test = (
pd.concat([wells.loc[:, features_3], spline_arsenic], axis=1)
.loc[test_id]
.to_numpy()
)
X4_test = (
pd.concat([wells.loc[:, features_4], spline_dist], axis=1).loc[test_id].to_numpy()
)
X5_test = wells.loc[test_id, features_5].to_numpy()
train_x_list = [X0, X1, X2, X3, X4, X5]
test_x_list = [X0_test, X1_test, X2_test, X3_test, X4_test, X5_test]
K = len(train_x_list)
def logistic(x, y=None):
beta = numpyro.sample("beta", dist.Normal(0, 3).expand([x.shape[1]]))
logits = numpyro.deterministic("logits", jnp.matmul(x, beta))
numpyro.sample(
"obs",
dist.Bernoulli(logits=logits),
obs=y,
)
fit_list = []
for k in range(K):
sampler = numpyro.infer.NUTS(logistic)
mcmc = numpyro.infer.MCMC(
sampler, num_chains=4, num_samples=1000, num_warmup=1000, progress_bar=False
)
rng_key = jax.random.fold_in(jax.random.PRNGKey(13), k)
mcmc.run(rng_key, x=train_x_list[k], y=y_train)
fit_list.append(mcmc)
def find_point_wise_loo_score(fit):
return az.loo(az.from_numpyro(fit), pointwise=True, scale="log").loo_i.values
lpd_point = np.vstack([find_point_wise_loo_score(fit) for fit in fit_list]).T
exp_lpd_point = np.exp(lpd_point)
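# lpd_point holds one LOO log-score per training point and per candidate model,
# i.e. its shape is (number of training points, number of candidate models):
print(lpd_point.shape)  # (200, 6)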
dist100_median = wells.loc[wells.index[train_id], "dist100"].median()
logarsenic_median = wells.loc[wells.index[train_id], "logarsenic"].median()
wells["dist100_l"] = (wells["dist100"] - dist100_median).clip(upper=0)
wells["dist100_r"] = (wells["dist100"] - dist100_median).clip(lower=0)
wells["logarsenic_l"] = (wells["logarsenic"] - logarsenic_median).clip(upper=0)
wells["logarsenic_r"] = (wells["logarsenic"] - logarsenic_median).clip(lower=0)
stacking_features = [
"edu0",
"edu1",
"edu2",
"edu3",
"assoc_half",
"dist100_l",
"dist100_r",
"logarsenic_l",
"logarsenic_r",
]
X_stacking_train = wells.loc[train_id, stacking_features].to_numpy()
X_stacking_test = wells.loc[test_id, stacking_features].to_numpy()
def stacking(
X,
d_discrete,
X_test,
exp_lpd_point,
tau_mu,
tau_sigma,
*,
test,
):
    """Get weights with which to stack candidate models' predictions.
Parameters
----------
X
Training stacking matrix: features on which stacking weights should depend, for the
training set.
d_discrete
Number of discrete features in `X` and `X_test`. The first `d_discrete` features
from these matrices should be the discrete ones, with the continuous ones coming
after them.
X_test
Test stacking matrix: features on which stacking weights should depend, for the
testing set.
exp_lpd_point
LOO score evaluated at each point in the training set, for each candidate model.
tau_mu
Hyperprior for mean of `beta`, for discrete features.
tau_sigma
Hyperprior for standard deviation of `beta`, for continuous features.
test
Whether to calculate stacking weights for test set.
Notes
-----
    Naming of variables mirrors what's used in the original paper.
    """
N = X.shape[0]
d = X.shape[1]
N_test = X_test.shape[0]
K = lpd_point.shape[1] # number of candidate models
with numpyro.plate("Candidate models", K - 1, dim=-2):
# mean effect of discrete features on stacking weights
mu = numpyro.sample("mu", dist.Normal(0, tau_mu))
# standard deviation effect of discrete features on stacking weights
sigma = numpyro.sample("sigma", dist.HalfNormal(scale=tau_sigma))
with numpyro.plate("Discrete features", d_discrete, dim=-1):
# effect of discrete features on stacking weights
tau = numpyro.sample("tau", dist.Normal(0, 1))
with numpyro.plate("Continuous features", d - d_discrete, dim=-1):
# effect of continuous features on stacking weights
beta_con = numpyro.sample("beta_con", dist.Normal(0, 1))
# effects of features on stacking weights
beta = numpyro.deterministic(
"beta", jnp.hstack([(sigma.squeeze() * tau.T + mu.squeeze()).T, beta_con])
)
assert beta.shape == (K - 1, d)
# stacking weights (in unconstrained space)
f = jnp.hstack([X @ beta.T, jnp.zeros((N, 1))])
assert f.shape == (N, K)
# log probability of LOO training scores weighted by stacking weights.
log_w = jax.nn.log_softmax(f, axis=1)
# stacking weights (constrained to sum to 1)
numpyro.deterministic("w", jnp.exp(log_w))
logp = jax.nn.logsumexp(lpd_point + log_w, axis=1)
numpyro.factor("logp", jnp.sum(logp))
if test:
# test set stacking weights (in unconstrained space)
f_test = jnp.hstack([X_test @ beta.T, jnp.zeros((N_test, 1))])
# test set stacking weights (constrained to sum to 1)
w_test = numpyro.deterministic("w_test", jax.nn.softmax(f_test, axis=1))
sampler = numpyro.infer.NUTS(stacking)
mcmc = numpyro.infer.MCMC(
sampler, num_chains=4, num_samples=1000, num_warmup=1000, progress_bar=False
)
mcmc.run(
jax.random.PRNGKey(17),
X=X_stacking_train,
d_discrete=4,
X_test=X_stacking_test,
exp_lpd_point=exp_lpd_point,
tau_mu=1.0,
tau_sigma=0.5,
test=True,
)
trace = mcmc.get_samples()
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(16, 6), sharey=True)
training_stacking_weights = trace["w"].mean(axis=0)
sns.scatterplot(data=pd.DataFrame(training_stacking_weights), ax=ax[0])
fixed_weights = (
az.compare({idx: fit for idx, fit in enumerate(fit_list)}, method="stacking")
.sort_index()["weight"]
.to_numpy()
)
fixed_weights_df = pd.DataFrame(
np.repeat(
fixed_weights[jnp.newaxis, :],
len(X_stacking_train),
axis=0,
)
)
sns.scatterplot(data=fixed_weights_df, ax=ax[1])
ax[0].set_title("Training weights from Bayesian Hierarchical stacking")
ax[1].set_title("Fixed weights stacking")
ax[0].set_xlabel("Index")
ax[1].set_xlabel("Index")
fig.suptitle(
"Bayesian Hierarchical Stacking weights can vary according to the input",
fontsize=18,
)
fig.tight_layout();
# for each candidate model, extract the posterior predictive logits
train_preds = []
for k in range(K):
predictive = numpyro.infer.Predictive(logistic, fit_list[k].get_samples())
rng_key = jax.random.fold_in(jax.random.PRNGKey(19), k)
train_pred = predictive(rng_key, x=train_x_list[k])["logits"]
train_preds.append(train_pred.mean(axis=0))
# reshape, so we have (N, K)
train_preds = np.vstack(train_preds).T
# same as previous cell, but for test set
test_preds = []
for k in range(K):
predictive = numpyro.infer.Predictive(logistic, fit_list[k].get_samples())
rng_key = jax.random.fold_in(jax.random.PRNGKey(20), k)
test_pred = predictive(rng_key, x=test_x_list[k])["logits"]
test_preds.append(test_pred.mean(axis=0))
test_preds = np.vstack(test_preds).T
# get the stacking weights for the test set
test_stacking_weights = trace["w_test"].mean(axis=0)
# get predictions using the stacking weights
bhs_predictions = (test_stacking_weights * test_preds).sum(axis=1)
# get predictions using only the model with the best LOO score
model_selection_preds = test_preds[:, lpd_point.sum(axis=0).argmax()]
# get predictions using fixed stacking weights, dependent on the LOO score
fixed_weights_preds = (fixed_weights * test_preds).sum(axis=1)
fig, ax = plt.subplots(figsize=(12, 6))
neg_log_pred_densities = np.vstack(
[
-dist.Bernoulli(logits=bhs_predictions).log_prob(y_test),
-dist.Bernoulli(logits=model_selection_preds).log_prob(y_test),
-dist.Bernoulli(logits=fixed_weights_preds).log_prob(y_test),
]
).T
neg_log_pred_density = pd.DataFrame(
neg_log_pred_densities,
columns=[
"Bayesian Hierarchical Stacking",
"Model selection",
"Fixed stacking weights",
],
)
sns.barplot(
data=neg_log_pred_density.reindex(
columns=neg_log_pred_density.mean(axis=0).sort_values(ascending=False).index
),
orient="h",
ax=ax,
)
ax.set_title(
"Bayesian Hierarchical Stacking performs best here", fontdict={"fontsize": 18}
)
ax.set_xlabel("Negative mean log predictive density (lower is better)");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Front end (JavaScript)
Step2: Test
Step3: Making the widget stateful
Step4: Dynamic updates
Step5: An example including bidirectional communication
Step6: Test of the spinner widget
Step7: Wiring the spinner with another widget
|
<ASSISTANT_TASK:>
Python Code:
import ipywidgets as widgets
from traitlets import Unicode
class HelloWidget(widgets.DOMWidget):
_view_name = Unicode('HelloView').tag(sync=True)
_view_module = Unicode('hello').tag(sync=True)
%%javascript
require.undef('hello');
define('hello', ["@jupyter-widgets/base"], function(widgets) {
var HelloView = widgets.DOMWidgetView.extend({
// Render the view.
render: function() {
this.$el.text('Hello World!');
},
});
return {
HelloView: HelloView
};
});
HelloWidget()
class HelloWidget(widgets.DOMWidget):
_view_name = Unicode('HelloView').tag(sync=True)
_view_module = Unicode('hello').tag(sync=True)
value = Unicode('Hello World!').tag(sync=True)
%%javascript
require.undef('hello');
define('hello', ["@jupyter-widgets/base"], function(widgets) {
var HelloView = widgets.DOMWidgetView.extend({
render: function() {
this.$el.text(this.model.get('value'));
},
});
return {
HelloView : HelloView
};
});
%%javascript
require.undef('hello');
define('hello', ["@jupyter-widgets/base"], function(widgets) {
var HelloView = widgets.DOMWidgetView.extend({
render: function() {
this.value_changed();
this.model.on('change:value', this.value_changed, this);
},
value_changed: function() {
this.$el.text(this.model.get('value'));
},
});
return {
HelloView : HelloView
};
});
w = HelloWidget()
w
w.value = 'test'
from traitlets import CInt
class SpinnerWidget(widgets.DOMWidget):
_view_name = Unicode('SpinnerView').tag(sync=True)
_view_module = Unicode('spinner').tag(sync=True)
value = CInt().tag(sync=True)
%%javascript
requirejs.undef('spinner');
define('spinner', ["@jupyter-widgets/base"], function(widgets) {
var SpinnerView = widgets.DOMWidgetView.extend({
render: function() {
var that = this;
this.$input = $('<input />');
this.$el.append(this.$input);
this.$spinner = this.$input.spinner({
change: function( event, ui ) {
that.handle_spin(that.$spinner.spinner('value'));
},
spin: function( event, ui ) {
//ui.value is the new value of the spinner
that.handle_spin(ui.value);
}
});
this.value_changed();
this.model.on('change:value', this.value_changed, this);
},
value_changed: function() {
this.$spinner.spinner('value', this.model.get('value'));
},
handle_spin: function(value) {
this.model.set('value', value);
this.touch();
},
});
return {
SpinnerView: SpinnerView
};
});
w = SpinnerWidget(value=5)
w
w.value = 7
from IPython.display import display
w1 = SpinnerWidget(value=0)
w2 = widgets.IntSlider()
display(w1,w2)
from traitlets import link
mylink = link((w1, 'value'), (w2, 'value'))
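# Added illustration (not in the original notebook): traitlets.link is bidirectional,
# so updating either widget's value propagates to the other.
w1.value = 42  # the IntSlider w2 should now also read 42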
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Env setup
Step2: Object detection imports
Step3: Model preparation
Step4: Download Model
Step5: Load a (frozen) Tensorflow model into memory.
Step6: Loading label map
Step7: Helper code
Step8: Detection
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
# This is needed to display the images.
%matplotlib inline
# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
from utils import label_map_util
from utils import visualization_utils as vis_util
# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_11_06_2017'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')
NUM_CLASSES = 90
opener = urllib.request.URLopener()
opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar_file = tarfile.open(MODEL_FILE)
for file in tar_file.getmembers():
file_name = os.path.basename(file.name)
if 'frozen_inference_graph.pb' in file_name:
tar_file.extract(file, os.getcwd())
detection_graph = tf.Graph()
with detection_graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
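# Added note (hedged): category_index maps numeric class ids to display names,
# e.g. category_index[1] should look like {'id': 1, 'name': 'person'} for the COCO label map.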
def load_image_into_numpy_array(image):
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(im_height, im_width, 3)).astype(np.uint8)
# For the sake of simplicity we will use only 2 images:
# image1.jpg
# image2.jpg
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 3) ]
# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)
with detection_graph.as_default():
with tf.Session(graph=detection_graph) as sess:
# Definite input and output Tensors for detection_graph
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
# Each box represents a part of the image where a particular object was detected.
detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
# Each score represents the level of confidence for each of the objects.
# Score is shown on the result image, together with the class label.
detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')
num_detections = detection_graph.get_tensor_by_name('num_detections:0')
for image_path in TEST_IMAGE_PATHS:
image = Image.open(image_path)
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
image_np = load_image_into_numpy_array(image)
# Expand dimensions since the model expects images to have shape: [1, None, None, 3]
image_np_expanded = np.expand_dims(image_np, axis=0)
# Actual detection.
(boxes, scores, classes, num) = sess.run(
[detection_boxes, detection_scores, detection_classes, num_detections],
feed_dict={image_tensor: image_np_expanded})
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
np.squeeze(boxes),
np.squeeze(classes).astype(np.int32),
np.squeeze(scores),
category_index,
use_normalized_coordinates=True,
line_thickness=8)
plt.figure(figsize=IMAGE_SIZE)
plt.imshow(image_np)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now I need to store the documents as a list of (word list, category) pairs
Step2: Getting the list of all words to store the most frequently occurring ones
Step3: Making a frequency distribution of the words
Step4: We will train only on the first 5000 top words in the list
Step5: Finding these feature words in documents; writing our own function makes this easier!
Step6: What the cell below does: beforehand we had only the words and their category, but now we have the feature set of the same document (a boolean for each feature word indicating whether it is one of the most frequently used words) together with the category.
Step7: Training the classifier
Step8: We won't be telling the machine the category, i.e. whether the document is a positive one or a negative one. We ask it to tell that to us. Then we compare it to the known category that we have and calculate how accurate it is.
|
<ASSISTANT_TASK:>
Python Code:
# Imports assumed from earlier in the original notebook (added so this code runs on its own)
import random
import pickle
import nltk
from nltk.corpus import movie_reviews, stopwords
stop_words = set(stopwords.words("english"))  # assumption: this set was built earlier in the original notebook
movie_reviews.categories()
documents = [(list(word for word in movie_reviews.words(fileid) if word not in stop_words), category)
for category in movie_reviews.categories()
for fileid in movie_reviews.fileids(category)
]
random.shuffle(documents)
all_words = []
for w in movie_reviews.words():
all_words.append(w.lower())
all_words = nltk.FreqDist(all_words)
all_words.most_common(20)
all_words["hate"] ## counting the occurences of a single word
feature_words = list(all_words.keys())[:5000]
def find_features(document):
words = set(document)
feature = {}
for w in feature_words:
feature[w] = (w in words)
return feature
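# Added illustration (not in the original notebook): each feature vector marks,
# for every one of the 5000 feature words, whether it occurs in the given document.
example_features = find_features(documents[0][0])
print(sum(example_features.values()), "of", len(example_features), "feature words appear in the first review")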
feature_sets = [(find_features(rev), category) for (rev, category) in documents]
feature_sets[:1]
training_set = feature_sets[:1900]
testing_set = feature_sets[1900:]
## TO-DO: To build own naive bais algorithm
# classifier = nltk.NaiveBayesClassifier.train(training_set)
## saving the classifier
# save_classifier = open("naive_bayes.pickle", "wb")
# pickle.dump(classifier, save_classifier)
# save_classifier.close()
## Now that the picke is saved we will use that.
## Using the pickle file now
pickle_classifier = open("naive_bayes.pickle", "rb")
classifier = pickle.load(pickle_classifier)
pickle_classifier.close()
## Testing it's accuracy
print("Naive bayes classifier accuracy percentage : ", (nltk.classify.accuracy(classifier, testing_set))*100)
classifier.show_most_informative_features(20)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Optimizing the Text Generation Model
Step2: Get the Dataset
Step3: 250 Songs
Step4: Create Sequences and Labels
Step5: Train a (Better) Text Generation Model
Step6: View the Training Graph
Step7: Generate better lyrics!
Step8: Varying the Possible Outputs
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
# Other imports for processing data
import string
import numpy as np
import pandas as pd
!wget --no-check-certificate \
https://drive.google.com/uc?id=1LiJFZd41ofrWoBtW-pMYsfz1w8Ny0Bj8 \
-O /tmp/songdata.csv
def create_lyrics_corpus(dataset, field):
# Remove all other punctuation
dataset[field] = dataset[field].str.replace('[{}]'.format(string.punctuation), '')
# Make it lowercase
dataset[field] = dataset[field].str.lower()
# Make it one long string to split by line
lyrics = dataset[field].str.cat()
corpus = lyrics.split('\n')
# Remove any trailing whitespace
for l in range(len(corpus)):
corpus[l] = corpus[l].rstrip()
# Remove any empty lines
corpus = [l for l in corpus if l != '']
return corpus
def tokenize_corpus(corpus, num_words=-1):
# Fit a Tokenizer on the corpus
if num_words > -1:
tokenizer = Tokenizer(num_words=num_words)
else:
tokenizer = Tokenizer()
tokenizer.fit_on_texts(corpus)
return tokenizer
# Read the dataset from csv - this time with 250 songs
dataset = pd.read_csv('/tmp/songdata.csv', dtype=str)[:250]
# Create the corpus using the 'text' column containing lyrics
corpus = create_lyrics_corpus(dataset, 'text')
# Tokenize the corpus
tokenizer = tokenize_corpus(corpus, num_words=2000)
total_words = tokenizer.num_words
# There should be a lot more words now
print(total_words)
sequences = []
for line in corpus:
token_list = tokenizer.texts_to_sequences([line])[0]
for i in range(1, len(token_list)):
n_gram_sequence = token_list[:i+1]
sequences.append(n_gram_sequence)
# Pad sequences for equal input length
max_sequence_len = max([len(seq) for seq in sequences])
sequences = np.array(pad_sequences(sequences, maxlen=max_sequence_len, padding='pre'))
# Split sequences between the "input" sequence and "output" predicted word
input_sequences, labels = sequences[:,:-1], sequences[:,-1]
# One-hot encode the labels
one_hot_labels = tf.keras.utils.to_categorical(labels, num_classes=total_words)
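# Added sanity check (not in the original notebook): inputs are the padded n-gram
# prefixes, labels are one-hot vectors over the (at most 2000-word) vocabulary.
print(input_sequences.shape, one_hot_labels.shape)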
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense, Bidirectional
model = Sequential()
model.add(Embedding(total_words, 64, input_length=max_sequence_len-1))
model.add(Bidirectional(LSTM(20)))
model.add(Dense(total_words, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(input_sequences, one_hot_labels, epochs=100, verbose=1)
import matplotlib.pyplot as plt
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.xlabel("Epochs")
plt.ylabel(string)
plt.show()
plot_graphs(history, 'accuracy')
seed_text = "im feeling chills"
next_words = 100
for _ in range(next_words):
token_list = tokenizer.texts_to_sequences([seed_text])[0]
token_list = pad_sequences([token_list], maxlen=max_sequence_len-1, padding='pre')
predicted = np.argmax(model.predict(token_list), axis=-1)
output_word = ""
for word, index in tokenizer.word_index.items():
if index == predicted:
output_word = word
break
seed_text += " " + output_word
print(seed_text)
# Test the method with just the first word after the seed text
seed_text = "im feeling chills"
next_words = 100
token_list = tokenizer.texts_to_sequences([seed_text])[0]
token_list = pad_sequences([token_list], maxlen=max_sequence_len-1, padding='pre')
predicted_probs = model.predict(token_list)[0]
predicted = np.random.choice([x for x in range(len(predicted_probs))],
p=predicted_probs)
# Running this cell multiple times should get you some variance in output
print(predicted)
# Use this process for the full output generation
seed_text = "im feeling chills"
next_words = 100
for _ in range(next_words):
token_list = tokenizer.texts_to_sequences([seed_text])[0]
token_list = pad_sequences([token_list], maxlen=max_sequence_len-1, padding='pre')
predicted_probs = model.predict(token_list)[0]
predicted = np.random.choice([x for x in range(len(predicted_probs))],
p=predicted_probs)
output_word = ""
for word, index in tokenizer.word_index.items():
if index == predicted:
output_word = word
break
seed_text += " " + output_word
print(seed_text)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Source
Step2: Crime rate, Walks and School score in each postal code
Step3: House sales price by zipcode
Step4: Calculating Average price of the property
Step5: Average Price per sq. ft
Step6: Calculating rank based on Crime rate
Step7: Impact of Crime over sales price
Step8: Calculating rank based on Walk score
Step9: Impact of Walkscore over sales price
Step10: Calculating rank based on School rating
Step11: Impact of School score over sales price
Step12: z1['Avg. price per sq. ft'] = z1['Avg. Price']/z1['finished \n(SqFt)']
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd # Importing necessary data package
import matplotlib.pyplot as plt # pyplot module
import numpy as np
Zillow = pd.ExcelFile("Properties_philly_Kraggle_v2.xlsx")
zz = Zillow.parse('Properties_philly_Kraggle_v2')
zz
print('Dimensions: ', zz.shape) # looking at the categories I can work with
print('Column labels: ', zz.columns) #Listing out the Column headings
print('Row labels: ', zz.index)
z = zz.dropna() #Dropped empty rows that does have any data
print('Dimensions: ', z.shape)
plt.scatter(z['Postal Code'], z[' Violent Crime Rate '])
plt.show()
plt.scatter(z['Postal Code'], z[' Avg Walk&Transit score '])
plt.show()
plt.scatter(z['Postal Code'], z[' School Score '])
plt.show()
plt.scatter(z['Postal Code'], z['Sale Price/bid price'])
plt.show()
z1 = pd.DataFrame(z, columns = ['Address',
'Zillow Address',
'Sale Date',
'Opening Bid',
'Sale Price/bid price',
'Book/Writ',
'OPA',
'Postal Code',
'Attorney',
'Ward',
'Seller',
'Buyer',
'Sheriff Cost',
'Advertising',
'Other',
'Record Deed',
'Water',
'PGW',
' Avg Walk&Transit score ',
' Violent Crime Rate ',
' School Score ',
'Zillow Estimate',
'Rent Estimate',
'taxAssessment',
'yearBuilt',
'finished \n(SqFt)',
' bathrooms ',
' bedrooms ',
'PropType',
'Average comps'])
z1['Avg. Price'] = z1[['Zillow Estimate', 'taxAssessment', 'Average comps']].mean(axis=1)
list(z1)
z1['Avg. Price'].head()
z1['Avg. price per sq. ft'] = z1['Avg. Price']/z1['finished \n(SqFt)']
z1['Avg. price per sq. ft'].head()
z1[' Violent Crime Rate '].median()
z1[' Violent Crime Rate '].min()
z1[' Violent Crime Rate '].max()
crimerank = []
for row in z1[' Violent Crime Rate ']:
if row<0.344:
crimerank.append(1)
elif row>=0.344 and row<0.688:
crimerank.append(2)
elif row>=0.688 and row<1.032:
crimerank.append(3)
elif row>=1.032 and row<1.376:
crimerank.append(4)
else:
crimerank.append(5)
z1['Crime Rank'] = crimerank
z1['Crime Rank'].head()
zcrime = z1.groupby(['Crime Rank'])['Avg. Price'].mean()
zcrime
plt.figure(figsize = (16,7)) # plotting our data
plt.plot(zcrime, color = 'red', marker= '*')
plt.suptitle('Average price by crime', fontsize=18)
plt.xlabel('Crime Rank', fontsize=12)
plt.ylabel('Average price of houses', fontsize=12)
plt.show()
Walkrank = []
for row in z1[' Avg Walk&Transit score ']:
if row>88:
Walkrank.append(1)
elif row>77 and row<=88:
Walkrank.append(2)
elif row>66 and row<=77:
Walkrank.append(3)
elif row>55 and row<=66:
Walkrank.append(4)
else:
Walkrank.append(5)
z1['Walk Rank'] = Walkrank
z1['Walk Rank'].head()
zwalk = z1.groupby(['Walk Rank'])['Avg. Price'].mean()
zwalk
plt.figure(figsize = (16,7)) # plotting our data
plt.plot(zwalk, color = 'blue', marker= '*')
plt.suptitle('Average price by walkrank', fontsize=18)
plt.xlabel('Walk Rank', fontsize=12)
plt.ylabel('Average price of houses', fontsize=12)
plt.show()
Schoolrank = []
for row in z1[' School Score ']:
if row>57.308:
Schoolrank.append(1)
elif row>43.816 and row<=57.308:
Schoolrank.append(2)
elif row>30.324 and row<=43.816:
Schoolrank.append(3)
elif row>16.832 and row<=30.324:
Schoolrank.append(4)
else:
Schoolrank.append(5)
z1['School Rank'] = Schoolrank
zschool = z1.groupby(['School Rank'])['Avg. Price'].mean()
zschool
plt.figure(figsize = (16,7)) # plotting our data
plt.plot(zschool, color = 'blue', marker= '*')
plt.suptitle('Average price by schoolrank', fontsize=18)
plt.xlabel('School Rank', fontsize=12)
plt.ylabel('Average price of houses', fontsize=12)
plt.show()
z1.head()
z1['Closing Cost'] = z1['Avg. Price']*.085
z1['Rehab Cost'] = z1['finished \n(SqFt)']*25
z1['Estimated Max Bid Price']= z1['Avg. Price']-z1['Rehab Cost']
z1.head()
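# Added sketch (hedged): average estimated max bid price by postal code, to see
# which areas leave the most room once rehab and closing costs are accounted for.
z1.groupby('Postal Code')['Estimated Max Bid Price'].mean().sort_values(ascending=False).head()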
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Combinatorial Generation
Step2: The following is an iterable of Gray codes we saw in the example above, namely all of them of length 3, pretty-printed
Step3: Moreover, we have implemented an operator which allows us to process an iterable of codes pairwise, possibly plugging-in custom behavior via lambda expressions. Previous table with the position of the changing-bit can be rebuilt as follows
Step4: Direct generation
Step5: and can be used to build the same example table; observe that the changing-bit positions sequence is shifted forward by one
Step6: Counting without cycles
Step7: Simply via bitmasking
Step8: and can be used directly with binary counting
Step9: Ranking Gray codes
Step10: where auxiliary functions are defined as follows
Step11: Gray codes for combinations
Step12: it can be used as follows, showing changing-bits positions too
Step13: Using homogeneous transposition relation
|
<ASSISTANT_TASK:>
Python Code:
%run ../python-libs/inpututils.py
%run ../python-libs/graycodes.py
%run ../python-libs/bits.py
python_code(graycode_unrank)
print('\n'.join(list(binary_reflected_graycodes(3, justified=True))))
from itertools import count
example_graycodes = binary_reflected_graycodes(length=3)
for c, (o, s, p, r) in zip(count(), binary_reflected_graycodes_reduce(example_graycodes)):
print('{:>5}\t{:>5}\t{}'.format(bin(c), bin(o), p))
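# Added check (hedged, assumes the generator yields ints or binary strings):
# consecutive binary-reflected Gray codes should differ in exactly one bit.
codes = [int(c, 2) if isinstance(c, str) else c
         for c in binary_reflected_graycodes(length=3)]
assert all(bin(a ^ b).count('1') == 1 for a, b in zip(codes, codes[1:]))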
python_code(graycodes_direct)
for i, (code, changingbit_position) in zip(count(), graycodes_direct(3)):
print('{:>5}\t{:>5}\t{}'.format(bin(i),bin(code), changingbit_position))
python_code(rightmost_zeroes_in_binary_counting, graycodes_by_transition_sequence)
list(rightmost_zeroes_in_binary_counting(3))
graycodes_with_positions = graycodes_by_transition_sequence(rightmost_zeroes_in_binary_counting(3))
for b, (code, p) in zip(count(), graycodes_with_positions):
print('{:>5}\t{:>5}\t{}'.format(bin(b), bin(code), p))
python_code(turn_on_last_zero)
for i in range(2**3-1):
I, z = turn_on_last_zero(i)
print(I, z)
python_code(graycode_rank)
python_code(is_on, set_all)
python_code(graycodes_combinations)
for c, i, j in graycodes_combinations(6,3):
print("{:>8}\t{}\t{}".format(bin(c), i, j))
python_code(graycodes_combinations_homogeneous)
for c, i, j in graycodes_combinations_homogeneous(8,4):
print('{}\t{}\t{:>8}'.format(i, j, bin(c),))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Round 1
Step2: Round 2
Step3: Round 3
Step4: K.O.
Step5: To sound knowledgeable
Step6: Quiz
Step7: Can we mock?
Step8: Robustness test #0
Step9: Robustness test #1
Step10: __subclasshook__
Step11: NB: this only tests member names, not their types...
Step12: The inspector's gadget
Step13: A good way to test arity?
Step14: import typeguard
Step15: Going a bit further
Step16: MyPy
Step17: Isn't Python dynamic?
Step18: The fatal blow
|
<ASSISTANT_TASK:>
Python Code:
id # id(obj: Any) -> int
int # int(obj: SupportsInt) -> int
list.append # list.append(self: List[T], obj: T) -> None
from typing import TypeVar # PEP 484
T = TypeVar('T')
def add(self: T, other: T) -> T: # PEP 3107
return self + other
from typing import List, Tuple
t0: int = add(1, 2)
t1: List[int] = add([1], [2])
t2: Tuple = add((1,), (2,))
from typing import List
l: List[int] = [1,2,3]
def indexer(i):
return l[i]
from typing import overload, Iterable
@overload
def indexer(i: int) -> int:
pass
@overload
def indexer(i: slice) -> Iterable[int]:
pass
def indexer(i):
return l[i]
def bar(x):
return str(x)
def foo(x):
return bar if int(x) else None
from typing import Optional, SupportsInt, Any, Callable
def bar(x: Any) -> str:
return str(x)
def foo(x: SupportsInt) -> Optional[Callable[[Any], str]]:
return bar if int(x) else None
from typing import Iterable, Tuple, List
x: Iterable[int] = range(3)
y: Iterable[int] = reversed(range(3))
l0: Iterable[Tuple[int, int]] = zip(x, y)
l1: Tuple[Iterable[int], Iterable[int]] = zip(*l0)
from random import randint
n = randint(1, 4)
t0: List[Tuple[int]] = [(1, )] * n
l2: Tuple[(int,) * n] = zip(*t0)
l: Any = eval("1 + 2")
%%file pyconfr2017/ko.py
a: int = 1
ko = __import__("pyconfr2017.ko")
def isiterable0(x): # nominal
return isinstance(x, (set, tuple, list, dict, str))
def isiterable1(x): # structurel
return hasattr(x, '__iter__')
def isiterable2(x): # duck type v0
try:
x.__iter__()
return True
except:
return False
def isiterable3(x): # duck type v1
try:
iter(x)
return True
except:
return False
from collections.abc import Iterable
def isiterable(x):
return isinstance(x, Iterable)
def check_iterable(l):
return isiterable0(l), isiterable1(l), isiterable2(l), isiterable3(l), isiterable(l)
l = [1, 2, 3, 5]
check_iterable(l)
class EmptySequence(object):
def __iter__(self): yield
def __len__(self): return 0
es = EmptySequence()
check_iterable(es)
class Infinity(object):
def __getitem__(self, _):
return 0
infnty = Infinity()
check_iterable(infnty)
class Hole(object):
def __iter__(self, _):
pass
h = Hole()
check_iterable(h)
import abc # PEP 3119
class Appendable(abc.ABC):
@classmethod
def __subclasshook__(cls, C):
return any('append' in B.__dict__ for B in C.mro())
class DevNull(object):
def append(self, value):
pass
def __len__(self):
return 0
issubclass(DevNull, Appendable)
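# Added illustration (not in the original notebook): the structural hook only checks
# for an `append` member somewhere in the MRO, so built-in list also registers as Appendable.
issubclass(list, Appendable)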
class TraitsFactory(object):
def __getitem__(self, method_names):
if not isinstance(method_names, tuple):
method_names = method_names,
class SlotCheck(abc.ABC):
@classmethod
def __subclasshook__(cls, C):
return all(any(method_name in B.__dict__ for B in C.mro())
for method_name in method_names)
return SlotCheck
Members = TraitsFactory()
issubclass(DevNull, Members['append', '__len__'])
issubclass(DevNull, Members['append', 'clear'])
class Twice(list):
def append(self, value, times):
for _ in range(times):
super(Twice, self).append(value)
tw = Twice()
tw.append("ore", 2)
len(tw)
isinstance(tw, Members['append', '__len__'])
tw.append("aison") # gentleman's contract again
import inspect
inspect.signature(Twice.append).parameters
inspect.signature(list.append)
from typing import Callable
isinstance(list.append, Callable)
from typing import List, TypeVar; T = TypeVar('T')
isinstance(list.append, Callable[[List[T], T], None])
from typeguard import typechecked
def aff(x: int) -> int:
return x * 2 + 1
aff(2)
aff("2")
taff = typechecked(aff)
taff(2)
taff("2")
taff(2.)
from numbers import Number # PEP 3141
@typechecked
def taff(x: Number) -> Number:
return x * 2 + 1
taff(1), taff(1.), taff(1j)
print("** without type checking **")
%timeit aff(2)
print("** with type checking **")
%timeit taff(2)
@typechecked
def index(x: Members["__getitem__"]) -> None:
x[1]
index([1,2])
@typechecked
def pouce(x: Members['__getitem__']) -> None:
return x[0]
pouce([0])
from typing import List, TypeVar; T = TypeVar('T')
@typechecked
def majeur(x: List[T]) -> T:
return x[2]
majeur([1,2,3])
majeur([1, "1", 1])
@typechecked
def step(x : List[T]) -> List[T]:
return x + [x[-2] + x[-1]]
step([1, 2])
from functools import reduce
reduce(lambda x, _: step(x), range(10), [0, 1])
from typing import Tuple
@typechecked
def step(x : Tuple) -> Tuple:
return x + (x[-2] + x[-1],)
step((1,2))
@typechecked
def step(x : Tuple) -> Tuple:
return x + (x[-2] + x[-1],)
step((1,2))
from mypy.api import run as mypy_runner
def mypy(*args):
print( mypy_runner(args)[0])
%%file pyconfr2017/mypy0.py
from typing import Tuple
def step(x : Tuple) -> Tuple:
return x + (x[-2] + x[-1],)
step([1, 2])
mypy("pyconfr2017/mypy0.py")
%%file pyconfr2017/mypy1.py
from typing import Tuple
def step(x : Tuple) -> Tuple:
return x + (x[-2] + x[-1],)
step((1, 2))
mypy("pyconfr2017/mypy1.py")
%%file pyconfr2017/mypy2.py
from typing import Tuple, Any
def step(x : Tuple[int, ...]) -> Tuple[int, ...]:
return x + (x[-2] + x[-1],)
step((1, 2))
mypy("pyconfr2017/mypy2.py")
float_mode = True
scalar = float if float_mode else int
@typechecked
def div(n: scalar):
return 2 / n
div(scalar(3))
%%file pyconfr2017/mypy3.py
float_mode = True
scalar_type = float if float_mode else int
def div(n: scalar_type):
return 2 / n
div(scalar_type(3))
mypy("pyconfr2017/mypy3.py")
import numpy
help(numpy.sum)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read in neighborhood shapefiles
Step3: Now plot the shapefiles
Step4: Read in issues and determine the region
Step5: Remove issues that do not have correct coordinates
|
<ASSISTANT_TASK:>
Python Code:
import fiona
from shapely.geometry import shape
import nhrc2
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
from collections import defaultdict
import numpy as np
from matplotlib.patches import Polygon
from shapely.geometry import Point
%matplotlib inline
#the project root directory:
nhrc2dir = ('/').join(nhrc2.__file__.split('/')[:-1])+'/'
c = fiona.open(nhrc2dir+'data/nh_neighborhoods/nh_neighborhoods.shp')
pol = c.next()
geom = shape(pol['geometry'])
c.crs
for i in c.items():
print(i[1])
len(c)
for i in c:
pol = i
geom = shape(pol['geometry'])
geom
geom
#Based on code from Kelly Jordhal:
#http://nbviewer.ipython.org/github/mqlaql/geospatial-data/blob/master/Geospatial-Data-with-Python.ipynb
def plot_polygon(ax, poly):
a = np.asarray(poly.exterior)
ax.add_patch(Polygon(a, facecolor='#46959E', alpha=0.3))
ax.plot(a[:, 0], a[:, 1], color='black')
def plot_multipolygon(ax, geom):
Can safely call with either Polygon or Multipolygon geometry
if geom.type == 'Polygon':
plot_polygon(ax, geom)
elif geom.type == 'MultiPolygon':
for poly in geom.geoms:
plot_polygon(ax, poly)
nhv_geom = defaultdict()
#colors = ['red', 'green', 'orange', 'brown', 'purple']
fig, ax = plt.subplots(figsize=(12,12))
for rec in c:
#print(rec['geometry']['type'])
hood = rec['properties']['name']
nhv_geom[hood] = shape(rec['geometry'])
plot_multipolygon(ax, nhv_geom[hood])
labels = ax.get_xticklabels()
for label in labels:
label.set_rotation(90)
ax.set_xlabel('Longitude')
ax.set_ylabel('Latitude')
ax.plot(scf_df.loc[0, 'lng'], scf_df.loc[0, 'lat'], 'o')
import nhrc2.backend.read_seeclickfix_api_to_csv as rscf
scf_cats = rscf.read_categories(readfile=True)
scf_df = rscf.read_issues(scf_cats, readfile=True)
len(scf_cats)
scf_df.head(3)
len(scf_df)
scf_df = scf_df[((scf_df['lat'] < 41.36) & (scf_df['lat'] > 41.24) & (scf_df['lng']>=-73.00) & (scf_df['lng'] <= -72.86))]
print(len(scf_df))
scf_df.loc[0, 'lat']
grid_point = Point(scf_df.loc[0, 'lng'], scf_df.loc[0, 'lat'])
for idx in range(5):
grid_point = Point(scf_df.loc[idx, 'lng'], scf_df.loc[idx, 'lat'])
print('Point {} at {}'.format(idx, scf_df.loc[idx, 'address']))
print('Downtown: {}'.format(grid_point.within(nhv_geom['Downtown'])))
print('East Rock: {}'.format(grid_point.within(nhv_geom['East Rock'])))
print('Fair Haven Heights: {}'.format(grid_point.within(nhv_geom['Fair Haven Heights'])))
print('Number of neighborhoods: {}'.format(len(nhv_geom.keys())))
for hood in nhv_geom.keys():
print(hood)
def get_neighborhoods(scf_df, neighborhoods):
hoods = []
for idx in scf_df.index:
grid_point = Point(scf_df.loc[idx, 'lng'], scf_df.loc[idx, 'lat'])
for hoodnum, hood in enumerate(nhv_geom.keys()):
if grid_point.within(nhv_geom[hood]):
hoods.append(hood)
break
if hoodnum == 19:
#There are 20 neighborhoods. If you are the 20th (element 19 in
#zero-based indexing) and have not continued out of the iteration
#set the neighborhood name to "Other":
hoods.append('Other')
return hoods
%time nbrhoods = get_neighborhoods(scf_df, nhv_geom)
print(len(scf_df))
print(len(nbrhoods))
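# Added sketch (hedged): attach the labels and count issues per neighborhood
scf_df['neighborhood'] = nbrhoods
scf_df['neighborhood'].value_counts().head(10)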
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading and Writing Data in Text Format
Step2: A file will not always have a header row. Consider this file
Step3: To read this in, you have a couple of options. You can allow pandas to assign default column names, or you can specify names yourself
Step4: Suppose you wanted the message column to be the index of the returned DataFrame. You can either indicate you want the column at index 4 or named 'message' using the index_col argument
Step5: In the event that you want to form a hierarchical index from multiple columns, just pass a list of column numbers or names
Step6: In some cases, a table might not have a fixed delimiter, using whitespace or some other pattern to separate fields. In these cases, you can pass a regular expression as a delimiter for read_table. Consider a text file that looks like this
Step7: Because there was one fewer column name than the number of data columns, read_table infers that the first column should be the DataFrame’s index in this special case.
Step8: Handling missing values is an important and frequently nuanced part of the file parsing process. Missing data is usually either not present (empty string) or marked by some sentinel value. By default, pandas uses a set of commonly occurring sentinels, such as NA, -1.#IND, and NULL
Step9: The na_values option can take either a list or set of strings to consider missing values
Step10: Different NA sentinels can be specified for each column in a dict
Step11: Reading text files in pieces
Step12: If you want to only read out a small number of rows (avoiding reading the entire file), specify that with nrows
Step13: To read out a file in pieces, specify a chunksize as a number of rows
Step14: The TextParser object returned by read_csv allows you to iterate over the parts of the file according to the chunksize. For example, we can iterate over ex6.csv, aggregating the value counts in the 'key' column like so
Step15: Writing data out to text format
Step16: Using DataFrame’s to_csv method, we can write the data out to a comma-separated file
Step17: Other delimiters can be used, of course (writing to sys.stdout so it just prints the text result; make sure to import sys)
Step18: Missing values appear as empty strings in the output. You might want to denote them by some other sentinel value
Step19: With no other options specified, both the row and column labels are written. Both of these can be disabled
Step20: You can also write only a subset of the columns, and in an order of your choosing
Step21: Series also has a to_csv method
Step22: With a bit of wrangling (no header, first column as index), you can read a CSV version of a Series with read_csv, but there is also a from_csv convenience method that makes it a bit simpler
Step23: Manually working with delimited formats
Step25: JSON data
Step26: XML and HTML, Web scraping
Step27: Parsing XML with lxml.objectify
Step28: Binary data formats
Step29: Using HDF5 format
Step30: Interacting with HTML and Web APIs
Step32: Interacting with databases
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division
from numpy.random import randn
import numpy as np
import os
import sys
import matplotlib.pyplot as plt
np.random.seed(12345)
plt.rc('figure', figsize=(10, 6))
from pandas import Series, DataFrame
import pandas as pd
np.set_printoptions(precision=4)
%pwd
!cat ch06/ex1.csv
df = pd.read_csv('ch06/ex1.csv')
df
!cat ch06/test.csv
dfx = pd.read_csv('ch06/test.csv')
dfx
pd.read_table('ch06/ex1.csv', sep=',')
!cat ch06/ex2.csv
pd.read_csv('ch06/ex2.csv', names=['a', 'b', 'c', 'd', 'message'])
pd.read_csv('ch06/ex2.csv', header=None)
names = ['a', 'b', 'c', 'd', 'message']
pd.read_csv('ch06/ex2.csv', names=names, index_col='message')
!cat ch06/csv_mindex.csv
parsed = pd.read_csv('ch06/csv_mindex.csv', index_col=['key1', 'key2'])
parsed
list(open('ch06/ex3.txt'))
result = pd.read_table('ch06/ex3.txt', sep='\s+')
result
!cat ch06/ex4.csv
pd.read_csv('ch06/ex4.csv', skiprows=[0, 2, 3])
!cat ch06/ex5.csv
result = pd.read_csv('ch06/ex5.csv')
result
pd.isnull(result)
result = pd.read_csv('ch06/ex5.csv', na_values=['NULL'])
result
sentinels = {'message': ['foo', 'NA'], 'something': ['two']}
pd.read_csv('ch06/ex5.csv', na_values=sentinels)
result = pd.read_csv('ch06/ex6.csv')
result
pd.read_csv('ch06/ex6.csv', nrows=5)
chunker = pd.read_csv('ch06/ex6.csv', chunksize=1000)
chunker
chunker = pd.read_csv('ch06/ex6.csv', chunksize=1000)
tot = Series([])
for piece in chunker:
tot = tot.add(piece['key'].value_counts(), fill_value=0)
tot = tot.sort_values(ascending=False)
tot[:10]
data = pd.read_csv('ch06/ex5.csv')
data
data.to_csv('ch06/out.csv')
!cat ch06/out.csv
data.to_csv(sys.stdout, sep='|')
data.to_csv(sys.stdout, na_rep='HELLO')
data.to_csv(sys.stdout, index=False, header=False)
data.to_csv(sys.stdout, index=False, columns=['a', 'b', 'c'])
dates = pd.date_range('1/1/2000', periods=7)
ts = Series(np.arange(7), index=dates)
ts.to_csv('ch06/tseries.csv')
!cat ch06/tseries.csv
Series.from_csv('ch06/tseries.csv', parse_dates=True)
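# Added alternative (hedged): the same Series can be rebuilt with read_csv, as mentioned above
pd.read_csv('ch06/tseries.csv', parse_dates=True, index_col=0, header=None, squeeze=True)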
!cat ch06/ex7.csv
import csv
f = open('ch06/ex7.csv')
reader = csv.reader(f)
for line in reader:
print(line)
lines = list(csv.reader(open('ch06/ex7.csv')))
header, values = lines[0], lines[1:]
data_dict = {h: v for h, v in zip(header, zip(*values))}
data_dict
class my_dialect(csv.Dialect):
lineterminator = '\n'
delimiter = ';'
quotechar = '"'
quoting = csv.QUOTE_MINIMAL
with open('mydata.csv', 'w') as f:
writer = csv.writer(f, dialect=my_dialect)
writer.writerow(('one', 'two', 'three'))
writer.writerow(('1', '2', '3'))
writer.writerow(('4', '5', '6'))
writer.writerow(('7', '8', '9'))
%cat mydata.csv
obj = """
{"name": "Wes",
 "places_lived": ["United States", "Spain", "Germany"],
 "pet": null,
 "siblings": [{"name": "Scott", "age": 25, "pet": "Zuko"},
              {"name": "Katie", "age": 33, "pet": "Cisco"}]
}
"""
import json
result = json.loads(obj)
result
asjson = json.dumps(result)
siblings = DataFrame(result['siblings'], columns=['name', 'age'])
siblings
from lxml.html import parse
from urllib2 import urlopen
parsed = parse(urlopen('http://finance.yahoo.com/q/op?s=AAPL+Options'))
doc = parsed.getroot()
links = doc.findall('.//a')
links[15:20]
lnk = links[28]
lnk
lnk.get('href')
lnk.text_content()
urls = [lnk.get('href') for lnk in doc.findall('.//a')]
urls[-10:]
tables = doc.findall('.//table')
calls = tables[9]
puts = tables[13]
rows = calls.findall('.//tr')
def _unpack(row, kind='td'):
elts = row.findall('.//%s' % kind)
return [val.text_content() for val in elts]
_unpack(rows[0], kind='th')
_unpack(rows[1], kind='td')
from pandas.io.parsers import TextParser
def parse_options_data(table):
rows = table.findall('.//tr')
header = _unpack(rows[0], kind='th')
data = [_unpack(r) for r in rows[1:]]
return TextParser(data, names=header).get_chunk()
call_data = parse_options_data(calls)
put_data = parse_options_data(puts)
call_data[:10]
%cd ch06/mta_perf/Performance_XML_Data
!head -21 Performance_MNR.xml
from lxml import objectify
path = 'Performance_MNR.xml'
parsed = objectify.parse(open(path))
root = parsed.getroot()
data = []
skip_fields = ['PARENT_SEQ', 'INDICATOR_SEQ',
'DESIRED_CHANGE', 'DECIMAL_PLACES']
for elt in root.INDICATOR:
el_data = {}
for child in elt.getchildren():
if child.tag in skip_fields:
continue
el_data[child.tag] = child.pyval
data.append(el_data)
perf = DataFrame(data)
perf
root
root.get('href')
root.text
cd ../..
frame = pd.read_csv('ch06/ex1.csv')
frame
frame.to_pickle('ch06/frame_pickle')
pd.read_pickle('ch06/frame_pickle')
store = pd.HDFStore('mydata.h5')
store['obj1'] = frame
store['obj1_col'] = frame['a']
store
store['obj1']
store.close()
os.remove('mydata.h5')
import requests
url = 'https://api.github.com/repos/pydata/pandas/milestones/28/labels'
resp = requests.get(url)
resp
data = json.loads(resp.text)  # parse the JSON body of the response (json was imported above)
data[:5]
issue_labels = DataFrame(data)
issue_labels
import sqlite3
query = """
CREATE TABLE test
(a VARCHAR(20), b VARCHAR(20),
 c REAL,        d INTEGER
);"""
con = sqlite3.connect(':memory:')
con.execute(query)
con.commit()
data = [('Atlanta', 'Georgia', 1.25, 6),
('Tallahassee', 'Florida', 2.6, 3),
('Sacramento', 'California', 1.7, 5)]
stmt = "INSERT INTO test VALUES(?, ?, ?, ?)"
con.executemany(stmt, data)
con.commit()
cursor = con.execute('select * from test')
rows = cursor.fetchall()
rows
cursor.description
DataFrame(rows, columns=zip(*cursor.description)[0])
import pandas.io.sql as sql
sql.read_sql('select * from test', con)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implementation
Step2: Preparing the Data
Step3: For highly-skewed feature distributions such as 'capital-gain' and 'capital-loss', it is common practice to apply a logarithmic transformation on the data so that very large and very small values do not negatively affect the performance of a learning algorithm.
Step4: Normalizing Numerical Features
Step5: Implementation
Step6: Shuffle and Split Data
Step7: Evaluating Model Performance
Step8: Supervised Learning Models
Step9: Implementation
Step10: Question 3 - Choosing the Best Model
Step11: Question 5 - Final Model Evaluation
Step12: Question 7 - Extracting Feature Importance
|
<ASSISTANT_TASK:>
Python Code:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from time import time
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualization code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the Census dataset
data = pd.read_csv("census.csv")
# Success - Display the first record
display(data.head(n=1))
# TODO: Total number of records
n_records = data['age'].count()
# TODO: Number of records where individual's income is more than $50,000
n_greater_50k = data[data.income==">50K"].income.count()
#df[df.a > 1].sum()
#data[data['income']==">50K"].count()
# TODO: Number of records where individual's income is at most $50,000
n_at_most_50k = data[data.income=="<=50K"].income.count()
#data[data['income']=="<=50K"].count()
# TODO: Percentage of individuals whose income is more than $50,000
greater_percent = float(n_greater_50k)*100/n_records
# Print the results
print "Total number of records: {}".format(n_records)
print "Individuals making more than $50,000: {}".format(n_greater_50k)
print "Individuals making at most $50,000: {}".format(n_at_most_50k)
print "Percentage of individuals making more than $50,000: {:.2f}%".format(greater_percent)
# Split the data into features and target label
income_raw = data['income']
features_raw = data.drop('income', axis = 1)
# Visualize skewed continuous features of original data
vs.distribution(data)
# Log-transform the skewed features
skewed = ['capital-gain', 'capital-loss']
features_raw[skewed] = data[skewed].apply(lambda x: np.log(x + 1))
# Visualize the new log distributions
vs.distribution(features_raw, transformed = True)
# Import sklearn.preprocessing.StandardScaler
from sklearn.preprocessing import MinMaxScaler
# Initialize a scaler, then apply it to the features
scaler = MinMaxScaler()
numerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
features_raw[numerical] = scaler.fit_transform(data[numerical])
# Show an example of a record with scaling applied
display(features_raw.head(n = 1))
# TODO: One-hot encode the 'features_raw' data using pandas.get_dummies()
features = pd.get_dummies(features_raw)
# TODO: Encode the 'income_raw' data to numerical values
income = income_raw.apply(lambda x: 1 if x == ">50K" else 0)
# Print the number of features after one-hot encoding
encoded = list(features.columns)
print "{} total features after one-hot encoding.".format(len(encoded))
# Uncomment the following line to see the encoded feature names
#print encoded
# Import train_test_split
from sklearn.cross_validation import train_test_split
# Split the 'features' and 'income' data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(features, income, test_size = 0.2, random_state = 0)
# Show the results of the split
print "Training set has {} samples.".format(X_train.shape[0])
print "Testing set has {} samples.".format(X_test.shape[0])
# TODO: Calculate accuracy
from sklearn.metrics import accuracy_score
from sklearn.metrics import recall_score
from sklearn.metrics import fbeta_score
income_pred=income.apply(lambda x:1)
TP=sum(map(lambda x,y: 1 if x==1 and y==1 else 0, income,income_pred)) #True Pos
FP=sum(map(lambda x,y: 1 if x==0 and y==1 else 0, income,income_pred)) #False Pos
FN=sum(map(lambda x,y: 1 if x==1 and y==0 else 0, income,income_pred)) #False Neg
# accuracy = TP/(TP+FP)
accuracy = float(TP)/(TP+FP)
# The commented code below was used to confirm the precision calculation was correct
#accuracy1 = accuracy_score(income,income_pred)
#print 'accuracy comparison',accuracy,accuracy1
# recall = TP/(TP+FN)
recall=float(TP)/(TP+FN)
# The commented code below was used to confirm the recall calculation was correct
#recal1=recall_score(income,income_pred)
#print 'recall comparison',recal1,recall1
# TODO: Calculate F-score using the formula above for beta = 0.5
beta=0.5
fscore = (1+beta**2)*(accuracy*recall)/(beta**2*accuracy+recall)
#fscore1=fbeta_score(income,income_pred, beta=0.5)
#print 'fscore comparison',fscore,fscore1
# Print the results
print "Naive Predictor: [Accuracy score: {:.4f}, F-score: {:.4f}]".format(accuracy, fscore)
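# Added note (hedged): because this naive predictor labels every record as ">50K",
# TP + FP equals the total record count, so the "accuracy" computed above coincides
# with precision TP / (TP + FP), and recall is 1.0 by construction.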
# TODO: Import two metrics from sklearn - fbeta_score and accuracy_score
from sklearn.metrics import fbeta_score, accuracy_score
def train_predict(learner, sample_size, X_train, y_train, X_test, y_test):
'''
inputs:
- learner: the learning algorithm to be trained and predicted on
- sample_size: the size of samples (number) to be drawn from training set
- X_train: features training set
- y_train: income training set
- X_test: features testing set
- y_test: income testing set
'''
results = {}
# TODO: Fit the learner to the training data using slicing with 'sample_size'
start = time() # Get start time
learner.fit(X_train[:sample_size],y_train[:sample_size])
end = time() # Get end time
# TODO: Calculate the training time
results['train_time'] = end-start
# TODO: Get the predictions on the test set,
# then get predictions on the first 300 training samples
start = time() # Get start time
predictions_test = learner.predict(X_test)
predictions_train = learner.predict(X_train[:300])
end = time() # Get end time
# TODO: Calculate the total prediction time
results['pred_time'] = end-start
# TODO: Compute accuracy on the first 300 training samples
results['acc_train'] = accuracy_score(y_train[:300],predictions_train)
# TODO: Compute accuracy on test set
results['acc_test'] = accuracy_score(y_test,predictions_test)
# TODO: Compute F-score on the the first 300 training samples
results['f_train'] = fbeta_score(y_train[:300],predictions_train,beta=0.5)
# TODO: Compute F-score on the test set
results['f_test'] = fbeta_score(y_test,predictions_test,beta=0.5)
# Success
print "{} trained on {} samples.".format(learner.__class__.__name__, sample_size)
# Return the results
return results
# TODO: Import the three supervised learning models from sklearn
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
# TODO: Initialize the three models
clf_A = GaussianNB()
clf_B = SVC(random_state=0)
clf_C = AdaBoostClassifier(random_state=0)
# TODO: Calculate the number of samples for 1%, 10%, and 100% of the training data
samples_1 = len(X_train)/100
samples_10 = len(X_train)/10
samples_100 = len(X_train)
# Collect results on the learners
results = {}
for clf in [clf_A, clf_B, clf_C]:
clf_name = clf.__class__.__name__
results[clf_name] = {}
for i, samples in enumerate([samples_1, samples_10, samples_100]):
results[clf_name][i] = \
train_predict(clf, samples, X_train, y_train, X_test, y_test)
# Run metrics visualization for the three supervised learning models chosen
vs.evaluate(results, accuracy, fscore)
# TODO: Import 'GridSearchCV', 'make_scorer', and any other necessary libraries
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import fbeta_score, make_scorer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import MultinomialNB
# TODO: Initialize the classifier
clf = AdaBoostClassifier(random_state=0)
# TODO: Create the parameters list you wish to tune
#parameters = {'n_estimators':[75,100,200]}
parameters = {'n_estimators':[75,200,500],'learning_rate':[1.0,1.5,2.0]}
# TODO: Make an fbeta_score scoring object
scorer = make_scorer(fbeta_score, beta=0.5)
# TODO: Perform grid search on the classifier using 'scorer' as the scoring method
grid_obj = GridSearchCV(clf, parameters,scoring=scorer)
# TODO: Fit the grid search object to the training data and find the optimal parameters
grid_fit = grid_obj.fit(X_train, y_train)
# Get the estimator
best_clf = grid_fit.best_estimator_
# Make predictions using the unoptimized and model
predictions = (clf.fit(X_train, y_train)).predict(X_test)
best_predictions = best_clf.predict(X_test)
# Report the before-and-afterscores
print "Unoptimized model\n------"
print "Accuracy score on testing data: {:.4f}".format(accuracy_score(y_test, predictions))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, predictions, beta = 0.5))
print "\nOptimized Model\n------"
print "Final accuracy score on the testing data: {:.4f}".format(accuracy_score(y_test, best_predictions))
print "Final F-score on the testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5))
# TODO: Import a supervised learning model that has 'feature_importances_'
# TODO: Train the supervised model on the training set
#AdaBoostClassifier(random_state=0)
model = AdaBoostClassifier(random_state=0,n_estimators=500).fit(X_train, y_train)
# TODO: Extract the feature importances
importances = model.feature_importances_
# Plot
vs.feature_plot(importances, X_train, y_train)
# Import functionality for cloning a model
from sklearn.base import clone
# Reduce the feature space
X_train_reduced = X_train[X_train.columns.values[(np.argsort(importances)[::-1])[:5]]]
X_test_reduced = X_test[X_test.columns.values[(np.argsort(importances)[::-1])[:5]]]
# Train on the "best" model found from grid search earlier
clf = (clone(best_clf)).fit(X_train_reduced, y_train)
# Make new predictions
reduced_predictions = clf.predict(X_test_reduced)
# Report scores from the final model using both versions of data
print "Final Model trained on full data\n------"
print "Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, best_predictions))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5))
print "\nFinal Model trained on reduced data\n------"
print "Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, reduced_predictions))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, reduced_predictions, beta = 0.5))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: While the data can be read directly from a URL we'll start by doing the simple thing of reading the CSV file directly from our local disk.
Step2: That was pretty simple. Now try to read directly from a URL to see if we get the same result. This has the advantage that you always get the latest version of the file which is updated nightly.
Step3: Use the requests package to read the CSV file from the URL.
Step4: If necessary we'll need to convert nodata values.
Step5: We can look at other columns and also filter the data in a variety of ways. Recently we had a site where the number of images varied a lot over time. Let's look at how consistent the number of images for the alligator river site. The image_count reflects our brightness threshold which will eliminate images in the winter time when the days are shorter. But there are a number of other ways the image count can be reduced. The ability reliably extract a 90^th precentile value is dependent on the number of images available for a particular summary period.
Step6: One possibility would be to filter the data for summary periods which had at least 10 images.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os, sys
import numpy as np
import matplotlib
import pandas as pd
import requests
import StringIO
# set matplotlib style
matplotlib.style.use('ggplot')
sitename = 'alligatorriver'
roiname = 'DB_0001'
infile = "{}_{}_1day.csv".format(sitename, roiname)
print infile
%%bash
head -30 alligatorriver_DB_0001_1day.csv
with open(infile,'r') as fd:
df = pd.read_csv(fd, comment='#', parse_dates=[0])
df.head()
df.plot('date', ['gcc_90'], figsize=(16,4),
grid=True, style=['g'] )
url = "https://phenocam.sr.unh.edu/data/archive/{}/ROI/{}_{}_1day.csv"
url = url.format(sitename, sitename, roiname)
print url
response = requests.get(url)
fd = StringIO.StringIO(response.text)
df = pd.read_csv(fd, comment='#', parse_dates=[0])
fd.close
df[0:5]
df.loc[df['gcc_90'] == -9999., 'gcc_90'] = np.nan  # use .loc so the nodata replacement actually modifies df
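# Added check (hedged): count how many summary periods were flagged as nodata
print(df['gcc_90'].isnull().sum(), "nodata values replaced")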
df.plot('date', ['gcc_90'], figsize=(16,4),
grid=True, style=['g'] )
df.plot('date','image_count', figsize=(16,4), style='b')
df10 = df[df['image_count'] >= 10]
df10.plot('date', ['gcc_90'], figsize=(16,4),
grid=True, style=['g'] )
<END_TASK>
|