<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The data contains one event per row and has 5 variables
Step2: The output here is a pandas Series with each user_id as the index and the count of the number of events as values. Now to try the same thing with transform.
Step3: What the heck happened here? This odd DataFrame highlights a key difference between apply and transform.
Step4: Much better. All we had to do was assign to the new event_count column and then specify the ['user_id'] column after the groupby statement. Whether you would prefer to have this additional column of repeating values depends on what you intend to do with the data afterwards. Let's assume this is acceptable. Now for something a bit more involved.
Step5: Here we've defined a very simple custom function that assigns values to each of the four event types. Now to apply it to our data.
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
data = pd.read_csv('test_user_data.csv')
print(data.head(10))
apply_ex = data.groupby('user_id').apply(len)
print(apply_ex.head())
transform_ex = data.groupby('user_id').transform(len)
print(transform_ex.head())
data['event_count'] = data.groupby('user_id')['user_id'].transform(len)
print(data.head(7))
def add_value(x):
if x == 'buy_coins':
y = 1.00
elif x == 'megapack':
y = 10.00
else:
        y = 0.0
return y
data['event_value'] = data['event_type'].apply(add_value)
print(data.head(7))
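Since `test_user_data.csv` is not included here, the apply/transform difference can be seen on a small hypothetical frame of the same shape (the column names match the notebook; the values are made up):

```python
import pandas as pd

# Hypothetical stand-in for test_user_data.csv: one event per row
df = pd.DataFrame({
    'user_id': [1, 1, 2, 2, 2, 3],
    'event_type': ['buy_coins', 'level_up', 'megapack',
                   'level_up', 'buy_coins', 'level_up'],
})

# apply(len) collapses each group to a single value: a Series indexed by user_id
per_user = df.groupby('user_id').apply(len)

# transform(len) broadcasts the per-group result back to the original row count,
# so it can be assigned straight into a new column
df['event_count'] = df.groupby('user_id')['user_id'].transform(len)

print(per_user)
print(df)
```

Here `per_user` has one row per user (1, 2, 3 → counts 2, 3, 1), while `df['event_count']` repeats each user's count on every one of that user's rows.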
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load example dataset
Step2: Multiple plots and map projections
Step3: Centered colormaps
Step4: Control the plot's colorbar
Step5: Multiple lines from a 2d DataArray
Step6: imshow() and rasterio map projections
Step7: Parsing rasterio geocoordinates
<ASSISTANT_TASK:>
Python Code:
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
import xarray as xr
%matplotlib inline
ds = xr.tutorial.load_dataset("air_temperature")
air = ds.air.isel(time=[0, 724]) - 273.15
# This is the map projection we want to plot *onto*
map_proj = ccrs.LambertConformal(central_longitude=-95, central_latitude=45)
p = air.plot(
transform=ccrs.PlateCarree(), # the data's projection
col="time",
col_wrap=1, # multiplot settings
aspect=ds.dims["lon"] / ds.dims["lat"], # for a sensible figsize
subplot_kws={"projection": map_proj},
) # the plot's projection
# We have to set the map's options on all axes
for ax in p.axes.flat:
ax.coastlines()
ax.set_extent([-160, -30, 5, 75])
air = ds.air.isel(time=0)
f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(8, 6))
# The first plot (in kelvins) chooses "viridis" and uses the data's min/max
air.plot(ax=ax1, cbar_kwargs={"label": "K"})
ax1.set_title("Kelvins: default")
ax2.set_xlabel("")
# The second plot (in celsius) now chooses "BuRd" and centers min/max around 0
airc = air - 273.15
airc.plot(ax=ax2, cbar_kwargs={"label": "°C"})
ax2.set_title("Celsius: default")
ax2.set_xlabel("")
ax2.set_ylabel("")
# The center doesn't have to be 0
air.plot(ax=ax3, center=273.15, cbar_kwargs={"label": "K"})
ax3.set_title("Kelvins: center=273.15")
# Or it can be ignored
airc.plot(ax=ax4, center=False, cbar_kwargs={"label": "°C"})
ax4.set_title("Celsius: center=False")
ax4.set_ylabel("")
# Make it nice
plt.tight_layout()
air2d = ds.air.isel(time=500)
# Prepare the figure
f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(14, 4))
# Irregular levels to illustrate the use of a proportional colorbar
levels = [245, 250, 255, 260, 265, 270, 275, 280, 285, 290, 310, 340]
# Plot data
air2d.plot(ax=ax1, levels=levels)
air2d.plot(ax=ax2, levels=levels, cbar_kwargs={"ticks": levels})
air2d.plot(
ax=ax3, levels=levels, cbar_kwargs={"ticks": levels, "spacing": "proportional"}
)
# Show plots
plt.tight_layout()
air = ds.air - 273.15 # to celsius
# Prepare the figure
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4), sharey=True)
# Selected latitude indices
isel_lats = [10, 15, 20]
# Temperature vs longitude plot - illustrates the "hue" kwarg
air.isel(time=0, lat=isel_lats).plot.line(ax=ax1, hue="lat")
ax1.set_ylabel("°C")
# Temperature vs time plot - illustrates the "x" and "add_legend" kwargs
air.isel(lon=30, lat=isel_lats).plot.line(ax=ax2, x="time", add_legend=False)
ax2.set_ylabel("")
# Show
plt.tight_layout()
da = xr.tutorial.open_rasterio("RGB.byte")
# The data is in UTM projection. We have to set it manually until
# https://github.com/SciTools/cartopy/issues/813 is implemented
crs = ccrs.UTM("18")
# Plot on a map
ax = plt.subplot(projection=crs)
da.plot.imshow(ax=ax, rgb="band", transform=crs)
ax.coastlines("10m", color="r")
from pyproj import Transformer
import numpy as np
da = xr.tutorial.open_rasterio("RGB.byte")
x, y = np.meshgrid(da["x"], da["y"])
transformer = Transformer.from_crs(da.crs, "EPSG:4326", always_xy=True)
lon, lat = transformer.transform(x, y)
da.coords["lon"] = (("y", "x"), lon)
da.coords["lat"] = (("y", "x"), lat)
# Compute a greyscale out of the rgb image
greyscale = da.mean(dim="band")
# Plot on a map
ax = plt.subplot(projection=ccrs.PlateCarree())
greyscale.plot(
ax=ax,
x="lon",
y="lat",
transform=ccrs.PlateCarree(),
cmap="Greys_r",
shading="auto",
add_colorbar=False,
)
ax.coastlines("10m", color="r")
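The 2-D `lon`/`lat` assignment above depends on `np.meshgrid` returning arrays in `("y", "x")` order. A numpy-only sketch (with tiny made-up coordinate vectors standing in for `da["x"]` and `da["y"]`) shows why the shapes line up:

```python
import numpy as np

# Hypothetical 1-D geocoordinates standing in for da["x"] and da["y"]
x_coords = np.array([100.0, 200.0, 300.0])   # 3 columns
y_coords = np.array([10.0, 20.0])            # 2 rows

# np.meshgrid with the default 'xy' indexing returns arrays shaped
# (len(y), len(x)) -- exactly the ("y", "x") dimension order assigned above
x2d, y2d = np.meshgrid(x_coords, y_coords)

print(x2d.shape)  # (2, 3)
print(x2d)        # each row repeats x_coords
print(y2d)        # each column repeats y_coords
```

The transformer then maps these 2-D UTM grids pointwise to 2-D lon/lat grids of the same shape.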
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we will establish some general variables for our game, including the 'stake' of the game (how much money each play is worth), as well as a list representing the cards used in the game. To make things easier, we will just use a list of numbers 0-9 for the cards.
Step2: Next, let's define a new class to represent each player in the game. I have provided a rough framework of the class definition along with comments along the way to help you complete it. Places where you should write code are denoted by comments inside [] brackets and CAPITAL TEXT.
Step3: Next we will create some functions outside the class definition which will control the flow of the game. The first function will play one round. It will take as an input the collection of players, and iterate through each one, calling each player's '.play()' function.
Step4: Next we will define a function that will check the balances of each player, and print out a message with the player's ID and their balance.
Step5: Now we are ready to start the game. First we create an empty list to store the collection of players in the game.
Step6: Then we create a loop that will run a certain number of times, each time creating a player with a unique ID and a starting balance. Each player should be appended to the empty list, which will store all the players. In this case we pass the 'i' iterator of the loop as the player ID, and set a constant value of 500 for the starting balance.
Step7: Once the players are created, we will create a loop to run the game a certain amount of times. Each step of the loop should start with a print statement announcing the start of the game, and then call the playHand() function, passing as an input the list of players.
Step8: Finally, we will analyze the results of the game by running the 'checkBalances()' function and passing it our list of players.
<ASSISTANT_TASK:>
Python Code:
import random
gameStake = 50
cards = range(10)
class Player:
# in the __init__() function, use the two input variables to initialize the ID and starting pot of each player
def __init__(self, inputID, startingPot):
        # [CREATE YOUR INITIALIZATIONS HERE]
        # make sure you initialize two local variables to store a unique ID for each player
        # and the player's current 'pot' of money
        pass  # placeholder so the skeleton runs before you fill it in
# create a function for playing the game. This function starts by taking an input for the dealer's card
# and picking a random number from the 'cards' list for the player's card
def play(self, dealerCard):
# we use the random.choice() function to select a random item from a list
playerCard = random.choice(cards)
# here we should have a conditional that tests the player's card value against the dealer card
# and returns a statement saying whether the player won or lost the hand
# before returning the statement, make sure to either add or subtract the stake from the player's pot so that
# the 'pot' variable tracks the player's money
        if playerCard < dealerCard:
            # [INCREMENT THE PLAYER'S POT, AND RETURN A MESSAGE]
            pass  # placeholder so the skeleton runs before you fill it in
        else:
            # [INCREMENT THE PLAYER'S POT, AND RETURN A MESSAGE]
            pass  # placeholder so the skeleton runs before you fill it in
# create an accessor function to return the current value of the player's pot
    def returnPot(self):
        # [FILL IN THE RETURN STATEMENT]
        pass  # placeholder so the skeleton runs before you fill it in
# create an accessor function to return the player's ID
    def returnID(self):
        # [FILL IN THE RETURN STATEMENT]
        pass  # placeholder so the skeleton runs before you fill it in
def playHand(players):
for player in players:
dealerCard = random.choice(cards)
#[EXECUTE THE PLAY() FUNCTION FOR EACH PLAYER USING THE DEALER CARD, AND PRINT OUT THE RESULTS]
def checkBalances(players):
    for player in players:
        #[PRINT OUT EACH PLAYER'S BALANCE BY USING EACH PLAYER'S ACCESSOR FUNCTIONS]
        pass  # placeholder so the skeleton runs before you fill it in
players = []
for i in range(5):
players.append(Player(i, 500))
for i in range(10):
print('')
print('start game ' + str(i))
playHand(players)
print('')
print('game results:')
checkBalances(players)
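For reference, here is one possible completion of the Player skeleton above. The win/lose rule is an assumption (a card lower than the dealer's loses the stake; ties and higher cards win), since the exercise leaves it open:

```python
import random

gameStake = 50
cards = range(10)

class Player:
    # one possible completion of the skeleton -- not the only valid answer
    def __init__(self, inputID, startingPot):
        self.ID = inputID    # unique ID for this player
        self.pot = startingPot  # the player's current money

    def play(self, dealerCard):
        playerCard = random.choice(cards)
        # assumed rule: a card lower than the dealer's loses the stake
        if playerCard < dealerCard:
            self.pot -= gameStake
            return 'player ' + str(self.ID) + ' loses'
        else:
            self.pot += gameStake
            return 'player ' + str(self.ID) + ' wins'

    def returnPot(self):
        return self.pot

    def returnID(self):
        return self.ID

random.seed(0)
p = Player(0, 500)
print(p.play(5), p.returnPot())  # pot is now 450 or 550 depending on the draw
```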
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Any components not passed automatically default to 0. REBOUND can also accept orbital elements.
Step2: We always have to pass a semimajor axis (to set a length scale), but any other elements are by default set to 0. Notice that our second star has the same vz as the first one due to the default Jacobi elements. Now we could add a distant planet on a circular orbit,
Step3: This planet is set up relative to the binary center of mass (again due to the Jacobi coordinates), which is probably what we want. But imagine we now want to place a test mass in a tight orbit around the second star. If we passed things as above, the orbital elements would be referenced to the binary/outer-planet center of mass. We can override the default by explicitly passing a primary (any instance of the Particle class)
Step4: All simulations are performed in Cartesian elements, so to avoid the overhead, REBOUND does not update particles' orbital elements as the simulation progresses. However, we can always calculate them when required with sim.calculate_orbits(). Note that REBOUND will always output angles in the range $[-\pi,\pi]$, except the inclination which is always in $[0,\pi]$.
Step5: Notice that there is always one less orbit than there are particles, since orbits are only defined between pairs of particles. We see that we got the first two orbits right, but the last one is way off. The reason is that again the REBOUND default is that we always get Jacobi elements. But we initialized the last particle relative to the second star, rather than the center of mass of all the previous particles.
Step6: though we could have simply avoided this problem by adding bodies from the inside out (second star, test mass, first star, circumbinary planet).
Step7: The problem here is that $\omega$ (the angle from the ascending node to pericenter) is ill-defined for a circular orbit, so it's not clear what we mean when we pass it, and we get spurious results (i.e., $\omega = 0$ rather than 0.1, and $f=0.1$ rather than the default 0). Similarly, $f$, the angle from pericenter to the particle's position, is undefined. However, the true longitude $\theta$, the broken angle from the $x$ axis to the ascending node = $\Omega + \omega + f$, and then to the particle's position, is always well defined
Step8: To be clearer and ensure we get the results we expect, we could instead pass theta to specify the longitude we want, e.g.
Step9: Here we have a planar orbit, in which case the line of nodes becomes ill defined, so $\Omega$ is not a good variable, but we pass it anyway! In this case, $\omega$ is also undefined since it is referenced to the ascending node. Here we get that now these two ill-defined variables get flipped. The appropriate variable is pomega ($\varpi = \Omega + \omega$), which is the angle from the $x$ axis to pericenter
Step10: We can specify the pericenter of the orbit with either $\omega$ or $\varpi$
Step11: Note that if the inclination is exactly zero, REBOUND sets $\Omega$ (which is undefined) to 0, so $\omega = \varpi$.
Step12: Accuracy
Step13: We see that the behavior is poor, which is physically due to $f$ becoming poorly defined at low $e$. If instead we initialize the orbits with the true longitude $\theta$ as discussed above, we get much better results
Step14: Hyperbolic & Parabolic Orbits
Step15: Currently there is no support for exactly parabolic orbits, but we can get a close approximation by passing a nearby hyperbolic orbit where we can specify the pericenter = $|a|(e-1)$ with $a$ and $e$. For example, for a 0.1 AU pericenter,
Step16: Retrograde Orbits
<ASSISTANT_TASK:>
Python Code:
import rebound
sim = rebound.Simulation()
sim.add(m=1., x=1., vz = 2.)
sim.add(m=1., a=1.)
sim.status()
sim.add(m=1.e-3, a=100.)
sim.add(primary=sim.particles[1], a=0.01)
orbits = sim.calculate_orbits()
for orbit in orbits:
print(orbit)
print(sim.particles[3].calculate_orbit(primary=sim.particles[1]))
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0., inc=0.1, Omega=0.3, omega=0.1)
print(sim.particles[1].orbit)
print(sim.particles[1].theta)
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0., inc=0.1, Omega=0.3, theta = 0.4)
print(sim.particles[1].theta)
import rebound
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0.2, Omega=0.1)
print(sim.particles[1].orbit)
print(sim.particles[1].pomega)
import rebound
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0.2, pomega=0.1)
print(sim.particles[1].orbit)
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0.1, Omega=0.3, M = 0.1)
sim.add(a=1., e=0.1, Omega=0.3, l = 0.4)
print(sim.particles[1].l)
print(sim.particles[2].l)
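Both prints above should give 0.4, because the mean longitude is the combination $l = \Omega + \omega + M$ (the standard definition, assumed here to match REBOUND's `l`). A library-free sketch of that bookkeeping, including the wrap into $[-\pi, \pi)$ that REBOUND applies to output angles:

```python
import math

def wrap(angle):
    # wrap an angle into [-pi, pi), matching the output convention noted above
    return (angle + math.pi) % (2 * math.pi) - math.pi

# mean longitude: l = Omega + omega + M (standard orbital-element identity)
Omega, omega, M = 0.3, 0.0, 0.1
l = wrap(Omega + omega + M)
print(l)  # 0.4, matching the particle initialized directly with l=0.4
```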
import rebound
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0.1, omega=1.)
print(sim.particles[1].orbit)
import random
import numpy as np
def simulation(par):
e,f = par
e = 10**e
f = 10**f
sim = rebound.Simulation()
sim.add(m=1.)
a = 1.
inc = random.random()*np.pi
Omega = random.random()*2*np.pi
sim.add(m=0.,a=a,e=e,inc=inc,Omega=Omega, f=f)
o=sim.particles[1].orbit
if o.f < 0: # avoid wrapping issues
o.f += 2*np.pi
err = max(np.fabs(o.e-e)/e, np.fabs(o.f-f)/f)
return err
random.seed(1)
N = 100
es = np.linspace(-16.,-1.,N)
fs = np.linspace(-16.,-1.,N)
params = [(e,f) for e in es for f in fs]
pool=rebound.InterruptiblePool()
res = pool.map(simulation, params)
res = np.array(res).reshape(N,N)
res = np.nan_to_num(res)
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import ticker
from matplotlib.colors import LogNorm
import matplotlib
f,ax = plt.subplots(1,1,figsize=(7,5))
extent=[fs.min(), fs.max(), es.min(), es.max()]
ax.set_xlim(extent[0], extent[1])
ax.set_ylim(extent[2], extent[3])
ax.set_xlabel(r"true anomaly (f)")
ax.set_ylabel(r"eccentricity")
im = ax.imshow(res, norm=LogNorm(), vmax=1., vmin=1.e-16, aspect='auto', origin="lower", interpolation='nearest', cmap="RdYlGn_r", extent=extent)
cb = plt.colorbar(im, ax=ax)
cb.solids.set_rasterized(True)
cb.set_label("Relative Error")
def simulation(par):
e,theta = par
e = 10**e
theta = 10**theta
sim = rebound.Simulation()
sim.add(m=1.)
a = 1.
inc = random.random()*np.pi
Omega = random.random()*2*np.pi
omega = random.random()*2*np.pi
sim.add(m=0.,a=a,e=e,inc=inc,Omega=Omega, theta=theta)
o=sim.particles[1].orbit
if o.theta < 0:
o.theta += 2*np.pi
err = max(np.fabs(o.e-e)/e, np.fabs(o.theta-theta)/theta)
return err
random.seed(1)
N = 100
es = np.linspace(-16.,-1.,N)
thetas = np.linspace(-16.,-1.,N)
params = [(e,theta) for e in es for theta in thetas]
pool=rebound.InterruptiblePool()
res = pool.map(simulation, params)
res = np.array(res).reshape(N,N)
res = np.nan_to_num(res)
f,ax = plt.subplots(1,1,figsize=(7,5))
extent=[thetas.min(), thetas.max(), es.min(), es.max()]
ax.set_xlim(extent[0], extent[1])
ax.set_ylim(extent[2], extent[3])
ax.set_xlabel(r"true longitude ($\theta$)")
ax.set_ylabel(r"eccentricity")
im = ax.imshow(res, norm=LogNorm(), vmax=1., vmin=1.e-16, aspect='auto', origin="lower", interpolation='nearest', cmap="RdYlGn_r", extent=extent)
cb = plt.colorbar(im, ax=ax)
cb.solids.set_rasterized(True)
cb.set_label("Relative Error")
sim.add(a=-0.2, e=1.4)
sim.status()
sim = rebound.Simulation()
sim.add(m=1.)
q = 0.1
a=-1.e14
e=1.+q/np.fabs(a)
sim.add(a=a, e=e)
print(sim.particles[1].orbit)
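The near-parabolic construction above can be sanity-checked with plain arithmetic: the pericenter of a hyperbolic orbit is $q = |a|(e-1)$, so recovering $q$ from the chosen $(a, e)$ should give back roughly 0.1. "Roughly" matters here: with $|a| = 10^{14}$, the quantity $e - 1 \approx 10^{-15}$ sits only a few ulps above double-precision roundoff.

```python
import math

# pericenter of a hyperbolic orbit: q = |a| * (e - 1)
q = 0.1
a = -1.e14
e = 1. + q / math.fabs(a)

# recover the pericenter from (a, e); expect ~0.1 up to floating-point
# roundoff, since e - 1 is of order 1e-15 (a few ulps of 1.0)
q_recovered = math.fabs(a) * (e - 1.)
print(q_recovered)
```

This is also a hint at why exactly parabolic orbits ($e = 1$) need special handling: pushing $|a|$ larger eventually makes $e - 1$ indistinguishable from zero in floating point.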
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1.,inc=np.pi,e=0.1, Omega=0., pomega=1.)
print(sim.particles[1].orbit)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here are some test demo containers to play around with
Step2: exatomic universes in principle contain a QM/MD calculation or set of calculations
Step3: Just a textfile...?
Step4: There we go. Our porphyrin looks pretty good. Check out the GUI controls in the animation
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import exatomic
u = exatomic.Universe()
u
#myxyz = exatomic.XYZ('../data/examples/porphyrin.xyz')
myxyz = exatomic.XYZ('porphyrin.xyz')
myxyz.head()
myxyz.atom.head() # Atomic units are used throughout the exatomic package
myuni = myxyz.to_universe()
myuni.two.head()
myuni
from exa.relational import Isotope
import random
nat = 10**4 # Be careful changing this value...
x = nat**0.5 * np.random.rand(nat)
y = nat**0.5 * np.random.rand(nat)
z = nat**0.5 * np.random.rand(nat)
symbols = Isotope.to_frame().drop_duplicates('symbol')['symbol'].tolist()
symbol = [random.choice(symbols) for i in range(nat)]
atom = pd.DataFrame.from_dict({'x': x, 'y': y, 'z': z, 'symbol': symbol})
atom['frame'] = 0
scuni = exatomic.Universe(atom=atom)
scuni
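The atom-table construction above does not actually need exatomic until the final `Universe` call; a pandas-only sketch (with a hard-coded symbol list standing in for exa's `Isotope` table, and a smaller `nat` for speed) shows the DataFrame that gets handed over:

```python
import random
import numpy as np
import pandas as pd

nat = 100  # smaller than the notebook's 10**4, for a quick check
x = nat**0.5 * np.random.rand(nat)
y = nat**0.5 * np.random.rand(nat)
z = nat**0.5 * np.random.rand(nat)

# hypothetical symbol list standing in for exa.relational.Isotope
symbols = ['H', 'He', 'C', 'N', 'O', 'Fe']
symbol = [random.choice(symbols) for i in range(nat)]

atom = pd.DataFrame.from_dict({'x': x, 'y': y, 'z': z, 'symbol': symbol})
atom['frame'] = 0  # a single time step / frame

print(atom.shape)  # (100, 5): one row per atom, coordinates + symbol + frame
```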
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Flower power
Step2: ConvNet Codes
Step3: Below I'm running images through the VGG network in batches.
Step4: Building the Classifier
Step5: Data prep
Step6: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
Step7: If you did it right, you should see these sizes for the training sets
Step9: Batches!
Step10: Training
Step11: Testing
Step12: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
<ASSISTANT_TASK:>
Python Code:
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
# Set the batch size higher if you can fit it in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_=tf.placeholder(tf.float32,[None,224,224,3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
# Image batch to pass to VGG network
images = np.concatenate(batch)
# TODO: Get the values from the relu6 layer of the VGG network
codes_batch = sess.run(vgg.relu6,{input_:images})
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'w') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
from sklearn import preprocessing
lb=preprocessing.LabelBinarizer()
lb.fit(labels)
labels_vecs = lb.transform(labels)
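`LabelBinarizer` turns the string labels into one-hot row vectors, with columns ordered by the sorted unique class names. A numpy-only sketch of the same transform (using a few made-up flower labels):

```python
import numpy as np

labels = np.array(['roses', 'tulips', 'roses', 'daisy'])

# sorted unique classes, as LabelBinarizer exposes via its classes_ attribute
classes = np.unique(labels)

# one-hot matrix: row i has a 1 in the column matching labels[i]'s class
labels_vecs = (labels[:, None] == classes[None, :]).astype(int)

print(classes)      # ['daisy' 'roses' 'tulips']
print(labels_vecs)  # e.g. 'roses' -> [0, 1, 0]
```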
from sklearn.model_selection import StratifiedShuffleSplit

# Stratified 80/10/10 split: 80% train, and the held-out 20% divided
# evenly between validation and test
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
train_idx, val_idx = next(ss.split(codes, labels))

half_val_len = int(len(val_idx) / 2)
val_idx, test_idx = val_idx[:half_val_len], val_idx[half_val_len:]

train_x, train_y = codes[train_idx], labels_vecs[train_idx]
val_x, val_y = codes[val_idx], labels_vecs[val_idx]
test_x, test_y = codes[test_idx], labels_vecs[test_idx]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
# TODO: Classifier layers and operations
fc = tf.contrib.layers.fully_connected(inputs_,256)
logits = tf.layers.dense(fc,labels_vecs.shape[1],activation=None)
# cross entropy loss
ce = tf.nn.softmax_cross_entropy_with_logits(labels = labels_,logits = logits)
cost = tf.reduce_mean(ce)
# training optimizer
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
def get_batches(x, y, n_batches=10):
    """Return a generator that yields batches from arrays x and y."""
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
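The batching logic above deliberately makes the last batch absorb any leftover rows when the data length isn't divisible by the batch count. A self-contained check (repeating the same generator over a small array):

```python
import numpy as np

def get_batches(x, y, n_batches=10):
    """Return a generator that yields batches from arrays x and y."""
    batch_size = len(x) // n_batches
    for ii in range(0, n_batches * batch_size, batch_size):
        if ii != (n_batches - 1) * batch_size:
            X, Y = x[ii: ii + batch_size], y[ii: ii + batch_size]
        else:
            # the last batch also grabs the leftover rows
            X, Y = x[ii:], y[ii:]
        yield X, Y

x = np.arange(25)
y = np.arange(25)
sizes = [len(X) for X, Y in get_batches(x, y, n_batches=4)]
print(sizes)  # [6, 6, 6, 7] -- all 25 rows accounted for
```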
num_epochs=100
saver = tf.train.Saver()
with tf.Session() as sess:
# TODO: Your training code here
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
for batch_i,(x,y) in enumerate(get_batches(train_x,train_y)):
feed = {inputs_: x,
labels_: y}
_,train_loss = sess.run([optimizer,cost],feed_dict=feed)
print("Epoch:{}/{}".format(epoch_i,num_epochs),"Iteration:{}".format(batch_i),"train loss:{:.5f}".format(train_loss))
if batch_i % 5 == 0: #every 5 iterations check accuracy on validation set
feed = {inputs_: val_x, labels_: val_y}
val_acc = sess.run(accuracy,feed_dict=feed)
print("------Epoch:{}/{}".format(epoch_i, num_epochs), "Iteration:{}".format(batch_i),
"Validation acc:{:.4f}".format(val_acc))
saver.save(sess, "checkpoints/flowers.ckpt")
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
print('"vgg" object already exists. Will not create again.')
else:
#create vgg
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading the raw data and creating epochs
Step2: Compute covariances
Step3: Compute some source estimates
Step4: Plot source estimates
Step5: LCMV
Step6: dSPM
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Luke Bloy <luke.bloy@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
import mne
from mne.cov import compute_covariance
from mne.datasets import somato
from mne.time_frequency import csd_morlet
from mne.beamformer import (make_dics, apply_dics_csd, make_lcmv,
apply_lcmv_cov)
from mne.minimum_norm import (make_inverse_operator, apply_inverse_cov)
print(__doc__)
data_path = somato.data_path()
subject = '01'
task = 'somato'
raw_fname = op.join(data_path, 'sub-{}'.format(subject), 'meg',
'sub-{}_task-{}_meg.fif'.format(subject, task))
# crop to 5 minutes to save memory
raw = mne.io.read_raw_fif(raw_fname).crop(0, 300)
# We are interested in the beta band (12-30 Hz)
raw.load_data().filter(12, 30)
# The DICS beamformer currently only supports a single sensor type.
# We'll use the gradiometers in this example.
picks = mne.pick_types(raw.info, meg='grad', exclude='bads')
# Read epochs
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id=1, tmin=-1.5, tmax=2, picks=picks,
preload=True, decim=3)
# Read forward operator and point to freesurfer subject directory
fname_fwd = op.join(data_path, 'derivatives', 'sub-{}'.format(subject),
'sub-{}_task-{}-fwd.fif'.format(subject, task))
subjects_dir = op.join(data_path, 'derivatives', 'freesurfer', 'subjects')
fwd = mne.read_forward_solution(fname_fwd)
active_win = (0.5, 1.5)
baseline_win = (-1, 0)
baseline_cov = compute_covariance(epochs, tmin=baseline_win[0],
tmax=baseline_win[1], method='shrunk',
rank=None)
active_cov = compute_covariance(epochs, tmin=active_win[0], tmax=active_win[1],
method='shrunk', rank=None)
# Weighted averaging is already in the addition of covariance objects.
common_cov = baseline_cov + active_cov
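Conceptually, each covariance object above is a channels-by-channels matrix estimated by pooling time samples from the epochs in its window (MNE's 'shrunk' method then regularizes it). A simplified numpy-only sketch of the empirical version, on synthetic epoched data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_epochs, n_channels, n_times = 5, 3, 100

# synthetic epoched data: (epochs, channels, times)
epochs_data = rng.standard_normal((n_epochs, n_channels, n_times))

# empirical channel covariance, pooling all time samples across epochs
# (a simplified stand-in for compute_covariance's 'empirical' estimator,
# without MNE's shrinkage or rank handling)
pooled = epochs_data.transpose(1, 0, 2).reshape(n_channels, -1)
cov = np.cov(pooled)

print(cov.shape)  # (3, 3): one row/column per channel
```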
def _gen_dics(active_win, baseline_win, epochs):
freqs = np.logspace(np.log10(12), np.log10(30), 9)
csd = csd_morlet(epochs, freqs, tmin=-1, tmax=1.5, decim=20)
csd_baseline = csd_morlet(epochs, freqs, tmin=baseline_win[0],
tmax=baseline_win[1], decim=20)
csd_ers = csd_morlet(epochs, freqs, tmin=active_win[0], tmax=active_win[1],
decim=20)
filters = make_dics(epochs.info, fwd, csd.mean(), pick_ori='max-power',
reduce_rank=True, real_filter=True)
stc_base, freqs = apply_dics_csd(csd_baseline.mean(), filters)
stc_act, freqs = apply_dics_csd(csd_ers.mean(), filters)
stc_act /= stc_base
return stc_act
# generate lcmv source estimate
def _gen_lcmv(active_cov, baseline_cov, common_cov):
filters = make_lcmv(epochs.info, fwd, common_cov, reg=0.05,
noise_cov=None, pick_ori='max-power')
stc_base = apply_lcmv_cov(baseline_cov, filters)
stc_act = apply_lcmv_cov(active_cov, filters)
stc_act /= stc_base
return stc_act
# generate mne/dSPM source estimate
def _gen_mne(active_cov, baseline_cov, common_cov, fwd, info, method='dSPM'):
inverse_operator = make_inverse_operator(info, fwd, common_cov)
stc_act = apply_inverse_cov(active_cov, info, inverse_operator,
method=method, verbose=True)
stc_base = apply_inverse_cov(baseline_cov, info, inverse_operator,
method=method, verbose=True)
stc_act /= stc_base
return stc_act
# Compute source estimates
stc_dics = _gen_dics(active_win, baseline_win, epochs)
stc_lcmv = _gen_lcmv(active_cov, baseline_cov, common_cov)
stc_dspm = _gen_mne(active_cov, baseline_cov, common_cov, fwd, epochs.info)
brain_dics = stc_dics.plot(
hemi='rh', subjects_dir=subjects_dir, subject=subject,
time_label='DICS source power in the 12-30 Hz frequency band')
brain_lcmv = stc_lcmv.plot(
hemi='rh', subjects_dir=subjects_dir, subject=subject,
time_label='LCMV source power in the 12-30 Hz frequency band')
brain_dspm = stc_dspm.plot(
hemi='rh', subjects_dir=subjects_dir, subject=subject,
time_label='dSPM source power in the 12-30 Hz frequency band')
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Q. Is the following call sequence acceptable?
Step2: No! The following are all OK!
Step3: Keyword arguments
Step4: The first two arguments in this case are "positional arguments."
Step5: $$f(t)=A \cdot e^{-at} \cdot \sin({\omega \cdot t})$$
Step6: Q. What will this yield?
Step7: $$e^x = \sum_{i=0}^{\infty} \frac{x^i}{i!}$$
Step8: Q. What's missing? Have there been ambiguities in the functions we've written?
Step10: Doc-strings (review)
Step12: Q. So, what should doc strings contain in general?
Step14: Functions as arguments to other functions
Step15: We'll work with the following function
Step16: Q. What should this yield?
Step17: Lambda functions
<ASSISTANT_TASK:>
Python Code:
from math import exp
# Could avoid this by using our constants.py module!
h = 6.626e-34 # MKS
k = 1.38e-23
c = 3.00e8
def intensity(wave, temp, mydefault=0):
wavelength = wave / 1e10
B = 2 * h * c**2 / (wavelength**5 * (exp(h * c / (wavelength * k * temp)) - 1))
return B
mywave = 5000
intensity(mywave, temp=5800.0)
print(intensity(5000.0, temp=5800.0))
print(intensity(wave=5000.0, temp=5800.0))
print(intensity(5000.0, 5800.0))
intensity(5800.0, 5000.0)
def testFunc(arg1, arg2, kwarg1=True, kwarg2=4.2):
print(arg1, arg2, kwarg1, kwarg2)
testFunc(1.0, 2.0)
testFunc(1.0, 2.0, 3.0) # NOTE! I do not HAVE TO use the keyword access!
from math import pi, exp, sin
tau = 2*pi
# t is positional argument, others are keyword arguments
def f(t, A=1, a=1, omega=tau):
return A * exp(-a * t) * sin(omega * t)
v1 = f(0.01) # Only the time is specified
v1
v1 = f(A=2, t=0.01)
v1
import math
def exponential(x, epsilon=1e-6):
total = 1.0
i = 1
term = (x**i) / math.factorial(i)
while abs(term) > epsilon:
term = x**i / math.factorial(i)
total += term
i += 1
return total, i
total, i = exponential(2.4, epsilon=1e-2)
print(exponential(2.4, epsilon=1e-4))
print(exponential(2.4))
print(math.exp(2.4))
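The same series can be summed without recomputing the factorial from scratch each pass, since each term is just the previous term times x/i. This incremental variant is a sketch of an alternative, not the assignment's reference solution:

```python
import math

def exp_series(x, epsilon=1e-6):
    # exp(x) = sum over i of x**i / i!; build each term from the previous one
    total, i, term = 1.0, 1, x
    while abs(term) > epsilon:
        total += term
        i += 1
        term = term * x / i
    return total

print(exp_series(2.4), math.exp(2.4))
```

The result agrees with `math.exp` to within the tolerance, while avoiding both `math.factorial` calls and repeated exponentiation.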
from math import exp
h = 6.626e-34 # MKS
k = 1.38e-23
c = 3.00e8
def intensity(wave, temp):
wavelength = wave / 1e10
inten = 2 * h * c**2 / (wavelength**5 * (exp(h * c / (wavelength * k * temp)) - 1))
return inten
import math
def exponential(x, epsilon=1e-6):
    """
    This function calculates exp(x) to a tolerance
    of epsilon.

    Parameters
    ----------
    x : exponent
    epsilon : tolerance

    returns : exp(x), number of terms included (after 1.0)

    Example from interactive shell:

    >>> sum, i = exponential(0.15)
    >>> print i-1, sum
    5, 1.16183422656
    """
total = 1.0
i = 1
term = (x**i) / math.factorial(i)
while abs(term) > epsilon:
term = x**i / math.factorial(i)
total += term
i += 1
return total, i
total, i = exponential(0.15)
i - 1, total
print(exponential.__doc__)
# How handy!!
# exponential.<TAB>  (typing this in IPython tab-completes the function's attributes)
help(exponential)
from math import exp
h = 6.626e-34 # MKS
k = 1.38e-23
c = 3.00e8
def intensity(wave, temp):
    """
    Compute the value of the Planck function.

    Parameters
    ----------
    wave : int, float
        Wavelength at which to compute Planck function, in Angstroms.
    temp : int, float
        Temperature of blackbody, in Kelvin.

    Example
    -------
    Radiance at 5000 Angstrom, 5800 K blackbody:

    >>> radiance = intensity(wave=5000., temp=5800.)

    Returns
    -------
    Radiance of blackbody in W / sr / m^3.
    """
wavelength = wave / 1e10
inten = 2 * h * c**2 / (wavelength**5 * (exp(h * c / (wavelength * k * temp)) - 1))
return inten
def diff2(f, x, h=1e-6):
    """
    Calculates a second derivative.

    f: the function (of one variable) to be differentiated
    x: value at which the function is evaluated
    h: small difference in x
    """
r = (f(x-h) + f(x + h) - 2.0 * f(x)) / float(h**2)
return r
def g(t):
return t**2
t = 3.0
gPrimePrime = diff2(g, t)
print("Second derivative of g=t^2 evaluated at t=%g" % t)
"g(%f)=%.8g" % (t, gPrimePrime)
g = lambda t: t**2
print(g)
g(2.0)
def g(t):
return t**2
# This simply calculates the second derivative of t^2 evaluated at t=56.
test = diff2(lambda t: t**2, 56)
# Recall the first argument to diff2 was the function
test
def cubed(x):
return x**3
cubed(2)
y = lambda x: x**3
y(2)
s = 'hans_gruber'
s.split('_')
# Illustrative pseudocode only -- df and do_stuff are not defined here:
# df.sort().groupby().do_stuff(lambda x: x.split('-')[0])
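The split-plus-lambda idea can be shown end to end with a standalone sketch (the names below are invented for illustration):

```python
names = ["hans_gruber", "john_mcclane", "holly_gennero"]

# sort by the substring after the underscore, using a lambda as the sort key
by_last = sorted(names, key=lambda s: s.split("_")[1])
print(by_last)
```

This is the most common everyday use of `lambda`: a tiny throwaway function passed as an argument, exactly as `diff2` received one above.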
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Use join='inner' and look at the result; here it is the same as outer!
Step2: Change the index of df3 and look at the result
Step3: Modify df5 so that a few rows match df2, then look at the result again
Step4: Like numpy.concatenate(), pandas.concat takes a list or dict of objects and concatenates them; note that every object in the list or dict should be homogeneous. Concatenation essentially stacks the objects in the list along one axis, according to set logic applied on the other axis.
Step5: Note that the index in the result is hierarchical.
Step6: Note: concat() copies all of the data when concatenating, so be tolerant of its performance. For convenience, a list comprehension can be used when concatenating many datasets.
Step7: Set logic on the other axes
Step8: Explanation:
Step9: Now look at an example with join='inner':
Step10: Explanation:
Step11: Explanation:
Step12: 
Step13: The append() method can concatenate several objects at once.
Step14: append() does not operate on df1 directly; the concatenation is performed on a copy of df1.
Step15: DataFrame.append also has an ignore_index parameter
Step16: Concatenating Series and DataFrame objects together
Step17: If a Series column has no name, numbers are generated as column names by default.
Step18: Using the keys parameter when concatenating Series objects
Step19: Use the keys parameter to set the column names of the resulting DataFrame
Step20: Revisiting our first example
Step21: A dict can replace the frames and keys arguments in the concat call above
Step22: 
Step23: Adding rows to a DataFrame
Step24: Database-style DataFrame join/merge operations
Step25: Explanation
Step26: Explanation
Step27: Explanation:
Step28: 
Step29: 
Step30: Notes on the indicator parameter of merge
Step31: As mentioned above, the indicator parameter can also take a string; nothing special happens, the given string is simply used as the name of the '_merge' column in the result.
Step32: Joining on the index
Step33: 
Step34: 
Step35: merge provides left_index and right_index parameters that likewise implement index-based joins.
Step36: 
Step37: Apart from the join keys, duplicated column names are distinguished with the suffixes parameter
Step38: By default:
Step39: DataFrame.join also provides lsuffix and rsuffix parameters for the same purpose
Step40: Joining multiple DataFrame objects
Step41: Merging ordered data
Step42: Using merge-style methods to fill NaN
Step43: First, using combine_first:
Step44: df1 and df2 have exactly the same column names and similar index values.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
df1 = pd.DataFrame({'A':['A0','A1','A2','A3'],
'B':['B0','B1','B2','B3'],
'C':['C0','C1','C2','C3'],
'D':['D0','D1','D2','D3']},
index=[0,1,2,3])
df1
df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],
'B': ['B4', 'B5', 'B6', 'B7'],
'C': ['C4', 'C5', 'C6', 'C7'],
'D': ['D4', 'D5', 'D6', 'D7']},
index=[4, 5, 6, 7])
df2
df3 = pd.DataFrame({'A': ['A8', 'A9', 'A10', 'A11'],
'B': ['B8', 'B9', 'B10', 'B11'],
'C': ['C8', 'C9', 'C10', 'C11'],
'D': ['D8', 'D9', 'D10', 'D11']},
index=[8, 9, 10, 11])
df3
frames = [df1, df2, df3]  # build a list object holding the three DataFrames
results = pd.concat(frames)
results
res = pd.concat(frames, join='inner')
res
df5 = pd.DataFrame({'A': ['A8', 'A9', 'A10', 'A11'],
'B': ['B8', 'B9', 'B10', 'B11'],
'C': ['C8', 'C9', 'C10', 'C11'],
'D': ['D8', 'D9', 'D10', 'D11']},
index=[2, 3, 10, 11])
df5
df1
results2 = pd.concat([df1,df2,df5])
results2
df5 = pd.DataFrame({'A': ['A2', 'A9', 'A10', 'A11'],
'B': ['B2', 'B9', 'B10', 'B11'],
'C': ['C2', 'C9', 'C10', 'C11'],
'D': ['D2', 'D9', 'D10', 'D11']},
index=[2, 3, 10, 11])
df5
results3 = pd.concat([df1,df2,df5])
results3
# hierarchical index: keys give each object in the list a label, so each sub-Series/DataFrame can still be retrieved conveniently from the result
result = pd.concat(frames, keys=['x','y','z'])
result
result.ix['y'] #查看df2
# Illustrative pseudocode -- process_your_file and files are placeholders:
# frames = [process_your_file(f) for f in files]
# result = pd.concat(frames)
df4 = pd.DataFrame({'B': ['B2', 'B3', 'B6', 'B7'],
'D': ['D2', 'D3', 'D6', 'D7'],
'F': ['F2', 'F3', 'F6', 'F7']},
index=[2, 3, 6, 7])
result = pd.concat([df1, df4], axis=1) #这里axis=1
df4 = pd.DataFrame({'B': ['B2', 'B3', 'B6', 'B7'],
'D': ['D2', 'D3', 'D6', 'D7'],
'F': ['F2', 'F3', 'F6', 'F7']},
index=[3, 2, 7, 6]) #注意这里的索引值序列
df4
result = pd.concat([df1, df4], axis=1)
result #结果中的索引值已排序
result = pd.concat([df1, df4], axis=1, join='inner')
result
result = pd.concat([df1, df4], axis=1, join_axes=[df1.index])  # note: join_axes was removed in pandas >= 1.0; use .reindex(df1.index) instead
result = df1.append(df2)
result = df1.append(df4)
# df1 and df4 have overlapping index values, so the final result has duplicated index labels!
result = df1.append([df2,df3])
df1
df4
result = pd.concat([df1, df4],ignore_index=True)
result
df1
df4
result2 = pd.concat([df1, df4], axis=1, ignore_index=True)  # 99.9999% of the time, ignore_index is NOT recommended for column-wise concatenation
result2
result = df1.append(df4, ignore_index=True)
s1 = pd.Series(['X0','X1','X2','X3','X4'],name='X')
s1
result = pd.concat([df1,s1],axis=1)
result
s2 = pd.Series(['_0', '_1', '_2', '_3'])
result = pd.concat([df1, s2, s2, s2], axis=1)
s3 = pd.Series([0, 1, 2, 3], name='foo')
s4 = pd.Series([0, 1, 2, 3])
s5 = pd.Series([0, 1, 4, 5])
pd.concat([s3,s4,s5],axis=1)
pd.concat([s3,s4,s5],axis=1,keys=['red','blue','yellos'])
result = pd.concat(frames, keys=['x', 'y', 'z'])
pieces = {'x':df1, 'y':df2, 'z':df3}
result = pd.concat(pieces)
result
result = pd.concat(pieces, keys=['z', 'y'])  # only concatenate the DataFrames mapped to 'z' and 'y'
result
s2 = pd.Series(['X0', 'X1', 'X2', 'X3'], index=['A', 'B', 'C', 'D'])
s2
df1
result = df1.append(s2, ignore_index=True)  # ignoring the existing index values is recommended here
result
dicts = [{'A': 1, 'B': 2, 'C': 3, 'X': 4},
{'A': 5, 'B': 6, 'C': 7, 'Y': 8}]
result = df1.append(dicts, ignore_index=True)
left = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']
})
right = pd.DataFrame({'key': ['K0', 'K2', 'K1', 'K3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']
})
pd.merge(left, right, on='key')
left = pd.DataFrame({
'key1':['K0', 'K0', 'K1', 'K2'],
'key2':['K0', 'K1', 'K0', 'K1'],
'A':['A0', 'A1', 'A2', 'A3'],
'B':['B0', 'B1', 'B2', 'B3']
})
right = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'],
'key2': ['K0', 'K0', 'K0', 'K0'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']})
pd.merge(left, right, on=['key1', 'key2'])
result = pd.merge(left, right, how='left', on=['key1', 'key2'])
result = pd.merge(left, right, how='right', on=['key1', 'key2'])
result = pd.merge(left, right, how='outer', on=['key1', 'key2'])
result = pd.merge(left, right, how='inner', on=['key1', 'key2'])
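The four `how` modes differ only in which keys survive the join. A minimal self-contained check of the resulting row counts (toy frames, not the ones above):

```python
import pandas as pd

l = pd.DataFrame({"key": ["K0", "K1"], "A": [1, 2]})
r = pd.DataFrame({"key": ["K1", "K2"], "B": [3, 4]})

assert len(pd.merge(l, r, on="key", how="inner")) == 1  # keys in both: K1
assert len(pd.merge(l, r, on="key", how="left"))  == 2  # all left keys
assert len(pd.merge(l, r, on="key", how="right")) == 2  # all right keys
assert len(pd.merge(l, r, on="key", how="outer")) == 3  # union: K0, K1, K2
```

Unmatched keys contribute rows padded with NaN in the columns coming from the other frame.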
df1 = pd.DataFrame({'col1':[0,1], 'col_left':['a','b']})
df2 = pd.DataFrame({'col1':[1,2,2],'col_right':[2,2,2]})
df1
df2
pd.merge(df1, df2, on='col1', how='outer', indicator=True)
pd.merge(df1, df2, on='col1', how='outer', indicator='indicator_column')
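A compact check of what `indicator` reports: the generated `_merge` column is categorical with the three values below (toy frames for illustration):

```python
import pandas as pd

a = pd.DataFrame({"col1": [0, 1]})
b = pd.DataFrame({"col1": [1, 2]})

res = pd.merge(a, b, on="col1", how="outer", indicator=True)
# 0 exists only in a, 1 in both, 2 only in b
print(sorted(res["_merge"].astype(str)))
```

This makes `indicator` a quick way to audit an outer join for unmatched keys on either side.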
left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
'B': ['B0', 'B1', 'B2']},
index=['K0', 'K1', 'K2'])
right = pd.DataFrame({'C': ['C0', 'C2', 'C3'],
'D': ['D0', 'D2', 'D3']},
index=['K0', 'K2', 'K3'])
result = left.join(right)
result = left.join(right, how='outer')
result = left.join(right, how='inner')
result = pd.merge(left, right, left_index=True, right_index=True, how='outer')
result = pd.merge(left, right, left_index=True, right_index=True, how='inner')
left = pd.DataFrame({'k': ['K0', 'K1', 'K2'], 'v': [1, 2, 3]})
right = pd.DataFrame({'k': ['K0', 'K0', 'K3'], 'v': [4, 5, 6]})
result = pd.merge(left, right, on='k')
result = pd.merge(left, right, on='k', suffixes=['_l', '_r'])  # set the suffixes parameter
left = left.set_index('k')
right = right.set_index('k')
result = left.join(right, lsuffix='_l', rsuffix='_r')
right2 = pd.DataFrame({'v': [7, 8, 9]}, index=['K1', 'K1', 'K2'])
left.join([right, right2])
left = pd.DataFrame({'k': ['K0', 'K1', 'K1', 'K2'],
'lv': [1, 2, 3, 4],
's': ['a', 'b', 'c', 'd']})
right = pd.DataFrame({'k': ['K1', 'K2', 'K4'],
'rv': [1, 2, 3]})
left
right
pd.ordered_merge(left, right, fill_method='ffill', left_by='s')  # in modern pandas this is pd.merge_ordered(...)
df1 = pd.DataFrame([[np.nan, 3., 5.], [-4.6, np.nan, np.nan],
[np.nan, 7., np.nan]])
df1
df2 = pd.DataFrame([[-42.6, np.nan, -8.2], [-5., 1.6, 4]],
index=[1, 2])
df2
df1.combine_first(df2)
df1
df1.update(df2)
df1
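The difference between the two is easiest to see on Series: `combine_first` only fills holes in the caller, while `update` overwrites wherever the other object has data (a standalone sketch):

```python
import pandas as pd
import numpy as np

a = pd.Series([np.nan, 2.0, np.nan])
b = pd.Series([1.0, 20.0, 3.0])

filled = a.combine_first(b)   # NaNs in a filled from b; a's own 2.0 survives
print(filled.tolist())

a.update(b)                   # in place: every non-NaN value in b wins
print(a.tolist())
```

Note that `update` modifies the caller in place and returns None, which is why the `df1.update(df2)` call above is not assigned to a variable.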
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and process review dataset
Step2: Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations
Step3: Now, let us take a look at what the dataset looks like (Note
Step4: Train-Validation split
Step5: Convert SFrame to NumPy array
Step6: We convert both the training and validation sets into NumPy arrays.
Step7: Building on logistic regression with no L2 penalty assignment
Step8: Adding L2 penalty
Step9: To verify the correctness of the gradient ascent algorithm, we provide a function for computing log likelihood.
Step10: The logistic regression function looks almost like the one in the last assignment, with a minor modification to account for the L2 penalty.
Step11: Explore effects of L2 regularization
Step12: Compare coefficients
Step13: Now, let's run the function add_coefficients_to_table for each of the L2 penalty strengths.
Step14: Using the coefficients trained with L2 penalty 0, find the 5 most positive words (with largest positive coefficients). Save them to positive_words. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to negative_words.
Step15: Let us observe the effect of increasing L2 penalty on the 10 words just selected. We provide you with a utility function to plot the coefficient path.
Step16: Run the following cell to generate the plot of the Coefficient path.
Step17: Measuring accuracy
Step18: Below, we compare the accuracy on the training data and validation data for all the models that were trained in this assignment. We first calculate the accuracy values and then build a simple report summarizing the performance for the various models.
|
<ASSISTANT_TASK:>
Python Code:
# Run some setup code for this notebook.
from __future__ import division
import sys
import os
sys.path.append('..')
import graphlab
import numpy as np
products = graphlab.SFrame('datasets/')
# feature processing
# ---------------------------------------------------------------
import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
important_words = json.load(f)
important_words = [str(s) for s in important_words]
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
# Remove punctuation.
products['review_clean'] = products['review'].apply(remove_punctuation)
# Split out the words into individual columns
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
products
train_data, validation_data = products.random_split(.8, seed=2)
print 'Training set : %d data points' % len(train_data)
print 'Validation set : %d data points' % len(validation_data)
from algorithms.sframe_get_numpy_data import get_numpy_data
feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')
feature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment')
'''
produces probabilistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
scores = np.dot(feature_matrix, coefficients)
predictions = 1.0 / (1 + np.exp(-1*scores))
return predictions
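A quick sanity check of the sigmoid link (a standalone sketch; the toy matrix and weights are invented): with all coefficients zero, every score is zero, so every predicted probability should be exactly 0.5.

```python
import numpy as np

def predict_probability(feature_matrix, coefficients):
    # sigmoid of the linear scores
    scores = np.dot(feature_matrix, coefficients)
    return 1.0 / (1 + np.exp(-scores))

X = np.array([[1.0, 0.0], [1.0, 2.0]])
w = np.zeros(2)
p = predict_probability(X, w)
print(p)
```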
def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant):
derivative = np.dot(errors, feature)
if not feature_is_constant:
derivative = derivative -2.0*l2_penalty*coefficient
return derivative
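A small standalone check (toy numbers, invented for illustration) that the L2 term is applied to regular coefficients but skipped for the intercept, as the `feature_is_constant` flag intends:

```python
import numpy as np

def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant):
    derivative = np.dot(errors, feature)
    if not feature_is_constant:
        derivative -= 2.0 * l2_penalty * coefficient
    return derivative

errors = np.array([0.5, -0.5])    # errors that cancel, so the data term is 0
feature = np.array([1.0, 1.0])

print(feature_derivative_with_L2(errors, feature, 1.0, 10.0, True))   # intercept: no penalty
print(feature_derivative_with_L2(errors, feature, 1.0, 10.0, False))  # regular feature: -2*10*1
```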
def compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
lp = np.sum((indicator-1)*scores - np.log(1. + np.exp(-scores))) - l2_penalty*np.sum(coefficients[1:]**2)
return lp
def logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients, step_size, l2_penalty, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in xrange(max_iter):
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
is_intercept = (j == 0)
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j].
derivative = feature_derivative_with_L2(errors, feature_matrix[:,j],
coefficients[j], l2_penalty, is_intercept) #feature_is_constant
coefficients[j] = coefficients[j] + step_size*derivative
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty)
print 'iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp)
return coefficients
# run with L2 = 0
coefficients_0_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=0, max_iter=501)
# run with L2 = 4
coefficients_4_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=4, max_iter=501)
# run with L2 = 10
coefficients_10_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=10, max_iter=501)
# run with L2 = 1e2
coefficients_1e2_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e2, max_iter=501)
# run with L2 = 1e3
coefficients_1e3_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e3, max_iter=501)
# run with L2 = 1e5
coefficients_1e5_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e5, max_iter=501)
table = graphlab.SFrame({'word': ['(intercept)'] + important_words})
def add_coefficients_to_table(coefficients, column_name):
table[column_name] = coefficients
return table
add_coefficients_to_table(coefficients_0_penalty, 'coefficients [L2=0]')
add_coefficients_to_table(coefficients_4_penalty, 'coefficients [L2=4]')
add_coefficients_to_table(coefficients_10_penalty, 'coefficients [L2=10]')
add_coefficients_to_table(coefficients_1e2_penalty, 'coefficients [L2=1e2]')
add_coefficients_to_table(coefficients_1e3_penalty, 'coefficients [L2=1e3]')
add_coefficients_to_table(coefficients_1e5_penalty, 'coefficients [L2=1e5]')
sorted_frame = table.sort(['coefficients [L2=0]'], ascending=False)
sorted_table = sorted_frame['word']
positive_words = sorted_table[:10]
print(positive_words)
negative_words = sorted_table[-10:]
print(negative_words)
sorted_frame['word']
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = 10, 6
def make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list):
cmap_positive = plt.get_cmap('Reds')
cmap_negative = plt.get_cmap('Blues')
xx = l2_penalty_list
plt.plot(xx, [0.]*len(xx), '--', lw=1, color='k')
table_positive_words = table.filter_by(column_name='word', values=positive_words)
table_negative_words = table.filter_by(column_name='word', values=negative_words)
del table_positive_words['word']
del table_negative_words['word']
for i in xrange(len(positive_words)):
color = cmap_positive(0.8*((i+1)/(len(positive_words)*1.2)+0.15))
plt.plot(xx, table_positive_words[i:i+1].to_numpy().flatten(),
'-', label=positive_words[i], linewidth=4.0, color=color)
for i in xrange(len(negative_words)):
color = cmap_negative(0.8*((i+1)/(len(negative_words)*1.2)+0.15))
plt.plot(xx, table_negative_words[i:i+1].to_numpy().flatten(),
'-', label=negative_words[i], linewidth=4.0, color=color)
plt.legend(loc='best', ncol=3, prop={'size':16}, columnspacing=0.5)
plt.axis([1, 1e5, -1, 2])
plt.title('Coefficient path')
plt.xlabel('L2 penalty ($\lambda$)')
plt.ylabel('Coefficient value')
plt.xscale('log')
plt.rcParams.update({'font.size': 18})
plt.tight_layout()
make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list=[0, 4, 10, 1e2, 1e3, 1e5])
def get_classification_accuracy(feature_matrix, sentiment, coefficients):
scores = np.dot(feature_matrix, coefficients)
apply_threshold = np.vectorize(lambda x: 1. if x > 0 else -1.)
predictions = apply_threshold(scores)
num_correct = (predictions == sentiment).sum()
accuracy = num_correct / len(feature_matrix)
return accuracy
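The `np.vectorize` threshold above can equivalently be written with `np.where`; a quick standalone check of the counting logic on invented scores:

```python
import numpy as np

scores = np.array([2.0, -1.0, 0.5, -0.2])
predictions = np.where(scores > 0.0, 1, -1)   # threshold scores at zero
sentiment = np.array([1, -1, -1, -1])

accuracy = (predictions == sentiment).mean()  # 3 of 4 predictions match
print(accuracy)
```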
train_accuracy = {}
train_accuracy[0] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_0_penalty)
train_accuracy[4] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_4_penalty)
train_accuracy[10] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_10_penalty)
train_accuracy[1e2] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e2_penalty)
train_accuracy[1e3] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e3_penalty)
train_accuracy[1e5] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e5_penalty)
validation_accuracy = {}
validation_accuracy[0] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_0_penalty)
validation_accuracy[4] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_4_penalty)
validation_accuracy[10] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_10_penalty)
validation_accuracy[1e2] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e2_penalty)
validation_accuracy[1e3] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e3_penalty)
validation_accuracy[1e5] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e5_penalty)
# Build a simple report
for key in sorted(validation_accuracy.keys()):
print "L2 penalty = %g" % key
print "train accuracy = %s, validation_accuracy = %s" % (train_accuracy[key], validation_accuracy[key])
print "--------------------------------------------------------------------------------"
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: And the brython js lib
Step2: [It is highly recommended that, at least, you read the brythonmagic docs to understand what it does. It is also recommended to have a quick look at the Brython docs].
Step4: First example
Step5: Now the interesting part. To make Highcharts available to Brython we need to 'load' the Highcharts object/namespace into Brython using the window object.
Step7: Pretty simple!!
Step8: I got it!!! It's the ugliest chart in the world!!!
Step10: And now, let's repeat our first example again after the global configuration
Step12: Much better than the ugly chart.
Step14:
Step16: An array of arrays with two values. In this case, the first value is the x value and the second is the y value. If the first value is a string, it is applied as the name of the point, and the x value is incremented following the above rules and the string will be used in the tooltip of the point. For range series, the arrays will be interpreted as [x, low, high]. In this cases, the x value can be skipped altogether to make use of pointStart and pointRange.
Step18: If the x values are not a valid number it will be used as a label in the tooltip and the x values will start by 0 and incrementing by 1. For example
Step20: An array of objects with named values. In this case the objects are point configuration objects. Range series values are given by low and high.
Step22: On each data value we have used a name (shown in the tooltip), a color (used in scatter, bar, column, bubble,..., charts but not in line or area charts, for example) and the x and y values.
Step24:
Step26:
Step28:
Step30:
Step32:
Step34:
Step36: Column and bar charts
Step38:
Step40:
Step42:
Step44: A bubble plot with different colors depending on the size for each record would be as follows
Step45: 3d Charts
Step47: With the 3d 'module' loaded we can see a new example where some interactivity is added
Step48:
Step50: A simple heatmap would be as follows
Step52:
Step54: Ok, not very exciting. We initialise with an empty dictionary of options and with the show method we pass the options dictionary to Highcharts and we can plot an empty chart. As stated before, not very fascinating. Let's add options to include a title and a subtitle.
Step56: Still not very funny. If you see the two new methods added, title and subtitle, there is one line commented. We can uncomment these lines if we don't want to call the show method but everytime we use a method the chart will be plotted. I didn't see any overhead doing this but let's avoid it for the moment.
Step58: Wow!!, with less than 40 lines of Brython code we have a wrapper to do very simple interactive charts using Highcharts in a pythonic way. We added the label keyword in the plot method. In Matplotlib, the label is used by the legend in the case we want to add a legend. Let's add a legend method to provide the label keyword some utility.
Step60: The behaviour is not similar to that in the pyplot module of the matplotlib library but we can get some basic functionality. We can add a scatter method to combine line plots with scatter plots.
|
<ASSISTANT_TASK:>
Python Code:
%load_ext brythonmagic
from brythonmagic import load_brython_dev
load_brython_dev()
from brythonmagic import load_js_lib
load_js_lib("https://cdnjs.cloudflare.com/ajax/libs/highcharts/5.0.7/highcharts.js")
html = """<div id="hc_ex1" style="width: 700px; height: 300px;"></div>"""
%%brython -h html -p
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
config = {
'chart':{
'renderTo': 'hc_ex1'
},
'title': {
'text': 'Monthly Average Temperature',
'x': -20 #center
},
'subtitle': {
'text': 'Source: WorldClimate.com',
'x': -20
},
'xAxis': {
'categories': ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
},
'yAxis': {
'title': {
'text': 'Temperature (°C)'
},
'plotLines': [{
'value': 0,
'width': 1,
'color': '#808080'
}]
},
'tooltip': {
'valueSuffix': '°C'
},
'legend': {
'layout': 'vertical',
'align': 'right',
'verticalAlign': 'middle',
'borderWidth': 0
},
'series': [{
'name': 'Tokyo',
'data': [7.0, 6.9, 9.5, 14.5, 18.2, 21.5, 25.2, 26.5, 23.3, 18.3, 13.9, 9.6]
}]
}
hc(config)
html = """<div id="hc_ex2" style="width: 700px; height: 300px;"></div>"""
%%brython -h html
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
config = {
'chart': {
'renderTo': 'hc_ex2',
'backgroundColor': {
'linearGradient': [0, 0, 500, 500],
'stops': [[0, 'rgb(255, 255, 255)'],
[1, 'rgb(200, 200, 255)']]
},
'borderRadius': 10
},
'title': {
'align': 'left',
'text': 'My dummy title',
'style': { "color": "green", "fontSize": "20px" }
},
'subtitle': {
'align': 'right',
'text': 'Ugly subtitle',
'style': { "color": "orange", "fontSize": "12px" }
},
'legend': {
'backgroundColor': 'green',
'borderColor': 'yellow',
'borderRadius': 10,
'borderWidth': 3,
},
'series': [{
'data': [1,2,3,4],
'type': 'line',
'name': 'Name of the series',
'color': 'orange',
}],
'tooltip': {
'backgroundColor': 'gray',
'borderColor': 'yellow',
'borderRadius': 10,
'borderWidth': 3,
},
'xAxis': {
'categories': ['data'] * 4,
'lineWidth': 5,
'lineColor': 'violet',
'gridLineColor': 'violet',
'gridLineWidth': 3,
'title': {'text': 'X axis title'}
},
'yAxis': {
'lineWidth': 5,
'lineColor': 'blue',
'gridLineColor': 'blue',
'gridLineWidth': 3,
'title': {'text': 'Y axis title'}
},
'credits': {
'text': "Pybonacci rules!",
'href': 'https://twitter.com/pybonacci'
}
}
hc(config)
%%brython -s globaloptions
from browser import window
Highcharts = window.Highcharts
global_options = {
'colors': ['rgb(0, 107, 164)', 'rgb(255, 128, 114)',
'rgb(171, 171, 171)', 'rgb(89, 89, 89)',
'rgb(95, 158, 209)', 'rgb(200, 82, 0)',
'rgb(137, 137, 137)', 'rgb(162, 200, 236)',
'rgb(256, 188, 121)', 'rgb(207, 207, 207)'],
'chart':{
'plotBackgroundColor': 'rgb(229, 229, 229)'
},
'credits':{
'enabled': False
},
'legend':{
'align': 'right',
'verticalAlign': 'middle',
'layout': 'vertical',
'borderWidth': 0,
'enabled': True
},
'plotOptions':{
'area': {
'fillOpacity': 0.5,
'marker': {'enabled': False},
},
'arearange': {
'fillOpacity': 0.5,
'marker': {'enabled': False},
},
'areaspline': {
'fillOpacity': 0.5,
'marker': {'enabled': False},
},
'areasplinerange': {
'fillOpacity': 0.5,
'marker': {'enabled': False},
},
'bar': {
'borderWidth': 0
},
'boxplot': {
'fillColor': '#FAFAFA',
'lineWidth': 2,
'medianWidth': 4,
'stemDashStyle': 'line',
'stemWidth': 1,
'whiskerLength': '30%',
'whiskerWidth': 2
},
'column': {
'borderWidth': 0
},
'columnrange': {
'borderWidth': 0
},
'errorbar': {
'color': '#fefefe',
'lineWidth': 2
},
'line': {
'marker': {'enabled': False},
'lineWidth': 2
},
'scatter': {
'marker': {
'enabled': True,
'lineWidth': 0,
'symbol': 'circle',
'radius': 5
},
},
'spline': {
'marker': {'enabled': False},
'lineWidth': 2
},
'waterfall': {
'borderWidth': 0
}
},
'subtitle': {
'align': 'center',
'style': {
'color': '#555555',
'fontWeight': 'bold'
}
},
'title': {
'align': 'center',
'text': None,
'style': {
'color': '#000000',
'fontWeight': 'bold'
}
},
'tooltip': {
'backgroundColor': 'rgba(255,255,224,0.5)',
'borderRadius': 5,
'crosshairs': [{
'width': 3,
'color': '#ffffff',
'dashStyle': 'shortdot'
}, {
'width': 3,
'color': '#ffffff',
'dashStyle': 'shortdot'
}],
'hideDelay': 200,
'enabled': True,
'shadow': False,
},
'xAxis': {
'gridLineColor': '#FFFFFF',
'gridLineWidth': 1,
'lineColor': 'rgb(229, 229, 229)',
'tickColor': 'rgb(229, 229, 229)',
'shadow': False,
},
'yAxis': {
'gridLineColor': '#FFFFFF',
'gridLineWidth': 1,
'lineColor': 'rgb(229, 229, 229)',
'tickColor': 'rgb(229, 229, 229)',
'shadow': False,
}
}
Highcharts.setOptions.new(global_options)
html = """<div id="hc_ex3" style="width: 700px; height: 300px;"></div>"""
%%brython -h html
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
config = {
'chart':{'renderTo': 'hc_ex3'},
'title': {'text': 'Monthly Average Temperature'},
'subtitle': {'text': 'Source: WorldClimate.com'},
'xAxis': {'categories': ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']},
'yAxis': {'title': {'text': 'Temperature (°C)'}},
'tooltip': {'valueSuffix': '°C'},
'series': [{'name': 'Tokyo',
'data': [7.0, 6.9, 9.5, 14.5, 18.2, 21.5, 25.2, 26.5, 23.3, 18.3, 13.9, 9.6]}]
}
hc(config)
html = """
<div style="float: left;">
    <div id="hc_ex4a" style="width: 400px; height: 300px;"></div>
    <div id="hc_ex4b" style="width: 400px; height: 300px;"></div>
</div>
<div style="float: left;">
    <div id="hc_ex4c" style="width: 400px; height: 300px;"></div>
    <div id="hc_ex4d" style="width: 400px; height: 300px;"></div>
</div>
"""
%%brython -h html
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# Bar
config = {
'chart': {'renderTo': 'hc_ex4a', 'type': 'bar'},
'xAxis': {'categories': ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']},
'yAxis': {'title': {'text': 'Temperature (°C)'}},
'tooltip': {'valueSuffix': '°C'},
'series': [{'name': 'Tokyo',
'data': [7.0, 6.9, 9.5, 14.5, 18.2, 21.5, 25.2, 26.5, 23.3, 18.3, 13.9, 9.6]}]
}
hc(config)
# Area
config = {
'chart': {'renderTo': 'hc_ex4b', 'type': 'area'},
'xAxis': {'categories': ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']},
'yAxis': {'title': {'text': 'Temperature (°C)'}},
'tooltip': {'valueSuffix': '°C'},
'series': [{'name': 'Tokyo',
'data': [7.0, 6.9, 9.5, 14.5, 18.2, 21.5, 25.2, 26.5, 23.3, 18.3, 13.9, 9.6]}]
}
hc(config)
# Bubble
config = {
'chart': {'renderTo': 'hc_ex4c', 'type': 'scatter', 'zoomType': 'xy'},
'title': {'text': 'You can pan and zoom'},
'series': [{'name': 'Tokyo',
'data': [[7.0, 6.9], [9.5, 14.5], [18.2, 21.5], [25.2, 26.5], [23.3, 18.3], [13.9, 9.6]]}]
}
hc(config)
# Pie
config = {
'chart': {'renderTo': 'hc_ex4d', 'type': 'pie', 'plotBackgroundColor': 'white'},
'series': [{'name': 'Python scientific libs',
'data': [['scipy', 6.9], ['IPython', 14.5],
['Matplotlib', 21.5], ['Numpy', 26.5], ['Pandas', 18.3]]}]
}
hc(config)
html = """<div id="hc_ex5" style="width: 700px; height: 300px;"></div>"""
%%brython -h html
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
config = {
'chart': {'renderTo': 'hc_ex5'},
'yAxis': {'title': {'text': 'Temperature (°C)'}},
'tooltip': {'valueSuffix': '°C'},
'series': [{'name': 'Tokyo',
'data': [7.0, 6.9, 9.5, 14.5, 18.2, 21.5, 25.2, 26.5, 23.3, 18.3, 13.9, 9.6]},
{'name': 'Madrid',
'data': [3.0, 5.4, 6.5, 12.7, 16.8, 21.4, 26.5, 26.2, 24.3, 17.3, 11.8, 6.7]}]
}
hc(config)
html = """<div id="hc_ex6" style="width: 700px; height: 300px;"></div>"""
%%brython -h html
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
config = {
'chart': {'renderTo': 'hc_ex6', 'type': 'line'},
'yAxis': {'title': {'text': 'Temperature (°C)'}},
'tooltip': {'valueSuffix': '°C'},
'series': [{'name': 'Neverland',
'data': [[1, 6.9], [3, 14.5], [7, 21.5], [8, 26.5], [9, 18.3], [10, 9.6]]}]
}
hc(config)
html = """<div id="hc_ex7" style="width: 700px; height: 300px;"></div>"""
%%brython -h html
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
config = {
'chart': {'renderTo': 'hc_ex7', 'type': 'line'},
'yAxis': {'title': {'text': 'Temperature (°C)'}},
'tooltip': {'valueSuffix': '°C'},
'series': [{'name': 'Neverland',
'data': [['Jan', 6.9], ['Mar', 14.5], ['Jul', 21.5], ['Aug', 26.5], ['Sep', 18.3], ['Oct', 9.6]]}]
}
hc(config)
from brythonmagic import load_js_lib
load_js_lib("https://cdnjs.cloudflare.com/ajax/libs/highcharts/5.0.7/highcharts-more.js")
html = """<div id="hc_ex8" style="width: 700px; height: 300px;"></div>"""
%%brython -h html
import random
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# First we create the data to be passed to the plot
data = [{'name': 'Data {}'.format(i+1),
'color': 'rgb(100,50,{0})'.format(random.randrange(0,255)),
'y': random.randrange(0,25),
'x': i+1} for i in range(10)]
config = {
'chart': {'renderTo': 'hc_ex8'},
'yAxis': {'title': {'text': 'Temperature (°C)'}},
'series': [{'data': data, 'type': 'line', 'color': 'black'},
{'data': data, 'type': 'bubble'}]
}
hc(config)
html = """<div id="hc_ex9" style="width: 900px; height: 300px;"></div>"""
%%brython -h html
import random
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# First we create the data to be passed to the plot
data = [[i+1, random.randrange(0,35)] for i in range(100)]
config = {
'chart': {'renderTo': 'hc_ex9', 'type': 'line', 'zoomType': 'x'},
'yAxis': {'title': {'text': 'Wind speed (m/s)'}},
'series': [{'data': data}],
'plotOptions': {
'line': {'dataLabels': {'enabled': True}, 'enableMouseTracking': False}
},
'title': {'text': 'Click and drag to zoom'}
}
hc(config)
html = """<div id="hc_ex10" style="width: 900px; height: 300px;"></div>"""
%%brython -h html
import random
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# First we create the data to be passed to the plot
data = [[i+1, random.randrange(10,105)] for i in range(100)]
config = {
'chart': {'renderTo': 'hc_ex10'},
'xAxis': {'min': -20},
'yAxis': {
'title': {'text': 'Mean Daily NO2 (ug/m3)'},
'max': 110,
'minorGridLineWidth': 0,
'gridLineWidth': 0,
'alternateGridColor': None,
'plotBands': [{
'from': 0,
'to': 15,
'color': 'rgba(100,100,255,0.5)',
'label': {
'text': 'Clean air',
'style': {
'color': 'black'
}
}
}, {
'from': 15,
'to': 40,
'color': 'rgba(0,255,0,0.5)',
'label': {
'text': 'Below EU limit',
'style': {
'color': 'black'
}
}
}, {
'from': 40,
'to': 120,
'color': 'rgba(255,0,0,0.5)',
'label': {
'text': 'Above EU limit',
'style': {
'color': 'black'
}
}
}]
},
'series': [{'data': data, 'lineWidth': 2, 'color': 'black'}]
}
hc(config)
html = """<div id="hc_ex11" style="width: 900px; height: 300px;"></div>"""
%%brython -h html
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# First we create the data to be passed to the plot
data = [[3**i, i] for i in range(1, 10)]
config = {
'chart': {'renderTo': 'hc_ex11', 'type': 'spline'},
'yAxis': {'type': 'logarithmic', 'opposite': True, 'offset': 30},
'legend': {'align': 'left'},
'series': [{'data': data, 'lineWidth': 4, 'color': 'black'}]
}
hc(config)
html = """<div id="hc_ex12container" style="height: 350px;">
<div id="hc_ex12" style="width: 900px; height: 300px;"></div>
</div>"""
%%brython -h html
from browser.timer import set_interval, clear_interval
from browser import window, document, html
from random import randrange
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# First we create the data to be passed to the plot
data = [[i, randrange(0,10)] for i in range(0, 20)]
data_tmp = data[:]
config = {
'chart': {
'renderTo': 'hc_ex12',
'type': 'spline'
},
'series': [{'data': data, 'lineWidth': 2, 'color': 'black', 'animation': False,
'marker': {'enabled': True}}]
}
hc(config)
### NEW CODE ###
idtimer = None
# A button to animate the plot with new data
document['hc_ex12container'] <= html.BUTTON('Animate', Id = 'anim')
def add_point():
global data_tmp
x = data_tmp[-1][0] + 1
y = randrange(0,10)
data_tmp.append([x, y])
config['series'][0]['data'] = data_tmp[-20:]
hc(config)
def animate(ev):
global idtimer, config, data_tmp
idtimer = set_interval(add_point, 1000)
document['anim'].bind('click', animate)
# A button to stop the plot with new data
document['hc_ex12container'] <= html.BUTTON('Stop', Id = 'stop')
def stop(ev):
global idtimer
clear_interval(idtimer)
document['stop'].bind('click', stop)
# A button to reset the plot with the original values
document['hc_ex12container'] <= html.BUTTON('Reset', Id = 'reset')
def reset(ev):
global idtimer, config, data, data_tmp
if idtimer:
clear_interval(idtimer)
data_tmp = data[:]
config['series'][0]['data'] = data
hc(config)
document['reset'].bind('click', reset)
html = """<div id="hc_ex13" style="width: 900px; height: 300px;"></div>"""
%%brython -h html
from random import randrange
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# First we create the data to be passed to the plot
data1 = [[i, randrange(0,10)] for i in range(0, 20)]
data2 = [[i, randrange(0,10)] for i in range(0, 20)]
config = {
'chart': {
'renderTo': 'hc_ex13',
'type': 'area'
},
'series': [{'data': data1, 'dashStyle': 'ShortDot', 'lineWidth': 3},
{'data': data2}]
}
hc(config)
html = """<div id="hc_ex14" style="width: 900px; height: 300px;"></div>"""
%%brython -h html
from random import randrange
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# First we create the data to be passed to the plot
data1 = [[i, randrange(0,10)] for i in range(0, 20)]
data2 = [[i, randrange(0,10)] for i in range(0, 20)]
config = {
'chart': {
'renderTo': 'hc_ex14',
'type': 'area'
},
'series': [{'data': data1, 'lineWidth': 3},
{'data': data2}],
'plotOptions': {'area': {'stacking': 'normal'}},
'tooltip': {'shared': True},
}
hc(config)
html = """<div id="hc_ex15" style="width: 900px; height: 300px;"></div>"""
%%brython -h html
from random import randrange
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# First we create the data to be passed to the plot
data1 = [[i, randrange(5,10), randrange(10,15)] for i in range(0, 20)]
data2 = [[i, (lst[1] + lst[2]) / 2.] for i, lst in enumerate(data1)]
config = {
'chart': {
'renderTo': 'hc_ex15'
},
'series': [{'data': data2, 'type': 'line', 'name': 'mean', 'lineWidth': 3, 'color': 'black'},
{'data': data1, 'lineWidth': 1, 'type': 'arearange', 'name': 'extremes'}],
'tooltip': {'shared': True}
}
hc(config)
html = """<div id="hc_ex16" style="width: 900px; height: 300px;"></div>"""
%%brython -h html
from random import randrange
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# First we create the data to be passed to the plot
data1 = [[i, -randrange(1,15)] for i in range(0, 20)]
data2 = [[i, randrange(1,15)] for i in range(0, 20)]
config = {
'chart': {
'renderTo': 'hc_ex16',
'type': 'bar'
},
'series': [{'data': data1, 'name': 'negative'},
{'data': data2, 'name': 'positive'}],
'plotOptions': {'bar': {'stacking': 'normal'}},
'tooltip': {'shared': True},
'xAxis': [{'opposite': False}, {'opposite': True, 'linkedTo': 0}]
}
hc(config)
html = """<div id="hc_ex17container">
<div id="hc_ex17" style="position: relative; float: left; width: 700px; height: 300px; margin: 20px;"></div>
<div id="tablediv"></div>
</div>"""
%%brython -h html
from browser import window, document, html
from random import randrange
Highcharts = window.Highcharts
# first we create a table with two series of data, X and Y:
tab = html.TABLE()
tab.style = {'textAlign': 'center', 'width': '50px'}
tab <= html.TR(html.TD('X') + html.TD('Y'))
for i in range(5):
tab <= html.TR(
html.TD(
html.INPUT(
Id = 'x' + str(i), value = randrange(1,5), style = {'width': '50px'}
)
) +
html.TD(
html.INPUT(
Id = 'y' + str(i), value = randrange(1,5), style = {'width': '50px'}
)
)
)
document['tablediv'] <= tab
# Function to retrieve the data from the table
def get_data():
data1 = []
data2 = []
for i in range(5):
print('x' + str(i))
data1.append(float(document['x' + str(i)].value))
data2.append(float(document['y' + str(i)].value))
return [data1, data2]
# Function to update the chart
def update(ev):
global config, hc
datasets = get_data()
config['series'][0]['data'] = datasets[0]
config['series'][1]['data'] = datasets[1]
hc(config)
print(datasets)
# Button and event
document['hc_ex17container'] <= html.BUTTON('Update', Id = 'btn')
document['btn'].bind('click', update)
hc = Highcharts.Chart.new
# First we create the data to be passed to the plot
datasets = get_data()
data1 = datasets[0]
data2 = datasets[1]
config = {
'chart': {
'renderTo': 'hc_ex17',
'type': 'column'
},
'series': [{'data': data1, 'name': 'X'},
{'data': data2, 'name': 'Y'}],
'title': {'text': 'Modify the values in the table and update'}
}
hc(config)
html = """<div id="hc_ex18" style="width: 600px; height: 300px;"></div>"""
%%brython -h html
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# Pie
config = {
'chart': {'renderTo': 'hc_ex18', 'type': 'pie'},
'series': [{'name': 'Python scientific libs', 'innerSize': '60%',
'data': [['scipy', 6.9], ['IPython', 14.5],
['Matplotlib', 21.5], ['Numpy', 26.5], ['Pandas', 18.3]]}]
}
hc(config)
html = """<div id="hc_ex19" style="width: 600px; height: 300px;"></div>"""
%%brython -h html
from random import randrange
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# First we create the data to be passed to the plot
data1 = [[i + randrange(-2, 2), randrange(i-3,i+3)] for i in range(0, 20)]
data2 = [[i + randrange(-2, 2) + 2, randrange(i-3,i+3)] for i in range(0, 20)]
# Scatter
config = {
'chart': {'renderTo': 'hc_ex19', 'type': 'scatter'},
'series': [{'name': 'Station 1 vs model', 'data': data1},
{'name': 'Station 2 vs model', 'data': data2}],
'xAxis': {'title': {'text': 'Station'}},
'yAxis': {'title': {'text': 'Model'}}
}
hc(config)
html = """<div id="hc_ex20" style="width: 600px; height: 300px;"></div>"""
%%brython -h html
from random import randrange
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# First we create the data to be passed to the plot
data = []
for i in range(20):
x = randrange(-10, 10)
y = randrange(-10, 10)
z = randrange(1, 20)
data.append({'x': x, 'y': y, 'z': z,
'color': 'rgb(40,40,{0})'.format(int(z * 255 / 20)),
'marker': {'lineWidth': 1, 'lineColor': 'black'}})
# Scatter
config = {
'chart': {'renderTo': 'hc_ex20', 'type': 'bubble'},
'series': [{'name': 'bubbles', 'data': data}]
}
hc(config)
from brythonmagic import load_js_lib
load_js_lib("https://cdnjs.cloudflare.com/ajax/libs/highcharts/5.0.7/highcharts-3d.js")
html = """
<div id="hc_ex21" style="width: 900px; height: 400px;"></div>
<div id="sliders">
<table>
<tr>
<td>Alpha Angle</td>
<td>
<input id="R0" type="range" min="0" max="45" value="15"/> <span id="R0-value" class="value"></span>
</td>
</tr>
<tr>
<td>Beta Angle</td>
<td>
<input id="R1" type="range" min="0" max="45" value="15"/> <span id="R1-value" class="value"></span>
</td>
</tr>
</table>
</div>
"""
%%brython -h html
from browser import window, document
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
config = {
'chart': {
'renderTo': 'hc_ex21',
'plotBackgroundColor': 'white',
'type': 'column',
'margin': 100,
'options3d': {
'enabled': True,
'alpha': 15,
'beta': 15,
'depth': 50,
'viewDistance': 50
}
},
'plotOptions': {'column': {'depth': 25}},
'series': [{'data': [29.9, 71.5, 106.4, 129.2, 144.0, 176.0,
135.6, 148.5, 216.4, 194.1, 95.6, 54.4]}],
'xAxis': {'gridLineColor': '#C0C0C0'},
'yAxis': {'gridLineColor': '#C0C0C0'}
}
columns = hc(config)
def show_values():
document['R0-value'].html = columns.options.chart.options3d.alpha
document['R1-value'].html = columns.options.chart.options3d.beta
show_values()
# activate the sliders
def change_alpha(ev):
columns.options.chart.options3d.alpha = ev.target.value
show_values()
columns.redraw(False)
def change_beta(ev):
columns.options.chart.options3d.beta = ev.target.value
show_values()
columns.redraw(False)
document['R0'].bind('change', change_alpha)
document['R1'].bind('change', change_beta)
from brythonmagic import load_js_lib
load_js_lib("https://cdnjs.cloudflare.com/ajax/libs/highcharts/5.0.7/modules/heatmap.js")
html = """<div id="hc_ex22" style="width: 900px; height: 400px;"></div>"""
%%brython -h html
from random import randrange
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
# First we create the data to be passed to the plot
data = []
for y in range(7):
for x in range(7):
data.append([y, x, randrange(0,150)])
config = {
'chart': {
'renderTo': 'hc_ex22',
'type': 'heatmap',
'marginTop': 40,
'marginBottom': 40
},
'title': {'text': 'Commits made last week :-P'},
'xAxis': {'categories': ['Numpy', 'Scipy', 'Matplotlib', 'IPython',
'Pandas', 'Brython', 'Brythonmagic']},
'yAxis': {'categories': ['Monday', 'Tuesday', 'Wednesday',
'Thursday', 'Friday', 'Saturday', 'Sunday'],
'title': {'text': None}},
'colorAxis': {'min': 0,
'minColor': '#FFFFFF',
'maxColor': Highcharts.getOptions().colors[0]},  # JSConstructor is deprecated; call the JS function directly
'legend': {
'align': 'right',
'layout': 'vertical',
'margin': 0,
'verticalAlign': 'top',
'y': 25,
'symbolHeight': 300
},
'series': [{
'borderWidth': 1,
'data': data,
'dataLabels': {
'enabled': True,
'color': 'black',
'style': {
'textShadow': 'none',
'HcTextStroke': None
}
}
}]
}
hc(config)
%%brython -s wrapper01
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
class HC:
def __init__(self, container):
self.options = {}
self.options['chart'] = {}
self.options['chart']['renderTo'] = container
self.options['title'] = {}
self.options['title']['text'] = ""
self.options['legend'] = {}
self.options['legend']['enabled'] = False
self.options['subtitle'] = {}
self.options['subtitle']['text'] = ""
self.options['series'] = []
def show(self):
hc(self.options)
html = """<div id="hc_ex23" style="width: 900px; height: 400px;"></div>"""
%%brython -h html -S wrapper01
plt = HC('hc_ex23')
plt.show()
print(plt.options)
%%brython -s wrapper02
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
class HC:
def __init__(self, container):
self.options = {}
self.options['chart'] = {}
self.options['chart']['renderTo'] = container
self.options['title'] = {}
self.options['title']['text'] = ""
self.options['legend'] = {}
self.options['legend']['enabled'] = False
self.options['subtitle'] = {}
self.options['subtitle']['text'] = ""
self.options['series'] = []
def show(self):
hc(self.options)
def title(self, text):
self.options['title']['text'] = text
#self.show()
def subtitle(self, text):
self.options['subtitle']['text'] = text
#self.show()
html = """<div id="hc_ex24" style="width: 900px; height: 400px;"></div>"""
%%brython -h html -S wrapper02
plt = HC('hc_ex24')
plt.title('Dummy title')
plt.subtitle('Dummy title')
plt.show()
%%brython -s wrapper03
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
class HC:
def __init__(self, container):
self.options = {}
self.options['chart'] = {}
self.options['chart']['renderTo'] = container
self.options['title'] = {}
self.options['title']['text'] = ""
self.options['legend'] = {}
self.options['legend']['enabled'] = False
self.options['subtitle'] = {}
self.options['subtitle']['text'] = ""
self.options['series'] = []
def show(self):
hc(self.options)
def title(self, text):
self.options['title']['text'] = text
#self.draw()
def subtitle(self, text):
self.options['subtitle']['text'] = text
#self.draw()
def plot(self, x, y = None, label = None, color = None, linewidth = None):
if y:
data = [[i, j] for i, j in zip(x, y)]
else:
data = x
serie = {'data': data, 'type': 'line'}
if linewidth:
serie['lineWidth'] = linewidth
if label:
serie['name'] = label
if color:
serie['color'] = color
self.options['series'].append(serie)
#self.draw()
html = """<div id="hc_ex25" style="width: 900px; height: 400px;"></div>"""
%%brython -h html -S wrapper03
plt = HC('hc_ex25')
plt.plot([1,2,4,5], [3,6,4,7], label = 'lineplot1', linewidth = 5, color = 'red')
plt.plot([1,2,4,5], [8,5,9,2], label = 'lineplot2', linewidth = 2, color = 'blue')
plt.title('Some line plots')
plt.show()
%%brython -s wrapper04
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
class HC:
def __init__(self, container):
self.options = {}
self.options['chart'] = {}
self.options['chart']['renderTo'] = container
self.options['title'] = {}
self.options['title']['text'] = ""
self.options['legend'] = {}
self.options['legend']['enabled'] = False
self.options['subtitle'] = {}
self.options['subtitle']['text'] = ""
self.options['series'] = []
def show(self):
hc(self.options)
def title(self, text):
self.options['title']['text'] = text
#self.draw()
def subtitle(self, text):
self.options['subtitle']['text'] = text
#self.draw()
def plot(self, x, y = None, label = None, color = None, linewidth = None):
if y:
data = [[i, j] for i, j in zip(x, y)]
else:
data = x
serie = {'data': data, 'type': 'line'}
if linewidth:
serie['lineWidth'] = linewidth
if label:
serie['name'] = label
if color:
serie['color'] = color
self.options['series'].append(serie)
#self.draw()
def legend(self, loc = 'right'):
self.options['legend']['enabled'] = True
if loc:
self.options['legend']['align'] = loc
#self.draw()
html = """<div id="hc_ex26" style="width: 900px; height: 400px;"></div>"""
%%brython -h html -S wrapper04
plt = HC('hc_ex26')
plt.title('Line plots')
plt.plot([1,2,4,5], [3,6,4,7], label = 'lineplot1', linewidth = 5, color = 'red')
plt.plot([1,2,4,5], [8,5,9,2], label = 'lineplot2', linewidth = 2, color = 'blue')
plt.legend(loc = 'left')
plt.show()
%%brython -s wrapper05
from browser import window
Highcharts = window.Highcharts
hc = Highcharts.Chart.new
class HC:
def __init__(self, container):
self.options = {}
self.options['chart'] = {}
self.options['chart']['renderTo'] = container
self.options['title'] = {}
self.options['title']['text'] = ""
self.options['legend'] = {}
self.options['legend']['enabled'] = False
self.options['subtitle'] = {}
self.options['subtitle']['text'] = ""
self.options['series'] = []
def show(self):
hc(self.options)
def title(self, text):
self.options['title']['text'] = text
#self.draw()
def subtitle(self, text):
self.options['subtitle']['text'] = text
#self.draw()
def plot(self, x, y = None, label = None, color = None, linewidth = None):
if y:
data = [[i, j] for i, j in zip(x, y)]
else:
data = x
serie = {'data': data, 'type': 'line'}
if linewidth:
serie['lineWidth'] = linewidth
if label:
serie['name'] = label
if color:
serie['color'] = color
self.options['series'].append(serie)
#self.draw()
def legend(self, loc = 'right'):
self.options['legend']['enabled'] = True
if loc:
self.options['legend']['align'] = loc
#self.draw()
def scatter(self, x, y, label = None, color = None):
data = [[i, j] for i, j in zip(x, y)]
serie = {'data': data, 'type': 'scatter'}
if label:
serie['name'] = label
if color:
serie['color'] = color
self.options['series'].append(serie)
#self.draw()
html = """<div id="hc_ex27" style="width: 900px; height: 400px;"></div>"""
%%brython -h html -S wrapper05
plt = HC('hc_ex27')
plt.title('Line plots')
plt.plot([1,2,4,5], [3,6,4,7], label = 'lineplot1', linewidth = 5, color = 'red')
plt.plot([1,2,4,5], [8,5,9,2], label = 'lineplot2', linewidth = 2, color = 'blue')
plt.scatter([1,2,4,5], [2,4,6,8], label = 'scatter1', color = 'green')
plt.legend(loc = 'left')
plt.show()
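Following the same incremental pattern as `plot()` and `scatter()` above, a bar-chart method could be added to the wrapper. This is a sketch only: the `bar()` name and the `'column'` series type are assumptions, not part of the original wrapper class. It only mutates the options dict, so it can be exercised outside the browser.

```python
# Hypothetical extension of the HC wrapper class above: a bar() method,
# written in the same style as plot()/scatter().
def bar(self, x, y, label=None, color=None):
    # Pair up the x and y values the way Highcharts expects.
    data = [[i, j] for i, j in zip(x, y)]
    serie = {'data': data, 'type': 'column'}
    if label:
        serie['name'] = label
    if color:
        serie['color'] = color
    self.options['series'].append(serie)
```

In the notebook this body would be pasted into the wrapper class; the call pattern then mirrors the others: `plt.bar([1, 2, 3], [4, 5, 6], label='bars')` followed by `plt.show()`.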
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A Note about GridSearchCV's best_score_ attribute
Step2: By executing the code above, we created a simple data set of random integers that shall represent our class labels. Next, we fed the indices of 5 cross-validation folds (cv5_idx) to the cross_val_score scorer, which returned 5 accuracy scores -- these are the 5 accuracy values for the 5 test folds.
Step3: As we can see, the scores for the 5 folds are exactly the same as the ones from cross_val_score earlier.
Step4: As we can see, the result above is consistent with the average score computed by cross_val_score.
|
<ASSISTANT_TASK:>
Python Code:
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,pandas,matplotlib,scikit-learn
from sklearn.grid_search import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.datasets import load_iris
from sklearn.cross_validation import train_test_split
# load and split data
iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
# pipeline setup
cls = SVC(C=10.0,
kernel='rbf',
gamma=0.1,
decision_function_shape='ovr')
kernel_svm = Pipeline([('std', StandardScaler()),
('svc', cls)])
# gridsearch setup
param_grid = [
{'svc__C': [1, 10, 100, 1000],
'svc__gamma': [0.001, 0.0001],
'svc__kernel': ['rbf']},
]
gs = GridSearchCV(estimator=kernel_svm,
param_grid=param_grid,
scoring='accuracy',
n_jobs=-1,
cv=5,
verbose=1,
refit=True,
pre_dispatch='2*n_jobs')
# run gridearch
gs.fit(X_train, y_train)
print('Best GS Score %.2f' % gs.best_score_)
print('best GS Params %s' % gs.best_params_)
# prediction on the training set
y_pred = gs.predict(X_train)
train_acc = (y_train == y_pred).sum()/len(y_train)
print('\nTrain Accuracy: %.2f' % (train_acc))
# evaluation on the test set
y_pred = gs.predict(X_test)
test_acc = (y_test == y_pred).sum()/len(y_test)
print('\nTest Accuracy: %.2f' % (test_acc))
from sklearn.cross_validation import StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression
import numpy as np
np.random.seed(0)
np.set_printoptions(precision=6)
y = [np.random.randint(3) for i in range(25)]
X = (y + np.random.randn(25)).reshape(-1, 1)
cv5_idx = list(StratifiedKFold(y, n_folds=5, shuffle=False, random_state=0))
cross_val_score(LogisticRegression(random_state=123), X, y, cv=cv5_idx)
from sklearn.grid_search import GridSearchCV
gs = GridSearchCV(LogisticRegression(), {}, cv=cv5_idx, verbose=3).fit(X, y)
gs.best_score_
cross_val_score(LogisticRegression(), X, y, cv=cv5_idx).mean()
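The relationship can also be checked with plain arithmetic on hypothetical fold scores: with equal-sized test folds (as in the 25-sample, 5-fold example above), the old `sklearn.grid_search` `best_score_` of a single-candidate grid reduces to the unweighted mean of the per-fold scores.

```python
# Hypothetical per-fold accuracies for a 5-fold CV of a single candidate.
fold_scores = [0.6, 0.4, 0.6, 0.4, 0.6]
best_score = sum(fold_scores) / len(fold_scores)
print(best_score)  # 0.52
```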
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step7: Import helper methods
Step8: Select a pretrained model
Step10: Run on sample images
|
<ASSISTANT_TASK:>
Python Code:
import os
from io import BytesIO
import tarfile
import tempfile
from six.moves import urllib
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
from PIL import Image
import tensorflow as tf
class DeepLabModel(object):
"""Class to load deeplab model and run inference."""
INPUT_TENSOR_NAME = 'ImageTensor:0'
OUTPUT_TENSOR_NAME = 'SemanticPredictions:0'
INPUT_SIZE = 513
FROZEN_GRAPH_NAME = 'frozen_inference_graph'
def __init__(self, tarball_path):
"""Creates and loads pretrained deeplab model."""
self.graph = tf.Graph()
graph_def = None
# Extract frozen graph from tar archive.
tar_file = tarfile.open(tarball_path)
for tar_info in tar_file.getmembers():
if self.FROZEN_GRAPH_NAME in os.path.basename(tar_info.name):
file_handle = tar_file.extractfile(tar_info)
graph_def = tf.GraphDef.FromString(file_handle.read())
break
tar_file.close()
if graph_def is None:
raise RuntimeError('Cannot find inference graph in tar archive.')
with self.graph.as_default():
tf.import_graph_def(graph_def, name='')
self.sess = tf.Session(graph=self.graph)
def run(self, image):
"""Runs inference on a single image.
Args:
image: A PIL.Image object, raw input image.
Returns:
resized_image: RGB image resized from original input image.
seg_map: Segmentation map of `resized_image`.
"""
width, height = image.size
resize_ratio = 1.0 * self.INPUT_SIZE / max(width, height)
target_size = (int(resize_ratio * width), int(resize_ratio * height))
resized_image = image.convert('RGB').resize(target_size, Image.ANTIALIAS)
batch_seg_map = self.sess.run(
self.OUTPUT_TENSOR_NAME,
feed_dict={self.INPUT_TENSOR_NAME: [np.asarray(resized_image)]})
seg_map = batch_seg_map[0]
return resized_image, seg_map
def create_pascal_label_colormap():
"""Creates a label colormap used in PASCAL VOC segmentation benchmark.
Returns:
A Colormap for visualizing segmentation results.
"""
colormap = np.zeros((256, 3), dtype=int)
ind = np.arange(256, dtype=int)
for shift in reversed(range(8)):
for channel in range(3):
colormap[:, channel] |= ((ind >> channel) & 1) << shift
ind >>= 3
return colormap
def label_to_color_image(label):
"""Adds color defined by the dataset colormap to the label.
Args:
label: A 2D array with integer type, storing the segmentation label.
Returns:
result: A 2D array with floating type. The element of the array
is the color indexed by the corresponding element in the input label
to the PASCAL color map.
Raises:
ValueError: If label is not of rank 2 or its value is larger than color
map maximum entry.
"""
if label.ndim != 2:
raise ValueError('Expect 2-D input label')
colormap = create_pascal_label_colormap()
if np.max(label) >= len(colormap):
raise ValueError('label value too large.')
return colormap[label]
def vis_segmentation(image, seg_map):
"""Visualizes input image, segmentation map and overlay view."""
plt.figure(figsize=(15, 5))
grid_spec = gridspec.GridSpec(1, 4, width_ratios=[6, 6, 6, 1])
plt.subplot(grid_spec[0])
plt.imshow(image)
plt.axis('off')
plt.title('input image')
plt.subplot(grid_spec[1])
seg_image = label_to_color_image(seg_map).astype(np.uint8)
plt.imshow(seg_image)
plt.axis('off')
plt.title('segmentation map')
plt.subplot(grid_spec[2])
plt.imshow(image)
plt.imshow(seg_image, alpha=0.7)
plt.axis('off')
plt.title('segmentation overlay')
unique_labels = np.unique(seg_map)
ax = plt.subplot(grid_spec[3])
plt.imshow(
FULL_COLOR_MAP[unique_labels].astype(np.uint8), interpolation='nearest')
ax.yaxis.tick_right()
plt.yticks(range(len(unique_labels)), LABEL_NAMES[unique_labels])
plt.xticks([], [])
ax.tick_params(width=0.0)
plt.grid('off')
plt.show()
LABEL_NAMES = np.asarray([
'background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus',
'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike',
'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tv'
])
FULL_LABEL_MAP = np.arange(len(LABEL_NAMES)).reshape(len(LABEL_NAMES), 1)
FULL_COLOR_MAP = label_to_color_image(FULL_LABEL_MAP)
MODEL_NAME = 'mobilenetv2_coco_voctrainaug' # @param ['mobilenetv2_coco_voctrainaug', 'mobilenetv2_coco_voctrainval', 'xception_coco_voctrainaug', 'xception_coco_voctrainval']
_DOWNLOAD_URL_PREFIX = 'http://download.tensorflow.org/models/'
_MODEL_URLS = {
'mobilenetv2_coco_voctrainaug':
'deeplabv3_mnv2_pascal_train_aug_2018_01_29.tar.gz',
'mobilenetv2_coco_voctrainval':
'deeplabv3_mnv2_pascal_trainval_2018_01_29.tar.gz',
'xception_coco_voctrainaug':
'deeplabv3_pascal_train_aug_2018_01_04.tar.gz',
'xception_coco_voctrainval':
'deeplabv3_pascal_trainval_2018_01_04.tar.gz',
}
_TARBALL_NAME = 'deeplab_model.tar.gz'
model_dir = tempfile.mkdtemp()
tf.gfile.MakeDirs(model_dir)
download_path = os.path.join(model_dir, _TARBALL_NAME)
print('downloading model, this might take a while...')
urllib.request.urlretrieve(_DOWNLOAD_URL_PREFIX + _MODEL_URLS[MODEL_NAME],
download_path)
print('download completed! loading DeepLab model...')
MODEL = DeepLabModel(download_path)
print('model loaded successfully!')
SAMPLE_IMAGE = 'image1' # @param ['image1', 'image2', 'image3']
IMAGE_URL = '' #@param {type:"string"}
_SAMPLE_URL = ('https://github.com/tensorflow/models/blob/master/research/'
'deeplab/g3doc/img/%s.jpg?raw=true')
def run_visualization(url):
"""Inferences DeepLab model and visualizes result."""
try:
f = urllib.request.urlopen(url)
jpeg_str = f.read()
original_im = Image.open(BytesIO(jpeg_str))
except IOError:
print('Cannot retrieve image. Please check url: ' + url)
return
print('running deeplab on image %s...' % url)
resized_im, seg_map = MODEL.run(original_im)
vis_segmentation(resized_im, seg_map)
image_url = IMAGE_URL or _SAMPLE_URL % SAMPLE_IMAGE
run_visualization(image_url)
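The bit-manipulation trick in `create_pascal_label_colormap` can be re-checked in isolation: each label index is consumed three bits at a time, and bit *k* of channel *c* is placed at progressively lower bit positions of the output byte. The following standalone sketch (a hypothetical verification snippet, independent of TensorFlow and NumPy) reproduces the per-label computation.

```python
# Re-implementation of the PASCAL VOC colormap bit trick for one label.
def pascal_color(label):
    color = [0, 0, 0]
    ind = label
    for shift in reversed(range(8)):
        for ch in range(3):
            # Take the low bit of each 3-bit group and drop it into
            # successively lower bit positions of each color channel.
            color[ch] |= ((ind >> ch) & 1) << shift
        ind >>= 3
    return color

print(pascal_color(1))   # [128, 0, 0]   -> dark red ('aeroplane')
print(pascal_color(15))  # [192, 128, 128] -> the PASCAL 'person' color
```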
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Flip-Flops vs Latches
Step2: Synthesis
Step3: !? The clock is not hooked up in the wiring of this code; need to figure out why
Step4: Synthesis
Step5: The following shows Xilinx Vivado 2016.1's generated RTL schematic of our myHDL Synchronous Positive-Edge D Flip-Flop's Verilog code
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from sympy import *
init_printing()
from myhdl import *
from myhdlpeek import *
import random
#python file of convince tools. Should be located with this notebook
from sympy_myhdl_tools import *
def DFFSyncCenter(D_in, Q_out, Qn_out, clk):
@always(clk)
def logic():
Q_out.next=D_in
Qn_out.next=not D_in
return logic
Peeker.clear()
D_in, Q_out, Qn_out, clk=[Signal(bool(0)) for _ in range(4)]
Peeker(D_in, 'D_in'); Peeker(clk, 'clk')
Peeker(Q_out, 'Q_out'); Peeker(Qn_out, 'Qn_out')
DUT=DFFSyncCenter(D_in=D_in, Q_out=Q_out, Qn_out=Qn_out, clk=clk)
inputs=[D_in]
time=[0]
def DFFSync_TB(inputs=[]):
#the # of inputs contorls everything
Ninputs=len(inputs)
tmax=2*2**Ninputs
#genrate sequantil number of inputs for comparsion to known
SequntialInputs=np.arange(2**Ninputs)
@always(delay(1))
def clkGen():
clk.next = not clk
@always(clk.negedge)
def stimulus():
time.append(time[-1]+1)
#run sequantial
try:
#genrate binary bit repsersintion of current sequantl input
NextSeqInput=np.binary_repr(SequntialInputs[time[-1]], width=Ninputs)
#pass each bit into the inputs
for i in range(Ninputs):
inputs[i].next=bool(int(NextSeqInput[i]))
#run the random to cheack for unexsected behavior
except IndexError:
NextRanInput=[random.randint(0,1) for i in range(Ninputs)]
for i in range(Ninputs):
inputs[i].next=NextRanInput[i]
if time[-1]==tmax:
raise StopSimulation
return clkGen, stimulus
Sim=Simulation(DUT, DFFSync_TB(inputs), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, tock=True)
MakeDFfromPeeker(Peeker.to_wavejson(start_time=0))
toVerilog(DFFSyncCenter, D_in, Q_out, Qn_out, clk)
_=VerilogTextReader('DFFSyncCenter')
def DFFSyncPosEdge(D_in, Q_out, Qn_out, clk):
@always(clk.posedge)
def logic():
Q_out.next=D_in
Qn_out.next=not D_in
return logic
Peeker.clear()
D_in, Q_out, Qn_out, clk=[Signal(bool(0)) for _ in range(4)]
Peeker(D_in, 'D_in'); Peeker(clk, 'clk')
Peeker(Q_out, 'Q_out'); Peeker(Qn_out, 'Qn_out')
DUT=DFFSyncPosEdge(D_in=D_in, Q_out=Q_out, Qn_out=Qn_out, clk=clk)
inputs=[D_in]
time=[0]
def DFFSync_TB(inputs=[]):
#the # of inputs contorls everything
Ninputs=len(inputs)
tmax=2*2**Ninputs
#genrate sequantil number of inputs for comparsion to known
SequntialInputs=np.arange(2**Ninputs)
@always(delay(1))
def clkGen():
clk.next = not clk
@always(clk.negedge)
def stimulus():
time.append(time[-1]+1)
#run sequantial
try:
#genrate binary bit repsersintion of current sequantl input
NextSeqInput=np.binary_repr(SequntialInputs[time[-1]], width=Ninputs)
#pass each bit into the inputs
for i in range(Ninputs):
inputs[i].next=bool(int(NextSeqInput[i]))
#run the random to cheack for unexsected behavior
except IndexError:
NextRanInput=[random.randint(0,1) for i in range(Ninputs)]
for i in range(Ninputs):
inputs[i].next=NextRanInput[i]
if time[-1]==tmax:
raise StopSimulation
return clkGen, stimulus
Sim=Simulation(DUT, DFFSync_TB(inputs), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, tock=True)
toVerilog(DFFSyncPosEdge, D_in, Q_out, Qn_out, clk)
_=VerilogTextReader('DFFSyncPosEdge')
def DFFSyncNegEdge(D_in, Q_out, Qn_out, clk):
@always(clk.negedge)
def logic():
Q_out.next=D_in
Qn_out.next=not D_in
return logic
Peeker.clear()
D_in, Q_out, Qn_out, clk=[Signal(bool(0)) for _ in range(4)]
Peeker(D_in, 'D_in'); Peeker(clk, 'clk')
Peeker(Q_out, 'Q_out'); Peeker(Qn_out, 'Qn_out')
DUT=DFFSyncNegEdge(D_in=D_in, Q_out=Q_out, Qn_out=Qn_out, clk=clk)
inputs=[D_in]
time=[0]
def DFFSync_TB(inputs=[]):
#the # of inputs contorls everything
Ninputs=len(inputs)
tmax=2*2**Ninputs
#genrate sequantil number of inputs for comparsion to known
SequntialInputs=np.arange(2**Ninputs)
@always(delay(1))
def clkGen():
clk.next = not clk
@always(clk.posedge)
def stimulus():
time.append(time[-1]+1)
#run the sequential inputs
try:
#generate the binary bit representation of the current sequential input
NextSeqInput=np.binary_repr(SequntialInputs[time[-1]], width=Ninputs)
#pass each bit into the inputs
for i in range(Ninputs):
inputs[i].next=bool(int(NextSeqInput[i]))
#run random inputs to check for unexpected behavior
except IndexError:
NextRanInput=[random.randint(0,1) for i in range(Ninputs)]
for i in range(Ninputs):
inputs[i].next=NextRanInput[i]
if time[-1]==tmax:
raise StopSimulation
return clkGen, stimulus
Sim=Simulation(DUT, DFFSync_TB(inputs), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, tock=True)
toVerilog(DFFSyncNegEdge, D_in, Q_out, Qn_out, clk)
_=VerilogTextReader('DFFSyncNegEdge')
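To see the edge-triggered behavior outside of MyHDL, here is a minimal plain-Python model of the same D flip-flop semantics (an illustrative sketch only; `simulate_dff` is our own helper, not part of MyHDL):

```python
# Minimal plain-Python model of an edge-triggered D flip-flop.
def simulate_dff(d_samples, clk_samples, posedge=True):
    """Sample D on each triggering clock edge and hold Q between edges."""
    q, outputs = 0, []
    prev_clk = clk_samples[0]
    for d, clk in zip(d_samples, clk_samples):
        rising = prev_clk == 0 and clk == 1
        falling = prev_clk == 1 and clk == 0
        if (posedge and rising) or (not posedge and falling):
            q = d  # Q only changes on the triggering edge
        outputs.append(q)
        prev_clk = clk
    return outputs

clk = [0, 1, 0, 1, 0, 1]
d   = [1, 1, 0, 0, 1, 1]
print(simulate_dff(d, clk, posedge=True))   # -> [0, 1, 1, 0, 0, 1]
print(simulate_dff(d, clk, posedge=False))  # -> [0, 0, 0, 0, 1, 1]
```

Only the triggering edge updates Q; between edges the stored value is held, which is exactly what the `@always(clk.posedge)` and `@always(clk.negedge)` blocks above express.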
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Tokenization
Step2: 1. Dictionary
Step3: 1. Vectorization
Step4: 1. Word normalization
|
<ASSISTANT_TASK:>
Python Code:
import pandas
imdb = pandas.read_csv('data/imdb_labelled.txt', sep="\t", names=["sentences", "polarity"])
yelp = pandas.read_csv('data/yelp_labelled.txt', sep="\t", names=["sentences", "polarity"])
amazon = pandas.read_csv('data/amazon_cells_labelled.txt', sep="\t", names=["sentences", "polarity"])
big = pandas.DataFrame()
big = big.append([imdb, yelp, amazon])
big.to_csv('big.csv', index=False, encoding='utf-8')
import nltk
sentence = 'My test for nltk library!!'
tokens = nltk.word_tokenize(sentence)
print(tokens)
import nltk
sentences = big['sentences']
sentences_strings = sentences.str.lower()
sentences_tokens = [] # Has all sentences tokens
for sentence_string in sentences_strings:
token = nltk.word_tokenize(sentence_string)
sentences_tokens.extend(token)
dictionary = set()
dictionary.update(sentences_tokens)
import numpy
tuplas = zip(dictionary, range(len(dictionary)))
mapa = {sentence_tokens:indice for sentence_tokens, indice in tuplas}
def vetorizar_texto(texto, mapa):
vetor = [0] * len(mapa)
for sentence_tokens in texto:
if sentence_tokens in mapa:
posicao = mapa[sentence_tokens]
vetor[posicao] += 1
return vetor
tokenized_sentences = [nltk.word_tokenize(sentence) for sentence in sentences_strings]
vetoresdeTexto = [vetorizar_texto(texto, mapa) for texto in tokenized_sentences]
X = numpy.array(vetoresdeTexto)
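As a self-contained toy illustration of the same bag-of-words idea (the mini-vocabulary and sentence here are made up for the example):

```python
# Toy bag-of-words vectorizer mirroring vetorizar_texto above.
def vectorize(tokens, word_index):
    vector = [0] * len(word_index)
    for token in tokens:
        if token in word_index:  # unknown tokens are simply skipped
            vector[word_index[token]] += 1
    return vector

vocab = ['good', 'bad', 'food', 'service']
word_index = {word: i for i, word in enumerate(vocab)}
print(vectorize(['good', 'food', 'good'], word_index))  # -> [2, 0, 1, 0]
```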
dictionary = set()
tokens = ['teste1', 'teste2', 'teste3', 'teste4']
for x in range(len(tokens)):
if x + 1 < len(tokens):
digram = [tokens[x]+' '+tokens[x+1]]
dictionary.update(digram)
print(dictionary)
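The digram loop above generalizes to arbitrary n-grams; a small sketch (the `ngrams` helper is our own name for it):

```python
def ngrams(tokens, n):
    # Slide a window of length n over the token list and join each window.
    return [' '.join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = ['teste1', 'teste2', 'teste3', 'teste4']
print(ngrams(tokens, 2))  # the digrams built above
print(ngrams(tokens, 3))
```

With n=2 this reproduces the digrams built above, and a dictionary can be extended with `dictionary.update(ngrams(tokens, n))`.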
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As you get more experienced you may want to use matplotlib directly
|
<ASSISTANT_TASK:>
Python Code:
# Import matplotlib
import matplotlib
# Import pandas
import pandas as pd
# Tell matplotlib to plot in this window instead of a separate window.
%matplotlib inline
# Load data into dataframe (we will get to this later)
df = pd.read_csv('data/simple.csv')
# Plot data as line plot using simple function.
df.plot()
# Tip: the 538 blog-like styling looks nicer.
matplotlib.style.use('fivethirtyeight')
# Plot as histogram using slightly more complicated function.
df.plot.hist(bins=30)
# Because it's matplotlib-based, you can feed in matplotlib options
df.plot(linestyle='--', marker='o', color='red', linewidth=.50)
# If it's easier, df.plot() gives a matplotlib axes object for customization
axes = df.plot()
axes.annotate('Manual Annotation', (10, 40))
# We can also do this programatically:
for number in [10, 20, 30, 40, 50]:
axes.annotate('Auto-Annotation', (number, number))
# Import plottting because we're not using pandas
import matplotlib.pyplot as plt
# Clear any existing figures to be safe.
plt.clf()
# Create figure and axis
fig, axes = plt.subplots()
# plot scatter (s is size)
a = axes.scatter([1, 2, 3, 4], [1, 4, 9, 16], s=50, label='My Scatter')
# Add an arbitrary line
axes.plot([1, 2, 3, 4], [1, 2, 3, 4], label='My Line')
# Get the items we've already plotted
lines = axes.get_lines()
# We can get children of axes to reference elements
children = axes.get_children()
# In this case, scatter is child 0
scatter_points = children[0] # this is how we index objects.
scatter_points.set_color('red')
# In this case, the line is 1
line = children[1]
line.set_linestyle('-.')
# Make background white
axes.patch.set_facecolor('white')
fig.patch.set_facecolor('white')
# Print children for reference
for child in children:
print(child)
fig
# Zoom out
axes.set_xbound(0, 10)
# Set labels
axes.set_xticklabels(['Small', 'Medium', 'Large'])
axes.set_xlabel('Size')
# Set title
axes.set_title('Graphing Stuff')
# Set legend
axes.legend()
# Save figure to PNG
fig.savefig('data/output.png')
fig
# We already did this, but just for completeness sake
import matplotlib
%matplotlib inline
# Import the actual plotting tool (pyplot)
from matplotlib import pyplot as plt
# Import numpy
import numpy as np
plt.gcf()
with plt.xkcd():
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.bar([-0.125, 1.0-0.125], [25, 100], 0.25)
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.set_xticks([0, 1])
ax.set_xlim([-0.5, 1.5])
ax.set_ylim([0, 110])
ax.set_xticklabels(['Intuitively\nExplains Complex\nRelationships',
'People Expect\nShiny, Science-y\nData Thingies'])
plt.yticks([])
plt.title("Reasons to Make a Graph @ U.S. Bank")
font = {'size' : 18}
matplotlib.rc('font', **font)
im = plt.imread("static/small.jpg")
implot = plt.imshow(im, aspect='auto', extent=[.1, 1, 55, 75], alpha=.25)
fig.patch.set_facecolor('white')
ax.patch.set_facecolor('white')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To install the dependencies related to running this notebook, see Installing notebook dependencies.
Step2: Perspectives
Step3: Back to top
Step4: With periodic boundaries, check that the filter "re-enters" the image from the other border
Step5: Back to top
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
from LogGabor import LogGabor
parameterfile = 'https://raw.githubusercontent.com/bicv/LogGabor/master/default_param.py'
lg = LogGabor(parameterfile)
lg.set_size((32, 32))
import os
import numpy as np
np.set_printoptions(formatter={'float': '{: 0.3f}'.format})
%matplotlib inline
import matplotlib.pyplot as plt
fig_width = 12
figsize=(fig_width, .618*fig_width)
def twoD_Gaussian(xy, x_pos, y_pos, theta, sf_0):
FT_lg = lg.loggabor(x_pos, y_pos, sf_0=np.absolute(sf_0), B_sf=lg.pe.B_sf, theta=theta, B_theta=lg.pe.B_theta)
return lg.invert(FT_lg).ravel()
# Create x and y indices
x = np.arange(lg.pe.N_X)
y = np.arange(lg.pe.N_Y)
x, y = xy = np.meshgrid(x, y)
#create data
x_pos, y_pos, theta, sf_0 = 14.6, 8.5, 12 * np.pi / 180., .1
data = twoD_Gaussian(xy, x_pos, y_pos, theta=theta, sf_0=sf_0)
# plot twoD_Gaussian data generated above
#plt.figure()
#plt.imshow(data.reshape(lg.pe.N_X, lg.pe.N_Y))
#plt.colorbar()
# add some noise to the data and try to fit the data generated beforehand
data /= np.abs(data).max()
data_noisy = data + .25*np.random.normal(size=data.shape)
# getting best match
C = lg.linear_pyramid(data_noisy.reshape(lg.pe.N_X, lg.pe.N_Y))
idx = lg.argmax(C)
initial_guess = [idx[0], idx[1], lg.theta[idx[2]], lg.sf_0[idx[3]]]
print ('initial_guess :', initial_guess, ', idx :', idx)
import scipy.optimize as opt
popt, pcov = opt.curve_fit(twoD_Gaussian, xy, data_noisy, p0=initial_guess)
data_fitted = twoD_Gaussian(xy, *popt)
extent = (0, lg.pe.N_X, 0, lg.pe.N_Y)
print ('popt :', popt, ', true : ', x_pos, y_pos, theta, sf_0)
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
_ = axs[0].contourf(data.reshape(lg.pe.N_X, lg.pe.N_Y), 8, extent=extent, cmap=plt.cm.viridis, origin='upper')
_ = axs[1].imshow(data_noisy.reshape(lg.pe.N_X, lg.pe.N_Y), cmap=plt.cm.viridis, extent=extent)
_ = axs[2].contourf(data_fitted.reshape(lg.pe.N_X, lg.pe.N_Y), 8, extent=extent, cmap=plt.cm.viridis, origin='upper')
for ax in axs: ax.axis('equal')
from LogGabor import LogGaborFit
lg = LogGaborFit(parameterfile)
lg.set_size((32, 32))
x_pos, y_pos, theta, sf_0 = 14.6, 8.5, 12 * np.pi / 180., .1
data = lg.invert(lg.loggabor(x_pos, y_pos, sf_0=np.absolute(sf_0), B_sf=lg.pe.B_sf, theta=theta, B_theta=lg.pe.B_theta))
data /= np.abs(data).max()
data_noisy = data + .25*np.random.normal(size=data.shape)
data_fitted, params = lg.LogGaborFit(data_noisy.reshape(lg.pe.N_X, lg.pe.N_Y))
data_fitted.shape
params.pretty_print()
extent = (0, lg.pe.N_X, 0, lg.pe.N_Y)
print ('params :', params, ', true : ', x_pos, y_pos, theta, sf_0)
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
_ = axs[0].contourf(data.reshape(lg.pe.N_X, lg.pe.N_Y), 8, extent=extent, cmap=plt.cm.viridis, origin='upper')
_ = axs[1].imshow(data_noisy.reshape(lg.pe.N_X, lg.pe.N_Y), cmap=plt.cm.viridis, extent=extent)
_ = axs[2].contourf(data_fitted.reshape(lg.pe.N_X, lg.pe.N_Y), 8, extent=extent, cmap=plt.cm.viridis, origin='upper')
for ax in axs: ax.axis('equal')
data_fitted, params = lg.LogGaborFit(data_noisy.reshape(lg.pe.N_X, lg.pe.N_Y), do_border=False)
extent = (0, lg.pe.N_X, 0, lg.pe.N_Y)
print ('params :', params, ', true : ', x_pos, y_pos, theta, sf_0)
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
_ = axs[0].contourf(data.reshape(lg.pe.N_X, lg.pe.N_Y), 8, extent=extent, cmap=plt.cm.viridis, origin='upper')
_ = axs[1].imshow(data_noisy.reshape(lg.pe.N_X, lg.pe.N_Y), cmap=plt.cm.viridis, extent=extent)
_ = axs[2].contourf(data_fitted.reshape(lg.pe.N_X, lg.pe.N_Y), 8, extent=extent, cmap=plt.cm.viridis, origin='upper')
for ax in axs: ax.axis('equal')
%load_ext watermark
%watermark -i -h -m -v -p numpy,matplotlib,scipy,imageio,SLIP,LogGabor -r -g -b
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: The Weibull distribution
Step5: SciPy also provides functions to evaluate the Weibull distribution, which I'll use to check my implementation.
Step6: And here's what the PDF looks like, for these parameters.
Step7: We can use np.random.weibull to generate random values from a Weibull distribution with given parameters.
Step8: Exercise
Step9: Exercise
Step10: Exercise
Step11: Now I'll process the DataFrame to generate data in the form we want for the update.
Step12: Exercise
Step13: Prediction
Step14: Exercise
|
<ASSISTANT_TASK:>
Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import classes from thinkbayes2
from thinkbayes2 import Pmf, Cdf, Suite, Joint
import thinkbayes2
import thinkplot
import numpy as np
def EvalWeibullPdf(x, lam, k):
Computes the Weibull PDF.
x: value
lam: parameter lambda in events per unit time
k: parameter
returns: float probability density
arg = (x / lam)
return k / lam * arg**(k-1) * np.exp(-arg**k)
def EvalWeibullCdf(x, lam, k):
Evaluates CDF of the Weibull distribution.
arg = (x / lam)
return 1 - np.exp(-arg**k)
def MakeWeibullPmf(lam, k, high, n=200):
Makes a PMF that is a discrete approximation to a Weibull distribution.
lam: parameter lambda in events per unit time
k: parameter
high: upper bound
n: number of values in the Pmf
returns: normalized Pmf
xs = np.linspace(0, high, n)
ps = EvalWeibullPdf(xs, lam, k)
return Pmf(dict(zip(xs, ps)))
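As a quick sanity check that is independent of SciPy, the closed-form CDF should match a numerical (trapezoidal) integration of the PDF; a NumPy-only sketch, with the function bodies repeated so it runs standalone:

```python
import numpy as np

def weibull_pdf(x, lam, k):
    arg = x / lam
    return k / lam * arg**(k - 1) * np.exp(-arg**k)

def weibull_cdf(x, lam, k):
    return 1 - np.exp(-(x / lam)**k)

lam, k = 2, 1.5
xs = np.linspace(1e-9, 10, 100001)
pdf_vals = weibull_pdf(xs, lam, k)
# Trapezoid rule: sum of average heights times step widths.
numeric_cdf = float(np.sum((pdf_vals[1:] + pdf_vals[:-1]) / 2 * np.diff(xs)))
print(numeric_cdf, weibull_cdf(10, lam, k))  # the two values should agree closely
```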
from scipy.stats import weibull_min
lam = 2
k = 1.5
x = 0.5
weibull_min.pdf(x, k, scale=lam)
EvalWeibullPdf(x, lam, k)
weibull_min.cdf(x, k, scale=lam)
EvalWeibullCdf(x, lam, k)
pmf = MakeWeibullPmf(lam, k, high=10)
thinkplot.Pdf(pmf)
thinkplot.decorate(xlabel='Lifetime',
ylabel='PMF')
def SampleWeibull(lam, k, n=1):
return np.random.weibull(k, size=n) * lam
data = SampleWeibull(lam, k, 10000)
cdf = Cdf(data)
model = pmf.MakeCdf()
thinkplot.Cdfs([cdf, model])
thinkplot.decorate(xlabel='Lifetime',
ylabel='CDF')
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
import pandas as pd
lam = 2
k = 1.5
n = 15
t_end = 10
starts = np.random.uniform(0, t_end, n)
lifespans = SampleWeibull(lam, k, n)
df = pd.DataFrame({'start': starts, 'lifespan': lifespans})
df['end'] = df.start + df.lifespan
df['age_t'] = t_end - df.start
df.head()
data = []
for i, row in df.iterrows():
if row.end < t_end:
data.append(('eq', row.lifespan))
else:
data.append(('gt', row.age_t))
for pair in data:
print(pair)
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step5: More than one State object
Step7: And here's run_simulation, which is a solution to the exercise at the end of the previous notebook.
Step8: Now we can create more than one State object
Step9: Whenever we call a function, we indicate which State object to work with
Step10: And you can confirm that the different objects are getting updated independently
Step11: Negative bikes
Step14: We can fix this problem using the return statement to exit the function early if an update would cause negative bikes.
Step15: Now if you run the simulation again, it should behave.
Step16: Comparison operators
Step17: Whereas == compares two values and returns True if they are equal.
Step18: You can use == in an if statement.
Step19: But if you use = in an if statement, you get an error.
Step20: Exercise
Step23: Next we need versions of bike_to_wellesley and bike_to_olin that update the metrics.
Step24: Now when we run a simulation, it keeps track of unhappy customers.
Step25: After the simulation, we can print the number of unhappy customers at each location.
Step26: Exercises
Step27: Exercise
|
<ASSISTANT_TASK:>
Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim library
from modsim import *
# set the random number generator
np.random.seed(7)
def step(state, p1, p2):
Simulate one minute of time.
state: bikeshare State object
p1: probability of an Olin->Wellesley customer arrival
p2: probability of a Wellesley->Olin customer arrival
if flip(p1):
bike_to_wellesley(state)
if flip(p2):
bike_to_olin(state)
def bike_to_wellesley(state):
Move one bike from Olin to Wellesley.
state: bikeshare State object
state.olin -= 1
state.wellesley += 1
def bike_to_olin(state):
Move one bike from Wellesley to Olin.
state: bikeshare State object
state.wellesley -= 1
state.olin += 1
def decorate_bikeshare():
Add a title and label the axes.
decorate(title='Olin-Wellesley Bikeshare',
xlabel='Time step (min)',
ylabel='Number of bikes')
def run_simulation(state, p1, p2, num_steps):
Simulate the given number of time steps.
state: State object
p1: probability of an Olin->Wellesley customer arrival
p2: probability of a Wellesley->Olin customer arrival
num_steps: number of time steps
results = TimeSeries()
for i in range(num_steps):
step(state, p1, p2)
results[i] = state.olin
plot(results, label='Olin')
bikeshare1 = State(olin=10, wellesley=2)
bikeshare2 = State(olin=2, wellesley=10)
bike_to_olin(bikeshare1)
bike_to_wellesley(bikeshare2)
bikeshare1
bikeshare2
bikeshare = State(olin=10, wellesley=2)
run_simulation(bikeshare, 0.4, 0.2, 60)
decorate_bikeshare()
def bike_to_wellesley(state):
Move one bike from Olin to Wellesley.
state: bikeshare State object
if state.olin == 0:
return
state.olin -= 1
state.wellesley += 1
def bike_to_olin(state):
Move one bike from Wellesley to Olin.
state: bikeshare State object
if state.wellesley == 0:
return
state.wellesley -= 1
state.olin += 1
bikeshare = State(olin=10, wellesley=2)
run_simulation(bikeshare, 0.4, 0.2, 60)
decorate_bikeshare()
x = 5
x == 5
if x == 5:
print('yes, x is 5')
# If you remove the # from the if statement and run it, you'll get
# SyntaxError: invalid syntax
#if x = 5:
# print('yes, x is 5')
bikeshare = State(olin=10, wellesley=2,
olin_empty=0, wellesley_empty=0)
def bike_to_wellesley(state):
Move one bike from Olin to Wellesley.
state: bikeshare State object
if state.olin == 0:
state.olin_empty += 1
return
state.olin -= 1
state.wellesley += 1
def bike_to_olin(state):
Move one bike from Wellesley to Olin.
state: bikeshare State object
if state.wellesley == 0:
state.wellesley_empty += 1
return
state.wellesley -= 1
state.olin += 1
run_simulation(bikeshare, 0.4, 0.2, 60)
decorate_bikeshare()
bikeshare.olin_empty
bikeshare.wellesley_empty
bikeshare = State(olin=10, wellesley=2,
olin_empty=0, wellesley_empty=0,
clock=0)
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Creation
Step2: Create a 2D array with two columns that combines the random frequencies and a time period equal to 1.
Step3: If we plot the histogram from the frequency data, we can see that the distribution tail is thicker at the right side because of the anomalous data points
Step4: Create the anomaly detector
Step5: Train the anomaly detector
Step6: Compute the anomaly scores for each data point
Step7: Anomaly Scores
Step8: The anomalous frequencies vs. anomaly scores
Step9: As can be seen above, the anomalous frequencies also have higher anomaly scores than the normal frequencies, as expected.<br/><br/> This becomes even more visible if we plot the frequency (x-axis) against the anomaly scores (y-axis)
Step10: So, depending on the level at which we consider a frequency anomalous, we can set a threshold to decide whether a frequency is an anomaly.
|
<ASSISTANT_TASK:>
Python Code:
import pyisc;
import numpy as np
from scipy.stats import poisson
%matplotlib inline
from pylab import hist, plot, figure
po_normal = poisson(10)
po_anomaly = poisson(25)
freq_normal = po_normal.rvs(10000)
freq_anomaly = po_anomaly.rvs(15)
data = np.column_stack([
list(freq_normal)+list(freq_anomaly),
[1.0]*(len(freq_normal)+len(freq_anomaly))
])
data[:5]
hist(data.T[0],100);
anomaly_detector = pyisc.AnomalyDetector(
pyisc.P_PoissonOnesided(frequency_column=0,period_column=1)
)
%timeit anomaly_detector.fit(data);
scores = anomaly_detector.anomaly_score(data)
for s in zip(freq_normal[:15], scores[:15]):
print s
for s in zip(freq_anomaly, scores[-15:]):
print s
plot(data.T[0], scores, '.');
data2 = np.column_stack([
poisson(15).rvs(15),
[1.0]*15
])
anomaly_detector.fit_incrementally(data2);
scores_ = anomaly_detector.anomaly_score(data)
figure(1);plot(data.T[0], scores, 'b.');plot(data.T[0], scores_, 'gx');
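The description mentions choosing a threshold on the anomaly scores; pyISC itself does not pick one, but a simple percentile cutoff can be sketched with plain NumPy (the 99th percentile here is an arbitrary illustrative choice, not part of pyISC):

```python
import numpy as np

def flag_anomalies(scores, percentile=99.0):
    # Everything strictly above the chosen score percentile is flagged.
    threshold = np.percentile(scores, percentile)
    return scores > threshold, threshold

# Synthetic scores: a low-score bulk plus a few clearly separated outliers.
rng = np.random.RandomState(0)
scores = np.concatenate([rng.normal(1.0, 0.3, 1000), rng.normal(8.0, 0.5, 10)])
flags, threshold = flag_anomalies(scores)
print(flags.sum(), threshold)
```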
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import some data to play with
Step2: Split the data and prepare data for ROC Curve
Step3: Plot ROC Curve using Matplotlib
Step4: Create ROCAUC using YellowBrick
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import train_test_split
bc = datasets.load_breast_cancer()
X = bc.data
y = bc.target
random_state = np.random.RandomState(0)
# shuffle and split training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.5,
random_state=random_state)
# Learn to predict each class against the other
classifier = svm.SVC(kernel='linear', probability=True, random_state=random_state)
y_score = classifier.fit(X_train, y_train).decision_function(X_test)
# Compute ROC curve and ROC area for each class
fpr, tpr, _ = roc_curve(y_test, y_score)
roc_auc = auc(fpr, tpr)
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
import yellowbrick as yb
from yellowbrick.classifier import ROCAUC
visualizer = ROCAUC(classifier)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.poof() # Draw/show/poof the data
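For intuition about what both plots report, the AUC can be computed by hand: sort by descending score, sweep the decision threshold, and integrate TPR over FPR with the trapezoid rule. A NumPy-only sketch (a simplified version that does not group tied scores; not the scikit-learn implementation):

```python
import numpy as np

def roc_auc(y_true, y_score):
    y = np.asarray(y_true, dtype=float)
    s = np.asarray(y_score, dtype=float)
    order = np.argsort(-s)          # sort by descending score
    y = y[order]
    tps = np.cumsum(y)              # true positives as the threshold sweeps down
    fps = np.cumsum(1 - y)          # false positives at each threshold
    tpr = np.concatenate([[0.0], tps / tps[-1]])
    fpr = np.concatenate([[0.0], fps / fps[-1]])
    # Trapezoidal integration of TPR over FPR.
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))

print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # -> 0.75
```

On this classic four-point example the result is 0.75, matching `sklearn.metrics.roc_auc_score`.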
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example with real audio recordings
Step2: Audio data
Step3: STFT
Step4: iterative WPE
Step5: Power spectrum
|
<ASSISTANT_TASK:>
Python Code:
def aquire_audio_data():
D, T = 4, 10000
y = np.random.normal(size=(D, T))
return y
y = aquire_audio_data()
Y = stft(y, **stft_options).transpose(2, 0, 1)
with tf.Session() as session:
Y_tf = tf.placeholder(
tf.complex128, shape=(None, None, None))
Z_tf = wpe(Y_tf)
Z = session.run(Z_tf, {Y_tf: Y})
z_tf = istft(Z.transpose(1, 2, 0), size=stft_options['size'], shift=stft_options['shift'])
channels = 8
sampling_rate = 16000
delay = 3
iterations = 5
taps = 10
file_template = 'AMI_WSJ20-Array1-{}_T10c0201.wav'
signal_list = [
sf.read(str(project_root / 'data' / file_template.format(d + 1)))[0]
for d in range(channels)
]
y = np.stack(signal_list, axis=0)
IPython.display.Audio(y[0], rate=sampling_rate)
Y = stft(y, **stft_options).transpose(2, 0, 1)
from nara_wpe.tf_wpe import get_power
with tf.Session()as session:
Y_tf = tf.placeholder(tf.complex128, shape=(None, None, None))
Z_tf = wpe(Y_tf, taps=taps, iterations=iterations)
Z = session.run(Z_tf, {Y_tf: Y})
z = istft(Z.transpose(1, 2, 0), size=stft_options['size'], shift=stft_options['shift'])
IPython.display.Audio(z[0], rate=sampling_rate)
fig, [ax1, ax2] = plt.subplots(1, 2, figsize=(20, 8))
im1 = ax1.imshow(20 * np.log10(np.abs(Y[:, 0, 200:400])), origin='lower')
ax1.set_xlabel('')
_ = ax1.set_title('reverberated')
im2 = ax2.imshow(20 * np.log10(np.abs(Z[:, 0, 200:400])), origin='lower')
_ = ax2.set_title('dereverberated')
cb = fig.colorbar(im1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: MSCOCO Captions
Step2: How can you look at feature maps from the first convolutional layer? Look here if you need a hint.
Step3: How can you look at the response of different layers in your network?
Step4: A file with feature vectors from 2000 of the MSCOCO images has been created. Next, you will load these and train. Please note this step can take more than 5 minutes to run.
Step5: In the cell above we created three lists, one for the image_id, feature map, and caption. To verify that the indices of each list are aligned, display the image id and caption for one image.
Step6: The next cell contains functions for queuing our data and the RNN model. What should the output for each function be? If you need a hint look here.
Step7: We can use the function below to estimate how well the network is able to predict the next word in the caption. You can evaluate a single image and its caption from the last batch using the index of the batch. If you need a hint look here.
Step8: Questions
Step9: The cell below will load a feature vector from one of the images in the validation data set and use it with our pretrained network to generate a caption. Use the VALDATA variable to propagate an image through our RNN and generate a caption. You also need to load the network you just created during training. Look here if you need a hint.
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import inspect
import time
import numpy as np
import tensorflow as tf
from tensorflow.python.framework import ops
from tensorflow.python.framework import dtypes
#import reader
import collections
import os
import re
import json
import matplotlib.pyplot as plt
from scipy import ndimage
from scipy import misc
import sys
sys.path.insert(0, '/data/models/slim')
slim=tf.contrib.slim
from nets import vgg
from preprocessing import vgg_preprocessing
%matplotlib inline
!nvidia-smi
TRAIN_IMAGE_PATH='/data/mscoco/train2014/'
## Read Training files
with open("/data/mscoco/captions_train2014.json") as data_file:
data=json.load(data_file)
image_feature_vectors={}
tf.reset_default_graph()
one_image=ndimage.imread(TRAIN_IMAGE_PATH+data["images"][0]['file_name'])
#resize for vgg network
resize_img=misc.imresize(one_image,[224,224])
if len(one_image.shape)!= 3: #If the image is grayscale, mirror it across three color bands
resize_img=np.asarray(np.dstack((resize_img, resize_img, resize_img)), dtype=np.uint8)
processed_image = vgg_preprocessing.preprocess_image(resize_img, 224, 224, is_training=False)
processed_images = tf.expand_dims(processed_image, 0)
network,endpts= vgg.vgg_16(processed_images, is_training=False)
init_fn = slim.assign_from_checkpoint_fn(os.path.join('/data/mscoco/vgg_16.ckpt'),slim.get_model_variables('vgg_16'))
sess = tf.Session()
init_fn(sess)
NETWORK,ENDPTS=sess.run([network,endpts])
sess.close()
print('fc7 array for a single image')
print(ENDPTS['vgg_16/fc7'][0][0][0])
plt.plot(ENDPTS['vgg_16/fc7'][0][0][0])
plt.xlabel('feature vector index')
plt.ylabel('amplitude')
plt.title('fc7 feature vector')
data["images"][0]['file_name']
print(ENDPTS['vgg_16/conv1/conv1_1'][0].shape)
FEATUREMAPID=0
print('input image and feature map from conv1')
plt.subplot(1,2,1)
plt.imshow(resize_img)
plt.subplot(1,2,2)
plt.imshow(ENDPTS['vgg_16/conv1/conv1_1'][0][:,:,FEATUREMAPID])
CaptionsForOneImage=[]
for k in range(len(data['annotations'])):
if data['annotations'][k]['image_id']==data["images"][0]['id']:
CaptionsForOneImage.append([data['annotations'][k]['caption'].lower()])
plt.imshow(resize_img)
print('MSCOCO captions for a single image')
CaptionsForOneImage
example_load=np.load('/data/mscoco/train_vgg_16_fc7_2000.npy').tolist()
image_ids=example_load.keys()
#Create 3 lists image_id, feature maps, and captions.
image_id_key=[]
feature_maps_to_id=[]
caption_to_id=[]
for observed_image in image_ids:
for k in range(len(data['annotations'])):
if data['annotations'][k]['image_id']==observed_image:
image_id_key.append([observed_image])
feature_maps_to_id.append(example_load[observed_image])
caption_to_id.append(re.sub('[^A-Za-z0-9]+',' ',data['annotations'][k]['caption']).lower()) #remove punctuation
print('number of images ',len(image_ids))
print('number of captions ',len(caption_to_id))
STRING='%012d' % image_id_key[0][0]
exp_image=ndimage.imread(TRAIN_IMAGE_PATH+'COCO_train2014_'+STRING+'.jpg')
plt.imshow(exp_image)
print('image_id ',image_id_key[:5])
print('the captions for this image ')
print(caption_to_id[:5])
num_steps=20
######################################################################
##Create a list of all of the sentences.
DatasetWordList=[]
for dataset_caption in caption_to_id:
DatasetWordList+=str(dataset_caption).split()
#Determine number of distint words
distintwords=collections.Counter(DatasetWordList)
#Order words
count_pairs = sorted(distintwords.items(), key=lambda x: (-x[1], x[0])) #descending by frequency
words, occurence = list(zip(*count_pairs))
#DictionaryLength=occurence.index(4) #index for words that occur 4 times or less
words=['PAD','UNK','EOS']+list(words)#[:DictionaryLength])
word_to_id=dict(zip(words, range(len(words))))
##################### Tokenize Sentence #######################
Tokenized=[]
for full_words in caption_to_id:
EmbeddedSentence=[word_to_id[word] for word in full_words.split() if word in word_to_id]+[word_to_id['EOS']]
#Pad sentences that are shorter than the number of steps
if len(EmbeddedSentence)<num_steps:
b=[word_to_id['PAD']]*num_steps
b[:len(EmbeddedSentence)]=EmbeddedSentence
if len(EmbeddedSentence)>num_steps:
b=EmbeddedSentence[:num_steps]
if len(EmbeddedSentence)==num_steps:
b=EmbeddedSentence
#b=[word_to_id['UNK'] if x>=DictionaryLength else x for x in b] #turn all words used 4 times or less to 'UNK'
#print(b)
Tokenized+=[b]
print("Number of words in this dictionary ", len(words))
#Tokenized Sentences
Tokenized[::2000]
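The pad/truncate logic inside the loop above can be isolated into one small helper; a self-contained sketch (the function name and token ids are illustrative):

```python
def pad_or_truncate(token_ids, num_steps, pad_id, eos_id):
    # Append the end-of-sentence marker, then pad or truncate to num_steps.
    seq = token_ids + [eos_id]
    if len(seq) < num_steps:
        return seq + [pad_id] * (num_steps - len(seq))
    return seq[:num_steps]

print(pad_or_truncate([5, 6, 7], num_steps=6, pad_id=0, eos_id=2))  # -> [5, 6, 7, 2, 0, 0]
```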
def data_queue(caption_input,feature_vector,batch_size,):
train_input_queue = tf.train.slice_input_producer(
[caption_input, np.asarray(feature_vector)],num_epochs=10000,
shuffle=True) #False before
##Set our train data and label input shape for the queue
TrainingInputs=train_input_queue[0]
FeatureVectors=train_input_queue[1]
TrainingInputs.set_shape([num_steps])
FeatureVectors.set_shape([len(feature_vector[0])]) #fc7 is 4096
min_after_dequeue=1000000
capacity = min_after_dequeue + 3 * batch_size
#input_x, target_y
tokenized_caption, input_feature_map = tf.train.batch([TrainingInputs, FeatureVectors],
batch_size=batch_size,
capacity=capacity,
num_threads=6)
return tokenized_caption,input_feature_map
def rnn_model(Xconcat,input_keep_prob,output_keep_prob,num_layers,num_hidden):
#Create a multilayer RNN
#reuse=False for training but reuse=True for sharing
layer_cell=[]
for _ in range(num_layers):
lstm_cell = tf.contrib.rnn.LSTMCell(num_units=num_hidden, state_is_tuple=True)
lstm_cell = tf.contrib.rnn.DropoutWrapper(lstm_cell,
input_keep_prob=input_keep_prob,
output_keep_prob=output_keep_prob)
layer_cell.append(lstm_cell)
cell = tf.contrib.rnn.MultiRNNCell(layer_cell, state_is_tuple=True)
outputs, last_states = tf.contrib.rnn.static_rnn(
cell=cell,
dtype=tf.float32,
inputs=tf.unstack(Xconcat))
output_reshape=tf.reshape(outputs, [batch_size*(num_steps),num_hidden]) #[12==batch_size*num_steps,num_hidden==12]
pred=tf.matmul(output_reshape, variables_dict["weights_mscoco"]) +variables_dict["biases_mscoco"]
return pred
tf.reset_default_graph()
#######################################################################################################
# Parameters
num_hidden=2048
num_steps=num_steps
dict_length=len(words)
batch_size=4
num_layers=2
train_lr=0.00001
#######################################################################################################
TrainingInputs=Tokenized
FeatureVectors=feature_maps_to_id
## Variables ##
# Learning rate placeholder
lr = tf.placeholder(tf.float32, shape=[])
#tf.get_variable_scope().reuse_variables()
variables_dict = {
"weights_mscoco":tf.Variable(tf.truncated_normal([num_hidden,dict_length],
stddev=1.0,dtype=tf.float32),name="weights_mscoco"),
"biases_mscoco": tf.Variable(tf.truncated_normal([dict_length],
stddev=1.0,dtype=tf.float32), name="biases_mscoco")}
tokenized_caption, input_feature_map=data_queue(TrainingInputs,FeatureVectors,batch_size)
mscoco_dict=words
TrainInput=tf.constant(word_to_id['PAD'],shape=[batch_size,1],dtype=tf.int32)
#Pad the beginning of our caption so the first time step carries only the image feature vector.
#Drop the last time step to keep the sequence length at num_steps (20).
TrainInput=tf.concat([tf.constant(word_to_id['PAD'],shape=[batch_size,1],dtype=tf.int32),
tokenized_caption],1)[:,:-1]
X_one_hot=tf.nn.embedding_lookup(np.identity(dict_length), TrainInput) #[batch,num_steps,dictionary_length][2,6,7]
#ImageFeatureTensor=input_feature_map
Xconcat=tf.concat([input_feature_map+tf.zeros([num_steps,batch_size,4096]),
tf.unstack(tf.to_float(X_one_hot),num_steps,1)],2)#[:num_steps,:,:]
pred=rnn_model(Xconcat,1.0,1.0,num_layers,num_hidden)
#the full caption is the target sentence
y_one_hot=tf.unstack(tf.nn.embedding_lookup(np.identity(dict_length), tokenized_caption),num_steps,1) #[batch,num_steps,dictionary_length][2,6,7]
y_target_reshape=tf.reshape(y_one_hot,[batch_size*num_steps,dict_length])
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y_target_reshape))
optimizer = tf.train.MomentumOptimizer(lr,0.9)
gvs = optimizer.compute_gradients(cost,aggregation_method = tf.AggregationMethod.EXPERIMENTAL_TREE)
capped_gvs = [(tf.clip_by_value(grad, -10., 10.), var) for grad, var in gvs]
train_op=optimizer.apply_gradients(capped_gvs)
saver = tf.train.Saver()
init_op = tf.group(tf.global_variables_initializer(),tf.local_variables_initializer())
with tf.Session() as sess:
sess.run(init_op)
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
#Load a pretrained network
saver.restore(sess, '/data/mscoco/rnn_layermodel_iter40000')
print('Model restored from file')
for i in range(100):
loss,y_pred,target_caption,_=sess.run([cost,pred,tokenized_caption,train_op],feed_dict={lr:train_lr})
if i% 10==0:
print("iteration: ",i, "loss: ",loss)
MODEL_NAME='rnn_model_iter'+str(i)
saver.save(sess, MODEL_NAME)
print('saved trained network ',MODEL_NAME)
print("Done Training")
coord.request_stop()
coord.join(threads)
sess.close()
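The `tf.nn.embedding_lookup(np.identity(dict_length), ids)` pattern used above builds one-hot vectors by indexing rows of an identity matrix. A minimal NumPy sketch of the same idea (the vocabulary size of 7 here is purely illustrative):

```python
import numpy as np

# Row i of np.identity(n) has a 1 at position i and 0 elsewhere,
# so indexing rows by token id yields one-hot vectors.
vocab_size = 7                       # illustrative vocabulary size
token_ids = np.array([0, 3, 5])
one_hot = np.identity(vocab_size)[token_ids]   # shape (3, 7)
```

Each token id selects one row, so `one_hot[1]` has a single 1 at index 3.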
def show_next_predicted_word(batch_id,batch_size,id_of_image,target_caption,predicted_caption,words,PATH):
Target=[words[ind] for ind in target_caption[batch_id]]
Prediction_Tokenized=np.argmax(predicted_caption[batch_id::batch_size],1)
Prediction=[words[ind] for ind in Prediction_Tokenized]
STRING2='%012d' % id_of_image
img=ndimage.imread(PATH+STRING2+'.jpg')
return Target,Prediction,img,STRING2
#You can change the batch id to a number between [0 , batch_size-1]
batch_id=0
image_id_for_predicted_caption=[x for x in range(len(Tokenized)) if target_caption[batch_id].tolist()== Tokenized[x]][0]
t,p,input_img,string_out=show_next_predicted_word(batch_id,batch_size,image_id_key[image_id_for_predicted_caption][0]
,target_caption,y_pred,words,TRAIN_IMAGE_PATH+'COCO_train2014_')
print('Caption')
print(t)
print('Predicted Words')
print(p)
plt.imshow(input_img)
## Load our validation set
val_load=np.load('/data/mscoco/val_vgg_16_fc7_100.npy').tolist()
val_ids=val_load.keys()
#Create 3 lists image_id, feature maps, and captions.
val_id_key=[]
val_map_to_id=[]
val_caption_to_id=[]
for observed_image in val_ids:
val_id_key.append([observed_image])
val_map_to_id.append(val_load[observed_image])
print('number of images ',len(val_ids))
print('number of captions ',len(val_map_to_id))
tf.reset_default_graph()
batch_size=1
num_steps=20
print_topn=0 # 0 = do not display
printnum0f=3
#Choose a image to caption
VALDATA=54 #ValImage fc7 feature vector
variables_dict = {
"weights_mscoco":tf.Variable(tf.truncated_normal([num_hidden,dict_length],
stddev=1.0,dtype=tf.float32),name="weights_mscoco"),
"biases_mscoco": tf.Variable(tf.truncated_normal([dict_length],
stddev=1.0,dtype=tf.float32), name="biases_mscoco")}
StartCaption=np.zeros([batch_size,num_steps],dtype=np.int32).tolist()
CaptionPlaceHolder = tf.placeholder(dtype=tf.int32, shape=(batch_size , num_steps))
ValFeatureMap=val_map_to_id[VALDATA]
X_one_hot=tf.nn.embedding_lookup(np.identity(dict_length), CaptionPlaceHolder) #[batch,num_steps,dictionary_length][2,6,7]
#ImageFeatureTensor=input_feature_map
Xconcat=tf.concat([ValFeatureMap+tf.zeros([num_steps,batch_size,4096]),
tf.unstack(tf.to_float(X_one_hot),num_steps,1)],2)#[:num_steps,:,:]
pred=rnn_model(Xconcat,1.0,1.0,num_layers,num_hidden)
pred=tf.nn.softmax(pred)
saver = tf.train.Saver()
init_op = tf.group(tf.global_variables_initializer(),tf.local_variables_initializer())
with tf.Session() as sess:
sess.run(init_op)
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
#Load a pretrained network
saver.restore(sess, 'rnn_model_iter99')
print('Model restored from file')
for i in range(num_steps-1):
predict_next_word=sess.run([pred],feed_dict={CaptionPlaceHolder:StartCaption})
INDEX=np.argmax(predict_next_word[0][i])
StartCaption[0][i+1]=INDEX
##Post N most probable next words at each step
if print_topn !=0:
print("Top ",str(printnum0f), "predictions for the", str(i+1), "word in the predicted caption" )
result_args = np.argsort(predict_next_word[0][i])[-printnum0f:][::-1]
NextWord=[words[x] for x in result_args]
print(NextWord)
coord.request_stop()
coord.join(threads)
sess.close()
STRING2='%012d' % val_id_key[VALDATA][0]
img=ndimage.imread('/data/mscoco/val2014/COCO_val2014_'+STRING2+'.jpg')
plt.imshow(img)
plt.title('COCO_val2014_'+STRING2+'.jpg')
PredictedCaption=[words[x] for x in StartCaption[0]]
print("predicted sentence: ",PredictedCaption[1:])
#Free our GPU memory before proceeding to the next part of the lab
import os
os._exit(00)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Status Codes
Step2: Query Parameters
|
<ASSISTANT_TASK:>
Python Code:
import csv
import requests
response = requests.get("http://api.open-notify.org/iss-now.json")
response.status_code
# Set up the parameters we want to pass to the API.
# This is the latitude and longitude of New York City.
parameters = {"lat": 40.71, "lon": -74}
# Make a get request with the parameters.
response = requests.get("http://api.open-notify.org/iss-pass.json", params=parameters)
# Print the content of the response (the data the server returned)
print(response.content)
# This gets the same data as the command above
response = requests.get("http://api.open-notify.org/iss-pass.json?lat=40.71&lon=-74")
print(response.content)
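As the two requests above show, passing `params` is equivalent to appending an encoded query string to the URL. A small offline sketch with the standard library illustrates the encoding step:

```python
from urllib.parse import urlencode

# urlencode turns a dict of parameters into a URL query string.
parameters = {"lat": 40.71, "lon": -74}
query_string = urlencode(parameters)          # 'lat=40.71&lon=-74'
url = "http://api.open-notify.org/iss-pass.json?" + query_string
```

This is exactly the string `requests` builds for you when you supply the `params` argument.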
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model Inputs
Step2: Generator network
Step3: Discriminator
Step4: Hyperparameters
Step5: Build network
Step6: Discriminator and Generator Losses
Step7: Optimizers
Step8: Training
Step9: Training loss
Step10: Generator samples from training
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32,shape=(None,real_dim),name='inputs_real')
inputs_z = tf.placeholder(tf.float32,shape=(None,z_dim),name='inputs_z')
return inputs_real, inputs_z
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('generator',reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(z,units=n_units,activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1,units=out_dim,activation=None)
out = tf.tanh(logits)
return out
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope("discriminator",reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(x,units=n_units,activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and sigmoid output
logits = tf.layers.dense(h1,units=1,activation=None)
out = tf.sigmoid(logits)
return out, logits
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
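Label smoothing replaces the discriminator's "real" targets of 1.0 with `1 - smooth`, which keeps the discriminator from becoming overconfident. A quick NumPy sketch of the target values it produces:

```python
import numpy as np

smoothing = 0.1
real_labels = np.ones(4)                    # hard targets for real images
smoothed = real_labels * (1 - smoothing)    # soft targets of 0.9
```

These soft targets are what `tf.ones_like(d_model_real) * (1 - smooth)` feeds into the real-image loss below.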
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size,z_size)
# Generator network here
g_model = generator(input_z,input_size,g_hidden_size,False,alpha)
# g_model is the generator output
# Disriminator network here
d_model_real, d_logits_real = discriminator(input_real,d_hidden_size,False,alpha)
d_model_fake, d_logits_fake = discriminator(g_model,d_hidden_size,True,alpha)
# Calculate losses
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,labels=tf.ones_like(d_model_real)*(1-smooth)))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,labels=tf.zeros_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake)))
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(g_loss, var_list=g_vars)
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
_ = view_samples(-1, samples)
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step32: Create Convolutional Model
Step35: Train the Neural Network
Step37: Show Stats
Step38: Hyperparameters
Step40: Train on a Single CIFAR-10 Batch
Step42: Fully Train the Model
Step45: Checkpoint
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('cifar-10-python.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
'cifar-10-python.tar.gz',
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open('cifar-10-python.tar.gz') as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 5
sample_id = 10
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
# TODO: Implement Function
# nanmax returns the maximum of an array (or the maximum along an axis) while ignoring any NaNs,
# which guards against possible NaNs in future data.
# Review https://www.youtube.com/watch?time_continue=59&v=WaHQ9-UXIIg
return x/np.nanmax(x)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
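Dividing by the array maximum rescales 8-bit pixel values into [0, 1]. A quick sketch of what `normalize` does to sample data:

```python
import numpy as np

pixels = np.array([0.0, 127.5, 255.0])
scaled = pixels / np.nanmax(pixels)   # -> [0.0, 0.5, 1.0]
```

Note this only maps values into [0, 1] when the minimum is already 0; full min-max scaling would subtract the minimum first.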
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
# TODO: Implement Function
# With help from this thread: https://nd101.slack.com/archives/project-2/p1488897256013609?thread_ts=1488894897.013552&cid=C3Q7DJM1R
rt = None
for i in x:
if rt is None:
rt = np.eye(N=1, M=10, k=i)
else:
rt = np.append(rt, np.eye(N=1, M=10, k=i), axis=0)
return rt
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
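The loop above works, but row indexing into an identity matrix gives the same one-hot encoding in a single vectorized step. A hedged alternative sketch (`one_hot_encode_vectorized` is an illustrative name, not part of the project API):

```python
import numpy as np

def one_hot_encode_vectorized(x, n_classes=10):
    # Each label in x selects the matching row of an n_classes x n_classes identity matrix.
    return np.eye(n_classes)[np.asarray(x)]

encoded = one_hot_encode_vectorized([0, 9, 3])
```

This avoids the per-label `np.append` calls, which copy the growing array on every iteration.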
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a bach of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
w = image_shape[0]
h = image_shape[1]
dep = image_shape[2]
return tf.placeholder(tf.float32, shape=(None,w,h,dep), name="x") # in order to obtain the initial depth use index 3
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
return tf.placeholder(tf.float32, shape=(None,n_classes), name="y")
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
return tf.placeholder(tf.float32,name="keep_prob")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
#x_tensor.TensorShape.as_list()
# initializing weights and bias
weights = tf.Variable(tf.random_normal([conv_ksize[0],conv_ksize[1],x_tensor.get_shape().as_list()[3],conv_num_outputs],
dtype=tf.float32))
bias = tf.Variable(tf.random_normal([conv_num_outputs],dtype=tf.float32))
# apliyin conv
convolution_layer = tf.nn.conv2d(x_tensor,weights, strides=[1,conv_strides[0],conv_strides[1],1],padding='SAME')
# adding bias
convolution_layer = tf.nn.bias_add(convolution_layer,bias)
# Non linear activation
convolution_layer = tf.nn.relu(convolution_layer)
# Applyin pooling
convolution_layer = tf.nn.max_pool(convolution_layer,
ksize=[1,pool_ksize[0],pool_ksize[1],1],
strides=[1,pool_strides[0],pool_strides[1],1],
padding='SAME')
return convolution_layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
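With `'SAME'` padding, the spatial output size of a convolution or pooling layer is `ceil(input / stride)`; this is why 2x2 pooling with stride 2 halves a 32x32 input. A small sketch of that arithmetic (the helper name is illustrative):

```python
import math

def same_pad_output(size, stride):
    # TensorFlow 'SAME' padding: output = ceil(input / stride)
    return math.ceil(size / stride)

after_pool = same_pad_output(32, 2)                # 16
after_two_pools = same_pad_output(after_pool, 2)   # 8
```

Two such pooling layers in `conv_net` reduce 32x32 images to 8x8 feature maps before flattening.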
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
image_flatten_size = x_tensor.get_shape().as_list()[1]\
*x_tensor.get_shape().as_list()[2]\
*x_tensor.get_shape().as_list()[3]
return tf.reshape(x_tensor,[-1,image_flatten_size])
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
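The three explicit `get_shape()` multiplications above can be collapsed with `np.prod` over every dimension except the batch. A NumPy sketch of the same flattening:

```python
import numpy as np

demo_batch = np.zeros((5, 4, 4, 3))                    # (batch, height, width, depth)
flat_size = int(np.prod(demo_batch.shape[1:]))         # 4 * 4 * 3 = 48
flattened = demo_batch.reshape(-1, flat_size)          # (5, 48)
```

In TensorFlow the same idea reads `tf.reshape(x_tensor, [-1, np.prod(x_tensor.get_shape().as_list()[1:])])`.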
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
# Review the weight initialization
weights = tf.Variable(tf.truncated_normal([x_tensor.get_shape().as_list()[1], num_outputs], stddev=0.1))
bias = tf.Variable(tf.truncated_normal([num_outputs],stddev=0.1))
fully_connected_layer = tf.add(tf.matmul(x_tensor,weights),bias)
fully_connected_layer = tf.nn.relu(fully_connected_layer)
return fully_connected_layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
weights = tf.Variable(tf.truncated_normal([x_tensor.get_shape().as_list()[1], num_outputs],stddev=0.1))
bias = tf.Variable(tf.truncated_normal([num_outputs],stddev=0.1))
output = tf.add(tf.matmul(x_tensor,weights),bias)
return output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
x_tensor = x
conv_num_outputs_1 = 22
conv_num_outputs_2 = 44
conv_ksize = [3, 3] # size of the window, i.e. the patch size
conv_strides = [1,1] # vertical and horizontal stride
pool_ksize = [2, 2]
pool_strides = [2, 2]
num_outputs_fully_1 = 256
num_outputs_fully_2 = 128
num_outputs = 10
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
conv_layer_1 = conv2d_maxpool(x_tensor, conv_num_outputs_1, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_layer_2 = conv2d_maxpool(conv_layer_1, conv_num_outputs_2, conv_ksize, conv_strides, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
flatten_layer = flatten(conv_layer_2)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
fully_conn_layer = fully_conn(flatten_layer,num_outputs_fully_1)
fully_conn_layer = tf.nn.dropout(fully_conn_layer,keep_prob)
fully_conn_layer = fully_conn(fully_conn_layer,num_outputs_fully_2)
fully_conn_layer = tf.nn.dropout(fully_conn_layer,keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
output_layer = output(fully_conn_layer,num_outputs)
# TODO: return output
return output_layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
session.run(optimizer, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: keep_probability})
pass
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
loss = session.run(cost, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: 1.0})
valid_acc = session.run(accuracy, feed_dict={
x: valid_features,
y: valid_labels,
keep_prob: 1.0})
print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(
loss,
valid_acc))
pass
# TODO: Tune Parameters
epochs = 40
batch_size = 128
keep_probability = 0.8
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data using PaSDqc API
Step2: Load normalization file included in PaSDqc package
Step3: Fit the amplicon distribution curves using the erf method (other available methods include "logis" and "gamma")
Step4: Estimate amplicon median, mean, and bounds using Monte Carlo simulation
Step5: Estimate the amplicon size density
Step6: Normalize PSDs for plotting purposes
Step7: Load simulation data generated in Ampl_dist_sim.ipynb
Step8: Load data protocol comparison data generated in Lodato_vs_Zhang_ampl_dist.ipynb
Step9: Make a massive plot
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import scipy.optimize
import scipy.special
import matplotlib.pyplot as plt
import seaborn as sns
import pathlib
import sys
import PaSDqc
%matplotlib inline
sample_mda = PaSDqc.PSDTools.SamplePSD.load_from_file("../data/intro_PSDs/example_MDA.spec", name='MDA')
sample_malbac = PaSDqc.PSDTools.SamplePSD.load_from_file("../data/intro_PSDs/example_MALBAC.spec", name='MALBAC')
# sample_bulk = PaSDqc.PSDTools.SamplePSD(pd.read_table("../example_02-basic_PSD/example_bulk.spec", index_col=0), name='Bulk')
freq = sample_mda.freq
avg_mda = sample_mda.avg_PSD()
# avg_bulk = sample_bulk.avg_PSD()
avg_malbac = sample_malbac.avg_PSD()
f_norm = PaSDqc.extra_tools.get_data_file('bulk_1x.smooth3.spec')
norm = pd.Series.from_csv(f_norm, index_col=0, sep="\t").as_matrix()
ampl_dist_mda = PaSDqc.amplicon.AmplDist(freq, avg_mda)
ampl_dist_mda.fit_curve(method='erf')
popt_erf_mda = ampl_dist_mda.popt['erf'][0:4]
ampl_dist_malbac = PaSDqc.amplicon.AmplDist(freq, avg_malbac)
ampl_dist_malbac.fit_curve(method='erf')
popt_erf_malbac = ampl_dist_malbac.popt['erf'][0:4]
median_mda, mean_mda, lower_95_mda, upper_95_mda = ampl_dist_mda.amplicon_range(method='erf')
median_mal, mean_mal, lower_95_mal, upper_95_mal = ampl_dist_malbac.amplicon_range(method='erf')
pdf_erf_mda = ampl_dist_mda.amplicon_dist()
pdf_erf_mal = ampl_dist_malbac.amplicon_dist()
psd_mda = PaSDqc.PSDTools.normalize_psd(avg_mda)
psd_malbac = PaSDqc.PSDTools.normalize_psd(avg_malbac)
df_sim = pd.read_table("MDA_erf_sim.txt")
def normalize(freq, psd, norm, shift=0):
normed = 10*np.log10(psd/norm) + shift
return normed/np.max(normed)
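The `10*np.log10(psd/norm)` step expresses each PSD as a gain in decibels relative to the bulk reference spectrum. A minimal numeric sketch of the decibel conversion (the values are illustrative, not real PSDs):

```python
import numpy as np

psd_values = np.array([1.0, 10.0, 100.0])
reference = np.ones(3)
gain_db = 10 * np.log10(psd_values / reference)   # [0, 10, 20] dB
```

Each factor of 10 in power relative to the reference adds 10 dB, which is why the plots below label the y-axis in dB.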
freq_cut = freq[freq < 1e-3]
psd_30 = normalize(freq, avg_mda, norm, shift=-4)
freq_cut = freq[freq < 1e-3]
psd_30_3p = normalize(freq, sample_mda.df['3'], norm, shift=-0.75)
df_lodata = pd.read_table("Ampl_dist_lodato.txt")
df_cz = pd.read_table("Ampl_dist_CZ.txt")
sns.set_context('poster')
sns.set_style("ticks", {'ytick.minor.size': 0.0, 'xtick.minor.size': 0.0})
f = plt.figure(figsize=(10, 11.5))
cp = sns.color_palette()
# Spectral density and curve fits
ax0 = plt.subplot2grid((3, 2), (0, 0), colspan=2)
ax0.plot((0,0), (0,0))
ax0.plot(1/freq, psd_mda, 'o', label='MDA')
ax0.plot(1/freq, psd_malbac, 'o', label='MALBAC', color=cp[3])
ax0.plot(10**ampl_dist_mda.freq['erf'], ampl_dist_mda.func_erf(ampl_dist_mda.freq['erf'], *popt_erf_mda), color='red', label='erf fit')
ax0.plot(10**ampl_dist_malbac.freq['erf'], ampl_dist_malbac.func_erf(ampl_dist_malbac.freq['erf'], *popt_erf_malbac), color='red')
ax0.plot(median_mda, ampl_dist_mda.func_erf(np.log10(median_mda), *popt_erf_mda), '*', markersize=15, color='pink', label='Median amplicon')
ax0.plot(lower_95_mda, ampl_dist_mda.func_erf(np.log10(lower_95_mda), *popt_erf_mda), '*', markersize=15, color='yellow', label='95% amplicon bounds')
ax0.plot(upper_95_mda, ampl_dist_mda.func_erf(np.log10(upper_95_mda), *popt_erf_mda), '*', markersize=15, color='yellow')
ax0.plot(median_mal, ampl_dist_malbac.func_erf(np.log10(median_mal), *popt_erf_malbac), '*', markersize=15, color='pink')
ax0.plot(lower_95_mal, ampl_dist_malbac.func_erf(np.log10(lower_95_mal), *popt_erf_malbac), '*', markersize=15, color='yellow')
ax0.plot(upper_95_mal, ampl_dist_malbac.func_erf(np.log10(upper_95_mal), *popt_erf_malbac), '*', markersize=15, color='yellow')
ax0.legend(loc=4)
ax0.set_xscale('log')
ax0.set_xlabel('Genomic scale')
ax0.set_ylabel('Power spectral density (dB)')
ax0.set_xticklabels(["0", "100 bp", "1 kb", "10 kb", "100 kb", "1 mb"])
# Log distribution
ax1 = plt.subplot2grid((3, 2), (1, 0))
ax1.plot(ampl_dist_mda.freq['dist'], pdf_erf_mda, label='MDA', color=cp[1])
ax1.fill_between(ampl_dist_mda.freq['dist'], pdf_erf_mda, color=cp[1], alpha=0.2)
ax1.plot(ampl_dist_malbac.freq['dist'], pdf_erf_mal, label='MALBAC', color=cp[3])
ax1.fill_between(ampl_dist_malbac.freq['dist'], pdf_erf_mal, color=cp[3], alpha=0.2)
ax1.set_xlabel('Amplicon size (log)')
ax1.set_ylabel('Density')
ax1.legend()
ax1.set_xscale('log')
ax1.set_xticklabels(["100 bp", "1 kb", "10 kb", "100 kb", "1 mb"])
# Real distribution
ax2 = plt.subplot2grid((3, 2), (1, 1))
ax2.plot(ampl_dist_mda.freq['dist'], pdf_erf_mda, label='MDA', color=cp[1])
ax2.fill_between(ampl_dist_mda.freq['dist'], pdf_erf_mda, color=cp[1], alpha=0.2)
ax2.plot(ampl_dist_malbac.freq['dist'], pdf_erf_mal, label='MALBAC', color=cp[3])
ax2.fill_between(ampl_dist_malbac.freq['dist'], pdf_erf_mal, color=cp[3], alpha=0.2)
ax2.set_xlabel('Amplicon size')
ax2.set_ylabel('Density')
ax2.set_xlim(0, 2e5)
ax2.legend()
ax2.set_xticklabels(["0 bp", "50 kb", "100 kb", "150 kb", "200 kb"])
# Simulated fit
erf_avg = df_sim.avg
erf_se = df_sim.se
ax3 = plt.subplot2grid((3, 2), (2, 0), colspan=1)
ax3.plot(1/freq, psd_30_3p, label='MDA')
ax3.plot(1/freq_cut, erf_avg, label='Simulated', color=cp[2])
ax3.fill_between(1/freq_cut, erf_avg-2*erf_se, erf_avg+2*erf_se, alpha=0.25, color=cp[2])
ax3.set_xscale('log')
ax3.legend(loc='upper left')
ax3.set_ylabel('Normalized PSD')
ax3.set_xlabel('Genomic scale')
ax3.set_xlim(1e2, 1e6)
ax3.set_xticklabels(["0", "100 bp", "1 kb", "10 kb", "100 kb", "1 mb"])
# Lodato 2015 vs. Zhang 2015
pdf_alk_avg = df_lodata.avg
pdf_alk_std = df_lodata['std']
pdf_cz_avg = df_cz.avg
pdf_cz_std = df_cz['std']
freq_eval2 = np.arange(3, 5.5, 0.01)
freq_eval3 = np.arange(2.5, 5, 0.01)
ax4 = plt.subplot2grid((3, 2), (2, 1))
ax4.plot(10**freq_eval3, pdf_cz_avg, label='Zhang,\n2015')
ax4.fill_between(10**freq_eval3, pdf_cz_avg-pdf_cz_std, pdf_cz_avg+pdf_cz_std, alpha=0.3, color=cp[0])
ax4.plot(10**freq_eval2, pdf_alk_avg, label='Lodato,\n2015')#label='Alkaline\nlysis')
ax4.fill_between(10**freq_eval2, pdf_alk_avg-pdf_alk_std, pdf_alk_avg+pdf_alk_std, alpha=0.3, color=cp[1])
ax4.legend(bbox_to_anchor=(0., 0.8, 1., .102), loc=(0, 0), ncol=2, mode="expand", borderaxespad=0.)
ax4.set_xscale('log')
ax4.set_xlabel('Amplicon size')
ax4.set_ylabel('Density')
ax4.set_xticklabels(["0", "100 bp", "1 kb", "10 kb", "100 kb", "1 mb"])
# Figure layout
f.text(0.01, 0.98, "A", weight="bold", horizontalalignment='left', verticalalignment='center', fontsize=20)
f.text(0.01, 0.65, "B", weight="bold", horizontalalignment='left', verticalalignment='center', fontsize=20)
f.text(0.5, 0.65, "C", weight="bold", horizontalalignment='left', verticalalignment='center', fontsize=20)
f.text(0.01, 0.33, "D", weight="bold", horizontalalignment='left', verticalalignment='center', fontsize=20)
f.text(0.5, 0.33, "E", weight="bold", horizontalalignment='left', verticalalignment='center', fontsize=20)
plt.tight_layout()
sns.despine(fig=f, ax=ax0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Build clustering model
Step2: Build the optimal model and apply it
Step3: Cluster Profiles
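Before reaching for scikit-learn, the k-means loop itself can be sketched in a few lines of plain Python on 1-D data (a toy illustration only; the cells below use scikit-learn's KMeans, which is what you should actually use):

```python
# Toy 1-D k-means: alternate between assigning each point to its nearest
# centroid and moving each centroid to the mean of its assigned points.
def kmeans_1d(points, centroids, n_iter=10):
    for _ in range(n_iter):
        # Assignment step: index of the nearest centroid for each point
        labels = [min(range(len(centroids)), key=lambda j: abs(p - centroids[j]))
                  for p in points]
        # Update step: move each centroid to the mean of its members
        for j in range(len(centroids)):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centroids[j] = sum(members) / len(members)
    return centroids, labels

centroids, labels = kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 10.0], [0.0, 5.0])
print(centroids)  # two centroids, near 1.0 and 9.5
```

The real estimator adds smarter initialization, convergence checks, and vectorized distance computations, but the assign/update alternation is the same.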
|
<ASSISTANT_TASK:>
Python Code:
def loadContributions(file, withsexe=False):
contributions = pd.read_json(path_or_buf=file, orient="columns")
rows = [];
rindex = [];
for i in range(0, contributions.shape[0]):
row = {};
row['id'] = contributions['id'][i]
rindex.append(contributions['id'][i])
if (withsexe):
if (contributions['sexe'][i] == 'Homme'):
row['sexe'] = 0
else:
row['sexe'] = 1
for question in contributions['questions'][i]:
if (question.get('Reponse')):
row[question['titreQuestion']+' : '+question['texte']] = 1
for criteres in question.get('Reponse'):
# print(criteres['critere'].keys())
row[question['titreQuestion']+'. (Réponse) '+question['texte']+' -> '+str(criteres['critere'].get('texte'))] = 1
rows.append(row)
df = pd.DataFrame(data=rows)
df.fillna(0, inplace=True)
return df
df = loadContributions('../data/EGALITE4.brut.json', True)
df.fillna(0, inplace=True)
df.index = df['id']
#df.to_csv('consultation_an.csv', format='%d')
#df.columns = ['Q_' + str(col+1) for col in range(len(df.columns) - 2)] + ['id' , 'sexe']
df.head()
from sklearn.cluster import KMeans
from sklearn import metrics
import numpy as np
X = df.drop('id', axis=1).values
def train_kmeans(nb_clusters, X):
kmeans = KMeans(n_clusters=nb_clusters, random_state=0).fit(X)
return kmeans
#print(kmeans.predict(X))
#kmeans.cluster_centers_
def select_nb_clusters():
perfs = {};
for nbclust in range(2,10):
kmeans_model = train_kmeans(nbclust, X);
labels = kmeans_model.labels_
# from http://scikit-learn.org/stable/modules/clustering.html#calinski-harabaz-index
# we are in an unsupervised model. cannot get better!
# perfs[nbclust] = metrics.calinski_harabaz_score(X, labels);
perfs[nbclust] = metrics.silhouette_score(X, labels);
print(perfs);
return perfs;
df['clusterindex'] = train_kmeans(4, X).predict(X)
#df
perfs = select_nb_clusters();
# result :
# {2: 341.07570462155348, 3: 227.39963334619881, 4: 186.90438345452918, 5: 151.03979976346525, 6: 129.11214073405731, 7: 112.37235520885432, 8: 102.35994869157568, 9: 93.848315820675438}
optimal_nb_clusters = max(perfs, key=perfs.get);
print("optimal_nb_clusters" , optimal_nb_clusters);
km_model = train_kmeans(optimal_nb_clusters, X);
df['clusterindex'] = km_model.predict(X)
lGroupBy = df.groupby(['clusterindex']).mean();
cluster_profile_counts = df.groupby(['clusterindex']).count();
cluster_profile_means = df.groupby(['clusterindex']).mean();
global_counts = df.count()
global_means = df.mean()
cluster_profile_counts.head(10)
df_profiles = pd.DataFrame();
nbclusters = cluster_profile_means.shape[0]
df_profiles['clusterindex'] = range(nbclusters)
for col in cluster_profile_means.columns:
if(col != "clusterindex"):
df_profiles[col] = np.zeros(nbclusters)
for cluster in range(nbclusters):
df_profiles[col][cluster] = cluster_profile_means[col][cluster]
# row.append(df[col].mean());
df_profiles.head()
#print(df_profiles.columns)
interesting_columns = {};
for col in df_profiles.columns:
if(col != "clusterindex"):
global_mean = df[col].mean()
diff_means_global = abs(df_profiles[col] - global_mean).max();
# print(col , diff_means_global)
if(diff_means_global > 0.05):
interesting_columns[col] = True
#print(interesting_columns)
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
interesting = list(interesting_columns.keys())
df_profiles_sorted = df_profiles[interesting].sort_index(axis=1)
df_profiles_sorted.plot.bar(figsize =(1, 1))
df_profiles_sorted.plot.bar(figsize =(16, 8), legend=False)
df_profiles_sorted.T
#df_profiles.sort_index(axis=1).T
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you get an error stating that database "homework2" does not exist, make sure that you followed the instructions above exactly. If necessary, drop the database you created (with, e.g., DROP DATABASE your_database_name) and start again.
Step2: Problem set 1
Step3: Problem set 2
Step4: Nicely done. Now, in the cell below, fill in the indicated string with a SQL statement that returns all occupations, along with their count, from the uuser table that have more than fifty users listed for that occupation. (I.e., the occupation librarian is listed for 51 users, so it should be included in these results. There are only 12 lawyers, so lawyer should not be included in the result.)
Step5: TA-COMMENT
Step6: TA-COMMENT
Step7: BONUS
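The GROUP BY ... HAVING pattern asked for in the occupation exercise can be tried against a throwaway in-memory SQLite database before running it on PostgreSQL (the table shape mirrors uuser, but the rows here are invented for illustration):

```python
import sqlite3

# Throwaway in-memory table shaped like uuser
db = sqlite3.connect(":memory:")
cur = db.cursor()
cur.execute("CREATE TABLE uuser (user_id INTEGER, occupation TEXT)")
cur.executemany(
    "INSERT INTO uuser VALUES (?, ?)",
    [(i, "librarian") for i in range(51)] + [(i, "lawyer") for i in range(12)],
)
# GROUP BY buckets the rows; HAVING filters on the per-group aggregate
cur.execute(
    "SELECT occupation, COUNT(*) FROM uuser "
    "GROUP BY occupation HAVING COUNT(*) > 50"
)
print(cur.fetchall())  # [('librarian', 51)] -- lawyer (12) is filtered out
```

The key distinction: WHERE filters individual rows before grouping, while HAVING filters whole groups after aggregation, which is why the count threshold must live in the HAVING clause.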
|
<ASSISTANT_TASK:>
Python Code:
import pg8000
conn = pg8000.connect(user='postgres', password='password', database="homework2_radhika")
conn.rollback()
conn.rollback()
cursor = conn.cursor()
statement = "SELECT movie_title, release_date from uitem WHERE horror=1 AND scifi=1 ORDER BY release_date DESC;"
cursor.execute(statement)
for row in cursor:
print(row[0])
conn.rollback()
cursor = conn.cursor()
statement = "SELECT count(*) FROM uitem WHERE musical=1 OR childrens=1;"
cursor.execute(statement)
for row in cursor:
print(row[0])
conn.rollback()
cursor = conn.cursor()
statement = "SELECT occupation, count(occupation) FROM uuser GROUP BY occupation HAVING count(occupation) > 50;"
cursor.execute(statement)
for row in cursor:
print(row[0], row[1])
conn.rollback()
cursor = conn.cursor()
#statement = "SELECT movie_id from uitem limit 5;"
statement = "SELECT DISTINCT(uitem.movie_title), item_id, rating FROM udata JOIN uitem on item_id = movie_id WHERE documentary=1 AND uitem.release_date < '1992-01-01' AND udata.rating = 5;"
cursor.execute(statement)
for row in cursor:
print(row[0])
conn.rollback()
cursor = conn.cursor()
statement = "SELECT uitem.movie_title, avg(rating) FROM udata JOIN uitem on item_id = movie_id WHERE horror=1 GROUP BY uitem.movie_title ORDER BY avg(udata.rating) limit 10;"
cursor.execute(statement)
for row in cursor:
print(row[0], "%0.2f" % row[1])
conn.rollback()
cursor = conn.cursor()
statement = "SELECT uitem.movie_title, avg(rating) FROM udata JOIN uitem on item_id = movie_id WHERE horror=1 GROUP BY uitem.movie_title HAVING count(rating) > 10 ORDER BY avg(udata.rating) limit 10;"
cursor.execute(statement)
for row in cursor:
print(row[0], "%0.2f" % row[1])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Visualization Function
Step2: Registration Functions
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('../code/functions')
sys.path.append('/home/simpleElastix/build/SimpleITK-build/Wrapping/Python')
import pickle
import cv2
import time
import SimpleITK as sitk
import numpy as np
import matplotlib.pyplot as plt
import nibabel as nib
from cluster import Cluster
from tiffIO import loadTiff, unzipChannels
from connectLib import adaptiveThreshold
from hyperReg import get3DRigid, parseTransformFile, apply3DRigid, apply3DRigidVolume
movingImg = pickle.load(open('../code/functions/movingDebug.io', 'r'))
fixedImg = pickle.load(open('../code/functions/fixedDebug.io', 'r'))
movingLmk = pickle.load(open('../code/functions/movingLDebug.io', 'r'))
fixedLmk = pickle.load(open('../code/functions/fixedLDebug.io', 'r'))
testFixedLmk = fixedLmk[10:15]
testMovingLmk = movingLmk[10:15]
testFixedImg = fixedImg[10:15]
testMovingImg = movingImg[10:15]
plt.figure()
plt.imshow(movingImg[10], cmap='gray')
plt.title('Moving Image Post Connected Components')
plt.show()
plt.figure()
plt.imshow(movingImg[10], cmap='gray')
plt.title('Fixed Image Post Connected Components')
plt.show()
plt.figure()
plt.imshow(movingLmk[10], cmap='gray')
plt.title('Moving Image Landmarks')
plt.show()
plt.figure()
plt.imshow(fixedLmk[10], cmap='gray')
plt.title('Fixed Image Landmarks')
plt.show()
def toDiff(imgA, imgB):
ret = np.empty((imgA.shape[0], imgA.shape[1], 3), dtype=np.uint8)
for y in range(imgA.shape[0]):
for x in range(imgA.shape[1]):
if imgA[y][x] and not imgB[y][x]:
ret[y][x][0] = 255
ret[y][x][1] = 0
ret[y][x][2] = 0
elif not imgA[y][x] and imgB[y][x]:
ret[y][x][0] = 0
ret[y][x][1] = 255
ret[y][x][2] = 0
elif imgA[y][x] and imgB[y][x]:
ret[y][x][0] = 255
ret[y][x][1] = 0
ret[y][x][2] = 255
else:
ret[y][x][0] = 255
ret[y][x][1] = 255
ret[y][x][2] = 255
return ret
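The per-pixel loop above is pure Python and O(H×W); an equivalent vectorized sketch with boolean masks (same color convention assumed: red = only in A, green = only in B, magenta = both, white = neither) runs much faster on large slices:

```python
import numpy as np

def to_diff_vectorized(imgA, imgB):
    # Same color coding as toDiff, computed with boolean masks
    a = imgA.astype(bool)
    b = imgB.astype(bool)
    ret = np.full(a.shape + (3,), 255, dtype=np.uint8)  # start all white
    ret[a & ~b] = [255, 0, 0]    # only in A -> red
    ret[~a & b] = [0, 255, 0]    # only in B -> green
    ret[a & b] = [255, 0, 255]   # in both  -> magenta
    return ret
```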
def visDiff(sliceA, sliceB):
disp = toDiff(sliceA, sliceB)
return disp
def visVolDiff(volumeA, volumeB):
for i in range(volumeA.shape[0]):
plt.figure()
plt.title('Disperity at z=' + str(i))
plt.imshow(visDiff(volumeA[i], volumeB[i]))
plt.show()
def preproc(img):
binImg = adaptiveThreshold(img, 5, 5)
binImg*=255
outImg = np.stack([cv2.erode(sub, None, 1) for sub in binImg])
outImg = np.stack([cv2.dilate(sub, None, 2) for sub in outImg])
return outImg
def register(landmarks1, landmarks2, additionalParams):
SimpleElastix = sitk.SimpleElastix()
SimpleElastix.LogToConsoleOn()
img1 = nib.Nifti1Image(preproc(landmarks1), np.eye(4))
nib.save(img1, 'fixed.nii')
img2 = nib.Nifti1Image(preproc(landmarks2), np.eye(4))
nib.save(img2, 'moving.nii')
SimpleElastix.SetFixedImage(sitk.ReadImage('fixed.nii'))
SimpleElastix.SetMovingImage(sitk.ReadImage('moving.nii'))
pMap = sitk.GetDefaultParameterMap('rigid')
for elem in additionalParams:
pMap[elem[0]]=[elem[1]]
SimpleElastix.SetParameterMap(pMap)
SimpleElastix.Execute()
t = SimpleElastix.GetTransformParameterMap()
sitk.WriteParameterFile(t[0], 'transform.txt')
imgFilter = sitk.SimpleTransformix()
imgFilter.SetTransformParameterMap(t[0])
imgFilter.PrintParameterMap()
return imgFilter
start = time.time()
myTransform = register(fixedLmk, movingLmk,
[['MaximumNumberOfSamplingAttempts','200'],
['MaximumNumberOfIterations', '700'],
['Metric', 'AdvancedMeanSquares']]
)
end = time.time()
print end - start
params = parseTransformFile('transform.txt')
regVolume = apply3DRigidVolume(fixedLmk,
params['matrix'],
params['originZ'],
params['originY'],
params['originX'])
visVolDiff(preproc(fixedLmk[140:145]), preproc(movingLmk[140:145]))
visVolDiff(preproc(regVolume[140:145]), preproc(movingLmk[140:145]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generating and implementing many-body equations
Step2: Generating equations for fully contracted terms
Step3: First we convert the expression derived into a set of equations. You get back a dictionary that shows all the components to the equations. The vertical bar (|) in the key separates the lower (left) and upper (right) indices in the resulting expression
Step4: Converting equations to code
Step5: Many-body equations
Step6: Next, we call to_manybody_equations to generate many-body equations
Step7: Out of all the terms, we select the terms that multiply the excitation operator $\hat{a}^\dagger_a \hat{a}_i$ ("o|v")
Step8: Lastly, we can compile these equations into code
Step9: Antisymmetrization of uncontracted operator indices
Step10: In wick&d the two-body part of $[\hat{V}_{\mathrm{ovov}}, \hat{T}_2]$ gives us only a single term
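The vertical-bar key convention described in Step 3 is easy to illustrate with plain string handling (a sketch only; the actual keys come from wicked's equation dictionaries):

```python
# Keys such as "oo|vv" name the lower (left of '|') and upper (right of '|')
# index spaces of a many-body equation block; "|" alone is the scalar block.
def split_spaces(key):
    lower, upper = key.split("|")
    return list(lower), list(upper)

print(split_spaces("oo|vv"))  # (['o', 'o'], ['v', 'v'])
print(split_spaces("o|v"))    # (['o'], ['v'])
print(split_spaces("|"))      # ([], []) -- fully contracted (scalar) term
```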
|
<ASSISTANT_TASK:>
Python Code:
import wicked as w
from IPython.display import display, Math, Latex
def latex(expr):
Function to render any object that has a member latex() function
display(Math(expr.latex()))
w.reset_space()
w.add_space("o", "fermion", "occupied", ['i','j','k','l','m','n'])
w.add_space("v", "fermion", "unoccupied", ['a','b','c','d','e','f'])
wt = w.WickTheorem()
E0 = w.op("E_0",[""])
F = w.utils.gen_op('f',1,'ov','ov')
V = w.utils.gen_op('v',2,'ov','ov')
H = E0 + F + V
T = w.op("t",["v+ o", "v+ v+ o o"])
Hbar = w.bch_series(H,T,2)
expr = wt.contract(Hbar,0,0)
expr
mbeq = expr.to_manybody_equations('r')
mbeq
for eq in mbeq['|']:
print(eq.compile('einsum'))
F = w.utils.gen_op('f',1,'ov','ov')
T1 = w.op("t",["v+ o"])
expr = wt.contract(w.commutator(F,T1),2,2)
latex(expr)
mbeq = expr.to_manybody_equations('g')
print(mbeq)
mbeq_ov = mbeq["o|v"]
for eq in mbeq_ov:
latex(eq)
for eq in mbeq_ov:
print(eq.compile('einsum'))
T2 = w.op("t", ["v+ v+ o o"])
Vovov = w.op("v", ["o+ v+ v o"])
expr = wt.contract(w.commutator(Vovov, T2), 4, 4)
latex(expr)
for eq in expr.to_manybody_equations('g')['oo|vv']:
print(eq.compile('einsum'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You might see a warning saying that caffe2 does not have GPU support. That means you are running a CPU-only build. Don't be alarmed - anything CPU is still runnable without problem.
Step2: We can feed blobs into the workspace using FeedBlob().
Step3: Now, let's take a look at what blobs there are in the workspace.
Step4: Let's verify that the arrays are equal.
Step5: Also, if you are trying to access a blob that does not exist, an error will be thrown.
Step6: One thing that you might not use immediately: you can have multiple workspaces in Python and switch between them with SwitchWorkspace().
Step7: Let's switch back to the default workspace.
Step8: Finally, ResetWorkspace() clears anything that is in the current workspace.
Step9: Operators
Step10: As we mentioned, the created op is actually a protobuf object. Let's show the content.
Step11: OK, let's run the operator. We first feed in the input X to the workspace.
Step12: After execution, let's see if the operator is doing the right thing, which is our neural network's activation function (Relu) in this case.
Step13: This is working if your Expected output matches your Y output in this example.
Step14: Let's run it and see if things are as intended.
Step15: If you see a bell shaped curve then it worked!
Step16: Let's create a blob called X, and use GaussianFill to fill it with some random data.
Step17: You might have observed a few differences from the earlier core.CreateOperator call. Basically, when we have a net, you can directly create an operator and add it to the net at the same time using Python tricks
Step18: Let's continue to create W and b.
Step19: Now, one simple code sugar
Step20: Under the hood, X.FC(...) simply delegates to net.FC by inserting X as the first input of the corresponding operator, so what we did above is equivalent to
Step21: Too verbose huh? Let's try to visualize it as a graph. Caffe2 ships with a very minimal graph visualization tool for this purpose. Let's show that in ipython.
Step22: So we have defined a Net, but nothing gets executed yet. Remember that the net above is essentially a protobuf that holds the definition of the network. When we actually want to run the network, what happens under the hood is
Step23: Now let's try the second way to create the net, and run it. First clear the variables with ResetWorkspace(), create the net with the workspace's net object you created earlier CreateNet(net_object), and then run the net by name with RunNet(net_name).
Step24: There are a few differences between RunNetOnce and RunNet, but probably the main difference is the computation time overhead. Since RunNetOnce involves serializing the protobuf to pass between Python and C and instantiating the network, it may take longer to run. Let's see in this case what the overhead is.
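Conceptually, a workspace is just a named mapping from blob names to arrays. A toy dict-backed sketch of the Feed/Fetch/Blobs/Reset semantics (not Caffe2's actual implementation, which lives in C++) makes the bookkeeping above concrete:

```python
class ToyWorkspace:
    """Dict-backed sketch of the blob store that workspace.* wraps."""
    def __init__(self):
        self._blobs = {}
    def feed_blob(self, name, value):
        self._blobs[name] = value
    def fetch_blob(self, name):
        if name not in self._blobs:
            raise RuntimeError("blob %r does not exist" % name)
        return self._blobs[name]
    def blobs(self):
        return sorted(self._blobs)
    def reset(self):
        self._blobs.clear()

ws = ToyWorkspace()
ws.feed_blob("X", [1, 2, 3])
print(ws.blobs())          # ['X']
print(ws.fetch_blob("X"))  # [1, 2, 3]
ws.reset()
print(ws.blobs())          # []
```

Fetching a missing blob raises, mirroring the RuntimeError shown for "invincible_pink_unicorn" above.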
|
<ASSISTANT_TASK:>
Python Code:
# We'll also import a few standard python libraries
from matplotlib import pyplot
import numpy as np
import time
# These are the droids you are looking for.
from caffe2.python import core, workspace
from caffe2.proto import caffe2_pb2
# Let's show all plots inline.
%matplotlib inline
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
print("Workspace has blob 'X'? {}".format(workspace.HasBlob("X")))
X = np.random.randn(2, 3).astype(np.float32)
print("Generated X from numpy:\n{}".format(X))
workspace.FeedBlob("X", X)
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
print("Workspace has blob 'X'? {}".format(workspace.HasBlob("X")))
print("Fetched X:\n{}".format(workspace.FetchBlob("X")))
np.testing.assert_array_equal(X, workspace.FetchBlob("X"))
try:
workspace.FetchBlob("invincible_pink_unicorn")
except RuntimeError as err:
print(err)
print("Current workspace: {}".format(workspace.CurrentWorkspace()))
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
# Switch the workspace. The second argument "True" means creating
# the workspace if it is missing.
workspace.SwitchWorkspace("gutentag", True)
# Let's print the current workspace. Note that there is nothing in the
# workspace yet.
print("Current workspace: {}".format(workspace.CurrentWorkspace()))
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
workspace.SwitchWorkspace("default")
print("Current workspace: {}".format(workspace.CurrentWorkspace()))
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
workspace.ResetWorkspace()
# Create an operator.
op = core.CreateOperator(
"Relu", # The type of operator that we want to run
["X"], # A list of input blobs by their names
["Y"], # A list of output blobs by their names
)
# and we are done!
print("Type of the created op is: {}".format(type(op)))
print("Content:\n")
print(str(op))
workspace.FeedBlob("X", np.random.randn(2, 3).astype(np.float32))
workspace.RunOperatorOnce(op)
print("Current blobs in the workspace: {}\n".format(workspace.Blobs()))
print("X:\n{}\n".format(workspace.FetchBlob("X")))
print("Y:\n{}\n".format(workspace.FetchBlob("Y")))
print("Expected:\n{}\n".format(np.maximum(workspace.FetchBlob("X"), 0)))
op = core.CreateOperator(
"GaussianFill",
[], # GaussianFill does not need any parameters.
["Z"],
shape=[100, 100], # shape argument as a list of ints.
mean=1.0, # mean as a single float
std=1.0, # std as a single float
)
print("Content of op:\n")
print(str(op))
workspace.RunOperatorOnce(op)
temp = workspace.FetchBlob("Z")
pyplot.hist(temp.flatten(), bins=50)
pyplot.title("Distribution of Z")
net = core.Net("my_first_net")
print("Current network proto:\n\n{}".format(net.Proto()))
X = net.GaussianFill([], ["X"], mean=0.0, std=1.0, shape=[2, 3], run_once=0)
print("New network proto:\n\n{}".format(net.Proto()))
print("Type of X is: {}".format(type(X)))
print("The blob name is: {}".format(str(X)))
W = net.GaussianFill([], ["W"], mean=0.0, std=1.0, shape=[5, 3], run_once=0)
b = net.ConstantFill([], ["b"], shape=[5,], value=1.0, run_once=0)
Y = X.FC([W, b], ["Y"])
print("Current network proto:\n\n{}".format(net.Proto()))
from caffe2.python import net_drawer
from IPython import display
graph = net_drawer.GetPydotGraph(net, rankdir="LR")
display.Image(graph.create_png(), width=800)
workspace.ResetWorkspace()
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
workspace.RunNetOnce(net)
print("Blobs in the workspace after execution: {}".format(workspace.Blobs()))
# Let's dump the contents of the blobs
for name in workspace.Blobs():
print("{}:\n{}".format(name, workspace.FetchBlob(name)))
workspace.ResetWorkspace()
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
workspace.CreateNet(net)
workspace.RunNet(net.Proto().name)
print("Blobs in the workspace after execution: {}".format(workspace.Blobs()))
for name in workspace.Blobs():
print("{}:\n{}".format(name, workspace.FetchBlob(name)))
# It seems that %timeit magic does not work well with
# C++ extensions so we'll basically do for loops
start = time.time()
for i in range(1000):
workspace.RunNetOnce(net)
end = time.time()
print('Run time per RunNetOnce: {}'.format((end - start) / 1000))
start = time.time()
for i in range(1000):
workspace.RunNet(net.Proto().name)
end = time.time()
print('Run time per RunNet: {}'.format((end - start) / 1000))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating figures
Step2: In this case, x is now a NumPy array with 300 values ranging from 0 to 2$\pi$ (included). y is the sine (array of 300 values) and y2 is the sine of x squared (array of 300 values) at each of those 300 x values.
Step3: What this plot lacks is important information about what is in it - there are no axis labels, no legend telling us what is different about blue vs. green, and the size of the font on the axis labels is a bit small. We should probably try to improve the plot's readability so other people viewing it will understand what we are trying to convey.
Step4: See the Matplotlib line style demo to view the different types of line styles you can choose from. The source code is a little "slick", though, so it may not be obvious to you how the markers are set in that example.
Step5: Refining our plots
Step6: Matplotlib comes with a set of default settings that allow customizing all kinds of properties. You can control the defaults of almost every property with Matplotlib
Step7: That's a pretty nicely formatted figure, ready for publication or presentation. Lines are thick enough to see and distinguishable even in black and white, and the legend and labels are also large enough to read easily. Axis labels would also generally be wise to include, but in this case they are implicit from the legend and the values on the axes.
Step8: Other types of plots
Step9: Perhaps your error bars vary from point to point
Step10: Subplots and Logarithmic axes
Step11: Recall that an exponential function plotted in semilogy is a straight line!
Step12: Histograms
Step13: The number of bins was set at 50 in the second argument of plt.hist. The area is normalized to 1 (normed=1), so the y axis shows a probability density: each green bar's height times its bin width, summed over all bars, equals 1. The alpha parameter sets the transparency of the fill color.
Step14: Loading examples from the Matplotlib gallery
Step15: Common formatting tricks
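The semilogy remark in Step 11 can be checked numerically without drawing anything: for y = e^(-x), log10(y) = -x·log10(e), so successive differences of log10(y) at evenly spaced x are constant, which is exactly a straight line on semilog axes (sketch using only the math module):

```python
import math

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
log_y = [math.log10(math.exp(-x)) for x in xs]  # log10(e^-x) = -x*log10(e)
slope = -math.log10(math.e)

# Constant successive differences -> a straight line on semilogy axes
diffs = [log_y[i + 1] - log_y[i] for i in range(len(log_y) - 1)]
print(all(abs(d - slope) < 1e-12 for d in diffs))  # True
```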
|
<ASSISTANT_TASK:>
Python Code:
# In iPython or the iPython notebook, it's easiest to use the pylab magic, which
# imports matplotlib, numpy, and scipy.
# The inline flag means that images will be shown here in the notebooks, rather
# than in pop-up windows.
%pylab notebook
# If you are using 'regular' Python, however, you'll want the following. You'll
# need to also separately import numpy and any other packages that you might need.
import matplotlib.pyplot as plt
import numpy as np
#create the data to be plotted
x = np.linspace(0, 2*np.pi, 300)
y = np.sin(x)
y2 = np.sin(x**2)
#Now plot it
plt.plot(x, y)
plt.plot(x, y2)
plt.show()
plt.plot(x, y, linewidth=2);
plt.plot(x, y2, linewidth=2);
#decrease the number of points to illustrate the use of markers
x = np.linspace(0, 2*np.pi, 50)
y = np.sin(x)
y2 = np.cos(x)
plt.plot(x, y, 'o', markersize=5, color='r');
plt.plot(x, y2, '^', markersize=5, color='b');
#back to our original data
x = np.linspace(0, 2*np.pi, 300)
y = np.sin(x)
y2 = np.sin(x**2)
plt.plot(x, y)
plt.plot(x, y2)
#add a grid
plt.grid()
plt.savefig("Example.pdf")
plt.savefig("Example.png")
#create the data
x = np.linspace(-np.pi, np.pi, 56, endpoint=True)
c, s = np.cos(x), np.sin(x)
plt.plot(x, c)
plt.plot(x, s)
plt.show()
# Create a figure of size 8x6 points, 80 dots per inch
plt.figure(figsize=(8, 6), dpi=80)
# Create a new subplot from a grid of 1x1
plt.subplot(1, 1, 1)
# Plot cosine with a blue continuous line of width 1 (pixels)
plt.plot(x, c, color="blue", linewidth=1.0, linestyle="-",label="cosine")
# Plot sine with a green continuous line of width 1 (pixels)
plt.plot(x, s, color="green", linewidth=1.0, linestyle="-",label="sine")
# Set x limits
plt.xlim(-4.0, 4.0)
# Set x ticks
plt.xticks(np.linspace(-4, 4, 9, endpoint=True))
# Set y limits
plt.ylim(-1.0, 1.0)
# Set y ticks
plt.yticks(np.linspace(-1, 1, 5, endpoint=True))
# Add axis labels, setting a readable font size
plt.xlabel("x (rad)",fontsize=15)
plt.ylabel("sin,cos",fontsize=15)
# Add a legend
plt.legend(loc='upper left')
# Save figure using 72 dots per inch
plt.savefig("plot_example.png", dpi=72)
# Show result on screen
plt.show()
#your code here
#Simple constant error bars on each point
x = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
y = np.array([1.1, 1.9, 3.2, 4.0, 5.9])
plt.figure()
plt.errorbar(x, y, xerr=0.2, yerr=0.6, marker='o')
plt.title("Simplest errorbars, 0.2 in x, 0.6 in y");
# example data
x = np.arange(0.1, 4, 0.5)
y = np.exp(-x)
# example variable error bar values
yerr = 0.1 + 0.2*np.sqrt(x)
xerr = 0.1 + yerr
plt.figure()
plt.errorbar(x, y, xerr, yerr, marker='^')
plt.show()
x = np.linspace(0., 5.)
y = np.exp(-x)
#Make a figure with 4 subplots and axes side-by-side
fig, ax = plt.subplots(1,4, figsize=(10,6))
#Plot on each axis
ax[0].plot(x,y)
ax[1].loglog(x,y)
ax[2].semilogx(x,y)
ax[3].semilogy(x,y);
# Make some data to plot
x = np.arange(0, 100)
y = np.random.rand(100) # 100 random numbers
plt.scatter(x,y);
mu, sigma = 100, 15
x = mu + sigma * np.random.randn(10000)
# the histogram of the data
n, bins, patches = plt.hist(x, 50, normed=1, facecolor='g', alpha=0.75)
plt.xlabel('Smarts',fontsize=20)
plt.ylabel('Probability',fontsize=20)
plt.title('Histogram of IQ',fontsize=20)
# This will put a text fragment at the position given:
plt.text(45, .027, r'$\mu=100,\ \sigma=15$', fontsize=20)
plt.axis([40, 160, 0, 0.03])
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.grid();
#Your code here
# Try it here...
%loadpy http://matplotlib.org/mpl_examples/pylab_examples/contour_demo.py
# Make some data to plot
x = np.linspace(0, 2*np.pi)
y1 = np.sin(x)
y2 = np.cos(x)
# First, create an empty figure with 1 subplot
fig, ax1 = plt.subplots(1, 1)
# Add title and labels
ax1.set_title('My Plot',fontsize=20)
ax1.set_xlabel('x',fontsize=20)
ax1.set_ylabel('y',fontsize=20)
# Change axis limits
ax1.set_xlim([0,2])
ax1.set_ylim([-1, 2])
# Add the lines, changing their color, style, and marker
ax1.plot(x, y1, 'k--o', label='sin') # Black line, dashed, with 'o' markers
ax1.plot(x, y2, 'r-^', label='cos') # Red line, solid, with triangle-up markers
# Adjust tick marks and get rid of 'box'
ax1.tick_params(direction='out', top=False, right=False) # Turn ticks out
ax1.spines['top'].set_visible(False) # Get rid of top axis line
ax1.spines['right'].set_visible(False) # Get rid of bottom axis line
# Add subplot letter
ax1.annotate('(a)', (0.01, 0.96), size=12, xycoords='figure fraction')
# Add legend
ax1.legend()
# Finally, save the figure as a png file
fig.savefig('myfig-formatted.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Read and plot the data
Step2: This function will help us plot the model.
Step3: 2. Split our data into training and testing sets
Step4: 3. Fit a decision tree model
Step5: Now let's plot the model, and find the testing f1_score, to see how we did.
Step6:
Step7: There is some overfitting here. Beyond just looking at the chart, note the gap between the high training score (1.0) and the low testing score (0.7). Can we find better hyperparameters that make this model do better? Next we will use grid search.
Step8:
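What the grid search mentioned above automates can be sketched by hand: enumerate every hyperparameter combination, score each one, and keep the best. The scorer here is an invented stand-in (in the real cells it is cross-validated F1 via make_scorer):

```python
from itertools import product

# Hypothetical scorer: pretend validation score peaks at depth=4, leaf=2
def toy_score(max_depth, min_samples_leaf):
    return -abs(max_depth - 4) - abs(min_samples_leaf - 2)

param_grid = {"max_depth": [2, 4, 6, 8, 10],
              "min_samples_leaf": [2, 4, 6, 8, 10]}

best_params, best_score = None, float("-inf")
for depth, leaf in product(param_grid["max_depth"],
                           param_grid["min_samples_leaf"]):
    score = toy_score(depth, leaf)
    if score > best_score:
        best_params, best_score = {"max_depth": depth,
                                   "min_samples_leaf": leaf}, score

print(best_params)  # {'max_depth': 4, 'min_samples_leaf': 2}
```

GridSearchCV does exactly this exhaustive sweep, plus cross-validation of each candidate, and exposes the winner as best_estimator_.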
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
def load_pts(csv_name):
data = np.asarray(pd.read_csv(csv_name, header=None))
X = data[:,0:2]
y = data[:,2]
plt.scatter(X[np.argwhere(y==0).flatten(),0], X[np.argwhere(y==0).flatten(),1],s = 50, color = 'blue', edgecolor = 'k')
plt.scatter(X[np.argwhere(y==1).flatten(),0], X[np.argwhere(y==1).flatten(),1],s = 50, color = 'red', edgecolor = 'k')
plt.xlim(-2.05,2.05)
plt.ylim(-2.05,2.05)
plt.grid(False)
plt.tick_params(
axis='x',
which='both',
bottom='off',
top='off')
return X,y
X, y = load_pts('data.csv')
plt.show()
def plot_model(X, y, clf):
plt.scatter(X[np.argwhere(y==0).flatten(),0],X[np.argwhere(y==0).flatten(),1],s = 50, color = 'blue', edgecolor = 'k')
plt.scatter(X[np.argwhere(y==1).flatten(),0],X[np.argwhere(y==1).flatten(),1],s = 50, color = 'red', edgecolor = 'k')
plt.xlim(-2.05,2.05)
plt.ylim(-2.05,2.05)
plt.grid(False)
plt.tick_params(
axis='x',
which='both',
bottom='off',
top='off')
r = np.linspace(-2.1,2.1,300)
s,t = np.meshgrid(r,r)
s = np.reshape(s,(np.size(s),1))
t = np.reshape(t,(np.size(t),1))
h = np.concatenate((s,t),1)
z = clf.predict(h)
s.shape = (np.size(r),np.size(r))
t.shape = (np.size(r),np.size(r))
z.shape = (np.size(r),np.size(r))
plt.contourf(s,t,z,colors = ['blue','red'],alpha = 0.2,levels = range(-1,2))
if len(np.unique(z)) > 1:
plt.contour(s,t,z,colors = 'k', linewidths = 2)
plt.show()
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, make_scorer
#Fixing a random seed
import random
random.seed(42)
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
from sklearn.tree import DecisionTreeClassifier
# Define the model (with default hyperparameters)
clf = DecisionTreeClassifier(random_state=42)
# Fit the model
clf.fit(X_train, y_train)
# Make predictions using the unoptimized and model
train_predictions = clf.predict(X_train)
test_predictions = clf.predict(X_test)
plot_model(X, y, clf)
print('The Training F1 Score is', f1_score(train_predictions, y_train))
print('The Testing F1 Score is', f1_score(test_predictions, y_test))
The Training F1 Score is 1.0
The Testing F1 Score is 0.7
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV
clf = DecisionTreeClassifier(random_state=42)
# TODO: Create the parameters list you wish to tune.
parameters = {'max_depth':[2,4,6,8,10],'min_samples_leaf':[2,4,6,8,10], 'min_samples_split':[2,4,6,8,10]}
# TODO: Make an fbeta_score scoring object.
scorer = make_scorer(f1_score)
# TODO: Perform grid search on the classifier using 'scorer' as the scoring method.
grid_obj = GridSearchCV(clf, parameters, scoring=scorer)
# TODO: Fit the grid search object to the training data and find the optimal parameters.
grid_fit = grid_obj.fit(X_train, y_train)
# Get the estimator.
best_clf = grid_fit.best_estimator_
# Fit the new model.
best_clf.fit(X_train, y_train)
# Make predictions using the new model.
best_train_predictions = best_clf.predict(X_train)
best_test_predictions = best_clf.predict(X_test)
# Calculate the f1_score of the new model.
print('The training F1 Score is', f1_score(best_train_predictions, y_train))
print('The testing F1 Score is', f1_score(best_test_predictions, y_test))
# Plot the new model.
plot_model(X, y, best_clf)
# Let's also explore what parameters ended up being used in the new model.
best_clf
The training F1 Score is 0.814814814815
The testing F1 Score is 0.8
DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=4,
max_features=None, max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=2, min_samples_split=2,
min_weight_fraction_leaf=0.0, presort=False, random_state=42,
splitter='best')
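The grid search above relies on scikit-learn, but the underlying idea is just an exhaustive scan over every combination in the parameter grid. A minimal, library-free sketch of that idea (the scoring function here is a made-up stand-in for the cross-validated F1 scorer, chosen so the optimum is known):

```python
from itertools import product

param_grid = {"max_depth": [2, 4, 6, 8, 10],
              "min_samples_leaf": [2, 4, 6, 8, 10]}

def toy_score(params):
    # Stand-in for a cross-validated score: peaks at max_depth=4, min_samples_leaf=2.
    return -abs(params["max_depth"] - 4) - abs(params["min_samples_leaf"] - 2)

def manual_grid_search(grid, score):
    # Try every combination of parameter values and keep the best one.
    names = sorted(grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        s = score(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

best_params, best_score = manual_grid_search(param_grid, toy_score)
print(best_params)  # {'max_depth': 4, 'min_samples_leaf': 2}
```

GridSearchCV does the same scan, but scores each combination by cross-validation on the training data.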
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Single Day Analysis
Step2: Portfolio Construction
Step8: 2. Porfolio Construction
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from PyFin.api import *
from alphamind.api import *
from alphamind.strategy.strategy import Strategy, RunningSetting
from alphamind.portfolio.meanvariancebuilder import target_vol_builder
plt.style.use('ggplot')
ref_date = '2020-01-02'
engine = SqlEngine(os.environ['DB_URI'])
universe = Universe('hs300')
codes = engine.fetch_codes(ref_date, universe)
total_data = engine.fetch_data(ref_date, 'EMA5D', codes, 300, industry='sw', risk_model='short')
all_styles = risk_styles + industry_styles + ['COUNTRY']
risk_cov = total_data['risk_cov'][all_styles].values
factor = total_data['factor']
risk_exposure = factor[all_styles].values
special_risk = factor['srisk'].values
er = factor['EMA5D'].fillna(factor["EMA5D"].median()).values
bm = factor['weight'].values
lbound = np.zeros(len(er))
ubound = bm + 0.01
cons_mat = np.ones((len(er), 1))
risk_targets = (bm.sum(), bm.sum())
target_vol = 0.025
risk_model = dict(cov=None, factor_cov=risk_cov/10000, factor_loading=risk_exposure, idsync=special_risk ** 2 / 10000.)
status, p_er, p_weight = \
target_vol_builder(er, risk_model, bm, lbound, ubound, cons_mat, risk_targets, target_vol)
sec_cov = risk_exposure @ risk_cov @ risk_exposure.T / 10000. + np.diag(special_risk ** 2) / 10000
# check the result
print(f"total weight is {p_weight.sum(): .4f}")
print(f"portfolio activate weight forecasting vol is {np.sqrt((p_weight - bm) @ sec_cov @ (p_weight - bm)):.4f}")
print(f"portfolio er: {p_weight @ er:.4f} comparing with benchmark er: {bm @ er:.4f}")
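The `sec_cov` assembly above is the standard factor-model identity, Sigma = X F X^T + D. A tiny numpy-only sketch with made-up dimensions (3 assets, 2 factors; all numbers are illustrative, not market data) showing the same covariance construction and the portfolio-volatility check:

```python
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_factors = 3, 2
X = rng.normal(size=(n_assets, n_factors))   # factor loadings (risk_exposure)
F = np.diag([0.04, 0.01])                    # factor covariance (risk_cov)
idio = np.array([0.02, 0.03, 0.01])          # specific risk (srisk)

# Security covariance: systematic part plus idiosyncratic diagonal.
sec_cov = X @ F @ X.T + np.diag(idio ** 2)

w = np.array([0.5, 0.3, 0.2])                # portfolio weights
vol = np.sqrt(w @ sec_cov @ w)
print(f"portfolio vol: {vol:.4f}")
```

The notebook above does exactly this, with the factor covariance and exposures scaled from percent units (hence the divisions by 10000).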
Back test parameter settings
start_date = '2020-01-01'
end_date = '2020-02-21'
freq = '10b'
neutralized_risk = industry_styles
industry_name = 'sw'
industry_level = 1
risk_model = 'short'
batch = 0
horizon = map_freq(freq)
universe = Universe('hs300')
data_source = os.environ['DB_URI']
benchmark_code = 300
target_vol = 0.05
weights_bandwidth = 0.02
Factor Model
alpha_factors = {'f01': CSRank(LAST('EMA5D'))}
weights = dict(f01=1.)
alpha_model = ConstLinearModel(features=alpha_factors, weights=weights)
data_meta = DataMeta(freq=freq,
universe=universe,
batch=batch,
neutralized_risk=neutralized_risk,
risk_model='short',
pre_process=[winsorize_normal, standardize],
post_process=[standardize],
warm_start=0,
data_source=data_source)
Constraintes settings
constraint_risk = ['SIZE', 'SIZENL', 'BETA']
total_risk_names = constraint_risk + ['benchmark', 'total']
b_type = []
l_val = []
u_val = []
previous_pos = pd.DataFrame()
rets = []
turn_overs = []
leverags = []
for name in total_risk_names:
if name == 'benchmark':
b_type.append(BoundaryType.RELATIVE)
l_val.append(0.8)
u_val.append(1.0)
else:
b_type.append(BoundaryType.ABSOLUTE)
l_val.append(0.0)
u_val.append(0.0)
bounds = create_box_bounds(total_risk_names, b_type, l_val, u_val)
Running Settings
running_setting = RunningSetting(weights_bandwidth=weights_bandwidth,
rebalance_method='tv',
bounds=bounds,
target_vol=target_vol)
Strategy run
strategy = Strategy(alpha_model,
data_meta,
universe=universe,
start_date=start_date,
end_date=end_date,
freq=freq,
benchmark=benchmark_code)
strategy.prepare_backtest_data()
ret_df, positions = strategy.run(running_setting)
ret_df[['excess_return', 'turn_over']].cumsum().plot(figsize=(14, 7),
title='Fixed freq rebalanced with target vol \
at {2}: {0} with benchmark {1}'.format(freq, benchmark_code, target_vol),
secondary_y='turn_over')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step9: Tokenize Punctuation
Step11: Preprocess all the data and save it
Step13: Check Point
Step15: Build the Neural Network
Step18: Input
Step21: Build RNN Cell and Initialize
Step24: Word Embedding
Step27: Build RNN
Step30: Build the Neural Network
Step33: Batches
Step35: Neural Network Training
Step37: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Implement Generate Functions
Step49: Choose Word
Step51: Generate TV Script
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
vocab_to_int = {w : i for i, w in list(enumerate(set(text)))}
int_to_vocab = {i : w for w, i in vocab_to_int.items()}
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
return {"." : "||Period||",
"," : "||Comma||",
"\"" : "||Quotation_Mark||",
";" : "||Semicolon||",
"!" : "||Exclamation_Mark||",
"?" : "||Question_Mark||",
"(" : "||Left_Parentheses||",
")" : "||Right_Parentheses||",
"--" : "||Dash||",
"\n" : "||Return||"}
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
return tf.placeholder(tf.int32, [None, None], "input"), tf.placeholder(tf.int32, [None, None], "targets"), tf.placeholder(tf.float32, None, "lr")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
#drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=0.5)
cell = tf.contrib.rnn.MultiRNNCell([lstm])
initial_state = cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, "initial_state")
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
return tf.nn.embedding_lookup(embedding, input_data)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, "final_state")
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
embeddings = get_embed(input_data, vocab_size, embed_dim)
rnn, final_state = build_rnn(cell, embeddings)
logits = tf.contrib.layers.fully_connected(rnn, vocab_size, activation_fn=None,
weights_initializer = tf.truncated_normal_initializer(mean=0, stddev=0.01),
biases_initializer = tf.zeros_initializer())
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
batch_total = batch_size*seq_length
num_batches = len(int_text)//batch_total
x, y = np.array(int_text[:num_batches*batch_total]), np.array(int_text[1:num_batches*batch_total] + [int_text[0]])
x = np.split(x.reshape(batch_size, -1), num_batches, 1)
y = np.split(y.reshape(batch_size, -1), num_batches, 1)
return np.array(list(zip(x, y)))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
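To sanity-check the reshaping logic in `get_batches`, here is a stand-alone run of the same code on a toy integer sequence (independent of the notebook's test harness):

```python
import numpy as np

def get_batches_demo(int_text, batch_size, seq_length):
    # Same logic as get_batches above, reproduced here so the demo is self-contained.
    batch_total = batch_size * seq_length
    num_batches = len(int_text) // batch_total
    x = np.array(int_text[:num_batches * batch_total])
    y = np.array(int_text[1:num_batches * batch_total] + [int_text[0]])
    x = np.split(x.reshape(batch_size, -1), num_batches, 1)
    y = np.split(y.reshape(batch_size, -1), num_batches, 1)
    return np.array(list(zip(x, y)))

batches = get_batches_demo(list(range(12)), batch_size=2, seq_length=3)
# 12 ints with 2*3 per batch -> 2 batches, each an (inputs, targets) pair of shape (2, 3)
print(batches.shape)   # (2, 2, 2, 3)
print(batches[0][0])   # inputs of first batch:  [[0 1 2], [6 7 8]]
print(batches[0][1])   # targets shifted by one: [[1 2 3], [7 8 9]]
```

Note how the target of the very last input wraps around to `int_text[0]`.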
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 20
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 20
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
input_tensor = loaded_graph.get_tensor_by_name("input:0")
initial_state_tensor = loaded_graph.get_tensor_by_name("initial_state:0")
final_state_tensor = loaded_graph.get_tensor_by_name("final_state:0")
probs_tensor = loaded_graph.get_tensor_by_name("probs:0")
return input_tensor, initial_state_tensor, final_state_tensor, probs_tensor
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
import operator
max_index, _ = max(enumerate(probabilities), key=operator.itemgetter(1))
return int_to_vocab[max_index]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
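The `pick_word` above is greedy (it always takes the argmax), which tends to produce repetitive scripts. A common alternative, sketched here with a made-up three-word vocabulary, is to sample the next word from the predicted distribution instead:

```python
import numpy as np

def pick_word_sampled(probabilities, int_to_vocab, seed=None):
    # Draw a word id proportionally to its predicted probability.
    rng = np.random.default_rng(seed)
    word_id = rng.choice(len(probabilities), p=probabilities)
    return int_to_vocab[word_id]

int_to_vocab = {0: "moe_szyslak", 1: "homer_simpson", 2: "beer"}
word_probs = np.array([0.1, 0.2, 0.7])
print(pick_word_sampled(word_probs, int_to_vocab, seed=0))
```

With sampling, high-probability words still dominate, but the generated text varies from run to run.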
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction
Step2: The solution I came up with
Step3: Let's compute the union of two consecutive intervals, if they are not disjoint
Step4: And now we are reading to compress the list of intervals (in linear time)
Step5: Example
Step6: So now we can write the requested function
Step7: Other examples
|
<ASSISTANT_TASK:>
Python Code:
import sys
print(sys.version)
from typing import List, Tuple
Position = int
Interval = Tuple[Position, Position]
import re
def bad_events(pattern: str, string: str) -> List[Interval]:
# m.span(1) = (m.start(1), m.end(1))
return [m.span(1) for m in re.finditer(f"(?=({pattern}))", string)]
pat = "aca"
strng = "acacavcacabacacbbacacazdbacaca"
# Do you know if there is a regex trick to obtain
# [(0, 5), (7, 10), (11, 14), (17, 22), (25, 30)]
# instead of
bad_events(pat, strng)
# [(0, 3), (2, 5), (7, 10), (11, 14), (17, 20), (19, 22), (25, 28), (27, 30)]
# ?
def are_not_disjoint(interval1: Interval, interval2: Interval) -> bool:
x1, y1 = interval1
    assert x1 <= y1, f"Error: interval = {interval1} is not a valid interval."
x2, y2 = interval2
    assert x2 <= y2, f"Error: interval = {interval2} is not a valid interval."
if x1 <= x2 <= y1 <= y2: # interval1 finishes in interval2
return True
elif x2 <= x1 <= y2 <= y1: # interval2 finishes in interval1
return True
elif x1 <= x2 <= y2 <= y1: # interval2 is included in interval1
return True
elif x2 <= x1 <= y1 <= y2: # interval1 is included in interval2
return True
return False
assert are_not_disjoint((0, 3), (2, 5)) # True
assert not are_not_disjoint((0, 5), (7, 10)) # False
def union_intervals(interval1: Interval, interval2: Interval) -> Interval:
    x1, y1 = interval1
    assert x1 <= y1, f"Error: interval = {interval1} is not a valid interval."
    x2, y2 = interval2
    assert x2 <= y2, f"Error: interval = {interval2} is not a valid interval."
return (min(x1, x2), max(y1, y2))
union_intervals((0, 3), (2, 5))
def compress_intervals(intervals: List[Interval]) -> List[Interval]:
intervals_after_compression: List[Interval] = []
n = len(intervals)
assert n > 0
current_interval = intervals[0] # eg (0, 3)
i = 1
# as long as we can read another interval in the list
while i < n: # ==> O(n) as the inside of the loop is O(1)
next_interval = intervals[i] # eg (2, 5)
if are_not_disjoint(current_interval, next_interval):
# eg (0, 3) and (2, 5) -> (0, 5)
current_interval = union_intervals(current_interval, next_interval)
else:
# eg (0, 5) and (7, 10) -> (0, 5) is added,
intervals_after_compression.append(current_interval)
# and current_interval = next_interval = (7, 10)
current_interval = next_interval
i += 1
# we add the last current interval if it was not yet added
if current_interval not in intervals_after_compression:
intervals_after_compression.append(current_interval)
return intervals_after_compression
# Do you know if there is a regex trick to obtain
# [(0, 5), (7, 10), (11, 14), (17, 22), (25, 30)]
# instead of
intervals = bad_events(pat, strng)
print(intervals)
# [(0, 3), (2, 5), (7, 10), (11, 14), (17, 20), (19, 22), (25, 28), (27, 30)]
# ?
compress_intervals(intervals)
def bad_events_compressed(pat: str, strng: str) -> List[Interval]:
intervals1 = bad_events(pat, strng)
intervals2 = compress_intervals(intervals1)
return intervals2
def test(pat: str, strng: str) -> None:
print(f"For pattern {pat} and string {strng}, the bad events uncompressed are:\n{bad_events(pat, strng)}\nand the bad events compressed are:\n{bad_events_compressed(pat, strng)}")
test(pat, strng)
test("acab", "acabacabacabacacavcacabacacbbacacazdbacacaacacavcacabacacbbacacazdbacacaacabacab")
test("merci", "mercimerciderienmercimerki")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load some double-well data
Step2: We'll discretize the space using 10 states
Step3: Now lets try using 50 states
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from mdtraj.utils import timing
from msmbuilder.example_datasets import load_doublewell
from msmbuilder.cluster import NDGrid
from msmbuilder.msm import BayesianMarkovStateModel, MarkovStateModel
trjs = load_doublewell(random_state=0)['trajectories']
plt.hist(np.concatenate(trjs), bins=50, log=True)
plt.ylabel('Frequency')
plt.show()
clusterer = NDGrid(n_bins_per_feature=10)
mle_msm = MarkovStateModel(lag_time=100)
b_msm = BayesianMarkovStateModel(lag_time=100, n_samples=10000, n_steps=1000)
states = clusterer.fit_transform(trjs)
with timing('running mcmc'):
b_msm.fit(states)
mle_msm.fit(states)
plt.subplot(2, 1, 1)
plt.plot(b_msm.all_transmats_[:, 0, 0])
plt.axhline(mle_msm.transmat_[0, 0], c='k')
plt.ylabel('t_00')
plt.subplot(2, 1, 2)
plt.ylabel('t_23')
plt.xlabel('MCMC Iteration')
plt.plot(b_msm.all_transmats_[:, 2, 3])
plt.axhline(mle_msm.transmat_[2, 3], c='k')
plt.show()
plt.plot(b_msm.all_timescales_[:, 0], label='MCMC')
plt.axhline(mle_msm.timescales_[0], c='k', label='MLE')
plt.legend(loc='best')
plt.ylabel('Longest timescale')
plt.xlabel('MCMC iteration')
plt.show()
clusterer = NDGrid(n_bins_per_feature=50)
mle_msm = MarkovStateModel(lag_time=100)
b_msm = BayesianMarkovStateModel(lag_time=100, n_samples=1000, n_steps=100000)
states = clusterer.fit_transform(trjs)
with timing('running mcmc (50 states)'):
b_msm.fit(states)
mle_msm.fit(states)
plt.plot(b_msm.all_timescales_[:, 0], label='MCMC')
plt.axhline(mle_msm.timescales_[0], c='k', label='MLE')
plt.legend(loc='best')
plt.ylabel('Longest timescale')
plt.xlabel('MCMC iteration')
plt.plot(b_msm.all_transmats_[:, 0, 0], label='MCMC')
plt.axhline(mle_msm.transmat_[0, 0], c='k', label='MLE')
plt.legend(loc='best')
plt.ylabel('t_00')
plt.xlabel('MCMC iteration')
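Under the hood, MLE MSM estimation starts from a lagged transition-count matrix. A numpy-only sketch (not the msmbuilder implementation, and without the reversibility constraint msmbuilder applies) of counting transitions in a discrete state trajectory:

```python
import numpy as np

def count_matrix(states, n_states, lag=1):
    # C[i, j] = number of observed transitions i -> j separated by `lag` steps.
    C = np.zeros((n_states, n_states))
    for i, j in zip(states[:-lag], states[lag:]):
        C[i, j] += 1
    return C

traj = [0, 0, 1, 1, 0, 1, 1, 1, 0]
C = count_matrix(traj, n_states=2, lag=1)
# Row-normalize to get a simple (non-reversible) transition matrix estimate.
T = C / C.sum(axis=1, keepdims=True)
print(C)   # [[1. 2.] [2. 3.]]
print(T)
```

The Bayesian estimator used above samples transition matrices consistent with these counts instead of committing to a single point estimate.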
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a new figure. The dimensions here give a good aspect ratio
|
<ASSISTANT_TASK:>
Python Code:
from datetime import datetime
import matplotlib.pyplot as plt
import metpy.calc as mpcalc
from metpy.io import get_upper_air_data
from metpy.io.upperair import UseSampleData
from metpy.plots import SkewT
with UseSampleData(): # Only needed to use our local sample data
# Download and parse the data
dataset = get_upper_air_data(datetime(1999, 5, 4, 0), 'OUN')
p = dataset.variables['pressure'][:]
T = dataset.variables['temperature'][:]
Td = dataset.variables['dewpoint'][:]
u = dataset.variables['u_wind'][:]
v = dataset.variables['v_wind'][:]
fig = plt.figure(figsize=(9, 9))
skew = SkewT(fig, rotation=45)
# Plot the data using normal plotting functions, in this case using
# log scaling in Y, as dictated by the typical meteorological plot
skew.plot(p, T, 'r')
skew.plot(p, Td, 'g')
skew.plot_barbs(p, u, v)
skew.ax.set_ylim(1000, 100)
skew.ax.set_xlim(-40, 60)
# Calculate LCL height and plot as black dot
lcl_pressure, lcl_temperature = mpcalc.lcl(p[0], T[0], Td[0])
skew.plot(lcl_pressure, lcl_temperature, 'ko', markerfacecolor='black')
# Calculate full parcel profile and add to plot as black line
prof = mpcalc.parcel_profile(p, T[0], Td[0]).to('degC')
skew.plot(p, prof, 'k', linewidth=2)
# Example of coloring area between profiles
greater = T >= prof
skew.ax.fill_betweenx(p, T, prof, where=greater, facecolor='blue', alpha=0.4)
skew.ax.fill_betweenx(p, T, prof, where=~greater, facecolor='red', alpha=0.4)
# An example of a slanted line at constant T -- in this case the 0
# isotherm
skew.ax.axvline(0, color='c', linestyle='--', linewidth=2)
# Add the relevant special lines
skew.plot_dry_adiabats()
skew.plot_moist_adiabats()
skew.plot_mixing_lines()
# Show the plot
plt.show()
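The dry adiabats drawn by `skew.plot_dry_adiabats()` follow Poisson's equation, T = theta * (p / p0)^(Rd/cp). A plain-numpy sketch of one such curve (this is not MetPy code; the constants are the usual dry-air values):

```python
import numpy as np

Rd_over_cp = 287.0 / 1004.0   # dry-air gas constant over specific heat, ~0.286
p0 = 1000.0                   # reference pressure in hPa

def dry_adiabat(theta_k, pressures_hpa):
    # Temperature (K) along a dry adiabat of potential temperature theta_k.
    return theta_k * (np.asarray(pressures_hpa) / p0) ** Rd_over_cp

p_levels = np.linspace(1000.0, 100.0, 10)
temps = dry_adiabat(300.0, p_levels)
print(temps[0])   # 300.0 at the reference level
```

A rising parcel below its LCL cools along exactly such a curve, which is why the parcel profile hugs a dry adiabat near the surface.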
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: XML example
Step2: XML exercise
Step3: 10 countries with the lowest infant mortality rates
Step4: 10 cities with the largest population
Step5: 10 ethnic groups with the largest overall populations (sum of best/latest estimates over all countries)
|
<ASSISTANT_TASK:>
Python Code:
from xml.etree import ElementTree as ET
document_tree = ET.parse( './data/mondial_database_less.xml' )
# print names of all countries
for child in document_tree.getroot():
print (child.find('name').text)
# print names of all countries and their cities
for element in document_tree.iterfind('country'):
print ('* ' + element.find('name').text + ':', end=''),
capitals_string = ''
for subelement in element.getiterator('city'):
capitals_string += subelement.find('name').text + ', '
print (capitals_string[:-2])
document = ET.parse( './data/mondial_database.xml' )
# print child and attributes
#for child in document.getroot():
# print (child.tag, child.attrib)
import pandas as pd
# Create a list of country and their Infant Mortality Rate
country_imr=[]
for country in document.getroot().findall('country'):
name = country.find('name').text
infant_mortality_rate = country.find('infant_mortality')
if infant_mortality_rate is not None:
infant_mortality_rate=infant_mortality_rate.text
else :
infant_mortality_rate = -1
country_imr.append((name, (float)(infant_mortality_rate)))
df = pd.DataFrame(country_imr, columns=['Country', 'Infant_Mortality_Rate'])
df_unknown_removed = df[df.Infant_Mortality_Rate != -1]
df_unknown_removed.set_index('Infant_Mortality_Rate').sort().head(10)
city_population=[]
for country in document.iterfind('country'):
for state in country.iterfind('province'):
for city in state.iterfind('city'):
try:
city_population.append((city.find('name').text, float(city.find('population').text)))
except:
next
for city in country.iterfind('city'):
try:
city_population.append((city.find('name').text, float(city.find('population').text)))
except:
next
df = pd.DataFrame(city_population, columns=['City', 'Population'])
#df.info()
df.sort_index(by='Population', ascending=False).head(10)
ethnic_population={}
country_population={}
for country in document.iterfind('country'):
try:
country_population[country.find('name').text]= float(country.find('population').text)
except:
next
    # 'province' or 'state' always evaluates to 'province'; search both tags explicitly
    for state in list(country.iterfind('province')) + list(country.iterfind('state')):
try:
country_population[country.find('name').text] += float(state.find('population').text)
except:
next
for city in state.iterfind('city'):
try:
country_population[country.find('name').text] += float(city.find('population').text)
except:
next
for country in document.iterfind('country'):
for ethnicgroup in country.iterfind('ethnicgroup'):
try:
if ethnicgroup.text in ethnic_population:
ethnic_population[ethnicgroup.text] += country_population[country.find('name').text]*float(ethnicgroup.get('percentage'))/100
else:
ethnic_population[ethnicgroup.text] = country_population[country.find('name').text]*float(ethnicgroup.get('percentage'))/100
except:
next
pd.DataFrame(sorted(ethnic_population.items(), key=lambda x:x[1], reverse=True)[:10], columns=['Ethnic_Groups', 'Population'])
rivers_list=[]
rivers_df = pd.DataFrame()
for rivers in document.iterfind('river'):
try:
rivers_list.append({'name':rivers.find('name').text, 'length':rivers.find('length').text, 'country':rivers.find('located').attrib['country']})
except:
next
rivers_list
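The rivers query above collects the lengths as strings; to rank rivers they need to be cast to float. A self-contained miniature of the same pattern, end to end, using inline XML (the values here are invented, not the mondial data):

```python
from xml.etree import ElementTree as ET

xml = """
<mondial>
  <river><name>Amazonas</name><length>6448</length></river>
  <river><name>Donau</name><length>2845</length></river>
  <river><name>Thames</name><length>346</length></river>
</mondial>
"""
root = ET.fromstring(xml)
rivers = [(r.find("name").text, float(r.find("length").text))
          for r in root.iterfind("river")]
longest = max(rivers, key=lambda r: r[1])
print(longest)  # ('Amazonas', 6448.0)
```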
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now, we'll tackle the "function in time" part of this model by learning how to make and use arrays to represent time.
Step2: We can assign time intervals to variables
Step3: Your turn - use np.linspace() to make a time interval for our model
Step4: We can set and change parameter values to see how the model behaves
Step5: PREDICT BEFORE YOU PLOT
Step6: Explore the model by visualizing it
Step7: Explore the model by changing parameter values
Step8: // Note each change you made and what you saw
Step9: You Can View a Pandas DataFrame by Executing It
Step10: You can view the first/last few rows of data with .head() and .tail() functions
Step11: Use data["column_name"] to View or Refer to a Column of Data
Step12: Pandas DataFrame Columns Behave Like Numpy Arrays
Step13: Plotting the Experimental Data
Step14: Fitting Our Model To Experimental Data
Step15: THINK about Tuning the Model Parameters to Fit the Model to Data
Step16: Please Give Feedback on this assignment
|
<ASSISTANT_TASK:>
Python Code:
# Make plots inline
%matplotlib inline
# Make inline plots vector graphics instead of raster graphics
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('pdf', 'svg')
# import modules for plotting and data analysis
import matplotlib.pyplot as plt
import numpy as np
import pandas
import numpy as np
np.linspace(
0, # where the interval starts
1, # where the interval ends
11 # How many steps (elements) we want in the final array
)
# put your code here!
# Create your time array here
# Make B equal to zero and set some non-zero values for the other parameters
# DON'T PLOT YET -- you should make a prediction below!
# Write and evaluate viral_load = ...
# Verify that both arrays are the same length.
# You can use .size, as in time.size or viral_load.size
# Then, try plotting viral_load vs. time
# Make plots inline
%matplotlib inline
# Make inline plots vector graphics instead of raster graphics
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('pdf', 'svg')
import matplotlib.pyplot as plt
# Put your plot code here
# Change the values and make a new plot.
# Loading the data using pandas
hiv_data = pandas.read_csv(
"https://raw.githubusercontent.com/ComputationalModeling/IPML-Data/master/01HIVseries/HIVseries.csv",
header = None,
names = ["time_in_days", "viral_load"]
)
# the data type of hiv_data is "dataframe"
type(hiv_data)
# Execute this cell (Shift + Enter) to see the data
hiv_data
# If you have a pandas dataframe, you can call `head()` on it like this:
hiv_data.head()
# To see the last few rows, call `tail()` on it
hiv_data.tail()
# How to view an individual column
hiv_data["time_in_days"] # or
hiv_data["viral_load"]
# Here's the viral load column again
hiv_data["viral_load"]
# And we can calulate its mean, max, size, and other properties
# Just like we would on a numpy array
hiv_data["viral_load"].mean()
hiv_data["viral_load"].max()
hiv_data["viral_load"].size
# Plot viral load vs. time
# Plot the data and model together
# Do whatever work you need here to determine
# the parameter values you think work best.
# REMEMBER: You can assign each new model to a new variable,
# like `model_01`, `model_02`, ...
from IPython.display import IFrame
IFrame('http://goo.gl/forms/v8oZUSLDaa', width=800, height=1200)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2.1. Species
Step2: There are some naming conventions for the name of Species.
Step3: The arguments to set_attribute are the name of an attribute and its value. The attribute name is given as a string, and its value is a string, integer, float, or boolean. get_attribute returns the value that was set. For an integer or float attribute, get_attribute returns a quantity object, which is a pair of a value (magnitude) and a unit (units).
Step4: The equality between Species is just evaluated based on their serial
Step5: A Species consists of one or more UnitSpecies
Step6: A Species can be reproduced from its serial. In the serial, all UnitSpecies are joined with the separator, dot .. The order of UnitSpecies affects the Species comparison.
Step7: A UnitSpecies can have sites. A site consists of a name, a state, and a bond, and sites are sorted automatically in a UnitSpecies. The name must be unique within a UnitSpecies. All the values have to be strings. Do not include parentheses, dots, or blanks, and do not start with a number (except for bonds).
Step8: A UnitSpecies can also be reproduced from its serial. Note the site ordering: a site with a state must be placed after sites with no state specification.
Step9: Of course, a site of UnitSpecies is available even in Species' serial.
Step10: The information (UnitSpecies and its site) is used for rule-based modeling. The way of rule-based modeling in E-Cell4 is explained in 7. Introduction of Rule-based Modeling.
Step11: Here is a binding reaction from A and B to C. In this reaction definition, you don't need to set attributes to Species. The above series of operations can be written in one line using create_binding_reaction_rule(Species("A"), Species("B"), Species("C"), 1.0).
Step12: You can also provide components to the constructor
Step13: Basically, a ReactionRule represents a mass-action reaction with rate k. The ode solver also supports rate laws, though this feature is still under development. ode.ODERatelaw is explained in 6. How to Solve ODEs with Rate Law Functions local ipynb readthedocs.
Step14: You can put the Species and ReactionRule with add_species_attribute and add_reaction_rule.
Step15: Now we have a simple model with the binding and unbinding reactions. You can use species_attributes and reaction_rules to check the Model.
Step16: The Species attributes are required for the spatial Model, but not required for the nonspatial Model (i.e. gillespie or ode). The attribute pushed first has higher priority than one pushed later. You can also attribute a Species based on the attributes in a Model.
Step17: For your information, all of the functions related to Species, ReactionRule and NetworkModel above are also available in C++ in the same way.
Step18: 2.4. Python Utilities to Build a Model
Step19: For reversible reactions, == is available. In the with statement, undeclared variables are automatically assumed to be a Species. Any Python variables, functions and statements are available even in the with block.
Step20: Meanwhile, once a variable has been declared, even outside the block, you cannot use its name as a Species, as shown below
Step21: This is because A + B == C is evaluated as 10 + B == C due to A = 10.
Step22: The following Species names do not necessarily need to match names used elsewhere. The model above describes $[A]'=2-1\times[A]$
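For reference, the exponential curve the simulation result is compared against later (2.0 * (1 - exp(-t))) is the exact solution of this ODE, assuming the initial condition [A](0) = 0 used in the simulation:

```latex
\frac{d[A]}{dt} = 2 - [A], \qquad [A](0) = 0
\quad\Longrightarrow\quad
[A](t) = 2\left(1 - e^{-t}\right)
```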
Step23: A chain of reactions can be described in one line. To split a line into two or more physical lines, wrap lines in a parenthesis
Step24: The method uses global variables in ecell4.util.decorator (e.g. REACTION_RULES) to cache objects created in the with statement
Step25: Python decorator functions are also available. Decorator functions improve the modularity of the Model.
Step26: In contrast to the with statement, do not add parentheses after the decorator here. The functions decorated by reaction_rules and species_attributes return a list of ReactionRules and Species respectively. The list can be registered to Model at once by using add_reaction_rules and add_species_attributes.
Step27: This approach is more modular and reusable than using the with statement directly.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from ecell4.prelude import *
sp1 = Species("A")
print(sp1.serial())
sp1.set_attribute("radius", 0.005)
sp1.set_attribute("D", 1)
sp1.set_attribute("location", "cytoplasm")
print(sp1.has_attribute("radius"))
print(sp1.get_attribute("radius"))
print(sp1.get_attribute("radius").magnitude)
print(sp1.get_attribute("radius").units)
print(sp1.has_attribute("location"))
print(sp1.get_attribute("location"))
sp1.remove_attribute("radius")
print(sp1.has_attribute("radius"))
sp1 = Species("A", 0.005, 1, "cytoplasm") # serial, radius, D, location
print(Species("A") == Species("B"))
print(Species("A") == Species("A"))
sp1 = Species()
usp1 = UnitSpecies("C")
print(usp1.serial())
sp1.add_unit(usp1)
sp1.add_unit(UnitSpecies("A"))
sp1.add_unit(UnitSpecies("B"))
print(sp1.serial(), len(sp1.units()))
sp1 = Species("C.A.B")
print(sp1.serial())
print(Species("A.B.C") == Species("C.A.B"))
print(Species("A.B.C") == Species("A.B.C"))
usp1 = UnitSpecies("A")
usp1.add_site("us", "u", "")
usp1.add_site("ps", "p", "_")
usp1.add_site("bs", "", "_")
print(usp1.serial())
usp1 = UnitSpecies()
usp1.deserialize("A(bs^_, us=u, ps=p^_)")
print(usp1.serial())
sp1 = Species("A(bs^1, ps=u).A(bs, ps=p^1)")
print(sp1.serial())
print(len(sp1.units()))
rr1 = ReactionRule()
rr1.add_reactant(Species("A"))
rr1.add_reactant(Species("B"))
rr1.add_product(Species("C"))
rr1.set_k(1.0)
rr1 = create_binding_reaction_rule(Species("A"), Species("B"), Species("C"), 1.0)
print(rr1.as_string())
rr1 = ReactionRule([Species("A"), Species("B")], [Species("C")], 1.0)
print(rr1.as_string())
sp1 = Species("A", 0.005, 1)
sp2 = Species("B", 0.005, 1)
sp3 = Species("C", 0.01, 0.5)
rr1 = create_binding_reaction_rule(Species("A"), Species("B"), Species("C"), 0.01)
rr2 = create_unbinding_reaction_rule(Species("C"), Species("A"), Species("B"), 0.3)
m1 = NetworkModel()
m1.add_species_attribute(sp1)
m1.add_species_attribute(sp2)
m1.add_species_attribute(sp3)
m1.add_reaction_rule(rr1)
m1.add_reaction_rule(rr2)
print([sp.serial() for sp in m1.species_attributes()])
print([rr.as_string() for rr in m1.reaction_rules()])
sp1 = Species("A")
print(sp1.has_attribute("radius"))
sp2 = m1.apply_species_attributes(sp1)
print(sp2.has_attribute("radius"))
print(sp2.get_attribute("radius").magnitude)
run_simulation(10.0, model=m1, y0={'C': 60})
with species_attributes():
A | B | {'radius': 0.005, 'D': 1}
C | {'radius': 0.01, 'D': 0.5}
with reaction_rules():
A + B == C | (0.01, 0.3)
m1 = get_model()
from math import log
ka, kd, kf = 0.01, 0.3, 0.1
tau = 10.0
with reaction_rules():
E0 + S == ES | (ka, kd)
if tau > 0:
ES > E1 + P | kf
E1 > E0 | log(2) / tau
else:
ES > E0 + P | kf
m1 = get_model()
del ka, kd, kf, tau
A = 10
try:
with reaction_rules():
A + B == C | (0.01, 0.3)
except Exception as e:
print(repr(e))
del A
with reaction_rules():
~A > A | 2.0 # equivalent to `create_synthesis_reaction_rule`
A > ~A | 1.0 # equivalent to `create_degradation_reaction_rule`
m1 = get_model()
print([rr.as_string() for rr in m1.reaction_rules()])
from math import exp
ret = run_simulation(10.0, model=m1)
ret.plot('-', lambda t: 2.0 * (1 - exp(-t)), '--')
with reaction_rules():
(E + S == ES | (0.5, 1.0)
> E + P | 1.5)
m1 = get_model()
print([rr.as_string() for rr in m1.reaction_rules()])
import ecell4.util.decorator
with reaction_rules():
A + B == C | (0.01, 0.3)
print(ecell4.util.decorator.REACTION_RULES) #XXX: Only for debugging
get_model()
print(ecell4.util.decorator.REACTION_RULES) #XXX: Only for debugging
@species_attributes
def attrgen1(radius, D):
A | B | {'radius': radius, 'D': D}
C | {'radius': radius * 2, 'D': D * 0.5}
@reaction_rules
def rrgen1(kon, koff):
A + B == C | (kon, koff)
attrs1 = attrgen1(0.005, 1)
rrs1 = rrgen1(0.01, 0.3)
print(attrs1)
print(rrs1)
m1 = NetworkModel()
m1.add_species_attributes(attrs1)
m1.add_reaction_rules(rrs1)
print(m1.num_reaction_rules())
@reaction_rules
def michaelis_menten(S, P, E, ES, kf, kr, kcat):
S + E == ES | (kf, kr)
ES > P + E | kcat
with reaction_rules():
michaelis_menten(K, Kp, KK, KK_K, 0.01, 0.3, 0.15)
michaelis_menten(Kp, K, PP, PP_Kp, 0.01, 0.3, 0.15)
m1 = get_model()
show(m1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Construct LOW core configuration
Step2: We create the visibility. This just makes the uvw, time, antenna1, antenna2, weight columns in a table
Step3: Plot the synthesized uv coverage.
Step4: Read the venerable test image, constructing an image
Step5: Make the dirty image and point spread function
Step6: Deconvolve using clean
Step7: Predict the visibility of the model
Step8: Now we will plot the original visibility and the residual visibility.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
import sys
sys.path.append(os.path.join('..', '..'))
from data_models.parameters import arl_path
results_dir = arl_path('test_results')
from matplotlib import pylab
pylab.rcParams['figure.figsize'] = (8.0, 8.0)
pylab.rcParams['image.cmap'] = 'rainbow'
import numpy
from astropy.coordinates import SkyCoord
from astropy import units as u
from astropy.wcs.utils import pixel_to_skycoord
from matplotlib import pyplot as plt
from processing_components.image.iterators import image_raster_iter
from wrappers.serial.visibility.base import create_visibility
from wrappers.serial.skycomponent.operations import create_skycomponent
from wrappers.serial.image.operations import show_image, export_image_to_fits
from wrappers.serial.image.deconvolution import deconvolve_cube, restore_cube
from wrappers.serial.visibility.iterators import vis_timeslice_iter
from wrappers.serial.simulation.configurations import create_named_configuration
from wrappers.serial.simulation.testing_support import create_test_image
from wrappers.serial.imaging.base import create_image_from_visibility
from wrappers.serial.imaging.base import advise_wide_field
from workflows.serial.imaging.imaging_serial import invert_list_serial_workflow, predict_list_serial_workflow
from data_models.polarisation import PolarisationFrame
import logging
log = logging.getLogger()
log.setLevel(logging.DEBUG)
log.addHandler(logging.StreamHandler(sys.stdout))
mpl_logger = logging.getLogger("matplotlib")
mpl_logger.setLevel(logging.WARNING)
pylab.rcParams['figure.figsize'] = (12.0, 12.0)
pylab.rcParams['image.cmap'] = 'rainbow'
lowr3 = create_named_configuration('LOWBD2', rmax=750.0)
print(lowr3.xyz)
times = numpy.zeros([1])
frequency = numpy.array([1e8])
channel_bandwidth = numpy.array([1e6])
phasecentre = SkyCoord(ra=+15.0 * u.deg, dec=-45.0 * u.deg, frame='icrs', equinox='J2000')
vt = create_visibility(lowr3, times, frequency, channel_bandwidth=channel_bandwidth,
weight=1.0, phasecentre=phasecentre, polarisation_frame=PolarisationFrame('stokesI'))
advice = advise_wide_field(vt, guard_band_image=3.0, delA=0.1, facets=1, wprojection_planes=1,
oversampling_synthesised_beam=4.0)
cellsize = advice['cellsize']
plt.clf()
plt.plot(vt.data['uvw'][:,0], vt.data['uvw'][:,1], '.', color='b')
plt.plot(-vt.data['uvw'][:,0], -vt.data['uvw'][:,1], '.', color='b')
plt.xlim([-400.0, 400.0])
plt.ylim([-400.0, 400.0])
plt.show()
m31image = create_test_image(frequency=frequency, cellsize=cellsize)
nchan, npol, ny, nx = m31image.data.shape
m31image.wcs.wcs.crval[0] = vt.phasecentre.ra.deg
m31image.wcs.wcs.crval[1] = vt.phasecentre.dec.deg
m31image.wcs.wcs.crpix[0] = float(nx // 2)
m31image.wcs.wcs.crpix[1] = float(ny // 2)
fig=show_image(m31image)
vt = predict_list_serial_workflow([vt], [m31image], context='2d')[0]
# To check that we got the prediction right, plot the amplitude of the visibility.
uvdist=numpy.sqrt(vt.data['uvw'][:,0]**2+vt.data['uvw'][:,1]**2)
plt.clf()
plt.plot(uvdist, numpy.abs(vt.data['vis']), '.')
plt.xlabel('uvdist')
plt.ylabel('Amp Visibility')
plt.show()
model = create_image_from_visibility(vt, cellsize=cellsize, npixel=512)
dirty, sumwt = invert_list_serial_workflow([vt], [model], context='2d')[0]
psf, sumwt = invert_list_serial_workflow([vt], [model], context='2d', dopsf=True)[0]
show_image(dirty)
print("Max, min in dirty image = %.6f, %.6f, sumwt = %f" % (dirty.data.max(), dirty.data.min(), sumwt))
print("Max, min in PSF = %.6f, %.6f, sumwt = %f" % (psf.data.max(), psf.data.min(), sumwt))
export_image_to_fits(dirty, '%s/imaging_dirty.fits'%(results_dir))
export_image_to_fits(psf, '%s/imaging_psf.fits'%(results_dir))
comp, residual = deconvolve_cube(dirty, psf, niter=10000, threshold=0.001, fractional_threshold=0.001,
window_shape='quarter', gain=0.7, scales=[0, 3, 10, 30])
restored = restore_cube(comp, psf, residual)
# Show the results
fig=show_image(comp)
plt.title('Solution')
fig=show_image(residual)
plt.title('Residual')
fig=show_image(restored)
plt.title('Restored')
vtmodel = create_visibility(lowr3, times, frequency, channel_bandwidth=channel_bandwidth,
weight=1.0, phasecentre=phasecentre,
polarisation_frame=PolarisationFrame('stokesI'))
vtmodel=predict_list_serial_workflow([vtmodel], [comp], context='2d')[0]
uvdist=numpy.sqrt(vt.data['uvw'][:,0]**2+vt.data['uvw'][:,1]**2)
plt.clf()
plt.plot(uvdist, numpy.abs(vt.data['vis'][:]-vtmodel.data['vis'][:]), '.', color='r',
label='Residual')
plt.plot(uvdist, numpy.abs(vt.data['vis'][:]), '.', color='b', label='Original')
plt.xlabel('uvdist')
plt.ylabel('Amp Visibility')
plt.legend()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implementing a Neural Network
Step2: We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.
Step3: Forward pass
Step4: Forward pass
Step5: Backward pass
Step6: Train the network
Step8: Load the data
Step9: Train a network
Step10: Debug the training
Step11: Tune your hyperparameters
Step12: Run on the test set
|
<ASSISTANT_TASK:>
Python Code:
# A bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.neural_net import TwoLayerNet
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
    """Returns relative error."""
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
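As a quick sanity check of how this metric behaves (an illustrative sketch that redefines the helper so the snippet runs on its own):

```python
import numpy as np

def rel_error(x, y):
    """Maximum relative error between two arrays."""
    return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))

a = np.array([1.0, 2.0, 3.0])
print(rel_error(a, a))                 # 0.0 for identical arrays
print(rel_error(a, a + 1e-9) < 1e-8)   # True: a tiny perturbation gives a tiny error
```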
# Create a small net and some toy data to check your implementations.
# Note that we set the random seed for repeatable experiments.
input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5
def init_toy_model():
np.random.seed(0)
return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)
def init_toy_data():
np.random.seed(1)
X = 10 * np.random.randn(num_inputs, input_size)
y = np.array([0, 1, 2, 2, 1])
return X, y
net = init_toy_model()
X, y = init_toy_data()
scores = net.loss(X)
print 'Your scores:'
print scores
print
print 'correct scores:'
correct_scores = np.asarray([
[-0.81233741, -1.27654624, -0.70335995],
[-0.17129677, -1.18803311, -0.47310444],
[-0.51590475, -1.01354314, -0.8504215 ],
[-0.15419291, -0.48629638, -0.52901952],
[-0.00618733, -0.12435261, -0.15226949]])
print correct_scores
# The difference should be very small. We get < 1e-7
print 'Difference between your scores and correct scores:'
print np.sum(np.abs(scores - correct_scores))
loss, _ = net.loss(X, y, reg=0.1)
correct_loss = 1.30378789133
# should be very small, we get < 1e-12
print 'Difference between your loss and correct loss:'
print np.sum(np.abs(loss - correct_loss))
from cs231n.gradient_check import eval_numerical_gradient
# Use numeric gradient checking to check your implementation of the backward pass.
# If your implementation is correct, the difference between the numeric and
# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.
loss, grads = net.loss(X, y, reg=0.1)
# these should all be less than 1e-8 or so
for param_name in grads:
f = lambda W: net.loss(X, y, reg=0.1)[0]
param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)
print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))
net = init_toy_model()
stats = net.train(X, y, X, y,
learning_rate=1e-1, reg=1e-5,
num_iters=100, verbose=False)
print 'Final training loss: ', stats['loss_history'][-1]
# plot the loss history
plt.plot(stats['loss_history'])
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.title('Training Loss history')
plt.show()
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
    """
    Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
    it for the two-layer neural net classifier. These are the same steps as
    we used for the SVM, but condensed to a single function.
    """
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# Reshape data to rows
X_train = X_train.reshape(num_training, -1)
X_val = X_val.reshape(num_validation, -1)
X_test = X_test.reshape(num_test, -1)
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print 'Train data shape: ', X_train.shape
print 'Train labels shape: ', y_train.shape
print 'Validation data shape: ', X_val.shape
print 'Validation labels shape: ', y_val.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
input_size = 32 * 32 * 3
hidden_size = 50
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=1000, batch_size=200,
learning_rate=1e-4, learning_rate_decay=0.95,
reg=0.5, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print 'Validation accuracy: ', val_acc
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Clasification accuracy')
plt.show()
from cs231n.vis_utils import visualize_grid
# Visualize the weights of the network
def show_net_weights(net):
W1 = net.params['W1']
W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)
plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))
plt.gca().axis('off')
plt.show()
show_net_weights(net)
best_net = None # store the best model into this
#################################################################################
# TODO: Tune hyperparameters using the validation set. Store your best trained #
# model in best_net. #
# #
# To help debug your network, it may help to use visualizations similar to the #
# ones we used above; these visualizations will have significant qualitative #
# differences from the ones we saw above for the poorly tuned network. #
# #
# Tweaking hyperparameters by hand can be fun, but you might find it useful to #
# write code to sweep through possible combinations of hyperparameters #
# automatically like we did on the previous exercises. #
#################################################################################
best_val = -1
for hidden_size in [200,500,700]:
for learning_rate in [5e-4,1e-3,5e-3]:
net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=1000, batch_size=200,
learning_rate=learning_rate, learning_rate_decay=0.95,
reg=0.5, verbose=False)
print "."
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
if best_val < val_acc:
best_val = val_acc
best_net = net
print "best till now ",best_val
#################################################################################
# END OF YOUR CODE #
#################################################################################
# visualize the weights of the best network
show_net_weights(best_net)
test_acc = (best_net.predict(X_test) == y_test).mean()
print 'Test accuracy: ', test_acc
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Mmr
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Step59: 14.2. Shortwave Bands
Step60: 14.3. Longwave Bands
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Step62: 15.2. Twomey
Step63: 15.3. Twomey Minimum Ccn
Step64: 15.4. Drizzle
Step65: 15.5. Cloud Lifetime
Step66: 15.6. Longwave Bands
Step67: 16. Model
Step68: 16.2. Processes
Step69: 16.3. Coupling
Step70: 16.4. Gas Phase Precursors
Step71: 16.5. Scheme Type
Step72: 16.6. Bulk Scheme Species
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'sandbox-3', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
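The `DOC.set_id(...)` / `DOC.set_value(...)` pattern used throughout the template above can be illustrated with a minimal stand-in object. `MockDoc` below is a hypothetical sketch for illustration only, not the real es-doc `DOC` implementation:

```python
# Minimal sketch of the set_id/set_value pattern; MockDoc is an assumed
# stand-in for the notebook's DOC object, not the es-doc API.
class MockDoc:
    def __init__(self):
        self.props = {}
        self._current = None

    def set_id(self, prop_id):
        # Select which CMIP6 property subsequent set_value calls target.
        self._current = prop_id

    def set_value(self, value):
        # Properties marked "PROPERTY VALUE(S)" may accumulate several values.
        self.props.setdefault(self._current, []).append(value)

DOC = MockDoc()
DOC.set_id('cmip6.aerosol.transport.scheme')
DOC.set_value("Specific transport scheme (eulerian)")
print(DOC.props['cmip6.aerosol.transport.scheme'])
# ['Specific transport scheme (eulerian)']
```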
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code::
import pandas as pd
X = pd.get_dummies(X, columns=['neighbourhood_group','room_type'], drop_first=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: On Windows
Step2: Data parallelism versus task parallelism
Step3: Manager and proxies
Step4: See
Step5: Issues
Step6: Queue and Pipe
Step7: Synchronization with Lock and Event
Step8: High-level task parallelism
Step9: Variants
Step12: Issues
Step13: Issues
Step14: EXERCISE
Step15: Programming efficiency
Step16: Strives for natural programming constructs in parallel code
Step17: Programming models and hierarchical computing
Step18: Pool caching
|
<ASSISTANT_TASK:>
Python Code:
%%file multihello.py
'''hello from another process
'''
from multiprocessing import Process
def f(name):
print 'hello', name
if __name__ == '__main__':
p = Process(target=f, args=('world',))
p.start()
p.join()
# EOF
!python2.7 multihello.py
if __name__ == '__main__':
from multiprocessing import freeze_support
freeze_support()
# Then, do multiprocessing stuff...
%%file sharedobj.py
'''demonstrate shared objects in multiprocessing
'''
from multiprocessing import Process, Value, Array
def f(n, a):
n.value = 3.1415927
for i in range(len(a)):
a[i] = -a[i]
if __name__ == '__main__':
num = Value('d', 0.0)
arr = Array('i', range(10))
p = Process(target=f, args=(num, arr))
p.start()
p.join()
print num.value
print arr[:]
# EOF
!python2.7 sharedobj.py
%%file sharedproxy.py
'''demonstrate sharing objects by proxy through a manager
'''
from multiprocessing import Process, Manager
def f(d, l):
d[1] = '1'
d['2'] = 2
d[0.25] = None
l.reverse()
if __name__ == '__main__':
manager = Manager()
d = manager.dict()
l = manager.list(range(10))
p = Process(target=f, args=(d, l))
p.start()
p.join()
print d
print l
# EOF
!python2.7 sharedproxy.py
%%file numpyshared.py
'''demonstrating shared objects using numpy and ctypes
'''
import multiprocessing as mp
from multiprocessing import sharedctypes
from numpy import ctypeslib
def fill_arr(arr_view, i):
arr_view.fill(i)
if __name__ == '__main__':
ra = sharedctypes.RawArray('i', 4)
arr = ctypeslib.as_array(ra)
arr.shape = (2, 2)
p1 = mp.Process(target=fill_arr, args=(arr[:1, :], 1))
p2 = mp.Process(target=fill_arr, args=(arr[1:, :], 2))
p1.start(); p2.start()
p1.join(); p2.join()
print arr
!python2.7 numpyshared.py
%%file mprocess.py
'''demonstrate the Process class
'''
import multiprocessing as mp
from time import sleep
from random import random
def worker(num):
sleep(2.0 * random())
name = mp.current_process().name
print "worker {},name:{}".format(num, name)
if __name__ == '__main__':
master = mp.current_process().name
print "Master name: {}".format(master)
for i in range(2):
p = mp.Process(target=worker, args=(i,))
p.start()
# Close all child processes spawn
[p.join() for p in mp.active_children()]
!python2.7 mprocess.py
%%file queuepipe.py
'''demonstrate queues and pipes
'''
import multiprocessing as mp
import pickle
def qworker(q):
v = q.get() # blocking!
print "queue worker got '{}' from parent".format(v)
def pworker(p):
import pickle # needed for encapsulation
msg = 'hello hello hello'
print "pipe worker sending {!r} to parent".format(msg)
p.send(msg)
v = p.recv()
print "pipe worker got {!r} from parent".format(v)
print "unpickled to {}".format(pickle.loads(v))
if __name__ == '__main__':
q = mp.Queue()
p = mp.Process(target=qworker, args=(q,))
p.start() # blocks at q.get()
v = 'python rocks!'
print "putting '{}' on queue".format(v)
q.put(v)
p.join()
print ''
# The two ends of the pipe: the parent and the child connections
p_conn, c_conn = mp.Pipe()
p = mp.Process(target=pworker, args=(c_conn,))
p.start()
msg = pickle.dumps([1,2,3],-1)
print "got {!r} from child".format(p_conn.recv())
print "sending {!r} to child".format(msg)
p_conn.send(msg)
import datetime
print "\nfinished: {}".format(datetime.date.today())
p.join()
!python2.7 queuepipe.py
%%file multi_sync.py
'''demonstrating locks
'''
import multiprocessing as mp
def print_lock(lk, i):
name = mp.current_process().name
lk.acquire()
for j in range(5):
print i, "from process", name
lk.release()
if __name__ == '__main__':
lk = mp.Lock()
ps = [mp.Process(target=print_lock, args=(lk,i)) for i in range(5)]
[p.start() for p in ps]
[p.join() for p in ps]
!python2.7 multi_sync.py
'''events
'''
import multiprocessing as mp
def wait_on_event(e):
name = mp.current_process().name
e.wait()
print name, "finished waiting"
if __name__ == '__main__':
e = mp.Event()
ps = [mp.Process(target=wait_on_event, args=(e,)) for i in range(10)]
[p.start() for p in ps]
print "e.is_set()", e.is_set()
#raw_input("press any key to set event")
e.set()
[p.join() for p in ps]
import multiprocessing as mp
def random_mean(x):
import numpy as np
return round(np.mean(np.random.randint(-x,x+1,10000)), 3)
if __name__ == '__main__':
# create a pool with cpu_count() procsesses
p = mp.Pool()
results = p.map(random_mean, range(1,10))
print results
print p.apply(random_mean, [100])
p.close()
p.join()
import multiprocessing as mp
def random_mean_count(x):
import numpy as np
return x + round(np.mean(np.random.randint(-x,x+1,10000)), 3)
if __name__ == '__main__':
# create a pool with cpu_count() procsesses
p = mp.Pool()
results = p.imap_unordered(random_mean_count, range(1,10))
print "[",
for i in results:
print i,
if abs(i) <= 1.0:
print "...] QUIT"
break
list(results)
p.close()
p.join()
import multiprocessing as mp
def random_mean_count(x):
import numpy as np
return x + round(np.mean(np.random.randint(-x,x+1,10000)), 3)
if __name__ == '__main__':
# create a pool with cpu_count() procsesses
p = mp.Pool()
results = p.map_async(random_mean_count, range(1,10))
print "Waiting .",
i = 0
while not results.ready():
if not i%4000:
print ".",
i += 1
print results.get()
print "\n", p.apply_async(random_mean_count, [100]).get()
p.close()
p.join()
import numpy as np
def walk(x, n=100, box=.5, delta=.2):
"perform a random walk"
w = np.cumsum(x + np.random.uniform(-delta,delta,n))
w = np.where(abs(w) > box)[0]
return w[0] if len(w) else n
N = 10
# run N trials, all starting from x=0
pwalk = np.vectorize(walk)
print pwalk(np.zeros(N))
# run again, using list comprehension instead of ufunc
print [walk(0) for i in range(N)]
# run again, using multiprocessing's map
import multiprocessing as mp
p = mp.Pool()
print p.map(walk, [0]*N)
%%file state.py
'''some good state utilities'''
def check_pickle(x, dill=False):
"checks the pickle across a subprocess"
import pickle
import subprocess
if dill:
import dill as pickle
pik = "dill"
else:
pik = "pickle"
fail = True
try:
_x = pickle.dumps(x)
fail = False
finally:
if fail:
print "DUMP FAILED"
msg = "python -c import {0}; print {0}.loads({1})".format(pik,repr(_x))
print "SUCCESS" if not subprocess.call(msg.split(None,2)) else "LOAD FAILED"
def random_seed(s=None):
"sets the seed for calls to 'random()'"
import random
random.seed(s)
try:
from numpy import random
random.seed(s)
except:
pass
return
def random_state(module='random', new=False, seed='!'):
'''return a (optionally manually seeded) random generator
For a given module, return an object that has random number generation (RNG)
methods available. If new=False, use the global copy of the RNG object.
If seed='!', do not reseed the RNG (using seed=None 'removes' any seeding).
If seed='*', use a seed that depends on the process id (PID); this is useful
for building RNGs that are different across multiple threads or processes.
'''
import random
if module == 'random':
rng = random
elif not isinstance(module, type(random)):
# convienence for passing in 'numpy'
if module == 'numpy': module = 'numpy.random'
try:
import importlib
rng = importlib.import_module(module)
except ImportError:
rng = __import__(module, fromlist=module.split('.')[-1:])
elif module.__name__ == 'numpy': # convienence for passing in numpy
from numpy import random as rng
else: rng = module
_rng = getattr(rng, 'RandomState', None) or \
getattr(rng, 'Random') # throw error if no rng found
if new:
rng = _rng()
if seed == '!': # special case: don't reset the seed
return rng
if seed == '*': # special case: random seeding for multiprocessing
try:
try:
import multiprocessing as mp
except ImportError:
import processing as mp
try:
seed = mp.current_process().pid
except AttributeError:
seed = mp.currentProcess().getPid()
except:
seed = 0
import time
seed += int(time.time()*1e6)
# set the random seed (or 'reset' with None)
rng.seed(seed)
return rng
# EOF
import multiprocess
print multiprocess.Pool().map(lambda x:x**2, range(10))
%%file runppft.py
'''demonstrate ppft
'''
import ppft
def squared(x):
return x*x
server = ppft.Server() # can take 'localhost:8000' or remote:port
result = server.submit(squared, (5,))
result.wait()
print result.finished
print result()
!python2.7 runppft.py
%%file allpool.py
'''demonstrate pool API
'''
import pathos
def sum_squared(x,y):
return (x+y)**2
x = range(5)
y = range(0,10,2)
if __name__ == '__main__':
sp = pathos.pools.SerialPool()
pp = pathos.pools.ParallelPool()
mp = pathos.pools.ProcessPool()
tp = pathos.pools.ThreadPool()
for pool in [sp,pp,mp,tp]:
print pool.map(sum_squared, x, y)
pool.close()
pool.join()
!python2.7 allpool.py
from itertools import izip
PRIMES = [
112272535095293,
112582705942171,
112272535095293,
115280095190773,
115797848077099,
1099726899285419]
def is_prime(n):
if n % 2 == 0:
return False
import math
sqrt_n = int(math.floor(math.sqrt(n)))
for i in range(3, sqrt_n + 1, 2):
if n % i == 0:
return False
return True
def sleep_add1(x):
from time import sleep
if x < 4: sleep(x/10.0)
return x+1
def sleep_add2(x):
from time import sleep
if x < 4: sleep(x/10.0)
return x+2
def test_with_multipool(Pool):
inputs = range(10)
with Pool() as pool1:
res1 = pool1.amap(sleep_add1, inputs)
with Pool() as pool2:
res2 = pool2.amap(sleep_add2, inputs)
with Pool() as pool3:
for number, prime in izip(PRIMES, pool3.imap(is_prime, PRIMES)):
assert prime if number != PRIMES[-1] else not prime
assert res1.get() == [i+1 for i in inputs]
assert res2.get() == [i+2 for i in inputs]
print "OK"
if __name__ == '__main__':
from pathos.pools import ProcessPool
test_with_multipool(ProcessPool)
import pathos
from math import sin, cos
if __name__ == '__main__':
mp = pathos.pools.ProcessPool()
tp = pathos.pools.ThreadPool()
print mp.amap(tp.map, [sin, cos], [range(3),range(3)]).get()
mp.close(); tp.close()
mp.join(); tp.join()
import pathos
import sys
rhost = 'localhost'
rport = 23
if __name__ == '__main__':
tunnel = pathos.secure.Tunnel()
lport = tunnel.connect(rhost, rport)
print 'SSH Tunnel to:', rhost
print 'Remote port:', rport
print 'Local port:', lport
print 'Press <Enter> to disconnect'
sys.stdin.readline()
tunnel.disconnect()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Image_Augmentation
Step4: VGG_Prep
Step5: VGG_16 Bottleneck
Step6: Running the model on the Train, Test, and Validation Data
Step7: Train Top Model
Step8: Confusion Matrix
Step9: Running the Top Model
Step10: Running the Top Model on the Fully Augmented Data
Step11: Running the Top Model at 224x224
Step12: Generating the Confusion Matrix for the Balanced 224x224 Run
Step13: 224x224 With Flips
Step14: Thresholded Images at 224x224 with no Augmentations
Step15: 224x224 DDSM - Two Categories
Step16: 224x224 DDSM Thresholded Images - Two Categories
|
<ASSISTANT_TASK:>
Python Code:
# Function for rotating the image files.
def Image_Rotate(img, angle):
'''Rotates a given image by the requested angle. Returns the rotated image.'''
rows,cols = img.shape
M = cv2.getRotationMatrix2D((cols/2,rows/2), angle, 1)
return(cv2.warpAffine(img,M,(cols,rows)))
# Function for augmenting the images
def Image_Augment(X, Y, vflip=False, hflip=False, major_rotate=False, minor_rotate=False):
'''
:param X: np.array of images
:param Y: np.array of labels
:param vflip, hflip, major_rotate, minor_rotate: set to True to perform the augmentations
:return: The set of augmented images and their corresponding labels
'''
if len(X) != len(Y):
print('Data and Label arrays not of the same length.')
n = vflip + hflip + 2*major_rotate + 6*minor_rotate
augmented = np.zeros([len(X) + n*len(X), X.shape[1], X.shape[2]])
label = np.zeros([len(Y) + n*len(Y), 1])
count = 0
for i in range(0, len(X)):
augmented[count] = X[i]
label[count] = Y[i]
count += 1
if vflip:
aug = cv2.flip(X[i], 0)
augmented[count] = aug
label[count] = Y[i]
count += 1
if hflip:
aug = cv2.flip(X[i], 1)
augmented[count] = aug
label[count] = Y[i]
count +=1
if major_rotate:
angles = [90, 270]
for angle in angles:
aug = Image_Rotate(X[i], angle)
augmented[count] = aug
label[count] = Y[i]
count += 1
if minor_rotate:
angles = [-45,-30,-15,15,30,45]
for angle in angles:
aug = Image_Rotate(X[i], angle)
augmented[count] = aug
label[count] = Y[i]
count += 1
return(augmented, label)
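The sizing bookkeeping in `Image_Augment` — how many output images each input produces for a given set of flags — can be checked with a small helper. The helper below is an illustration mirroring the `n = vflip + hflip + 2*major_rotate + 6*minor_rotate` preallocation logic, not part of the pipeline itself:

```python
def augmented_count(n_images, vflip=False, hflip=False,
                    major_rotate=False, minor_rotate=False):
    # Mirrors the preallocation arithmetic above: each flag adds extra
    # copies per input (2 major rotations, 6 minor rotations).
    extra = int(vflip) + int(hflip) + 2 * int(major_rotate) + 6 * int(minor_rotate)
    return n_images + extra * n_images

print(augmented_count(100, vflip=True, hflip=True))  # 300: original + vflip + hflip
```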
def VGG_Prep(img_data):
'''
:param img_data: training or test images of shape [#images, height, width]
:return: the array transformed to the correct shape for the VGG network
shape = [#images, height, width, 3] transforms to rgb and reshapes
'''
images = np.zeros([len(img_data), img_data.shape[1], img_data.shape[2], 3])
for i in range(0, len(img_data)):
im = 255 - (img_data[i] * 255) # Original imagenet images were not rescaled
im = color.gray2rgb(im)
images[i] = im
return(images)
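The shape transform `VGG_Prep` performs can be sketched with NumPy alone, replicating the single grayscale channel three times instead of calling skimage's `color.gray2rgb`. This is an illustrative equivalent under that assumption, not the function used by the pipeline:

```python
import numpy as np

def to_vgg_rgb(batch):
    # batch: [n, h, w] grayscale in [0, 1]; invert back to the 0-255 range
    # and replicate the channel 3x -> [n, h, w, 3], as VGG_16 expects.
    inv = 255 - batch * 255
    return np.stack([inv, inv, inv], axis=-1)

imgs = np.zeros((2, 4, 4))
print(to_vgg_rgb(imgs).shape)  # (2, 4, 4, 3)
```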
def vgg16_bottleneck(trainPath, testPath, imagePath, modelPath, size, balance = True, verbose = True,
verboseFreq = 50, valPath = 'None', transform = False, binary = False):
categories = bc.bcNormVsAbnormNumerics()
# Loading data
metaTr, metaTr2, mCountsTr = bc.load_training_metadata(trainPath, balance, verbose)
lenTrain = len(metaTr)
X_train, Y_train = bc.load_data(trainPath, imagePath, maxData = lenTrain,
categories=categories,
verboseFreq = verboseFreq,
imgResize=size,
normalVsAbnormal=binary)
metaTest, metaT2, mCountsT = bc.load_training_metadata(testPath, balance, verbose)
lenTest = len(metaTest)
X_test, Y_test = bc.load_data(testPath, imagePath, maxData = lenTest,
categories=categories,
verboseFreq = verboseFreq,
imgResize=size,
normalVsAbnormal=binary)
if transform:
print('Transforming the Training Data')
X_train, Y_train = Image_Augment(X=X_train, Y=Y_train, hflip=True, vflip=True, minor_rotate=False, major_rotate=False)
print('Preparing the Training Data for the VGG_16 Model.')
X_train = VGG_Prep(X_train)
print('Preparing the Test Data for the VGG_16 Model')
X_test = VGG_Prep(X_test)
print('Loading the VGG_16 Model')
model = applications.VGG16(include_top=False, weights='imagenet')
# Generating the bottleneck features for the training data
print('Evaluating the VGG_16 Model on the Training Data')
bottleneck_features_train = model.predict(X_train)
# Saving the bottleneck features for the training data
featuresTrain = os.path.join(modelPath, 'bottleneck_features_train.npy')
labelsTrain = os.path.join(modelPath, 'labels_train.npy')
print('Saving the Training Data Bottleneck Features.')
np.save(open(featuresTrain, 'wb'), bottleneck_features_train)
np.save(open(labelsTrain, 'wb'), Y_train)
# Generating the bottleneck features for the test data
print('Evaluating the VGG_16 Model on the Test Data')
bottleneck_features_test = model.predict(X_test)
# Saving the bottleneck features for the test data
featuresTest = os.path.join(modelPath, 'bottleneck_features_test.npy')
labelsTest = os.path.join(modelPath, 'labels_test.npy')
print('Saving the Test Data Bottleneck Features.')
np.save(open(featuresTest, 'wb'), bottleneck_features_test)
np.save(open(labelsTest, 'wb'), Y_test)
if valPath != 'None':
metaVal, metaV2, mCountsV = bc.load_training_metadata(valPath, verbose = verbose, balanceViaRemoval = False)
lenVal = len(metaVal)
X_val, Y_val = bc.load_data(valPath, imagePath, maxData = lenVal, verboseFreq = verboseFreq, imgResize=size)
X_val = VGG_Prep(X_val)
# Generating the bottleneck features for the test data
print('Evaluating the VGG_16 Model on the Validataion Data')
bottleneck_features_val = model.predict(X_val)
# Saving the bottleneck features for the test data
featuresVal = os.path.join(modelPath, 'bottleneck_features_validation.npy')
labelsVal = os.path.join(modelPath, 'labels_validation.npy')
print('Saving the Validation Data Bottleneck Features.')
np.save(open(featuresVal, 'wb'), bottleneck_features_val)
np.save(open(labelsVal, 'wb'), Y_val)
# global variables for loading the data
imagePath = '../images/threshold/DDSM/'
trainDataPath = '../images/ddsm/ddsm_train.csv'
testDataPath = '../images/ddsm/ddsm_test.csv'
valDataPath = '../images/ddsm/ddsm_val.csv'
imgResize = (224, 224) # can go up to (224, 224)
modelPath = '../model/'
vgg16_bottleneck(trainDataPath, testDataPath, imagePath, modelPath, imgResize,
balance = True, verbose = True, verboseFreq = 50, valPath = valDataPath,
transform = False, binary = True)
class LossHistory(cb.Callback):
def on_train_begin(self, logs={}):
self.losses = []
def on_batch_end(self, batch, logs={}):
batch_loss = logs.get('loss')
self.losses.append(batch_loss)
def train_top_model(train_feats, train_lab, test_feats, test_lab, model_path, model_save, epoch = 50, batch = 64):
train_bottleneck = os.path.join(model_path, train_feats)
train_labels = os.path.join(model_path, train_lab)
test_bottleneck = os.path.join(model_path, test_feats)
test_labels = os.path.join(model_path, test_lab)
history = LossHistory()
X_train = np.load(train_bottleneck)
Y_train = np.load(train_labels)
#Y_train = np_utils.to_categorical(Y_train, nb_classes=3)
Y_train = np_utils.to_categorical(Y_train, nb_classes=2)
X_test = np.load(test_bottleneck)
Y_test = np.load(test_labels)
#Y_test = np_utils.to_categorical(Y_test, nb_classes=3)
Y_test = np_utils.to_categorical(Y_test, nb_classes=2)
print(X_train.shape)
noise = 0.01
model = Sequential()
model.add( GaussianNoise(noise, input_shape=X_train.shape[1:]))
model.add(Flatten(input_shape=X_train.shape[1:]))
model.add(Dropout(0.7))
model.add( Dense(256, activation = 'relu') )
model.add(Dropout(0.5))
#model.add(Dense(3))
model.add(Dense(2))
model.add(Activation('softmax'))
#loss = 'categorical_crossentropy'
model.compile(optimizer='adadelta',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(X_train, Y_train,
nb_epoch=epoch,
batch_size=batch,
callbacks=[history],
validation_data=(X_test, Y_test),
verbose=2)
score = model.evaluate(X_test, Y_test, batch_size=16, verbose=0)
print "Network's test score [loss, accuracy]: {0}".format(score)
model.save_weights(os.path.join(model_path, model_save))
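`train_top_model` relies on `np_utils.to_categorical` to one-hot encode the labels. A plain-NumPy sketch of that behavior (an illustration, not the Keras implementation) makes the label shapes explicit:

```python
import numpy as np

def one_hot(y, nb_classes):
    # Integer labels of shape [n] (or [n, 1]) -> indicator matrix
    # of shape [n, nb_classes], one 1 per row.
    y = np.asarray(y, dtype=int).ravel()
    out = np.zeros((len(y), nb_classes))
    out[np.arange(len(y)), y] = 1
    return out

print(one_hot([0, 1, 1, 0], 2))
```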
def cf_Matrix(data, label, weights, path, save):
data = os.path.join(path, data)
label = os.path.join(path, label)
categories = bc.bcNormVsAbnormNumerics()
X = np.load(data)
Y = np.load(label)
#Y = np_utils.to_categorical(Y, nb_classes=3)
# Loading and preping the model
model = Sequential()
model.add(Flatten(input_shape=X.shape[1:]))
model.add(Dropout(0.7))
model.add(Dense(256))
model.add(Activation('relu'))
model.add(Dropout(0.5))
#model.add(Dense(3))
model.add(Dense(2))
model.add(Activation('softmax'))
model.load_weights(os.path.join('../model/', weights))
# try Adadelta and Adam
model.compile(optimizer='adadelta',
loss='categorical_crossentropy',
metrics=['accuracy'])
predictOutput = model.predict(X, batch_size=64, verbose=2)
#numBC = bc.numericBC()
numBC = bc.reverseDict(categories)
predClasses = []
for i in range(len(predictOutput)):
arPred = np.array(predictOutput[i])
predictionProb = arPred.max()
predictionNdx = arPred.argmax()
predClassName = numBC[predictionNdx]
predClasses.append(predictionNdx)
# Use sklearn's helper method to generate the confusion matrix
cnf_matrix = skm.confusion_matrix(Y, predClasses)
# Ploting the confusion matrix
class_names = numBC.values()
np.set_printoptions(precision=2)
fileCfMatrix = '../figures/confusion_matrix-' + save + '.png'
plt.figure()
bc.plot_confusion_matrix(cnf_matrix, classes=class_names,
title='Confusion matrix, \n' + save)
plt.savefig(fileCfMatrix)
plt.show()
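What `skm.confusion_matrix` computes in `cf_Matrix` can be sketched with a few lines of plain Python: rows index the true class, columns the predicted class. This helper is illustrative only, not the sklearn implementation:

```python
def confusion_counts(y_true, y_pred, n_classes):
    # m[t][p] counts samples with true class t predicted as class p.
    m = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

print(confusion_counts([0, 0, 1, 1], [0, 1, 1, 1], 2))  # [[1, 1], [0, 2]]
```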
# Locations for the bottleneck and labels files that we need
modelPath = '../model/'
train_bottleneck = 'bottleneck_features_train.npy'
train_labels = 'labels_train.npy'
test_bottleneck = 'bottleneck_features_test.npy'
test_labels = 'labels_test.npy'
validation_bottleneck = 'bottleneck_features_validation.npy'
validation_label = 'labels_validation.npy'
top_model_weights_path = 'top_weights02.h5'
train_top_model(train_feats=train_bottleneck, train_lab=train_labels, test_feats=test_bottleneck, test_lab=test_labels,
model_path=modelPath, model_save=top_model_weights_path)
feats_loc = '150_test_val/bottleneck_features_test.npy'
feats_labs = '150_test_val/labels_test.npy'
weight = 'balanced150run2/top_weights02.h5'
saveFile = 'balanced150'
cf_Matrix(data=feats_loc, label=feats_labs, weights=weight, path=modelPath, save=saveFile)
# Locations for the bottleneck and labels files that we need
modelPath = '../model/'
train_bottleneck = 'bottleneck_features_150fulltrans_train.npy'
train_labels = 'labels_150fulltrans_train.npy'
test_bottleneck = 'bottleneck_features_test.npy'
test_labels = 'labels_test.npy'
validation_bottleneck = 'bottleneck_features_validation.npy'
validation_label = 'labels_validation.npy'
top_model_weights_path = 'top_weights_150fulltrans.h5'
train_top_model(train_feats=train_bottleneck, train_lab=train_labels, test_feats=test_bottleneck, test_lab=test_labels,
model_path=modelPath, model_save=top_model_weights_path)
feats_loc = '150_test_val/bottleneck_features_test.npy'
feats_labs = '150_test_val/labels_test.npy'
weight = 'balanced150FullTrans/top_weights_150fulltrans.h5'
saveFile = 'balanced150FullTrans'
cf_Matrix(data=feats_loc, label=feats_labs, weights=weight, path=modelPath, save=saveFile)
# Locations for the bottleneck and labels files that we need
modelPath = '../model/'
train_bottleneck = 'bottleneck_features_train_224.npy'
train_labels = 'labels_train_224.npy'
test_bottleneck = 'bottleneck_features_test.npy'
test_labels = 'labels_test.npy'
validation_bottleneck = 'bottleneck_features_validation.npy'
validation_label = 'labels_validation.npy'
top_model_weights_path = 'top_weights_224.h5'
train_top_model(train_feats=train_bottleneck, train_lab=train_labels, test_feats=test_bottleneck, test_lab=test_labels,
model_path=modelPath, model_save=top_model_weights_path)
feats_loc = '224_test_val/bottleneck_features_test.npy'
feats_labs = '224_test_val/labels_test.npy'
weight = 'balanced224/top_weights_224.h5'
saveFile = 'balanced224'
cf_Matrix(data=feats_loc, label=feats_labs, weights=weight, path=modelPath, save=saveFile)
# Locations for the bottleneck and labels files that we need
modelPath = '../model/'
train_bottleneck = 'Balanced224flips/bottleneck_features_train_224flip.npy'
train_labels = 'Balanced224flips/labels_train_224flip.npy'
test_bottleneck = '224_test_val/bottleneck_features_test.npy'
test_labels = '224_test_val/labels_test.npy'
validation_bottleneck = 'bottleneck_features_valdation.npy'
validation_label = 'labels_validation.npy'
top_model_weights_path = 'Balanced224flips/top_weights_224flip.h5'
train_top_model(train_feats=train_bottleneck, train_lab=train_labels, test_feats=test_bottleneck, test_lab=test_labels,
model_path=modelPath, model_save=top_model_weights_path)
feats_loc = '224_test_val/bottleneck_features_test.npy'
feats_labs = '224_test_val/labels_test.npy'
weight = 'balanced224flips/top_weights_224flip.h5'
saveFile = 'balanced224flip'
cf_Matrix(data=feats_loc, label=feats_labs, weights=weight, path=modelPath, save=saveFile)
# Locations for the bottleneck and labels files that we need
modelPath = '../model/'
train_bottleneck = 'bottleneck_features_train_224th.npy'
train_labels = 'labels_train_224th.npy'
test_bottleneck = 'bottleneck_features_test.npy'
test_labels = 'labels_test.npy'
validation_bottleneck = 'bottleneck_features_valdation.npy'
validation_label = 'labels_validation.npy'
top_model_weights_path = 'top_weights_224th.h5'
train_top_model(train_feats=train_bottleneck, train_lab=train_labels, test_feats=test_bottleneck, test_lab=test_labels,
model_path=modelPath, model_save=top_model_weights_path)
feats_loc = '224_threshold/bottleneck_features_test.npy'
feats_labs = '224_threshold/labels_test.npy'
weight = 'balanced224Threshold/top_weights_224th.h5'
saveFile = 'balanced224Threshold'
cf_Matrix(data=feats_loc, label=feats_labs, weights=weight, path=modelPath, save=saveFile)
# Locations for the bottleneck and labels files that we need
modelPath = '../model/'
train_bottleneck = 'Balanced224Binary/bottleneck_features_train_224twoclass.npy'
train_labels = 'Balanced224Binary/labels_train_224twoclass.npy'
test_bottleneck = '224_binary/bottleneck_features_test.npy'
test_labels = '224_binary/labels_test.npy'
validation_bottleneck = 'bottleneck_features_valdation.npy'
validation_label = 'labels_validation.npy'
top_model_weights_path = 'Balanced224Binary/top_weights_224twoclass.h5'
train_top_model(train_feats=train_bottleneck, train_lab=train_labels, test_feats=test_bottleneck, test_lab=test_labels,
model_path=modelPath, model_save=top_model_weights_path, epoch = 100)
feats_loc = '224_binary/bottleneck_features_test.npy'
feats_labs = '224_binary/labels_test.npy'
weight = 'balanced224Binary/top_weights_224twoclass.h5'
saveFile = 'balanced224Twoclass'
cf_Matrix(data=feats_loc, label=feats_labs, weights=weight, path=modelPath, save=saveFile)
# Locations for the bottleneck and labels files that we need
modelPath = '../model/'
train_bottleneck = 'bottleneck_features_train_224th_twoclass.npy'
train_labels = 'labels_train_224th_twoclass.npy'
test_bottleneck = 'bottleneck_features_test.npy'
test_labels = 'labels_test.npy'
validation_bottleneck = 'bottleneck_features_valdation.npy'
validation_label = 'labels_validation.npy'
top_model_weights_path = 'top_weights_224th_twoclass.h5'
train_top_model(train_feats=train_bottleneck, train_lab=train_labels, test_feats=test_bottleneck, test_lab=test_labels,
model_path=modelPath, model_save=top_model_weights_path)
feats_loc = '224_binary/bottleneck_features_test.npy'
feats_labs = '224_binary/labels_test.npy'
weight = 'balanced224Th_Binary/top_weights_224th_twoclass.h5'
saveFile = 'balanced224Th_Twoclass'
cf_Matrix(data=feats_loc, label=feats_labs, weights=weight, path=modelPath, save=saveFile)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create example data
Step2: Replace the header value with the first row's values
|
<ASSISTANT_TASK:>
Python Code:
# Import required modules
import pandas as pd
# Create a values as dictionary of lists
raw_data = {'0': ['first_name', 'Molly', 'Tina', 'Jake', 'Amy'],
'1': ['last_name', 'Jacobson', 'Ali', 'Milner', 'Cooze'],
'2': ['age', 52, 36, 24, 73],
'3': ['preTestScore', 24, 31, 2, 3]}
# Create a dataframe
df = pd.DataFrame(raw_data)
# View a dataframe
df
# Create a new variable called 'header' from the first row of the dataset
header = df.iloc[0]
# Replace the dataframe with a new one which does not contain the first row
df = df[1:]
# Rename the dataframe's column values with the header variable
df.rename(columns = header)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Constants and Defaults
Step6: Subhalo Mass Function
Step10: Substructure Likelihood Function
Step12: Mass Probability
Step14: Likelihood Function
Step17: Simulated Data
Step18: Validation Results
Step19: As an extended validation, we attempt to reproduce Figure 3 from VK09.
|
<ASSISTANT_TASK:>
Python Code:
# General imports
%matplotlib inline
import logging
import numpy as np
import pylab as plt
from scipy import stats
from scipy import integrate
from scipy.integrate import simps,trapz,quad,nquad
from scipy.interpolate import interp1d
# Constants
MMIN,MMAX = 4e6,4e9
MLOW,MHIGH = 0.3e8,4e9
P = (MMIN,MMAX,MLOW,MHIGH)
NSTEPS=(1000,1000)
# Defaults
ALPHA=1.9
FRAC=0.02
MHALO=1e11
SIGMA = 0
# Utility functions
def create_mass_array(log=True,nsteps=(1500,1300)):
"""Create an array spanning the true and observable mass ranges.
Parameters:
-----------
p : Tuple of the range of masses (MMIN,MMAX,MHIGH,MLOW)
nsteps : Number of steps to span the ranges (NTRUE, NCONV)
log : Sample in log or linear space
Returns:
--------
m,mp,mm,mmp : The
"""
nsteps = tuple(map(int, nsteps))  # map() alone is not subscriptable in Python 3
if log:
m = np.logspace(np.log10(MMIN),np.log10(MMAX),nsteps[0])
mp = np.logspace(np.log10(MLOW),np.log10(MHIGH),nsteps[1])
else:
m = np.linspace(MMIN,MMAX,nsteps[0])
mp = np.linspace(MLOW,MHIGH,nsteps[1])
mm,mmp = np.meshgrid(m,mp)
return m,mp,mm,mmp
def mhalo(radius=None):
"""Return the halo mass as a function of maximum radius.
WARNING: Returns constant MHALO independent of R!
Parameters:
-----------
radius : Maximum radius for enclosed halo mass
Returns:
--------
mhalo : Enclosed halo mass
"""
return MHALO
def dP_dm_true(m,alpha):
"""True mass function (Eqn. 6) normalized over full mass range [MMIN,MMAX].
Parameters:
----------
m : True mass of subhalo
alpha : Power-law index of subhalo mass function
Returns:
--------
dP_dm_true : Normalized pdf
"""
m = np.atleast_1d(m)
ret = ((1-alpha)*m**(-alpha))/(MMAX**(1-alpha)-MMIN**(1-alpha))
ret = np.where(alpha==1,(m**-alpha)/np.log(MMAX/MMIN),ret)
return np.where(np.isfinite(ret),ret,np.nan)
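As a quick sanity check, the normalization of this power-law pdf can be verified numerically. A self-contained sketch (constants and formula restated locally so it does not depend on the notebook's globals):

```python
# Self-contained check that the power-law pdf integrates to 1 over
# [MMIN, MMAX], using a trapezoid rule on a log-spaced grid.
import numpy as np

MMIN_, MMAX_, ALPHA_ = 4e6, 4e9, 1.9

def pdf(m, alpha=ALPHA_):
    return (1 - alpha) * m**(-alpha) / (MMAX_**(1 - alpha) - MMIN_**(1 - alpha))

m_grid = np.logspace(np.log10(MMIN_), np.log10(MMAX_), 20_000)
y = pdf(m_grid)
total = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(m_grid))  # trapezoid rule
```

With 20,000 grid points the numerical integral agrees with the analytic normalization to well under a part in a thousand.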
def dP_dm_conv(m,mp,alpha,sigma=SIGMA):
"""The convolved mass function.
Parameters:
-----------
m : The range of true masses
mp : The range of observed masses
Returns:
--------
dP_dm_conv : The integrated convolved mass function
"""
if sigma == 0:
# Convolution replaced with delta function when sigma == 0
return dP_dm_true(np.atleast_2d(mp.T)[0],alpha)
else:
return simps(dP_dm_true(m,alpha)*stats.norm.pdf(m,loc=mp,scale=sigma),m)
def mu0(alpha, frac, radius=None):
"""Expected number of substructures from the true mass function (Eq. 5).
Parameters:
-----------
alpha : Slope of the substructure mass function
frac : Substructure mass fraction
radius: Enclosed radius
Returns:
--------
mu0 : Predicted number of substructures for the true mass function
"""
alpha = np.atleast_1d(alpha)
integral = ( (2-alpha)*(MMAX**(1-alpha) - MMIN**(1-alpha))) / \
( (1-alpha)*(MMAX**(2-alpha) - MMIN**(2-alpha)))
integral = np.where(alpha==2,-(MMAX**-1 - MMIN**-1)/np.log(MMAX/MMIN),integral)
integral = np.where(alpha==1,np.log(MMAX/MMIN)/(MMAX - MMIN),integral)
return frac * mhalo(radius) * integral
def mu(alpha, frac, radius=None, sigma=SIGMA):
"""Expected number of substructures from the observable mass function (Eq. 4)
Parameters:
-----------
alpha : Slope of the substructure mass function
frac : Substructure mass fraction
radius: Enclosed radius
sigma : Substructure mass error
Returns:
--------
mu : Predicted number of substructures for the observable mass function
"""
m,mp,mm,mmp = create_mass_array()
_mu0 = mu0(alpha, frac, radius)
_integral = simps(dP_dm_conv(mm,mmp,alpha,sigma=sigma),mp)
return _mu0 * _integral
def LogProbNumber(data, alpha, frac, R=1, sigma=SIGMA):
"""Logarithm of the joint probability for the number of substructures.
Parameters:
-----------
data : Input data
alpha: Index of the mass function
frac : Substructure mass fraction
Returns:
--------
prob : Logarithm of the joint Poisson probability
"""
logging.debug(' LogProbNumber: %s'%len(data))
nsrc = data['nsrc']
_mu = mu(alpha,frac,R,sigma=sigma)
return np.sum(stats.poisson.logpmf(nsrc[:,np.newaxis],_mu),axis=0)
def LogProbMass(data, alpha, sigma=SIGMA):
"""Logarithm of the joint probability for mass of substructures.
Parameters:
-----------
data : Input data
alpha: Index of the mass function
Returns:
--------
prob: Logarithm of the joint spectral probability
"""
logging.debug(' LogProbMass: %s'%len(data))
m,mp,mm,mmp = create_mass_array()
masses = np.concatenate(data['mass'])
top = np.sum(np.log([dP_dm_conv(m,mi,alpha,sigma=sigma) for mi in masses]))
bottom = len(masses)*np.log(simps(dP_dm_conv(mm,mmp,alpha,sigma=sigma),mp))
return top - bottom
def LogLike(data, alpha, frac, sigma=SIGMA):
"""Logarithm of the joint likelihood over all lens systems."""
logging.debug('LogLike: %s'%len(data))
logpois = LogProbNumber(data, alpha, frac, sigma=sigma)
logprob = LogProbMass(data, alpha, sigma=sigma)
return logpois + logprob
def sample(size,alpha=ALPHA):
"""Random samples of the mass function.
Parameters:
-----------
size : Number of samples to make
alpha : Index of the mass function
Returns:
--------
mass : Random samples of the mass function
"""
x = create_mass_array(log=False,nsteps=(1e4,1e1))[0]
pdf = dP_dm_true(x,alpha)
size = int(size)
cdf = np.cumsum(pdf)
cdf = np.insert(cdf, 0, 0.)
cdf /= cdf[-1]
icdf = interp1d(cdf, range(0, len(cdf)), bounds_error=False, fill_value=-1)
u = np.random.uniform(size=size)
index = np.floor(icdf(u)).astype(int)
index = index[index >= 0]
masses = x[index]
return masses
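For a pure power law the inverse CDF also has a closed form, which avoids building the interpolated CDF entirely. A sketch of that alternative sampler (constants restated locally so it stands alone):

```python
# Closed-form inverse-transform sampling for a power-law mass function
# dP/dm ~ m**-alpha on [MMIN, MMAX].
import numpy as np

MMIN_, MMAX_, ALPHA_ = 4e6, 4e9, 1.9

def sample_power_law(size, alpha=ALPHA_, rng=np.random.default_rng(0)):
    u = rng.uniform(size=size)
    lo, hi = MMIN_**(1 - alpha), MMAX_**(1 - alpha)
    # invert CDF(m) = (m**(1-a) - lo) / (hi - lo)
    return (lo + u * (hi - lo))**(1.0 / (1 - alpha))

masses = sample_power_law(100_000)
# analytic median: the mass at which the CDF equals 0.5
median = (0.5 * (MMIN_**(1 - ALPHA_) + MMAX_**(1 - ALPHA_)))**(1.0 / (1 - ALPHA_))
```

Half of the samples should land below the analytic median, which gives a cheap correctness check.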
def simulate(nlens=1, alpha=ALPHA, frac=FRAC, sigma=SIGMA):
"""Generate the simulated data set of lens, sources, and masses.
Parameters:
-----------
nlens: Number of lenses to generate.
alpha: Index of the substructure mass function
frac: Substructure mass fraction
Returns:
--------
data : Array of output lenses and substructures
"""
# First, figure out how many lenses we are sampling
m,mp,mm,mmp = create_mass_array()
pdf = dP_dm_true(m,alpha)
_mu = mu0(alpha,frac)
lenses = stats.poisson.rvs(_mu,size=nlens)
out = []
for i,l in enumerate(lenses):
masses = sample(l,alpha=alpha)
if sigma != 0:
masses += stats.norm.rvs(size=len(masses),scale=sigma)
sel = (masses > MLOW) & (masses < MHIGH)
mass = masses[sel]
out += [(i,len(mass),mass)]
names = ['lens','nsrc','mass']
return np.rec.fromrecords(out,names=names)
# Simulate a large set of lenses
data = simulate(1000, alpha=ALPHA, frac=FRAC, sigma=SIGMA)
# Plot a histogram of the masses
bins = np.logspace(np.log10(MLOW),np.log10(MHIGH),50)
masses = np.concatenate(data['mass'])
n,b,p = plt.hist(masses,bins=bins,log=True,normed=True, label='Samples'); plt.gca().set_xscale('log')
# Plot the pdf normalized over the observable mass range
m,mp,mm,mmp = create_mass_array()
norm = simps(dP_dm_conv(mm,mmp,ALPHA,sigma=SIGMA),mp)
plt.plot(b,dP_dm_true(b,alpha=ALPHA)/norm,label='Normalized PDF')
plt.legend(loc='upper right')
plt.xlabel(r"Mass ($M_\odot$)"); plt.ylabel("Normalized Counts")
FRAC=0.005; ALPHA=1.9; MLOW=0.3e8; SIGMA=0
nlens=10; seed = 1
np.random.seed(seed)
fracs = np.linspace(0.001,0.03,151)
alphas = np.linspace(1.0,3.0,51)
data = simulate(nlens,alpha=ALPHA, frac=FRAC, sigma=SIGMA)
loglikes = np.array([LogLike(data,a,fracs) for a in alphas])
loglikes -= loglikes.max()
loglikes = loglikes.T
# Note the typo in VK09's definition of the 3 sigma p-value
levels = -stats.chi2.isf([0.0028,0.05,0.32,1.0],2)/2.
plt.contourf(alphas,fracs,loglikes,levels=levels,cmap='binary')
plt.axvline(ALPHA,ls='--',c='dodgerblue')
plt.axhline(FRAC,ls='--',c='dodgerblue')
plt.colorbar(label=r'$\Delta \log {\cal L}$')
plt.xlabel(r'Slope ($\alpha$)')
plt.ylabel(r'Mass Fraction ($f$)')
FRAC=0.005; ALPHA=1.9; SIGMA=0
nlens=200; seed = 0
fracs = np.linspace(0.001,0.03,151)
alphas = np.linspace(1.0,3.0,51)
fig,axes = plt.subplots(1,3,figsize=(10,3),sharey=True)
for i,m in enumerate([0.3e8,1.0e8,3e8]):
MLOW = m
data = simulate(nlens,alpha=ALPHA, frac=FRAC, sigma=SIGMA)
loglikes = np.array([LogLike(data,a,fracs) for a in alphas])
loglikes -= loglikes.max()
loglikes = loglikes.T
plt.sca(axes[i])
plt.contourf(alphas,fracs,loglikes,levels=levels,cmap='binary')
plt.axvline(ALPHA,ls='--',c='dodgerblue')
plt.axhline(FRAC,ls='--',c='dodgerblue')
plt.xlabel(r'Slope ($\alpha$)')
if i == 0: plt.ylabel(r'Mass Fraction ($f$)')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model
Step2: Question
Step3: Next, we put the model on the mesh
Step4: Forward Simulation
Step5: Compute Predicted Data
Step6: Question
Step7: Electric Field in the Cylinder model
|
<ASSISTANT_TASK:>
Python Code:
from SimPEG import Mesh, EM, Utils, Maps
from matplotlib.colors import LogNorm
%pylab inline
import numpy as np
from scipy.constants import mu_0
from ipywidgets import interact, IntSlider
import pickle  # cPickle was merged into pickle in Python 3
url = "https://storage.googleapis.com/simpeg/kevitsa_synthetic/"
files = ['dc_mesh.txt', 'dc_sigma.txt']
keys = ['mesh', 'sigma']
downloads = Utils.download([url + f for f in files], folder='./KevitsaDC', overwrite=True)
downloads = dict(zip(keys, downloads))
mesh3D = Mesh.TensorMesh.readUBC(downloads["mesh"])
sigmadc = mesh3D.readModelUBC(downloads["sigma"])
actind = ~np.isnan(sigmadc)
figsize(8, 4)
indy = 6
temp = 1./sigmadc.copy()
temp[~actind] = np.nan
out = mesh3D.plotSlice(temp, normal="Y", ind=indy, pcolorOpts={"norm": LogNorm(), "cmap":"jet_r"}, clim=(1e0, 1e3))
plt.ylim(-800, 250)
plt.xlim(5000, 11000)
plt.gca().set_aspect(2.)
plt.title(("y= %d m")%(mesh3D.vectorCCy[indy]))
cb = plt.colorbar(out[0], orientation="horizontal")
cb.set_label("Resistivity (Ohm-m)")
sig_halfspace = 2e-3
sig_target = 0.1
sig_air = 1e-8
times = np.logspace(-4, -2, 21)
def diffusion_distance(sigma, time):
return 1.28*np.sqrt(time/(sigma * mu_0))
print(
'min diffusion distance: {:.2e} m'.format(diffusion_distance(sig_halfspace, times.min()))
)
print(
'max diffusion distance: {:.2e} m'.format(diffusion_distance(sig_halfspace, times.max()))
)
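The square-root time dependence of the diffusion distance is worth internalizing: quadrupling the simulation time only doubles how far the fields have diffused. A self-contained check of that scaling (formula and vacuum permeability restated locally):

```python
# d = 1.28 * sqrt(t / (sigma * mu_0)) scales as sqrt(t):
# four times the time gives twice the distance.
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability (H/m)

def diff_dist(sigma, t):
    return 1.28 * math.sqrt(t / (sigma * MU_0))

d1 = diff_dist(2e-3, 1e-4)
d4 = diff_dist(2e-3, 4e-4)
```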
# x-direction
csx = 20 # core mesh cell width in the x-direction
ncx = 20
npadx = 15 # number of x padding cells
# z-direction
csz = 20 # core mesh cell width in the z-direction
ncz = 40
npadz = 15 # number of z padding cells
# padding factor (expand cells to infinity)
pf = 1.3
# cell spacings in the x and z directions
hx = Utils.meshTensor([(csx, ncx), (csx, npadx, pf)])
hz = Utils.meshTensor([(csz, npadz, -pf), (csz, ncz), (csz, npadz, pf)])
# define a SimPEG mesh
mesh = Mesh.CylMesh([hx, 1, hz], x0 ="00C")
# X and Z limits we want to plot to. Try
xlim = np.r_[0., mesh.vectorCCx.max()]
zlim = np.r_[mesh.vectorCCz.max(), mesh.vectorCCz.min()]
fig, ax = plt.subplots(1,1)
mesh.plotGrid(ax=ax)
ax.set_title('Simulation Mesh')
ax.set_xlim(xlim)
ax.set_ylim(zlim)
print(
'The maximum diffusion distance (in background) is: {:.2e} m. '
'Does the mesh go sufficiently past that?'.format(
diffusion_distance(sig_halfspace, times.max())
)
)
ax.set_aspect("equal")
# create a vector that has one entry for every cell center
sigma = sig_air*np.ones(mesh.nC) # start by defining the conductivity of the air everwhere
sigma[mesh.gridCC[:,2] < 0.] = sig_halfspace # assign halfspace cells below the earth
sigma_background = sigma.copy()
sigma_layer = sigma.copy()
radius = 150.
# indices of the sphere (where (x-x0)**2 + (z-z0)**2 <= R**2)
layer_ind = np.logical_and(mesh.gridCC[:,2]>-300, mesh.gridCC[:,2]<-200)
blk_ind = (mesh.gridCC[:,0] < radius) & layer_ind
sigma[blk_ind] = sig_target # assign the conductivity of the sphere
sigma_layer[layer_ind] = sig_target # assign the conductivity of the sphere
plt.set_cmap(plt.get_cmap('jet_r'))
# Plot a cross section of the conductivity model
fig, ax = plt.subplots(1,1)
out = mesh.plotImage(np.log10(1./sigma_layer), ax=ax, mirror=True, clim=(0, 3), grid=False)
cb = plt.colorbar(out[0], ticks=np.linspace(0,3,4), format="10$^{%.1f}$")
# plot formatting and titles
cb.set_label('Resistivity (Ohm-m)', fontsize=13)
ax.axis('equal')
ax.set_xlim([-120., 120.])
ax.set_ylim([-500., 0.])
ax.set_title('Layer')
# Plot a cross section of the conductivity model
fig, ax = plt.subplots(1,1)
out = mesh.plotImage(np.log10(1./sigma), ax=ax, mirror=True, clim=(0, 3), grid=False)
# plot formatting and titles
cb = plt.colorbar(out[0], ticks=np.linspace(0,3,4), format="10$^{%.1f}$")
# plot formatting and titles
cb.set_label('Resistivity (Ohm-m)', fontsize=13)
ax.axis('equal')
ax.set_xlim([-120., 120.])
ax.set_ylim([-500., 0.])
ax.set_title('Cylinder')
rx_loc = np.array([[0., 0., 41.]])
src_loc = np.array([[0., 0., 41.]])
offTime = 0.007307
peakTime = 0.006
a = 3.
dbdt_z = EM.TDEM.Rx.Point_dbdt(locs=rx_loc, times=times+offTime, orientation='z') # vertical db_dt
rxList = [dbdt_z] # list of receivers
srcList = [
EM.TDEM.Src.CircularLoop(
rxList, loc=src_loc, radius=13., orientation='z', waveform=EM.TDEM.Src.VTEMWaveform(offTime=offTime, peakTime=peakTime, a=3.)
)
]
# solve the problem at these times
timeSteps = [(peakTime/5, 5), ((offTime-peakTime)/5, 5), (1e-5, 10), (5e-5, 10), (1e-4, 10), (5e-4, 19)]
prob = EM.TDEM.Problem3D_b(mesh, timeSteps = timeSteps, sigmaMap=Maps.IdentityMap(mesh))
survey = EM.TDEM.Survey(srcList)
prob.pair(survey)
src = srcList[0]
rx = src.rxList[0]
wave = []
for time in prob.times:
wave.append(src.waveform.eval(time))
wave = np.hstack(wave)
plt.plot(prob.times, wave, 'k.-')
plt.plot(rx.times, np.zeros_like(rx.times), 'r.')
plt.ylim(-0.2, 1.2)
plt.grid(True)
plt.title('Current Waveform')
plt.xlabel('time (s)')
d_background = survey.dpred(sigma_background)
d_layer = survey.dpred(sigma_layer)
d = survey.dpred(sigma)
area = 13**2*np.pi
figsize(6, 3)
plt.loglog((rx.times-offTime)*1e6, -d_layer*1e12/area, 'k', lw=2)
plt.loglog((rx.times-offTime)*1e6, -d*1e12/area , 'b', lw=2)
plt.loglog((rx.times-offTime)*1e6, -d_background*1e12/area, 'k--', lw=1)
plt.xlabel("Time (micro-s)")
plt.ylabel("Voltage (pV/A-m$^4$)")
plt.legend(("Layer", "Cylinder","Half-space"), loc=1, fontsize = 10)
plt.ylim(1e-4, 1e1)
plt.grid(True)
f_layer = prob.fields(sigma_layer)
plt.set_cmap(plt.get_cmap('viridis'))
def vizfield_layer(itime):
fig = plt.figure(figsize = (7*0.8,5*0.8))
ax = plt.subplot(111)
cb = plt.colorbar(mesh.plotImage(mesh.aveE2CC*f_layer[src, 'e', itime], ax=ax, mirror=True)[0])
# plot formatting and titles
cb.set_label('e$_{y}$ (V/m)', fontsize=13)
ax.axis('equal')
ax.set_xlim([-300., 300.])
ax.set_ylim([-500., 0.])
ax.set_title(('|e$_{y}$| at %d micro-s')%(prob.times[itime]*1e6))
plt.show()
interact(vizfield_layer, itime=IntSlider(min=0, max=len(prob.times)-1, step=1, value=11))
f = prob.fields(sigma)
def vizfield_cylinder(itime):
fig = plt.figure(figsize = (7*0.8,5*0.8))
ax = plt.subplot(111)
cb = plt.colorbar(mesh.plotImage(mesh.aveE2CC*f[src, 'e', itime], ax=ax, mirror=True)[0])
# plot formatting and titles
cb.set_label('e$_{y}$ (V/m)', fontsize=13)
# ax.axis('equal')
ax.set_xlim([-300., 300.])
ax.set_ylim([-500., 0.])
ax.set_title(('|e$_{y}$| at %d micro-s')%(prob.times[itime]*1e6))
plt.tight_layout()
plt.show()
interact(vizfield_cylinder, itime=IntSlider(min=0, max=len(prob.times)-1, step=1, value=11))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Graph layouts
Step2: Draw only specific nodes
Step3: Colors
Step4: Plotting with node/edge attributes
|
<ASSISTANT_TASK:>
Python Code:
G = nx.Graph() #create a graph
G.add_nodes_from([0,1,2,3]) #add some nodes
G.add_edges_from([(0,1),(1,2),(2,3),(3,0)]) #add some edges
pos = {0:[1,1],1:[1,2],2:[2,3],3:[3,2]} #dictionary of positions
nx.draw_networkx(G,pos) #plot edges as lines, nodes as dots + labels
nx.draw_networkx_nodes(G,pos) #plot only the nodes
nx.draw_networkx_edges(G,pos) #plot only the edges
#random graph with edge probability of 0.3
G = nx.gnp_random_graph(20,0.3)
pos_circular = nx.circular_layout(G)
nx.draw_networkx(G,pos_circular)
print(pos_circular)
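Under its default scale, `circular_layout` places the nodes evenly on a circle of radius 1. A plain-numpy sketch of the same placement (this mirrors the layout's behavior under defaults; the exact internals are an assumption here, not a call into networkx):

```python
# n nodes evenly spaced on a circle of radius 1, keyed by node index,
# in the same {node: array([x, y])} shape that layouts return.
import numpy as np

def circular_positions(n, radius=1.0):
    theta = 2 * np.pi * np.arange(n) / n
    return {i: np.array([radius * np.cos(t), radius * np.sin(t)])
            for i, t in enumerate(theta)}

pos20 = circular_positions(20)
radii = [np.linalg.norm(p) for p in pos20.values()]
```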
pos_random = nx.random_layout(G)
nx.draw(G,pos_random)
pos_shell = nx.shell_layout(G)
nx.draw(G,pos_shell)
pos_spring = nx.spring_layout(G)
nx.draw(G,pos_spring)
pos_spectral = nx.spectral_layout(G)
nx.draw(G,pos_spectral)
nodes = [3,4,5,6]
edges = [e for e in G.edges() if e[0] in nodes or e[1] in nodes]
nx.draw_networkx_nodes(G,pos_spring, nodelist=nodes)
nx.draw_networkx_edges(G,pos_spring,edgelist=edges)
import matplotlib.pyplot as plt
G = nx.gnp_random_graph(30,0.15)
pos = nx.circular_layout(G)
blues = plt.cm.Blues
reds = plt.cm.hot
nx.draw_networkx_nodes(G,pos,node_color=range(30),cmap=blues)
nx.draw_networkx_edges(G,pos,edge_color=range(len(G.edges())),edge_cmap=reds,width=3)
G = nx.Graph()
G.add_nodes_from([1,2,3,4])
G.node[1]['mood'] = 'happy'
G.node[2]['mood'] = 'sad'
G.node[3]['mood'] = 'bored'
G.node[4]['x'] = 2.0
G.node[1]['mood']
G.node[4]['x']
G = nx.read_graphml("../data/visualization/medium_graph.xml")
print(len(G.nodes()))
G.nodes(data=True)
pos = nx.spring_layout(G)
nx.draw_networkx_edges(G,pos,edge_color="DarkSlateGray",alpha=0.4)
nx.draw_networkx_nodes(G,pos,node_color="FireBrick",alpha=0.6,node_size=10,linewidths=0.5)
pos = {}
for k in G.node.keys():
pos[k] = (G.node[k]['x'],G.node[k]['y'])
print(pos)
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.set_aspect('equal')
nx.draw_networkx_edges(G,pos,edge_color="DarkSlateGray",alpha=0.4)
nx.draw_networkx_nodes(G,pos,node_color="FireBrick",alpha=0.6,node_size=10,linewidths=0.5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate Random Routes
Step2: Now we simply copy the text above and go to https
Step4: Use the web UI
Step5: Be Bold and try 100 routes
|
<ASSISTANT_TASK:>
Python Code:
## import system module
import json
import rethinkdb as r
import time
import datetime as dt
import asyncio
from shapely.geometry import Point, Polygon
import random
import pandas as pd
import os
import matplotlib.pyplot as plt
## import custom module
from streettraffic.server import TrafficServer
from streettraffic.predefined.cities import San_Francisco_polygon
settings = {
'app_id': 'F8aPRXcW3MmyUvQ8Z3J9', # this is where you put your App ID from here.com
'app_code' : 'IVp1_zoGHdLdz0GvD_Eqsw', # this is where you put your App Code from here.com
'map_tile_base_url': 'https://1.traffic.maps.cit.api.here.com/maptile/2.1/traffictile/newest/normal.day/',
'json_tile_base_url': 'https://traffic.cit.api.here.com/traffic/6.2/flow.json?'
}
## initialize traffic server
server = TrafficServer(settings)
def get_random_point_in_polygon(poly):
(minx, miny, maxx, maxy) = poly.bounds
while True:
p = Point(random.uniform(minx, maxx), random.uniform(miny, maxy))
if poly.contains(p):
return p
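This is rejection sampling: draw uniformly from the bounding box and keep only the draws that land inside the polygon. A dependency-free sketch of the same idea for a triangle, where containment reduces to a direct inequality test instead of shapely's `contains`:

```python
# Rejection sampling without shapely: sample the bounding box, keep points
# inside the region. For the unit triangle {x>=0, y>=0, x+y<=1} the
# containment test is a single inequality.
import random

def random_point_in_unit_triangle(rng=random.Random(0)):
    while True:
        x, y = rng.uniform(0, 1), rng.uniform(0, 1)
        if x + y <= 1:  # inside the triangle
            return x, y

pts = [random_point_in_unit_triangle() for _ in range(1000)]
```

On average half the bounding-box draws are rejected here; for thin polygons the rejection rate, and hence the cost, grows accordingly.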
atlanta_polygon = Polygon([[33.658529, -84.471782], [33.667928, -84.351730], [33.883809, -84.347570], [33.855681, -84.469405]])
sample_points = []
for i in range(100):
point_in_poly = get_random_point_in_polygon(atlanta_polygon)
sample_points += [[point_in_poly.x, point_in_poly.y]]
print(server.traffic_data.format_list_points_for_display(sample_points))
sample_route_count = 2
route_obj_collection = []
for i in range(sample_route_count):
point_in_poly1 = get_random_point_in_polygon(atlanta_polygon)
point_in_poly2 = get_random_point_in_polygon(atlanta_polygon)
route_obj_collection += [[
{
"lat": point_in_poly1.x,
"lng": point_in_poly1.y
},
{
"lat": point_in_poly2.x,
"lng": point_in_poly2.y
}
]]
route_obj_collection_json = json.dumps(route_obj_collection)
print(route_obj_collection_json)
# load the test.json
with open('test.json') as f:
route_traffic_pattern_collection = json.load(f)
# create a function that takes an overview_path and generates its total distance
def overview_path_distance(overview_path):
"""Return the total length (in meters) of an overview_path by summing successive point-to-point distances."""
distance = 0
for i in range(len(overview_path)-1):
point1 = overview_path[i]
point2 = overview_path[i+1]
distance += server.util.get_distance([point1['lat'], point1['lng']], [point2['lat'], point2['lng']])
return distance
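`server.util.get_distance` is presumably a great-circle (haversine-style) distance between two (lat, lng) points; a self-contained sketch of that computation, in case you want the helper without the server object (the server's own implementation may differ):

```python
# Haversine distance in meters between two (lat, lng) points, an assumed
# stand-in for server.util.get_distance.
import math

EARTH_RADIUS_M = 6_371_000

def haversine_m(p1, p2):
    lat1, lng1, lat2, lng2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lng2 - lng1) / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

# one degree of latitude is roughly 111 km
d = haversine_m((0.0, 0.0), (1.0, 0.0))
```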
# now we build the dataframe
df = pd.DataFrame(index = [json.dumps(item['origin_destination']) for item in route_traffic_pattern_collection])
df['distance (in meters)'] = [overview_path_distance(item['route']['routes'][0]['overview_path']) for item in route_traffic_pattern_collection]
for i in range(len(route_traffic_pattern_collection[0]['chartLabel'])):
df[route_traffic_pattern_collection[0]['chartLabel'][i]] = [item['chartData'][i] for item in route_traffic_pattern_collection]
df.sort_values(by='distance (in meters)')
df
# remove the 'distance (in meters)' column and then we can do analysis
del df['distance (in meters)']
df
# Now we can do all sorts of fun things with it.
# feel free to comment out the following statement and see various possibilites
#print(df.mean(axis=1))
#print(df.std())
#print(df.median())
# for each route, give me the mean Jamming Factor of all the instant (2:00:00 PM, 2:30:00 PM, ..., 4:30:00 PM)
df.mean(axis=1)
## For reproducibility, we executed the following script and store
## route_obj_collection_json
# sample_route_count = 100
# route_obj_collection = []
# for i in range(sample_route_count):
# point_in_poly1 = get_random_point_in_polygon(atlanta_polygon)
# point_in_poly2 = get_random_point_in_polygon(atlanta_polygon)
# route_obj_collection += [[
# {
# "lat": point_in_poly1.x,
# "lng": point_in_poly1.y
# },
# {
# "lat": point_in_poly2.x,
# "lng": point_in_poly2.y
# }
# ]]
# route_obj_collection_json = json.dumps(route_obj_collection)
with open('route_obj_collection_json.json') as f:
route_obj_collection_json = json.load(f)
## after copying and pasting route_obj_collection_json into WEB UI, getting results and load
## it in route_traffic_pattern_collection, we get this:
with open('route_traffic_pattern_collection.json') as f:
route_traffic_pattern_collection = json.load(f)
df = pd.DataFrame(index = [json.dumps(item['origin_destination']) for item in route_traffic_pattern_collection])
df['distance (in meters)'] = [overview_path_distance(item['route']['routes'][0]['overview_path']) for item in route_traffic_pattern_collection]
for i in range(len(route_traffic_pattern_collection[0]['chartLabel'])):
df[route_traffic_pattern_collection[0]['chartLabel'][i]] = [item['chartData'][i] for item in route_traffic_pattern_collection]
df2 = df.sort_values(by='distance (in meters)')
df2
# The following graph shows on average, what is the Jamming Factor throughout 24 hours for those 20 routes.
df3 = df2[-20:]
del df3['distance (in meters)']
df3.mean(axis=1).plot()
plt.show()
# The following graph extracts the worst jamming factor of each routes
df4 = df2[-20:]
del df4['distance (in meters)']
df4.max(axis=1).plot()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Cross Entropy Benchmarking Theory
Step2: The action of random circuits with noise
Step3: Random circuit
Step4: Estimating fidelity
Step5: Execute circuits
Step6: What's the observable
Step7: Remember
Step8: Fit
Step9: Fidelities
|
<ASSISTANT_TASK:>
Python Code:
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
import cirq
print("installed cirq.")
# Standard imports
import numpy as np
from cirq.contrib.svg import SVGCircuit
exponents = np.linspace(0, 7/4, 8)
exponents
import itertools
SINGLE_QUBIT_GATES = [
cirq.PhasedXZGate(x_exponent=0.5, z_exponent=z, axis_phase_exponent=a)
for a, z in itertools.product(exponents, repeat=2)
]
SINGLE_QUBIT_GATES[:10], '...'
import cirq_google as cg
from cirq.experiments import random_quantum_circuit_generation as rqcg
q0, q1 = cirq.LineQubit.range(2)
circuit = rqcg.random_rotations_between_two_qubit_circuit(
q0, q1,
depth=4,
two_qubit_op_factory=lambda a, b, _: cirq.SQRT_ISWAP(a, b),
single_qubit_gates=SINGLE_QUBIT_GATES
)
SVGCircuit(circuit)
# Make long circuits (which we will truncate)
MAX_DEPTH = 100
N_CIRCUITS = 10
circuits = [
rqcg.random_rotations_between_two_qubit_circuit(
q0, q1,
depth=MAX_DEPTH,
two_qubit_op_factory=lambda a, b, _: cirq.SQRT_ISWAP(a, b),
single_qubit_gates=SINGLE_QUBIT_GATES)
for _ in range(N_CIRCUITS)
]
# We will truncate to these lengths
cycle_depths = np.arange(1, MAX_DEPTH + 1, 9)
cycle_depths
pure_sim = cirq.Simulator()
# Pauli Error. If there is an error, it is either X, Y, or Z
# with probability E_PAULI / 3
E_PAULI = 5e-3
noisy_sim = cirq.DensityMatrixSimulator(noise=cirq.depolarize(E_PAULI))
# These two qubit circuits have 2^2 = 4 probabilities
DIM = 4
records = []
for cycle_depth in cycle_depths:
for circuit_i, circuit in enumerate(circuits):
# Truncate the long circuit to the requested cycle_depth
circuit_depth = cycle_depth * 2 + 1
assert circuit_depth <= len(circuit)
trunc_circuit = circuit[:circuit_depth]
# Pure-state simulation
psi = pure_sim.simulate(trunc_circuit).final_state_vector
pure_probs = np.abs(psi)**2
# Noisy execution
meas_circuit = trunc_circuit + cirq.measure(q0, q1)
sampled_inds = noisy_sim.sample(meas_circuit, repetitions=10_000).values[:,0]
sampled_probs = np.bincount(sampled_inds, minlength=DIM) / len(sampled_inds)
# Save the results
records += [{
'circuit_i': circuit_i,
'cycle_depth': cycle_depth,
'circuit_depth': circuit_depth,
'pure_probs': pure_probs,
'sampled_probs': sampled_probs,
}]
print('.', end='', flush=True)
for record in records:
e_u = np.sum(record['pure_probs']**2)
u_u = np.sum(record['pure_probs']) / DIM
m_u = np.sum(record['pure_probs'] * record['sampled_probs'])
record.update(
e_u=e_u,
u_u=u_u,
m_u=m_u,
)
import pandas as pd
df = pd.DataFrame(records)
df['y'] = df['m_u'] - df['u_u']
df['x'] = df['e_u'] - df['u_u']
df['numerator'] = df['x'] * df['y']
df['denominator'] = df['x'] ** 2
df.head()
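The per-depth fidelity estimate computed from these columns is the least-squares slope f = sum(x*y) / sum(x**2). A self-contained sketch checking that this estimator exactly recovers a known depolarizing fidelity when the noisy distribution is sampled = f*pure + (1-f)/D:

```python
# The least-squares XEB estimator recovers the depolarizing fidelity
# exactly for sampled = f*pure + (1-f)/D, since m_u - u_u = f*(e_u - u_u).
import numpy as np

rng = np.random.default_rng(0)
D, f_true = 4, 0.7

pure = rng.random(D)
pure /= pure.sum()
sampled = f_true * pure + (1 - f_true) / D

e_u = np.sum(pure**2)
u_u = np.sum(pure) / D
m_u = np.sum(pure * sampled)
x, y = e_u - u_u, m_u - u_u
f_est = (x * y) / (x**2)
```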
%matplotlib inline
from matplotlib import pyplot as plt
# Color by cycle depth
import seaborn as sns
colors = sns.cubehelix_palette(n_colors=len(cycle_depths))
colors = {k: colors[i] for i, k in enumerate(cycle_depths)}
_lines = []
def per_cycle_depth(df):
fid_lsq = df['numerator'].sum() / df['denominator'].sum()
cycle_depth = df.name
xx = np.linspace(0, df['x'].max())
l, = plt.plot(xx, fid_lsq*xx, color=colors[cycle_depth])
plt.scatter(df['x'], df['y'], color=colors[cycle_depth])
global _lines
_lines += [l] # for legend
return pd.Series({'fidelity': fid_lsq})
fids = df.groupby('cycle_depth').apply(per_cycle_depth).reset_index()
plt.xlabel(r'$e_U - u_U$', fontsize=18)
plt.ylabel(r'$m_U - u_U$', fontsize=18)
_lines = np.asarray(_lines)
plt.legend(_lines[[0,-1]], cycle_depths[[0,-1]], loc='best', title='Cycle depth')
plt.tight_layout()
plt.plot(
fids['cycle_depth'],
fids['fidelity'],
marker='o',
label='Least Squares')
xx = np.linspace(0, fids['cycle_depth'].max())
# In XEB, we extract the depolarizing fidelity, which is
# related to (but not equal to) the Pauli error.
# For the latter, an error involves doing X, Y, or Z with E_PAULI/3
# but for the former, an error involves doing I, X, Y, or Z with e_depol/4
e_depol = E_PAULI / (1 - 1/DIM**2)
# The additional factor of four in the exponent is because each layer
# involves two moments of two qubits (so each layer has four applications
# of a single-qubit single-moment depolarizing channel).
plt.plot(xx, (1-e_depol)**(4*xx), label=r'$(1-\mathrm{e\_depol})^{4d}$')
plt.ylabel('Circuit fidelity', fontsize=18)
plt.xlabel('Cycle Depth $d$', fontsize=18)
plt.legend(loc='best')
plt.yscale('log')
plt.tight_layout()
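As a quick sanity check on the Pauli-error to depolarizing-fidelity conversion in the comments above, the arithmetic can be reproduced standalone. The values of `E_PAULI` and `DIM` below are assumptions chosen purely for illustration, not the ones used elsewhere in this notebook:

```python
# Illustrative (assumed) values: a two-qubit Pauli error rate and dimension.
E_PAULI = 5e-3          # hypothetical two-qubit Pauli error rate
DIM = 4                 # Hilbert-space dimension for two qubits

# Convert Pauli error to depolarizing error, then to per-layer fidelity
# (four single-qubit-channel applications per layer, as in the plot above).
e_depol = E_PAULI / (1 - 1 / DIM**2)
layer_fid = (1 - e_depol) ** 4
print(e_depol, layer_fid)
```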
from cirq.experiments.xeb_fitting import fit_exponential_decays
# Ordinarily, we'd use this function to fit curves for multiple pairs.
# We add our qubit pair as a column.
fids['pair'] = [(q0, q1)] * len(fids)
fit_df = fit_exponential_decays(fids)
fit_row = fit_df.iloc[0]
print(f"Noise model fidelity: {(1-e_depol)**4:.3e}")
print(f"XEB layer fidelity: {fit_row['layer_fid']:.3e} +- {fit_row['layer_fid_std']:.2e}")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Yep, that's all that's needed!
Step2: It's important to note
Step3: What's the type for x?
Step4: What's the type for y?
Step5: There are functions you can use to explicitly cast a variable from one type to another
Step6: Data Structures
Step7: Tuples are like lists, except they're immutable once you've built them (and denoted by parentheses, instead of brackets).
Step8: Sets are probably the most different
Step9: Finally, dictionaries. Other terms that may be more familiar include
Step10: Dictionaries explicitly set up a mapping between a key--keys are unique and unordered, exactly like sets--to values, which are an arbitrary list of items. These are very powerful structures for data science-y applications.
Step11: However, using colon notation, you can "slice out" entire sections of ordered structures.
Step12: Note that the starting index is inclusive, but the ending index is exclusive. Also, if you omit the starting index, Python assumes you mean 0 (start at the beginning); likewise, if you omit the ending index, Python assumes you mean "go to the very end".
Step13: Loops
Step14: the collection to be iterated through is at the end (some_list)
Step15: while loops operate as you've probably come to expect
Step16: IMPORTANT
Step17: This "zips" together the lists and picks corresponding elements from each for every loop iteration. Way easier than trying to set up a numerical index to loop through all three simultaneously, but you can even combine this with enumerate to do exactly that
Step18: Conditionals
Step19: A couple important differences from C/C++/Java parlance
Step20: break, on the other hand, literally slams the brakes on a loop, pulling you out one level of indentation immediately.
Step21: File I/O
Step22: This code writes the string on the first line to a file named some_file.txt. We can read it back
Step23: Take note what changed
Step24: This function doesn't do anything, but it's perfectly valid. We can call it
Step25: Not terribly interesting, but a good outline. To make it interesting, we should add input arguments and return values
Step26: NumPy Arrays
Step27: We've also seen indexing and slicing before; here, however, NumPy really shines.
Step28: We can take statistics of any dimension or slice we want
Step29: Part 3
|
<ASSISTANT_TASK:>
Python Code:
print("Hello, world!")
x = 5
type(x)
y = 5.5
type(y)
x = 5 * 5
type(x)
y = 5 / 5
type(y)
x = 5 / 5
type(x)
y = int(x)
type(y)
z = str(y)
type(z)
some_list = [1, 2, 'something', 6.2, ["another", "list!"], 7371]
print(some_list[3])
type(some_list)
some_tuple = (1, 2, 'something', 6.2, ["another", "list!"], 7371)
print(some_tuple[5])
type(some_tuple)
some_set = {1, 1, 1, 1, 1, 86, "something", 73}
some_set.add(1)
print(some_set)
type(some_set)
some_dict = {"key": "value", "another_key": [1, 3, 4], 3: ["this", "value"]}
print(some_dict["another_key"])
type(some_dict)
print(some_list)
index = 0
print(some_list[index])
start = 0
end = 3
print(some_list[start : end])
print(some_list[:end])
start = 1
print(some_list[start:])
for item in some_list:
print(item)
for index, item in enumerate(some_list):
print("{}: {}".format(index, item))
i = 0
while i < 10:
print(i)
i += 2
list1 = [1, 2, 3]
list2 = [4, 5, 6]
list3 = [7, 8, 9]
for x, y, z in zip(list1, list2, list3):
print("{} {} {}".format(x, y, z))
for index, (x, y, z) in enumerate(zip(list1, list2, list3)):
print("{}: ({}, {}, {})".format(index, x, y, z))
grade = 82
if grade > 90:
print("A")
elif grade > 80:
print("B")
else:
print("Something else")
list_of_data = [4.4, 1.2, 6898.32, "bad data!", 5289.24, 25.1, "other bad data!", 52.4]
for x in list_of_data:
if type(x) == str:
continue
# This stuff gets skipped anytime the "continue" is run
print(x)
import random
i = 0
iters = 0
while True:
iters += 1
i += random.randint(0, 10)
if i > 1000:
break
print(iters)
text_to_write = "I want to save this to a file."
f = open("some_file.txt", "w")
f.write(text_to_write)
f.close()
f = open("some_file.txt", "r")
from_file = f.read()
f.close()
print(from_file)
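A common variant of the same write/read round trip uses the `with` statement, which closes the file automatically even if an error occurs mid-block. The temp-file path here is just to keep the sketch self-contained:

```python
import os
import tempfile

# Write and read back the same string, letting "with" handle f.close().
path = os.path.join(tempfile.gettempdir(), "some_file_demo.txt")
with open(path, "w") as f:
    f.write("I want to save this to a file.")
with open(path, "r") as f:
    from_file = f.read()
print(from_file)
```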
def some_function(): # This line is the header
pass # Everything after (that's indented) is the body
some_function()
def vector_magnitude(vector):
d = 0.0
for x in vector:
d += x ** 2
return d ** 0.5
v1 = [1, 1]
d1 = vector_magnitude(v1)
print(d1)
v2 = [53.3, 13.4]
d2 = vector_magnitude(v2)
print(d2)
import numpy as np
def vectorized_magnitude(vector):
return (vector ** 2).sum() ** 0.5
v1 = np.array([1, 1])
d1 = vectorized_magnitude(v1)
print(d1)
v2 = np.array([53.3, 13.4])
d2 = vectorized_magnitude(v2)
print(d2)
X = np.random.random((500, 600, 250))
X[:400, 100:200, 0].mean()
X[X < 0.01].std()
X[:400, 100:200, 0].mean(axis = 1)
%matplotlib inline
import matplotlib.pyplot as plt
x = np.linspace(-5, 5, 100)
y = 1 / (1 + np.exp(-x))
plt.plot(x, y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create the data
Step2: Plot the figures
|
<ASSISTANT_TASK:>
Python Code:
print(__doc__)
# Authors: Gael Varoquaux
# Jaques Grobler
# Kevin Hughes
# License: BSD 3 clause
from sklearn.decomposition import PCA
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
e = np.exp(1)
np.random.seed(4)
def pdf(x):
return 0.5 * (stats.norm(scale=0.25 / e).pdf(x)
+ stats.norm(scale=4 / e).pdf(x))
y = np.random.normal(scale=0.5, size=(30000))
x = np.random.normal(scale=0.5, size=(30000))
z = np.random.normal(scale=0.1, size=len(x))
density = pdf(x) * pdf(y)
pdf_z = pdf(5 * z)
density *= pdf_z
a = x + y
b = 2 * y
c = a - b + z
norm = np.sqrt(a.var() + b.var())
a /= norm
b /= norm
def plot_figs(fig_num, elev, azim):
fig = plt.figure(fig_num, figsize=(4, 3))
plt.clf()
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=elev, azim=azim)
ax.scatter(a[::10], b[::10], c[::10], c=density[::10], marker='+', alpha=.4)
Y = np.c_[a, b, c]
# Using SciPy's SVD, this would be:
# _, pca_score, V = scipy.linalg.svd(Y, full_matrices=False)
pca = PCA(n_components=3)
pca.fit(Y)
pca_score = pca.explained_variance_ratio_
V = pca.components_
    x_pca_axis, y_pca_axis, z_pca_axis = V.T * pca_score / pca_score.min()
    # NB: the assignment above is immediately overridden below; the plotted axes
    # are simply the principal components scaled by a fixed factor.
    x_pca_axis, y_pca_axis, z_pca_axis = 3 * V.T
x_pca_plane = np.r_[x_pca_axis[:2], - x_pca_axis[1::-1]]
y_pca_plane = np.r_[y_pca_axis[:2], - y_pca_axis[1::-1]]
z_pca_plane = np.r_[z_pca_axis[:2], - z_pca_axis[1::-1]]
x_pca_plane.shape = (2, 2)
y_pca_plane.shape = (2, 2)
z_pca_plane.shape = (2, 2)
ax.plot_surface(x_pca_plane, y_pca_plane, z_pca_plane)
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
elev = -40
azim = -80
plot_figs(1, elev, azim)
elev = 30
azim = 20
plot_figs(2, elev, azim)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Sample Code
Step2: Function to Read and Prep Images for Modeling
Step3: Create Model with Pre-Trained Weights File. Make Predictions
Step4: Visualize Predictions
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo('sDG5tPtsbSA', width=800, height=450)
from os.path import join
image_dir = '../input/dog-breed-identification/train/'
img_paths = [join(image_dir, filename) for filename in
['0c8fe33bd89646b678f6b2891df8a1c6.jpg',
'0c3b282ecbed1ca9eb17de4cb1b6e326.jpg',
'04fb4d719e9fe2b6ffe32d9ae7be8a22.jpg',
'0e79be614f12deb4f7cae18614b7391b.jpg']]
import numpy as np
from tensorflow.python.keras.applications.resnet50 import preprocess_input
from tensorflow.python.keras.preprocessing.image import load_img, img_to_array
image_size = 224
def read_and_prep_images(img_paths, img_height=image_size, img_width=image_size):
imgs = [load_img(img_path, target_size=(img_height, img_width)) for img_path in img_paths]
img_array = np.array([img_to_array(img) for img in imgs])
output = preprocess_input(img_array)
return(output)
from tensorflow.python.keras.applications import ResNet50
my_model = ResNet50(weights='../input/resnet50/resnet50_weights_tf_dim_ordering_tf_kernels.h5')
test_data = read_and_prep_images(img_paths)
preds = my_model.predict(test_data)
from learntools.deep_learning.decode_predictions import decode_predictions
from IPython.display import Image, display
most_likely_labels = decode_predictions(preds, top=3, class_list_path='../input/resnet50/imagenet_class_index.json')
for i, img_path in enumerate(img_paths):
display(Image(img_path))
print(most_likely_labels[i])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run Monte Carlo model and calculate reflectance and polarization for trajectories
Step2: initialize and run trajectories
Step3: calculate reflectance
Step4: Run Monte Carlo model and calculate reflectance and polarization for trajectories
Step5: initialize trajectories, run trajectories, and calculate reflectance for each wavelength
Step6: Plot reflectance and polarization spectra
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import structcol as sc
from structcol import refractive_index as ri
from structcol import montecarlo as mc
from structcol import detector as det
import pymie as pm
from pymie import size_parameter, index_ratio
import seaborn as sns
import time
# For Jupyter notebooks only:
%matplotlib inline
# incident light wavelength
wavelength = sc.Quantity('600 nm')
# sample parameters
radius = sc.Quantity('0.140 um')
volume_fraction = sc.Quantity(0.55, '')
n_imag = 2.1e-4
n_particle = ri.n('polystyrene', wavelength) + n_imag*1j # refractive indices can be specified as pint quantities or
n_matrix = ri.n('vacuum', wavelength) # called from the refractive_index module. n_matrix is the
n_medium = ri.n('vacuum', wavelength) # space within sample. n_medium is outside the sample
n_sample = ri.n_eff(n_particle, # refractive index of sample, calculated using Bruggeman approximation
n_matrix,
volume_fraction)
thickness = sc.Quantity('80 um')
boundary = 'film'
# Monte Carlo parameters
ntrajectories = 300 # number of trajectories
nevents = 300 # number of scattering events in each trajectory
# Calculate scattering quantities
p, mu_scat, mu_abs = mc.calc_scat(radius, n_particle, n_sample,
volume_fraction, wavelength, polarization= True)
# Initialize trajectories
r0, k0, W0, p0 = mc.initialize(nevents, ntrajectories, n_medium, n_sample, boundary, polarization=True)
r0 = sc.Quantity(r0, 'um')
k0 = sc.Quantity(k0, '')
W0 = sc.Quantity(W0, '')
p0 = sc.Quantity(p0,'')
trajectories = mc.Trajectory(r0, k0, W0, p0)
# Sample trajectory angles
sintheta, costheta, sinphi, cosphi, theta, phi= mc.sample_angles(nevents,
ntrajectories,p)
# Sample step sizes
step = mc.sample_step(nevents, ntrajectories, mu_scat)
# Update trajectories based on sampled values
trajectories.scatter(sintheta, costheta, sinphi, cosphi)
trajectories.polarize(theta, phi, sintheta, costheta, sinphi,cosphi,
n_particle, n_sample, radius, wavelength, volume_fraction)
trajectories.move(step)
trajectories.absorb(mu_abs, step)
reflectance, transmittance = det.calc_refl_trans(trajectories, thickness, n_medium, n_sample, boundary)
print('Reflectance: ' + str(reflectance))
print('Transmittance: ' + str(transmittance))
# incident light wavelength
wavelengths = sc.Quantity(np.arange(400, 800, 20), 'nm')
# sample parameters
radius = sc.Quantity('0.140 um')
volume_fraction = sc.Quantity(0.55, '')
n_imag = 2.1e-4
n_particle = ri.n('polystyrene', wavelengths) + n_imag*1j # refractive indices can be specified as pint quantities or
n_matrix = ri.n('vacuum', wavelengths) # called from the refractive_index module. n_matrix is the
n_medium = ri.n('vacuum', wavelengths) # space within sample. n_medium is outside the sample
thickness = sc.Quantity('80 um')
z_low = sc.Quantity('0 um')
# Monte Carlo parameters
ntrajectories = 300 # number of trajectories
nevents = 300 # number of scattering events in each trajectory
reflectance = np.zeros(wavelengths.size)
pol_refl_x = np.zeros(wavelengths.size)
pol_refl_y = np.zeros(wavelengths.size)
pol_refl_z = np.zeros(wavelengths.size)
for i in range(wavelengths.size):
# print wavelength
print('wavelength: ' + str(wavelengths[i]))
# calculate n_sample
n_sample = ri.n_eff(n_particle[i], n_matrix[i], volume_fraction)
# Calculate scattering quantities
p, mu_scat, mu_abs = mc.calc_scat(radius, n_particle[i], n_sample,
volume_fraction, wavelengths[i], polarization= True)
# Initialize trajectories
r0, k0, W0, p0 = mc.initialize(nevents, ntrajectories, n_medium[i], n_sample, boundary, polarization=True)
r0 = sc.Quantity(r0, 'um')
k0 = sc.Quantity(k0, '')
W0 = sc.Quantity(W0, '')
p0 = sc.Quantity(p0,'')
trajectories = mc.Trajectory(r0, k0, W0, p0)
# Sample trajectory angles
sintheta, costheta, sinphi, cosphi, theta, phi= mc.sample_angles(nevents,
ntrajectories,p)
# Sample step sizes
step = mc.sample_step(nevents, ntrajectories, mu_scat)
# Update trajectories based on sampled values
trajectories.scatter(sintheta, costheta, sinphi, cosphi)
trajectories.polarize(theta, phi, sintheta, costheta, sinphi,cosphi,
n_particle[i], n_sample, radius, wavelengths[i], volume_fraction)
trajectories.move(step)
trajectories.absorb(mu_abs, step)
# calculate reflectance and other values of interest
refl_indices, trans_indices,_,_,_,_,_,_,_,_,_,reflectance[i],_,_,_,_ = det.calc_refl_trans(trajectories,
thickness, n_medium[i], n_sample,
boundary, return_extra = True)
# calculate reflectance contribution from each polarization component
pol_refl_x[i], pol_refl_y[i], pol_refl_z[i] = det.calc_pol_frac(trajectories, refl_indices)
sns.set_style('white')
plt.figure()
plt.plot(wavelengths, reflectance , label = 'total', linewidth = 3.5)
plt.plot(wavelengths, pol_refl_x, label = 'x-polarized', linewidth = 3)
plt.plot(wavelengths, pol_refl_y, label = 'y-polarized', linewidth = 3)
plt.plot(wavelengths, pol_refl_z, label = 'z-polarized', linewidth = 3)
plt.plot(wavelengths, pol_refl_x + pol_refl_y + pol_refl_z, label = 'total polarized', linewidth = 3)
plt.xlim([400,800])
plt.ylim([0,1])
plt.ylabel('Reflectance')
plt.xlabel('Wavelength (nm)')
plt.legend()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Previously, I have created a simple sqlite3 database containing administrative open data from NHS Digital (database generator script).
Step2: Look up the GP practices on the Isle of Wight... (We could search for the code but I happen to know it...)
Step3: The folium library makes it easy to generate choropleth maps using the Leaflet javascript library. We can use various map tiles - the default is Google Maps.
Step4: Generate a choropleth map for selected GP practice, colouring LSOA by number of folk registered to that practice.
|
<ASSISTANT_TASK:>
Python Code:
#Original data source
#http://www.content.digital.nhs.uk/catalogue/PUB23139
#Get the datafile
!wget -P data http://www.content.digital.nhs.uk/catalogue/PUB23139/gp-reg-patients-LSOA-alt-tall.csv
#Import best ever data handling package
import pandas as pd
#Load downloaded CSV file
df=pd.read_csv('data/gp-reg-patients-LSOA-alt-tall.csv')
#Preview first few lines
df.head()
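Before mapping anything, it helps to keep in mind that this file is "tall": one row per (practice, LSOA) pair. A tiny stdlib-only sketch of aggregating such rows per practice — the practice and LSOA codes below are made up for illustration, not taken from the real NHS file:

```python
from collections import defaultdict

# Hypothetical (practice_code, lsoa_code, all_patients) rows in the tall format.
rows = [
    ("J84001", "E01017268", 120),
    ("J84001", "E01017269", 85),
    ("J84002", "E01017268", 40),
]

# Sum patient counts per practice, the same group-by idea used for the map below.
totals = defaultdict(int)
for practice_code, lsoa_code, all_patients in rows:
    totals[practice_code] += all_patients
print(dict(totals))
```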
import sqlite3
#Use homebrew database of NHS administrative info
con = sqlite3.connect("nhsadmin.sqlite")
ccode='10L'
#Find
EPRACCUR='epraccur'
epraccur_iw = pd.read_sql_query('SELECT * FROM {typ} WHERE "Commissioner"="{ccode}"'.format(typ=EPRACCUR,ccode=ccode), con)
epraccur_iw
import folium
#color brewer palettes: ‘BuGn’, ‘BuPu’, ‘GnBu’, ‘OrRd’, ‘PuBu’, ‘PuBuGn’, ‘PuRd’, ‘RdPu’, ‘YlGn’, ‘YlGnBu’, ‘YlOrBr’, and ‘YlOrRd’.
#Fiona is a powerful library for geo wrangling with various dependencies that can make installation a pain...
#...but I have it installed already so I can use it to trivially find the centre of a set of boundaries in a geojson file
import fiona
#This is a canned demo - I happen to have the Local Authority Code for the Isle of Wight...
#...and copies of LSOA geojson files by LA
# (I could get LA code from the NHS admin db)
geojson_local='../../IWgeodata/lsoa_by_lad/E06000046.json'
fi=fiona.open(geojson_local)
#fiona bounds are (minx, miny, maxx, maxy), i.e. (lon_min, lat_min, lon_max, lat_max)
centre_lon,centre_lat=((fi.bounds[0]+fi.bounds[2])/2,(fi.bounds[1]+fi.bounds[3])/2)
#Add a widget that lets you select the GP practice by name then fudge the lookup to practice code
#We could also add another widget to select eg Male | Female | All
def generate_map(gpcode):
    gpmap = folium.Map([centre_lat,centre_lon], zoom_start=11)
gpmap.choropleth(
geo_path=geojson_local,
data=df[df['PRACTICE_CODE']==gpcode],
columns=['LSOA_CODE', 'ALL_PATIENTS'],
key_on='feature.properties.LSOA11CD',
fill_color='PuBuGn', fill_opacity=0.7,
legend_name='Number of people on list in LSOA'
)
return gpmap
def generate_map_from_gpname(gpname):
gpcode=epraccur_iw[epraccur_iw['Name']==gpname]['Organisation Code'].iloc[0]
return generate_map(gpcode)
#iw_gps=epraccur_iw['Organisation Code'].unique().tolist()
iw_gps=epraccur_iw['Name'].unique().tolist()
iw_gps[:3],len(iw_gps)
from ipywidgets import interact
interact(generate_map_from_gpname, gpname=iw_gps);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part 1
Step2: The next code lets you plot the evolution of the train and test accuracies computed above.
Step3: This figure points out the necessity of selecting an adequate value of K. As expected, using the training error for such a selection would provide poor generalization.
Step4: After running the CV process, the classifier object contains the information of the CV process (next cell explore the parameter ".grid_scores_" to obtain this information).
Step5: Examine the fields ".best_estimator_" and ".best_params_" of the classifier generated by the CV process
Step6: Note that you can also compute the test error directly over the classifier object returned by the CV process
Step7: Advanced work
Step9: 1.3
Step10: Part 2
Step11: 2.2
Step12: 2.3. SVM with polynomial kernel
Step13: 2.4. Plot the classification boundaries and support vectors
Step14: Note that the boundaries of the linear and RBF SVMs, and even their SVs, are quite similar. Examine the value selected for the parameter gamma of the Gaussian kernel and try to explain why the RBF SVM boundary tends to be linear.
Step15: Include the necessary code to plot their classification boundaries
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
from sklearn import datasets
from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import StandardScaler
# Initialize the random generator seed to compare results
np.random.seed(0)
# Load Iris data set
iris = datasets.load_iris()
X = iris.data[:,[ 1, 3]] # we only take two of the four features (columns 1 and 3).
Y = iris.target
# Create data partitions
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=.6)
# Normalize the data
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
from sklearn import neighbors
K_max=20
rang_K = np.arange(1, K_max+1)
vect_tr=[]
vect_test=[]
for n_neighbors in rang_K:
# Create a KNN classifier, train it and compute training and error accuracies.
clf = #<FILL IN>
acc_tr = #<FILL IN>
acc_test = #<FILL IN>
# Saving accuracies in a list
vect_tr#<FILL IN>
vect_test#<FILL IN>
print("For K = %d, train accuracy is %2.2f%% and test accuracy is %2.2f%%"
% (n_neighbors, 100*acc_tr, 100*acc_test))
###########################################################
# TEST CELL
###########################################################
from test_helper import Test
# TEST accuracy values
Test.assertEquals(np.round(np.sum(np.array(vect_tr)),2), 18.90, 'incorrect result: Training accuracy of K-NN is incorrect')
Test.assertEquals(np.round(np.sum(np.array(vect_test)),2), 18.37, 'incorrect result: Test accuracy of K-NN is incorrect')
#Plot acc vs K
import matplotlib.pyplot as plt
plt.figure()
plt.plot(rang_K,vect_tr,'b', label='Training accuracy')
plt.plot(rang_K,vect_test,'r', label='Test accuracy')
plt.legend()
plt.xlabel('K value')
plt.ylabel('Accuracy')
plt.title('Evolution of K-NN accuracy')
plt.show()
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
from sklearn import neighbors
from sklearn.grid_search import GridSearchCV
# Parameters
K_max = 20
rang_K = np.arange(1, K_max+1)
nfold = 10
# Define a dictionary with the name of the parameters to explore as a key and the ranges to explores as value
tuned_parameters = [{'n_neighbors': rang_K}]
# Cross validation proccess
clf_base = neighbors.KNeighborsClassifier( )
# Define the classifier with the CV process (use GridSearchCV here with cv = nfold!!!)
clf = #<FILL IN>
# Train it (this executes the CV)
clf.#<FILL IN>
print('CV process successfully finished')
# Printing results
print("Cross validation results:")
for params, mean_score, scores in clf.grid_scores_:
print("For K = %d, validation accuracy is %2.2f (+/-%1.3f)%%"
% (params['n_neighbors'], 100*mean_score, 100*scores.std() / 2))
# Selecting validation error (mean values)
vect_val=[CV_result[1] for CV_result in clf.grid_scores_]
# Ploting results
plt.figure()
plt.plot(rang_K,vect_tr,'b', label='Training accuracy')
plt.plot(rang_K,vect_test,'r', label='Test accuracy')
plt.plot(rang_K,vect_val,'g', label='Validation accuracy')
plt.legend()
plt.xlabel('K value')
plt.ylabel('Accuracy')
plt.title('Evolution of K-NN accuracy (including validation result)')
plt.show()
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# Assign to K_opt the value of K selected by CV
K_opt = # <FILL IN>
print("The value optimum of K is %d" %(K_opt))
# Select the final classifier and compute its test error
KNN_acc_test = # <FILL IN>
print("The test accuracy is %2.2f" %(100*KNN_acc_test))
KNN_acc_test2 = clf.score(X_test, Y_test)
print("The test accuracy is %2.2f" %(100*KNN_acc_test2))
###########################################################
# TEST CELL
###########################################################
from test_helper import Test
# TEST accuracy values
Test.assertEquals(K_opt, 6, 'incorrect result: The value of K_opt is incorrect')
Test.assertEquals(np.round(KNN_acc_test,4), 0.9444, 'incorrect result: Test accuracy of K-NN after CV process is incorrect')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
from sklearn import neighbors
from sklearn.cross_validation import StratifiedKFold
# Parameters
K_max = 20
rang_K = np.arange(1, K_max+1)
nfold = 10
N_tr, N_dim = X_train.shape
# Create data partitions (check the structure of folds)
folds = StratifiedKFold(Y_train, n_folds=nfold, shuffle=True)
# Create variable (acc_val) to save the validation results
acc_val = np.zeros([nfold, K_max])
for f, (train, val) in enumerate(folds):
# f is an index with the number of fold
# train has the training data positions for fold f
# val has the validation data positions for fold f
# Create training and validation partitions for fold f
X_tr_f = # <FILL IN>
Y_tr_f = # <FILL IN>
X_val_f = # <FILL IN>
Y_val_f = # <FILL IN>
# Explore the values of K
for K, n_neighbors in enumerate(rang_K):
# Train a K-NN with the training data of this fold data using n_neighbors
clf = # <FILL IN>
# Evaluate its accuaracy over the validation data of this fold and save it in acc_val
acc_val[f, K] = # <FILL IN>
# Compute the average validation error (average over the different folds)
acc_val_med = # <FILL IN>
# Select the optimum value of K as the value which achieves the maximum average validation error
K_opt = # <FILL IN>
# Train the final K-NN classfier (with K_opt) and obtain its test error
clf = # <FILL IN>
acc_test = # <FILL IN>
print("K_NN with optimum K = " + np.array_str(K_opt) +
      ". Test accuracy = " + np.array_str(acc_test))
###########################################################
# TEST CELL
###########################################################
from test_helper import Test
# TEST accuracy values
Test.assertEquals(K_opt, 6, 'incorrect result: The value of K_opt is incorrect')
Test.assertEquals(np.round(KNN_acc_test,4), 0.9444, 'incorrect result: Test accuracy of K-NN after CV process is incorrect')
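StratifiedKFold additionally balances the class proportions within every fold, but the underlying index bookkeeping is easy to see in a plain (non-stratified) pure-Python sketch — every row index lands in exactly one validation fold:

```python
def kfold_indices(n_samples, n_folds):
    # Near-even split: the first (n_samples % n_folds) folds get one extra sample.
    base, rem = divmod(n_samples, n_folds)
    folds, start = [], 0
    for i in range(n_folds):
        size = base + (1 if i < rem else 0)
        val = list(range(start, start + size))
        train = [j for j in range(n_samples) if j < start or j >= start + size]
        folds.append((train, val))
        start += size
    return folds

splits = kfold_indices(10, 3)
for train, val in splits:
    print(len(train), len(val))
```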
# Plot the decision boundary
import matplotlib.pyplot as plt
def plot_boundary(clf, X, Y, plt):
    """Plot the classification regions for a given classifier.

    Args:
        clf: scikit-learn classifier object.
        X (numpy ndarray): training or test data to be plotted (number data x number dimensions). Only first two
            dimensions are plotted.
        Y (numpy ndarray): labels of the training or test data to be plotted (number data x 1).
        plt: graphic object where you wish to plot
    """
plot_colors = "brymc"
plot_step = 0.02
n_classes = np.unique(Y).shape[0]
# Plot the decision regions
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
np.arange(y_min, y_max, plot_step))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=plt.cm.Paired)
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.axis("tight")
# Plot the training points
for i, color in zip(range(n_classes), plot_colors):
idx = np.where(Y == i)
plt.scatter(X[idx, 0], X[idx, 1], c=color, cmap=plt.cm.Paired)
plt.axis("tight")
plt.figure(figsize=(8, 4))
# Plot classification boundary over training data
plt.subplot(1,2,1)
plot_boundary(clf, X_train, Y_train, plt)
# Plot classification boundary over test data
plt.subplot(1,2,2)
plot_boundary(clf, X_test, Y_test, plt)
plt.show()
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
from sklearn import svm
rang_C = np.logspace(-3, 3, 10)
tuned_parameters = [{'C': rang_C}]
nfold = 10
# Train a liner SVM and adjust by CV the parameter C
clf_base = svm.SVC(kernel='linear')
lin_svc = # <FILL IN>
lin_svc.# <FILL IN>
# Save the value of C selected and compute the final accuracy
C_opt = # <FILL IN>
acc_lin_svc = # <FILL IN>
print("The C value selected is " + str(C_opt))
print("The test accuracy of the linear SVM is %2.2f" %(100*acc_lin_svc))
###########################################################
# TEST CELL
###########################################################
from test_helper import Test
# TEST accuracy values
Test.assertEquals(np.round(C_opt,2), 0.46, 'incorrect result: The value of C_opt is incorrect')
Test.assertEquals(np.round(acc_lin_svc,4), 0.9556, 'incorrect result: Test accuracy of the linear SVM after CV process is incorrect')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
n_dim=X_train.shape[1]
rang_g=np.array([1./8, 1./4, 1./2, 1, 2, 4, 8])/(np.sqrt(n_dim))
tuned_parameters = [{'C': rang_C, 'gamma': rang_g}]
# Train an SVM with gaussian kernel and adjust by CV the parameter C
clf_base = svm.SVC(kernel='rbf')
rbf_svc = # <FILL IN>
rbf_svc. # <FILL IN>
# Save the values of C and gamma selected and compute the final accuracy
C_opt = # <FILL IN>
g_opt = # <FILL IN>
print("The C value selected is " + str(C_opt))
print("The gamma value selected is " + str(g_opt))
acc_rbf_svc = rbf_svc.score(X_test, Y_test)
print("The test accuracy of the RBF SVM is %2.2f" %(100*acc_rbf_svc))
###########################################################
# TEST CELL
###########################################################
from test_helper import Test
# TEST accuracy values
Test.assertEquals(np.round(C_opt, 2), 2.15, 'incorrect result: The value of C_opt is incorrect')
Test.assertEquals(np.round(g_opt, 2), 0.09, 'incorrect result: The value of g_opt is incorrect')
Test.assertEquals(np.round(acc_rbf_svc, 4), 0.9444, 'incorrect result: Test accuracy of the rbf SVM after CV process is incorrect')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
rang_d=np.arange(1,5)
tuned_parameters = [{'C': rang_C, 'degree': rang_d}]
# Train an SVM with polynomial kernel and adjust by CV the parameter C
clf_base = svm.SVC(kernel='poly')
poly_svc = # <FILL IN>
poly_svc.# <FILL IN>
# Save the values of C and degree selected and compute the final accuracy
C_opt = # <FILL IN>
d_opt = # <FILL IN>
print("The C value selected is " + str(C_opt))
print("The degree value selected is " + str(d_opt))
acc_poly_svc = poly_svc.score(X_test, Y_test)
print("The test accuracy of the polynomial SVM is %2.2f" %(100*acc_poly_svc))
###########################################################
# TEST CELL
###########################################################
from test_helper import Test
# TEST accuracy values
Test.assertEquals(np.round(C_opt, 2), 10 , 'incorrect result: The value of C_opt is incorrect')
Test.assertEquals(np.round(d_opt, 2), 3, 'incorrect result: The value of d_opt is incorrect')
Test.assertEquals(np.round(acc_poly_svc, 4), 0.9111, 'incorrect result: Test accuracy of the polynomial SVM after CV process is incorrect')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
plt.figure(1,figsize=(10, 4))
# Linear SVM
plt.subplot(1,2,1)
plot_boundary(lin_svc , X_train, Y_train, plt)
plt.title ('SVM boundary and training data')
plt.subplot(1,2,2)
plot_boundary(lin_svc , X_train, Y_train, plt)
SVs=# <FILL IN>
plt.scatter(# <FILL IN> )
plt.title ('SVM boundary, training data and SVs')
plt.figure(2,figsize=(10, 4))
# RBF SVM
plt.subplot(1,2,1)
plot_boundary(rbf_svc , X_train, Y_train, plt)
plt.title ('SVM boundary and training data')
plt.subplot(1,2,2)
plot_boundary(rbf_svc , X_train, Y_train, plt)
SVs=# <FILL IN>
plt.scatter(# <FILL IN> )
plt.title ('SVM boundary, training data and SVs')
plt.figure(3,figsize=(10, 4))
# Polynomial SVM
plt.subplot(1,2,1)
plot_boundary(poly_svc , X_train, Y_train, plt)
plt.title ('SVM boundary and training data')
plt.subplot(1,2,2)
plot_boundary(poly_svc , X_train, Y_train, plt)
SVs=# <FILL IN>
plt.scatter(# <FILL IN> )
plt.title ('SVM boundary, training data and SVs')
plt.show()
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
# Logistic regression
from sklearn.linear_model import LogisticRegression
rang_C = np.logspace(-3, 3, 10)
tuned_parameters = [{'C': rang_C}]
nfold = 10
# Train a LR model and adjust by CV the parameter C
clf_LR = GridSearchCV(LogisticRegression(),
tuned_parameters, cv=nfold)
clf_LR.# <FILL IN>
acc_test_LR=# <FILL IN>
# LDA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
clf_LDA = LDA()
clf_LDA.# <FILL IN>
acc_test_LDA=# <FILL IN>
print("The test accuracy of LR is %2.2f" %(100*acc_test_LR))
print("The test accuracy of LDA is %2.2f" %(100*acc_test_LDA))
###########################################################
# TEST CELL
###########################################################
from test_helper import Test
# TEST accuracy values
Test.assertEquals(np.round(100*acc_test_LR, 2), 87.78, 'incorrect result: The value of acc_test_LR is incorrect')
Test.assertEquals(np.round(100*acc_test_LDA, 2), 96.67, 'incorrect result: The value of acc_test_LDA is incorrect')
###########################################################
# TODO: Replace <FILL IN> with appropriate code
###########################################################
plt.figure(1,figsize=(10, 4))
# LR
plt.subplot(1,2,1)
plot_boundary(# <FILL IN> )
plt.title ('LR')
# LDA
plt.subplot(1,2,2)
plot_boundary(# <FILL IN> )
plt.title ('LDA')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this tutorial, to run commands in parallel over the engines, we will use the %px line magic.
Step2: Overview of MPI in Devito
Step3: An Operator will then generate MPI code, including sends/receives for halo exchanges. Below, we introduce a running example through which we explain how domain decomposition as well as data access (read/write) and distribution work. Performance optimizations are discussed in a later section.
Step4: Domain decomposition is performed when creating a Grid. Users may supply their own domain decomposition, but this is not shown in this notebook. Devito exploits the MPI Cartesian topology abstraction to logically split the Grid over the available MPI processes. Since u is defined over a decomposed Grid, its data get distributed too.
Step5: Globally, u consists of 4x4 points -- this is what users "see". But locally, as shown above, each rank has got a 2x2 subdomain. The key point is
Step6: The only limitation, currently, is that a data access cannot require a direct data exchange among two or more processes (e.g., the assignment u.data[0, 0] = u.data[3, 3] will raise an exception unless both entries belong to the same MPI rank).
Step7: And we can now check again the (distributed) content of our u.data
Step8: Everything as expected. We could also peek at the generated code, because we may be curious to see what sort of MPI calls Devito has generated...
Step9: Hang on. There's nothing MPI-specific here! At least apart from the header file #include "mpi.h". What's going on? Well, it's simple. Devito was smart enough to realize that this trivial Operator doesn't even need any sort of halo exchange -- the Eq implements a pure "map computation" (i.e., fully parallel), so it can just let each MPI process do its job without ever synchronizing with halo exchanges. We might want to try again with a proper stencil Eq.
Step10: Uh-oh -- now the generated code looks more complicated than before, though it still is pretty much human-readable. We can spot the following routines
Step11: This is again a global data view. The shown with_halo is the "true" halo surrounding the physical domain, not the halo used for the MPI halo exchanges (often referred to as "ghost region"). So it gets trivial for a user to initialize the "true" halo region (which is typically read by a stencil Eq when an Operator iterates in proximity of the domain bounday).
Step12: MPI and SparseFunction
Step13: Let
Step14: Performance optimizations
Step15: We could now peek at the generated code to see that things now look differently.
|
<ASSISTANT_TASK:>
Python Code:
import ipyparallel as ipp
c = ipp.Client(profile='mpi')
%%px --group-outputs=engine
from mpi4py import MPI
print(f"Hi, I'm rank %d." % MPI.COMM_WORLD.rank)
%%px
from devito import configuration
configuration['mpi'] = True
%%px
# Keep generated code as simple as possible
configuration['language'] = 'C'
# Fix platform so that this notebook can be tested by py.test --nbval
configuration['platform'] = 'knl7210'
%%px
from devito import Grid, TimeFunction, Eq, Operator
grid = Grid(shape=(4, 4))
u = TimeFunction(name="u", grid=grid, space_order=2, time_order=0)
%%px --group-outputs=engine
u.data
%%px
u.data[0, 1:-1, 1:-1] = 1.
%%px --group-outputs=engine
u.data
%%px
op = Operator(Eq(u.forward, u + 1))
summary = op.apply(time_M=0)
%%px --group-outputs=engine
u.data
%%px --targets 0
print(op)
%%px --targets 0
op = Operator(Eq(u.forward, u.dx + 1))
print(op)
%%px --group-outputs=engine
u.data_with_halo
%%px
u.data_with_halo[:] = 1.
%%px --group-outputs=engine
u.data_with_halo
%%px
from devito import Function, SparseFunction
grid = Grid(shape=(4, 4), extent=(3.0, 3.0))
x, y = grid.dimensions
f = Function(name='f', grid=grid)
coords = [(0.5, 0.5), (1.5, 2.5), (1.5, 1.5), (2.5, 1.5)]
sf = SparseFunction(name='sf', grid=grid, npoint=len(coords), coordinates=coords)
%%px
sf.data[:] = 5.
op = Operator(sf.inject(field=f, expr=sf))
summary = op.apply()
%%px --group-outputs=engine
f.data
%%px
configuration['mpi'] = 'full'
%%px
op = Operator(Eq(u.forward, u.dx + 1))
# Uncomment below to show code (it's quite verbose)
# print(op)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2) Write two functions that check whether a number is prime, and determine which of them is more efficient in terms of running time.
Step2: 5) Create two scripts that communicate with each other through serialized data. The first script periodically saves a list of all the files in a directory, while the second script adds to an archive every file smaller than 100 KB that was modified at most 5 minutes ago (the same file will not be added twice).
|
<ASSISTANT_TASK:>
Python Code:
import time
import random
#import sys
#a = int(sys.argv[1])
#b = int(sys.argv[2])
def wait(x):
time.sleep(x)
def time_cron(a,b):
time_interval = random.uniform(a,b)
# while(1):
# measure process time
t0 = time.clock()
wait(time_interval)
print time.clock() - t0, "seconds process time"
# measure wall time
t0 = time.time()
wait(time_interval)
print time.time() - t0, "seconds wall time"
time_cron(0,2)
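The primality-comparison exercise has no corresponding cell among those shown; a minimal sketch of two checks whose runtimes can be compared (the function names are illustrative):

```python
import time

def is_prime_naive(n):
    # try every candidate divisor up to n - 1
    if n < 2:
        return False
    for d in range(2, n):
        if n % d == 0:
            return False
    return True

def is_prime_sqrt(n):
    # only try divisors up to sqrt(n) -- far fewer iterations for large n
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def time_of(func, n):
    # return (result, elapsed wall time in seconds)
    t0 = time.time()
    result = func(n)
    return result, time.time() - t0

print(time_of(is_prime_naive, 104729))
print(time_of(is_prime_sqrt, 104729))
```

For a large prime such as 104729 the sqrt variant should be noticeably faster, since it performs only about sqrt(n) trial divisions instead of n.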
import os
import json
import hashlib
import time
def get_file_md5(filePath):
h = hashlib.md5()
h.update(open(filePath,"rb").read())
return h.hexdigest()
def get_file_sha256(filePath):
h = hashlib.sha256()
h.update(open(filePath,"rb").read())
return h.hexdigest()
def get_dir_data(dir_path):
json_data = {}
dir_path = os.path.realpath(dir_path)
json_file = open(os.path.basename(dir_path) + '.json', 'w')
print next(os.walk(dir_path))[2]
#print os.path.basename(dir_path)
    for dir_file in next(os.walk(dir_path))[2]:
        full_path = os.path.join(dir_path, dir_file)  # join with the directory so this works outside the cwd
        file_data = {}
        file_data[dir_file] = {}
        file_data[dir_file]["file_md5"] = get_file_md5(full_path)
        file_data[dir_file]["file_sha256"] = get_file_sha256(full_path)
        file_data[dir_file]["file_size"] = os.path.getsize(full_path)
        file_time = time.gmtime(os.path.getctime(full_path))
        file_data[dir_file]["file_time"] = time.strftime("%Y-%m-%d %I:%M:%S %p", file_time)
        file_data[dir_file]["file_path"] = os.path.realpath(full_path)
        json_data.update(file_data)
#print json_data
#print json_data
json_data = json.dumps(json_data, sort_keys = True, indent=4, separators=(',', ': '))
json_file.write( json_data )
json_file.close()
get_dir_data('./')
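The second script from the serialization exercise (archive every file under 100 KB modified within the last 5 minutes, never adding the same file twice) is not shown in these cells. A minimal stdlib sketch; the function and key names are illustrative, and it assumes the JSON layout written by get_dir_data above (per-file "file_size" and "file_path" entries):

```python
import json
import os
import time
import zipfile

def archive_recent_small_files(listing_path, archive_path,
                               max_size=100 * 1024, max_age=5 * 60):
    # read the serialized file listing produced by the first script
    with open(listing_path) as f:
        listing = json.load(f)
    # remember what is already in the archive so no file is added twice
    archived = set()
    if os.path.exists(archive_path):
        with zipfile.ZipFile(archive_path) as z:
            archived = set(z.namelist())
    now = time.time()
    with zipfile.ZipFile(archive_path, 'a') as z:  # mode 'a' creates the zip if missing
        for name, info in listing.items():
            small = info["file_size"] < max_size
            recent = now - os.path.getmtime(info["file_path"]) <= max_age
            if small and recent and name not in archived:
                z.write(info["file_path"], arcname=name)
```

Run periodically, this only appends files that pass both the size and freshness filters and are not yet present in the archive.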
import datetime as dt
def weekday_new_year(x):
today = dt.datetime.today()
current_year = today.year
#print today, '::', current_year
for i in range(0, x):
        print current_year-i, ': ', dt.date(current_year-i, 1, 1).strftime("%A")  # New Year = January 1st; strftime("%A") gives the day name, .weekday() only a number
weekday_new_year(5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Accessing the REST API
Step2: Here we create a dataset and insert two rows of two columns into it
Step3: SQL Queries
|
<ASSISTANT_TASK:>
Python Code:
from pymldb import Connection
mldb = Connection("http://localhost")
mldb.get("/v1/types")
#keyword arguments to get() are appended to the GET query string
mldb.get("/v1/types", x="y")
#dictionaries arguments to put() and post() are sent as JSON via PUT or POST
mldb.put("/v1/datasets/sample", {"type": "sparse.mutable"} )
mldb.put( "/v1/datasets/demo", {"type":"sparse.mutable"})
mldb.post("/v1/datasets/demo/rows", {"rowName": "first", "columns":[["a",1,0],["b",2,0]]})
mldb.post("/v1/datasets/demo/rows", {"rowName": "second", "columns":[["a",3,0],["b",4,0]]})
mldb.post("/v1/datasets/demo/commit")
df = mldb.query("select * from demo")
print type(df)
df
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install Pip Packages
Step2: Configure Variables
Step3: Setup Authorization
Step4: Additionally, to interact with the underlying cluster, we configure kubectl.
Step5: Collectively, these allow us to interact with Google Cloud Services as well as the Kubernetes Cluster directly to submit TFJobs and execute Dataflow pipelines.
Step7: View Github Files
Step8: Define an experiment
Step9: Submit the Dataflow Job
Step11: When completed successfully, this should create a dataset in BigQuery named target_dataset. Additionally, it also dumps CSV files into data_dir which contain training samples (pairs of function and docstrings) for our Tensorflow Model. A representative set of results can be viewed using the following query.
Step12: This pipeline also writes a set of CSV files which contain function and docstring pairs delimited by a comma. Here, we list a subset of them.
Step13: Prepare Dataset for Training
Step14: Once this job finishes, the data directory should have a vocabulary file and a list of TFRecords prefixed by the problem name which in our case is github_function_docstring_extended. Here, we list a subset of them.
Step15: Execute Tensorflow Training
Step16: This will generate TensorFlow model checkpoints which is illustrated below.
Step17: Export Tensorflow Model
Step18: Once completed, this will generate a TensorFlow SavedModel which we will further use for both online (via TF Serving) and offline inference (via Kubeflow Batch Prediction).
Step19: Compute Function Embeddings
Step20: Modify experiments.libsonnet and set modelDir to the directory computed above
Step22: When completed successfully, this should create another table in the same BigQuery dataset which contains the function embeddings for each existing data sample available from the previous Dataflow Job. Additionally, it also dumps a CSV file containing metadata for each of the function and its embeddings. A representative query result is shown below.
Step23: The pipeline also generates a set of CSV files which will be useful to generate the search index.
Step24: Create Search Index
Step25: Using the CSV files generated from the previous step, this creates an index using NMSLib. A unified CSV file containing all the code examples for a human-readable reverse lookup during the query, is also created.
Step26: Deploy the Web App
Step27: Deploy Search UI
|
<ASSISTANT_TASK:>
Python Code:
%%bash
echo "Pip Version Info: " && python2 --version && python2 -m pip --version && echo
echo "Google Cloud SDK Info: " && gcloud --version && echo
echo "Ksonnet Version Info: " && ks version && echo
echo "Kubectl Version Info: " && kubectl version
! python2 -m pip install -U pip
# Code Search dependencies
! python2 -m pip install --user https://github.com/kubeflow/batch-predict/tarball/master
! python2 -m pip install --user -r src/requirements.txt
# BigQuery Cell Dependencies
! python2 -m pip install --user pandas-gbq
# NOTE: The RuntimeWarnings (if any) are harmless. See ContinuumIO/anaconda-issues#6678.
from pandas.io import gbq
import getpass
import subprocess
# Configuration Variables. Modify as desired.
PROJECT = subprocess.check_output(["gcloud", "config", "get-value", "project"]).strip()
# Dataflow Related Variables.
TARGET_DATASET = 'code_search'
WORKING_DIR = 'gs://{0}_code_search/workingDir'.format(PROJECT)
KS_ENV=getpass.getuser()
# DO NOT MODIFY. These are environment variables to be used in a bash shell.
%env PROJECT $PROJECT
%env TARGET_DATASET $TARGET_DATASET
%env WORKING_DIR $WORKING_DIR
%%bash
# Activate Service Account provided by Kubeflow.
gcloud auth activate-service-account --key-file=${GOOGLE_APPLICATION_CREDENTIALS}
%%bash
kubectl config set-cluster kubeflow --server=https://kubernetes.default --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubectl config set-credentials jupyter --token "$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
kubectl config set-context kubeflow --cluster kubeflow --user jupyter
kubectl config use-context kubeflow
%%bash
cd kubeflow
# Update Ksonnet to point to the Kubernetes Cluster
ks env add code-search
# Update Ksonnet to use the namespace where kubeflow is deployed. By default it's 'kubeflow'
ks env set code-search --namespace=kubeflow
# Update the Working Directory of the application
sed -i'' "s,gs://example/prefix,${WORKING_DIR}," components/params.libsonnet
# FIXME(sanyamkapoor): This command completely replaces previous configurations.
# Hence, using string replacement in file.
# ks param set t2t-code-search workingDir ${WORKING_DIR}
query = """
SELECT
  MAX(CONCAT(f.repo_name, ' ', f.path)) AS repo_path,
  c.content
FROM
  `bigquery-public-data.github_repos.files` AS f
JOIN
  `bigquery-public-data.github_repos.contents` AS c
ON
  f.id = c.id
JOIN (
  --this part of the query makes sure repo is watched at least twice since 2017
  SELECT
    repo
  FROM (
    SELECT
      repo.name AS repo
    FROM
      `githubarchive.year.2017`
    WHERE
      type="WatchEvent"
    UNION ALL
    SELECT
      repo.name AS repo
    FROM
      `githubarchive.month.2018*`
    WHERE
      type="WatchEvent" )
  GROUP BY
    1
  HAVING
    COUNT(*) >= 2 ) AS r
ON
  f.repo_name = r.repo
WHERE
  f.path LIKE '%.py' AND --with python extension
  c.size < 15000 AND --get rid of ridiculously long files
  REGEXP_CONTAINS(c.content, r'def ') --contains function definition
GROUP BY
  c.content
LIMIT
  10
"""
gbq.read_gbq(query, dialect='standard', project_id=PROJECT)
%%bash
bq mk ${PROJECT}:${TARGET_DATASET}
%%bash
cd kubeflow
ks param set --env=code-search submit-preprocess-job targetDataset ${TARGET_DATASET}
ks apply code-search -c submit-preprocess-job
query = """
SELECT *
FROM
  {}.token_pairs
LIMIT
  10
""".format(TARGET_DATASET)
gbq.read_gbq(query, dialect='standard', project_id=PROJECT)
%%bash
LIMIT=10
gsutil ls ${WORKING_DIR}/data/*.csv | head -n ${LIMIT}
%%bash
cd kubeflow
ks show code-search -c t2t-code-search-datagen
%%bash
cd kubeflow
ks apply code-search -c t2t-code-search-datagen
%%bash
LIMIT=10
gsutil ls ${WORKING_DIR}/data/vocab*
gsutil ls ${WORKING_DIR}/data/*train* | head -n ${LIMIT}
%%bash
cd kubeflow
ks show code-search -c t2t-code-search-trainer
%%bash
cd kubeflow
ks apply code-search -c t2t-code-search-trainer
%%bash
gsutil ls ${WORKING_DIR}/output/*ckpt*
%%bash
cd kubeflow
ks show code-search -c t2t-code-search-exporter
%%bash
cd kubeflow
ks apply code-search -c t2t-code-search-exporter
%%bash
gsutil ls ${WORKING_DIR}/output/export/Servo
%%bash --out EXPORT_DIR_LS
gsutil ls ${WORKING_DIR}/output/export/Servo | grep -oE "([0-9]+)/$"
# WARNING: This routine will fail if no export has been completed successfully.
MODEL_VERSION = max([int(ts[:-1]) for ts in EXPORT_DIR_LS.split('\n') if ts])
# DO NOT MODIFY. These are environment variables to be used in a bash shell.
%env MODEL_VERSION $MODEL_VERSION
%%bash
cd kubeflow
ks apply code-search -c submit-code-embeddings-job
query = """
SELECT *
FROM
  {}.function_embeddings
LIMIT
  10
""".format(TARGET_DATASET)
gbq.read_gbq(query, dialect='standard', project_id=PROJECT)
%%bash
LIMIT=10
gsutil ls ${WORKING_DIR}/data/*index*.csv | head -n ${LIMIT}
%%bash
cd kubeflow
ks show code-search -c search-index-creator
%%bash
cd kubeflow
ks apply code-search -c search-index-creator
%%bash
gsutil ls ${WORKING_DIR}/code_search_index*
%%bash
cd ks-web-app
ks param set --env=code-search modelBasePath ${MODEL_BASE_PATH}
ks show code-search -c query-embed-server
%%bash
cd ks-web-app
ks apply code-search -c query-embed-server
%%bash
cd ks-web-app
ks param set --env=code-search search-index-server lookupFile ${LOOKUP_FILE}
ks param set --env=code-search search-index-server indexFile ${INDEX_FILE}
ks show code-search -c search-index-server
%%bash
cd kubeflow
ks apply code-search -c search-index-server
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Linear regression using the Normal Equation
Step2: The figure in the book actually corresponds to the following code, with a legend and axis labels
Step3: The LinearRegression class is based on the scipy.linalg.lstsq() function (the name stands for "least squares"), which you could call directly
Step4: This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the pseudoinverse of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use np.linalg.pinv() to compute the pseudoinverse directly
Step5: Note
Step6: Stochastic Gradient Descent
Step7: Mini-batch gradient descent
Step8: Polynomial regression
Step9: Regularized models
Step10: Logistic regression
Step11: The figure in the book is actually a bit fancier
Step12: Exercise solutions
Step13: We need to add the bias term for every instance ($x_0 = 1$)
Step14: And let's set the random seed so the output of this exercise solution is reproducible
Step15: The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's train_test_split() function, but the point of this exercise is to try to understand the algorithms by implementing them manually. So here is one possible implementation
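Such a manual split can be sketched as follows (the helper name, ratios, and seed are illustrative):

```python
import numpy as np

def train_val_test_split(X, y, val_ratio=0.2, test_ratio=0.2, seed=2042):
    # shuffle the indices once, then slice off the train/validation/test parts
    rnd = np.random.RandomState(seed)
    m = len(X)
    idx = rnd.permutation(m)
    test_size = int(m * test_ratio)
    val_size = int(m * val_ratio)
    train_size = m - test_size - val_size
    X_train, y_train = X[idx[:train_size]], y[idx[:train_size]]
    X_val = X[idx[train_size:train_size + val_size]]
    y_val = y[idx[train_size:train_size + val_size]]
    X_test, y_test = X[idx[train_size + val_size:]], y[idx[train_size + val_size:]]
    return X_train, X_val, X_test, y_train, y_val, y_test
```

Fixing the seed makes the shuffle (and therefore the whole exercise) reproducible.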
Step16: The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance
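One possible implementation of that conversion, a sketch using NumPy fancy indexing:

```python
import numpy as np

def to_one_hot(y):
    # y: 1-D array of integer class indices -> (m, n_classes) matrix of one-hot rows
    n_classes = y.max() + 1
    m = len(y)
    Y_one_hot = np.zeros((m, n_classes))
    Y_one_hot[np.arange(m), y] = 1
    return Y_one_hot

print(to_one_hot(np.array([0, 2, 1])))
```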
Step17: Let's test this function on the first 10 instances
Step18: Looks good, so let's create the target class probabilities matrix for the training set and the test set
Step19: Now let's implement the Softmax function. Recall that it is defined by the following equation: $\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$
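One way to implement Softmax, a sketch; subtracting the row-wise max is an optional numerical-stability tweak that cancels in the ratio and doesn't change the result:

```python
import numpy as np

def softmax(logits):
    # logits: (m, n_classes) matrix of scores, one row per instance
    shifted = logits - logits.max(axis=1, keepdims=True)  # avoid exp overflow
    exps = np.exp(shifted)
    return exps / exps.sum(axis=1, keepdims=True)

print(softmax(np.array([[2.0, 1.0, 0.1]])))  # each row sums to 1
```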
Step20: We are almost ready to start training. Let's define the number of inputs and outputs
Step21: Now here comes the hardest part
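The training loop needs the cross-entropy cost and its gradient; they can be sketched as follows (the variable names are illustrative):

```python
import numpy as np

def cross_entropy(Y_proba, Y_one_hot, epsilon=1e-7):
    # mean negative log-likelihood of the true classes;
    # epsilon guards against log(0)
    return -np.mean(np.sum(Y_one_hot * np.log(Y_proba + epsilon), axis=1))

def softmax_gradient(X_with_bias, Y_proba, Y_one_hot):
    # (1/m) * X^T (p - y): gradient of the cross-entropy w.r.t. Theta
    m = len(X_with_bias)
    return X_with_bias.T.dot(Y_proba - Y_one_hot) / m
```

When the predicted probabilities match the one-hot targets exactly, the cost is (near) zero and the gradient vanishes, which is a handy sanity check.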
Step22: And that's it! The Softmax model is trained. Let's look at the model parameters
Step23: Let's make predictions for the validation set and check the accuracy score
Step24: Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of Theta since this corresponds to the bias term). Also, let's try increasing the learning rate eta.
Step25: Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out
Step26: Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant.
Step27: Still perfect, but faster.
Step28: And now let's measure the final model's accuracy on the test set
|
<ASSISTANT_TASK:>
Python Code:
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "training_linear_models"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
import numpy as np
X = 2 * np.random.rand(100, 1)
y = 4 + 3 * X + np.random.randn(100, 1)
plt.plot(X, y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([0, 2, 0, 15])
save_fig("generated_data_plot")
plt.show()
X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance
theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)
theta_best
X_new = np.array([[0], [2]])
X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance
y_predict = X_new_b.dot(theta_best)
y_predict
plt.plot(X_new, y_predict, "r-")
plt.plot(X, y, "b.")
plt.axis([0, 2, 0, 15])
plt.show()
plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions")
plt.plot(X, y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.legend(loc="upper left", fontsize=14)
plt.axis([0, 2, 0, 15])
save_fig("linear_model_predictions")
plt.show()
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X, y)
lin_reg.intercept_, lin_reg.coef_
lin_reg.predict(X_new)
theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6)
theta_best_svd
np.linalg.pinv(X_b).dot(y)
eta = 0.1
n_iterations = 1000
m = 100
theta = np.random.randn(2,1)
for iteration in range(n_iterations):
gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y)
theta = theta - eta * gradients
theta
X_new_b.dot(theta)
theta_path_bgd = []
def plot_gradient_descent(theta, eta, theta_path=None):
m = len(X_b)
plt.plot(X, y, "b.")
n_iterations = 1000
for iteration in range(n_iterations):
if iteration < 10:
y_predict = X_new_b.dot(theta)
style = "b-" if iteration > 0 else "r--"
plt.plot(X_new, y_predict, style)
gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y)
theta = theta - eta * gradients
if theta_path is not None:
theta_path.append(theta)
plt.xlabel("$x_1$", fontsize=18)
plt.axis([0, 2, 0, 15])
plt.title(r"$\eta = {}$".format(eta), fontsize=16)
np.random.seed(42)
theta = np.random.randn(2,1) # random initialization
plt.figure(figsize=(10,4))
plt.subplot(131); plot_gradient_descent(theta, eta=0.02)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd)
plt.subplot(133); plot_gradient_descent(theta, eta=0.5)
save_fig("gradient_descent_plot")
plt.show()
theta_path_sgd = []
m = len(X_b)
np.random.seed(42)
n_epochs = 50
t0, t1 = 5, 50 # learning schedule hyperparameters
def learning_schedule(t):
return t0 / (t + t1)
theta = np.random.randn(2,1) # random initialization
for epoch in range(n_epochs):
for i in range(m):
if epoch == 0 and i < 20: # not shown in the book
y_predict = X_new_b.dot(theta) # not shown
style = "b-" if i > 0 else "r--" # not shown
plt.plot(X_new, y_predict, style) # not shown
random_index = np.random.randint(m)
xi = X_b[random_index:random_index+1]
yi = y[random_index:random_index+1]
gradients = 2 * xi.T.dot(xi.dot(theta) - yi)
eta = learning_schedule(epoch * m + i)
theta = theta - eta * gradients
theta_path_sgd.append(theta) # not shown
plt.plot(X, y, "b.") # not shown
plt.xlabel("$x_1$", fontsize=18) # not shown
plt.ylabel("$y$", rotation=0, fontsize=18) # not shown
plt.axis([0, 2, 0, 15]) # not shown
save_fig("sgd_plot") # not shown
plt.show() # not shown
theta
from sklearn.linear_model import SGDRegressor
sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty=None, eta0=0.1, random_state=42)
sgd_reg.fit(X, y.ravel())
sgd_reg.intercept_, sgd_reg.coef_
theta_path_mgd = []
n_iterations = 50
minibatch_size = 20
np.random.seed(42)
theta = np.random.randn(2,1) # random initialization
t0, t1 = 200, 1000
def learning_schedule(t):
return t0 / (t + t1)
t = 0
for epoch in range(n_iterations):
shuffled_indices = np.random.permutation(m)
X_b_shuffled = X_b[shuffled_indices]
y_shuffled = y[shuffled_indices]
for i in range(0, m, minibatch_size):
t += 1
xi = X_b_shuffled[i:i+minibatch_size]
yi = y_shuffled[i:i+minibatch_size]
gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi)
eta = learning_schedule(t)
theta = theta - eta * gradients
theta_path_mgd.append(theta)
theta
theta_path_bgd = np.array(theta_path_bgd)
theta_path_sgd = np.array(theta_path_sgd)
theta_path_mgd = np.array(theta_path_mgd)
plt.figure(figsize=(7,4))
plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic")
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch")
plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch")
plt.legend(loc="upper left", fontsize=16)
plt.xlabel(r"$\theta_0$", fontsize=20)
plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0)
plt.axis([2.5, 4.5, 2.3, 3.9])
save_fig("gradient_descent_paths_plot")
plt.show()
import numpy as np
import numpy.random as rnd
np.random.seed(42)
m = 100
X = 6 * np.random.rand(m, 1) - 3
y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1)
plt.plot(X, y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([-3, 3, 0, 10])
save_fig("quadratic_data_plot")
plt.show()
from sklearn.preprocessing import PolynomialFeatures
poly_features = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly_features.fit_transform(X)
X[0]
X_poly[0]
lin_reg = LinearRegression()
lin_reg.fit(X_poly, y)
lin_reg.intercept_, lin_reg.coef_
X_new=np.linspace(-3, 3, 100).reshape(100, 1)
X_new_poly = poly_features.transform(X_new)
y_new = lin_reg.predict(X_new_poly)
plt.plot(X, y, "b.")
plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.legend(loc="upper left", fontsize=14)
plt.axis([-3, 3, 0, 10])
save_fig("quadratic_predictions_plot")
plt.show()
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)):
polybig_features = PolynomialFeatures(degree=degree, include_bias=False)
std_scaler = StandardScaler()
lin_reg = LinearRegression()
polynomial_regression = Pipeline([
("poly_features", polybig_features),
("std_scaler", std_scaler),
("lin_reg", lin_reg),
])
polynomial_regression.fit(X, y)
y_newbig = polynomial_regression.predict(X_new)
plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width)
plt.plot(X, y, "b.", linewidth=3)
plt.legend(loc="upper left")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([-3, 3, 0, 10])
save_fig("high_degree_polynomials_plot")
plt.show()
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
def plot_learning_curves(model, X, y):
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10)
train_errors, val_errors = [], []
for m in range(1, len(X_train)):
model.fit(X_train[:m], y_train[:m])
y_train_predict = model.predict(X_train[:m])
y_val_predict = model.predict(X_val)
train_errors.append(mean_squared_error(y_train[:m], y_train_predict))
val_errors.append(mean_squared_error(y_val, y_val_predict))
plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train")
plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val")
plt.legend(loc="upper right", fontsize=14) # not shown in the book
plt.xlabel("Training set size", fontsize=14) # not shown
plt.ylabel("RMSE", fontsize=14) # not shown
lin_reg = LinearRegression()
plot_learning_curves(lin_reg, X, y)
plt.axis([0, 80, 0, 3]) # not shown in the book
save_fig("underfitting_learning_curves_plot") # not shown
plt.show() # not shown
from sklearn.pipeline import Pipeline
polynomial_regression = Pipeline([
("poly_features", PolynomialFeatures(degree=10, include_bias=False)),
("lin_reg", LinearRegression()),
])
plot_learning_curves(polynomial_regression, X, y)
plt.axis([0, 80, 0, 3]) # not shown
save_fig("learning_curves_plot") # not shown
plt.show() # not shown
from sklearn.linear_model import Ridge
np.random.seed(42)
m = 20
X = 3 * np.random.rand(m, 1)
y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5
X_new = np.linspace(0, 3, 100).reshape(100, 1)
def plot_model(model_class, polynomial, alphas, **model_kargs):
for alpha, style in zip(alphas, ("b-", "g--", "r:")):
model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression()
if polynomial:
model = Pipeline([
("poly_features", PolynomialFeatures(degree=10, include_bias=False)),
("std_scaler", StandardScaler()),
("regul_reg", model),
])
model.fit(X, y)
y_new_regul = model.predict(X_new)
lw = 2 if alpha > 0 else 1
plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha))
plt.plot(X, y, "b.", linewidth=3)
plt.legend(loc="upper left", fontsize=15)
plt.xlabel("$x_1$", fontsize=18)
plt.axis([0, 3, 0, 4])
plt.figure(figsize=(8,4))
plt.subplot(121)
plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.subplot(122)
plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42)
save_fig("ridge_regression_plot")
plt.show()
from sklearn.linear_model import Ridge
ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42)
ridge_reg.fit(X, y)
ridge_reg.predict([[1.5]])
sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty="l2", random_state=42)
sgd_reg.fit(X, y.ravel())
sgd_reg.predict([[1.5]])
ridge_reg = Ridge(alpha=1, solver="sag", random_state=42)
ridge_reg.fit(X, y)
ridge_reg.predict([[1.5]])
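For reference, Ridge Regression also has a closed-form solution, $\hat{\theta} = (\mathbf{X}^\top\mathbf{X} + \alpha\mathbf{A})^{-1}\mathbf{X}^\top\mathbf{y}$, where $\mathbf{A}$ is the identity matrix with the bias entry zeroed out (so the bias term is not regularized). A sketch on the same toy data, regenerated here with the same seed so the block is self-contained:

```python
import numpy as np

np.random.seed(42)
m = 20
X = 3 * np.random.rand(m, 1)
y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5

X_b = np.c_[np.ones((m, 1)), X]   # add the bias column x0 = 1
alpha = 1.0
A = np.eye(2)
A[0, 0] = 0                        # leave the bias term unregularized
theta_ridge = np.linalg.inv(X_b.T.dot(X_b) + alpha * A).dot(X_b.T).dot(y)
theta_ols = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)
print(theta_ridge.ravel(), theta_ols.ravel())
```

With the bias unpenalized, the Ridge slope is the centered least-squares slope shrunk by the $\alpha$ term in the denominator, so its magnitude is strictly smaller than the OLS slope for any $\alpha > 0$.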
from sklearn.linear_model import Lasso
plt.figure(figsize=(8,4))
plt.subplot(121)
plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.subplot(122)
plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42)
save_fig("lasso_regression_plot")
plt.show()
from sklearn.linear_model import Lasso
lasso_reg = Lasso(alpha=0.1)
lasso_reg.fit(X, y)
lasso_reg.predict([[1.5]])
from sklearn.linear_model import ElasticNet
elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42)
elastic_net.fit(X, y)
elastic_net.predict([[1.5]])
np.random.seed(42)
m = 100
X = 6 * np.random.rand(m, 1) - 3
y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1)
X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10)
poly_scaler = Pipeline([
("poly_features", PolynomialFeatures(degree=90, include_bias=False)),
("std_scaler", StandardScaler()),
])
X_train_poly_scaled = poly_scaler.fit_transform(X_train)
X_val_poly_scaled = poly_scaler.transform(X_val)
sgd_reg = SGDRegressor(max_iter=1,
tol=-np.infty,
penalty=None,
eta0=0.0005,
warm_start=True,
learning_rate="constant",
random_state=42)
n_epochs = 500
train_errors, val_errors = [], []
for epoch in range(n_epochs):
sgd_reg.fit(X_train_poly_scaled, y_train)
y_train_predict = sgd_reg.predict(X_train_poly_scaled)
y_val_predict = sgd_reg.predict(X_val_poly_scaled)
train_errors.append(mean_squared_error(y_train, y_train_predict))
val_errors.append(mean_squared_error(y_val, y_val_predict))
best_epoch = np.argmin(val_errors)
best_val_rmse = np.sqrt(val_errors[best_epoch])
plt.annotate('Best model',
xy=(best_epoch, best_val_rmse),
xytext=(best_epoch, best_val_rmse + 1),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.05),
fontsize=16,
)
best_val_rmse -= 0.03 # just to make the graph look better
plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2)
plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set")
plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set")
plt.legend(loc="upper right", fontsize=14)
plt.xlabel("Epoch", fontsize=14)
plt.ylabel("RMSE", fontsize=14)
save_fig("early_stopping_plot")
plt.show()
from sklearn.base import clone
sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None,
learning_rate="constant", eta0=0.0005, random_state=42)
minimum_val_error = float("inf")
best_epoch = None
best_model = None
for epoch in range(1000):
sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off
y_val_predict = sgd_reg.predict(X_val_poly_scaled)
val_error = mean_squared_error(y_val, y_val_predict)
if val_error < minimum_val_error:
minimum_val_error = val_error
best_epoch = epoch
best_model = clone(sgd_reg)
best_epoch, best_model
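One subtlety worth flagging here (our observation, not a claim from the source): `sklearn.base.clone` returns an *unfitted* copy with the same hyperparameters, so `best_model` stores the configuration reached at the best epoch rather than its learned weights. If the fitted state itself is needed, `copy.deepcopy` preserves it — a minimal toy sketch (the `Toy` class is hypothetical, standing in for any fitted estimator):

```python
from copy import deepcopy

class Toy:
    """Stand-in for a fitted estimator with learned state."""
    def __init__(self):
        self.coef_ = None

    def fit(self):
        self.coef_ = [1.0]

m = Toy()
m.fit()
snapshot = deepcopy(m)  # keeps the fitted coefficients, unlike clone()
m.coef_ = [2.0]         # further training mutates the live model...
assert snapshot.coef_ == [1.0]  # ...but the snapshot is unchanged
```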
t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5
# ignoring bias term
t1s = np.linspace(t1a, t1b, 500)
t2s = np.linspace(t2a, t2b, 500)
t1, t2 = np.meshgrid(t1s, t2s)
T = np.c_[t1.ravel(), t2.ravel()]
Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]])
yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:]
J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape)
N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape)
N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape)
t_min_idx = np.unravel_index(np.argmin(J), J.shape)
t1_min, t2_min = t1[t_min_idx], t2[t_min_idx]
t_init = np.array([[0.25], [-1]])
def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50):
path = [theta]
for iteration in range(n_iterations):
gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta
theta = theta - eta * gradients
path.append(theta)
return np.array(path)
plt.figure(figsize=(12, 8))
for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")):
JR = J + l1 * N1 + l2 * N2**2
tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape)
t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx]
levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J)
levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR)
levelsN=np.linspace(0, np.max(N), 10)
path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0)
path_JR = bgd_path(t_init, Xr, yr, l1, l2)
path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0)
plt.subplot(221 + i * 2)
plt.grid(True)
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9)
plt.contour(t1, t2, N, levels=levelsN)
plt.plot(path_J[:, 0], path_J[:, 1], "w-o")
plt.plot(path_N[:, 0], path_N[:, 1], "y-^")
plt.plot(t1_min, t2_min, "rs")
plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16)
plt.axis([t1a, t1b, t2a, t2b])
if i == 1:
plt.xlabel(r"$\theta_1$", fontsize=20)
plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0)
plt.subplot(222 + i * 2)
plt.grid(True)
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9)
plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o")
plt.plot(t1r_min, t2r_min, "rs")
plt.title(title, fontsize=16)
plt.axis([t1a, t1b, t2a, t2b])
if i == 1:
plt.xlabel(r"$\theta_1$", fontsize=20)
save_fig("lasso_vs_ridge_plot")
plt.show()
t = np.linspace(-10, 10, 100)
sig = 1 / (1 + np.exp(-t))
plt.figure(figsize=(9, 3))
plt.plot([-10, 10], [0, 0], "k-")
plt.plot([-10, 10], [0.5, 0.5], "k:")
plt.plot([-10, 10], [1, 1], "k:")
plt.plot([0, 0], [-1.1, 1.1], "k-")
plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$")
plt.xlabel("t")
plt.legend(loc="upper left", fontsize=20)
plt.axis([-10, 10, -0.1, 1.1])
save_fig("logistic_function_plot")
plt.show()
from sklearn import datasets
iris = datasets.load_iris()
list(iris.keys())
print(iris.DESCR)
X = iris["data"][:, 3:] # petal width
y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0
from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression(solver="liblinear", random_state=42)
log_reg.fit(X, y)
X_new = np.linspace(0, 3, 1000).reshape(-1, 1)
y_proba = log_reg.predict_proba(X_new)
plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica")
plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica")
X_new = np.linspace(0, 3, 1000).reshape(-1, 1)
y_proba = log_reg.predict_proba(X_new)
decision_boundary = X_new[y_proba[:, 1] >= 0.5][0]
plt.figure(figsize=(8, 3))
plt.plot(X[y==0], y[y==0], "bs")
plt.plot(X[y==1], y[y==1], "g^")
plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2)
plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica")
plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica")
plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center")
plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b')
plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g')
plt.xlabel("Petal width (cm)", fontsize=14)
plt.ylabel("Probability", fontsize=14)
plt.legend(loc="center left", fontsize=14)
plt.axis([0, 3, -0.02, 1.02])
save_fig("logistic_regression_plot")
plt.show()
decision_boundary
log_reg.predict([[1.7], [1.5]])
from sklearn.linear_model import LogisticRegression
X = iris["data"][:, (2, 3)] # petal length, petal width
y = (iris["target"] == 2).astype(np.int)
log_reg = LogisticRegression(solver="liblinear", C=10**10, random_state=42)
log_reg.fit(X, y)
x0, x1 = np.meshgrid(
np.linspace(2.9, 7, 500).reshape(-1, 1),
np.linspace(0.8, 2.7, 200).reshape(-1, 1),
)
X_new = np.c_[x0.ravel(), x1.ravel()]
y_proba = log_reg.predict_proba(X_new)
plt.figure(figsize=(10, 4))
plt.plot(X[y==0, 0], X[y==0, 1], "bs")
plt.plot(X[y==1, 0], X[y==1, 1], "g^")
zz = y_proba[:, 1].reshape(x0.shape)
contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg)
left_right = np.array([2.9, 7])
boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1]
plt.clabel(contour, inline=1, fontsize=12)
plt.plot(left_right, boundary, "k--", linewidth=3)
plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center")
plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center")
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.axis([2.9, 7, 0.8, 2.7])
save_fig("logistic_regression_contour_plot")
plt.show()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = iris["target"]
softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42)
softmax_reg.fit(X, y)
x0, x1 = np.meshgrid(
np.linspace(0, 8, 500).reshape(-1, 1),
np.linspace(0, 3.5, 200).reshape(-1, 1),
)
X_new = np.c_[x0.ravel(), x1.ravel()]
y_proba = softmax_reg.predict_proba(X_new)
y_predict = softmax_reg.predict(X_new)
zz1 = y_proba[:, 1].reshape(x0.shape)
zz = y_predict.reshape(x0.shape)
plt.figure(figsize=(10, 4))
plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica")
plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor")
plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa")
from matplotlib.colors import ListedColormap
custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])
plt.contourf(x0, x1, zz, cmap=custom_cmap)
contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg)
plt.clabel(contour, inline=1, fontsize=12)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="center left", fontsize=14)
plt.axis([0, 7, 0, 3.5])
save_fig("softmax_regression_contour_plot")
plt.show()
softmax_reg.predict([[5, 2]])
softmax_reg.predict_proba([[5, 2]])
X = iris["data"][:, (2, 3)] # petal length, petal width
y = iris["target"]
X_with_bias = np.c_[np.ones([len(X), 1]), X]
np.random.seed(2042)
test_ratio = 0.2
validation_ratio = 0.2
total_size = len(X_with_bias)
test_size = int(total_size * test_ratio)
validation_size = int(total_size * validation_ratio)
train_size = total_size - test_size - validation_size
rnd_indices = np.random.permutation(total_size)
X_train = X_with_bias[rnd_indices[:train_size]]
y_train = y[rnd_indices[:train_size]]
X_valid = X_with_bias[rnd_indices[train_size:-test_size]]
y_valid = y[rnd_indices[train_size:-test_size]]
X_test = X_with_bias[rnd_indices[-test_size:]]
y_test = y[rnd_indices[-test_size:]]
def to_one_hot(y):
n_classes = y.max() + 1
m = len(y)
Y_one_hot = np.zeros((m, n_classes))
Y_one_hot[np.arange(m), y] = 1
return Y_one_hot
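The same conversion can also be expressed as an identity-matrix lookup with `np.eye` — a pattern this notebook itself uses later when inspecting mislabelled test examples. A self-contained check that the two forms agree (the sample `y` values here are made up):

```python
import numpy as np

def to_one_hot(y):
    # Same logic as above: one row per example, a 1 at the class index.
    Y = np.zeros((len(y), y.max() + 1))
    Y[np.arange(len(y)), y] = 1
    return Y

y = np.array([0, 2, 1, 2])
# np.eye(n)[y] builds the identical matrix by indexing rows of the identity matrix
assert np.array_equal(to_one_hot(y), np.eye(3)[y])
```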
y_train[:10]
to_one_hot(y_train[:10])
Y_train_one_hot = to_one_hot(y_train)
Y_valid_one_hot = to_one_hot(y_valid)
Y_test_one_hot = to_one_hot(y_test)
def softmax(logits):
exps = np.exp(logits)
exp_sums = np.sum(exps, axis=1, keepdims=True)
return exps / exp_sums
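A side note not part of the assignment: plain exponentiation as above can overflow for large logits. A common numerically stable variant subtracts the row-wise maximum first, which leaves the result mathematically unchanged:

```python
import numpy as np

def stable_softmax(logits):
    # exp(a - c) / sum(exp(a - c)) == exp(a) / sum(exp(a)) for any constant c,
    # so subtracting the row max changes nothing but prevents overflow.
    shifted = logits - np.max(logits, axis=1, keepdims=True)
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=1, keepdims=True)

# Large logits that would overflow the naive version stay finite here.
out = stable_softmax(np.array([[1000.0, 1000.0], [0.0, 1.0]]))
assert np.all(np.isfinite(out))
```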
n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term)
n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes)
eta = 0.01
n_iterations = 5001
m = len(X_train)
epsilon = 1e-7
Theta = np.random.randn(n_inputs, n_outputs)
for iteration in range(n_iterations):
logits = X_train.dot(Theta)
Y_proba = softmax(logits)
loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1))
error = Y_proba - Y_train_one_hot
if iteration % 500 == 0:
print(iteration, loss)
gradients = 1/m * X_train.T.dot(error)
Theta = Theta - eta * gradients
Theta
logits = X_valid.dot(Theta)
Y_proba = softmax(logits)
y_predict = np.argmax(Y_proba, axis=1)
accuracy_score = np.mean(y_predict == y_valid)
accuracy_score
eta = 0.1
n_iterations = 5001
m = len(X_train)
epsilon = 1e-7
alpha = 0.1 # regularization hyperparameter
Theta = np.random.randn(n_inputs, n_outputs)
for iteration in range(n_iterations):
logits = X_train.dot(Theta)
Y_proba = softmax(logits)
xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1))
l2_loss = 1/2 * np.sum(np.square(Theta[1:]))
loss = xentropy_loss + alpha * l2_loss
error = Y_proba - Y_train_one_hot
if iteration % 500 == 0:
print(iteration, loss)
gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]]
Theta = Theta - eta * gradients
logits = X_valid.dot(Theta)
Y_proba = softmax(logits)
y_predict = np.argmax(Y_proba, axis=1)
accuracy_score = np.mean(y_predict == y_valid)
accuracy_score
eta = 0.1
n_iterations = 5001
m = len(X_train)
epsilon = 1e-7
alpha = 0.1 # regularization hyperparameter
best_loss = np.infty
Theta = np.random.randn(n_inputs, n_outputs)
for iteration in range(n_iterations):
logits = X_train.dot(Theta)
Y_proba = softmax(logits)
xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1))
l2_loss = 1/2 * np.sum(np.square(Theta[1:]))
loss = xentropy_loss + alpha * l2_loss
error = Y_proba - Y_train_one_hot
gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]]
Theta = Theta - eta * gradients
logits = X_valid.dot(Theta)
Y_proba = softmax(logits)
xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1))
l2_loss = 1/2 * np.sum(np.square(Theta[1:]))
loss = xentropy_loss + alpha * l2_loss
if iteration % 500 == 0:
print(iteration, loss)
if loss < best_loss:
best_loss = loss
else:
print(iteration - 1, best_loss)
print(iteration, loss, "early stopping!")
break
logits = X_valid.dot(Theta)
Y_proba = softmax(logits)
y_predict = np.argmax(Y_proba, axis=1)
accuracy_score = np.mean(y_predict == y_valid)
accuracy_score
x0, x1 = np.meshgrid(
np.linspace(0, 8, 500).reshape(-1, 1),
np.linspace(0, 3.5, 200).reshape(-1, 1),
)
X_new = np.c_[x0.ravel(), x1.ravel()]
X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new]
logits = X_new_with_bias.dot(Theta)
Y_proba = softmax(logits)
y_predict = np.argmax(Y_proba, axis=1)
zz1 = Y_proba[:, 1].reshape(x0.shape)
zz = y_predict.reshape(x0.shape)
plt.figure(figsize=(10, 4))
plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica")
plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor")
plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa")
from matplotlib.colors import ListedColormap
custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])
plt.contourf(x0, x1, zz, cmap=custom_cmap)
contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg)
plt.clabel(contour, inline=1, fontsize=12)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="upper left", fontsize=14)
plt.axis([0, 7, 0, 3.5])
plt.show()
logits = X_test.dot(Theta)
Y_proba = softmax(logits)
y_predict = np.argmax(Y_proba, axis=1)
accuracy_score = np.mean(y_predict == y_test)
accuracy_score
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1 - Baseline model
Step2: Run the following cell to print sentences from X_train and corresponding labels from Y_train. Change index to see different examples. Because of the font the iPython notebook uses, the heart emoji may be colored black rather than red.
Step3: 1.2 - Overview of the Emojifier-V1
Step4: Let's see what convert_to_one_hot() did. Feel free to change index to print out different values.
Step5: All the data is now ready to be fed into the Emojify-V1 model. Let's implement the model!
Step6: You've loaded
Step8: Exercise
Step10: Expected Output
Step11: Run the next cell to train your model and learn the softmax parameters (W,b).
Step12: Expected Output (on a subset of iterations)
Step13: Expected Output
Step14: Amazing! Because adore has a similar embedding as love, the algorithm has generalized correctly even to a word it has never seen before. Words such as heart, dear, beloved or adore have embedding vectors similar to love, and so might work too---feel free to modify the inputs above and try out a variety of input sentences. How well does it work?
Step15: <font color='blue'>
Step17: 2.1 - Overview of the model
Step18: Run the following cell to check what sentences_to_indices() does, and check your results.
Step20: Expected Output
Step22: Expected Output
Step23: Run the following cell to create your model and check its summary. Because all sentences in the dataset are less than 10 words, we chose max_len = 10. You should see your architecture, it uses "20,223,927" parameters, of which 20,000,050 (the word embeddings) are non-trainable, and the remaining 223,877 are. Because our vocabulary size has 400,001 words (with valid indices from 0 to 400,000) there are 400,001*50 = 20,000,050 non-trainable parameters.
Step24: As usual, after creating your model in Keras, you need to compile it and define which loss, optimizer and metrics you want to use. Compile your model using categorical_crossentropy loss, the adam optimizer and ['accuracy'] metrics
Step25: It's time to train your model. Your Emojifier-V2 model takes as input an array of shape (m, max_len) and outputs probability vectors of shape (m, number of classes). We thus have to convert X_train (array of sentences as strings) to X_train_indices (array of sentences as list of word indices), and Y_train (labels as indices) to Y_train_oh (labels as one-hot vectors).
Step26: Fit the Keras model on X_train_indices and Y_train_oh. We will use epochs = 50 and batch_size = 32.
Step27: Your model should perform close to 100% accuracy on the training set. The exact accuracy you get may be a little different. Run the following cell to evaluate your model on the test set.
Step28: You should get a test accuracy between 80% and 95%. Run the cell below to see the mislabelled examples.
Step29: Now you can try it on your own example. Write your own sentence below.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from emo_utils import *
import emoji
import matplotlib.pyplot as plt
%matplotlib inline
X_train, Y_train = read_csv('data/train_emoji.csv')
X_test, Y_test = read_csv('data/tesss.csv')
maxLen = len(max(X_train, key=len).split())
index = 1
print(X_train[index], label_to_emoji(Y_train[index]))
Y_oh_train = convert_to_one_hot(Y_train, C = 5)
Y_oh_test = convert_to_one_hot(Y_test, C = 5)
index = 50
print(Y_train[index], "is converted into one hot", Y_oh_train[index])
word_to_index, index_to_word, word_to_vec_map = read_glove_vecs('data/glove.6B.50d.txt')
word = "cucumber"
index = 289846
print("the index of", word, "in the vocabulary is", word_to_index[word])
print("the", str(index) + "th word in the vocabulary is", index_to_word[index])
# GRADED FUNCTION: sentence_to_avg
def sentence_to_avg(sentence, word_to_vec_map):
"""
Converts a sentence (string) into a list of words (strings). Extracts the GloVe representation of each word
and averages its value into a single vector encoding the meaning of the sentence.
Arguments:
sentence -- string, one training example from X
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
Returns:
avg -- average vector encoding information about the sentence, numpy-array of shape (50,)
"""
### START CODE HERE ###
# Step 1: Split sentence into list of lower case words (≈ 1 line)
words = sentence.lower().split()
# Initialize the average word vector, should have the same shape as your word vectors.
avg = np.zeros((50,))
# Step 2: average the word vectors. You can loop over the words in the list "words".
for w in words:
    avg += word_to_vec_map[w]
avg = avg / len(words)
### END CODE HERE ###
return avg
avg = sentence_to_avg("Morrocan couscous is my favorite dish", word_to_vec_map)
print("avg = ", avg)
# GRADED FUNCTION: model
def model(X, Y, word_to_vec_map, learning_rate = 0.01, num_iterations = 400):
"""
Model to train word vector representations in numpy.
Arguments:
X -- input data, numpy array of sentences as strings, of shape (m, 1)
Y -- labels, numpy array of integers between 0 and 7, numpy-array of shape (m, 1)
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
learning_rate -- learning_rate for the stochastic gradient descent algorithm
num_iterations -- number of iterations
Returns:
pred -- vector of predictions, numpy-array of shape (m, 1)
W -- weight matrix of the softmax layer, of shape (n_y, n_h)
b -- bias of the softmax layer, of shape (n_y,)
"""
np.random.seed(1)
# Define number of training examples
m = Y.shape[0] # number of training examples
n_y = 5 # number of classes
n_h = 50 # dimensions of the GloVe vectors
# Initialize parameters using Xavier initialization
W = np.random.randn(n_y, n_h) / np.sqrt(n_h)
b = np.zeros((n_y,))
# Convert Y to Y_onehot with n_y classes
Y_oh = convert_to_one_hot(Y, C = n_y)
# Optimization loop
for t in range(num_iterations): # Loop over the number of iterations
for i in range(m): # Loop over the training examples
### START CODE HERE ### (≈ 4 lines of code)
# Average the word vectors of the words from the i'th training example
avg = sentence_to_avg(X[i], word_to_vec_map)
# Forward propagate the avg through the softmax layer
z = np.dot(W, avg) + b
a = softmax(z)
# Compute cost using the i'th training label's one hot representation and "a" (the output of the softmax)
cost = -np.sum(Y_oh[i] * np.log(a))
### END CODE HERE ###
# Compute gradients
dz = a - Y_oh[i]
dW = np.dot(dz.reshape(n_y,1), avg.reshape(1, n_h))
db = dz
# Update parameters with Stochastic Gradient Descent
W = W - learning_rate * dW
b = b - learning_rate * db
if t % 100 == 0:
print("Epoch: " + str(t) + " --- cost = " + str(cost))
pred = predict(X, Y, W, b, word_to_vec_map)
return pred, W, b
print(X_train.shape)
print(Y_train.shape)
print(np.eye(5)[Y_train.reshape(-1)].shape)
print(X_train[0])
print(type(X_train))
Y = np.asarray([5,0,0,5, 4, 4, 4, 6, 6, 4, 1, 1, 5, 6, 6, 3, 6, 3, 4, 4])
print(Y.shape)
X = np.asarray(['I am going to the bar tonight', 'I love you', 'miss you my dear',
'Lets go party and drinks','Congrats on the new job','Congratulations',
'I am so happy for you', 'Why are you feeling bad', 'What is wrong with you',
'You totally deserve this prize', 'Let us go play football',
'Are you down for football this afternoon', 'Work hard play harder',
'It is suprising how people can be dumb sometimes',
'I am very disappointed','It is the best day in my life',
'I think I will end up alone','My life is so boring','Good job',
'Great so awesome'])
pred, W, b = model(X_train, Y_train, word_to_vec_map)
print(pred)
print("Training set:")
pred_train = predict(X_train, Y_train, W, b, word_to_vec_map)
print('Test set:')
pred_test = predict(X_test, Y_test, W, b, word_to_vec_map)
X_my_sentences = np.array(["i adore you", "i love you", "funny lol", "lets play with a ball", "food is ready", "you are not happy"])
Y_my_labels = np.array([[0], [0], [2], [1], [4],[3]])
pred = predict(X_my_sentences, Y_my_labels , W, b, word_to_vec_map)
print_predictions(X_my_sentences, pred)
print(Y_test.shape)
print(' '+ label_to_emoji(0)+ ' ' + label_to_emoji(1) + ' ' + label_to_emoji(2)+ ' ' + label_to_emoji(3)+' ' + label_to_emoji(4))
print(pd.crosstab(Y_test, pred_test.reshape(56,), rownames=['Actual'], colnames=['Predicted'], margins=True))
plot_confusion_matrix(Y_test, pred_test)
import numpy as np
np.random.seed(0)
from keras.models import Model
from keras.layers import Dense, Input, Dropout, LSTM, Activation
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
np.random.seed(1)
from keras.initializers import glorot_uniform
# GRADED FUNCTION: sentences_to_indices
def sentences_to_indices(X, word_to_index, max_len):
"""
Converts an array of sentences (strings) into an array of indices corresponding to words in the sentences.
The output shape should be such that it can be given to `Embedding()` (described in Figure 4).
Arguments:
X -- array of sentences (strings), of shape (m, 1)
word_to_index -- a dictionary containing each word mapped to its index
max_len -- maximum number of words in a sentence. You can assume every sentence in X is no longer than this.
Returns:
X_indices -- array of indices corresponding to words in the sentences from X, of shape (m, max_len)
"""
m = X.shape[0] # number of training examples
### START CODE HERE ###
# Initialize X_indices as a numpy matrix of zeros and the correct shape
X_indices = np.zeros((m, max_len))
for i in range(m):  # loop over training examples
    # Convert the ith training sentence to lower case and split it into words. You should get a list of words.
    sentence_words = X[i].lower().split()
    # Initialize j to 0
    j = 0
    # Loop over the words of sentence_words
    for w in sentence_words:
        # Set the (i,j)th entry of X_indices to the index of the correct word.
        X_indices[i, j] = word_to_index[w]
        # Increment j to j + 1
        j = j + 1
### END CODE HERE ###
return X_indices
X1 = np.array(["funny lol", "lets play baseball", "food is ready for you"])
X1_indices = sentences_to_indices(X1,word_to_index, max_len = 5)
print("X1 =", X1)
print("X1_indices =", X1_indices)
# GRADED FUNCTION: pretrained_embedding_layer
def pretrained_embedding_layer(word_to_vec_map, word_to_index):
"""
Creates a Keras Embedding() layer and loads in pre-trained GloVe 50-dimensional vectors.
Arguments:
word_to_vec_map -- dictionary mapping words to their GloVe vector representation.
word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)
Returns:
embedding_layer -- pretrained layer Keras instance
"""
vocab_len = len(word_to_index) + 1 # adding 1 to fit Keras embedding (requirement)
emb_dim = word_to_vec_map["cucumber"].shape[0] # define dimensionality of your GloVe word vectors (= 50)
### START CODE HERE ###
# Initialize the embedding matrix as a numpy array of zeros of shape (vocab_len, dimensions of word vectors = emb_dim)
emb_matrix = np.zeros((vocab_len, emb_dim))
# Set each row "index" of the embedding matrix to be the word vector representation of the "index"th word of the vocabulary
for word, index in word_to_index.items():
    emb_matrix[index, :] = word_to_vec_map[word]
# Define Keras embedding layer with the correct output/input sizes. Make sure to set trainable=False
# so the pretrained GloVe vectors are not updated during training.
embedding_layer = Embedding(vocab_len, emb_dim, trainable=False)
### END CODE HERE ###
# Build the embedding layer, it is required before setting the weights of the embedding layer. Do not modify the "None".
embedding_layer.build((None,))
# Set the weights of the embedding layer to the embedding matrix. Your layer is now pretrained.
embedding_layer.set_weights([emb_matrix])
return embedding_layer
embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)
print("weights[0][1][3] =", embedding_layer.get_weights()[0][1][3])
# GRADED FUNCTION: Emojify_V2
def Emojify_V2(input_shape, word_to_vec_map, word_to_index):
"""
Function creating the Emojify-v2 model's graph.
Arguments:
input_shape -- shape of the input, usually (max_len,)
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)
Returns:
model -- a model instance in Keras
"""
### START CODE HERE ###
# Define sentence_indices as the input of the graph; it should be of shape input_shape and dtype 'int32' (as it contains indices).
sentence_indices = Input(shape=input_shape, dtype='int32')
# Create the embedding layer pretrained with GloVe vectors (≈1 line)
embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)
# Propagate sentence_indices through your embedding layer; you get back the embeddings
embeddings = embedding_layer(sentence_indices)
# Propagate the embeddings through an LSTM layer with 128-dimensional hidden state.
# Be careful: the returned output should be a batch of sequences.
X = LSTM(128, return_sequences=True)(embeddings)
# Add dropout with a probability of 0.5
X = Dropout(0.5)(X)
# Propagate X through another LSTM layer with 128-dimensional hidden state.
# Be careful: the returned output should be a single hidden state, not a batch of sequences.
X = LSTM(128, return_sequences=False)(X)
# Add dropout with a probability of 0.5
X = Dropout(0.5)(X)
# Propagate X through a Dense layer to get back a batch of 5-dimensional vectors.
X = Dense(5)(X)
# Add a softmax activation
X = Activation('softmax')(X)
# Create a Model instance which converts sentence_indices into X.
model = Model(inputs=sentence_indices, outputs=X)
### END CODE HERE ###
return model
model = Emojify_V2((maxLen,), word_to_vec_map, word_to_index)
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
X_train_indices = sentences_to_indices(X_train, word_to_index, maxLen)
Y_train_oh = convert_to_one_hot(Y_train, C = 5)
model.fit(X_train_indices, Y_train_oh, epochs = 50, batch_size = 32, shuffle=True)
X_test_indices = sentences_to_indices(X_test, word_to_index, max_len = maxLen)
Y_test_oh = convert_to_one_hot(Y_test, C = 5)
loss, acc = model.evaluate(X_test_indices, Y_test_oh)
print()
print("Test accuracy = ", acc)
# This code allows you to see the mislabelled examples
C = 5
y_test_oh = np.eye(C)[Y_test.reshape(-1)]
X_test_indices = sentences_to_indices(X_test, word_to_index, maxLen)
pred = model.predict(X_test_indices)
for i in range(len(X_test)):
x = X_test_indices
num = np.argmax(pred[i])
if(num != Y_test[i]):
print('Expected emoji:'+ label_to_emoji(Y_test[i]) + ' prediction: '+ X_test[i] + label_to_emoji(num).strip())
# Change the sentence below to see your prediction. Make sure all the words are in the Glove embeddings.
x_test = np.array(['you are not happy'])
X_test_indices = sentences_to_indices(x_test, word_to_index, maxLen)
print(x_test[0] +' '+ label_to_emoji(np.argmax(model.predict(X_test_indices))))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code::
import matplotlib.pyplot as plt
plt.scatter(x, y)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The next class is responsible for filtering out the lane detection area
Step2: Here is the result of getColorMask function, that turns all white and yellow objects into white ones and makes other colored objects black
Step3: The result of Canny edge detection
Step4: Result of getLaneLines transformed into lines
Step5: Result of lines sorting function in Cartesian coordinate system
Step6: The result
Step7: And the result of processFrame
Step8: Video processing entry point
Step9: Constants with image/video pathes for testing, pipeline initialization
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
import pandas
import os
def getPathFor(file_path):
current_directory = %pwd
path = os.path.join(current_directory, file_path)
print("About to open file: {}\n".format(path))
return path
class DetectionAreaFilter:
def __init__(self):
self._lower_yellow = np.array([20, 0, 170], dtype=np.uint8)
self._upper_yellow = np.array([55, 255, 255], dtype=np.uint8)
self._lower_white = np.array([0, 0, 220], dtype=np.uint8)
self._upper_white = np.array([255, 25, 255], dtype=np.uint8)
self._ignore_mask_color = 255
def getColorMask(self, hsv_image):
mask_yellow = cv2.inRange(hsv_image, self._lower_yellow, self._upper_yellow)
mask_white = cv2.inRange(hsv_image, self._lower_white, self._upper_white)
mask = cv2.add(mask_white, mask_yellow)
return mask
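For readers without OpenCV installed, `cv2.inRange` produces a 0/255 mask where every channel lies within the given bounds. A NumPy-only sketch of the same thresholding (the sample HSV pixel values are hypothetical, but the yellow bounds match the class above):

```python
import numpy as np

def in_range(img, lower, upper):
    # 255 where every channel of a pixel lies within [lower, upper], else 0 —
    # mirrors the semantics of cv2.inRange for multi-channel images.
    ok = np.all((img >= lower) & (img <= upper), axis=-1)
    return (ok * 255).astype(np.uint8)

hsv = np.array([[[30, 100, 200], [90, 10, 240]]], dtype=np.uint8)
lower_yellow = np.array([20, 0, 170])
upper_yellow = np.array([55, 255, 255])
mask_out = in_range(hsv, lower_yellow, upper_yellow)
# first pixel falls inside the yellow range, second does not
assert mask_out[0, 0] == 255 and mask_out[0, 1] == 0
```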
def applyDetectionArea(self, bw_image, width_adjustment=60, height_adjustment=65):
im_height = bw_image.shape[0]
im_half_height = im_height // 2
im_width = bw_image.shape[1]
im_half_width = im_width // 2
area_left_bottom = (0, im_height)
area_left_top = (im_half_width - width_adjustment, im_half_height + height_adjustment)
area_right_top = (im_half_width + width_adjustment, im_half_height + height_adjustment)
area_right_bottom = (im_width, im_height)
detection_area = [area_left_bottom, area_left_top, area_right_top, area_right_bottom]
vertices = np.array([detection_area], dtype=np.int32)
mask = np.zeros_like(bw_image)
cv2.fillPoly(mask, vertices, self._ignore_mask_color)
masked_image = cv2.bitwise_and(bw_image, mask)
return masked_image
def getEdges(image, low_threshold=50, high_threshold=150):
edges = cv2.Canny(image, low_threshold, high_threshold)
return edges
def getLaneLines(edges):
deg = np.pi/180
lines = cv2.HoughLines(edges, 1, 1*deg, 40)
if lines is None:
return np.array([])
points_array = list()
for line in lines:
for rho, theta in line:
points_array.append((rho, theta))
return np.array(points_array)
class CoordinateSorter:
def __init__(self, max_distance_delta, max_angle_delta, threshold):
if max_angle_delta < 0:
raise ValueError("[max_angle_delta] must be positive number")
if max_angle_delta < 0:
raise ValueError("[max_angle_delta] must be positive number")
if threshold < 1 or type(threshold) != int:
raise ValueError("[threshold] expected to be integer greater then or equal to 1")
self._max_point_distance = (max_distance_delta, max_angle_delta)
self._min_points_amount = threshold
def _sortPointsByDistance(self, points_dict):
set_list = list()
for key, value in points_dict.items():
indexes_set = set()
set_list.append(indexes_set)
indexes_set.add(key)
for inner_key, inner_value in points_dict.items():
point_distance = abs(np.subtract(value, inner_value))
if point_distance[0] <= self._max_point_distance[0] \
and point_distance[1] <= self._max_point_distance[1]:
indexes_set.add(inner_key)
return set_list
def _splitOnGroups(self, set_list_source):
sorted_source = list(set_list_source)
sorted_source.sort(key=len, reverse=True)
extremums = list()
def find_extremums(ordered_list_of_set_items):
if len(ordered_list_of_set_items) == 0:
return
first_extremum = ordered_list_of_set_items[0]
items_for_further_sorting = list()
for dot_set in ordered_list_of_set_items:
if dot_set.issubset(first_extremum):
continue
else:
if len(first_extremum.intersection(dot_set)):
first_extremum = first_extremum.union(dot_set)
else:
items_for_further_sorting.append(dot_set)
extremums.append(first_extremum)
find_extremums(items_for_further_sorting)
find_extremums(sorted_source)
filtered_extremums = filter(lambda x: len(x) >= self._min_points_amount, extremums)
return filtered_extremums
@staticmethod
def _getMedian(source_dict, key_set):
point_array = [source_dict[item] for item in key_set]
data_frame = pandas.DataFrame(data=point_array, columns=["distance", "angle"])
return data_frame["distance"].median(), data_frame["angle"].median()
def sort(self, points_array):
if len(points_array) < self._min_points_amount:
return []
points_dictionary = dict()
for index, coordinates in enumerate(points_array):
points_dictionary[index] = (int(coordinates[0]), coordinates[1])
point_set_list = self._sortPointsByDistance(points_dictionary)
point_groups = self._splitOnGroups(point_set_list)
resulting_points = [self._getMedian(points_dictionary, point_group) for point_group in point_groups]
return resulting_points
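The per-group representative that `_getMedian` computes above (a column-wise median over a group of near-duplicate Hough lines) can be sketched with plain NumPy on toy values:

```python
import numpy as np

# toy group of near-duplicate Hough lines as (rho, theta) pairs;
# the representative line is the per-column median, as in _getMedian above
group = np.array([[100.0, 0.50],
                  [102.0, 0.52],
                  [ 98.0, 0.49]])
rho_med, theta_med = np.median(group, axis=0)
```

Taking the median rather than the mean keeps a single outlier line from dragging the representative away from the cluster.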
def convert(rho, theta, y_min, y_max):
def create_point(y):
x = (rho - y*np.sin(theta))/np.cos(theta)
return int(x), int(y)
d1 = create_point(y_max)
d2 = create_point(y_min)
return d1, d2
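As a quick standalone sanity check (made-up rho/theta values; the `int()` cast used for pixel coordinates is dropped so the identity holds exactly), both endpoints produced by this kind of polar-to-Cartesian conversion must satisfy rho = x·cos(theta) + y·sin(theta):

```python
import numpy as np

rho, theta = 100.0, np.radians(30)

def polar_to_segment(rho, theta, y_min, y_max):
    # recover x from rho = x*cos(theta) + y*sin(theta)
    def point(y):
        return (rho - y * np.sin(theta)) / np.cos(theta), y
    return point(y_max), point(y_min)

(x1, y1), (x2, y2) = polar_to_segment(rho, theta, 0.0, 50.0)
```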
def drawLines(polar_coordinates_array, image, color, line_weight = 10):
y_max = image.shape[0]
y_min = int(y_max * 2 / 3)
lines = [convert(rho, theta, y_min, y_max) for rho, theta in polar_coordinates_array]
for d1, d2 in lines:
cv2.line(image, d1, d2, color, line_weight)
class ImageProcessor:
def __init__(self, detection_area_filter, coordinate_sorter):
self._bgr_line_color = (0, 0, 255)
self._detection_area_filter = detection_area_filter
self._coordinate_sorter = coordinate_sorter
def processFrame(self, bgr_frame):
frame = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
bw_color_mask = self._detection_area_filter.getColorMask(frame)
bw_area = self._detection_area_filter.applyDetectionArea(bw_color_mask)
bw_edges = getEdges(bw_area)
polar_lane_coordinates = getLaneLines(bw_edges)
average_polar_lane_coordinates = self._coordinate_sorter.sort(polar_lane_coordinates)
lines_image = np.zeros(bgr_frame.shape, dtype=np.uint8)
drawLines(average_polar_lane_coordinates, lines_image, self._bgr_line_color)
result_image = cv2.addWeighted(lines_image, 0.9, bgr_frame, 1, 0)
return result_image
def _convert_bw_2_color(self, bw_image):
return np.dstack((bw_image, bw_image, bw_image))
def showImage(file_path):
def convert(image):
return image[..., [2, 1, 0]]
image_path = getPathFor(file_path)
rgb_image = mpimg.imread(image_path)
bgr_frame = convert(rgb_image)
frame = img_processor.processFrame(bgr_frame)
rgb_frame = convert(frame)
plt.imshow(rgb_frame)
plt.show()
def playVideo(file_path):
video_path = getPathFor(file_path)
video = cv2.VideoCapture(video_path)
print("About to start video playback...")
while video.isOpened():
_, bgr_frame = video.read()
if not isinstance(bgr_frame, np.ndarray):
# workaround to handle end of video stream.
break
frame = img_processor.processFrame(bgr_frame)
cv2.imshow("output", frame)
key = cv2.waitKey(1) & 0xFF
# stop video on ESC key pressed
if key == 27:
break
print("Video has been closed successfully.")
video.release()
cv2.destroyAllWindows()
image1 = "input/test_images/solidWhiteCurve.jpg"
image2 = "input/test_images/solidWhiteRight.jpg"
image3 = "input/test_images/solidYellowCurve.jpg"
image4 = "input/test_images/solidYellowCurve2.jpg"
image5 = "input/test_images/solidYellowLeft.jpg"
image6 = "input/test_images/whiteCarLaneSwitch.jpg"
video1 = "input/test_videos/challenge.mp4"
video2 = "input/test_videos/solidYellowLeft.mp4"
video3 = "input/test_videos/solidWhiteRight.mp4"
detection_area_filter = DetectionAreaFilter()
max_distance_delta = 40 # max distance between lines (rho1 - rho2) in polar coordinate system
max_angle_delta = np.radians(4) # max angle between lines (theta1 - theta2) in polar coordinate system
threshold = 3 # min amount of lines in set filter
coordinate_sorter = CoordinateSorter(max_distance_delta, max_angle_delta, threshold)
img_processor = ImageProcessor(detection_area_filter, coordinate_sorter)
showImage(image4)
#playVideo(video1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading the data
Step2: Content example
Step3: Splitting data between train/test
Step4: split used for convenience on the average by movie baseline
Step5: cleaning
Step6: Matrix used for fast evaluation
Step7: Baseline
Step8: Raw matrices are used for convenience and clarity.
Step9: Train and test dataset
Step10: Non-negative Matrix Factorization
Step11: We see that it quickly gets better than the baseline.
Step12: This is due to the high sparsity of the matrix.
Step13: Despite good results in a few seconds on this dataset, this can only get us so far.
Step14: Let's see some predictions
Step15: We usually see an important difference between users, so we need to take the bias into account.
Step16: We have not been very successful with incorporating the bias and L1 into that implementation...
Step17:
Step18: For reference, let's compare it with sklearn's TSNE
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from random import random
import math
import numpy as np
import copy
from scipy import stats
import matplotlib.pyplot as plt
import pickle as pkl
from scipy.spatial import distance
import seaborn as sns
sns.set_style('darkgrid')
def loadMovieLens(path='./data/movielens'):
#Get movie titles
movies={}
rev_movies={}
for idx,line in enumerate(open(path+'/u.item')):
idx,title=line.split('|')[0:2]
movies[idx]=title
rev_movies[title]=idx
# Load data
prefs={}
for line in open(path+'/u.data'):
(user,movieid,rating,ts)=line.split('\t')
prefs.setdefault(user,{})
prefs[user][movies[movieid]]=float(rating)
return prefs,rev_movies
data,movies = loadMovieLens("data/ml-100k")
data['3']
def getRawArray(data):
d = []
for u in data.keys():
for i in data[u].keys():
d.append([u,i,data[u][i]])
return np.array(d)
# splitting while avoiding to reduce the dataset too much
def split_train_test(data,percent_test):
test={}
train={}
movie={}
for u in data.keys():
test.setdefault(u,{})
train.setdefault(u,{})
for movie in data[u]:
#print(data[u][movie])
if (random()<percent_test):
test[u][movie]=data[u][movie]
else:
train[u][movie]=data[u][movie]
return train, test
def split_train_test_by_movies(data,percent_test):
test={}
train={}
movie={}
for u in data.keys():
for movie in data[u]:
if (random()<percent_test):
try:
test[movie][u]=data[u][movie]
except KeyError:
test.setdefault(movie,{})
test[movie][u]=data[u][movie]
else:
try:
train[movie][u]=data[u][movie]
except KeyError:
train.setdefault(movie,{})
train[movie][u]=data[u][movie]
return train, test
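A self-contained sketch of the same splitting idea on synthetic data (a hypothetical single user with 1000 ratings): each rating lands in test with probability 0.2, so the observed test fraction should hover around that value:

```python
import random

random.seed(0)
ratings = {'m%d' % i: 3.0 for i in range(1000)}   # hypothetical user with 1000 ratings
train_part, test_part = {}, {}
for movie, rating in ratings.items():
    (test_part if random.random() < 0.2 else train_part)[movie] = rating
```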
percent_test=0.2
train,test=split_train_test(data,percent_test)
percent_test=0.2
m_train,m_test=split_train_test_by_movies(data,percent_test)
def deleteUnseenInTest(train,test):
for k in test.keys():
try:
train[k]
except KeyError:
test.pop(k,None)
def deleteUnknowData(triplet_test, trainUsers, trainItems) :
to_Del = []
for i,t in enumerate(triplet_test):
if not t[0] in trainUsers:
to_Del.append(i)
elif not t[1] in trainItems:
to_Del.append(i)
return np.delete(triplet_test, to_Del, 0)
deleteUnseenInTest(train,test)
deleteUnseenInTest(m_train,m_test)
len(test)
def getTriplet(data):
triplet = []
for u in data.keys():
for i in data[u].keys():
triplet.append([u,i,data[u][i]])
return triplet
def getDataByUsers(triplet) :
dataByUsers = {}
for t in triplet:
if not t[0] in dataByUsers.keys():
dataByUsers[t[0]] = {}
dataByUsers[t[0]][t[1]] = float(t[2])
return dataByUsers
def getDataByItems(triplet) :
dataByItems = {}
for t in triplet:
if not t[1] in dataByItems.keys():
dataByItems[t[1]] = {}
dataByItems[t[1]][t[0]] = float(t[2])
return dataByItems
# Split the full set of triplets
def splitTrainTest(triplet, testProp) :
perm = np.random.permutation(triplet)
splitIndex = int(testProp * len(triplet))
return perm[splitIndex:], perm[:splitIndex]
# remove from the test data the entries unseen in train
def deleteUnknowData(triplet_test, trainUsers, trainItems) :
to_Del = []
for i,t in enumerate(triplet_test):
if not t[0] in trainUsers:
to_Del.append(i)
elif not t[1] in trainItems:
to_Del.append(i)
return np.delete(triplet_test, to_Del, 0)
%%time
triplet = getTriplet(data)
# split 80% train 20% test
arrayTrain, arrayTest = splitTrainTest(triplet , 0.2)
# train
trainUsers = getDataByUsers(arrayTrain)
trainItems = getDataByItems(arrayTrain)
#print len(triplet_test)
arrayTest = deleteUnknowData(arrayTest, trainUsers, trainItems)
#print len(triplet_test)
# test
testUsers = getDataByUsers(arrayTest)
testItems = getDataByItems(arrayTest)
arrayAll = getRawArray(data)
arrayTrain = getRawArray(train)
arrayTest = getRawArray(test)
arrayTest = deleteUnknowData(arrayTest,train,m_train)
arrayTest[:10,:10]
class baselineMeanUser:
def __init__(self):
self.users={}
def fit(self,train):
for user in train.keys():
note=0.0
for movie in train[user].keys():
note+=train[user][movie]
note=note/len(train[user])
self.users[user]=note
def predict(self,users):
return [self.users[u] for u in users]
baseline_mu= baselineMeanUser()
baseline_mu.fit(train)
pred = baseline_mu.predict(arrayTest[:,0])
print("Mean Error %0.6f" %(
    (np.array(pred) - np.array(arrayTest[:,2], float)) ** 2).mean())
class baselineMeanMovie:
def __init__(self):
self.movies={}
def fit(self,train):
for movie in train.keys():
note=0.0
for user in train[movie].keys():
note+=train[movie][user]
note=note/len(train[movie])
self.movies[movie]=note
def predict(self,movies):
res=[]
for m in movies:
try:
res.append(self.movies[m])
except:
res.append(3)
return res
baseline_mm= baselineMeanMovie()
baseline_mm.fit(m_train)
pred = baseline_mm.predict(arrayTest[:,1])
print("Mean Error %0.6f" %(
    (np.array(pred) - np.array(arrayTest[:,2], float)) ** 2).mean())
rawMatrix = np.zeros((len(data.keys()),1682))
for u in data:
for m in data[u]:
rawMatrix[int(u)-1][int(movies[m])-1] = data[u][m]
print(np.shape(rawMatrix))
rawMatrix[:5,:5]
rawMatrixTrain = np.zeros((len(data.keys()),1682))
for u in train:
for m in train[u]:
rawMatrixTrain[int(u)-1][int(movies[m])-1] = train[u][m]
rawMatrixTest = np.zeros((len(data.keys()),1682))
for u in test:
for m in test[u]:
rawMatrixTest[int(u)-1][int(movies[m])-1] = test[u][m]
#from scipy import linalg
def nmf(X, latent_features, max_iter=100, eps = 1e-5,printevery=100):
print "NMF with %d latent features, %d iterations."%(latent_features, max_iter)
# mask used to ignore null element (coded by zero)
mask = np.sign(X)
# randomly initialized matrix
rows, columns = X.shape
A = np.random.rand(rows, latent_features)
Y = np.random.rand(latent_features, columns)
    # Not used as I couldn't find significant improvements
    #Y = linalg.lstsq(A, X)[0] # initializing that way as recommended in a blog post
#Y = np.maximum(Y, eps) # avoiding too low values
masked_X = mask * X
masktest = np.sign(rawMatrixTest) # used for prints
masktrain = np.sign(rawMatrixTrain) # used for prints
for i in range(1, max_iter + 1):
top = np.dot(masked_X, Y.T)
bottom = (np.dot((mask * np.dot(A, Y)), Y.T)) + eps
A *= top / bottom
top = np.dot(A.T, masked_X)
bottom = np.dot(A.T, mask * np.dot(A, Y)) + eps
Y *= top / bottom
# evaluation
if i % printevery == 0 or i == 1 or i == max_iter:
X_est = np.dot(A, Y)
q = masktest*X_est - rawMatrixTest
q_train = masktrain*X_est - rawMatrixTrain
print "Iteration %d, Err %.05f, Err train %.05f"%( i, (q*q).sum()/ masktest.sum(), (q_train*q_train).sum()/ masktest.sum() )
return A, Y
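As a standalone sanity check of the multiplicative updates used above (Lee–Seung style, here on a small fully observed toy matrix, so the mask is all ones), the reconstruction error should go down over iterations:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.rand(6, 2).dot(rng.rand(2, 5))   # exactly rank-2 and non-negative
A = rng.rand(6, 2)
Y = rng.rand(2, 5)
eps = 1e-9
errs = []
for _ in range(200):
    A *= X.dot(Y.T) / (A.dot(Y).dot(Y.T) + eps)
    Y *= A.T.dot(X) / (A.T.dot(A).dot(Y) + eps)
    errs.append(((X - A.dot(Y)) ** 2).mean())
```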
%%time
A,Y = nmf(rawMatrixTrain,100,eps = 1e-5,max_iter=5,printevery=1)
resMatrix = A.dot(Y)
%%time
A,Y = nmf(rawMatrixTrain,50,eps = 1e-5,max_iter=500,printevery=100)
resMatrix = A.dot(Y)
%%time
A,Y = nmf(rawMatrixTrain,1,eps = 1e-5,max_iter=100,printevery=20)
resMatrix = A.dot(Y)
## This class is used to make predictions
class evalMF:
def __init__(self,resMatrix,dicU,dicI):
self.resMatrix=resMatrix
self.dicU = dicU
self.dicI = dicI
def fit(self):
pass
def predict(self,user,movie):
return self.resMatrix[int(user)-1][int(self.dicI[movie])-1]
mf = evalMF(resMatrix,data,movies)
# np.array([ (float(ra[2]) - mf.predict(ra[0],ra[1]))**2 for ra in arrayTest]).mean()
# faster evaluation
masqueTest=np.sign(rawMatrixTest)
q = masqueTest*resMatrix - rawMatrixTest
(q*q).sum()/ masqueTest.sum()
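The masked evaluation above only counts entries where a rating actually exists (zeros encode "missing"). A toy example using the same zero-means-missing convention:

```python
import numpy as np

observed = np.array([[5., 0.],
                     [3., 1.]])           # 0. encodes "no rating"
predicted = np.array([[4., 2.],
                      [3., 2.]])
mask = np.sign(observed)                  # 1 where a rating exists
residual = mask * predicted - observed
mse = (residual ** 2).sum() / mask.sum()  # ((5-4)^2 + (3-3)^2 + (1-2)^2) / 3
```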
print data["1"]["Akira (1988)"]
print mf.predict("1","Akira (1988)")
print data["1"]["I.Q. (1994)"]
print mf.predict("1","I.Q. (1994)")
summ=0
for i in data["1"]:
summ+=(float(data["1"][i]) - mf.predict("1",i))**2
summ/len(data["1"])
summ=0
for i in data["3"]:
summ+=(float(data["3"][i]) - mf.predict("3",i))**2
summ/len(data["3"])
class FactoMatriceBiais():
def __init__(self, k, epsilon=1e-3, nbIter=2000, lamb=0.5):
self.k = k
self.lamb = lamb
self.epsilon = epsilon
self.nbIter = nbIter
def fit(self, trainUsers, trainItems, triplet):
self.p = {}
self.q = {}
        self.bu = {} # user biases
        self.bi = {} # item biases
self.mu = np.random.random() * 2 - 1
        for j in range(len(triplet)): # initialize missing entries at random
u = triplet[j][0]
i = triplet[j][1]
if not u in self.p:
                self.p[u] = np.random.rand(1,self.k) # row vector for a user
self.bu[u] = np.random.rand() * 2 - 1
if not i in self.q:
                self.q[i] = np.random.rand(self.k,1) # column vector for an item
self.bi[i] = np.random.rand() * 2 - 1
loss = []
for it in range(self.nbIter):
ind = np.random.randint(len(triplet))
u = triplet[ind][0]
i = triplet[ind][1]
tmp = trainUsers[u][i] - (self.mu + self.bi[i] + self.bu[u] +self.p[u].dot(self.q[i])[0][0])
self.p[u] = (1 - self.lamb * self.epsilon) * self.p[u] + self.epsilon * 2 * tmp * self.q[i].transpose()
self.bu[u] = (1 - self.lamb * self.epsilon) * self.bu[u] + self.epsilon * 2 * tmp
self.q[i] = (1 - self.lamb * self.epsilon) * self.q[i] + self.epsilon * 2 * tmp * self.p[u].transpose()
self.bi[i] = (1 - self.lamb * self.epsilon) * self.bi[i] + self.epsilon * 2 * tmp
self.mu = (1 - self.lamb * self.epsilon) * self.mu + self.epsilon * 2 * tmp
            loss.append(tmp*tmp) # error without regularization
#loss.append(tmp**2 + self.lamb *(np.linalg.norm(self.p[u]).sum()**2 + np.linalg.norm(self.q[i]).sum()**2))
if ((it)%(self.nbIter*0.2) == 0) :
                print "iteration:", it
                print "loss:", np.mean(loss)
print "-------"
loss = []
def predict(self, triplet_test):
pred = np.zeros(len(triplet_test))
for ind,t in enumerate(triplet_test):
pred[ind] = self.mu + self.bu[t[0]] + self.bi[t[1]] + self.p[t[0]].dot(self.q[t[1]])[0][0]
return pred
def score(self, triplet_test) :
return ((self.predict(triplet_test) - np.array(triplet_test[:,2], float)) ** 2).mean()
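The SGD updates in `fit` come from the gradient of the squared error of a single rating, err = r − (mu + bu + bi + p·q). A standalone numerical check of dL/dp on toy values (regularization omitted):

```python
import numpy as np

rng = np.random.RandomState(1)
p, q = rng.rand(3), rng.rand(3)
mu, bu, bi, r = 0.2, -0.1, 0.3, 4.0

def loss(p_vec):
    err = r - (mu + bu + bi + p_vec.dot(q))
    return err ** 2

err = r - (mu + bu + bi + p.dot(q))
analytic = -2.0 * err * q                 # dL/dp, the direction used in the update
h = 1e-6
numeric = np.array([(loss(p + h * e) - loss(p - h * e)) / (2 * h)
                    for e in np.eye(3)])
```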
%%time
k = 10
epsilon = 7e-3
nbIter = 20*len(arrayTrain)
lamb = 0.2
model = FactoMatriceBiais(k, epsilon=epsilon, nbIter=nbIter,lamb=lamb)
model.fit(trainUsers, trainItems, arrayTrain)
print "test error:", model.score(arrayTest)
class tSNE():
def __init__(self,perp, nIter, lr, moment, dim=2):
self.perp = perp # entre 5 et 50
self.nIter = nIter
self.lr = lr
self.moment = moment
self.dim = dim
def fit(self,data):
nEx = np.shape(data)[0]
        # distance matrix of ||xi - xj||^2
normx = np.sum(data**2,1)
normx = np.reshape(normx, (1, nEx))
distancex = normx + normx.T - 2 * data.dot(data.T)
        # Compute the sigmas -------------------------------------------------------------#
lperp = np.log2(self.perp)
        # initialize the bounds for the binary search #
sup = np.ones((nEx,1)) * np.max(distancex)
inf = np.zeros((nEx,1))
self.sigma = (sup + inf) / 2.
        # binary search #
stop = False
while not stop:
            # compute the matrix of p(i|j)
self.pcond = np.exp(-distancex / (2. * (self.sigma**2)))
self.pcond = self.pcond / np.sum(self.pcond - np.eye(nEx),1).reshape(nEx,1)
            # compute the entropy of p(i|j)
entropy = - np.sum(self.pcond * np.log2(self.pcond), 0)
            # update the bounds
            # if sigma must be increased
up = entropy < lperp
inf[up,0] = self.sigma[up,0]
            # if sigma must be decreased
down = entropy > lperp
sup[down,0] = self.sigma[down,0]
            # update sigma and check the stopping condition
old = self.sigma
self.sigma = ((sup + inf) / 2.)
if np.max(np.abs(old - self.sigma)) < 1e-5:
stop = True
#print np.exp(entropy)
#print self.sigma.T
#--------------------------------------------------------------------------#
        # initialize y
self.embeddings = np.zeros((self.nIter+2, nEx, self.dim))
self.embeddings[1] = np.random.randn(nEx, self.dim) * 1e-4
#--------------------------------------------------------------------------#
# p(ij)
self.pij = (self.pcond + self.pcond.T) / (2.*nEx)
np.fill_diagonal(self.pij, 0)
        # gradient descent
#loss = []
for t in xrange(1,self.nIter+1):
            # distance matrix
normy = np.sum((self.embeddings[t]**2),1)
normy = np.reshape(normy, (1, nEx))
distancey = normy + normy.T - 2 * self.embeddings[t].dot(self.embeddings[t].T)
# q(ij)
# self.qij = (distancey.sum() + nEx*(nEx-1)) / (1 + distancey)
# np.fill_diagonal(self.qij, 0)
self.qij = 1 / (1 + distancey)
np.fill_diagonal(self.qij, 0)
self.qij = self.qij / self.qij.sum()
            # gradient descent step
yt = self.embeddings[t]
tmpgrad = 4 * ((self.pij - self.qij) / (1 + distancey)).reshape(nEx, nEx,1)
for i in range(nEx):
dy = (tmpgrad[i] * (yt[i]-yt)).sum(0)
self.embeddings[t+1][i] = yt[i] - self.lr * dy + self.moment * (yt[i] - self.embeddings[t-1,i])
#l = stats.entropy(self.qij, self.pij, 2).mean()
#loss.append(l)
#if (t % 100 == 0):
# print t,l
#if (t % 100 == 0):
# print t
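A self-contained check (random toy matrix) of the symmetrization used in `fit`: if each row of the conditional matrix sums to 1, then pij = (pcond + pcondᵀ)/(2N) is symmetric and sums to 1 overall:

```python
import numpy as np

rng = np.random.RandomState(0)
n = 5
pcond = rng.rand(n, n)
np.fill_diagonal(pcond, 0)
pcond /= pcond.sum(axis=1, keepdims=True)   # each conditional row sums to 1
pij = (pcond + pcond.T) / (2 * n)           # symmetrized joint probabilities
```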
X_ini = np.vstack([data.data[data.target==i]
for i in range(10)])
cols = np.hstack([data.target[data.target==i]
for i in range(10)])
%%time
from sklearn import datasets
from scipy import stats
data = datasets.load_digits()
model = tSNE(10,500,1000,0)
model.fit(X_ini)
palette = np.array(sns.color_palette("hls", 10))
t = np.shape(model.embeddings)[0] -1
# We create a scatter plot.
f = plt.figure(figsize=(8, 8))
ax = plt.subplot(aspect='equal')
sc = ax.scatter(model.embeddings[t,:,0], model.embeddings[t,:,1], lw=0, s=40,
c=palette[cols.astype(np.int)])
plt.xlim(-25, 25)
plt.ylim(-25, 25)
ax.axis('off')
ax.axis('tight')
#plt.plot(mod.embedding_[12][0],mod.embedding_[12][1], 'bv')
plt.show()
from sklearn.manifold import TSNE
mod = TSNE(random_state=1337)
%%time
X = mod.fit_transform(X_ini)
palette = np.array(sns.color_palette("hls", 10))
# We create a scatter plot.
f = plt.figure(figsize=(8, 8))
ax = plt.subplot(aspect='equal')
sc = ax.scatter(X[:,0], X[:,1], lw=0, s=40,
c=palette[cols.astype(np.int)])
plt.xlim(-25, 25)
plt.ylim(-25, 25)
ax.axis('off')
ax.axis('tight')
#plt.plot(mod.embedding_[12][0],mod.embedding_[12][1], 'bv')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Everything looks good with c, s, and cxs. Now to check the down-sampled images as well as luminance.
|
<ASSISTANT_TASK:>
Python Code:
import os
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'
import numpy as np
np.set_printoptions(threshold=np.nan)
import tensorflow as tf
import time
import pandas as pd
import matplotlib.pyplot as plt
import progressbar
data_path = 'https://raw.githubusercontent.com/michaelneuder/image_quality_analysis/master/data/sample_data/'
contrast_file_data = pd.read_csv('{}contrast_500.csv'.format(data_path), header=None)
structure_file_data = pd.read_csv('{}structure_500.csv'.format(data_path), header=None)
cxs_file_data = pd.read_csv('{}cxs_500.csv'.format(data_path), header=None)
contrast_file_data.shape
contrast_data = np.reshape(contrast_file_data.values, (500,86,86))
structure_data = np.reshape(structure_file_data.values, (500,86,86))
cxs_data = np.reshape(cxs_file_data.values, (500,86,86))
f, axarr = plt.subplots(nrows=3,ncols=3, figsize=(9,9))
for i in range(3):
index = np.random.randint(500)
axarr[i,0].imshow(contrast_data[index,:,:], cmap='viridis')
axarr[i,1].imshow(structure_data[index,:,:], cmap='BuPu')
axarr[i,2].imshow(cxs_data[index,:,:], cmap='copper')
axarr[0,0].set_title('contrast', size=15)
axarr[0,1].set_title('structure', size=15)
axarr[0,2].set_title('product', size=15)
for ax_row in axarr:
for ax in ax_row:
ax.set_xticklabels([])
ax.set_yticklabels([])
# plt.savefig('c_s_cxs.png')
plt.show()
print(contrast_data[0][0][0]*structure_data[0][0][0])
print(cxs_data[0][0][0])
ds2_orig_file_data = pd.read_csv('{}ds2_orig_500.csv'.format(data_path), header=None)
ds2_recon_file_data = pd.read_csv('{}ds2_recon_500.csv'.format(data_path), header=None)
luminance_file_data = pd.read_csv('{}luminance_500.csv'.format(data_path), header=None)
ds2_orig_file_data.shape
ds2_orig = np.reshape(ds2_orig_file_data.values, (500, 24, 24))
ds2_recon = np.reshape(ds2_recon_file_data.values, (500, 24, 24))
luminance = np.reshape(luminance_file_data.values, (500, 14, 14))
f, axarr = plt.subplots(nrows=3,ncols=3, figsize=(9,9))
for i in range(3):
index = np.random.randint(500)
axarr[i,0].imshow(ds2_orig[index,:,:], cmap='gray')
    axarr[i,1].imshow(ds2_recon[index,:,:], cmap='gray')
axarr[i,2].imshow(luminance[index,:,:], cmap='copper')
axarr[0,0].set_title('orig ds2', size=15)
axarr[0,1].set_title('recon ds2', size=15)
axarr[0,2].set_title('luminance', size=15)
for ax_row in axarr:
for ax in ax_row:
ax.set_xticklabels([])
ax.set_yticklabels([])
# plt.savefig('c_s_cxs.png')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A potential drawback of using a list comprehension is that it can produce a very large result set when the input is very large, consuming a lot of memory. If you are sensitive to memory use, you can use a generator expression to produce the filtered elements iteratively. For example:
Step2: Sometimes the filtering criteria are too complex to express simply in a list comprehension or generator expression — for example, when filtering needs to handle exceptions or other complicated cases. In that case, put the filtering code into a function and use the built-in filter() function. For example:
Step3: The filter() function creates an iterator, so if you want a list you have to convert it with list(), as in the example.
Step4: A variant of filtering is to replace the values that don't meet the criteria with new values instead of discarding them. For example, in a column of data you may want not only to find the positive numbers but also to replace the non-positive numbers with a specified value. Moving the filter criterion into a conditional expression makes this easy, like this:
Step5: Another filtering tool worth knowing is itertools.compress(), which takes an iterable and a corresponding sequence of Boolean selectors as inputs, and outputs the elements of the iterable whose corresponding selector is True. It is very useful when you need to filter one sequence using another, related sequence. For example, suppose you have the following two columns of data:
Step6: Now, to output all addresses whose corresponding count value is greater than 5, you can do this:
|
<ASSISTANT_TASK:>
Python Code:
mylist = [1, 4, -5, 10, -7, 2, 3, -1]
[n for n in mylist if n > 0]
[n for n in mylist if n < 0]
pos = (n for n in mylist if n > 0)
pos
for x in pos:
print(x)
values = ['1', '2', '-3', '-', '4', 'N/A', '5']
def is_int(val):
try:
x = int(val)
return True
except ValueError:
return False
ivals = list(filter(is_int, values))
print(ivals)
mylist = [1, 4, -5, 10, -7, 2, 3, -1]
import math
[math.sqrt(n) for n in mylist if n > 0]
clip_neg = [n if n > 0 else 0 for n in mylist]
clip_neg
clip_pos = [n if n < 0 else 0 for n in mylist]
clip_pos
addresses = [
'5412 N CLARK',
'5148 N CLARK',
'5800 E 58TH',
    '2122 N CLARK',
    '5645 N RAVENSWOOD',
'1060 W ADDISON',
'4801 N BROADWAY',
'1039 W GRANVILLE',
]
counts = [ 0, 3, 10, 4, 1, 7, 6, 1]
from itertools import compress
more5 = [n > 5 for n in counts]
more5
list(compress(addresses, more5))
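As a standalone sketch with toy data, compress() is equivalent to a comprehension over zipped (value, selector) pairs:

```python
from itertools import compress

values = ['a', 'b', 'c', 'd']
selectors = [True, False, True, False]
picked = list(compress(values, selectors))
# the equivalent comprehension over zipped pairs
also_picked = [v for v, keep in zip(values, selectors) if keep]
```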
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Respirometer model in pyIDEAS
Step2: We define the model equations and set up the model
Step3: Reading in the observations
Step4: Decubber thesis
Step5: So, we have three datasets to work with
Step6: MC filtering
Step7: combine some output runs
Step8: An alternative objective function
Step9: Confronting the measurements and the model in the optimization class
Step10: Adding a relative measurement error for the measurements (based on the noise of the measurement)
Step11: We assume that only the kinetic parameters are uncertain and need to be estimated
Step12: Running a local optimization (actually a maximum likelihood special case situation)
Step14: Exploration of using pyIDEAS in emcee Hammer
Step16: Parameter estimation of the parameters $\mu_{max}$, $K_S$ and $\tau$
Step17: Assuming Gaussian errors, without any measurement error
Step18: Uniform priors within specified regions
Step19: Combining the probabilities of the prior and the Gaussian likelihood function
Step20: Applying the emcee Hammer library to sample the posterior
Step21: Check the different sampler chains
Step22: Maximum Likelihood estimation
Step23: This does indeed agree with the density regions found by the markov chain sampler
|
<ASSISTANT_TASK:>
Python Code:
# https://github.com/dfm/corner.py
import corner
import hydropy as hp
mpl.rcParams['font.size'] = 16
mpl.rcParams['axes.labelsize'] = 18
mpl.rcParams['xtick.labelsize'] = 16
mpl.rcParams['ytick.labelsize'] = 16
import pylab as p
p.rc('mathtext', default='it')
from biointense.model import Model
from biointense.measurements import ode_measurements
from biointense.optimisation import ParameterOptimisation
from biointense.sensitivity import NumericalLocalSensitivity
from biointense.confidence import BaseConfidence
system = {'dS': '-(1. - exp(-t/tau))*(1/Y)*mu_max*S/(S+K_S)*X',
'dX': '(1. - exp(-t/tau))*(1/Y)*mu_max*S/(S+K_S)*X - b*X',
'dDO': '369.73*(8.4-DO)-((1-Y)/Y)*(1.0 - exp(-t/tau))*mu_max*S/(S+K_S)*X',
'OURex': '((1.- exp(-t/tau))*mu_max*(1-Y)/Y*S/(S+K_S)*X)/(24.*60.)'}
parameters = {'mu_max': 4., 'K_S': 0.4, 'b': 0.62, 'Y': 0.78,
'tau':2.25e-04}
timestepping = np.linspace(0, 0.05, 1000)
respiro_model = Model('Respirometry', system, parameters)
respiro_model.set_independent({'t': timestepping})
respiro_model.set_initial({'S': 58.48, 'X': 675, 'DO':8.4})
respiro_model.initialize_model()
output = respiro_model.run()
axs = output.plot(subplots=True, figsize=(12, 8))
output.index = timestepping
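The growth term in the equations above is Monod kinetics, mu_max·S/(S + K_S). A standalone sketch with the parameter values from the dictionary above: at S = K_S the rate is exactly half of mu_max, and it saturates towards mu_max for large S:

```python
import numpy as np

mu_max, K_S = 4.0, 0.4           # values from the parameters dict above

def monod(S):
    return mu_max * S / (S + K_S)

half_rate = monod(K_S)           # equals mu_max / 2 at the half-saturation point
saturated = monod(1e6)           # approaches mu_max
```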
observations = pd.read_csv("respirometer_data.txt", sep="\t", index_col=0,
names=["DO", "OURex"], skiprows=2)
observations.index = observations.index/(60*24) #everything to days
observations.index.name='t'
observations.head()
observations.plot(subplots=True, figsize=(12, 8))
0.015*24*60
(observations.index*24*60)[1300]
observations["OURex"].plot()
output["OURex"].plot()
cubber_ref = pd.read_csv("../thesis_decubber/0508Aall_pieken.csv")[['NewTime','DO', 'OUR filt']]
cubber_ref = cubber_ref.set_index('NewTime')
cubber_ref.plot()
cubber1 = pd.read_csv("../thesis_decubber/0508A1_pieken.csv")[['NewTime','DO', 'OUR filt']]
cubber1 = cubber1.set_index('NewTime')
cubber5 = pd.read_csv("../thesis_decubber/0508A5_pieken.csv")[['NewTime','DO', 'OUR filt']]
cubber5 = cubber5.set_index('NewTime')
cubber1['DO'].plot()
cubber5['DO'].plot()
def load_runs(ids=[4, 6], lengte=2745, N=30000):
allruns = np.zeros((N*len(ids), lengte))
allpars = np.zeros((N*len(ids), 2))
allSSE = np.zeros(N*len(ids))
for j,idrun in enumerate(ids):
allruns[j*N:(j+1)*N, :] = np.fromfile('respiro_'+str(N) + 'runs_' +str(idrun) + '.txt').reshape(N, lengte)
allpars[j*N:(j+1)*N, :] = np.fromfile('respiro_'+str(N) + 'pars_' +str(idrun) + '.txt').reshape(N, 2)
allSSE[j*N:(j+1)*N] = np.fromfile('respiro_'+str(N) + 'SSE_' +str(idrun) + '.txt')
return allruns, allpars, allSSE
runs, pars, SSE = load_runs()
from plot_functions_rev import Scatter_hist_withOF
import matplotlib.cm as cm
treshold = 1.
pars_behav = pars[SSE < treshold]
SSE_behav = SSE[SSE<treshold]
fig, axScatter, axHistx, axHisty,sc1 = Scatter_hist_withOF(pars[:,0], pars[:,1], data1b=pars_behav[:,0],
data2b=pars_behav[:,1], xbinwidth = 0.02, ybinwidth=0.05,
relative = False, SSE=SSE, SSEb=SSE_behav, vmin=0.7,
vmax=treshold*3, colormaps=cm.Greys_r,
cleanstyle = True,
roodlichter=0.5)
axScatter.set_xlabel(r'$\mu_{max}$',fontsize=16)
axScatter.set_ylabel(r'$K_S$',fontsize=16)
cbar = fig.colorbar(sc1, ax=axScatter, cmap='Greys', orientation='vertical',
ticks=[0.7, treshold, treshold*3], shrink=1.)
txts = cbar.ax.set_yticklabels(['0.7','<'+str(treshold),'> '+str(treshold*3)])
#plt.savefig('scatterhist1.pdf')
#plt.savefig('scatterhist1.png', dpi=300)
fig, axScatter, axHistx, axHisty,sc1 = Scatter_hist_withOF(pars[:,0], pars[:,1], data1b=pars_behav[:,0],
data2b=pars_behav[:,1], xbinwidth = 0.02, ybinwidth=0.05,
relative = False, SSE=SSE, SSEb=SSE_behav, vmin=0.7,
vmax=treshold, colormaps=cm.Greys_r,
cleanstyle = True)
axScatter.set_xlabel(r'$\mu_{max}$',fontsize=16)
axScatter.set_ylabel(r'$K_S$',fontsize=16)
cbar = fig.colorbar(sc1, ax=axScatter, cmap='Greys', orientation='vertical',
ticks=[0.7, treshold], shrink=1.)
txts = cbar.ax.set_yticklabels(['0.7','<'+str(treshold)])
#plt.savefig('scatterhist2.pdf')
#plt.savefig('scatterhist2.png', dpi=300)
def SSE_part(modelled, observed, slicet):
residuals = modelled[:slicet] - observed[:slicet]
return sum(residuals**2)
runs.shape
sse_short = np.apply_along_axis(SSE_part, 1, runs, respiro_optim.measurements.Data["OURex"].values, 1300)
plt.hist(sse_short)
treshold_short=0.3
pars_behav_short = pars[sse_short < treshold_short]
SSE_behav_short = sse_short[sse_short<treshold_short]
sse_short.max()
SSE_behav_short.min()
fig, axScatter, axHistx, axHisty,sc1 = Scatter_hist_withOF(pars[:,0], pars[:,1], xbinwidth = 0.02, ybinwidth=0.05,
relative = False, SSE=sse_short, vmin=0.16, vmax=3.2,
colormaps=cm.Greys_r,
cleanstyle = True)
axScatter.set_xlabel(r'$\mu_{max}$',fontsize=16)
axScatter.set_ylabel(r'$K_S$',fontsize=16)
cbar = fig.colorbar(sc1, ax=axScatter, cmap='Greys', orientation='vertical', shrink=1.)
#txts = cbar.ax.set_yticklabels(['0.05', '<'+str(treshold_short)])
#plt.savefig('scatterhist_start.pdf')
#plt.savefig('scatterhist_start.png', dpi=300)
fig, axScatter, axHistx, axHisty,sc1 = Scatter_hist_withOF(pars[:,0], pars[:,1], data1b=pars_behav_short[:,0],
data2b=pars_behav_short[:,1], xbinwidth = 0.02, ybinwidth=0.05,
relative = False, SSE=sse_short, SSEb=SSE_behav_short,
vmin=0.16, vmax=0.32, colormaps=cm.Greys_r,
cleanstyle = True)
axScatter.set_xlabel(r'$\mu_{max}$',fontsize=16)
axScatter.set_ylabel(r'$K_S$',fontsize=16)
cbar = fig.colorbar(sc1, ax=axScatter, cmap='Greys', orientation='vertical',
ticks=[0.16, treshold_short], shrink=1.)
txts = cbar.ax.set_yticklabels(['0.16', '<'+str(treshold_short)])
plt.savefig('scatterhist_start.pdf')
plt.savefig('scatterhist_start.png', dpi=300)
respiro_data = ode_measurements(pd.DataFrame(observations["OURex"]))
respiro_data.add_measured_errors({'OURex': 0.01})
respiro_optim = ParameterOptimisation(respiro_model, respiro_data, optim_par=['mu_max','K_S', 'tau'])
respiro_optim.local_optimize(pardict={'mu_max':3., 'K_S':0.6, 'tau': 2.26e-4}, obj_crit='wsse', method='Nelder-Mead')
#mumax_es = np.random.uniform(1.0, 4., N)
#K_s_es = np.random.uniform(0.1, 0.8, N)
plt.plot(respiro_optim.modmeas["Measured"]["OURex"])
plt.plot(respiro_optim.modmeas["Modelled"]["OURex"])
respiro_optim.dof
myrun = respiro_optim._run_model(dof_array=np.array([3.9, 0.4]))
#myrun
import emcee
def respiro_get_residuals(parameters):
    """run respiromodel and calculate the residuals as a numpy.array"""
mu_max, K_S = parameters
residuals = respiro_optim._run_model(dof_array=np.array([mu_max, K_S]))["OURex"] - \
respiro_optim.measurements.Data["OURex"]
return residuals.values
respiro_get_residuals([3.9, 0.4])
def lnprior(parameters):
mu_max, K_S = parameters
if 3.5 < mu_max < 4.2 and 0.1 < K_S < 0.9:
return 0.0
return -np.inf
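`lnprior` returns log-probability 0 (a constant) inside the box and −inf outside. The same idea in a generalized, standalone form (the `bounds` list structure is illustrative, not part of the notebook):

```python
import numpy as np

def log_uniform_prior(theta, bounds):
    # 0 (up to an additive constant) inside the box, -inf outside
    for value, (low, high) in zip(theta, bounds):
        if not (low < value < high):
            return -np.inf
    return 0.0

bounds = [(3.5, 4.2), (0.1, 0.9)]       # the same boxes as in lnprior
inside = log_uniform_prior([3.9, 0.4], bounds)
outside = log_uniform_prior([5.0, 0.4], bounds)
```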
# Gaussian likelihood, no measurement error, ...
def lnlike(parameters):
residuals = respiro_get_residuals(parameters)
observed = respiro_optim.measurements.Data["OURex"].values
N = len(observed)
sigma2 = np.std(observed)**2.
loglike = -N*np.log(2*np.pi)/2. - N*np.log(sigma2)/2. - ((residuals)**2).sum()/(2*sigma2)
return loglike
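The expression in `lnlike` is the log of a product of independent N(0, sigma²) densities of the residuals. A standalone numeric check with toy residuals:

```python
import numpy as np

residuals = np.array([0.1, -0.2, 0.05])
sigma2 = 0.09
N = len(residuals)
loglike = (-N * np.log(2 * np.pi) / 2.0
           - N * np.log(sigma2) / 2.0
           - (residuals ** 2).sum() / (2 * sigma2))
# direct evaluation of the product of normal densities
dens = np.exp(-residuals ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
```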
def lnprob(parameters):
lp = lnprior(parameters)
if not np.isfinite(lp):
return -np.inf
return lp + lnlike(parameters)
ndim, nwalkers = 2, 100
pos = [[3.9, 0.32] + 1e-4*np.random.randn(ndim) for i in range(nwalkers)]
respiro_optim.measurements.Data["OURex"].values
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob)
sampl_out = sampler.run_mcmc(pos, 500)
fig, ax = plt.subplots(2, 1)
for i in range(100):
ax[0].plot(sampler.chain[i, :, 0])
ax[1].plot(sampler.chain[i, :, 1])
ax[0].set_ylabel(r'$\mu_{max}$')
ax[1].set_ylabel(r'$K_S$')
print("Mean acceptance fraction: {0:.3f}".format(np.mean(sampler.acceptance_fraction)))
samples = sampler.chain[:, 200:, :].reshape((-1, ndim))
import corner
fig = corner.corner(samples, labels=["$\mu_{max}$", "$K_S$"],
quantiles=None, plot_contours=True)
fig.savefig('test.pdf')
sampler.acor
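The integrated autocorrelation times reported by `sampler.acor` can be turned into an effective sample size, roughly N_eff ~ N / tau. A hedged, self-contained sketch (editor addition, not from the original analysis) on a synthetic AR(1) chain, whose theoretical autocorrelation time is (1 + phi) / (1 - phi):

```python
import numpy as np

def effective_sample_size(n_samples, tau):
    # N_eff ~ N / tau: the number of roughly independent draws in a correlated chain
    return n_samples / tau

# synthetic AR(1) chain x_t = phi * x_{t-1} + noise
phi = 0.9
rng = np.random.RandomState(0)
chain = np.zeros(20000)
for i in range(1, len(chain)):
    chain[i] = phi * chain[i - 1] + rng.randn()

tau_theory = (1 + phi) / (1 - phi)  # = 19 for phi = 0.9
print(effective_sample_size(len(chain), tau_theory))
```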
temp = fig.get_axes()[2]
xlims = temp.get_xlim()
ylims = temp.get_ylim()
xlims, ylims
def respiro_get_residuals2(parameters):
    """run respiromodel and calculate the residuals as a numpy.array"""
mu_max, K_S, tau = parameters
residuals = respiro_optim._run_model(dof_array=np.array([mu_max, K_S, tau]))["OURex"] - \
respiro_optim.measurements.Data["OURex"]
return residuals.values
# Gaussian likelihood, no measurement error, ...
def lnlike2(parameters):
residuals = respiro_get_residuals2(parameters)
observed = respiro_optim.measurements.Data["OURex"].values
N = len(observed)
sigma2 = np.std(observed)**2.
loglike = -N*np.log(2*np.pi)/2. - N*np.log(sigma2)/2. - ((residuals)**2).sum()/(2*sigma2)
return loglike
def lnprior2(parameters):
mu_max, K_S, tau = parameters
if 3.5 < mu_max < 4.2 and 0.1 < K_S < 0.9 and 0.000226*0.5 < tau < 0.001*1.5:
return 0.0
return -np.inf
def lnprob2(parameters):
lp = lnprior2(parameters)
if not np.isfinite(lp):
return -np.inf
return lp + lnlike2(parameters)
ndim, nwalkers = 3, 100
pos = [[3.9, 0.32, 2.25e-04] + 1e-4*np.random.randn(ndim) for i in range(nwalkers)]
sampler2 = emcee.EnsembleSampler(nwalkers, ndim, lnprob2)
sampl_out2 = sampler2.run_mcmc(pos, 500)
fig, ax = plt.subplots(3, 1, figsize=(12, 8), sharex=True)
for i in range(100):
ax[0].plot(sampler2.chain[i, :, 0], c='0.3', linewidth=0.2)
ax[1].plot(sampler2.chain[i, :, 1], c='0.3', linewidth=0.2)
ax[2].plot(sampler2.chain[i, :, 2], c='0.3', linewidth=0.2)
ax[0].set_ylabel(r'$\mu_{max}$')
ax[0].set_yticks([3.8, 3.9, 4., 4.1])
ax[1].set_ylabel(r'$K_S$')
ax[1].set_yticks([0.1, 0.4, 0.7])
ax[2].set_ylabel(r'$\tau$')
ax[2].set_yticks([-0.0, 0.0002, 0.0004])
plt.savefig("mcmc_samples.pdf")
samples = sampler2.chain[:, 200:, :].reshape((-1, ndim))
import corner
fig= corner.corner(samples, labels=["$\mu_{max}$", "$K_S$", r"$\tau$"],
quantiles=None, plot_contours=True)
#fig.savefig('test.pdf')
axs = fig.get_axes()
#axs[8].xaxis.get_major_formatter().set_powerlimits((0, 1))
scale_pow = 4
from matplotlib import ticker
from matplotlib.ticker import FuncFormatter
def my_formatter_fun(x, p):
return "%.1f" % (x * (10 ** scale_pow)) + ' 10$^{{{0:d}}}$'.format((-1*scale_pow))
axs[6].get_yaxis().set_major_formatter(ticker.FuncFormatter(my_formatter_fun))
axs[8].get_xaxis().set_major_formatter(ticker.FuncFormatter(my_formatter_fun))
for ax in axs:
ax.yaxis.set_label_coords(-0.4, 0.5)
ax.xaxis.set_label_coords(0.5, -0.5)
for tick in axs[6].get_yaxis().get_major_ticks():
tick.set_pad(-2)
tick.label1 = tick._get_text1()
plt.setp(axs[6].yaxis.get_majorticklabels(), rotation=45 )
#axs[6].yaxis.get_major_formatter().set_scientific(True)
#axs[6].set_yticklabels(['1.6e$^{-4}$','2.4e$^{-4}$','3.2e$^{-4}$','4.0e$^{-4}$'])
plot_margin = 0.25
plt.savefig('mcmc_result_respirometry.pdf', bbox_inches='tight')
sampler2.acor
import scipy.optimize as op
nll = lambda *args: -lnlike2(*args)
result = op.minimize(nll, [3.9, 0.3, 2.25e-04], method='Nelder-Mead')
m_ml, b_ml, lnf_ml = result["x"]
result
m_ml, b_ml, lnf_ml
plt.plot(respiro_optim.modmeas["Measured"]["OURex"])
myrun = respiro_optim._run_model(dof_array=np.array([3.93, 0.45, 0.000215]))
myrun.OURex.plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Connect Scope
Step2: 3. Generate a clock of $50\%$ duty cycle and $10\,\mu$s period
Step3: 4. Generate a clock of $25\%$ duty cycle and $20\,\mu$s period
|
<ASSISTANT_TASK:>
Python Code:
from pynq.overlays.base import BaseOverlay
base = BaseOverlay("base.bit")
from pynq.lib import Pmod_PWM
pwm = Pmod_PWM(base.PMODA,0)
import time
# Generate a 10 us clock with 50% duty cycle
period=10
duty=50
pwm.generate(period,duty)
# Sleep for 4 seconds and stop the timer
time.sleep(4)
pwm.stop()
import time
# Generate a 20 us clock with 25% duty cycle
period=20
duty=25
pwm.generate(period,duty)
# Sleep for 5 seconds and stop the timer
time.sleep(5)
pwm.stop()
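As a side note (editor addition, plain Python, independent of the Pmod driver), the period/duty pair passed to `generate` maps to a frequency and a high time like this:

```python
def pwm_params(period_us, duty_pct):
    # frequency in Hz and high time in microseconds for a PWM setting
    freq_hz = 1e6 / period_us
    high_us = period_us * duty_pct / 100.0
    return freq_hz, high_us

print(pwm_params(10, 50))   # 100 kHz, high for 5 us
print(pwm_params(20, 25))   # 50 kHz, high for 5 us
```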
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Core (Low Level APIs)
Step2: Eager Execution
Step3: Dataset API
Step4: Save and Restore Variables
Step5: Save and Restore a Model
Step6: Serving Client
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from pathlib import Path
import tensorflow as tf
%matplotlib notebook
#%matplotlib inline
models_data_folder = Path.home() / "Documents/models/"
# create and add up two constants
a = tf.constant(3.0, dtype=tf.float32)
b = tf.constant(4.0)
total = a + b
print(a)
print(b)
print(total)
# execute graph via a Session
sess = tf.Session()
print(sess.run(total))
print(sess.run({'ab': (a, b), 'total': total})) # request multiple tensors
# variables
x = tf.placeholder(tf.float32, name='x')
y = tf.placeholder(tf.float32, name='y')
z = x + y
sess = tf.Session()
print(sess.run(z, feed_dict={x: 3, y: 4}))
tf.enable_eager_execution() # enable eager mode, need to be run at start
a = 3.0
b = 4.0
res = tf.multiply(a, b)
res
np.multiply(res, res)
tf.enable_eager_execution() # enable eager mode, need to be run at start
dataset = tf.data.Dataset.range(10)
print(dataset.output_types)
print(dataset.output_shapes)
# apply custom function to each element of the dataset
dataset = dataset.map(lambda x : x + 1)
for i in dataset:
print(i)
# define repetition, batching and buffers
dataset = tf.data.Dataset.range(10)
dataset = dataset.repeat(2)
dataset = dataset.batch(2)
iterator = dataset.make_one_shot_iterator()
for i in iterator:
print(i)
# dummy variables
#v1 = tf.get_variable("v1", shape=[3], initializer=tf.zeros_initializer)
#v2 = tf.get_variable("v2", shape=[5], initializer=tf.zeros_initializer)
v1 = tf.Variable(tf.constant(0), name='v1')
v2 = tf.Variable(tf.constant(5), name='v2')
# dummy operations
inc_v1 = v1.assign(v1+1)
dec_v2 = v2.assign(v2-1)
# Save variables
# def init op and saver
init_op = tf.global_variables_initializer()
saver = tf.train.Saver()
# run some operations and save sessions
with tf.Session() as sess:
sess.run(init_op)
inc_v1.op.run()
dec_v2.op.run()
save_path = saver.save(sess,
str(models_data_folder / 'tmp' / "model.ckpt"))
print("Model saved in {}".format(save_path))
# test behavior in new session (need to rerun initializer)
with tf.Session() as sess:
sess.run(init_op)
print(v1.eval())
print(inc_v1.eval())
print(v1.eval())
# Restore Variables
# need to redefine the variable
v1 = tf.Variable(tf.constant(0), name='v1')
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess,
str(models_data_folder / 'tmp' / "model.ckpt"))
#now v1 should have the value we previously saved
print(v1.eval())
# directory where model will be exported
# include version info in model path as required by TF
version = 0
export_dir = str(models_data_folder / "tf_test_models_export" / str(version))
# dummy model
x = tf.Variable(tf.constant(0), name='x')
y = tf.Variable(tf.constant(5), name='y')
f = tf.multiply(x, y, name='f')
# save model
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
#consider difference between eval and run
#see: https://stackoverflow.com/questions/33610685/in-tensorflow-what-is-the-difference-between-session-run-and-tensor-eval
#sess.run(f, feed_dict={x:3.0, y:5.0})
fval = f.eval(feed_dict={x:3.0, y:5.0})
print(fval)
# Init builder
builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
# Build info for inputs and outputs tensors
#??Is the key associated with the tensor name?
inputs = {
'x' : tf.saved_model.utils.build_tensor_info(x),
'y' : tf.saved_model.utils.build_tensor_info(y)
}
outputs = {
'f' : tf.saved_model.utils.build_tensor_info(f)
}
# Define signature (set of inputs and outputs for the graph)
prediction_signature = (
tf.saved_model.signature_def_utils.build_signature_def(
inputs=inputs,
outputs=outputs,
# method used for the inference
method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME
)
)
# Add meta-graph (dataflow graph, variables, assets, and signatures)
# to the builder
builder.add_meta_graph_and_variables(
sess=sess,
tags=[tf.saved_model.tag_constants.SERVING],
# ??
signature_def_map={
'predict' : prediction_signature
},
# ??
#legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')
)
# Finally save builder
builder.save()
# Restore model
# redefine target
x = tf.Variable(tf.constant(1), name='x')
y = tf.Variable(tf.constant(5), name='y')
#f = tf.Operation(None, name='f')
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
#print(f.eval())
mg = tf.saved_model.loader.load(
sess=sess,
tags=[tf.saved_model.tag_constants.SERVING],
        export_dir=export_dir
)
    # fetch the output tensor (op name + ":0"), not the Operation, so sess.run returns its value
    f = tf.get_default_graph().get_tensor_by_name("f:0")
# ??Why session graph keeps getting new operations?
# isn't it clean every time we exit the "with" scope
#print(sess.graph.get_operations())
print(sess.run(f))
from grpc.beta import implementations
# reference local copy of Tensorflow Serving API Files
sys.path.append(os.path.join(os.getcwd(), os.pardir, os.pardir, 'ext_libs'))
import lib.predict_pb2 as predict_pb2
import lib.prediction_service_pb2 as prediction_service_pb2
host='127.0.0.1'
port=9000
channel = implementations.insecure_channel(host, int(port))
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
# build request
request = predict_pb2.PredictRequest()
request.model_spec.name = 'ed' # model name, as given to bazel script
request.model_spec.signature_name = 'predict' # as defined in ModelBuilder
# define inputs
x = 3
y = 4
x_tensor = tf.contrib.util.make_tensor_proto(x, dtype=tf.int32)
y_tensor = tf.contrib.util.make_tensor_proto(y, dtype=tf.int32)
request.inputs['x'].CopyFrom(x_tensor)
request.inputs['y'].CopyFrom(y_tensor)
# call prediction on the server
result = stub.Predict(request, timeout=10.0)
result
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Function for Random Walks
Step3: Showing a Bunch of Realizations for a Random Walk
Step4: Determining ACF of the Random Walk
|
<ASSISTANT_TASK:>
Python Code:
# importing
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
# showing figures inline
%matplotlib inline
# plotting options
font = {'size' : 20}
plt.rc('font', **font)
plt.rc('text', usetex=True)
matplotlib.rc('figure', figsize=(18, 6) )
# Function for Generating Random Walks
def get_Random_Walk( N ):
    """Function generating a Random Walk by adding-up changes distributed uniformly on {+1,-1}

    IN: N, number of steps/changes to be simulated
    OUT: X, random walk
    """
# sample steps uniformly out of { -1, +1 }
steps = - 1 + 2 * np.random.randint( 2, size = N - 1 )
# alternative solution
#steps = np.random.choice( [-1, 1], size = N - 1 )
# process by summing up increments
X = np.append( 0, np.cumsum( steps ) )
return X
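A quick sanity check (editor addition, not in the original notebook): for the symmetric random walk, E[S_n] = 0 and Var(S_n) = n, which a vectorized Monte Carlo estimate over many realizations reproduces:

```python
import numpy as np

def walk_endpoint_stats(n_steps, n_trials, seed=0):
    # simulate n_trials walks of n_steps each and summarize their endpoints
    rng = np.random.RandomState(seed)
    steps = rng.choice([-1, 1], size=(n_trials, n_steps))
    endpoints = steps.sum(axis=1)
    return endpoints.mean(), endpoints.var()

mean_end, var_end = walk_endpoint_stats(50, 20000)
print(mean_end, var_end)  # mean near 0, variance near 50
```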
# increase N_real if multiple realization should be plotted
N_steps = 50
N_trials = 1000
# initialize array for storing multiple processes
X_random_walk = np.zeros( ( N_trials, N_steps + 1 ) )
# loop for realizations
for n in range( N_trials ):
    # get realizations of the random walk
X_random_walk[ n, : ] = get_Random_Walk( N_steps + 1)
# plotting
for n in np.arange( N_trials ):
plt.plot( X_random_walk[ n, :], linewidth = 2.0 )
plt.grid( True )
plt.xlabel('$n$')
plt.ylabel('$S_n$')
plt.margins(.1)
# extract end-points
results_end = X_random_walk[ : , -1 ]
# get histogram
num_bins = 2 * N_steps + 1 # ----> EXERCISE: Can you justify the choice of num_bins?!
width = 2
bins = np.linspace( -N_steps, N_steps, num_bins, endpoint=True)
r_hist = np.histogram( results_end, bins = bins, density = True )
plt.barh( r_hist[1][:-1], 0 + r_hist[0] / np.sum(r_hist[0]) * N_trials / 20 , width, left=N_steps+0.1, color = '#ff7f0e' )
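Regarding the EXERCISE above, a hedged editor note: after N steps the endpoint lies in {-N, ..., N} and shares the parity of N, so 2*N + 1 unit-width bins cover every attainable value exactly once. An exhaustive check for a small N:

```python
# enumerate every possible +/-1 step sequence of length N_small
N_small = 6
endpoints = set()
for mask in range(2 ** N_small):
    steps = [1 if (mask >> i) & 1 else -1 for i in range(N_small)]
    endpoints.add(sum(steps))
print(sorted(endpoints))  # [-6, -4, -2, 0, 2, 4, 6]
```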
# sample time and tau vector
t = np.arange( 0, N_steps )
tau_acf = np.arange( - N_steps, N_steps )
tau_acf_pos = np.arange( 0, N_steps )
# initialize array for storing multiple processes
X_random_walk = np.zeros( ( N_trials, 3 * N_steps ) )
# loop for realizations
for n in range( N_trials ):
# get realizations of random walk
# NOTE: zeros are padded in order to avoid border effects
X_random_walk[ n, : len(t) ] = get_Random_Walk( N_steps )
# initialize empty two-dim array (t and tau)
acf_2d = np.zeros( ( len(t), len(tau_acf) ) )
# loop for all times
for ind_t, val_t in enumerate( t ):
# loop for all delays
for ind_tau, val_tau in enumerate( tau_acf_pos ):
# get acf at according index/time/delay
corr = [ X_random_walk[ _n, ind_t + ind_tau ]
* X_random_walk[ _n, ind_t ]
for _n in range( N_trials ) ]
# assign acf
# NOTE: Negative shifts are given by acf( -tau ) = acf( tau ) for real-valued processes
acf_2d[ ind_t, + ind_tau + len(t) ] = np.sum( corr ) / N_trials
acf_2d[ ind_t, - ind_tau + len(t) ] = np.sum( corr ) / N_trials
# parameters for meshing
T, Tau_acf = np.meshgrid( tau_acf, t )
# plotting
plt.contourf( T, Tau_acf , acf_2d[ : , : ] )
plt.xlabel('$\ell$')
plt.ylabel('$k$')
plt.title('$\\varphi_{XX}(k,\ell)$')
plt.colorbar();
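The contour structure matches the theoretical result (editor addition, hedged): for this walk Cov(S_k, S_l) = min(k, l), since only the first min(k, l) increments are shared by both partial sums. A direct Monte Carlo check:

```python
import numpy as np

rng = np.random.RandomState(1)
steps = rng.choice([-1, 1], size=(50000, 30))
S = steps.cumsum(axis=1)          # partial sums along time
k, l = 9, 19                      # zero-based indices -> times 10 and 20
cov_est = (S[:, k] * S[:, l]).mean()
print(cov_est)  # close to min(10, 20) = 10
```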
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Link the o2scl library
Step2: Create a sample function to interpolate
Step3: Create sample data from our function
Step4: Compute the mean and standard deviation so that we can normalize the data
Step5: Copy the data into std_vector objects
Step6: Create the interpolators
Step7: Create a Gaussian process from sklearn to perform the interpolation. Like the O$_2$scl class interp_krige_optim, this is a simple one-parameter version which only varies the length scale.
|
<ASSISTANT_TASK:>
Python Code:
import o2sclpy
import matplotlib.pyplot as plot
import sys
import math
import numpy
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
plots=True
if 'pytest' in sys.modules:
plots=False
link=o2sclpy.linker()
link.link_o2scl()
def f(x):
return math.sin(1.0/(0.3+x))
xa=[0 for i in range(0,20)]
ya=[0 for i in range(0,20)]
for i in range(0,20):
if i>0:
xa[i]=xa[i-1]+((i)/40)**2
ya[i]=math.sin(1.0/(0.3+xa[i]))
m=numpy.mean(ya)
s=numpy.std(ya,ddof=1)
print('mean: %7.6e, std: %7.6e' % (m,s))
ya2=[(ya[i]-m)/s for i in range(0,20)]
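The plots below undo this normalization with `y*s + m`; a quick round-trip (editor addition) confirms that standardizing and de-standardizing are exact inverses:

```python
import numpy

vals = numpy.array([0.1, -0.4, 0.7])
m2, s2 = vals.mean(), vals.std(ddof=1)
normed = (vals - m2) / s2
# de-standardize and compare against the originals
print(numpy.allclose(normed * s2 + m2, vals))  # True
```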
xp=o2sclpy.std_vector(link)
yp=o2sclpy.std_vector(link)
xp.resize(20)
yp.resize(20)
for i in range(0,20):
xp[i]=xa[i]
yp[i]=ya2[i]
iv_lin=o2sclpy.interp_vec(link)
iv_lin.set(20,xp,yp,o2sclpy.itp_linear)
iv_csp=o2sclpy.interp_vec(link)
iv_csp.set(20,xp,yp,o2sclpy.itp_cspline)
iv_aki=o2sclpy.interp_vec(link)
iv_aki.set(20,xp,yp,o2sclpy.itp_akima)
iv_mon=o2sclpy.interp_vec(link)
iv_mon.set(20,xp,yp,o2sclpy.itp_monotonic)
iv_stef=o2sclpy.interp_vec(link)
iv_stef.set(20,xp,yp,o2sclpy.itp_steffen)
iv_ko=o2sclpy.interp_krige_optim(link)
iv_ko.set(20,xp,yp,True)
plot.plot(xa,ya,lw=0,marker='+')
plot.plot(xa,[iv_lin.eval(xa[i])*s+m for i in range(0,20)])
plot.plot(xa,[iv_csp.eval(xa[i])*s+m for i in range(0,20)])
plot.plot(xa,[iv_aki.eval(xa[i])*s+m for i in range(0,20)])
plot.plot(xa,[iv_mon.eval(xa[i])*s+m for i in range(0,20)])
plot.plot(xa,[iv_stef.eval(xa[i])*s+m for i in range(0,20)])
plot.plot(xa,[iv_ko.eval(xa[i])*s+m for i in range(0,20)])
max=xa[19]
xb=[i/2000.0*max for i in range(0,2001)]
xa2=numpy.array(xa).reshape(-1,1)
kernel=RBF(1.0,(1.0e-2,1.0e2))
gpr=GaussianProcessRegressor(kernel=kernel).fit(xa2,ya)
for hyperparameter in gpr.kernel_.hyperparameters:
print('hp',hyperparameter)
params = gpr.kernel_.get_params()
for key in sorted(params):
print("kp: %s : %s" % (key, params[key]))
params2=gpr.get_params()
for key in sorted(params2):
print("gpp: %s : %s" % (key, params2[key]))
plot.rcParams['figure.figsize'] = [11, 9]
plot.semilogy(xb,[abs(f(xb[i])-(iv_lin.eval(xb[i])*s+m))
for i in range(0,2001)],color='black',lw=0.5,label='linear')
plot.semilogy(xb,[abs(f(xb[i])-(iv_csp.eval(xb[i])*s+m))
for i in range(0,2001)],color='red',lw=0.5,label='cubic spline')
plot.semilogy(xb,[abs(f(xb[i])-(iv_stef.eval(xb[i])*s+m))
for i in range(0,2001)],color='blue',lw=0.5,label='steffen')
plot.semilogy(xb,[abs(f(xb[i])-(iv_ko.eval(xb[i])*s+m))
for i in range(0,2001)],color='purple',lw=0.5,label='GP o2scl')
plot.semilogy(xb,[abs(f(xb[i])-(gpr.predict(numpy.array(xb[i]).reshape(-1,1))))
for i in range(0,2001)],color='green',lw=0.5,label='GP sklearn')
plot.legend()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: I'll skip the data gathering step. I used a CSV file with the list of countries and regions to parse Airbnb.com website and saved the data as a sqlite dictionary. That is why I use the sqlitedict library here.
Step2: I'll also format floating point numbers.
Step4: Data overview
Step5: I collected data on total number of listings located in each country ('apt_total') and number of pet friendly listings ('apt_pets'), or in short PFL.
Step6: A quick glance at the whole dataset reveals that countries with few listings do not tell much about their attitude towards pets. Let's look at the top-5 friendliest countries.
Step7: This chart tells us that 100% of listings in Western Sahara and Wallis and Futuna are pet friendly. But that's only because there is only one listing in the country, and pets are allowed there.
Step8: Is Antarctica in the top-5 chart? Do penguins rent out their nests?
Step9: Andorra is the most welcoming country with 46% of 405 listings willing to accommodate a pet.
Step10: Japan might be popular for the lovely Shiba Inu dog breed but not for willingness to accommodate a pet. Only 4% of listings would suit a pet owner.
Step11: First, I'll look at the distribution of accommodation listings among regions.
Step12: It seems, Airbnb is very popular in Europe. I'll plot the data.
Step13: The plot and the table above reveal that the highest percentage of Airbnb listings (54%) are located in Europe. 63% of PFL are located in Europe. The highest ratio of PFL (0.24) is recorded in Europe as well.
Step14: The cheapest region in terms of accommodation is South America. On average, day rent would cost \$66. Though the cheapest stay with a pet would be possible in Africa, just \$67 a night.
Step15: The box-and-whisker plot shows the distribution of data based on the five number summary
Step16: The same calculations for PFL reveal even lower numbers.
Step17: I'll plot countries that account for more than 1% of PFL.
Step18: But, again, these are raw numbers. I'll show the top 5 most friendly countries in each region.
Step19: I'll map the ratio of PFL. First, I'll split the data into clusters using k-means. I have a module to choose the number of k.
Step20: It looks like 5 would be a right number of clusters.
Step21: To map the data I'll use the folium library.
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division
from IPython.display import display
import pandas as pd
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
import humanize
from sqlitedict import SqliteDict
db = SqliteDict('./pet_friendly.sqlite')
pd.set_option('float_format', '{:.2f}'.format)
def read_db_in_pandas(db):
    """Read the DB."""
# Transpose data to have keys as index.
df = pd.DataFrame(dict(db)).T
# Remove rows with no data for a country.
df = df[df['apt_total'] != 0]
# Convert columns into numbers.
df = df.convert_objects(convert_numeric=True)
return df
df = read_db_in_pandas(db)
print ("There are {0} countries in the dataset."
.format(len(db)))
print ("{0} countries have no listings published."
.format(len(db) - len(df)))
print ("We are left with {0} countries for closer examination."
.format(len(df)))
print "\nBelow is a random sample of the DataFrame:"
df.sample(n=5)
# Number of listings.
apt_sum = df['apt_total'].sum()
# Number of PFL.
pets_sum = df['apt_pets'].sum()
print ("There are {0} listings all over the world on Airbnb."
.format(humanize.intword(apt_sum)))
print ("Pets would be welcomed only in {0:.0%} of all listings."
.format(pets_sum / apt_sum))
# Average price per night.
price_mean = round(df['apt_total_price'].mean())
pet_mean = round(df['apt_pets_price'].mean())
# Difference in price.
diff = (price_mean - pet_mean) / price_mean
print ("The average price for a listing is ${0} per night."
.format(int(price_mean)))
print ("A PFL would cost {0:.0%} less, or ${1}."
.format(diff, int(pet_mean)))
# Ratio of PFL
compare = df['apt_pets']/df['apt_total']
compare.sort(ascending=False)
print "The friendliest countries:"
compare[:5]
df.loc[['Western Sahara', 'Wallis and Futuna']][['apt_total', 'apt_pets']]
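The instability is easy to see in isolation (editor addition, using illustrative counts only — 186 is inferred from the 46%-of-405 Andorra figure quoted above): with a single listing, the ratio can only be 0 or 1, which is why a minimum-listings threshold is applied next.

```python
# (total listings, pet friendly listings): tiny markets vs an Andorra-sized one
ratios = [float(pets) / total for total, pets in [(1, 1), (1, 0), (405, 186)]]
print(ratios)
```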
# Add a threshold of 20 listings.
df_cut = df[df['apt_total'] > 20]
# Remove Antarctica.
df_cut = df_cut[df_cut.index != 'Antarctica']
print ("These changes leave us {0} countries to examine."
.format(len(df_cut)))
# Calculate ratio of PFL and add to the DataFrame.
df_cut['apt_ratio'] = (df_cut['apt_pets'] / df_cut['apt_total'])
print "\nTop-5 countries with the highest ratio of PFL:"
df_cut.sort('apt_ratio', ascending=False)[['apt_ratio', 'apt_total', 'apt_pets', 'region']][:5]
print "Bottom-5 countries with the lowest ratio of PFL: "
df_cut.sort('apt_ratio')[['apt_ratio', 'apt_total', 'apt_pets', 'region']][:5]
print ("Number of countries in the dataset "
"grouped by region:")
df_cut.groupby(['region']).size()
# All listings
region_total = (df_cut['apt_total']
.groupby(df_cut['region']).sum()
/ df_cut['apt_total'].sum())
# PFL listings
region_pets = (df_cut['apt_pets']
.groupby(df_cut['region']).sum()
/ df_cut['apt_pets'].sum())
# PFL ratio
region_ratio = (df_cut['apt_pets']
.groupby(df_cut['region']).sum()
/ df_cut['apt_total']
.groupby(df_cut['region']).sum())
region_listings = pd.concat([region_total, region_pets],
axis=1)
region_listings['ratio'] = region_ratio
print "Share of regions in Airbnb listings: "
region_listings.sort('apt_total', ascending=False)
# Make a bar chart
matplotlib.style.use('bmh')
plt.figure();
region_listings.plot(kind='bar', rot=0, figsize=(8, 6))
plt.xlabel('')
plt.ylabel('Percentage')
plt.title('The distribution of accommodation '
'listings among regions \n')
plt.show()
# Mean price per region
region_price = df_cut.groupby(df_cut['region']).mean()[['apt_total_price',
'apt_pets_price']]
# Calculate ratio of PFL price to general listing price.
region_price['ratio'] = (region_price['apt_pets_price'] /
region_price['apt_total_price'])
print "Average price in region, USD."
print "Ratio of PFL price to average price."
region_price.sort('apt_total_price')
# Make a boxplot
color = dict(boxes='DarkGreen', whiskers='DarkOrange',
medians='DarkBlue', caps='Gray')
plt.figure();
# Split the data on 'apt_pets_price' into regional groups
region_groups = df_cut.groupby(['region'])[['apt_pets_price']]
# List of regions
regions = df_cut['region'].unique()
data = []
# Add the data from the DF
for item in regions:
a = region_groups.get_group(item)
data.append(a)
# Make a new DF for regions
box_df = pd.concat(data, ignore_index=True, axis=1)
box_df.columns = regions
# Plot the data
box_df.plot(kind='box', sym='r+', color=color,
figsize=(8, 6)).set_ylim([0,300])
plt.ylabel('Average price per night, USD')
plt.title('The distribution of PFL prices in regions')
plt.show()
# Share of each country in total listings
apt_share = (df_cut['apt_total'] / df_cut['apt_total'].sum() * 100).round(2)
apt_share.sort(ascending=False)
# Count 80% of countries
quantile_count = int(len(df_cut) * 0.80)
print ("80%, or {0} countries in the dataset "
"account for {1}% of all listings available."
.format(quantile_count, apt_share[-1*quantile_count: ].sum()))
print ("These 5 countries account for {0}% "
"of all listings published on Airbnb: "
.format(apt_share[:5].sum()))
apt_share[:5]
# Share of each country in PFL
pet_share = (df_cut['apt_pets'] /
df_cut['apt_pets'].sum() * 100).round(2)
pet_share.sort(ascending=False)
print ("80%, or {0} countries represent just "
"{1}% of PFL.".format(quantile_count,
pet_share[-1 * quantile_count: ].sum()))
print ("These 5 countries account for {0}% "
"of all listings published on Airbnb: "
.format(pet_share[:5].sum()))
pet_share[:5]
# Add country codes to the DF.
pet_share_df = pd.concat(
[pet_share, df['country_code']],
axis=1).set_index('country_code').sort('apt_pets',
ascending=False)
# Make a bar chart
matplotlib.style.use('ggplot')
plt.figure();
share_plot = pet_share_df[pet_share_df['apt_pets']>1].plot(kind='bar',
legend=False,
rot=45,
title='Countries with share of all PFL > 1%\n')
share_plot.set_ylabel('Share, %')
share_plot.set_xlabel('')
plt.show()
for region in regions:
print ("Top-5 friendliest countries in {}:"
.format(region))
display(df_cut[df_cut['region'] == region].sort('apt_ratio', ascending=False)[['apt_ratio']][:5])
import choosing_k as chk
data = df_cut['apt_ratio'].values.tolist()
chk.plot_errors(data)
import kmeans_calc as kmn
threshold_scale = [0.0] + kmn.split_into_groups(data, 5)
threshold_scale
import json
import folium
from folium.utilities import split_six
columns = ['country_code', 'apt_ratio']
color_data = df_cut.set_index(columns[0])[columns[1]].to_dict()
geo_json_data = json.load(open('countries.geo.json'))
from folium.utilities import split_six, color_brewer
from folium.features import ColorScale
m = folium.Map([32, -45], tiles='Mapbox',
API_key='wrobstory.map-12345678',
zoom_start=2)
# Pass own threshold scale
color_domain = threshold_scale
# Choose color
fill_color = 'YlGnBu'
color_range = color_brewer(fill_color, n=len(color_domain))
key_on = 'id'
# I made a fix for the folium library to colour with white countries with no data.
def get_by_key(obj, key):
if len(key.split('.')) <= 1:
return obj.get(key, None)
def color_scale_fun(x):
try:
r = [u for u in color_domain if u <= color_data[get_by_key(x, key_on)]]
return color_range[len(r)]
except KeyError:
return '#FFFFFF'
# Make a map
folium.GeoJson(geo_json_data,
style_function=lambda feature: {
'fillColor': color_scale_fun(feature),
'color': 'black',
'weight': 1,
'legend_name' : 'Unemployment Rate (%)',
'fillOpacity': 0.8
}).add_to(m)
color_scale = ColorScale(color_domain, fill_color, caption="Ratio of pet friendly listings")
m.add_children(color_scale)
print "Ratio of pet friendly listings to all listings"
m
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Since the MNIST digits are stored in a special binary format, we need to load them with a custom reader
Step2: Now we can visualize the digits one by one as follows
Step3: Now we can do some predictions using our trained LeNet model
|
<ASSISTANT_TASK:>
Python Code:
import caffe
import matplotlib.pyplot as plt
import matplotlib.ticker as plticker
import matplotlib as mpl
import numpy as np
import os
import struct
%matplotlib inline
# Function adapted from https://gist.github.com/akesling/5358964.
def load_mnist_test_data(path = "."):
fname_img = os.path.join(path, 't10k-images-idx3-ubyte')
fname_lbl = os.path.join(path, 't10k-labels-idx1-ubyte')
# Load everything in some numpy arrays
with open(fname_lbl, 'rb') as flbl:
magic, num = struct.unpack(">II", flbl.read(8))
lbl = np.fromfile(flbl, dtype=np.int8)
with open(fname_img, 'rb') as fimg:
magic, num, rows, cols = struct.unpack(">IIII", fimg.read(16))
img = np.fromfile(fimg, dtype=np.uint8).reshape(len(lbl), rows, cols)
get_img = lambda idx: (lbl[idx], np.reshape(img[idx], (28,28,1)))
# Create an iterator which returns each image in turn
for i in xrange(len(lbl)):
yield get_img(i)
test_set=load_mnist_test_data("/opt/caffe/data/mnist/")
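For reference (editor addition): the `">II"` / `">IIII"` format strings above decode big-endian unsigned 32-bit integers, e.g. the magic number and item count in the 8-byte label-file header. A tiny round-trip on fabricated header bytes:

```python
import struct

header = struct.pack(">II", 2049, 10000)  # fabricated 8-byte label-file header
magic, num = struct.unpack(">II", header)
print(magic, num)
```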
def plot_mnist_digit(image, title=None):
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
imgplot = ax.imshow(image[:,:,0], cmap=mpl.cm.Greys)
imgplot.set_interpolation('nearest')
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
major_ticks = np.arange(0, 29, 7)
minor_ticks = np.arange(0, 28, 1)
ax.set_xticks(major_ticks)
ax.set_xticks(minor_ticks, minor=True)
ax.set_yticks(major_ticks)
ax.set_yticks(minor_ticks, minor=True)
# ax.grid(which='both',color='gray', linestyle='-',linewidth=0.5)
if not title == None:
plt.title(title, fontsize=15)
plt.show()
digit = next(test_set)
label = digit[0]; image = digit[1]
plot_mnist_digit(image, "LABEL: " + str(label))
# Creating our trained classifier
classifier = caffe.Classifier('/opt/caffe/examples/mnist/lenet.prototxt',
'/opt/caffe/examples/mnist/lenet_iter_10000.caffemodel')
for i in xrange(5):
digit = next(test_set)
label = digit[0]; image = digit[1]
prediction = classifier.predict([image], oversample=False)
predicted_label=np.argmax(prediction)
plot_mnist_digit(image, "LABEL: " + str(label) + " PREDICTED LABEL: "+ str(predicted_label))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Done. Let's test the reshape_by_symbol function
Step2: So, the reshape_by_symbol function seems to work with run_single_val. It could be added to it. Let's test the roll_evaluate function.
Step3: Let's do some preliminary filtering to avoid problems
|
<ASSISTANT_TASK:>
Python Code:
from predictor import evaluation as ev
from predictor.dummy_mean_predictor import DummyPredictor
predictor = DummyPredictor()
y_train_true_df, y_train_pred_df, y_val_true_df, y_val_pred_df = ev.run_single_val(x, y, ahead_days, predictor)
print(y_train_true_df.shape)
print(y_train_pred_df.shape)
print(y_val_true_df.shape)
print(y_val_pred_df.shape)
y_train_true_df.head()
y_train_pred_df.head()
y_val_true_df.head()
y_val_pred_df.head()
y_train_true_rs = ev.reshape_by_symbol(y_train_true_df)
print(y_train_true_rs.shape)
y_train_true_rs.head()
y_train_pred_rs = ev.reshape_by_symbol(y_train_pred_df)
print(y_train_pred_rs.shape)
y_train_pred_rs.head()
y_val_true_rs = ev.reshape_by_symbol(y_val_true_df)
print(y_val_true_rs.shape)
y_val_true_rs.head()
u = x.index.levels[0][0]
print(u)
fe.SPY_DF.sort_index().index.unique()
md = fe.SPY_DF.index.unique()
u in md
fe.add_market_days(u,6)
# Getting the data
GOOD_DATA_RATIO = 0.99
data_df = pd.read_pickle('../../data/data_train_val_df.pkl')
sys.path.append('../../')
import predictor.feature_extraction as fe
import utils.preprocessing as pp
data_df = pp.drop_irrelevant_symbols(data_df, GOOD_DATA_RATIO)
train_time = -1 # In real time days
base_days = 7 # In market days
step_days = 7 # market days
ahead_days = 1 # market days
today = data_df.index[-1] # Real date
tic = time()
x, y = fe.generate_train_intervals(data_df,
train_time,
base_days,
step_days,
ahead_days,
today,
fe.feature_close_one_to_one)
toc = time()
print('Elapsed time: %i seconds.' % (toc-tic))
print(data_df.shape)
data_df.head()
SAMPLES_GOOD_DATA_RATIO = 0.9
x_y_df = pd.concat([x, y], axis=1)
x_y_df = pp.drop_irrelevant_samples(x_y_df, SAMPLES_GOOD_DATA_RATIO)
x = x_y_df.iloc[:, :-1]
y = x_y_df.iloc[:, -1]
x = pp.fill_missing(x)
x_y_df.isnull().sum()
x.isnull().sum().sum()
y.isnull().sum()
x_reshaped = ev.reshape_by_symbol(x)
x_reshaped.head()
x_reshaped.isnull().sum().max()
x.shape
x_reshaped.shape
x_reshaped[x_reshaped.notnull()]
y_train_true_df, y_train_pred_df, y_val_true_df, y_val_pred_df = ev.run_single_val(x, y, ahead_days, predictor)
from sklearn.metrics import r2_score
r2_score(y_train_true_df, y_train_pred_df, multioutput='raw_values')
tickers = y_train_true_df.index.levels[1]
tickers
y_train_true_df.loc[(slice(None), 'AAPL'),:]
from sklearn.metrics import r2_score
r2_train_score = []
mre_train = []
for ticker in tickers:
y_true = y_train_true_df.loc[(slice(None), ticker),:]
y_pred = y_train_pred_df.loc[(slice(None), ticker),:]
r2_train_score.append(r2_score(y_true, y_pred))
mre_train.append(ev.mre(y_true, y_pred))
np.mean(r2_train_score)
np.mean(mre_train)
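`ev.mre` above is the project's mean-relative-error helper. Its definition is not shown in this notebook, so here is a minimal stand-in under the usual assumption that it computes the mean of |pred − true| / |true|:

```python
# minimal stand-in for ev.mre, assuming mean relative error = mean(|pred - true| / |true|)
def mre(y_true, y_pred):
    return sum(abs(p - t) / abs(t) for t, p in zip(y_true, y_pred)) / len(y_true)

print(mre([100.0, 200.0], [110.0, 190.0]))  # roughly 0.075
```

If the real `ev.mre` normalizes differently, the absolute numbers change, but the per-ticker comparison above behaves the same way.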
plt.plot(mre_train)
ev.get_metrics(y_train_true_df, y_train_pred_df)
train_days = 252
x_y_sorted = pd.concat([x, y], axis=1).sort_index()
start_date = x_y_sorted.index.levels[0][0]
end_date = fe.add_market_days(start_date, train_days)
start_date
end_date
start_date + ((end_date - start_date) / 2)
train_days = 252
step_eval_days = 30
r2, mre, y_val_true_df, y_val_pred_df, mean_dates = ev.roll_evaluate(x,
y,
train_days,
step_eval_days,
ahead_days,
predictor,
verbose=True)
print(r2.shape)
print(mre.shape)
print(y_val_true_df.shape)
print(y_val_pred_df.shape)
print(mean_dates.shape)
plt.plot(mean_dates, r2[:, 0], 'b', label='Mean r2 score')
plt.plot(mean_dates, r2[:, 0] + 2*r2[:, 1], 'r')
plt.plot(mean_dates, r2[:, 0] - 2*r2[:, 1], 'r')
plt.xlabel('Mean date of the training period')
plt.legend()
plt.grid()
plt.plot(mean_dates, mre[:, 0], 'b', label='Mean MRE')
plt.plot(mean_dates, mre[:, 0] + 2*mre[:, 1], 'r')
plt.plot(mean_dates, mre[:, 0] - 2*mre[:, 1], 'r')
plt.xlabel('Mean date of the training period')
plt.legend()
plt.grid()
y_val_true_df.head()
y_val_pred_df.head()
r2_scores, mre_scores, tickers = ev.get_metrics(y_val_true_df, y_val_pred_df)
eval_df = pd.DataFrame(np.array([r2_scores, mre_scores]).T, index=tickers, columns=['r2', 'mre'])
eval_df.head()
eval_df['mre'].plot()
eval_df['r2'].plot()
eval_df.sort_values(by='mre', ascending=False)
plt.scatter(eval_df['r2'], eval_df['mre'])
eval2_df = ev.get_metrics_df(y_val_true_df, y_val_pred_df)
eval2_df.head()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Intuition of PCA
Step2: By eye, what can we say about this dataset?
Step3: PCA learns what the components are and how variance is explained by them.
Step4: What could that possibly mean? Any ideas?
Step5: What are you seeing? There are three important aspects
Step6: Dimensionality reduction
Step7: What has happened?
Step8: What do we see from this plot?
Step9: Now, let's add some noise
Step10: Let's train a PCA on the noisy data, requesting that the projection preserve 50% of the variance
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
rng = np.random.RandomState(1)
X = np.dot(rng.rand(2, 2), rng.randn(2, 200)).T
plt.scatter(X[:, 0], X[:, 1])
plt.axis('equal');
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(X)
print(pca.components_)
print(pca.explained_variance_)
def draw_vector(v0, v1, ax=None):
ax = ax or plt.gca()
arrowprops=dict(arrowstyle='->',
linewidth=2,
shrinkA=0, shrinkB=0)
ax.annotate('', v1, v0, arrowprops=arrowprops)
# plot data
plt.scatter(X[:, 0], X[:, 1], alpha=0.2)
for length, vector in zip(pca.explained_variance_, pca.components_):
v = vector * 3 * np.sqrt(length)
draw_vector(pca.mean_, pca.mean_ + v)
plt.axis('equal');
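The `components_` and `explained_variance_` that PCA reports can be cross-checked against a direct SVD of the centered data — a sketch on the same synthetic dataset (variable names are mine, not sklearn's):

```python
import numpy as np

rng = np.random.RandomState(1)
X = np.dot(rng.rand(2, 2), rng.randn(2, 200)).T
Xc = X - X.mean(axis=0)                           # PCA centers the data first
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained_variance = S ** 2 / (len(X) - 1)        # matches pca.explained_variance_
print(explained_variance)                          # rows of Vt match pca.components_ up to sign
```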
X = rng.randn(250, 2)
plt.scatter(X[:, 0], X[:, 1])
# fit estimator
pca = PCA(n_components=2)
pca.fit(X)
# plot data
plt.scatter(X[:, 0], X[:, 1], alpha=1)
for length, vector in zip(pca.explained_variance_, pca.components_):
v = vector * 3 * np.sqrt(length)
draw_vector(pca.mean_, pca.mean_ + v)
rng = np.random.RandomState(1)
X = np.dot(rng.rand(2, 2), rng.randn(2, 200)).T
pca = PCA(n_components=1)
pca.fit(X)
X_pca = pca.transform(X)
print("original shape: ", X.shape)
print("transformed shape:", X_pca.shape)
X_new = pca.inverse_transform(X_pca)
plt.scatter(X[:, 0], X[:, 1], alpha=0.5)
plt.scatter(X_new[:, 0], X_new[:, 1], alpha=0.5)
plt.axis('equal');
import seaborn as sns
iris = sns.load_dataset('iris')
X_iris = iris.drop('species', axis=1)
y_iris = iris['species']
iris.head()
from sklearn.decomposition import PCA
model = PCA(n_components=2)
model.fit(X_iris)
X_2D = model.transform(X_iris)
colormap = y_iris.copy()
colormap[colormap == 'setosa'] = 'b'
colormap[colormap == 'virginica'] = 'r'
colormap[colormap == 'versicolor'] = 'g'
plt.scatter(X_2D[:, 0], X_2D[:, 1], c=colormap)
plt.xlabel('PCA1')
plt.xlabel('PCA2')
from sklearn.datasets import load_digits
digits = load_digits()
def plot_digits(data):
fig, axes = plt.subplots(4, 10, figsize=(10, 4),
subplot_kw={'xticks':[], 'yticks':[]},
gridspec_kw=dict(hspace=0.1, wspace=0.1))
for i, ax in enumerate(axes.flat):
ax.imshow(data[i].reshape(8, 8),
cmap='binary', interpolation='nearest',
clim=(0, 16))
plot_digits(digits.data)
np.random.seed(42)
noisy = np.random.normal(digits.data, 4)
plot_digits(noisy)
pca = PCA(0.50).fit(noisy)
pca.n_components_
components = pca.transform(noisy)
filtered = pca.inverse_transform(components)
plot_digits(filtered)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Ship test
Step2: Island Test
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy
import matplotlib
from matplotlib.patches import Circle, Wedge, Polygon
from matplotlib.collections import PatchCollection
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import matplotlib.lines as mlines
import matplotlib.path as mpath
import numpy as np
import seaborn as sns
import networkx as nx
import pandas as pd
import SeaFallLogic
class Ship():
# Rules, pg 8, "Province Boards" also inlcude information about ships
def __init__(self):
self.damage = []
# hold, a list of objects with max length hold
self.hold = []
# upgrades, a list of upgrade objects of max length 2
self.upgrades = []
# values (explore, hold, raid, sail)
self._values = (1, 1, 1, 1)
# vmax is the maximum number values can reach for (explore, hold, raid,
# sail)
self._vmax = (5, 5, 5, 5)
@property
def values(self):
return self._values
@values.setter
def values(self, values):
if not isinstance(values, tuple):
err_str = ("Not a valid data type. The data type should be a tuple"
" of 4 length.")
raise ValueError(err_str)
elif len(values) != 4:
err_str = ("Not a valid data type. The data type should be a tuple"
" of 4 length.")
raise ValueError(err_str)
for val, vmax in zip(values, self.vmax):
if val > vmax:
raise ValueError("A ship value exceeds its max.")
self._values = values
@property
def vmax(self):
return self._vmax
@vmax.setter
def vmax(self, vmax_tuple):
if not isinstance(vmax_tuple, tuple):
err_str = ("Not a valid data type. The data type should be a tuple"
" of 4 length.")
raise ValueError(err_str)
elif len(vmax_tuple) != 4:
err_str = ("Not a valid data type. The data type should be a tuple"
" of 4 length.")
raise ValueError(err_str)
for val, vmax in zip((5, 5, 5, 5), vmax_tuple):
if val > vmax:
raise ValueError("The maximum ship values are never less than (5, 5, 5, 5).")
        self._vmax = vmax_tuple
ship = Ship()
ship.values
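The `values` setter validates its input in two stages: shape (a 4-tuple) and bounds (each entry capped by the corresponding `vmax`). A stand-alone sketch of that same logic, outside the class:

```python
# stand-alone sketch of the bounds check used in Ship.values
def check_values(values, vmax=(5, 5, 5, 5)):
    if not isinstance(values, tuple) or len(values) != 4:
        raise ValueError("values must be a tuple of length 4")
    for val, cap in zip(values, vmax):
        if val > cap:
            raise ValueError("A ship value exceeds its max.")
    return values

print(check_values((2, 3, 1, 4)))  # accepted: within every cap
```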
class Site():
def __init__(self, dangerous=False, defense=0):
# Rules, pg 10, "Dangerous Sites"
self.dangerous = dangerous
# Rules, pg 10, "Starting an Endeavor"
# Rules, pg 7, "Defense"
self.defense = defense
class IslandSiteMine(Site):
def __init__(self, dangerous=False, defense=0, gold=0):
super().__init__(dangerous=dangerous, defense=defense)
self.gold = gold
mine = IslandSiteMine()
mine.dangerous
mine.defense
mine2 = IslandSiteMine(dangerous=True, defense=10, gold=6)
mine2.gold
class Goods():
valid_goods = {
"iron",
"linen",
"spice",
"wood"
}
def __init__(self):
pass
Goods.valid_goods
"iron" in Goods.valid_goods
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, import the relevant modules
Step2: Step 1
Step3: Step 2
Step4: The df object contains the full output from BioThings Explorer. Each row shows one path that joins the input node (ANISINDIONE) to an intermediate node (a gene or protein) to another intermediate node (a DiseaseOrPhenotypicFeature) to an ending node (a Phenotypic Feature). The data frame includes a set of columns with additional details on each node and edge (including human-readable labels, identifiers, and sources). Let's remove all examples where the output_name (the phenotype label) is None, and specifically focus on paths with the mechanistic predicates physically_interacts_with and prevents.
|
<ASSISTANT_TASK:>
Python Code:
!pip install git+https://github.com/biothings/biothings_explorer#egg=biothings_explorer
from biothings_explorer.hint import Hint
from biothings_explorer.user_query_dispatcher import FindConnection
import nest_asyncio
nest_asyncio.apply()
ht = Hint()
anisindione = ht.query("Anisindione")['ChemicalSubstance'][0]
anisindione
fc = FindConnection(input_obj=anisindione,
output_obj='PhenotypicFeature',
intermediate_nodes=['Gene', 'Disease'])
fc.connect(verbose=True)
df = fc.display_table_view()
df.head()
dfFilt = df.loc[df['output_name'].notnull()].query('pred1 == "physically_interacts_with" and pred2 == "prevents"')
dfFilt
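The filter above chains a null check with a `DataFrame.query` on the two predicate columns. The same pattern on a tiny stand-in frame (column names mirror the BTE table; the values are made up):

```python
import pandas as pd

demo = pd.DataFrame({
    "pred1": ["physically_interacts_with", "targets", "physically_interacts_with"],
    "pred2": ["prevents", "causes", "treats"],
    "output_name": ["Fever", None, "Rash"],
})
kept = demo.loc[demo["output_name"].notnull()].query(
    'pred1 == "physically_interacts_with" and pred2 == "prevents"')
print(len(kept))  # only the first row survives both filters
```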
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get angle values and cast to boolean
Step2: Create our simple classification targets
Step3: Look at the distributions to see if we have any imbalances
Step4: looks like a perfectly balanced dataset to me, so we don't have to worry about the type of complications that arise from am imbalanced set.
Step5: Also prepare our holdout set for later
Step6: Create an image generator from this dataframe
Step7: Create a very simple convolutional model from scratch
Step8: Model Definition
Step9: Train this model
Step10: Okay, maybe that was too easy
Step12: Similar model, with some tweaks
Step13: Let's Try Regression
Step14: Similar model, with some tweaks
Step15: Attempt to train for both z and $\theta$
Step16: Similar model, with some tweaks
Step17: Calculate scoring criteria
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import os
import sys
import numpy as np
import math
track_params = pd.read_csv('../TRAIN/track_parms.csv')
track_params.tail()
# Create binary labels
track_params['phi_bool'] = track_params.phi.apply(lambda x: "+" if x > 0 else "-")
# Bin the phi values to get multi-class labels
track_params['phi_binned'], phi_bins = pd.cut(track_params.phi, bins=11,
retbins=True)
track_params['phi_binned'] = track_params['phi_binned'].astype(str)
# Bin the z values to get multi-class labels
track_params['z_binned'], z_bins = pd.cut(track_params.z, bins=11,
retbins=True)
track_params['z_binned'] = track_params['z_binned'].astype(str)
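`pd.cut(..., retbins=True)` returns both the per-row interval labels and the bin edges; the edges are what let us re-bin the holdout set consistently later. A quick sketch of that behavior on a toy series:

```python
import pandas as pd

vals = pd.Series([-1.0, -0.5, 0.0, 0.5, 1.0])
binned, edges = pd.cut(vals, bins=4, retbins=True)
print(list(edges))                    # 5 edges delimiting 4 equal-width bins
print(binned.astype(str).tolist())    # each value labelled with its interval
```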
track_params[['phi', 'z']].hist()
track_params.plot(x='z', y='phi', kind='hexbin', sharex=False, cmap="Blues")
track_params.head()
holdout_track_params = pd.read_csv('../VALIDATION/track_parms.csv')
holdout_track_params['phi_bool'] = (
holdout_track_params
.phi
.apply(lambda x: "+" if x > 0 else "-")
)
holdout_track_params['phi_binned'] = pd.cut(
holdout_track_params['phi'],
bins=phi_bins
)
holdout_track_params['phi_binned'] = holdout_track_params['phi_binned'].astype(str)
holdout_track_params['z_binned'] = pd.cut(
holdout_track_params['z'],
bins=z_bins
)
holdout_track_params['z_binned'] = holdout_track_params['z_binned'].astype(str)
from tensorflow.keras.preprocessing.image import ImageDataGenerator
DATAGEN = ImageDataGenerator(rescale=1./255.,
validation_split=0.25)
height = 100
width = 36
def create_generator(target, subset, class_mode,
idg=DATAGEN, df=track_params, N=1000):
return idg.flow_from_dataframe(
        dataframe=df.head(N),
directory="../TRAIN",
x_col="filename",
y_col=target,
subset=subset,
target_size=(height, width),
batch_size=32,
seed=314,
shuffle=True,
class_mode=class_mode,
)
binary_train_generator = create_generator(
target="phi_bool",
subset="training",
class_mode="binary"
)
binary_val_generator = create_generator(
target="phi_bool",
subset="validation",
class_mode="binary"
)
from tensorflow.keras import Sequential, Model
from tensorflow.keras.layers import (
Conv2D, Activation, MaxPooling2D,
Flatten, Dense, Dropout, Input
)
width = 36
height = 100
channels = 3
def binary_classifier():
model = Sequential()
# Layer 1
model.add(Conv2D(32, (3, 3), input_shape=(height, width, channels)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Layer 2
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Layer 3
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Dense, Classification Layer
model.add(Flatten()) # this converts our 3D feature maps to 1D feature vectors
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
return model
STEP_SIZE_TRAIN = binary_train_generator.n//binary_train_generator.batch_size
STEP_SIZE_VAL = binary_val_generator.n//binary_val_generator.batch_size
binary_model = binary_classifier()
binary_history = binary_model.fit_generator(
generator=binary_train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=binary_val_generator,
validation_steps=STEP_SIZE_VAL,
epochs=5
)
plt.plot(binary_history.history['accuracy'], label="Train Accuracy")
plt.plot(binary_history.history['val_accuracy'], label="Validation Accuracy")
plt.legend()
plt.show()
mc_train_generator = create_generator(
target="phi_binned",
subset="training",
class_mode="categorical",
N=10000
)
mc_val_generator = create_generator(
target="phi_binned",
subset="validation",
class_mode="categorical",
N=10000
)
width = 36
height = 100
channels = 3
def multiclass_classifier():
model = Sequential()
# Layer 1
model.add(Conv2D(32, (3, 3), input_shape=(height, width, channels)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Layer 2
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Layer 3
#model.add(Conv2D(64, (3, 3)))
#model.add(Activation('relu'))
#model.add(MaxPooling2D(pool_size=(2, 2)))
# Dense, Classification Layer
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(11))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
return model
STEP_SIZE_TRAIN = mc_train_generator.n//mc_train_generator.batch_size
STEP_SIZE_VAL = mc_val_generator.n//mc_val_generator.batch_size
mc_model = multiclass_classifier()
mc_history = mc_model.fit_generator(
generator=mc_train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=mc_val_generator,
validation_steps=STEP_SIZE_VAL,
epochs=15
)
plt.plot(mc_history.history['accuracy'], label="Train Accuracy")
plt.plot(mc_history.history['val_accuracy'], label="Validation Accuracy")
plt.legend()
plt.show()
mc_model = multiclass_classifier()
mc_history = mc_model.fit_generator(
generator=mc_train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=mc_val_generator,
validation_steps=STEP_SIZE_VAL,
epochs=15
)
plt.plot(mc_history.history['accuracy'], label="Train Accuracy")
plt.plot(mc_history.history['val_accuracy'], label="Validation Accuracy")
plt.legend()
plt.show()
holdout_track_params.head()
mc_holdout_generator = DATAGEN.flow_from_dataframe(
dataframe=holdout_track_params,
directory="../VALIDATION",
x_col="filename",
y_col="phi_binned",
subset=None,
target_size=(height, width),
batch_size=32,
seed=314,
shuffle=False,
class_mode="categorical",
)
holdout_track_params['y_pred'] = mc_model.predict_classes(mc_holdout_generator)
holdout_track_params['y_true'] = mc_holdout_generator.classes
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
def plot_confusion_matrix(y_true, y_pred, classes,
normalize=False,
title=None,
cmap=plt.cm.Blues):
    """This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
if not title:
if normalize:
title = 'Normalized confusion matrix'
else:
title = 'Confusion matrix, without normalization'
# Compute confusion matrix
cm = confusion_matrix(y_true, y_pred)
# Only use the labels that appear in the data
#classes = classes[unique_labels(y_true, y_pred)]
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
fig, ax = plt.subplots(figsize=(10, 10))
im = ax.imshow(cm, interpolation='nearest', cmap=cmap)
ax.figure.colorbar(im, ax=ax)
# We want to show all ticks...
ax.set(xticks=np.arange(cm.shape[1]),
yticks=np.arange(cm.shape[0]),
# ... and label them with the respective list entries
xticklabels=classes, yticklabels=classes,
title=title,
ylabel='True label',
xlabel='Predicted label')
# Rotate the tick labels and set their alignment.
plt.setp(ax.get_xticklabels(), rotation=45, ha="right",
rotation_mode="anchor")
# Loop over data dimensions and create text annotations.
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i in range(cm.shape[0]):
for j in range(cm.shape[1]):
ax.text(j, i, format(cm[i, j], fmt),
ha="center", va="center",
color="white" if cm[i, j] > thresh else "black")
fig.tight_layout()
return ax
class_names = [x for x in mc_holdout_generator.class_indices.keys()]
plot_confusion_matrix(holdout_track_params['y_true'],
holdout_track_params['y_pred'],
classes=class_names)
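sklearn's `confusion_matrix`, which the plotting helper wraps, amounts to a nested tally of (true, predicted) pairs that is then row-normalized. A stdlib sketch of the raw matrix:

```python
def confusion(y_true, y_pred, labels):
    # rows index the true label, columns the predicted label
    idx = {lab: i for i, lab in enumerate(labels)}
    cm = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        cm[idx[t]][idx[p]] += 1
    return cm

print(confusion(["+", "-", "+", "+", "-"],
                ["+", "-", "-", "+", "-"],
                labels=["+", "-"]))  # [[2, 1], [0, 2]]
```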
reg_train_generator = create_generator(
target="phi",
subset="training",
class_mode="raw",
N=10000
)
reg_val_generator = create_generator(
target="phi",
subset="validation",
class_mode="raw",
N=10000
)
width = 36
height = 100
channels = 3
def regression_model():
model = Sequential()
# Layer 1
model.add(Conv2D(32, (3, 3), input_shape=(height, width, channels)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Layer 2
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Layer 3
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Dense regression layer
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.compile(loss='mean_squared_error',
optimizer='adam',
metrics=['mse'])
return model
STEP_SIZE_TRAIN = reg_train_generator.n//reg_train_generator.batch_size
STEP_SIZE_VAL = reg_val_generator.n//reg_val_generator.batch_size
reg_model = regression_model()
reg_history = reg_model.fit_generator(
generator=reg_train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=reg_val_generator,
validation_steps=STEP_SIZE_VAL,
epochs=15
)
plt.plot(reg_history.history['mse'], label="Train MSE")
plt.plot(reg_history.history['val_mse'], label="Validation MSE")
plt.legend()
plt.show()
double_reg_train_generator = create_generator(
target=["z", "phi"],
subset="training",
class_mode="raw",
N=100000
)
double_reg_val_generator = create_generator(
target=["z", "phi"],
subset="validation",
class_mode="raw",
N=100000
)
width = 36
height = 100
channels = 3
def double_regression_model():
model = Sequential()
# Layer 1
model.add(Conv2D(32, (3, 3), input_shape=(height, width, channels)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Layer 2
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Layer 3
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Dense regression layer
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(2))
model.compile(optimizer="adam",
loss="mean_squared_error",
metrics=["mse"])
return model
STEP_SIZE_TRAIN = double_reg_train_generator.n//double_reg_train_generator.batch_size
STEP_SIZE_VAL = double_reg_val_generator.n//double_reg_val_generator.batch_size
double_reg_model = double_regression_model()
double_reg_history = double_reg_model.fit_generator(
generator=double_reg_train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=double_reg_val_generator,
validation_steps=STEP_SIZE_VAL,
epochs=15
)
plt.plot(double_reg_history.history['mse'], label="Train MSE")
plt.plot(double_reg_history.history['val_mse'], label="Validation MSE")
plt.legend()
plt.show()
double_reg_holdout_generator = DATAGEN.flow_from_dataframe(
dataframe=holdout_track_params,
directory="../VALIDATION",
x_col="filename",
y_col=["z", "phi"],
subset=None,
target_size=(height, width),
batch_size=32,
seed=314,
shuffle=False,
class_mode="raw",
)
holdout_track_params['z_pred'] = 0.0
holdout_track_params['phi_pred'] = 0.0
y_pred = double_reg_model.predict(double_reg_holdout_generator)
holdout_track_params['z_pred'] = [y[0] for y in y_pred]
holdout_track_params['phi_pred'] = [y[1] for y in y_pred]
holdout_track_params['delta_z'] = holdout_track_params.eval('z - z_pred')
holdout_track_params['delta_phi'] = holdout_track_params.eval('phi - phi_pred')
from sklearn.metrics import r2_score, mean_squared_error
print(r2_score(holdout_track_params['phi'], holdout_track_params['phi_pred']))
print(mean_squared_error(holdout_track_params['phi'], holdout_track_params['phi_pred']))
print(r2_score(holdout_track_params['z'], holdout_track_params['z_pred']))
print(mean_squared_error(holdout_track_params['z'], holdout_track_params['z_pred']))
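For reference, `r2_score` is just 1 − SS_res/SS_tot, so a score near 1 means the residuals are tiny relative to the target's own spread. A hand-rolled sketch:

```python
def r2(y_true, y_pred):
    # coefficient of determination: 1 - SS_res / SS_tot
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

print(r2([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))  # ~0.97
```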
fig, (ax0, ax1, ax2) = plt.subplots(nrows=1, ncols=3, figsize=(14, 5))
holdout_track_params['delta_z'].hist(bins=10, alpha=0.5,
log=False, ax=ax0)
holdout_track_params['delta_z'].hist(bins=10, alpha=0.5,
log=True, ax=ax1)
holdout_track_params['delta_phi'].hist(bins=10, ax=ax2,
alpha=0.5)
ax0.set_title(r'Residual $\Delta z$')
ax1.set_title(r'Residual $\Delta z$ (log)')
ax2.set_title(r'Residual: $\Delta\theta$')
plt.show()
plt.hexbin(x=holdout_track_params['z'], y=holdout_track_params['z_pred'])
_ = plt.title('z (vertex)')
_ = plt.xlabel('z_true')
_ = plt.ylabel('z_pred')
from matplotlib import cm
fig, (ax0, ax1) = plt.subplots(nrows=1, ncols=2,
figsize=(12, 6),
sharex=True,
sharey=True)
scatter = plt.scatter(x=holdout_track_params['z'],
y=holdout_track_params['z_pred'],
c=holdout_track_params['phi'],
cmap=cm.seismic,
vmin=-10., vmax=10.,
alpha=0.2)
plt.colorbar(scatter, label=r'$\theta$')
ax1.set_facecolor("#888888")
_ = plt.title(r'z (vertex), colored by $\theta$')
_ = plt.xlabel('z_true')
plt.sca(ax0)
scatter = plt.scatter(x=holdout_track_params['z'],
y=holdout_track_params['z_pred'],
alpha=0.2)
_ = plt.title('z (vertex)')
_ = plt.xlabel('z_true')
_ = plt.ylabel('z_pred')
plt.subplots_adjust(right=1.)
plt.scatter(x=holdout_track_params['phi'], y=holdout_track_params['phi_pred'],
alpha=0.2)
_ = plt.title(r'$\theta$')
_ = plt.xlabel(r'$\theta$_true')
_ = plt.ylabel(r'$\theta$_pred')
holdout_track_params['z_squared_error'] = holdout_track_params.eval(
'(z - z_pred) ** 2'
)
holdout_track_params['phi_squared_error'] = holdout_track_params.eval(
'(phi - phi_pred) ** 2'
)
holdout_track_params[['z_squared_error', 'phi_squared_error']].sum()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: There are various application handlers that can be used to build up Bokeh documents. For example, there is a ScriptHandler that uses the code from a .py file to produce Bokeh documents. This is the handler that is used when we run bokeh serve app.py. Here we are going to use the lesser-known FunctionHandler, that gets configured with a plain Python function to build up a document.
Step3: Now we can display our application using show, which will automatically create an Application that wraps modify_doc using FunctionHandler. The end result is that the Bokeh server will call modify_doc to build new documents for every new sessions that is opened.
|
<ASSISTANT_TASK:>
Python Code:
import yaml
from bokeh.layouts import column
from bokeh.models import ColumnDataSource, Slider
from bokeh.plotting import figure
from bokeh.themes import Theme
from bokeh.io import show, output_notebook
from bokeh.sampledata.sea_surface_temperature import sea_surface_temperature
output_notebook()
def modify_doc(doc):
df = sea_surface_temperature.copy()
source = ColumnDataSource(data=df)
plot = figure(x_axis_type='datetime', y_range=(0, 25),
y_axis_label='Temperature (Celsius)',
title="Sea Surface Temperature at 43.18, -70.43")
plot.line('time', 'temperature', source=source)
def callback(attr, old, new):
if new == 0:
data = df
else:
data = df.rolling('{0}D'.format(new)).mean()
source.data = ColumnDataSource(data=data).data
slider = Slider(start=0, end=30, value=0, step=1, title="Smoothing by N Days")
slider.on_change('value', callback)
doc.add_root(column(slider, plot))
    doc.theme = Theme(json=yaml.load("""
        attrs:
            Figure:
                background_fill_color: "#DDDDDD"
                outline_line_color: white
                toolbar_location: above
                height: 500
                width: 800
            Grid:
                grid_line_dash: [6, 4]
                grid_line_color: white
    """, Loader=yaml.FullLoader))
show(modify_doc)
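The slider callback smooths with a time-based rolling mean (`df.rolling('ND').mean()`), where the window is a trailing span of N days rather than N rows. The core pandas call on a tiny stand-in series:

```python
import pandas as pd

idx = pd.date_range("2020-01-01", periods=5, freq="D")
s = pd.Series([0.0, 10.0, 20.0, 30.0, 40.0], index=idx)
smoothed = s.rolling("3D").mean()      # trailing 3-day window, as in the callback
print(smoothed.tolist())               # [0.0, 5.0, 10.0, 20.0, 30.0]
```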
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Overall, precision and recall are balanced and roughly equal. Better models are better in both P and R.
Step2: Find the smallest cluster and print phrases it contains
Step3: Large clusters form (this one has 934 unigrams and NPs) that share a word (e.g. bad occurs in 726 of them), and even though they are not pure (e.g. good also appears in that cluster 84 times), the vast majority of "bad" items end up in cluster 5, which starts to correspond to negative sentiment. This is something the classifier can pick up on.
Step4: Let's find if there is a positive sentiment cluster
Step5: Does the same hold for Turian vectors?
Step6: Is it OK to use accuracy instead of Averaged F1 score?
Step7: Are neighbours of words other words, and is there grouping by PoS tag?
|
<ASSISTANT_TASK:>
Python Code:
query_dict = {'expansions__vectors__rep': 0,
'expansions__k':3,
'labelled':'amazon_grouped-tagged',
'expansions__use_similarity': 0,
'expansions__neighbour_strategy':'linear',
'expansions__vectors__dimensionality': 100,
'document_features_ev': 'AN+NN',
'document_features_tr': 'J+N+AN+NN',
'expansions__allow_overlap': False,
'expansions__entries_of': None,
'expansions__vectors__algorithm': 'glove',
'expansions__vectors__composer__in': ['Left'],
'expansions__vectors__unlabelled': 'wiki',
'expansions__vectors__unlabelled_percentage':100,
'expansions__decode_handler': 'SignifiedOnlyFeatureHandler',
'expansions__noise': 0}
ids = Experiment.objects.filter(**query_dict).order_by('expansions__vectors__unlabelled_percentage',
'expansions__vectors__composer').values_list('id', flat=True)
ids
get_ci(ids[0])[:-1]
results = Results.objects.get(id=ids[0], classifier='MultinomialNB')
pred = results.predictions
gold = results.gold
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.utils.multiclass import unique_labels
print(classification_report(gold, pred))
sns.set_style('white')
plot_confusion_matrix(gold, pred)
path = '../FeatureExtractionToolkit/word2vec_vectors/composed/AN_NN_word2vec-wiki_100percent-rep0_Add.events.filtered.strings.kmeans2000'
df = pd.read_hdf(path, key='clusters')
counts = df.clusters.value_counts()
g = sns.distplot(counts.values, kde_kws={'cut':True})
g.set(xlim=(0, None))
plt.title('Distribution of cluster sizes, k=2000');
counts.describe()
df[df.clusters==counts.argmin()].head(20)
df[df.clusters == 5]
# cluster 5 (negative sentiment), 2 (royalty), 8 (cheap, expensive) are very sensible
# cluster 3 ('arm'), 1 ('product'), 15 (hot), 16 (playing) are dominated by a single word (may contain multiple senses, e.g. hot water, hot waiter)
# cluster 6 (grand slam, grand prix, door slam) dominated by a few words and a polysemous word bridging senses
# cluster 10- film characters + misc
# 11 - sentiment, mix of positive and negative
# 13- named entities
# 14- arche, tower, veranda + related words + other senses (arch enemy)
from collections import Counter
Counter([str(x).split('_')[0] for x in df[df.clusters == 5].index]).most_common(10)
from discoutils.thesaurus_loader import Vectors as vv
# not quite the same vectors (15% vs 100%), but that's all I've got on this machine
v = vv.from_tsv('../FeatureExtractionToolkit/word2vec_vectors/composed/AN_NN_word2vec-wiki_15percent-rep0_Add.events.filtered.strings')
v.init_sims(n_neighbors=30)
v.get_nearest_neighbours('bad/J')[:5]
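`get_nearest_neighbours` is backed by a similarity index over the vectors (the `init_sims` call above builds it). The underlying computation is cosine-ranked lookup — a self-contained sketch on a made-up toy vocabulary:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# toy vocabulary with made-up 2-d vectors
vocab = {"bad/J": [1.0, 0.1], "awful/J": [0.9, 0.2], "good/J": [-1.0, 0.1]}
query = vocab["bad/J"]
neighbours = sorted(((w, cosine(query, v)) for w, v in vocab.items() if w != "bad/J"),
                    key=lambda t: -t[1])
print(neighbours[0][0])  # 'awful/J'
```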
cluster_num = df.ix['good/J_guy/N'][0]
print(cluster_num)
df[df.clusters == cluster_num]
Counter([str(x).split('_')[0] for x in df[df.clusters == cluster_num].index]).most_common(10)
cluster_num = df.ix['good/J_movie/N'][0]
print(cluster_num)
df[df.clusters == cluster_num]
Counter([str(x).split('_')[1] for x in df[df.clusters == cluster_num].index]).most_common(10)
df[df.clusters == counts.argmax()] # these appear to be names, they are 99% unigrams
path = '../FeatureExtractionToolkit/socher_vectors/composed/AN_NN_turian_Socher.events.filtered.strings.kmeans2000'
ddf = pd.read_hdf(path, key='clusters')
cluster_num = ddf.ix['bad/J_guy/N'][0]
print(cluster_num)
ddf[ddf.clusters == cluster_num]
Counter([str(x).split('_')[1] for x in ddf[ddf.clusters == cluster_num].index]).most_common(10)
gaps = []
for r in Results.objects.filter(classifier=CLASSIFIER):
gap = r.accuracy_mean - r.macrof1_mean
if abs(gap) > 0.1:
print(r.id.id)
gaps.append(gap)
plt.hist(gaps);
from discoutils.thesaurus_loader import Vectors
from discoutils.tokens import DocumentFeature
v = Vectors.from_tsv('../FeatureExtractionToolkit/word2vec_vectors/word2vec-wiki-15perc.unigr.strings.rep0')
from random import sample
sampled_words = sample(list(v.keys()), 5000)
v.init_sims(n_neighbors=100)
data = []
for w in sampled_words:
doc_feat = DocumentFeature.from_string(w)
if doc_feat.tokens[0].pos == 'N' and np.random.uniform() < 0.8:
# too many nouns, ignore some of them
continue
neigh = v.get_nearest_neighbours(w)
for rank, (n, sim) in enumerate(neigh):
pospos = doc_feat.tokens[0].pos + DocumentFeature.from_string(n).tokens[0].pos
data.append([''.join(pospos), sim, rank])
df = pd.DataFrame(data, columns='pospos sim rank'.split())
mask = df.pospos.str.len() == 2
df = df[mask]
df.pospos.value_counts(), df.shape
g = sns.FacetGrid(df, col='pospos', col_wrap=3);
g.map(plt.hist, 'sim');
g = sns.FacetGrid(df, col='pospos', col_wrap=3);
g.map(plt.hist, 'rank');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: McWilliams performed freely-evolving 2D turbulence ($R_d = \infty$, $\beta = 0$) experiments on a $2\pi \times 2\pi$ periodic box.
Step2: Initial condition
Step3: Running the model
Step4: The genius of McWilliams (1984) was that he showed that the initial random vorticity field organizes itself into strong coherent vortices. This is true in a significant part of the parameter space. This was previously suspected but unproven, mainly because people did not have the computer resources to run the simulation long enough. Thirty years later we can perform such simulations in a couple of minutes on a laptop!
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import pyqg
# create the model object
m = pyqg.BTModel(L=2.*np.pi, nx=256,
beta=0., H=1., rek=0., rd=None,
tmax=40, dt=0.001, taveint=1,
ntd=4)
# in this example we used ntd=4, four threads
# if your machine has more (or fewer) cores available, you could try changing it
# generate McWilliams 84 IC condition
fk = m.wv != 0
ckappa = np.zeros_like(m.wv2)
ckappa[fk] = np.sqrt( m.wv2[fk]*(1. + (m.wv2[fk]/36.)**2) )**-1
nhx,nhy = m.wv2.shape
Pi_hat = np.random.randn(nhx,nhy)*ckappa +1j*np.random.randn(nhx,nhy)*ckappa
Pi = m.ifft( Pi_hat[np.newaxis,:,:] )
Pi = Pi - Pi.mean()
Pi_hat = m.fft( Pi )
KEaux = m.spec_var( m.wv*Pi_hat )
pih = ( Pi_hat/np.sqrt(KEaux) )
qih = -m.wv2*pih
qi = m.ifft(qih)
# initialize the model with that initial condition
m.set_q(qi)
# define a quick function for plotting and visualize the initial condition
def plot_q(m, qmax=40):
fig, ax = plt.subplots()
pc = ax.pcolormesh(m.x,m.y,m.q.squeeze(), cmap='RdBu_r')
pc.set_clim([-qmax, qmax])
ax.set_xlim([0, 2*np.pi])
ax.set_ylim([0, 2*np.pi]);
ax.set_aspect(1)
plt.colorbar(pc)
plt.title('Time = %g' % m.t)
plt.show()
plot_q(m)
for _ in m.run_with_snapshots(tsnapstart=0, tsnapint=10):
plot_q(m)
energy = m.get_diagnostic('KEspec')
enstrophy = m.get_diagnostic('Ensspec')
# this makes it easy to calculate an isotropic spectrum
from pyqg import diagnostic_tools as tools
kr, energy_iso = tools.calc_ispec(m,energy.squeeze())
_, enstrophy_iso = tools.calc_ispec(m,enstrophy.squeeze())
ks = np.array([3.,80])
es = 5*ks**-4
plt.loglog(kr,energy_iso)
plt.loglog(ks,es,'k--')
plt.text(2.5,.0001,r'$k^{-4}$',fontsize=20)
plt.ylim(1.e-10,1.e0)
plt.xlabel('wavenumber')
plt.title('Energy Spectrum')
ks = np.array([3.,80])
es = 5*ks**(-5./3)
plt.loglog(kr,enstrophy_iso)
plt.loglog(ks,es,'k--')
plt.text(5.5,.01,r'$k^{-5/3}$',fontsize=20)
plt.ylim(1.e-3,1.e0)
plt.xlabel('wavenumber')
plt.title('Enstrophy Spectrum')
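pyqg's `tools.calc_ispec` handled the ring-averaging above; as a rough, self-contained sketch of the same idea (pure NumPy, illustrative function name — not the pyqg implementation), the isotropic spectrum sums spectral power over annuli of constant wavenumber magnitude:

```python
import numpy as np

def isotropic_spectrum(field2d, L=2 * np.pi):
    # normalize the 2D FFT so that sum(|fh|**2) equals the mean square of the field
    n = field2d.shape[0]
    fh = np.fft.fft2(field2d) / n**2
    dk = 2 * np.pi / L                 # spacing of physical wavenumbers
    k = dk * n * np.fft.fftfreq(n)     # integer mode numbers times dk
    kx, ky = np.meshgrid(k, k, indexing='ij')
    kr = np.sqrt(kx**2 + ky**2)
    nbins = n // 2                     # corner modes beyond this ring are dropped in this sketch
    spec = np.zeros(nbins)
    for i in range(nbins):             # sum power over the ring i*dk <= |k| < (i+1)*dk
        ring = (kr >= i * dk) & (kr < (i + 1) * dk)
        spec[i] = np.sum(np.abs(fh[ring])**2)
    return (np.arange(nbins) + 0.5) * dk, spec
```

A field containing a single Fourier mode then lands in the expected wavenumber bin, which is a cheap sanity check before reading slopes off the spectra above.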
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading data
Step2: Variables used in training
Step3: Metric definition
Step4: Compute threshold vs metric quality
Step5: The best quality
Step6: Hyperparameters optimization algorithms
Step7: Grid search with folding scorer
Step8: Print best parameters
Step9: Print all qualities for used parameters
Step11: Grid search with user-defined scorer
Step12: Print all tried combinations of parameters and quality
Step13: Results comparison
Step14: ROCs
Step15: Metric
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
!cd toy_datasets; wget -O magic04.data -nc https://archive.ics.uci.edu/ml/machine-learning-databases/magic/magic04.data
import numpy, pandas
from rep.utils import train_test_split
from sklearn.metrics import roc_auc_score
columns = ['fLength', 'fWidth', 'fSize', 'fConc', 'fConc1', 'fAsym', 'fM3Long', 'fM3Trans', 'fAlpha', 'fDist', 'g']
data = pandas.read_csv('toy_datasets/magic04.data', names=columns)
labels = numpy.array(data['g'] == 'g', dtype=int)
data = data.drop('g', axis=1)
train_data, test_data, train_labels, test_labels = train_test_split(data, labels)
list(data.columns)
features = list(set(columns) - {'g'})
from rep.report import metrics
def AMS(s, b, s_norm=sum(test_labels == 1), b_norm=6*sum(test_labels == 0)):
return s * s_norm / numpy.sqrt(b * b_norm + 10.)
optimal_AMS = metrics.OptimalMetric(AMS)
sum(test_labels == 1),sum(test_labels == 0)
probs_rand = numpy.ndarray((1000, 2))
probs_rand[:, 1] = numpy.random.random(1000)
probs_rand[:, 0] = 1 - probs_rand[:, 1]
labels_rand = numpy.random.randint(0, high=2, size=1000)
optimal_AMS.plot_vs_cut(labels_rand, probs_rand)
optimal_AMS(labels_rand, probs_rand)
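Under the hood, an optimal-metric wrapper like the one above is just a threshold scan over the predicted signal probability, keeping the best figure of merit. A minimal self-contained sketch (function names and the normalisation constants are illustrative, not taken from the dataset):

```python
import numpy as np

def ams(s, b, s_norm=100., b_norm=600.):
    # same shape of figure of merit as the AMS defined above
    return s * s_norm / np.sqrt(b * b_norm + 10.)

def best_threshold(labels, signal_proba, n_cuts=50):
    labels = np.asarray(labels)
    signal_proba = np.asarray(signal_proba)
    best_cut, best_val = 0., -np.inf
    for cut in np.linspace(0., 1., n_cuts):
        passed = signal_proba >= cut
        s = passed[labels == 1].mean()   # signal efficiency at this cut
        b = passed[labels == 0].mean()   # background efficiency at this cut
        val = ams(s, b)
        if val > best_val:
            best_cut, best_val = cut, val
    return best_cut, best_val
```

On perfectly separable toy predictions the scan picks a cut strictly between the two probability clusters, with the background efficiency driven to zero.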
from rep.metaml import GridOptimalSearchCV
from rep.metaml.gridsearch import RandomParameterOptimizer, FoldingScorer
from rep.estimators import SklearnClassifier
from sklearn.ensemble import AdaBoostClassifier
from collections import OrderedDict
# define grid parameters
grid_param = OrderedDict()
grid_param['n_estimators'] = [30, 50]
grid_param['learning_rate'] = [0.2, 0.1, 0.05]
# use random hyperparameter optimization algorithm
generator = RandomParameterOptimizer(grid_param)
# define folding scorer
scorer = FoldingScorer(optimal_AMS, folds=6, fold_checks=4)
grid_sk = GridOptimalSearchCV(SklearnClassifier(AdaBoostClassifier(), features=features), generator, scorer)
grid_sk.fit(data, labels)
grid_sk.generator.best_params_
grid_sk.generator.print_results()
from sklearn import clone
def generate_scorer(test, test_labels, test_weight=None):
"""Generate a scorer which calculates the metric on a fixed test dataset."""
def custom(base_estimator, params, X, y, sample_weight=None):
cl = clone(base_estimator)
cl.set_params(**params)
cl.fit(X, y)
res = optimal_AMS(test_labels, cl.predict_proba(test), sample_weight)
return res
return custom
# define grid parameters
grid_param = OrderedDict()
grid_param['n_estimators'] = [30, 50]
grid_param['learning_rate'] = [0.2, 0.1, 0.05]
grid_param['features'] = [features[:5], features[:8]]
# define random hyperparameter optimization algorithm
generator = RandomParameterOptimizer(grid_param)
# define specific scorer
scorer = generate_scorer(test_data, test_labels)
grid = GridOptimalSearchCV(SklearnClassifier(clf=AdaBoostClassifier(), features=features), generator, scorer)
grid.fit(train_data, train_labels)
len(train_data), len(test_data)
grid.generator.print_results()
from rep.report import ClassificationReport
from rep.data.storage import LabeledDataStorage
lds = LabeledDataStorage(test_data, test_labels)
classifiers = {'grid_fold': grid_sk.fit_best_estimator(train_data[features], train_labels),
'grid_test_dataset': grid.fit_best_estimator(train_data[features], train_labels) }
report = ClassificationReport(classifiers, lds)
report.roc().plot()
report.metrics_vs_cut(AMS, metric_label='AMS').plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
Step3: And we can see the characters encoded as integers.
Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
Step5: Making training mini-batches
Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
Step7: If you implemented get_batches correctly, the above output should look something like
Step8: LSTM Cell
Step9: RNN Output
Step10: Training loss
Step11: Optimizer
Step12: Build the network
Step13: Hyperparameters
Step14: Time for training
Step15: Saved checkpoints
Step16: Sampling
Step17: Here, pass in the path to a checkpoint and sample from the network.
|
<ASSISTANT_TASK:>
Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
encoded[:100]
len(vocab)
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the batch size and number of batches we can make
batch_size = n_seqs * n_steps
n_batches = len(arr)//batch_size
# Keep only enough characters to make full batches
arr = arr[:n_batches * batch_size]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:, n:n+n_steps]
# The targets, shifted by one
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
yield x, y
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
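Since the real batches depend on the downloaded text, here is a tiny self-contained copy of the same slicing logic (`toy_batches` mirrors `get_batches` above) that makes the "targets are inputs shifted by one, wrapping around" convention easy to verify on a toy array:

```python
import numpy as np

def toy_batches(arr, n_seqs, n_steps):
    # same logic as get_batches above, on a plain integer array
    batch_size = n_seqs * n_steps
    n_batches = len(arr) // batch_size
    arr = arr[:n_batches * batch_size].reshape((n_seqs, -1))
    for n in range(0, arr.shape[1], n_steps):
        x = arr[:, n:n + n_steps]
        y = np.zeros_like(x)
        y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
        yield x, y
```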
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
def lstm_cell(lstm_size, keep_prob):
cell = tf.contrib.rnn.NASCell(lstm_size, reuse=tf.get_variable_scope().reuse)
return tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
# Use a basic LSTM cell
#lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
#drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
# cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
# MultiRNNCell([BasicLSTMCell(...) for _ in range(num_layers)])
rnn_cells = tf.contrib.rnn.MultiRNNCell([lstm_cell(lstm_size,keep_prob) for _ in range(num_layers)], state_is_tuple = True)
initial_state = rnn_cells.zero_state(batch_size, tf.float32)
return rnn_cells, initial_state
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
x: Input tensor
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# That is, the shape should be batch_size*num_steps rows by lstm_size columns
seq_output = tf.concat(lstm_output, axis=1)
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.matmul(x, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name='predictions')
return out, logits
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per batch_size per step
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
# Softmax cross entropy loss
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
return loss
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optmizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
batch_size = 100 # Sequences per batch
num_steps = 100 # Number of sequence steps per batch
lstm_size = 512 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.001 # Learning rate
keep_prob = 0.5 # Dropout keep probability
epochs = 4
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
tf.train.get_checkpoint_state('checkpoints')
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
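A quick standalone sanity check of the top-n trick used above: zero out everything but the `top_n` largest probabilities, renormalise, and sample. With `top_n=1` the draw becomes deterministic, because only the single largest probability survives (`pick_top_n_demo` is an illustrative copy, not the function above):

```python
import numpy as np

def pick_top_n_demo(preds, vocab_size, top_n=5):
    p = np.squeeze(np.array(preds, dtype=float))
    p[np.argsort(p)[:-top_n]] = 0   # keep only the top_n probabilities
    p = p / np.sum(p)               # renormalise before sampling
    return int(np.random.choice(vocab_size, 1, p=p)[0])
```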
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="venkat is a good kid")
print(samp)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Gippsland Basin Model
Step2: While we could hard-code parameter variations here, it is much easier to store our statistical information in a csv file, so we load that instead. This file accompanies the GBasin_Ve1_V4 model in the pynoddy directory.
Step3: Generate randomised model realisations
Step4: A few utility functions for visualising uncertainty have been included in the UncertaintyAnalysis class, and can be used to gain an understanding of the most uncertain parts of the Gippsland Basin. The probability voxets for each lithology can also be accessed using ua.p_block[lithology_id], and the information entropy voxset accessed using ua.e_block.
Step5: It is immediately apparent (and not particularly surprising) that uncertainty in the Gippsland Basin model is concentrated around the thin (but economically interesting) formations comprising the La Trobe and Strzelecki Groups. The faults in the model also contribute to this uncertainty, though not by a huge amount.
|
<ASSISTANT_TASK:>
Python Code:
from IPython.core.display import HTML
css_file = 'pynoddy.css'
HTML(open(css_file, "r").read())
%matplotlib inline
#import the ususal libraries + the pynoddy UncertaintyAnalysis class
import sys, os
# determine path of repository to set paths corretly below
repo_path = os.path.realpath('/Users/flow/git/pynoddy/')
sys.path.append(repo_path)
sys.path.append(os.path.join(repo_path, "pynoddy/experiment"))
import pynoddy
# from pynoddy.experiment.UncertaintyAnalysis import UncertaintyAnalysis
# adjust some settings for matplotlib
from matplotlib import rcParams
# print rcParams
rcParams['font.size'] = 15
import pynoddy.history
import pynoddy.experiment # .uncertainty_analysis
rcParams.update({'font.size': 20})
import importlib
importlib.reload(pynoddy.history)
importlib.reload(pynoddy.output)
import pynoddy.experiment.uncertainty_analysis
importlib.reload(pynoddy.experiment.uncertainty_analysis)
importlib.reload(pynoddy)
# the model itself is now part of the repository, in the examples directory:
history_file = os.path.join(repo_path, "examples/GBasin_Ve1_V4.his")
params = os.path.join(repo_path,"examples/gipps_params.csv")
# %%timeit # Uncomment to test execution time
ua = pynoddy.experiment.uncertainty_analysis.UncertaintyAnalysis(history_file, params)
ua.estimate_uncertainty(100,verbose=False)
ua.plot_section(direction='x',data=ua.block)
ua.plot_entropy(direction='x')
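The information entropy voxset is just per-voxel Shannon entropy over the lithology probabilities; a hedged pure-NumPy sketch of that computation (illustrative names, not the pynoddy API):

```python
import numpy as np

def voxel_entropy(prob_blocks):
    # prob_blocks: list of arrays, one per lithology, summing to 1 voxel-wise
    p = np.stack(prob_blocks, axis=0)
    with np.errstate(divide='ignore', invalid='ignore'):
        plogp = np.where(p > 0, p * np.log2(p), 0.0)   # 0*log(0) -> 0
    return -np.sum(plogp, axis=0)
```

Entropy is highest (1 bit for two lithologies) where the model is maximally undecided, and zero where one lithology is certain — exactly the pattern visible around the thin formations above.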
ua.extent_x = 29000
ua.extent_y = 21600
ua.extent_z = 4500
output_path = os.path.join(repo_path,"sandbox/GBasin_Uncertainty")
ua.export_to_vtk(vtk_filename=output_path,data=ua.e_block)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Google Colab & Kaggle integration
Step2: Use %%ampl_eval to evaluate AMPL commands
Step3: Use %%writefile to create files
Step4: Use %%ampl_eval to run the script cut2.run
|
<ASSISTANT_TASK:>
Python Code:
!pip install -q amplpy ampltools
MODULES=['ampl', 'coin']
from ampltools import cloud_platform_name, ampl_notebook
from amplpy import AMPL, register_magics
if cloud_platform_name() is None:
ampl = AMPL() # Use local installation of AMPL
else:
ampl = ampl_notebook(modules=MODULES) # Install AMPL and use it
register_magics(ampl_object=ampl) # Evaluate %%ampl_eval cells with ampl.eval()
%%ampl_eval
option version;
%%writefile cut2.mod
problem Cutting_Opt;
# ----------------------------------------
param nPAT integer >= 0, default 0;
param roll_width;
set PATTERNS = 1..nPAT;
set WIDTHS;
param orders {WIDTHS} > 0;
param nbr {WIDTHS,PATTERNS} integer >= 0;
check {j in PATTERNS}: sum {i in WIDTHS} i * nbr[i,j] <= roll_width;
var Cut {PATTERNS} integer >= 0;
minimize Number: sum {j in PATTERNS} Cut[j];
subject to Fill {i in WIDTHS}:
sum {j in PATTERNS} nbr[i,j] * Cut[j] >= orders[i];
problem Pattern_Gen;
# ----------------------------------------
param price {WIDTHS} default 0;
var Use {WIDTHS} integer >= 0;
minimize Reduced_Cost:
1 - sum {i in WIDTHS} price[i] * Use[i];
subject to Width_Limit:
sum {i in WIDTHS} i * Use[i] <= roll_width;
%%writefile cut.dat
data;
param roll_width := 110 ;
param: WIDTHS: orders :=
20 48
45 35
50 24
55 10
75 8 ;
%%writefile cut2.run
# ----------------------------------------
# GILMORE-GOMORY METHOD FOR
# CUTTING STOCK PROBLEM
# ----------------------------------------
option solver cbc;
option solution_round 6;
model cut2.mod;
data cut.dat;
problem Cutting_Opt;
option relax_integrality 1;
option presolve 0;
problem Pattern_Gen;
option relax_integrality 0;
option presolve 1;
let nPAT := 0;
for {i in WIDTHS} {
let nPAT := nPAT + 1;
let nbr[i,nPAT] := floor (roll_width/i);
let {i2 in WIDTHS: i2 <> i} nbr[i2,nPAT] := 0;
};
repeat {
solve Cutting_Opt;
let {i in WIDTHS} price[i] := Fill[i].dual;
solve Pattern_Gen;
if Reduced_Cost < -0.00001 then {
let nPAT := nPAT + 1;
let {i in WIDTHS} nbr[i,nPAT] := Use[i];
}
else break;
};
display nbr;
display Cut;
option Cutting_Opt.relax_integrality 0;
option Cutting_Opt.presolve 10;
solve Cutting_Opt;
display Cut;
%%ampl_eval
commands cut2.run;
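Outside AMPL, the seed patterns built in the `for {i in WIDTHS}` loop above (one pattern per width, `floor(roll_width/i)` pieces of it) are easy to sanity-check with a short Python sketch on the same data; the helper names here are illustrative:

```python
roll_width = 110
widths = [20, 45, 50, 55, 75]   # same data as cut.dat

def seed_patterns(widths, roll_width):
    # one starting pattern per width: as many pieces of that width as fit on a roll
    pats = []
    for w in widths:
        pat = {w2: 0 for w2 in widths}
        pat[w] = roll_width // w
        pats.append(pat)
    return pats

def is_feasible(pattern, roll_width):
    # a pattern is feasible if its pieces fit on a single roll
    return sum(w * n for w, n in pattern.items()) <= roll_width
```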
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Goal
Step3: Configure ES parameters
Step4: ES returns key-value RDD where key is ID of the document, and value is content of _source field
Step5: Every JSON document represents a connection between a source IP and destination IP.
Step6: List all the unique ip addresses
Step7: Create two columns
Step8: Join the two RDD's to create a dataframe like structure
Step9: Replace missing values with zeroes
Step10: Gather statistics on our dataset
Step11: Identify IPs that have unusual number of outgoing or incoming connections
Step12: Clean up
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
sys.path.append(os.environ["SPARK_HOME"] + "/python/lib/py4j-0.10.4-src.zip")
sys.path.append(os.environ["SPARK_HOME"] + "/python/lib/pyspark.zip")
from pyspark import SparkConf, SparkContext
sconf = SparkConf()
sconf.setAppName("ES-Spark Integration")
sconf.setMaster("spark://snehasish-barmans-macbook.local:7077")
sc = SparkContext.getOrCreate(conf = sconf)
print sc
print sc.version
def getESQuery():
query = {
"query": {
"filtered": {
"query": {
"match_all": {}
},
"filter": {
"bool": {
"must": [
{
"range": {
"@timestamp" : {
"gte" : "now-1h",
"lt" : "now"
}
}
}
],
"should": [
{ "term": { "src_addr_space": "private" } },
{ "term": { "dst_addr_space": "private" } }
]
}
}
}
},
"_source": [ "netflow.ipv4_src_addr", "netflow.ipv4_dst_addr" ]
}
return query
ES_HOST = "192.168.19.156"
ES_PORT = "9200"
ES_INDEX = "seceon_flows"
ES_QUERY = getESQuery()
es_config = {"es.nodes": ES_HOST,
"es.port": ES_PORT,
"es.resource": ES_INDEX,
"es.query": ES_QUERY,
"es.nodes.discovery": "false",
"es.nodes.wan.only": "true"
}
esrdd = sc.newAPIHadoopRDD(inputFormatClass = "org.elasticsearch.hadoop.mr.EsInputFormat",
keyClass = "org.apache.hadoop.io.NullWritable",
valueClass = "org.elasticsearch.hadoop.mr.LinkedMapWritable",
conf = es_config)
print esrdd.getNumPartitions()
esrdd = esrdd.coalesce(4)
print esrdd.getNumPartitions()
esrdd.persist()
print esrdd.is_cached
print esrdd.getStorageLevel()
rdd1 = esrdd.sample(False, 0.01).mapValues( lambda x: {"src": x["netflow"]["ipv4_src_addr"],
"dst": x["netflow"]["ipv4_dst_addr"]} )
rdd1.take(4)
rdd1.countApprox(1000, 0.1)
print rdd1.flatMap( lambda x: [ x[1]["src"], x[1]["dst"] ] ).distinct().count()
print rdd1.flatMap( lambda x: [ x[1]["src"], x[1]["dst"] ] ).distinct().collect()
rdd_outgoing = rdd1.map( lambda x: (x[1]["src"], 1) ).reduceByKey( lambda acc, x: acc + x )
rdd_outgoing.collect()
rdd_incoming = rdd1.map( lambda x: (x[1]["dst"], 1) ).reduceByKey( lambda acc, x: acc + x )
rdd_incoming.collect()
rdd_joined = rdd_outgoing.fullOuterJoin(rdd_incoming)
rdd_joined.collect()
rdd_joined = rdd_joined.mapValues(lambda x: tuple([y if y is not None else 0 for y in x]))
rdd_joined.collect()
src_stats = rdd_joined.map(lambda x: x[1][0]).stats() # outgoing(src)
print src_stats
dst_stats = rdd_joined.map(lambda x: x[1][1]).stats() # incoming(dst)
print dst_stats
src_stats_bc = sc.broadcast(src_stats)
dst_stats_bc = sc.broadcast(dst_stats)
src_stats_bc.value
rdd_joined.filter(lambda x: x[1][0] > 0).filter( lambda x: (x[1][0] - src_stats_bc.value.mean()) > (3*src_stats_bc.value.stdev()) ).collect()
rdd_joined.filter(lambda x: x[1][1] > 0).filter( lambda x: (x[1][1] - dst_stats_bc.value.mean()) > (3*dst_stats_bc.value.stdev()) ).collect()
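Stripped of Spark, the filters above are the classic three-sigma rule on connection counts; a self-contained sketch with made-up counts (note that Spark's `stats().stdev()` is, to my understanding, the population standard deviation, hence `pstdev` here):

```python
import statistics

def three_sigma_outliers(counts):
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)   # population stdev, like Spark's stdev()
    return [c for c in counts if c - mean > 3 * stdev]
```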
src_stats_bc.unpersist()
dst_stats_bc.unpersist()
esrdd.unpersist()
print esrdd.is_cached
sc.stop()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you'd like to turn information status messages off
Step2: But it's generally a good idea to leave them on
Step3: You can set the default level by setting the environment variable MNE_LOGGING_LEVEL
Step4: Note that the location of the mne-python preferences file (for easier manual editing) can be found using mne.get_config_path().
Step5: By default logging messages print to the console, but look at mne.set_log_file() to save output to a file.
Step6: <div class="alert alert-info"><h4>Note</h4><p>The MNE sample dataset should be downloaded automatically, but be patient: it is a fairly large download.</p></div>
Step7: Look at the channels in raw
Step8: Read and plot a segment of raw data
Step9: Save a segment of 150s of raw data (MEG only)
Step10: Define and read epochs
Step11: Note that, by default, we use stim_channel='STI 014'. If you have a different
Step12: Events are stored as 2D numpy array where the first column is the time
Step13: Exclude some channels (original bads + 2 more)
Step14: The variable raw.info['bads'] is just a python list.
Step15: Alternatively one can restrict to magnetometers or gradiometers with
Step16: Define the baseline period
Step17: Define peak-to-peak rejection parameters for gradiometers, magnetometers
Step18: Read epochs
Step19: Get single epochs for one condition
Step20: epochs_data is a 3D array of dimension (55 epochs, 365 channels, 106 time points).
Step21: or if you want to keep all the information about the data you can save your
Step22: and read them later with
Step23: Compute evoked responses for auditory responses by averaging and plot it
Step24: .. topic
Step25: It is also possible to read evoked data stored in a fif file
Step26: Or another one stored in the same file
Step27: Two evoked objects can be contrasted using
Step28: To do a weighted sum based on the number of averages, which will give
Step29: Instead of dealing with mismatches in the number of averages, we can use
Step30: Time-Frequency
Step31: Compute induced power and phase-locking values and plot gradiometers
Step32: Inverse modeling
Step33: Read the inverse operator
Step34: Define the inverse parameters
Step35: Compute the inverse solution
Step36: Save the source time courses to disk
Step37: Now, let's compute dSPM on a raw file within a label
Step38: Compute inverse solution during the first 15s
Step39: Save result in stc files
Step40: What else can you do?
|
<ASSISTANT_TASK:>
Python Code:
import mne
mne.set_log_level('WARNING')
mne.set_log_level('INFO')
mne.set_config('MNE_LOGGING_LEVEL', 'WARNING', set_env=True)
mne.get_config_path()
from mne.datasets import sample # noqa
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
print(raw_fname)
raw = mne.io.read_raw_fif(raw_fname, add_eeg_ref=False)
print(raw)
print(raw.info)
print(raw.ch_names)
start, stop = raw.time_as_index([100, 115]) # 100 s to 115 s data segment
data, times = raw[:, start:stop]
print(data.shape)
print(times.shape)
data, times = raw[2:20:3, start:stop] # access underlying data
raw.plot()
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True,
exclude='bads')
raw.save('sample_audvis_meg_raw.fif', tmin=0, tmax=150, picks=picks,
overwrite=True)
events = mne.find_events(raw, stim_channel='STI 014')
print(events[:5])
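In MNE the three columns are (sample index, i.e. time in samples; previous trigger value; event id), so selecting one condition is plain NumPy row filtering — a tiny self-contained illustration with made-up events:

```python
import numpy as np

# hypothetical events, same layout as mne.find_events output
toy_events = np.array([[100, 0, 1],
                       [250, 0, 2],
                       [400, 0, 1]])
aud_l = toy_events[toy_events[:, 2] == 1]   # keep rows whose event id is 1
```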
mne.set_config('MNE_STIM_CHANNEL', 'STI101', set_env=True)
event_id = dict(aud_l=1, aud_r=2) # event trigger and conditions
tmin = -0.2 # start of each epoch (200ms before the trigger)
tmax = 0.5 # end of each epoch (500ms after the trigger)
raw.info['bads'] += ['MEG 2443', 'EEG 053']
picks = mne.pick_types(raw.info, meg=True, eeg=True, eog=True, stim=False,
exclude='bads')
mag_picks = mne.pick_types(raw.info, meg='mag', eog=True, exclude='bads')
grad_picks = mne.pick_types(raw.info, meg='grad', eog=True, exclude='bads')
baseline = (None, 0) # means from the first instant to t = 0
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
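The reject dictionary works by peak-to-peak amplitude: an epoch is dropped if any channel of a given type has a max-minus-min amplitude above its threshold. A rough standalone sketch of that check (not MNE's actual implementation):

```python
import numpy as np

def reject_epoch(epoch_data, threshold):
    # epoch_data: (n_channels, n_times); reject if any channel's
    # peak-to-peak amplitude exceeds the threshold
    return bool(np.any(np.ptp(epoch_data, axis=1) > threshold))
```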
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,
baseline=baseline, preload=False, reject=reject,
add_eeg_ref=False)
print(epochs)
epochs_data = epochs['aud_l'].get_data()
print(epochs_data.shape)
from scipy import io # noqa
io.savemat('epochs_data.mat', dict(epochs_data=epochs_data), oned_as='row')
epochs.save('sample-epo.fif')
saved_epochs = mne.read_epochs('sample-epo.fif')
evoked = epochs['aud_l'].average()
print(evoked)
evoked.plot()
max_in_each_epoch = [e.max() for e in epochs['aud_l']] # doctest:+ELLIPSIS
print(max_in_each_epoch[:4]) # doctest:+ELLIPSIS
evoked_fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked1 = mne.read_evokeds(
evoked_fname, condition='Left Auditory', baseline=(None, 0), proj=True)
evoked2 = mne.read_evokeds(
evoked_fname, condition='Right Auditory', baseline=(None, 0), proj=True)
contrast = mne.combine_evoked([evoked1, evoked2], weights=[0.5, -0.5])
contrast = mne.combine_evoked([evoked1, -evoked2], weights='equal')
print(contrast)
average = mne.combine_evoked([evoked1, evoked2], weights='nave')
print(contrast)
epochs_eq = epochs.copy().equalize_event_counts(['aud_l', 'aud_r'])[0]
evoked1, evoked2 = epochs_eq['aud_l'].average(), epochs_eq['aud_r'].average()
print(evoked1)
print(evoked2)
contrast = mne.combine_evoked([evoked1, -evoked2], weights='equal')
print(contrast)
import numpy as np # noqa
n_cycles = 2 # number of cycles in Morlet wavelet
freqs = np.arange(7, 30, 3) # frequencies of interest
from mne.time_frequency import tfr_morlet # noqa
power, itc = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles,
return_itc=True, decim=3, n_jobs=1)
power.plot([power.ch_names.index('MEG 1332')])
from mne.minimum_norm import apply_inverse, read_inverse_operator # noqa
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
inverse_operator = read_inverse_operator(fname_inv)
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM"
stc = apply_inverse(evoked, inverse_operator, lambda2, method)
stc.save('mne_dSPM_inverse')
fname_label = data_path + '/MEG/sample/labels/Aud-lh.label'
label = mne.read_label(fname_label)
from mne.minimum_norm import apply_inverse_raw # noqa
start, stop = raw.time_as_index([0, 15]) # read the first 15s of data
stc = apply_inverse_raw(raw, inverse_operator, lambda2, method, label,
start, stop)
stc.save('mne_dSPM_raw_inverse_Aud')
print("Done!")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implement the helper function which computes the probability $P_\ell^w$ that a received word $\boldsymbol{y}$ is exactly at Hamming distance $\ell$ from a codeword of weight $w$ after transmission of the zero codeword over a BSC with error probability $\delta$, with
Step2: Show performance and some bounds illustrating the decoding performance over the BSC of a binary linear block code with generator matrix
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import numpy.polynomial.polynomial as npp
from scipy.stats import norm
from scipy.special import comb
import matplotlib.pyplot as plt
def Plw(n,l,w,delta):
return np.sum([comb(w,l-r)*comb(n-w,r)*(delta**(w-l+2*r))*((1-delta)**(n-w+l-2*r)) for r in range(l+1)])
# weight enumerator polynomial
Aw = [0,0,0,4,6,8,8,4,1]
n = 10
dmin = np.nonzero(Aw)[0][0]
e = int(np.floor((dmin-1)/2))
delta_range = np.logspace(-6,-0.31,100)
Pcw_range = [np.sum([Aw[w]*np.sum([Plw(n,l,w,delta) for l in range(e+1)]) for w in range(len(Aw))]) for delta in delta_range]
Pcw_bound_range = [np.sum([comb(n,w)*((delta)**w)*((1-delta)**(n-w)) for w in range(e+1,n+1)]) for delta in delta_range]
P_F_range = np.array([1-np.sum([comb(n,w)*((delta)**w)*((1-delta)**(n-w)) for w in range(e+1)]) for delta in delta_range]) - np.array(Pcw_range)
# compute bound for ML decoding
Bhattacharyya_range = [2*np.sqrt(delta*(1-delta)) for delta in delta_range]
P_ML_bound_range = [npp.polyval(B, Aw) for B in Bhattacharyya_range]
fig = plt.figure(1,figsize=(12,7))
plt.loglog(delta_range, Pcw_range,'b-')
plt.loglog(delta_range, Pcw_bound_range,'g-')
plt.loglog(delta_range, P_F_range,'r-')
plt.loglog(delta_range, P_ML_bound_range,'k--')
plt.xlim((1,1e-6))
plt.ylim((1e-12,1))
plt.xlabel('BSC error rate $\delta$', fontsize=16)
plt.ylabel('Error rate', fontsize=16)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.grid(True)
plt.legend(['$P_{cw}$','Upper bound on $P_{cw}$', '$P_F$', 'Upper bound on $P_{\mathrm{ML}}$'], fontsize=14);
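As a quick sanity check on the `Plw` helper above: summing $P_\ell^w$ over all distances $\ell = 0,\dots,n$ must give exactly 1 for any codeword weight $w$, since every BSC error pattern produces exactly one distance. A standalone sketch of that check (using the standard library's `math.comb` in place of `scipy.special.comb`, with an explicit validity guard instead of relying on zero-valued binomials):

```python
from math import comb

def Plw(n, l, w, delta):
    # P_l^w: probability that the received word is at Hamming distance l from
    # a weight-w codeword, given the all-zero codeword was sent over BSC(delta).
    return sum(comb(w, l - r) * comb(n - w, r)
               * delta**(w - l + 2 * r) * (1 - delta)**(n - w + l - 2 * r)
               for r in range(l + 1) if 0 <= l - r <= w and r <= n - w)

n, w, delta = 10, 4, 0.1
total = sum(Plw(n, l, w, delta) for l in range(n + 1))
print(abs(total - 1.0) < 1e-9)  # True
```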
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Adversarial regularization for image classification
Step2: Import the libraries. We abbreviate neural_structured_learning as nsl.
Step3: Hyperparameters
Step4: The MNIST dataset
Step5: To keep the model numerically stable, map the normalize function over the dataset to scale pixel values into [0, 1]. After shuffling and batching the training set, convert the examples to feature tuples (image, label) for training the base model. A function for converting tuples back to dictionaries is also provided for later use.
Step7: The base model
Step8: Next, train and evaluate the base model.
Step9: We can see that the base model reaches 99% accuracy on the test set. Under "Robustness under adversarial perturbation" below, we will check how robust the model actually is.
Step10: We can now wrap the base model with AdversarialRegularization. We create a new base model (base_adv_model) here so that the existing base model (base_model) can be used for comparison later.
Step11: Next, compile, train, and evaluate the adversarially regularized model. You may see warnings like "Output missing from loss dictionary"; this is fine, because adv_model computes the total loss without relying on the base implementation.
Step12: We can see that the adversarially regularized model also performs very well (99% accuracy) on the test set.
Step13: Collect the models to evaluate in a dictionary, and create a metric object for each of them.
Step14: Here is the loop that generates perturbed examples and evaluates the models on them. The perturbed images, labels, and predictions are saved for visualization in the next section.
Step15: We can see that the base model's accuracy drops dramatically (from 99% to about 50%) when the input is adversarially perturbed, while the adversarially regularized model loses very little accuracy (from 99% to 95%). This shows that adversarial training is effective at improving model robustness.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install --quiet neural-structured-learning
import matplotlib.pyplot as plt
import neural_structured_learning as nsl
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
class HParams(object):
def __init__(self):
self.input_shape = [28, 28, 1]
self.num_classes = 10
self.conv_filters = [32, 64, 64]
self.kernel_size = (3, 3)
self.pool_size = (2, 2)
self.num_fc_units = [64]
self.batch_size = 32
self.epochs = 5
self.adv_multiplier = 0.2
self.adv_step_size = 0.2
self.adv_grad_norm = 'infinity'
HPARAMS = HParams()
datasets = tfds.load('mnist')
train_dataset = datasets['train']
test_dataset = datasets['test']
IMAGE_INPUT_NAME = 'image'
LABEL_INPUT_NAME = 'label'
def normalize(features):
features[IMAGE_INPUT_NAME] = tf.cast(
features[IMAGE_INPUT_NAME], dtype=tf.float32) / 255.0
return features
def convert_to_tuples(features):
return features[IMAGE_INPUT_NAME], features[LABEL_INPUT_NAME]
def convert_to_dictionaries(image, label):
return {IMAGE_INPUT_NAME: image, LABEL_INPUT_NAME: label}
train_dataset = train_dataset.map(normalize).shuffle(10000).batch(HPARAMS.batch_size).map(convert_to_tuples)
test_dataset = test_dataset.map(normalize).batch(HPARAMS.batch_size).map(convert_to_tuples)
def build_base_model(hparams):
    """Builds a model according to the architecture defined in `hparams`."""
inputs = tf.keras.Input(
shape=hparams.input_shape, dtype=tf.float32, name=IMAGE_INPUT_NAME)
x = inputs
for i, num_filters in enumerate(hparams.conv_filters):
x = tf.keras.layers.Conv2D(
num_filters, hparams.kernel_size, activation='relu')(
x)
if i < len(hparams.conv_filters) - 1:
# max pooling between convolutional layers
x = tf.keras.layers.MaxPooling2D(hparams.pool_size)(x)
x = tf.keras.layers.Flatten()(x)
for num_units in hparams.num_fc_units:
x = tf.keras.layers.Dense(num_units, activation='relu')(x)
pred = tf.keras.layers.Dense(hparams.num_classes)(x)
model = tf.keras.Model(inputs=inputs, outputs=pred)
return model
base_model = build_base_model(HPARAMS)
base_model.summary()
base_model.compile(
optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['acc'])
base_model.fit(train_dataset, epochs=HPARAMS.epochs)
results = base_model.evaluate(test_dataset)
named_results = dict(zip(base_model.metrics_names, results))
print('\naccuracy:', named_results['acc'])
adv_config = nsl.configs.make_adv_reg_config(
multiplier=HPARAMS.adv_multiplier,
adv_step_size=HPARAMS.adv_step_size,
adv_grad_norm=HPARAMS.adv_grad_norm
)
base_adv_model = build_base_model(HPARAMS)
adv_model = nsl.keras.AdversarialRegularization(
base_adv_model,
label_keys=[LABEL_INPUT_NAME],
adv_config=adv_config
)
train_set_for_adv_model = train_dataset.map(convert_to_dictionaries)
test_set_for_adv_model = test_dataset.map(convert_to_dictionaries)
adv_model.compile(
optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['acc'])
adv_model.fit(train_set_for_adv_model, epochs=HPARAMS.epochs)
results = adv_model.evaluate(test_set_for_adv_model)
named_results = dict(zip(adv_model.metrics_names, results))
print('\naccuracy:', named_results['sparse_categorical_accuracy'])
reference_model = nsl.keras.AdversarialRegularization(
base_model, label_keys=[LABEL_INPUT_NAME], adv_config=adv_config)
reference_model.compile(
optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['acc'])
models_to_eval = {
'base': base_model,
'adv-regularized': adv_model.base_model
}
metrics = {
name: tf.keras.metrics.SparseCategoricalAccuracy()
for name in models_to_eval.keys()
}
perturbed_images, labels, predictions = [], [], []
for batch in test_set_for_adv_model:
perturbed_batch = reference_model.perturb_on_batch(batch)
# Clipping makes perturbed examples have the same range as regular ones.
perturbed_batch[IMAGE_INPUT_NAME] = tf.clip_by_value(
perturbed_batch[IMAGE_INPUT_NAME], 0.0, 1.0)
y_true = perturbed_batch.pop(LABEL_INPUT_NAME)
perturbed_images.append(perturbed_batch[IMAGE_INPUT_NAME].numpy())
labels.append(y_true.numpy())
predictions.append({})
for name, model in models_to_eval.items():
y_pred = model(perturbed_batch)
metrics[name](y_true, y_pred)
predictions[-1][name] = tf.argmax(y_pred, axis=-1).numpy()
for name, metric in metrics.items():
print('%s model accuracy: %f' % (name, metric.result().numpy()))
batch_index = 0
batch_image = perturbed_images[batch_index]
batch_label = labels[batch_index]
batch_pred = predictions[batch_index]
batch_size = HPARAMS.batch_size
n_col = 4
n_row = (batch_size + n_col - 1) // n_col
print('accuracy in batch %d:' % batch_index)
for name, pred in batch_pred.items():
print('%s model: %d / %d' % (name, np.sum(batch_label == pred), batch_size))
plt.figure(figsize=(15, 15))
for i, (image, y) in enumerate(zip(batch_image, batch_label)):
y_base = batch_pred['base'][i]
y_adv = batch_pred['adv-regularized'][i]
plt.subplot(n_row, n_col, i+1)
plt.title('true: %d, base: %d, adv: %d' % (y, y_base, y_adv))
plt.imshow(tf.keras.utils.array_to_img(image), cmap='gray')
plt.axis('off')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-1', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
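The fill-in cells above all follow the same two-step pattern: select a property with `DOC.set_id(...)`, then record one or more values with `DOC.set_value(...)`. As a rough illustration only — `Doc` below is a toy stand-in, not the real pyesdoc notebook object — the pattern behaves roughly like:

```python
# Toy stand-in for the ES-DOC notebook object (NOT the real pyesdoc API):
# it records values keyed by the most recently selected property id.
class Doc:
    def __init__(self):
        self.values = {}
        self.current_id = None

    def set_id(self, prop_id):
        self.current_id = prop_id

    def set_value(self, value):
        # set_value may be called several times for multi-valued properties
        self.values.setdefault(self.current_id, []).append(value)

DOC = Doc()
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
DOC.set_value('Offline (clear sky)')
print(DOC.values)
```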
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Line with Gaussian noise
Step5: Write a function named plot_random_line that takes the same arguments as random_line, creates a random line using random_line, and then plots the x and y points using Matplotlib's scatter function
Step6: Use interact to explore the plot_random_line function using
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
def random_line(m, b, sigma, size=10):
Create a line y = m*x + b + N(0,sigma**2) between x=[-1.0,1.0]
Parameters
----------
m : float
The slope of the line.
b : float
The y-intercept of the line.
sigma : float
The standard deviation of the y direction normal distribution noise.
size : int
The number of points to create for the line.
Returns
-------
x : array of floats
The array of x values for the line with `size` points.
y : array of floats
The array of y values for the lines with `size` points.
# YOUR CODE HERE
raise NotImplementedError()
m = 0.0; b = 1.0; sigma=0.0; size=3
x, y = random_line(m, b, sigma, size)
assert len(x)==len(y)==size
assert list(x)==[-1.0,0.0,1.0]
assert list(y)==[1.0,1.0,1.0]
sigma = 1.0
m = 0.0; b = 0.0
size = 500
x, y = random_line(m, b, sigma, size)
assert np.allclose(np.mean(y-m*x-b), 0.0, rtol=0.1, atol=0.1)
assert np.allclose(np.std(y-m*x-b), sigma, rtol=0.1, atol=0.1)
def ticks_out(ax):
Move the ticks to the outside of the box.
ax.get_xaxis().set_tick_params(direction='out', width=1, which='both')
ax.get_yaxis().set_tick_params(direction='out', width=1, which='both')
def plot_random_line(m, b, sigma, size=10, color='red'):
Plot a random line with slope m, intercept b and size points.
# YOUR CODE HERE
raise NotImplementedError()
plot_random_line(5.0, -1.0, 2.0, 50)
assert True # use this cell to grade the plot_random_line function
# YOUR CODE HERE
raise NotImplementedError()
assert True # use this cell to grade the plot_random_line interact
<END_TASK>
|
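One way the `random_line` stub in the task above could be completed — a sketch only, using `np.linspace` for the x grid and `np.random.normal` for the noise (other samplers would do equally well); it satisfies the task's deterministic asserts when `sigma == 0`:

```python
import numpy as np

def random_line(m, b, sigma, size=10):
    # Evenly spaced x values between -1.0 and 1.0
    x = np.linspace(-1.0, 1.0, size)
    y = m * x + b
    if sigma > 0.0:
        # Add zero-mean Gaussian noise in the y direction
        y = y + np.random.normal(0.0, sigma, size)
    return x, y

x, y = random_line(0.0, 1.0, 0.0, size=3)
print(x.tolist(), y.tolist())
```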
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: read_html
Step2: Plotting
Step3: It would be nice to encode by color and plot all combinations of values, but this isn't easy with matplotlib. Instead, let's use seaborn (conda install seaborn).
Step4: Exercise
Step5: Classification
Step6: Try Different Classifiers
Step7: Which Classifier is Best?
Step8: Inaccuracy Score
Step9: Exercise
Step10: Clustering
Step11: Visualize Clusters
Step12: Accuracy
Step13: Exercise
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
df = pd.read_csv(url,names=['sepal_length',
'sepal_width',
'petal_length',
'petal_width',
'species'])
df.head()
df_w = pd.read_html('https://en.wikipedia.org/wiki/Iris_flower_data_set',header=0)[0]
df_w.head()
import pylab as plt
%matplotlib inline
plt.scatter(df.sepal_length, df.petal_length)
import seaborn as sns
sns.pairplot(df,vars=['sepal_length',
'sepal_width',
'petal_length',
'petal_width'],hue='species')
sns.swarmplot(x="species", y="petal_length", data=df)
from pandas.tools.plotting import radviz
radviz(df, "species",)
## Plot 1 Here
sns.violinplot(x="species", y="petal_length", data=df)
## Plot 2 Here
# NOTE: interactplot was deprecated and later removed from seaborn;
# this call only works with older seaborn releases
sns.interactplot("petal_length", 'petal_width', "sepal_width", data=df)
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets, svm
iris = datasets.load_iris()
X = iris.data
y = iris.target.astype(float)
# keep only two features and keep only two species
X = X[y != 0, :2]
y = y[y != 0]
X,y, X.shape
# fit the model
for fig_num, kernel in enumerate(('linear', 'rbf', 'poly')):
clf = svm.SVC(kernel=kernel, gamma=10)
clf.fit(X, y)
plt.figure(fig_num)
plt.clf()
plt.scatter(X[:, 0], X[:, 1], c=y, zorder=10)
plt.axis('tight')
x_min = X[:, 0].min()
x_max = X[:, 0].max()
y_min = X[:, 1].min()
y_max = X[:, 1].max()
XX, YY = np.mgrid[x_min:x_max:200j, y_min:y_max:200j]
Z = clf.decision_function(np.c_[XX.ravel(), YY.ravel()])
# Put the result into a color plot
Z = Z.reshape(XX.shape)
plt.pcolormesh(XX, YY, Z > 0, cmap=plt.cm.Paired)
plt.contour(XX, YY, Z, colors=['k', 'k', 'k'], linestyles=['--', '-', '--'],
levels=[-.5, 0, .5])
plt.title(kernel)
plt.show()
y_pred = clf.predict(X)
print(y,y_pred)
for kernel in ('linear', 'rbf', 'poly'):
clf = svm.SVC(kernel=kernel, gamma=10)
clf.fit(X, y)
y_pred = clf.predict(X)
print(kernel,np.mean(np.abs(y-y_pred))*100,'%')
## species==1
iris = datasets.load_iris()
X = iris.data
y = iris.target.astype(float)
# keep only two features and keep only two species
X = X[y != 1, :2] # changed here
y = y[y != 1] # changed here
# fit the model
for fig_num, kernel in enumerate(('linear', 'rbf', 'poly')):
clf = svm.SVC(kernel=kernel, gamma=10)
clf.fit(X, y)
plt.figure(fig_num)
plt.clf()
plt.scatter(X[:, 0], X[:, 1], c=y, zorder=10)
plt.axis('tight')
x_min = X[:, 0].min()
x_max = X[:, 0].max()
y_min = X[:, 1].min()
y_max = X[:, 1].max()
XX, YY = np.mgrid[x_min:x_max:200j, y_min:y_max:200j]
Z = clf.decision_function(np.c_[XX.ravel(), YY.ravel()])
# Put the result into a color plot
Z = Z.reshape(XX.shape)
plt.pcolormesh(XX, YY, Z > 0, cmap=plt.cm.Paired)
plt.contour(XX, YY, Z, colors=['k', 'k', 'k'], linestyles=['--', '-', '--'],
levels=[-.5, 0, .5])
plt.title(kernel)
y_pred = clf.predict(X)
print(kernel,np.mean(np.abs(y-y_pred))*100,'%')
plt.show()
## petals
iris = datasets.load_iris()
X = iris.data
y = iris.target.astype(float)
# keep only two features and keep only two species
X = X[y != 0, 2:] # changed here
y = y[y != 0]
# fit the model
for fig_num, kernel in enumerate(('linear', 'rbf', 'poly')):
clf = svm.SVC(kernel=kernel, gamma=10)
clf.fit(X, y)
plt.figure(fig_num)
plt.clf()
plt.scatter(X[:, 0], X[:, 1], c=y, zorder=10)
plt.axis('tight')
x_min = X[:, 0].min()
x_max = X[:, 0].max()
y_min = X[:, 1].min()
y_max = X[:, 1].max()
XX, YY = np.mgrid[x_min:x_max:200j, y_min:y_max:200j]
Z = clf.decision_function(np.c_[XX.ravel(), YY.ravel()])
# Put the result into a color plot
Z = Z.reshape(XX.shape)
plt.pcolormesh(XX, YY, Z > 0, cmap=plt.cm.Paired)
plt.contour(XX, YY, Z, colors=['k', 'k', 'k'], linestyles=['--', '-', '--'],
levels=[-.5, 0, .5])
plt.title(kernel)
y_pred = clf.predict(X)
print(kernel,np.mean(np.abs(y-y_pred))*100,'%')
plt.show()
from sklearn.cluster import KMeans, DBSCAN
iris = datasets.load_iris()
X = iris.data
y = iris.target.astype(float)
estimators = {'k_means_iris_3': KMeans(n_clusters=3),
'k_means_iris_8': KMeans(n_clusters=8),
'dbscan_iris_1': DBSCAN(eps=1)}
for name, est in estimators.items():
est.fit(X)
labels = est.labels_
df[name] = labels
sns.pairplot(df,vars=['sepal_length',
'sepal_width',
'petal_length',
'petal_width'],hue='dbscan_iris_1')
from sklearn.metrics import homogeneity_score
for name, est in estimators.items():
print('completeness', name, homogeneity_score(df[name],df['species']))
print('homogeneity', name, homogeneity_score(df['species'],df[name]))
## Algo One
from sklearn.cluster import AgglomerativeClustering, Birch
iris = datasets.load_iris()
X = iris.data
y = iris.target.astype(float)
estimators = {'k_means_iris_3': KMeans(n_clusters=3),
'k_means_iris_8': KMeans(n_clusters=8),
'dbscan_iris_1': DBSCAN(eps=1),
'AgglomerativeClustering': AgglomerativeClustering(n_clusters=3),
'Birch': Birch()}
for name, est in estimators.items():
est.fit(X)
labels = est.labels_
df[name] = labels
name='Birch'
sns.pairplot(df,vars=['sepal_length',
'sepal_width',
'petal_length',
'petal_width'],hue=name)
print('completeness', name, homogeneity_score(df[name],df['species']))
print('homogeneity', name, homogeneity_score(df['species'],df[name]))
## Algo Two
name='AgglomerativeClustering'
sns.pairplot(df,vars=['sepal_length',
'sepal_width',
'petal_length',
'petal_width'],hue=name)
print('completeness', name, homogeneity_score(df[name],df['species']))
print('homogeneity', name, homogeneity_score(df['species'],df[name]))
<END_TASK>
|
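Measuring error on the same points the classifier was fitted on (as the loops above do) can be optimistic. A hedged sketch of the same kernel comparison with a held-out split — `train_test_split` is the standard scikit-learn helper, but the 30% test fraction and the fixed random seed are arbitrary choices:

```python
import numpy as np
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split

iris = datasets.load_iris()
X = iris.data
y = iris.target.astype(float)
X = X[y != 0, :2]  # two features, two species, as in the task above
y = y[y != 0]

# Hold out 30% of the points for evaluation
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for kernel in ('linear', 'rbf', 'poly'):
    clf = svm.SVC(kernel=kernel, gamma=10)
    clf.fit(X_tr, y_tr)
    err = np.mean(np.abs(y_te - clf.predict(X_te))) * 100
    print(kernel, err, '%')
```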
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A context manager is enabled by the with statement, and the API involves two methods. The __enter__() method is run when execution flow enters the code block inside the with. It returns an object to be used within the context. When execution flow leaves the with block, the __exit__() method of the context manager is called to clean up any resources being used.
Step2: The __enter__() method can return any object to be associated with a name specified in the as clause of the with statement. In this example, the Context returns an object that uses the open context.
Step3: The value associated with the variable c is the object returned by __enter__(), which is not necessarily the Context instance created in the with statement.
Step4: If the context manager can handle the exception, __exit__() should return a true value to indicate that the exception does not need to be propagated. Returning false causes the exception to be re-raised after __exit__() returns.
Step5: One difference when using the context manager as a decorator is that the value returned by __enter__() is not available inside the function being decorated, unlike when using with and as. Arguments passed to the decorated function are available in the usual way.
Step6: The generator should initialize the context, yield exactly one time, then clean up the context. The value yielded, if any, is bound to the variable in the as clause of the with statement. Exceptions from within the with block are re-raised inside the generator, so they can be handled there.
Step7: Closing Open Handles
Step8: Ignoring Exceptions
Step9: The try
Step10: Redirecting Output Streams
Step11: Dynamic Context Manager Stacks
Step12: The context managers given to ExitStack are treated as though they are in a series of nested with statements. Errors that happen anywhere within the context propagate through the normal error handling of the context managers. These context manager classes illustrate the way errors propagate.
Step13: The examples using these classes are based around variable_stack(), which uses the context managers passed to construct an ExitStack, building up the overall context one by one. The examples below pass different context managers to explore the error handling behavior. First, the normal case of no exceptions.
|
<ASSISTANT_TASK:>
Python Code:
with open('tmp/pymotw.txt', 'wt') as f:
f.write('contents go here')
class Context:
def __init__(self):
print('__init__()')
def __enter__(self):
print('__enter__()')
return self
def __exit__(self, exc_type, exc_val, exc_tb):
print('__exit__()')
with Context():
print('Doing work in the context')
class WithinContext:
def __init__(self, context):
print('WithinContext.__init__({})'.format(context))
def do_something(self):
print('WithinContext.do_something()')
def __del__(self):
print('WithinContext.__del__')
class Context:
def __init__(self):
print('Context.__init__()')
def __enter__(self):
print('Context.__enter__()')
return WithinContext(self)
def __exit__(self, exc_type, exc_val, exc_tb):
print('Context.__exit__()')
with Context() as c:
c.do_something()
class Context:
def __init__(self, handle_error):
print('__init__({})'.format(handle_error))
self.handle_error = handle_error
def __enter__(self):
print('__enter__()')
return self
def __exit__(self, exc_type, exc_val, exc_tb):
print('__exit__()')
print(' exc_type =', exc_type)
print(' exc_val =', exc_val)
print(' exc_tb =', exc_tb)
return self.handle_error
with Context(True):
raise RuntimeError('error message handled')
print()
with Context(False):
raise RuntimeError('error message propagated')
import contextlib
class Context(contextlib.ContextDecorator):
def __init__(self, how_used):
self.how_used = how_used
print('__init__({})'.format(how_used))
def __enter__(self):
print('__enter__({})'.format(self.how_used))
return self
def __exit__(self, exc_type, exc_val, exc_tb):
print('__exit__({})'.format(self.how_used))
@Context('as decorator')
def func(message):
print(message)
print()
with Context('as context manager'):
print('Doing work in the context')
print()
func('Doing work in the wrapped function')
import contextlib
@contextlib.contextmanager
def make_context():
print(' entering')
try:
yield {}
except RuntimeError as err:
print(' ERROR:', err)
finally:
print(' exiting')
print('Normal:')
with make_context() as value:
print(' inside with statement:', value)
print('\nHandled error:')
with make_context() as value:
raise RuntimeError('showing example of handling an error')
print('\nUnhandled error:')
with make_context() as value:
raise ValueError('this exception is not handled')
import contextlib
@contextlib.contextmanager
def make_context():
print(' entering')
try:
# Yield control, but not a value, because any value
# yielded is not available when the context manager
# is used as a decorator.
yield
except RuntimeError as err:
print(' ERROR:', err)
finally:
print(' exiting')
@make_context()
def normal():
print(' inside with statement')
@make_context()
def throw_error(err):
raise err
print('Normal:')
normal()
print('\nHandled error:')
throw_error(RuntimeError('showing example of handling an error'))
print('\nUnhandled error:')
throw_error(ValueError('this exception is not handled'))
import contextlib
class Door:
def __init__(self):
print(' __init__()')
self.status = 'open'
def close(self):
print(' close()')
self.status = 'closed'
print('Normal Example:')
with contextlib.closing(Door()) as door:
print(' inside with statement: {}'.format(door.status))
print(' outside with statement: {}'.format(door.status))
print('\nError handling example:')
try:
with contextlib.closing(Door()) as door:
print(' raising from inside with statement')
raise RuntimeError('error message')
except Exception as err:
print(' Had an error:', err)
import contextlib
class NonFatalError(Exception):
pass
def non_idempotent_operation():
raise NonFatalError(
'The operation failed because of existing state'
)
try:
print('trying non-idempotent operation')
non_idempotent_operation()
print('succeeded!')
except NonFatalError:
pass
print('done')
import contextlib
class NonFatalError(Exception):
pass
def non_idempotent_operation():
raise NonFatalError(
'The operation failed because of existing state'
)
with contextlib.suppress(NonFatalError):
print('trying non-idempotent operation')
non_idempotent_operation()
print('succeeded!')
print('done')
from contextlib import redirect_stdout, redirect_stderr
import io
import sys
def misbehaving_function(a):
sys.stdout.write('(stdout) A: {!r}\n'.format(a))
sys.stderr.write('(stderr) A: {!r}\n'.format(a))
capture = io.StringIO()
with redirect_stdout(capture), redirect_stderr(capture):
misbehaving_function(5)
print(capture.getvalue())
import contextlib
@contextlib.contextmanager
def make_context(i):
print('{} entering'.format(i))
yield {}
print('{} exiting'.format(i))
def variable_stack(n, msg):
with contextlib.ExitStack() as stack:
for i in range(n):
stack.enter_context(make_context(i))
print(msg)
variable_stack(2, 'inside context')
import contextlib
class Tracker:
"Base class for noisy context managers."
def __init__(self, i):
self.i = i
def msg(self, s):
print(' {}({}): {}'.format(
self.__class__.__name__, self.i, s))
def __enter__(self):
self.msg('entering')
class HandleError(Tracker):
"If an exception is received, treat it as handled."
def __exit__(self, *exc_details):
received_exc = exc_details[1] is not None
if received_exc:
self.msg('handling exception {!r}'.format(
exc_details[1]))
self.msg('exiting {}'.format(received_exc))
# Return Boolean value indicating whether the exception
# was handled.
return received_exc
class PassError(Tracker):
"If an exception is received, propagate it."
def __exit__(self, *exc_details):
received_exc = exc_details[1] is not None
if received_exc:
self.msg('passing exception {!r}'.format(
exc_details[1]))
self.msg('exiting')
# Return False, indicating any exception was not handled.
return False
class ErrorOnExit(Tracker):
"Cause an exception."
def __exit__(self, *exc_details):
self.msg('throwing error')
raise RuntimeError('from {}'.format(self.i))
class ErrorOnEnter(Tracker):
"Cause an exception."
def __enter__(self):
self.msg('throwing error on enter')
raise RuntimeError('from {}'.format(self.i))
def __exit__(self, *exc_info):
self.msg('exiting')
# variable_stack() above takes a count and builds contexts itself; the
# error-handling examples pass ready-made context managers, so redefine it.
def variable_stack(contexts, msg):
    with contextlib.ExitStack() as stack:
        for c in contexts:
            stack.enter_context(c)
        print(msg)

print('No errors:')
variable_stack([
    HandleError(1),
    PassError(2),
], 'test error')
<END_TASK>
|
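Besides `enter_context()`, `ExitStack` also accepts plain clean-up functions via `callback()`; everything registered on the stack is unwound in LIFO order when the `with` block exits. A small sketch:

```python
import contextlib

log = []

def cleanup(name):
    log.append('cleaned ' + name)

with contextlib.ExitStack() as stack:
    # Callbacks registered first are run last (LIFO order)
    stack.callback(cleanup, 'a')
    stack.callback(cleanup, 'b')
    log.append('inside block')

print(log)  # ['inside block', 'cleaned b', 'cleaned a']
```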
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Per university ratio
Step2: As we can see, except for the first few universities, which clearly have a higher ratio than the others, the ratio tends to decrease in a linear fashion
Step3: Same observations as for the previous bar charts. It is good to note that the universities and their order are not the same as the previous ones.
Step4: Russia is the exception with a high number in comparison of the others. Again the rest decreases in a linear fashion.
Step5: This bar chart is a bit different from the previous ones. Indeed, the first ratios decrease much faster than the last ones, but we could still use a combination of two linear functions to approximate the decrease.
Step6: Ratio between local and international students
Step7: Oceania is clearly the leader here.
Step8: URL containing the requested data in json
Step9: Per university ratio
Step10: One university is clearly above the others, then it decreases in a linear fashion.
Step11: We can see that LSE school has 70% of international students which is really impressive.
Step12: In Denmark there are around 6 students per staff member, which is clearly above the average.
Step13: English-speaking countries are generally more international (in terms of international student ratio), as are small European countries like Luxembourg (impressively so), Austria and Switzerland.
Step14: It's quite surprising to see Africa first; we wouldn't have expected that. Europe, on the other hand, sits near the back.
Step15: Quick observations on both rankings
Step16: And then we use regex on every university name.
Step17: Once we know that the data is ready to be merged, we apply an inner merge. We chose to use an inner merge instead of an outer merge because it makes more sense to only keep universities present in both rankings.
Step18: Once the merge is done, we select and clean the data we are interested in. Moreover, since the number of student is different depending on the ranking, we decided to keep the average.
Step19: Plot of the correlation
Step20: Above, a few things are interesting to notice
Step21: Alternative method
Step22: We then plotted the different Universities according to their respective ranking and showed the first two main component of the data (green and red).
Step23: Once we have the principle component, we can use it to transform our data and change its basis from (rank times, rank topuni) to (main component, second component)
Step24: To get a final ranking system, we simply project the points on the main component axis and rate them depending on how far left they are. The more on the left, the better the rank.
Step25: We now add this new rank in the DataFrame and show the 20 first universities in our new ranking system.
Step26: Best university according to both rankings
|
<ASSISTANT_TASK:>
Python Code:
r = requests.get("https://www.topuniversities.com/sites/default/files/qs-rankings-data/357051.txt")
json_dict = r.json()
json_dict = json_dict['data']
json_dict = json_dict[:200] # keep only the top 200 universities
for e in json_dict:
#Delete useless informations
del e['nid']
del e['logo']
del e['core_id']
del e['score']
del e['cc']
del e['stars']
del e['guide']
#Get the university specific url
r = requests.get("https://www.topuniversities.com/" + e['url'])
soup = BeautifulSoup(r.text, 'html.parser')
#Get faculty members numbers
scrap_list = soup.find_all('div', class_='faculty-main wrapper col-md-4')
if(len(scrap_list) == 0): #If no informations about the faculty members is given
e['# Faculty members'] = 0
e['# Int. faculty members'] = 0
else:
n_list = scrap_list[0].find_all(class_="number")
if(len(n_list) == 2):
e['# Faculty members'] = int(re.sub(',', '', n_list[0].text.strip()))
e['# Int. faculty members'] = int(re.sub(',', '', n_list[1].text.strip()))
else: #If they don't have international faculty member
e['# Faculty members'] = int(re.sub(',', '', n_list[0].text.strip()))
e['# Int. faculty members'] = 0
#Get students numbers
n_list = soup.find_all('div', class_='students-main wrapper col-md-4')
if(len(n_list) != 0):
e['# Students'] = int(re.sub(',', '', n_list[0].find_all(class_="number")[0].text.strip()))
else:
e['# Students'] = 0
n_list = soup.find_all('div', class_='int-students-main wrapper col-md-4')
if(len(n_list) != 0):
e['# Int. students'] = int(re.sub(',', '', n_list[0].find_all(class_="number")[0].text.strip()))
else:
e['# Int. students'] = 0
#We don't need the url anymore, we can delete it
del e['url']
df1 = pd.DataFrame(json_dict)
df1=df1.rename(columns = {'rank_display':'Rank', 'title' : 'University', 'country' : 'Country', 'region' : 'Region'})
df1["Rank"] = df1["Rank"].apply(lambda x: int(re.sub("[^0-9]", "", x)))
df1.index = df1['University']
del df1['University']
def uni_ratio_fac_stu(df):
#Compute ratio
df['Ratio Faculty members'] = df['# Faculty members']/df['# Students']
#Sort the rows according to ratio and locate the top 30 ones
df_sorted_ratios = df.sort_values(by=['Ratio Faculty members'], ascending=False).head(30)
#Plot the bar chart and print the best one
fig, ax = plt.subplots(figsize=(20,10))
plot = sns.barplot(y=df_sorted_ratios['Ratio Faculty members'], x=df_sorted_ratios.index, ax=ax)
plot.set_xticklabels(plot.get_xticklabels(), rotation=90)
print("Best university according to ratio between faculty members and students is : " + df_sorted_ratios.iloc[0].name)
    #Delete the sorted copy (the ratio column stays on df for later use)
del df_sorted_ratios
uni_ratio_fac_stu(df1)
def uni_ratio_int_stu(df):
#Compute ratio
df['Ratio Int. students'] = df['# Int. students']/df['# Students']
#Sort the rows according to ratio and locate the top 30 ones
df_sorted_ratios = df.sort_values(by=['Ratio Int. students'], ascending=False).head(30)
#Plot the bar chart and print the best one
fig, ax = plt.subplots(figsize=(20,10))
plot = sns.barplot(y=df_sorted_ratios['Ratio Int. students'], x=df_sorted_ratios.index, ax=ax)
plot.set_xticklabels(plot.get_xticklabels(), rotation=90)
print("Best university according to ratio between local and international students is : " + df_sorted_ratios.iloc[0].name)
    #Delete the sorted copy (the ratio column stays on df for later use)
del df_sorted_ratios
uni_ratio_int_stu(df1)
def country_ratio_fac_stu(df):
    #Compute the ratios (df is already grouped by country by the caller)
df['Ratio'] = df['# Faculty members']/df['# Students']
#Sort the rows according to ratios
sorted_df = df.sort_values(by=['Ratio'], ascending=False)
#Plot the bar chart and print the best one
fig, ax = plt.subplots(figsize=(20,10))
plot = sns.barplot(y=sorted_df.Ratio, x=sorted_df.index, ax=ax)
plot.set_xticklabels(plot.get_xticklabels(), rotation=50)
print("Best country according to ratio between faculty members and students is : " + sorted_df.iloc[0].name)
    #Delete data we don't need anymore
del df['Ratio']
del sorted_df
df1_country = df1.groupby(['Country']).agg(np.mean)
country_ratio_fac_stu(df1_country)
def country_ratio_int_stu(df):
#Compute the ratios
df['Ratio'] = df['# Int. students']/df['# Students']
#Sort the rows according to ratios
sorted_df = df.sort_values(by=['Ratio'], ascending=False)
#Plot the bar chart and print the best one
fig, ax = plt.subplots(figsize=(20,10))
plot = sns.barplot(y=sorted_df.Ratio, x=sorted_df.index, ax=ax)
plot.set_xticklabels(plot.get_xticklabels(), rotation=50)
print("Best country according to ratio between local and international students is : " + sorted_df.iloc[0].name)
    #Delete data we don't need anymore
del df
del sorted_df
country_ratio_int_stu(df1_country)
def region_ratio_fac_stu(df):
#Group by region and compute the ratios
df['Ratio'] = df['# Faculty members']/df['# Students']
#Sort the rows according to ratios
sorted_df = df.sort_values(by=['Ratio'], ascending=False)
#Plot the bar chart and print the best one
fig, ax = plt.subplots(figsize=(20,10))
plot = sns.barplot(y=sorted_df.Ratio, x=sorted_df.index, ax=ax)
plot.set_xticklabels(plot.get_xticklabels(), rotation=50)
print("Best region according to ratio between faculty members and students is : " + sorted_df.iloc[0].name)
    #Delete data we don't need anymore
del df['Ratio']
df1_region = df1.groupby(['Region']).agg(np.mean)
region_ratio_fac_stu(df1_region)
def region_ratio_int_stu(df):
#Compute the ratios
df['Ratio'] = df['# Int. students']/df['# Students']
#Sort the rows according to ratios
sorted_df = df.sort_values(by=['Ratio'], ascending=False)
#Plot the bar chart and print the best one
fig, ax = plt.subplots(figsize=(20,10))
plot = sns.barplot(y=sorted_df.Ratio, x=sorted_df.index, ax=ax)
plot.set_xticklabels(plot.get_xticklabels(), rotation=50)
print("Best region according to ratio between local and international students is : " + sorted_df.iloc[0].name)
    #Delete data we don't need anymore
del df
del sorted_df
region_ratio_int_stu(df1_region)
countries = dict(df1[['Country', 'Region']].drop_duplicates().values.tolist())
countries['Luxembourg'] = 'Europe'
timeshighereducation_url = "https://www.timeshighereducation.com/sites/default/files/the_data_rankings/world_university_rankings_2018_limit0_369a9045a203e176392b9fb8f8c1cb2a.json"
ranking_brute = requests.get(timeshighereducation_url).json()['data']
infos_to_keep = ["rank", "name", "location", "stats_number_students", "stats_pc_intl_students", "stats_student_staff_ratio"]
column_names = ["Rank", "University", "Country", "# Students", "% Int. students", "% Faculty members"]
# creating dataframe
df2 = pd.DataFrame(ranking_brute[:200], index=range(1, 201), columns=infos_to_keep)
# more expressive column names
df2.columns = column_names
df2["Rank"] = df2["Rank"].apply(lambda x: int(re.sub("[^0-9]", "", x)))
# string to numerical values
df2["# Students"] = pd.to_numeric(df2["# Students"].map(lambda x: x.replace(",", "")))
df2["% Int. students"] = pd.to_numeric(df2["% Int. students"].map(lambda x: x.replace("%", ".")))
df2["% Faculty members"] = pd.to_numeric(df2["% Faculty members"])
# adding number of international students
df2["# Int. students"] = (df2["# Students"] * df2["% Int. students"] / 100).map(round)
# adding number of faculty members
df2["# Faculty members"] = (df2["# Students"] / df2["% Faculty members"]).map(round)
# Changing Russian federation to Russia
df2['Country'].replace('Russian Federation', 'Russia', inplace=True)
# adding regions
df2["Region"] = df2["Country"].map(lambda c: countries[c])
# Using meaningful index
df2.index = df2['University']
del df2['University']
# Deleting % Int. students because it is not needed anymore
del df2['% Int. students']
del df2["% Faculty members"]
uni_ratio_fac_stu(df2)
uni_ratio_int_stu(df2)
df2_country = df2.groupby(['Country']).agg(np.mean)
country_ratio_fac_stu(df2_country)
country_ratio_int_stu(df2_country)
df2_region = df2.groupby(['Region']).agg(np.mean)
region_ratio_fac_stu(df2_region)
region_ratio_int_stu(df2_region)
df2.rename({'ETH Zurich – Swiss Federal Institute of Technology Zurich' : 'ETH Zurich – Swiss Federal Institute of Technology',
'Wageningen University & Research' : 'Wageningen University'}, inplace=True)
df1.rename({'UCL (University College London)': 'University College London',
'KAIST - Korea Advanced Institute of Science & Technology' : 'Korea Advanced Institute of Science and Technology (KAIST)',
'Ludwig-Maximilians-Universität München' : 'LMU Munich',
'Ruprecht-Karls-Universität Heidelberg' : 'Heidelberg University',
'University of North Carolina, Chapel Hill' : 'University of North Carolina at Chapel Hill',
'Trinity College Dublin, The University of Dublin' : 'Trinity College Dublin',
'KIT, Karlsruhe Institute of Technology' : 'Karlsruhe Institute of Technology',
'Humboldt-Universität zu Berlin' : 'Humboldt University of Berlin',
'Freie Universitaet Berlin': 'Free University of Berlin',
'Université de Montréal' : 'University of Montreal',
'Université Pierre et Marie Curie (UPMC)' : 'Pierre and Marie Curie University',
'Technische Universität Berlin (TU Berlin)' : 'Technical University of Berlin',
'Universitat Autònoma de Barcelona' : 'Autonomous University of Barcelona',
'Eberhard Karls Universität Tübingen' : 'University of Tübingen',
'Albert-Ludwigs-Universitaet Freiburg' : 'University of Freiburg',
'Scuola Superiore Sant\'Anna Pisa di Studi Universitari e di Perfezionamento': 'Scuola Superiore Sant’Anna'}, inplace=True)
regex1 = r'\([^()]*\)'
regex2 = r"(the)|(university)|(of)|(de)|(-)|( )|(’)|(')|\."
p = re.compile('('+regex1+'|'+regex2+')')
for df in [df1, df2]:
df['University_regex'] = df.index.values
# We first take care of the case
df['University_regex'] = df['University_regex'].apply(lambda x: x.casefold())
# Then we remove useless words, spaces, parentheses, their content and so on
df['University_regex'] = df['University_regex'].apply(lambda x: str(re.sub(p, '', x)))
# We then normalize the text to remove accents
df['University_regex'] = df['University_regex'].apply(lambda x: unicodedata.normalize("NFKD", x))
# And swith the encoding to utf-8
df['University_regex'] = df['University_regex'].apply(lambda x: x.rstrip().encode('ascii', errors='ignore')
.decode('utf-8'))
df_merged = df2.reset_index().merge(df1, how='inner', on='University_regex').set_index('University')
#Rename columns and put them in a new dataframe
df_all = df_merged.copy()
df_all['# Students (topuni)'] = df_merged['# Students_x']
df_all['# Students (times)'] = df_merged['# Students_y']
df_all['# Int. students (topuni)'] = df_merged['# Int. students_x']
df_all['# Int. students (times)'] = df_merged['# Int. students_y']
df_all['# Faculty members (topuni)'] = df_merged['# Faculty members_x']
df_all['# Faculty members (times)'] = df_merged['# Faculty members_y']
df_all['Rank topuni'] = df_merged['Rank_x']
df_all['Rank times'] = df_merged['Rank_y']
df_all['Country'] = df_merged['Country_x']
df_all['Region'] = df_merged['Region_x']
df_all['Ratio Int. students (topuni)'] = df_merged['Ratio Int. students_x']
df_all['Ratio Int. students (times)'] = df_merged['Ratio Int. students_y']
df_all['Ratio Faculty members (topuni)'] = df_merged['Ratio Faculty members_x']
df_all['Ratio Faculty members (times)'] = df_merged['Ratio Faculty members_y']
#Columns to keep
df_all = df_all[['# Students (topuni)', '# Students (times)',
'# Int. students (topuni)', '# Int. students (times)', '# Faculty members (topuni)',
'# Faculty members (times)', 'Rank topuni', 'Rank times', 'Country', 'Region',
'Ratio Int. students (topuni)', 'Ratio Int. students (times)', 'Ratio Faculty members (topuni)', 'Ratio Faculty members (times)']]
del df_merged
fig, ax = plt.subplots(figsize=(10,10))
sns.heatmap(df_all.corr(), ax=ax, square=True)
df_all['Rank mean'] = (df_all['Rank times'] + df_all['Rank topuni'])/2.0
best = df_all.sort_values(by=['Rank mean'], ascending=True).iloc[0]
print("The best university according to both rankings is " + best.name + " with a mean rank score of {}".format(best["Rank mean"]))
del df_all['Rank mean']
pca = PCA(n_components=2)
# Creates pairs of ranks for each university
ranks = np.array([df_all[['Rank times']].values.flatten(), df_all[['Rank topuni']].values.flatten()]).T
# Tells the PCA what the data is and then get the data in its new basis
ranks_new_basis = pca.fit(ranks).transform(ranks)
# Get the ratio of importance for each component
pca_score = pca.explained_variance_ratio_
# Get the components
V = pca.components_
# Multiplies the components by their ratio of importance compared to the least important component
ranks_pca_axis_x, ranks_pca_axis_y = V.T * pca_score / pca_score.min()
starting_point = (100, 100)
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111)
plt.title('Universities ranking')
plt.plot(df_all[['Rank times']].values.flatten(), df_all[['Rank topuni']].values.flatten(), '.')
plt.plot([starting_point[0], starting_point[0] + ranks_pca_axis_x[0]*20], [starting_point[1], starting_point[1] + ranks_pca_axis_y[0]*20], label='Main component')
plt.plot([starting_point[0], starting_point[0] + ranks_pca_axis_x[1]*20], [starting_point[1], starting_point[1] + ranks_pca_axis_y[1]*20], label='Second component')
plt.xlabel('Ranking according to Times')
plt.ylabel('Ranking according to Topuni')
plt.legend()
plt.show()
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111)
plt.scatter(ranks_new_basis.T[0], ranks_new_basis.T[1])
# Creates main component arrow
plt.plot([-125, 150], [0, 0], c='g', label='Main component')
plt.plot([140, 150], [4, 0], c='g')
plt.plot([140, 150], [-4, 0], c='g')
plt.xlabel('Main component')
plt.ylabel('Second component')
plt.title('Universities ranking in new basis')
plt.legend()
plt.show()
# Keep only main component value of data
ranks_pca1 = ranks_new_basis.T[0]
# Get indices of sorted data
temp = ranks_pca1.argsort()
# Creates new array and then aranges values depending on indices of sorted data
new_ranks = np.empty(len(ranks_pca1), int)
new_ranks[temp] = np.arange(len(ranks_pca1))
df_all['Rank total'] = new_ranks + 1
print("Best university according to PCA is : " + df_all.sort_values(by='Rank total').iloc[0].name)
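The `argsort`-then-invert step used to build `new_ranks` above is a standard trick for turning scores into ranks; here is a minimal standalone sketch of it on made-up scores (not data from the rankings):

```python
import numpy as np

scores = np.array([0.3, -1.2, 2.5, 0.0])   # made-up example scores
order = scores.argsort()                   # indices that would sort `scores`
ranks = np.empty(len(scores), int)
ranks[order] = np.arange(len(scores))      # invert the sort permutation
ranks += 1                                 # 1-based: smallest score gets rank 1
print(ranks)                               # -> [3 1 4 2]
```

Assigning `np.arange(...)` into positions given by `order` inverts the permutation, so each element receives its position in the sorted order.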
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Red dots are interpolated values.
Step2: We want to obtain a stationary "feature" from the data; first differences are an easy place to start.
Step3: It appears that early temperature sensors had rather imprecise readings.
Step4: It looks like there is still some nonstationarity in the first differences.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# `D` (a DataFrame with a datetime index and columns 'T', 'T_flag') and
# `t = D.index` are assumed to have been loaded in an earlier step.
print('Original bounds: ', t[0], t[-1])
t_obs = t[D['T_flag'] != -1]
D = D[t_obs[0]:t_obs[-1]] # Truncate dataframe so it is sandwiched between observed values
t = D.index
T = D['T']
print('New bounds: ', t[0], t[-1])
t_obs = D.index[D['T_flag'] != -1]
t_interp = D.index[D['T_flag'] == -1]
T_obs = D.loc[t_obs, 'T']
T_interp = D.loc[t_interp, 'T']
c = ['b' if flag != -1 else 'orange' for flag in D['T_flag']]
plt.scatter(t, T, c = c, alpha = 0.5, s = 0.5)
plt.title('T')
#obs = plt.scatter(t_obs, T_obs, marker = '.', alpha = 0.5, s = 0.5, color = 'blue');
#interp = plt.scatter(t_interp, T_interp, marker = '.', alpha = 0.5, s = 0.5, color = 'red');
# If I plot one after the other, the red is much more prominant... Very annoying
#plt.legend((obs, interp), ('Observed', 'Interpolated'), markerscale = 15);
# Centre the data
mu = D['T'].mean()
D.loc[:, 'T'] = D.loc[:, 'T'] - mu
T = D['T']
print('E[T] = ', mu)
T0 = T.iloc[0]  # positional access; T[0] would attempt a label lookup on the datetime index
dT = T.diff()
dT = dT - dT.mean() # Center the differences
dT_obs = dT[t_obs]
dT_interp = dT[t_interp]
plt.scatter(t, dT, marker = '.', alpha = 0.5, s = 0.5, c = c)
#obs = plt.scatter(t_obs, dT_obs, marker = '.', alpha = 0.5, s = 0.5, color = 'blue');
#interp = plt.scatter(t_interp, dT_interp, marker = '.', alpha = 0.5, s = 0.5, color = 'red');
#plt.legend((obs, interp), ('Observed', 'Interpolated'), markerscale = 15);
plt.title('dT')
rolling1w_dT = dT.rolling(window = 7*24) # 1 week rolling window of dT
rolling1m_dT = dT.rolling(window = 30*24) # 1 month rolling window of dT
rolling1y_dT = dT.rolling(window = 365*24) # 1 year rolling window of dT
fig, axes = plt.subplots(3, 1)
axes[0].plot(rolling1w_dT.var())
axes[1].plot(rolling1m_dT.var())
axes[2].plot(rolling1y_dT.var())
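As a self-contained illustration of what `rolling(...).var()` computes (on synthetic integers, not the sensor data): the first `window - 1` positions are NaN because the window is incomplete, and the sample variance of any three consecutive integers is 1.0.

```python
import numpy as np
import pandas as pd

x = pd.Series(np.arange(10, dtype=float))
rv = x.rolling(window=3).var()
# rv[0], rv[1] are NaN (incomplete window); rv[2] = var([0, 1, 2]) = 1.0
```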
from itertools import product
t_days = [t[np.logical_and(t.month == m, t.day == d)] for m, d in product(range(1,13), range(1, 32))]
day_vars = pd.Series(dT[ti].var() for ti in t_days)
day_vars = day_vars.dropna()
plt.scatter(day_vars.index, day_vars)
r = day_vars.rolling(window = 20, center = True)
plt.plot(day_vars.index, r.mean(), color = 'red', linewidth = 2)
plt.title('Variance of dT, folded by days')
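The fold-by-calendar-day idea above can also be written with `groupby`, which avoids enumerating every `(month, day)` pair by hand. A sketch on synthetic data (two non-leap years of a noisy seasonal signal, not the real sensor series):

```python
import numpy as np
import pandas as pd

idx = pd.date_range('2001-01-01', periods=2 * 365, freq='D')  # two non-leap years
rng = np.random.default_rng(0)
s = pd.Series(np.sin(2 * np.pi * idx.dayofyear / 365.0)
              + rng.normal(scale=0.1, size=len(idx)), index=idx)
# One variance per (month, day) pair, pooled across both years
folded = s.groupby([idx.month, idx.day]).var()
```

Unlike the explicit `product(range(1, 13), range(1, 32))` loop, which generates invalid dates such as February 30 that must later be dropped with `dropna()`, `groupby` only ever produces groups for dates that actually occur in the index.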
<END_TASK>
|