| text_prompt (string, length 168–30.3k) | code_prompt (string, length 67–124k) |
|---|---|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Determine the basic properties of this instrument.
Step2: Determine the basic properties of this night, including a nightly observing log.
Step3: Determine the basic properties of this observation, including the data associated with it.
Step4: -
|
<ASSISTANT_TASK:>
Python Code:
from mosasaurus.Target import Target
# load a target from SIMBAD
t = Target(starname='GJ1132', name='GJ1132b')
t.summarize()
t.star.summarize()
# create a target from values (in case you want to work offline)
import astropy.units as u
t = Target(starname='GJ1132', name='GJ1132b',
ra=153.71622708*u.deg, dec=-47.15684639*u.deg,
pmra=-1046., pmdec=416., epoch=2000.0)
t.summarize()
t.star.summarize()
from mosasaurus.instruments import LDSS3C
i = LDSS3C(grism='vph-red')
i.summarize()
i.keysforlogheader
from mosasaurus.Night import Night
n = Night('ut160227_28', instrument=i)
n.createNightlyLog(remake=False)
n.summarize()
from mosasaurus.Observation import Observation
o = Observation(t, i, n)
o.summarize()
o.loadHeaders()
col = n.log['object']
col  # inspect the 'object' column of the nightly log
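# A hedged aside (not in the original notebook): if the nightly log behaves like
# an astropy Table, the exposures recorded for each object name could be tallied
# with numpy, for example:
import numpy as np
names, counts = np.unique(np.asarray(col), return_counts=True)
for name, count in zip(names, counts):
    print(name, count)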
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we add the type_I_migration effect, and set the appropriate disk parameters. Note that we chose code units of AU for all the distances above. We require
Step2: We can also add an inner disk edge (ide) to halt migration. This is an artificial prescription for halting the planet at ide_position (in code units, here AU).
Step3: We set the timestep to 5% of the orbital period at the inner disk edge to make sure we always resolve the orbit
Step4: We now integrate the system
Step5: and compare to the analytical predictions
Step6: The analytical solution is obtained by solving the ODE for a circular orbit. With the chosen surface profile and flaring index we have
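(The formula itself is not included in this excerpt; reconstructed from the integration code below, for a circular orbit the semi-major axis should decay roughly linearly until it reaches the disk edge, $a(t) \approx a_0 \max\left(1 - t/\tilde{\tau},\, a_\mathrm{ide}/a_0\right)$, with $\tilde{\tau} = h_0^2 / \left[(2.7 + 1.1\,\alpha)\, m_\mathrm{p}\, \Sigma_0 \sqrt{G}\right]$ in code units.)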
|
<ASSISTANT_TASK:>
Python Code:
import rebound
import reboundx
import astropy.units as u
import astropy.constants as constants
import numpy as np
import matplotlib.pyplot as plt
sim = rebound.Simulation()
sim.units = ('yr', 'AU', 'Msun')
sim.add(m = 1)
a0=1
sim.add(m = 1.e-4, a=a0, e=0, inc = 0)
sim.move_to_com()
ps = sim.particles
rebx = reboundx.Extras(sim)
mig = rebx.load_force("type_I_migration")
rebx.add_force(mig)
mig.params["tIm_scale_height_1"] = 0.03
mig.params["tIm_surface_density_1"] = ((1000* u.g /u.cm**2).to(u.Msun/u.AU**2)).value #transformed from g/cm^2 to code units
mig.params["tIm_surface_density_exponent"] = 1
mig.params["tIm_flaring_index"] = 0.25
mig.params["ide_position"] = 0.1
mig.params["ide_width"] = mig.params["tIm_scale_height_1"]*mig.params["ide_position"]**mig.params["tIm_flaring_index"]
print('Planet will stop within {0:.3f} AU of the inner disk edge at {1} AU'.format(mig.params["ide_width"], mig.params["ide_position"]))
sim.integrator = 'whfast'
sim.dt = mig.params["ide_position"]**(3/2)/20  # in yr/AU/Msun units G*M = 4*pi**2, so P = a**1.5 yr and dt = P/20, i.e. 5% of the orbital period at the disk edge
times = np.linspace(0, 4e3, 1000)
a_integration = np.zeros((1000))
for i, t in enumerate(times):
sim.integrate(t)
a_integration[i] = ps[1].a
h0 = mig.params["tIm_scale_height_1"]
sd0 = mig.params["tIm_surface_density_1"]
alpha = mig.params["tIm_surface_density_exponent"]
# Combining Eqs 3.6 and 3.3 of Pichierri et al. 2018
tau_tilde = h0**2 / ((2.7+1.1*alpha)*ps[1].m*sd0*(np.sqrt(sim.G)))
a_analytical = a0*np.maximum(1 - (times/tau_tilde), mig.params["ide_position"])
plt.plot(times*0.001, a_integration, label = 'Numerical evolution', c = 'green', linewidth = 4, alpha = 0.6)
plt.plot(times*0.001, a_analytical, label = 'Analytical prediction', c = 'brown', linestyle = "dashed", linewidth = 1)
plt.xlim(np.min(times)*0.001, np.max(times)*0.001)
plt.xlabel('time [kyr]')
plt.ylabel('Semi-major axis [AU]')
plt.legend()
plt.ylim(0,1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Visualize the "subject" head movements. By providing the measurement
Step2: This can also be visualized using a quiver.
Step3: Process our simulated raw data (taking into account head movements).
Step4: First, take the average of stationary data (bilateral auditory patterns).
Step5: Second, take a naive average, which averages across epochs that have been
Step6: Third, use raw movement compensation (restores pattern).
Step7: Fourth, use evoked movement compensation. For these data, which contain
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD-3-Clause
from os import path as op
import mne
from mne.preprocessing import maxwell_filter
print(__doc__)
data_path = op.join(mne.datasets.misc.data_path(verbose=True), 'movement')
head_pos = mne.chpi.read_head_pos(op.join(data_path, 'simulated_quats.pos'))
raw = mne.io.read_raw_fif(op.join(data_path, 'simulated_movement_raw.fif'))
raw_stat = mne.io.read_raw_fif(op.join(data_path,
'simulated_stationary_raw.fif'))
mne.viz.plot_head_positions(
head_pos, mode='traces', destination=raw.info['dev_head_t'], info=raw.info)
mne.viz.plot_head_positions(
head_pos, mode='field', destination=raw.info['dev_head_t'], info=raw.info)
# extract our resulting events
events = mne.find_events(raw, stim_channel='STI 014')
events[:, 2] = 1
raw.plot(events=events)
topo_kwargs = dict(times=[0, 0.1, 0.2], ch_type='mag', vmin=-500, vmax=500,
time_unit='s')
evoked_stat = mne.Epochs(raw_stat, events, 1, -0.2, 0.8).average()
evoked_stat.plot_topomap(title='Stationary', **topo_kwargs)
epochs = mne.Epochs(raw, events, 1, -0.2, 0.8)
evoked = epochs.average()
evoked.plot_topomap(title='Moving: naive average', **topo_kwargs)
raw_sss = maxwell_filter(raw, head_pos=head_pos)
evoked_raw_mc = mne.Epochs(raw_sss, events, 1, -0.2, 0.8).average()
evoked_raw_mc.plot_topomap(title='Moving: movement compensated (raw)',
**topo_kwargs)
evoked_evo_mc = mne.epochs.average_movements(epochs, head_pos=head_pos)
evoked_evo_mc.plot_topomap(title='Moving: movement compensated (evoked)',
**topo_kwargs)
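# A possible follow-up (an assumption, not part of the original example): overlay
# the four averages in a single magnetometer summary plot with
# mne.viz.plot_compare_evokeds to see how well each approach recovers the pattern.
mne.viz.plot_compare_evokeds(
    dict(stationary=evoked_stat, naive=evoked, raw_mc=evoked_raw_mc,
         evoked_mc=evoked_evo_mc),
    picks='mag', show_sensors=False)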
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Test drug_gene_heatprop module
Step2: More detailed description of methods below...
Step3: What is this drug-gene graph like?
Step4: But we probably want to focus on within-cluster interactions, instead of the whole graph
Step5: Above, we plot the drug-gene interaction network for our sample cluster
Step6: First let's plot the focal cluster of interest (Adrenocortical carcinoma cluster 250250)
Step7: Now we will convert the cluster correlation matrix back to network form
Step8: Now let's look up the drugs associated with these genes to see if there are any good candidates
|
<ASSISTANT_TASK:>
Python Code:
# import some useful packages
import numpy as np
import matplotlib.pyplot as plt
import seaborn
import networkx as nx
import pandas as pd
import random
import json
# latex rendering of text in graphs
import matplotlib as mpl
mpl.rc('text', usetex = False)
mpl.rc('font', family = 'serif')
% matplotlib inline
# load the module
import sys
sys.path.append('../source')
import drug_gene_heatprop
import imp
imp.reload(drug_gene_heatprop)
path_to_DB_file = '../drugbank.0.json.new' # set path to drug bank file
path_to_cluster_file = 'sample_matrix.csv' # set path to cluster file
seed_genes = ['LETM1','RPL3','GRK4','RWDD4A'] # set seed genes (must be in cluster)
gene_drug_df = drug_gene_heatprop.drug_gene_heatprop(seed_genes,path_to_DB_file,path_to_cluster_file,
plot_flag=True)
gene_drug_df.head(25)
def load_DB_data(fname):
'''
Load and process the drug bank data
'''
with open(fname, 'r') as f:
read_data = f.read()
f.closed
si = read_data.find('\'\n{\n\t"source":')
sf = read_data.find('\ncurl')
DBdict = dict()
# fill in DBdict
while si > 0:
db_temp = json.loads(read_data[si+2:sf-2])
DBdict[db_temp['drugbank_id']]=db_temp
# update read_data
read_data = read_data[sf+10:]
si = read_data.find('\'\n{\n\t"source":')
sf = read_data.find('\ncurl')
return DBdict
DBdict = load_DB_data('/Users/brin/Documents/DrugBank/drugbank.0.json.new')
# make a network out of drug-gene interactions
DB_el = []
for d in DBdict.keys():
node_list = DBdict[d]['node_list']
for n in node_list:
DB_el.append((DBdict[d]['drugbank_id'],n['name']))
G_DB = nx.Graph()
G_DB.add_edges_from(DB_el)
gene_nodes,drug_nodes = nx.bipartite.sets(G_DB)
gene_nodes = list(gene_nodes)
drug_nodes = list(drug_nodes)
print('--> there are '+str(len(gene_nodes)) + ' genes with ' + str(len(drug_nodes)) + ' corresponding drugs')
DB_degree = pd.Series(G_DB.degree())
DB_degree.sort(ascending=False)
plt.figure(figsize=(18,5))
plt.bar(np.arange(70),DB_degree.head(70),width=.5)
tmp = plt.xticks(np.arange(70)+.4,list(DB_degree.head(70).index),rotation=90,fontsize=11)
plt.xlim(1,71)
plt.ylim(0,200)
plt.grid('off')
plt.ylabel('number of connections (degree)',fontsize=16)
# load a sample cluster for network visualization
sample_genes = pd.read_csv('/Users/brin/Documents/DrugBank/sample_cluster.csv',header=None)
sample_genes = list(sample_genes[0])
# also include neighbor genes
neighbor_genes = [nx.neighbors(G_DB,x) for x in sample_genes if x in G_DB.nodes()]
neighbor_genes = [val for sublist in neighbor_genes for val in sublist]
sub_genes = []
sub_genes.extend(sample_genes)
sub_genes.extend(neighbor_genes)
G_DB_sample = nx.subgraph(G_DB,sub_genes)
drug_nodes = list(np.intersect1d(neighbor_genes,G_DB.nodes()))
gene_nodes = list(np.intersect1d(sample_genes,G_DB.nodes()))
# return label positions offset by dx
def calc_pos_labels(pos,dx=.03):
# input node positions from nx.spring_layout()
pos_labels = dict()
for key in pos.keys():
pos_labels[key] = np.array([pos[key][0]+dx,pos[key][1]+dx])
return pos_labels
pos = nx.spring_layout(G_DB_sample,k=.27)
pos_labels = calc_pos_labels(pos)
plt.figure(figsize=(14,14))
nx.draw_networkx_nodes(G_DB_sample,pos=pos,nodelist = drug_nodes,node_shape='s',node_size=80,alpha=.7,label='drugs')
nx.draw_networkx_nodes(G_DB_sample,pos=pos,nodelist = gene_nodes,node_shape='o',node_size=80,node_color='blue',alpha=.7,label='genes')
nx.draw_networkx_edges(G_DB_sample,pos=pos,alpha=.5)
nx.draw_networkx_labels(G_DB_sample,pos=pos_labels,font_size=10)
plt.grid('off')
plt.legend(fontsize=12)
plt.title('Adrenocortical carcinoma cluster 250250',fontsize=16)
sample_mat = pd.read_csv('/Users/brin/Documents/DrugBank/sample_matrix.csv',index_col=0)
print(sample_mat.head())
idx_to_node = dict(zip(range(len(sample_mat)),list(sample_mat.index)))
sample_mat = np.array(sample_mat)
sample_mat = sample_mat[::-1,0:-1] # reverse the indices for use in graph creation
plt.figure(figsize=(7,7))
plt.matshow(sample_mat,cmap='bwr',vmin=-1,vmax=1,fignum=False)
plt.grid('off')
plt.title('Adrenocortical carcinoma cluster 250250',fontsize='16')
G_cluster = nx.Graph()
G_cluster = nx.from_numpy_matrix(np.abs(sample_mat))
G_cluster = nx.relabel_nodes(G_cluster,idx_to_node)
pos = nx.spring_layout(G_cluster,k=.4)
seed_genes = ['STIM2','USP46','FRYL','COQ2'] #['STIM2','USP46'] # input gene list here
plt.figure(figsize=(10,10))
nx.draw_networkx_nodes(G_cluster,pos=pos,node_size=20,alpha=.5,node_color='blue')
nx.draw_networkx_nodes(G_cluster,pos=pos,nodelist=seed_genes,node_size=50,alpha=.7,node_color='red',linewidths=2)
nx.draw_networkx_edges(G_cluster,pos=pos,alpha=.03)
plt.grid('off')
plt.title('Sample subnetwork: pre-heat propagation',fontsize=16)
import network_prop  # assumed to live alongside drug_gene_heatprop in ../source; it is not imported earlier in this excerpt
Wprime = network_prop.normalized_adj_matrix(G_cluster,weighted=True)
Fnew = network_prop.network_propagation(G_cluster,Wprime,seed_genes)
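# Fnew is presumably a pandas Series of propagated heat values indexed by gene:
# it is sorted, sliced with .head(), and indexed by node lists further below.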
plt.figure(figsize=(10,10))
nx.draw_networkx_edges(G_cluster,pos=pos,alpha=.03)
nx.draw_networkx_nodes(G_cluster,pos=pos,node_size=20,alpha=.8,node_color=Fnew[G_cluster.nodes()],cmap='jet',
vmin=0,vmax=.005)
nx.draw_networkx_nodes(G_cluster,pos=pos,nodelist=seed_genes,node_size=50,alpha=.7,node_color='red',linewidths=2)
plt.grid('off')
plt.title('Sample subnetwork: post-heat propagation',fontsize=16)
N = 50
Fnew.sort(ascending=False)
print('Top N hot genes: ')
Fnew.head(N)
# plot the hot subgraph in gene-gene space
G_cluster_sub = nx.subgraph(G_cluster,list(Fnew.head(N).index))
pos = nx.spring_layout(G_cluster_sub,k=.5)
plt.figure(figsize=(10,10))
nx.draw_networkx_nodes(G_cluster_sub,pos=pos,node_size=100,node_color=Fnew[G_cluster_sub.nodes()],cmap='jet',
vmin=0,vmax=.005)
nx.draw_networkx_edges(G_cluster_sub,pos=pos,alpha=.05)
pos_labels = calc_pos_labels(pos,dx=.05)
nx.draw_networkx_labels(G_cluster_sub,pos=pos_labels)
plt.grid('off')
plt.title('Sample cluster: hot subnetwork \n (genes most related to input list)', fontsize=16)
top_N_genes = list(Fnew.head(N).index)
top_N_genes = list(np.setdiff1d(top_N_genes,seed_genes)) # only keep non-seed genes
top_N_genes = Fnew[top_N_genes]
top_N_genes.sort(ascending=False)
top_N_genes = list(top_N_genes.index)
drug_candidates_list = seed_genes # build up a list of genes and drugs that may be related to input list
for g in top_N_genes:
if g in G_DB.nodes(): # check if g is in drugbank graph
drug_candidates_list.append(g)
drug_neighs_temp = list(nx.neighbors(G_DB,g))
drug_candidates_list.extend(drug_neighs_temp)
# make a subgraph of these drug/gene candidates
G_DB_sub = nx.subgraph(G_DB,drug_candidates_list)
# define drug_nodes and gene_nodes from the subgraph
drug_nodes = list(np.intersect1d(neighbor_genes,G_DB_sub.nodes()))
gene_nodes = list(np.intersect1d(sample_genes,G_DB_sub.nodes()))
plt.figure(figsize=(12,12))
pos = nx.spring_layout(G_DB_sub)
pos_labels = calc_pos_labels(pos,dx=.05)
nx.draw_networkx_nodes(G_DB_sub,pos=pos,nodelist=gene_nodes,node_size=100,alpha=.7,node_color='blue',label='genes')
nx.draw_networkx_nodes(G_DB_sub,pos=pos,nodelist=drug_nodes,node_size=100,alpha=.7,node_color='red',node_shape='s',label='drugs')
nx.draw_networkx_edges(G_DB_sub,pos=pos,alpha=.5)
nx.draw_networkx_labels(G_DB_sub,pos=pos_labels,font_color='black')
plt.grid('off')
ax_min = np.min(pos.values())-.3
ax_max = np.max(pos.values())+.3
plt.xlim(ax_min,ax_max)
plt.ylim(ax_min,ax_max)
plt.legend(fontsize=14)
#plt.axes().set_aspect('equal')
plt.title('Genes in hot subnetwork with associated drugs', fontsize=16)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Actual Code
Step2: I will be using the cell below to analyze the ClusterComponent's performance
Step3: Cluster Components Conditions
Step4: Prediction
Step5: Prediction
Step6: Predicting what good data will look like
Step7: Visualization relative to prediction
Step8: Predicting what difficult data will look like
Step9: Visualization relative to prediction
Step10: Results of Good Data Relative to Predictions
Step11: Difficult Data Prediction
Step12: Results of Difficult Data Result Relative to Prediction
Step13: Summary of Performances
Step14: Expectation for Synthetic Data
Step15: Real Data Analysis
Step16: Entire Volume
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import sys
sys.path.insert(0,'../code/functions/')
import tiffIO as tIO
import connectLib as cLib
import plosLib as pLib
import time
import scipy.ndimage as ndimage
import numpy as np
import scipy.ndimage as ndimage
class ClusterComponent:
def __init__(self, bianIm):
self.s = [[[1 for k in xrange(3)] for j in xrange(3)] for i in xrange(3)]
label_im, nr_objects = ndimage.label(bianIm, self.s)
self.numClusters = nr_objects
self.labeledIm = label_im
self.volumes = self.getVolumes()
self.centroids = self.getCentroids()
def volumeThreshold(self, threshold=250):
mask = self.labeledIm > self.labeledIm.mean()
sizes = ndimage.sum(mask, self.labeledIm, range(self.numClusters + 1))
mask_size = sizes > threshold
remove_pixel = mask_size[self.labeledIm]
self.labeledIm[remove_pixel] = 0
new_label_im, new_nr_objects = ndimage.label(self.labeledIm, self.s)
self.labeledIm = new_label_im
self.numClusters = new_nr_objects
self.volumes = self.getVolumes()
self.centroids = self.getCentroids()
def getVolumes(self):
mask = self.labeledIm > self.labeledIm.mean()
temp, temp_nr_objects = ndimage.label(self.labeledIm, self.s)
sizesTemp = ndimage.sum(mask, temp, range(self.numClusters + 1))
sizesTempRemoved0 = [sizesTemp[i] for i in range(1, len(sizesTemp))]
return sizesTempRemoved0
def getCentroids(self):
centers = ndimage.measurements.center_of_mass(self.labeledIm, self.labeledIm, [i for i in range(self.numClusters)])
return centers
def ccAnalysis(input):
labeledNumClusters = []
labeledNumCentroids = []
labeledNumVolumes = []
times = []
for i in range(10):
start_time = time.time()
clusterList = ClusterComponent(input)
times.append((time.time() - start_time))
labeledNumClusters.append(clusterList.numClusters)
labeledNumCentroids.append(len(clusterList.centroids))
labeledNumVolumes.append(len(clusterList.volumes))
print 'Average Number of Clusters:\n\tExpected: Around 450\tActual: ' + str(np.mean(labeledNumClusters))
print 'Average Number of Centroids:\n\tExpected: Around 450\tActual: ' + str(np.mean(labeledNumCentroids))
print 'Average Number of Volumes:\n\tExpected: Around 450\tActual: ' + str(np.mean(labeledNumVolumes))
print 'Average Time Taken to Execute: ' + str(np.mean(times))
return clusterList
import numpy as np
import matplotlib.pyplot as plt
clusterGrid = np.zeros((100, 1000, 1000))
for i in range(40):
for j in range(40):
for k in range(40):
clusterGrid[20*(2*j): 20*(2*j + 1), 20*(2*i): 20*(2*i + 1), 20*(2*k): 20*(2*k + 1)] = 1
plt.imshow(clusterGrid[5])
plt.axis('off')
plt.title('Slice at z=5')
plt.show()
clusterGrid = clusterGrid + 1
plt.imshow(clusterGrid[5])
plt.axis('off')
plt.title('Slice at z=5')
plt.show()
# Only block indices 0-2 fit inside the 100-voxel extent (block starts at 0, 40 and 80;
# index 3 would start at 120), so this grid contains 3*3*3 = 27 separate clusters.
simEasyGrid = np.zeros((100, 100, 100))
for i in range(4):
for j in range(4):
for k in range(4):
simEasyGrid[20*(2*j): 20*(2*j + 1), 20*(2*i): 20*(2*i + 1), 20*(2*k): 20*(2*k + 1)] = 1
plt.imshow(simEasyGrid[5])
plt.axis('off')
plt.show()
simDiffGrid = simEasyGrid + 1
plt.imshow(simDiffGrid[5])
plt.axis('off')
plt.show()
goodDatCC = ClusterComponent(simEasyGrid)
print "num volumes: " + str(len(goodDatCC.volumes))
print "num clusters: " + str(goodDatCC.numClusters)
print "num centroids: " + str(len(goodDatCC.centroids))
%matplotlib inline
import matplotlib.pyplot as plt
import pylab
labeledNumClusters = []
labeledNumCentroids = []
labeledNumVolumes = []
times = []
for i in range(10):
start_time = time.time()
clusterList = ClusterComponent(simEasyGrid)
labeledNumClusters.append(clusterList.numClusters)
labeledNumCentroids.append(len(clusterList.centroids))
labeledNumVolumes.append(len(clusterList.volumes))
times.append((time.time() - start_time))
pylab.hist(labeledNumClusters, normed=1)
pylab.xlabel('Number of Clusters')
pylab.ylabel('Number of Trials')
pylab.show()
print 'Average Number of Components on Easy Simulation Data:\n\tExpected: 27\tActual: ' + str(np.mean(labeledNumClusters))
pylab.hist(labeledNumCentroids, normed=1)
pylab.xlabel('Number of Centroids')
pylab.ylabel('Number of Trials')
pylab.show()
print 'Average Number of Components on Easy Simulation Data:\n\tExpected: 27\tActual: ' + str(np.mean(labeledNumCentroids))
pylab.hist(labeledNumVolumes, normed=1)
pylab.xlabel('Number of Volumes')
pylab.ylabel('Number of Trials')
pylab.show()
print 'Average Number of Components on Easy Simulation Data:\n\tExpected: 27\tActual: ' + str(np.mean(labeledNumVolumes))
pylab.hist(times, normed=1)
pylab.xlabel('Time Taken to Execute')
pylab.ylabel('Number of Trials')
plt.show()
print 'Average Time Taken to Execute: ' + str(np.mean(times))
badDatCC = ClusterComponent(simDiffGrid)
print "num volumes: " + str(len(badDatCC.volumes))
print "num clusters: " + str(badDatCC.numClusters)
print "num centroids: " + str(len(badDatCC.centroids))
%matplotlib inline
import matplotlib.pyplot as plt
import pylab
labeledNumClusters = []
labeledNumCentroids = []
labeledNumVolumes = []
times = []
for i in range(10):
start_time = time.time()
clusterList = ClusterComponent(simDiffGrid)
labeledNumClusters.append(clusterList.numClusters)
labeledNumCentroids.append(len(clusterList.centroids))
labeledNumVolumes.append(len(clusterList.volumes))
times.append((time.time() - start_time))
pylab.hist(labeledNumClusters, normed=1)
pylab.xlabel('Number of Components')
pylab.ylabel('Number of Trials')
pylab.show()
print 'Average Number of Components on Easy Simulation Data:\n\tExpected: 27\tActual: ' + str(np.mean(labeledNumClusters))
pylab.hist(labeledNumCentroids, normed=1)
pylab.xlabel('Number of Components')
pylab.ylabel('Number of Trials')
pylab.show()
print 'Average Number of Components on Easy Simulation Data:\n\tExpected: 27\tActual: ' + str(np.mean(labeledNumCentroids))
pylab.hist(labeledNumVolumes, normed=1)
pylab.xlabel('Number of Components')
pylab.ylabel('Number of Trials')
pylab.show()
print 'Average Number of Components on Easy Simulation Data:\n\tExpected: 27\tActual: ' + str(np.mean(labeledNumVolumes))
pylab.hist(times, normed=1)
pylab.xlabel('Time Taken to Execute')
pylab.ylabel('Number of Trials')
plt.show()
print 'Average Time Taken to Execute: ' + str(np.mean(times))
from random import randrange as rand
from mpl_toolkits.mplot3d import axes3d, Axes3D
def generatePointSet():
center = (rand(0, 99), rand(0, 99), rand(0, 99))
toPopulate = []
for z in range(-1, 5):
for y in range(-1, 5):
for x in range(-1, 5):
curPoint = (center[0]+z, center[1]+y, center[2]+x)
#only populate valid points
valid = True
for dim in range(3):
if curPoint[dim] < 0 or curPoint[dim] >= 100:
valid = False
if valid:
toPopulate.append(curPoint)
return set(toPopulate)
def generateTestVolume():
#create a test volume
volume = np.zeros((100, 100, 100))
myPointSet = set()
for _ in range(rand(500, 800)):
potentialPointSet = generatePointSet()
#be sure there is no overlap
while len(myPointSet.intersection(potentialPointSet)) > 0:
potentialPointSet = generatePointSet()
for elem in potentialPointSet:
myPointSet.add(elem)
#populate the true volume
for elem in myPointSet:
volume[elem[0], elem[1], elem[2]] = 60000
#introduce noise
noiseVolume = np.copy(volume)
for z in range(noiseVolume.shape[0]):
for y in range(noiseVolume.shape[1]):
for x in range(noiseVolume.shape[2]):
if not (z, y, x) in myPointSet:
noiseVolume[z][y][x] = rand(0, 10000)
return volume
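# Note: noiseVolume is built above but the clean volume is what gets returned, so
# the analysis below runs on noise-free synthetic clusters.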
foreground = generateTestVolume()
#displaying the random clusters
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
z, y, x = foreground.nonzero()
ax.scatter(x, y, z, zdir='z', c='r')
plt.title('Random Foreground Clusters')
plt.show()
clusterList = ccAnalysis(foreground)
plt.imshow(clusterList.labeledIm[8])
plt.show()
dataSubset = tIO.unzipChannels(tIO.loadTiff('../data/SEP-GluA1-KI_tp1.tif'))[0][0:5]
plt.imshow(dataSubset[0], cmap="gray")
plt.axis('off')
plt.title('Raw Data Slice at z=0')
plt.show()
#finding the clusters after plosPipeline
plosOutSub = pLib.pipeline(dataSubset)
#binarize output of plos lib
bianOutSub = cLib.otsuVox(plosOutSub)
#dilate the output based on neigborhood size
bianOutSub = ndimage.morphology.binary_dilation(bianOutSub).astype(int)
im = ClusterComponent(bianOutSub)
plt.imshow(im.labeledIm[3])
plt.title('Before Volume Thresholding')
plt.axis('off')
plt.show()
print "Average Volume Before Thresholding: " + str(np.mean(im.getVolumes()))
print "Max Volume Before Thresholding: " + str(np.max(im.getVolumes()))
print "num volumes: " + str(len(im.volumes))
print "num clusters: " + str(im.numClusters)
print "num centroids: " + str(len(im.centroids))
im.volumeThreshold(200)
plt.imshow(im.labeledIm[3])
plt.title('After Volume Thresholding')
plt.axis('off')
plt.show()
print "Average Volume After Thresholding: " + str(np.mean(im.getVolumes()))
print "Max Volume After Thresholding: " + str(np.max(im.getVolumes()))
print "num volumes: " + str(len(im.getVolumes()))
print "num clusters: " + str(im.numClusters)
print "num centroids: " + str(len(im.getCentroids()))
start_time = time.time()
im = ClusterComponent(bianOutSub)
im.volumeThreshold(200)
print "Total Time Taken: " + str(time.time() - start_time) + " seconds"
dataFull = tIO.unzipChannels(tIO.loadTiff('../data/SEP-GluA1-KI_tp1.tif'))[0]
plt.imshow(dataSubset[3], cmap="gray")
plt.axis('off')
plt.title('Raw Data Slice at z=3')
plt.show()
#finding the clusters after plosPipeline
plosOutFull = pLib.pipeline(dataFull)
#binarize output of plos lib
bianOutFull = cLib.otsuVox(plosOutFull)
#dilate the output based on neigborhood size
bianOutFullDil = ndimage.morphology.binary_dilation(bianOutFull).astype(int)
fullIm = ClusterComponent(bianOutFullDil)
plt.imshow(fullIm.labeledIm[270])
plt.title('Before Volume Thresholding')
plt.axis('off')
plt.show()
print "Average Volume Before Thresholding: " + str(np.mean(fullIm.volumes))
print "num volumes: " + str(len(fullIm.volumes))
print "num clusters: " + str(fullIm.numClusters)
print "num centroids: " + str(len(fullIm.centroids))
fullIm.volumeThreshold(100)
plt.imshow(fullIm.labeledIm[270])
plt.title('After Volume Thresholding')
plt.axis('off')
plt.show()
print "num volumes: " + str(len(fullIm.volumes))
print "num clusters: " + str(fullIm.numClusters)
print "num centroids: " + str(len(fullIm.centroids))
start_time = time.time()
fullIm = ClusterComponent(bianOutFullDil)
fullIm.volumeThreshold(100)
print "Total Time Taken: " + str(time.time() - start_time) + " seconds"
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Really Simple API
Step2: 1) Instantiate an object and set the parameters
Step3: 2) Fit the model
Step4: 3) Apply / evaluate
Step5: And again
Step6: Exercises
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
digits.data, digits.target, random_state=0)
X_train.shape
np.bincount(y_train)
from sklearn.svm import LinearSVC
svm = LinearSVC()
svm.fit(X_train, y_train)
print(svm.predict(X_train))
print(y_train)
svm.score(X_train, y_train)
svm.score(X_test, y_test)
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=50)
rf.fit(X_train, y_train)
rf.score(X_test, y_test)
# %load solutions/train_iris.py
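# A possible solution sketch (the contents of solutions/train_iris.py are not
# shown here, so this is an assumption): apply the same instantiate / fit / score
# workflow to the iris dataset.
from sklearn.datasets import load_iris
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, random_state=0)
clf = RandomForestClassifier(n_estimators=50)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))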
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem Setup
Step2: Next we need to set the bounds for the problem. In this example, the space is one dimensional, so the upper and lower bounds will both be lists with one element each.
Step3: For illustrative purposes, let's plot the latent function, ground_truth(), over a range of x.
Step4: Initialise the Sampler
Step5: The sampler uses an acquisition function to determine what the next most beneficial datapoint to obtain would be. The quantification of the benefit varies from application to application. As a result, the user is free to provide their own acquisition function or choose an existing one from the dora library.
Step6: We also set an explore_priority scalar. The larger this value, the more the sampler will seek to explore areas of the parameter space with sparse observations rather than attempting to refine an area that the model believes will maximise its objective function.
Step7: When initialising the sampler, we specify the class of the model it will base its predictions on. In this case, it's the Gaussian process. Seeding the model is unnecessary unless the user desires a repeatable set of initial picks prior to training.
Step8: Active sampling strategy
Step9: The user can then evaluate their system using the acquired parameters.
Step10: The sampler can then be updated using the sampler.update() function, passing in the observation along with the parameter's associated UID.
Step11: Let's request the parameter values of the remaining training points and update the sampler with the corresponding observed values.
Step12: The sampler now has enough data to train a model of the function which it will do if it receives another request.
Step13: Once the sampler is trained, it provides the user with a new parameter value with which to query the ground truth. In the plot below it is represented by the red circle. The mean function and standard deviation function of the sampler's probabilistic model of the ground truth are shown as the blue dashed line and blue shaded area, respectively.
Step14: Examining the values of the acquisition function below shows why the sampler requested an observation for that specific parameter value, i.e. it is the acquisition function's maximum.
Step15: You can run the cell below a number of times to repeat the pick() and update() cycle while the sampler searches for the maximum value of the system of interest.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import logging
from scipy.stats import norm
import dora.active_sampling as sampling
import time
from dora.active_sampling import pltutils
import matplotlib.pyplot as pl
import matplotlib as mpl
%matplotlib inline
def ground_truth(X):
return np.sin(X-5) + np.sin(X/2-2) + 0.4*np.sin(X/5-2) + 0.4*np.sin(X-3) + 0.2*np.sin(X/0.3-3)
# Set up a problem bounds
lower = [0]
upper = [30]
x = np.arange(0,30,0.1)
fx = ground_truth(x)
pl.figure(figsize=(15,5))
pl.plot(x,fx);
pl.xlabel('x')
pl.ylabel('f(x)')
pl.title('Ground Truth')
pl.show()
n_train = 8
acq_name = 'pred_upper_bound'
explore_priority = 1.
sampler = sampling.GaussianProcess(lower, upper, acq_name=acq_name,
n_train=n_train, seed=11)
xq, uid = sampler.pick()
print('Parameter:',xq)
print('Unique ID:',uid)
yq_true = ground_truth(xq)
print('Observation value:', yq_true)
# Update the sampler about the new observation
sampler.update(uid, yq_true)
print('Sampler has been updated with the new observation value')
for i in range(n_train-1):
xq, uid = sampler.pick()
yq_true = ground_truth(xq)
sampler.update(uid, yq_true)
pl.figure(figsize=(15,5))
pl.plot(x,fx,'k');
pl.plot(sampler.X, sampler.y,'go', markersize=7)
#pl.plot(sampler.X[-1], sampler.y[-1],'ro', markersize=10)
pl.xlabel('x')
pl.ylabel('f(x)')
pl.title('Ground Truth and observed training data')
pl.legend(('Ground truth', 'Observations'))
pl.show()
xq, uid = sampler.pick()
xquery = x[:,np.newaxis]
mf, vf = sampler.predict(xquery)
pl.figure(figsize=(15,5))
pl.plot(x,fx,'k');
pl.plot(sampler.X[:-1], sampler.y[:-1],'go', markersize=10)
pl.plot(sampler.X[-1], sampler.y[-1],'ro', markersize=10)
pl.plot(xquery, mf,'b--')
y1 = mf - np.sqrt(vf)*2
y2 = mf + np.sqrt(vf)*2
pl.fill_between(xquery[:,0], y1[:,0], y2[:,0], where=(y2 >= y1)[:,0], facecolor='lightblue')
pl.xlabel('x')
pl.ylabel('f(x)')
pl.legend(('Ground truth', 'Observations', "Requested observation", "Sampler's predicted mean",'Predicted 2 standard deviation'))
pl.title("Sampler's predicted mean function and predicted two standard deviations")
pl.show()
acq_value, acq_max_ind = sampler.eval_acq(x)
pl.figure(figsize=(15,5))
pl.plot(x,fx,'k');
pl.plot(sampler.X[:-1], sampler.y[:-1],'go', markersize=10)
pl.plot(x, acq_value,'r--')
pl.plot(x[acq_max_ind], acq_value[acq_max_ind],'rD', markersize=10)
pl.plot(x, acq_value,'r--')
pl.xlabel('x')
pl.ylabel('f(x)')
pl.title("The sampler's acquistion function")
pl.legend(('Ground truth', 'Observations', 'Acquistion function', 'Acquistion function maximum') )
pl.show()
# Observe the value of the system of interest using the requested parameter value.
yq_true = ground_truth(xq)
# Update the sampler about the new observation
sampler.update(uid, yq_true)
# Plot the results
xquery = x[:,np.newaxis]
mf, vf = sampler.predict(xquery)
pl.figure(figsize=(15,15))
pl.subplot(2,1,1)
pl.plot(x,fx,'k');
pl.plot(sampler.X, sampler.y,'go', markersize=10)
pl.plot(sampler.X[-1], sampler.y[-1],'ro', markersize=10)
pl.plot(xquery, mf,'b--')
y1 = mf - np.sqrt(vf)*2
y2 = mf + np.sqrt(vf)*2
pl.fill_between(xquery[:,0], y1[:,0], y2[:,0], where=(y2 >= y1)[:,0], facecolor='lightblue')
pl.xlabel('x')
pl.ylabel('f(x)')
pl.legend(('Ground truth', 'Observations', "Most recently updated observation", "Sampler's predicted mean"))
pl.title("Sampler's predicted mean function and predicted two standard deviations")
# Visualise the acquistion function
acq_value, acq_max_ind = sampler.eval_acq(x)
pl.subplot(2,1,2)
pl.plot(x,fx,'k');
pl.plot(sampler.X, sampler.y,'go', markersize=10)
pl.plot(x, acq_value,'r--')
pl.plot(x[acq_max_ind], acq_value[acq_max_ind],'rD', markersize=10)
pl.xlabel('x')
pl.ylabel('f(x)')
pl.title("The new acquistion function (after update)")
pl.legend(('Ground truth', 'Observations', 'Acquistion function', 'Acquistion function maximum') )
pl.show()
#Request a new parameter value to observe
xq, uid = sampler.pick()
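# A hedged sketch (not part of the original notebook): after updating the pending
# pick, the pick/observe/update cycle could also be automated for a fixed budget
# of additional samples.
sampler.update(uid, ground_truth(xq))
for _ in range(20):
    xq, uid = sampler.pick()
    sampler.update(uid, ground_truth(xq))
print('Best observed value so far:', np.max(sampler.y))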
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoder Input
Step21: Encoding
Step24: Decoding - Training
Step27: Decoding - Inference
Step30: Build the Decoding Layer
Step33: Build the Neural Network
Step34: Neural Network Training
Step36: Build the Graph
Step40: Batch and pad the source and target sequences
Step43: Train
Step45: Save Parameters
Step47: Checkpoint
Step50: Sentence to Sequence
Step52: Translate
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
source_sentences = source_text.split("\n")
target_sentences = target_text.split("\n")
source_id_text = [[source_vocab_to_int[word] for word in sentence.split()] for sentence in source_sentences]
target_id_text = [[target_vocab_to_int[word] for word in (sentence + ' <EOS>').split()] for sentence in target_sentences]
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
input_placeholer = tf.placeholder(tf.int32, [None, None], name="input")
targets_placeholder = tf.placeholder(tf.int32, [None, None], name="targets")
learning_rate_placeholder = tf.placeholder(tf.float32, name="learning_rate")
keep_prob_placeholder = tf.placeholder(tf.float32, name="keep_prob")
target_sequence_length_placeholder = tf.placeholder(tf.int32, [None], name="target_sequence_length")
max_target_sequence_length = tf.reduce_max(target_sequence_length_placeholder, name="max_target_len")
source_sequence_length_placeholder = tf.placeholder(tf.int32, [None], name="source_sequence_length")
return input_placeholer, targets_placeholder, learning_rate_placeholder, keep_prob_placeholder, target_sequence_length_placeholder, max_target_sequence_length, source_sequence_length_placeholder
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
go_id = target_vocab_to_int["<GO>"]
init_list = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
preprocessed = tf.concat([tf.fill([batch_size, 1], go_id), init_list], 1)
return preprocessed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
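# Illustrative (hypothetical) example of the transformation above: for a batch
# [[10, 11, 12, <EOS id>]] the decoder input becomes [[<GO id>, 10, 11, 12]] --
# the final token of every sequence is dropped and a <GO> id is prepended.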
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
# TODO: Implement Function
encoder_embedded_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
lstm_cells = [tf.contrib.rnn.LSTMCell(rnn_size) for _ in range(num_layers)]
lstm_cells = [tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob) for cell in lstm_cells]
encoder_stacked_cell = tf.contrib.rnn.MultiRNNCell(lstm_cells)
encoding_rnn_output, encoding_rnn_state = tf.nn.dynamic_rnn(encoder_stacked_cell, encoder_embedded_input, dtype=tf.float32)
return encoding_rnn_output, encoding_rnn_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
training_helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)
decoder = tf.contrib.seq2seq.BasicDecoder(
dec_cell, training_helper, encoder_state, output_layer=output_layer
)
(training_decoder_outputs, _) = tf.contrib.seq2seq.dynamic_decode(
decoder, impute_finished=True, maximum_iterations=max_summary_length
)
training_rnn_output = training_decoder_outputs.rnn_output
simple_id = training_decoder_outputs.sample_id
training_rnn_output_drop = tf.nn.dropout(training_rnn_output, keep_prob)
return tf.contrib.seq2seq.BasicDecoderOutput(training_rnn_output_drop, simple_id)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
start_tokens = tf.tile(
tf.constant([start_of_sequence_id], dtype=tf.int32),
[batch_size],
name='start_tokens'
)
embedding_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
dec_embeddings, start_tokens, end_of_sequence_id
)
decoder = tf.contrib.seq2seq.BasicDecoder(
dec_cell, embedding_helper, encoder_state, output_layer=output_layer
)
(training_decoder_outputs, _) = tf.contrib.seq2seq.dynamic_decode(decoder, impute_finished=True, maximum_iterations=max_target_sequence_length)
training_rnn_output = training_decoder_outputs.rnn_output
simple_id = training_decoder_outputs.sample_id
training_rnn_output_drop = tf.nn.dropout(training_rnn_output, keep_prob)
return tf.contrib.seq2seq.BasicDecoderOutput(training_rnn_output_drop, simple_id)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
decoding_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
decoding_embed_input = tf.nn.embedding_lookup(decoding_embeddings, dec_input)
lstm_cells = [tf.contrib.rnn.LSTMCell(rnn_size) for _ in range(num_layers)]
lstm_cells = [tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob) for cell in lstm_cells]
decoder_cell = tf.contrib.rnn.MultiRNNCell(lstm_cells)
output_layer = Dense(
target_vocab_size, kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1)
)
with tf.variable_scope("decode"):
training_logits = decoding_layer_train(
encoder_state, decoder_cell, decoding_embed_input,
target_sequence_length, max_target_sequence_length,
output_layer, keep_prob
)
with tf.variable_scope("decode", reuse=True):
inference_logits = decoding_layer_infer(
encoder_state, decoder_cell, decoding_embeddings,
target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], max_target_sequence_length,
target_vocab_size, output_layer, batch_size, keep_prob
)
return training_logits, inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Decoder embedding size
:param dec_embedding_size: Encoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
(_, encoder_state) = encoding_layer(
input_data, rnn_size, num_layers,
keep_prob, source_sequence_length, source_vocab_size,
enc_embedding_size
)
decoder_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
training_decoder_output, inference_decoder_output = decoding_layer(
decoder_input, encoder_state, target_sequence_length,
max_target_sentence_length, rnn_size, num_layers,
target_vocab_to_int, target_vocab_size, batch_size,
keep_prob, dec_embedding_size
)
return training_decoder_output, inference_decoder_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 25
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 512
# Number of Layers
num_layers = 3
# Embedding Size
encoding_embedding_size = 256
decoding_embedding_size = 256
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.6
display_step = 10
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
unk_int = vocab_to_int['<UNK>']
word_ids = [vocab_to_int.get(word, unk_int) for word in sentence.lower().split()]
return word_ids
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This is what our data looks like
Step3: now we can proceed with the analysis
Step4: TRAINING TIME!
|
<ASSISTANT_TASK:>
Python Code:
#import statements
from keras.layers.core import Dense, Activation, Dropout
from keras.layers.recurrent import LSTM
from keras.models import Sequential
import lstm, time
import pandas as pd
import matplotlib as plt
from sklearn.preprocessing import StandardScaler
import numpy as np
#read the annotated file
spx_data = pd.read_csv('data/spx_data.csv')
spx_data.head()
# reorder data frame to put the closing price at the end of the arrays
spx_data = spx_data[['Open', 'High', 'Low', 'Adj Close', 'Volume', 'Close']]
spx_data_close = spx_data[['Close']]
spx_data
plt.pyplot.plot(spx_data_close)
plt.pyplot.show()
#this is a plot of the closing prices over time
#normalize the data
normalized_spx_data = StandardScaler().fit_transform(spx_data)
## N.B the function below is a modifed form of a helped function written by Jakob Aungier
## (full citation in lstm.py)
def prep_data(data, seq_len):
Returns the input data and targets for testing and validation sets
data -> the full corpus of data
seq_len -> the length of our sequence
sequence_length = seq_len + 1
result = []
for index in range(len(data) - sequence_length):
result.append(data[index: index + sequence_length])
result = np.array(result)
# 90% train, 10% validation partition
partition = round(0.9 * result.shape[0])
train = result[:int(partition), :]
np.random.shuffle(train)
x_train = train[:, :-1]
y_train = train[:, -1, -1]
x_test = result[int(partition):, :-1]
y_test = result[int(partition):, -1, -1]
x_train = np.reshape(x_train, (x_train.shape[0], x_train.shape[1], 6))
x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 6))
return [x_train, y_train, x_test, y_test]
X_train, y_train, X_test, y_test = prep_data(normalized_spx_data, 20)
#building Model
model = Sequential()
model.add(LSTM(input_dim=6,
output_dim=50,
return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(100, return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(
output_dim=1))
model.add(Activation('linear'))
start = time.time()
model.compile(loss='mse', optimizer='rmsprop')
print('compilation time:', time.time() - start)
#Train the model
model.fit(
X_train,
y_train,
batch_size=512,
nb_epoch=200,
validation_split=0.05)
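# A hedged follow-up sketch (not in the original notebook): check the fit on the
# held-out 10% by predicting the normalized closing price and plotting it against
# the true values.
predictions = model.predict(X_test)
plt.pyplot.plot(y_test, label='actual')
plt.pyplot.plot(predictions, label='predicted')
plt.pyplot.legend()
plt.pyplot.show()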
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read in the files
Step2: Here's a plot of nominal GDP
Step3: And here is nominal GDP growth
Step4: Fit information equilibrium parameters
Step5: And we can show the relationship between the growth rates (i.e. compute the inflation rate equal to the growth rate of the CPI)
Step6: Additionally, rearranging the terms and looking at the growth rate, we can show a form of Okun's law. Since g_p = g_a - g_b, we can say g_b = g_a - g_p. The right hand side of the last equation when A is nominal GDP and p is the CPI is the CPI-deflated real GDP growth. Okun's law is an inverse relationship between the change in unemployment and RGDP growth, but in our case we will look at the direct relationship of RGDP growth and change in employment (PAYEMS).
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import IEtools
import pylab as pl
%pylab inline
filename1='C:/econdata/GDP.xls'
filename2='C:/econdata/PAYEMS.xls'
filename3='C:/econdata/CPIAUCSL.xls'
gdp = IEtools.FREDxlsRead(filename1)
lab = IEtools.FREDxlsRead(filename2)
cpi = IEtools.FREDxlsRead(filename3)
pl.plot(gdp['interp'].x,gdp['interp'](gdp['interp'].x))
pl.ylabel(gdp['name']+' [G$]')
pl.yscale('log')
pl.show()
pl.plot(gdp['growth'].x,gdp['growth'](gdp['growth'].x))
pl.ylabel(gdp['name']+' growth [%]')
pl.show()
result = IEtools.fitGeneralInfoEq(gdp['data'],lab['data'], guess=[1.0,0.0])
print(result)
print('IT index = ',np.round(result.x[0],decimals=2))
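# In the information-equilibrium model A ~ B^k (k = the IT index), so
# log(NGDP) ~ k*log(PAYEMS) + const; the fitted exponent and constant are used below.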
time=gdp['interp'].x
pl.plot(time,np.exp(result.x[0]*np.log(lab['interp'](time))+result.x[1]),label='model')
pl.plot(time,gdp['interp'](time),label='data')
pl.yscale('log')
pl.ylabel(gdp['name']+' [G$]')
pl.legend()
pl.show()
time=gdp['data'][:,0]
der1=gdp['growth'](time)-lab['growth'](time)
der2=cpi['growth'](time)
pl.plot(time,der1,label='model')
pl.plot(time,der2,label='data')
pl.legend()
pl.show()
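# Okun's-law-style check: rearranging g_p = g_A - g_B gives g_B = g_A - g_p,
# i.e. employment growth ('data') should track CPI-deflated real GDP growth ('model').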
time=gdp['data'][:,0]
der1=gdp['growth'](time)-cpi['growth'](time)
der2=lab['growth'](time)
pl.plot(time,der1,label='model')
pl.plot(time,der2,label='data')
pl.legend()
pl.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Filter the data with different window lengths.
Step2: Calculate the threshold for each filtered trace
Step3: Plot each filtered trace and its threshold
Step4: Iterate through all the data and save all shapes with an amplitude higher than the threshold
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import medfilt
import gitInformation
from neo.io import NeuralynxIO
%matplotlib inline
gitInformation.printInformation()
# Session folder with all needed neuralynx files
sessionfolder = 'C:\\Users\\Dominik\\Documents\\GitRep\\kt-2015-DSPHandsOn\\MedianFilter\\Python\\07. Real Data\\Figure'
# Loading the files with all datas and store them as a np.array
NIO = NeuralynxIO(sessiondir = sessionfolder, cachedir = sessionfolder)
block = NIO.read_block()
seg = block.segments[0]
analogsignal = seg.analogsignalarrays[0]
csc = analogsignal.magnitude
plt.plot(csc)
plt.savefig('realdata.png', dpi = 400)
filtered1 = medfilt(csc,25)
new_data1 = csc-filtered1
filtered2 = medfilt(csc,35)
new_data2 = csc-filtered2
filtered3 = medfilt(csc,45)
new_data3 = csc-filtered3
filtered4 = medfilt(csc,55)
new_data4 = csc-filtered4
filtered5 = medfilt(csc,95)
new_data5 = csc-filtered5
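# Subtracting the median-filtered trace removes the slow baseline (a crude
# high-pass filter), leaving the fast spike transients in new_data1..new_data5.
# The 4*median(|x|/0.6745) expression below is the usual robust (MAD-based)
# estimate of 4 standard deviations of the background noise.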
threshold1 = 4*np.median(abs(new_data1)/0.6745)
threshold2 = 4*np.median(abs(new_data2)/0.6745)
threshold3 = 4*np.median(abs(new_data3)/0.6745)
threshold4 = 4*np.median(abs(new_data4)/0.6745)
threshold5 = 4*np.median(abs(new_data5)/0.6745)
threshold6 = 4*np.median(abs(csc)/0.6745)
plt.figure(figsize=(30,7))
plt.axis([0, 63000, -0.00003, 0.00016])
plt.plot(new_data1, color = 'r')
plt.hlines(threshold1, 0, len(new_data1), color = 'black')
plt.savefig('Threshold.png', dpi = 400)
plt.figure(figsize=(30,7))
plt.axis([0, 63000, -0.00003, 0.00016])
plt.plot(new_data2, color = 'r')
plt.hlines(threshold2, 0, len(new_data1), color = 'black')
plt.savefig('ThresholdWL35.png', dpi = 400)
plt.figure(figsize=(30,7))
plt.axis([0, 63000, -0.00003, 0.00016])
plt.plot(new_data3, color = 'r')
plt.hlines(threshold3, 0, len(new_data1), color = 'black')
plt.savefig('ThresholdWL45.png', dpi = 400)
plt.figure(figsize=(30,7))
plt.axis([0, 63000, -0.00003, 0.00016])
plt.plot(new_data4, color = 'r')
plt.hlines(threshold4, 0, len(new_data1), color = 'black')
plt.savefig('ThresholdWL55.png', dpi = 400)
plt.figure(figsize=(30,7))
plt.axis([0, 63000, -0.00003, 0.00016])
plt.plot(new_data5, color = 'r')
plt.hlines(threshold5, 0, len(new_data1), color = 'black')
plt.savefig('ThresholdWL95.png', dpi = 400)
def threshHold(new_data, threshold):
count = -1
count2 = 0
timer = 0
# Dictionary with all thresholded shapes
thresholds = {}
# Get the value in the new_data array:
for i in new_data:
# Increment the counter (counter = position in the array)
count += 1
if i >= threshold:
# check whether any value in the thresholded window is bigger than 0.00005
temp = [i for i in new_data[count -6 : count + 18] if i >= 0.00005]
# If no values are bigger than 0.00005 and the dead time is zero,
# save the window in the dictionary
if len(temp) == 0 and timer == 0:
# set the timer to 16, so the next 16 samples are skipped (dead time)
timer = 16
# increment count2, for the array name
count2 += 1
thresholds["spike{0}".format(count2)] = new_data[count -6 : count + 18]
elif timer > 0:
# Decrement the timer.
timer -= 1
else:
pass
return thresholds
thresholds1 = threshHold(new_data1, threshold1)
thresholds2 = threshHold(new_data2, threshold2)
thresholds3 = threshHold(new_data3, threshold3)
thresholds4 = threshHold(new_data4, threshold4)
thresholds5 = threshHold(new_data5, threshold5)
for i in thresholds1:
plt.plot(thresholds1[i], color = 'black', linewidth = 0.5)
plt.xlabel('Window length = 25')
#plt.savefig('Wl25.png', dpi = 400)
for i in thresholds2:
plt.plot(thresholds2[i], color = 'black', linewidth = 0.5)
plt.xlabel('Window length = 35')
#plt.savefig('Wl35.png', dpi = 400)
for i in thresholds3:
plt.plot(thresholds3[i], color = 'black', linewidth = 0.5)
plt.xlabel('Window length = 45')
#plt.savefig('Wl45.png', dpi = 400)
for i in thresholds4:
plt.plot(thresholds4[i], color = 'black', linewidth = 0.5)
plt.xlabel('Window length = 55')
#plt.savefig('Wl55.png', dpi = 400)
for i in thresholds5:
plt.plot(thresholds5[i], color = 'black', linewidth = 0.5)
plt.xlabel('Window length = 95')
#plt.savefig('Wl95.png', dpi = 400)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <table class="tfo-notebook-buttons" align="left">
Step3: If you're putting a nontrivial chunk of forward pass code into the shim, you want to know that it is behaving the same way as it did in TF1.x. For example, consider trying to put an entire TF-Slim Inception-Resnet-v2 model into the shim as such
Step6: As it so happens, this layer actually works perfectly fine out of the box (complete with accurate regularization loss tracking).
Step 1
Step7: The first scope (assert_no_variable_creations()) will raise an error immediately once you try creating a variable within the scope. This allows you to inspect the stacktrace (and use interactive debugging) to figure out exactly what lines of code created a variable instead of reusing an existing one.
Step9: In the example below, observe how these decorators work on a layer that incorrectly creates new weights each time instead of reusing existing ones.
Step11: You can fix the layer by making sure it only creates the weights once and then reuses them each time.
Step12: Troubleshooting
Step13: Next, do the same for the shim-wrapped layer in TF2.
Step14: The shim-based InceptionResnetV2 layer passes this test. However, in the case where they don't match, you can run it through a diff (text or other) to see where the differences are.
Step 3
Step15: Get the TF2 results.
Step16: The numbers match between TF1.x and TF2 when you remove sources of randomness, and the TF2-compatible InceptionResnetV2 layer passes the test.
Step 4
Step17: However, notice that in constant mode, because b and c were generated with the same seed and have the same shape, they will have exactly the same values.
Step18: Trace order
Step19: However, notice that in this mode random generation is sensitive to program order, and so the following generated random numbers do not match.
Step20: To allow for debugging variations due to tracing order, DeterministicRandomTestTool in num_random_ops mode allows you to see how many random operations have been traced with the operation_seed property.
Step21: If you need to account for varying trace order in your tests, you can even set the auto-incrementing operation_seed explicitly. For example, you can use this to make random number generation match across two different program orders.
Step22: However, DeterministicRandomTestTool disallows reusing already-used operation seeds, so make sure the auto-incremented sequences cannot overlap. This is because eager execution generates different numbers for follow-on usages of the same operation seed while TF1 graphs and sessions do not, so raising an error helps keep session and eager stateful random number generation in line.
Step23: Verifying Inference
Step24: Verifying Training
Step25: You have now verified that the InceptionResnetV2 model running eagerly with decorators around tf.keras.layers.Layer numerically matches the slim network running in TF1 graphs and sessions.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip uninstall -y -q tensorflow
# Install tf-nightly as the DeterministicRandomTestTool is available only in
# Tensorflow 2.8
!pip install -q tf-nightly
!pip install -q tf_slim
import tensorflow as tf
import tensorflow.compat.v1 as v1
import numpy as np
import tf_slim as slim
import sys
from contextlib import contextmanager
!git clone --depth=1 https://github.com/tensorflow/models.git
import models.research.slim.nets.inception_resnet_v2 as inception
# TF1 Inception resnet v2 forward pass based on slim layers
def inception_resnet_v2(inputs, num_classes, is_training):
with slim.arg_scope(
inception.inception_resnet_v2_arg_scope(batch_norm_scale=True)):
return inception.inception_resnet_v2(inputs, num_classes, is_training=is_training)
class InceptionResnetV2(tf.keras.layers.Layer):
"""Slim InceptionResnetV2 forward pass as a Keras layer."""
def __init__(self, num_classes, **kwargs):
super().__init__(**kwargs)
self.num_classes = num_classes
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs, training=None):
is_training = training or False
# Slim does not accept `None` as a value for is_training,
# Keras will still pass `None` to layers to construct functional models
# without forcing the layer to always be in training or in inference.
# However, `None` is generally considered to run layers in inference.
with slim.arg_scope(
inception.inception_resnet_v2_arg_scope(batch_norm_scale=True)):
return inception.inception_resnet_v2(
inputs, self.num_classes, is_training=is_training)
@contextmanager
def assert_no_variable_creations():
"""Assert no variables are created in this context manager scope."""
def invalid_variable_creator(next_creator, **kwargs):
raise ValueError("Attempted to create a new variable instead of reusing an existing one. Args: {}".format(kwargs))
with tf.variable_creator_scope(invalid_variable_creator):
yield
@contextmanager
def catch_and_raise_created_variables():
"""Raise all variables created within this context manager scope (if any)."""
created_vars = []
def variable_catcher(next_creator, **kwargs):
var = next_creator(**kwargs)
created_vars.append(var)
return var
with tf.variable_creator_scope(variable_catcher):
yield
if created_vars:
raise ValueError("Created vars:", created_vars)
model = InceptionResnetV2(1000)
height, width = 299, 299
num_classes = 1000
inputs = tf.ones( (1, height, width, 3))
# Create all weights on the first call
model(inputs)
# Verify that no new weights are created in followup calls
with assert_no_variable_creations():
model(inputs)
with catch_and_raise_created_variables():
model(inputs)
class BrokenScalingLayer(tf.keras.layers.Layer):
"""Scaling layer that incorrectly creates new weights each time it is called."""
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs):
var = tf.Variable(initial_value=2.0)
bias = tf.Variable(initial_value=2.0, name='bias')
return inputs * var + bias
model = BrokenScalingLayer()
inputs = tf.ones( (1, height, width, 3))
model(inputs)
try:
with assert_no_variable_creations():
model(inputs)
except ValueError as err:
import traceback
traceback.print_exc()
model = BrokenScalingLayer()
inputs = tf.ones( (1, height, width, 3))
model(inputs)
try:
with catch_and_raise_created_variables():
model(inputs)
except ValueError as err:
print(err)
class FixedScalingLayer(tf.keras.layers.Layer):
"""Scaling layer that creates its weights once and then reuses them."""
def __init__(self):
super().__init__()
self.var = None
self.bias = None
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs):
if self.var is None:
self.var = tf.Variable(initial_value=2.0)
self.bias = tf.Variable(initial_value=2.0, name='bias')
return inputs * self.var + self.bias
model = FixedScalingLayer()
inputs = tf.ones( (1, height, width, 3))
model(inputs)
with assert_no_variable_creations():
model(inputs)
with catch_and_raise_created_variables():
model(inputs)
# Build the forward pass inside a TF1.x graph, and
# get the counts, shapes, and names of the variables
graph = tf.Graph()
with graph.as_default(), tf.compat.v1.Session(graph=graph) as sess:
height, width = 299, 299
num_classes = 1000
inputs = tf.ones( (1, height, width, 3))
out, endpoints = inception_resnet_v2(inputs, num_classes, is_training=False)
tf1_variable_names_and_shapes = {
var.name: (var.trainable, var.shape) for var in tf.compat.v1.global_variables()}
num_tf1_variables = len(tf.compat.v1.global_variables())
height, width = 299, 299
num_classes = 1000
model = InceptionResnetV2(num_classes)
# The weights will not be created until you call the model
inputs = tf.ones( (1, height, width, 3))
# Call the model multiple times before checking the weights, to verify variables
# get reused rather than accidentally creating additional variables
out, endpoints = model(inputs, training=False)
out, endpoints = model(inputs, training=False)
# Grab the name: shape mapping and the total number of variables separately,
# because in TF2 variables can be created with the same name
num_tf2_variables = len(model.variables)
tf2_variable_names_and_shapes = {
var.name: (var.trainable, var.shape) for var in model.variables}
# Verify that the variable counts, names, and shapes all match:
assert num_tf1_variables == num_tf2_variables
assert tf1_variable_names_and_shapes == tf2_variable_names_and_shapes
graph = tf.Graph()
with graph.as_default(), tf.compat.v1.Session(graph=graph) as sess:
height, width = 299, 299
num_classes = 1000
inputs = tf.ones( (1, height, width, 3))
out, endpoints = inception_resnet_v2(inputs, num_classes, is_training=False)
# Rather than running the global variable initializers,
# reset all variables to a constant value
var_reset = tf.group([var.assign(tf.ones_like(var) * 0.001) for var in tf.compat.v1.global_variables()])
sess.run(var_reset)
# Grab the outputs & regularization loss
reg_losses = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.REGULARIZATION_LOSSES)
tf1_regularization_loss = sess.run(tf.math.add_n(reg_losses))
tf1_output = sess.run(out)
print("Regularization loss:", tf1_regularization_loss)
tf1_output[0][:5]
height, width = 299, 299
num_classes = 1000
model = InceptionResnetV2(num_classes)
inputs = tf.ones((1, height, width, 3))
# Call the model once to create the weights
out, endpoints = model(inputs, training=False)
# Reset all variables to the same fixed value as above, with no randomness
for var in model.variables:
var.assign(tf.ones_like(var) * 0.001)
tf2_output, endpoints = model(inputs, training=False)
# Get the regularization loss
tf2_regularization_loss = tf.math.add_n(model.losses)
print("Regularization loss:", tf2_regularization_loss)
tf2_output[0][:5]
# Create a dict of tolerance values
tol_dict={'rtol':1e-06, 'atol':1e-05}
# Verify that the regularization loss and output both match
# when we fix the weights and avoid randomness by running inference:
np.testing.assert_allclose(tf1_regularization_loss, tf2_regularization_loss.numpy(), **tol_dict)
np.testing.assert_allclose(tf1_output, tf2_output.numpy(), **tol_dict)
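# Note: if the values drift slightly (e.g. on GPU, where some reductions are
# non-deterministic), these tolerances may need to be loosened a little.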
random_tool = v1.keras.utils.DeterministicRandomTestTool()
with random_tool.scope():
graph = tf.Graph()
with graph.as_default(), tf.compat.v1.Session(graph=graph) as sess:
a = tf.random.uniform(shape=(3,1))
a = a * 3
b = tf.random.uniform(shape=(3,3))
b = b * 3
c = tf.random.uniform(shape=(3,3))
c = c * 3
graph_a, graph_b, graph_c = sess.run([a, b, c])
graph_a, graph_b, graph_c
random_tool = v1.keras.utils.DeterministicRandomTestTool()
with random_tool.scope():
a = tf.random.uniform(shape=(3,1))
a = a * 3
b = tf.random.uniform(shape=(3,3))
b = b * 3
c = tf.random.uniform(shape=(3,3))
c = c * 3
a, b, c
# Demonstrate that the generated random numbers match
np.testing.assert_allclose(graph_a, a.numpy(), **tol_dict)
np.testing.assert_allclose(graph_b, b.numpy(), **tol_dict)
np.testing.assert_allclose(graph_c, c.numpy(), **tol_dict)
np.testing.assert_allclose(b.numpy(), c.numpy(), **tol_dict)
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
graph = tf.Graph()
with graph.as_default(), tf.compat.v1.Session(graph=graph) as sess:
a = tf.random.uniform(shape=(3,1))
a = a * 3
b = tf.random.uniform(shape=(3,3))
b = b * 3
c = tf.random.uniform(shape=(3,3))
c = c * 3
graph_a, graph_b, graph_c = sess.run([a, b, c])
graph_a, graph_b, graph_c
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
a = tf.random.uniform(shape=(3,1))
a = a * 3
b = tf.random.uniform(shape=(3,3))
b = b * 3
c = tf.random.uniform(shape=(3,3))
c = c * 3
a, b, c
# Demonstrate that the generated random numbers match
np.testing.assert_allclose(graph_a, a.numpy(), **tol_dict)
np.testing.assert_allclose(graph_b, b.numpy(), **tol_dict )
np.testing.assert_allclose(graph_c, c.numpy(), **tol_dict)
# Demonstrate that with the 'num_random_ops' mode,
# b & c took on different values even though
# their generated shape was the same
assert not np.allclose(b.numpy(), c.numpy(), **tol_dict)
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
a = tf.random.uniform(shape=(3,1))
a = a * 3
b = tf.random.uniform(shape=(3,3))
b = b * 3
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
b_prime = tf.random.uniform(shape=(3,3))
b_prime = b_prime * 3
a_prime = tf.random.uniform(shape=(3,1))
a_prime = a_prime * 3
assert not np.allclose(a.numpy(), a_prime.numpy())
assert not np.allclose(b.numpy(), b_prime.numpy())
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
print(random_tool.operation_seed)
a = tf.random.uniform(shape=(3,1))
a = a * 3
print(random_tool.operation_seed)
b = tf.random.uniform(shape=(3,3))
b = b * 3
print(random_tool.operation_seed)
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
print(random_tool.operation_seed)
a = tf.random.uniform(shape=(3,1))
a = a * 3
print(random_tool.operation_seed)
b = tf.random.uniform(shape=(3,3))
b = b * 3
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
random_tool.operation_seed = 1
b_prime = tf.random.uniform(shape=(3,3))
b_prime = b_prime * 3
random_tool.operation_seed = 0
a_prime = tf.random.uniform(shape=(3,1))
a_prime = a_prime * 3
np.testing.assert_allclose(a.numpy(), a_prime.numpy(), **tol_dict)
np.testing.assert_allclose(b.numpy(), b_prime.numpy(), **tol_dict)
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
random_tool.operation_seed = 1
b_prime = tf.random.uniform(shape=(3,3))
b_prime = b_prime * 3
random_tool.operation_seed = 0
a_prime = tf.random.uniform(shape=(3,1))
a_prime = a_prime * 3
try:
c = tf.random.uniform(shape=(3,1))
raise RuntimeError("An exception should have been raised before this, " +
"because the auto-incremented operation seed will " +
"overlap an already-used value")
except ValueError as err:
print(err)
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
graph = tf.Graph()
with graph.as_default(), tf.compat.v1.Session(graph=graph) as sess:
height, width = 299, 299
num_classes = 1000
inputs = tf.ones( (1, height, width, 3))
out, endpoints = inception_resnet_v2(inputs, num_classes, is_training=False)
# Initialize the variables
sess.run(tf.compat.v1.global_variables_initializer())
# Grab the outputs & regularization loss
reg_losses = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.REGULARIZATION_LOSSES)
tf1_regularization_loss = sess.run(tf.math.add_n(reg_losses))
tf1_output = sess.run(out)
print("Regularization loss:", tf1_regularization_loss)
height, width = 299, 299
num_classes = 1000
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
model = InceptionResnetV2(num_classes)
inputs = tf.ones((1, height, width, 3))
tf2_output, endpoints = model(inputs, training=False)
# Grab the regularization loss as well
tf2_regularization_loss = tf.math.add_n(model.losses)
print("Regularization loss:", tf2_regularization_loss)
# Verify that the regularization loss and output both match
# when using the DeterministicRandomTestTool:
np.testing.assert_allclose(tf1_regularization_loss, tf2_regularization_loss.numpy(), **tol_dict)
np.testing.assert_allclose(tf1_output, tf2_output.numpy(), **tol_dict)
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
graph = tf.Graph()
with graph.as_default(), tf.compat.v1.Session(graph=graph) as sess:
height, width = 299, 299
num_classes = 1000
inputs = tf.ones( (1, height, width, 3))
out, endpoints = inception_resnet_v2(inputs, num_classes, is_training=True)
# Initialize the variables
sess.run(tf.compat.v1.global_variables_initializer())
# Grab the outputs & regularization loss
reg_losses = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.REGULARIZATION_LOSSES)
tf1_regularization_loss = sess.run(tf.math.add_n(reg_losses))
tf1_output = sess.run(out)
print("Regularization loss:", tf1_regularization_loss)
height, width = 299, 299
num_classes = 1000
random_tool = v1.keras.utils.DeterministicRandomTestTool(mode='num_random_ops')
with random_tool.scope():
model = InceptionResnetV2(num_classes)
inputs = tf.ones((1, height, width, 3))
tf2_output, endpoints = model(inputs, training=True)
# Grab the regularization loss as well
tf2_regularization_loss = tf.math.add_n(model.losses)
print("Regularization loss:", tf2_regularization_loss)
# Verify that the regularization loss and output both match
# when using the DeterministicRandomTestTool
np.testing.assert_allclose(tf1_regularization_loss, tf2_regularization_loss.numpy(), **tol_dict)
np.testing.assert_allclose(tf1_output, tf2_output.numpy(), **tol_dict)
random_tool = v1.keras.utils.DeterministicRandomTestTool()
with random_tool.scope():
graph = tf.Graph()
with graph.as_default(), tf.compat.v1.Session(graph=graph) as sess:
height, width = 299, 299
num_classes = 1000
inputs = tf.ones( (1, height, width, 3))
out, endpoints = inception_resnet_v2(inputs, num_classes, is_training=True)
# Initialize the variables
sess.run(tf.compat.v1.global_variables_initializer())
# Get the outputs & regularization losses
reg_losses = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.REGULARIZATION_LOSSES)
tf1_regularization_loss = sess.run(tf.math.add_n(reg_losses))
tf1_output = sess.run(out)
print("Regularization loss:", tf1_regularization_loss)
height, width = 299, 299
num_classes = 1000
random_tool = v1.keras.utils.DeterministicRandomTestTool()
with random_tool.scope():
keras_input = tf.keras.Input(shape=(height, width, 3))
layer = InceptionResnetV2(num_classes)
model = tf.keras.Model(inputs=keras_input, outputs=layer(keras_input))
inputs = tf.ones((1, height, width, 3))
tf2_output, endpoints = model(inputs, training=True)
# Get the regularization loss
tf2_regularization_loss = tf.math.add_n(model.losses)
print("Regularization loss:", tf2_regularization_loss)
# Verify that the regularization loss and output both match
# when using the DeterministicRandomTestTool
np.testing.assert_allclose(tf1_regularization_loss, tf2_regularization_loss.numpy(), **tol_dict)
np.testing.assert_allclose(tf1_output, tf2_output.numpy(), **tol_dict)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate test datasets
Step2: Preparing the method/parameter combinations
Step3: Generate the list of commands and run them
Step4: Next, we will vary the number of query seqs, and keep the number of ref seqs constant
Step5: Let's look at the first command in each list and the total number of commands as a sanity check...
|
<ASSISTANT_TASK:>
Python Code:
from os.path import join, expandvars
from joblib import Parallel, delayed
from tax_credit.framework_functions import (runtime_make_test_data,
runtime_make_commands,
clock_runtime,
)
## project_dir should be the directory where you've downloaded (or cloned) the
## short-read-tax-assignment repository.
project_dir = expandvars("$HOME/Desktop/projects/short-read-tax-assignment")
data_dir = join(project_dir, "data")
results_dir = expandvars("$HOME/Desktop/projects/tax-credit-runtime")
runtime_results = join(results_dir, 'runtime_results.txt')
tmpdir = join(results_dir, 'tmp')
ref_db_dir = expandvars("$HOME/Desktop/projects/short-read-tax-assignment/data/ref_dbs/")
ref_seqs = join(ref_db_dir, 'gg_13_8_otus/99_otus_clean_515f-806r_trim250.fasta')
ref_taxa = join(ref_db_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.tsv')
num_iters = 1
sampling_depths = [1, 10, 100, 1000, 10000]
runtime_make_test_data(ref_seqs, tmpdir, sampling_depths)
qiime1_template = 'source activate qiime1; source ~/.bashrc; assign_taxonomy.py -i {1} -o {0} -r {2} -t {3} -m {4} {5}'
blast_template = 'source activate qiime2-2017.2; qiime tools import --input-path {1} --output-path {1}.qza --type "FeatureData[Sequence]"; qiime tools import --input-path {2} --output-path {2}.qza --type "FeatureData[Sequence]"; qiime tools import --input-path {3} --output-path {3}.qza --type "FeatureData[Taxonomy]"; qiime feature-classifier blast --i-query {1}.qza --o-classification {0}/assign.tmp --i-reference-reads {2}.qza --i-reference-taxonomy {3}.qza {5}'
vsearch_template = 'source activate qiime2-2017.2; qiime tools import --input-path {1} --output-path {1}.qza --type "FeatureData[Sequence]"; qiime tools import --input-path {2} --output-path {2}.qza --type "FeatureData[Sequence]"; qiime tools import --input-path {3} --output-path {3}.qza --type "FeatureData[Taxonomy]"; qiime feature-classifier vsearch --i-query {1}.qza --o-classification {0}/assign.tmp --i-reference-reads {2}.qza --i-reference-taxonomy {3}.qza {5}'
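# Placeholders in the templates above (filled in, presumably, by runtime_make_commands):
# {0} output dir, {1} query seqs, {2} reference seqs, {3} reference taxonomy,
# {4} method name (QIIME 1 templates only), {5} method-specific parameters.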
# {method: template, method-specific params}
methods = {
'rdp': (qiime1_template, '--confidence 0.5 --rdp_max_memory 16000'),
'uclust': (qiime1_template, '--min_consensus_fraction 0.51 --similarity 0.8 --uclust_max_accepts 3'),
'sortmerna': (qiime1_template, '--sortmerna_e_value 0.001 --min_consensus_fraction 0.51 --similarity 0.8 '
'--sortmerna_best_N_alignments 3 --sortmerna_coverage 0.8'),
'blast' : (qiime1_template, '-e 0.001'),
'blast+' : (blast_template, '--p-evalue 0.001'),
'vsearch' : (vsearch_template, '--p-min-id 0.90')
}
commands_a = runtime_make_commands(tmpdir, tmpdir, methods, ref_taxa,
sampling_depths, num_iters=1, subsample_ref=True)
commands_b = runtime_make_commands(tmpdir, tmpdir, methods, ref_taxa,
sampling_depths, num_iters=1, subsample_ref=False)
print(len(commands_a + commands_b))
print(commands_a[0])
print(commands_b[4])
Parallel(n_jobs=4)(delayed(clock_runtime)(command, runtime_results, force=False) for command in (list(set(commands_a + commands_b))))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Solar Abundances and NIR Broadband Photometric Colors
Step2: Now we'll need to load appropriate stellar evolution isochrones with bolometric corrections derived from MARCS model atmospheres. MARCS uses Grevesse et al. (2007), but differences between colors derived from GAS07, AGSS09, and CIFIST11 are negligible.
Step3: We've loaded isochrones with both Grevesse & Sauval (1998) and Grevesse et al. (2007) solar abundance distributions. The former is similar to Anders & Grevesse (1989) and Grevesse & Noels (1993) distributions in that the solar surface $(Z/X) = 0.023$ – $0.025$. On the other hand, the Grevesse et al. (2007) distribution has a characteristic $(Z/X) = 0.014$.
Step4: Now CMDs against $M_J$
Step5: And finally, against $M_H$
Step6: Spot Temperatures on AGB Stars
Step7: Energy equipartition dictates that the temperature ratio of a spot to its surroundings is roughly equal to $0.4^{(\gamma - 1)/\gamma}$. This leads to the following spot temperatures,
Step8: Temperature contrasts are on the order of 86 – 87%, meaning spots are only slightly cooler than the surrounding photosphere.
Step9: These values of the magnetic field strength are two orders of magnitude larger than observed magnetic field strengths. Are they dynamically important? At the risk of over-relying on simple theories, I'll adopt the convective velocities from mixing length theory.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Pleiades_NIR = np.genfromtxt('../Projects/pleiades_colors/data/Stauffer_Pleiades_nir.dat',
usecols=(2, 3, 5, 6, 8, 9)) # J, errJ, H, errH, K, errK
dist_mod = 5.62 # distance modulus for the Pleiades
iso_gs98 = np.genfromtxt('../Projects/pleiades_colors/data/dmestar_00120.0myr_z+0.00_a+0.00_phx.iso')
iso_gas07 = np.genfromtxt('../Projects/pleiades_colors/data/dmestar_00120.0myr_z+0.00_a+0.00_marcs.iso')
fig, ax = plt.subplots(1, 3, figsize=(15., 8.), sharey=True)
ax[0].set_ylabel('$M_K$', fontsize=22.)
for axis in ax:
axis.grid(True)
axis.set_ylim(9., 3.)
axis.tick_params(which='major', axis='both', length=15., labelsize=16.)
# (J-K)
ax[0].set_xlim(0.2, 1.2)
ax[0].set_xlabel('$(J - K)$', fontsize=22.)
ax[0].plot(Pleiades_NIR[:,0] - Pleiades_NIR[:,4] - 0.02, Pleiades_NIR[:,4] - 0.01 - dist_mod, 'o', c='#555555',
markersize=4.0, alpha=0.1)
ax[0].plot(iso_gs98[:,10] - iso_gs98[:,12], iso_gs98[:,12], dashes=(20., 5.), lw=3, c='#B22222')
ax[0].plot(iso_gas07[:,11] - iso_gas07[:,13], iso_gas07[:,13], lw=3, c='#0094b2')
# (J-H)
ax[1].set_xlim(0.0, 1.0)
ax[1].set_xlabel('$(J - H)$', fontsize=22.)
ax[1].plot(Pleiades_NIR[:,0] - Pleiades_NIR[:,2] - 0.01, Pleiades_NIR[:,4] - 0.01 - dist_mod, 'o', c='#555555',
markersize=4.0, alpha=0.1)
ax[1].plot(iso_gs98[:,10] - iso_gs98[:,11], iso_gs98[:,12], dashes=(20., 5.), lw=3, c='#B22222')
ax[1].plot(iso_gas07[:,11] - iso_gas07[:,12], iso_gas07[:,13], lw=3, c='#0094b2')
# (H-K)
ax[2].set_xlim(0.0, 0.5)
ax[2].set_xlabel('$(H - K)$', fontsize=22.)
ax[2].plot(Pleiades_NIR[:,2] - Pleiades_NIR[:,4] - 0.01, Pleiades_NIR[:,4] - 0.01 - dist_mod, 'o', c='#555555',
markersize=4.0, alpha=0.1)
ax[2].plot(iso_gs98[:,11] - iso_gs98[:,12], iso_gs98[:,12], dashes=(20., 5.), lw=3, c='#B22222')
ax[2].plot(iso_gas07[:,12] - iso_gas07[:,13], iso_gas07[:,13], lw=3, c='#0094b2')
fig, ax = plt.subplots(1, 3, figsize=(15., 8.), sharey=True)
ax[0].set_ylabel('$M_J$', fontsize=22.)
for axis in ax:
axis.grid(True)
axis.set_ylim(10., 3.)
axis.tick_params(which='major', axis='both', length=15., labelsize=16.)
# (J-K)
ax[0].set_xlim(0.2, 1.2)
ax[0].set_xlabel('$(J - K)$', fontsize=22.)
ax[0].plot(Pleiades_NIR[:,0] - Pleiades_NIR[:,4] - 0.02, Pleiades_NIR[:,0] - 0.01 - dist_mod, 'o', c='#555555',
markersize=4.0, alpha=0.1)
ax[0].plot(iso_gs98[:,10] - iso_gs98[:,12], iso_gs98[:,10], dashes=(20., 5.), lw=3, c='#B22222')
ax[0].plot(iso_gas07[:,11] - iso_gas07[:,13], iso_gas07[:,11], lw=3, c='#0094b2')
# (J-H)
ax[1].set_xlim(0.0, 1.0)
ax[1].set_xlabel('$(J - H)$', fontsize=22.)
ax[1].plot(Pleiades_NIR[:,0] - Pleiades_NIR[:,2] - 0.01, Pleiades_NIR[:,0] - 0.01 - dist_mod, 'o', c='#555555',
markersize=4.0, alpha=0.1)
ax[1].plot(iso_gs98[:,10] - iso_gs98[:,11], iso_gs98[:,10], dashes=(20., 5.), lw=3, c='#B22222')
ax[1].plot(iso_gas07[:,11] - iso_gas07[:,12], iso_gas07[:,11], lw=3, c='#0094b2')
# (H-K)
ax[2].set_xlim(0.0, 0.5)
ax[2].set_xlabel('$(H - K)$', fontsize=22.)
ax[2].plot(Pleiades_NIR[:,2] - Pleiades_NIR[:,4] - 0.01, Pleiades_NIR[:,0] - 0.01 - dist_mod, 'o', c='#555555',
markersize=4.0, alpha=0.1)
ax[2].plot(iso_gs98[:,11] - iso_gs98[:,12], iso_gs98[:,10], dashes=(20., 5.), lw=3, c='#B22222')
ax[2].plot(iso_gas07[:,12] - iso_gas07[:,13], iso_gas07[:,11], lw=3, c='#0094b2')
fig, ax = plt.subplots(1, 3, figsize=(15., 8.), sharey=True)
ax[0].set_ylabel('$M_H$', fontsize=22.)
for axis in ax:
axis.grid(True)
axis.set_ylim(10., 3.)
axis.tick_params(which='major', axis='both', length=15., labelsize=16.)
# (J-K)
ax[0].set_xlim(0.2, 1.2)
ax[0].set_xlabel('$(J - K)$', fontsize=22.)
ax[0].plot(Pleiades_NIR[:,0] - Pleiades_NIR[:,4] - 0.02, Pleiades_NIR[:,2] - 0.01 - dist_mod, 'o', c='#555555',
markersize=4.0, alpha=0.1)
ax[0].plot(iso_gs98[:,10] - iso_gs98[:,12], iso_gs98[:,11], dashes=(20., 5.), lw=3, c='#B22222')
ax[0].plot(iso_gas07[:,11] - iso_gas07[:,13], iso_gas07[:,12], lw=3, c='#0094b2')
# (J-H)
ax[1].set_xlim(0.0, 1.0)
ax[1].set_xlabel('$(J - H)$', fontsize=22.)
ax[1].plot(Pleiades_NIR[:,0] - Pleiades_NIR[:,2] - 0.01, Pleiades_NIR[:,2] - 0.01 - dist_mod, 'o', c='#555555',
markersize=4.0, alpha=0.1)
ax[1].plot(iso_gs98[:,10] - iso_gs98[:,11], iso_gs98[:,11], dashes=(20., 5.), lw=3, c='#B22222')
ax[1].plot(iso_gas07[:,11] - iso_gas07[:,12], iso_gas07[:,12], lw=3, c='#0094b2')
# (H-K)
ax[2].set_xlim(0.0, 0.5)
ax[2].set_xlabel('$(H - K)$', fontsize=22.)
ax[2].plot(Pleiades_NIR[:,2] - Pleiades_NIR[:,4] - 0.01, Pleiades_NIR[:,2] - 0.01 - dist_mod, 'o', c='#555555',
markersize=4.0, alpha=0.1)
ax[2].plot(iso_gs98[:,11] - iso_gs98[:,12], iso_gs98[:,11], dashes=(20., 5.), lw=3, c='#B22222')
ax[2].plot(iso_gas07[:,12] - iso_gas07[:,13], iso_gas07[:,12], lw=3, c='#0094b2')
# log(g) = -0.5; [m/H] = 0.0
Teff = np.array([2500., 2700., 2800.])
Ttau = np.array([2850., 3000., 3200.])
Peff = np.array([5.0e1, 1.3e2, 1.7e2])
Ptau = np.array([3.8e2, 4.7e2, 5.5e2])
Rtau = np.array([1.41e-9, 2.33e-9, 2.61e-9])
gammas = (np.log(Ptau) - np.log(1.0e13))/np.log(Rtau)
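# Effective adiabatic exponent inferred from P = K * rho**gamma at the tau
# surface, assuming K = 1e13 (cgs): gamma = (ln P - ln K) / ln rho.
print(gammas)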
fig, ax = plt.subplots(1, 1, figsize=(8,4))
ax.set_xlabel('Effective Temperature (K)', fontsize=20.)
ax.set_ylabel('Spot Temperature (K)', fontsize=20.)
ax.tick_params(which='major', axis='both', length=10., labelsize=16.)
ax.grid(True)
ax.plot(Teff, Ttau*0.4**((gammas - 1)/gammas), '--o', c="#b22222", markersize=10.)
fig, ax = plt.subplots(1, 1, figsize=(8,4))
ax.set_xlabel('Effective Temperature (K)', fontsize=20.)
ax.set_ylabel('Equipartition Strength (G)', fontsize=20.)
ax.tick_params(which='major', axis='both', length=10., labelsize=16.)
ax.grid(True)
ax.plot(Teff, np.sqrt(8.0*np.pi*Ptau), '--o', c="#b22222", markersize=10.)
# arbitrary velocity values; rho ~ 10**-9
velocities = np.arange(1.0, 5.0, 0.5) # log-scale
# B-fields to get V_A = 0.1 V_C
B_010 = np.sqrt(4.0e-9*np.pi*(0.1*10**velocities)**2)
B_100 = np.sqrt(4.0e-9*np.pi*(10**velocities)**2)
print(B_100)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h1> Using the workload manager SLURM </h1>
Step2: <b> We can see here that the number of processes is the number of cores your computer possesses. <br/> Your computer can be seen as a node that possesses X cores </b>
Step3: <h2>The correlation image</h2>
Step4: CNMFSetParms defines dictionaries of CNMF parameters.
Step5: <h2> Preprocessing of the data and initialization of the components </h2>
Step6: <h2> HALS </h2>
Step7: <h1> CNMF process </h1>
Step8: <h2> Update temporal </h2>
Step9: <h2> Merging components </h2>
Step10: A refining step
Step11: <h1>DISCARD LOW QUALITY COMPONENTS</h1>
Step12: accepted components
Step13: discarded components
|
<ASSISTANT_TASK:>
Python Code:
try:
if __IPYTHON__:
# this is used for debugging purposes only. allows to reload classes when changed
get_ipython().magic(u'load_ext autoreload')
get_ipython().magic(u'autoreload 2')
except NameError:
print('Not IPYTHON')
pass
import sys
import numpy as np
from time import time
from scipy.sparse import coo_matrix
import psutil
import glob
import os
import scipy
from ipyparallel import Client
import pylab as pl
import caiman as cm
from caiman.components_evaluation import evaluate_components
from caiman.utils.visualization import plot_contours,view_patches_bar,nb_plot_contour,nb_view_patches
from caiman.base.rois import extract_binary_masks_blob
import caiman.source_extraction.cnmf as cnmf
from caiman.utils.utils import download_demo
#import bokeh.plotting as bp
import bokeh.plotting as bpl
try:
from bokeh.io import vform, hplot
except:
# newer version of bokeh does not use vform & hplot, instead uses column & row
from bokeh.layouts import column as vform
from bokeh.layouts import row as hplot
from bokeh.models import CustomJS, ColumnDataSource, Slider
from IPython.display import display, clear_output
import matplotlib as mpl
import matplotlib.cm as cmap
import numpy as np
bpl.output_notebook()
# frame rate in Hz
final_frate=10
#backend='SLURM'
backend='local'
if backend == 'SLURM':
n_processes = np.int(os.environ.get('SLURM_NPROCS'))
else:
# roughly number of cores on your machine minus 1
n_processes = np.maximum(np.int(psutil.cpu_count()),1)
print('using ' + str(n_processes) + ' processes')
#%% start cluster for efficient computation
single_thread=False
if single_thread:
dview=None
else:
try:
c.close()
except:
print('C was not existing, creating one')
print("Stopping cluster to avoid unnencessary use of memory....")
sys.stdout.flush()
if backend == 'SLURM':
try:
cm.stop_server(is_slurm=True)
except:
print('Nothing to stop')
slurm_script='/mnt/xfs1/home/agiovann/SOFTWARE/Constrained_NMF/SLURM/slurmStart.sh'
cm.start_server(slurm_script=slurm_script)
pdir, profile = os.environ['IPPPDIR'], os.environ['IPPPROFILE']
c = Client(ipython_dir=pdir, profile=profile)
else:
cm.stop_server()
cm.start_server()
c=Client()
print('Using '+ str(len(c)) + ' processes')
dview=c[:len(c)]
#%% FOR LOADING ALL TIFF FILES IN A FILE AND SAVING THEM ON A SINGLE MEMORY MAPPABLE FILE
fnames=['demoMovieJ.tif']
base_folder='./example_movies/' # folder containing the demo files
# %% download movie if not there
if fnames[0] in ['Sue_2x_3000_40_-46.tif','demoMovieJ.tif']:
download_demo(fnames[0])
fnames = [os.path.join('example_movies',fnames[0])]
m_orig = cm.load_movie_chain(fnames[:1])
downsample_factor=1 # use .2 or .1 if file is large and you want a quick answer
final_frate=final_frate*downsample_factor
name_new=cm.save_memmap_each(fnames
, dview=dview,base_name='Yr', resize_fact=(1, 1, downsample_factor)
, remove_init=0,idx_xy=None )
name_new.sort()
fname_new=cm.save_memmap_join(name_new,base_name='Yr', n_chunks=12, dview=dview)
print(fnames)
print(fname_new)
print ("\n we can see we are loading the file (line1) into a memorymapped object (line2)")
Yr,dims,T=cm.load_memmap(fname_new)
Y=np.reshape(Yr,dims+(T,),order='F')
#%% visualize correlation image
Cn = cm.local_correlations(Y)
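# The local correlation image gives, for each pixel, the average correlation
# with its neighbouring pixels; active neurons stand out as bright blobs.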
pl.imshow(Cn,cmap='gray')
pl.show()
K=30 # number of neurons expected per patch
gSig=[6,6] # expected half size of neurons
merge_thresh=0.8 # merging threshold, max correlation allowed
p=2 #order of the autoregressive system
options = cnmf.utilities.CNMFSetParms(Y
,n_processes,p=p,gSig=gSig,K=K,ssub=2,tsub=2, normalize_init=True)
Yr,sn,g,psx = cnmf.pre_processing.preprocess_data(Yr
,dview=dview
,n_pixels_per_process=100, noise_range = [0.25,0.5]
,noise_method = 'logmexp', compute_g=False, p = 2,
lags = 5, include_noise = False, pixels = None
,max_num_samples_fft=3000, check_nan = True)
Ain, Cin, b_in, f_in, center=cnmf.initialization.initialize_components(Y
,K=30, gSig=[5, 5], gSiz=None, ssub=1, tsub=1, nIter=5, maxIter=5, nb=1
, use_hals=False, normalize_init=True, img=None, method='greedy_roi'
, max_iter_snmf=500, alpha_snmf=10e2, sigma_smooth_snmf=(.5, .5, .5)
, perc_baseline_snmf=20)
p1=nb_plot_contour(Cn,Ain,dims[0],dims[1],thr=0.9,face_color=None
, line_color='black',alpha=0.4,line_width=2)
bpl.show(p1)
Ain, Cin, b_in, f_in = cnmf.initialization.hals(Y, Ain, Cin, b_in, f_in, maxIter=5)
p1=nb_plot_contour(Cn,Ain,dims[0],dims[1],thr=0.9,face_color=None
, line_color='black',alpha=0.4,line_width=2)
bpl.show(p1)
options['spatial_params']['n_pixels_per_process'] = 2000
A,b,Cin,f_in = cnmf.spatial.update_spatial_components(Yr, Cin, f_in, Ain, sn=sn, dview=dview,**options['spatial_params'])
p1=nb_plot_contour(Cn,A.todense(),dims[0],dims[1],thr=0.9,face_color=None,
line_color='black',alpha=0.4,line_width=2)
bpl.show(p1)
options['temporal_params']['block_size'] = 2000
options['temporal_params']['p'] = 0 # fast updating without deconvolution
C,A,b,f,S,bl,c1,neurons_sn,g,YrA,lam = cnmf.temporal.update_temporal_components(
Yr,A,b,Cin,f_in,bl=None,c1=None,sn=None,g=None,**options['temporal_params'])
clear_output(wait=True)
A_m,C_m,nr_m,merged_ROIs,S_m,bl_m,c1_m,sn_m,g_m=cnmf.merging.merge_components(
Yr,A,b,C,f,S,sn,options['temporal_params'], options['spatial_params'],
dview=dview, bl=bl, c1=c1, sn=neurons_sn, g=g, thr=merge_thresh,
mx=50, fast_merge = True)
A2,b2,C2,f = cnmf.spatial.update_spatial_components(Yr, C_m, f, A_m,
sn=sn,dview=dview, **options['spatial_params'])
options['temporal_params']['p'] = p # set it back to perform full deconvolution
C2,A2,b2,f2,S2,bl2,c12,neurons_sn2,g21,YrA, lam = cnmf.temporal.update_temporal_components(
Yr,A2,b2,C2,f,dview=dview, bl=None,c1=None,sn=None,g=None,**options['temporal_params'])
clear_output(wait=True)
#evaluation
fitness_raw, fitness_delta, erfc_raw,erfc_delta, r_values, significant_samples = evaluate_components(Y, C2+YrA, A2, C2, b2, f2, final_frate,
remove_baseline=True,N=5, robust_std=False,
Athresh=0.1, Npeaks=10, thresh_C=0.3)
#different thresholding ( needs to pass at least one of them )
traces = C2 + YrA
idx_components_r=np.where(r_values>=.6)[0]
idx_components_raw=np.where(fitness_raw<-60)[0]
idx_components_delta=np.where(fitness_delta<-20)[0]
#merging to have all that have passed at least one threshold.
idx_components=np.union1d(idx_components_r,idx_components_raw)
idx_components=np.union1d(idx_components,idx_components_delta)
#finding the bad components
idx_components_bad=np.setdiff1d(range(len(traces)),idx_components)
clear_output(wait=True)
print(' ***** ')
print(len(traces))
print(len(idx_components))
fg=pl.figure(figsize=(12,20))
pl.subplot(1,2,1)
crd = plot_contours(A2.tocsc()[:,idx_components],Cn,thr=0.9)
pl.subplot(1,2,2)
crd = plot_contours(A2.tocsc()[:,idx_components_bad],Cn,thr=0.9)
p2=nb_plot_contour(Cn,A2.tocsc()[:,idx_components].todense(),dims[0],dims[1],thr=0.9,face_color='purple', line_color='black',alpha=0.3,line_width=2)
bpl.show(p2)
discard_traces_fluo=nb_view_patches(Yr,A2.tocsc()[:,idx_components],C2[idx_components],b2,f2,dims[0],dims[1],thr = 0.8,image_neurons=Cn, denoised_color='red')
discard_traces_fluo=nb_view_patches(Yr,A2.tocsc()[:,idx_components_bad],C2[idx_components_bad],b2,f2,dims[0],dims[1],thr = 0.8,image_neurons=Cn, denoised_color='red')
cm.stop_server()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Process blastp results
Step2: 2.2 Annotate blast hits with orf stats
Step3: 2.3 Extract best hit for each ORF (q_cov > 80%, pct_id > 40%, and e-value < 1)
Step4: 2.4 Extract selected orfs for further analysis
Step5: 2.4.2 Extract fasta
Step6: 2.4.3 Write out filtered blast hits
|
<ASSISTANT_TASK:>
Python Code:
#Load blast hits
blastp_hits = pd.read_csv("2_blastp_hits.csv")
blastp_hits.head()
#Filter out Metahit 2010 hits, keep only Metahit 2014
blastp_hits = blastp_hits[blastp_hits.db != "metahit_pep"]
#Assumes the Fasta file comes with the header format of EMBOSS getorf
fh = open("1_orf/d9539_asm_v1.2_orf.fa")
header_regex = re.compile(r">([^ ]+?) \[([0-9]+) - ([0-9]+)\]")
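# Example headers matched by this pattern (hypothetical names):
# '>contig1_3 [25 - 411]' for forward ORFs and
# '>contig1_7 [411 - 25] (REVERSE SENSE)' for reverse-strand ORFs.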
orf_stats = []
for line in fh:
header_match = header_regex.match(line)
if header_match:
is_reverse = line.rstrip(" \n").endswith("(REVERSE SENSE)")
q_id = header_match.group(1)
#Position in contig
q_cds_start = int(header_match.group(2) if not is_reverse else header_match.group(3))
q_cds_end = int(header_match.group(3) if not is_reverse else header_match.group(2))
#Length of orf in aminoacids
q_len = (q_cds_end - q_cds_start + 1) / 3
orf_stats.append( pd.Series(data=[q_id,q_len,q_cds_start,q_cds_end,("-" if is_reverse else "+")],
index=["q_id","orf_len","q_cds_start","q_cds_end","strand"]))
orf_stats_df = pd.DataFrame(orf_stats)
print(orf_stats_df.shape)
orf_stats_df.head()
#Write orf stats to csv
orf_stats_df.to_csv("1_orf/orf_stats.csv",index=False)
blastp_hits_annot = blastp_hits.merge(orf_stats_df,left_on="query_id",right_on="q_id")
#Add query coverage calculation
blastp_hits_annot["q_cov_calc"] = (blastp_hits_annot["q_end"] - blastp_hits_annot["q_start"] + 1 ) * 100 / blastp_hits_annot["q_len"]
blastp_hits_annot.sort_values(by="bitscore",ascending=False).head()
assert blastp_hits_annot.shape[0] == blastp_hits.shape[0]
! mkdir -p 4_msa_prots
#Get best hit (highest bitscore) for each ORF
gb = blastp_hits_annot[ (blastp_hits_annot.q_cov > 80) & (blastp_hits_annot.pct_id > 40) & (blastp_hits_annot.e_value < 1) ].groupby("query_id")
reliable_orfs = pd.DataFrame( hits.ix[hits.bitscore.idxmax()] for q_id,hits in gb )[["query_id","db","subject_id","pct_id","q_cov","q_len",
"bitscore","e_value","strand","q_cds_start","q_cds_end"]]
reliable_orfs = reliable_orfs.sort_values(by="q_cds_start",ascending=True)
reliable_orfs
reliable_orfs["orf_id"] = ["orf{}".format(x) for x in range(1,reliable_orfs.shape[0]+1) ]
reliable_orfs["cds_len"] = reliable_orfs["q_cds_end"] - reliable_orfs["q_cds_start"] +1
reliable_orfs.sort_values(by="q_cds_start",ascending=True).to_csv("3_filtered_orfs/filt_orf_stats.csv",index=False,header=True)
reliable_orfs.sort_values(by="q_cds_start",ascending=True).to_csv("3_filtered_orfs/filt_orf_list.txt",index=False,header=False,columns=["query_id"])
! ~/utils/bin/seqtk subseq 1_orf/d9539_asm_v1.2_orf.fa 3_filtered_orfs/filt_orf_list.txt > 3_filtered_orfs/d9539_asm_v1.2_orf_filt.fa
filt_blastp_hits = blastp_hits_annot[ blastp_hits_annot.query_id.apply(lambda x: x in reliable_orfs.query_id.tolist())]
filt_blastp_hits.to_csv("3_filtered_orfs/d9539_asm_v1.2_orf_filt_blastp.csv")
filt_blastp_hits.head()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To get access to NumPy and matplotlib, IPython's %pylab magic command is quite useful. With the inline argument, all plots will be made a part of the notebook.
Step2: Next, we need to import from Qinfer.
Step3: Analytic Solution
Step4: Sequential Monte Carlo
Step5: Let's make a model to play with, using the prior $p \sim \mathrm{Uni}(0, 1)$.
Step6: We need to allocate an array to hold performance data. A record array is a rather convenient structure for doing so. First, let's define the fields in this array,
Step7: ... and then the array itself.
Step8: Now, we run the experiments!
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division, print_function
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
try: plt.style.use('ggplot')
except: pass
# We need distributions to model priors.
from qinfer import distributions
# The noisy coin model has already been implmented, so let's import it here.
from qinfer.test_models import NoisyCoinModel
# Next, we need to import the sequential Monte Carlo updater class.
from qinfer.smc import SMCUpdater
# We'll be demonstrating approximate likelihood evaluation (ALE) as well.
from qinfer import ale
import time
from scipy.special import betaln, gammaln
def exactBME(k, K, a, b, gamma=1):
idx_k = np.arange(k+1)
idx_K = np.arange(K-k+1)[np.newaxis].transpose()
numerator = (
gammaln(k+1) - gammaln(idx_k+1) - gammaln(k-idx_k+1) + gammaln(K-k+1) -
gammaln(idx_K+1) - gammaln(K-k-idx_K+1) + (idx_k+idx_K)*np.log(a-b) +
(k-idx_k)*np.log(b) + (K-k-idx_K)*np.log(1-a) +
betaln(idx_k+gamma+1,idx_K+gamma)
)
denominator = (
gammaln(k+1) - gammaln(idx_k+1) - gammaln(k-idx_k+1) + gammaln(K-k+1) -
gammaln(idx_K+1) - gammaln(K-k-idx_K+1) + (idx_k+idx_K)*np.log(a-b) +
(k-idx_k)*np.log(b) + (K-k-idx_K)*np.log(1-a) +
betaln(idx_k+gamma,idx_K+gamma)
)
bme = np.sum(np.exp(numerator))/np.sum(np.exp(denominator))
var = np.sum(np.exp(
numerator - betaln(idx_k+gamma+1,idx_K+gamma) +
betaln(idx_k + gamma + 2, idx_K + gamma)
)) / np.sum(np.exp(denominator)) - bme ** 2
return bme, var
N_PARTICLES = 5000
N_EXP = 250
N_TRIALS = 100
prior = distributions.UniformDistribution([0, 1])
model = NoisyCoinModel()
performance_dtype = [
('outcome', 'i1'),
('est_mean', 'f8'), ('est_cov_mat', 'f8'),
('true_err', 'f8'), ('resample_count', 'i8'),
('elapsed_time', 'f8'),
('like_count', 'i8'), ('sim_count', 'i8'),
('bme', 'f8'),
('var', 'f8'),
('bme_err', 'f8')
]
performance = np.empty((N_TRIALS, N_EXP), dtype=performance_dtype)
true_params = np.empty((N_TRIALS, model.n_modelparams))
ALPHA = 0.1
BETA = 0.8
expparams = np.array([(ALPHA, BETA)], dtype=model.expparams_dtype)
for idx_trial in range(N_TRIALS):
# First, make new updaters using the constructors
# defined above.
updater = SMCUpdater(model, N_PARTICLES, prior)
# Sample true set of modelparams.
truemp = prior.sample()
true_params[idx_trial, :] = truemp
# Now loop over experiments, updating each of the
# updaters with the same data, so that we can compare
# their estimation performance.
for idx_exp in range(N_EXP):
# Make a short hand for indexing the current simulation
# and experiment.
idxs = np.s_[idx_trial, idx_exp]
# Start by simulating and recording the data.
outcome = model.simulate_experiment(truemp, expparams)
performance['outcome'][idxs] = outcome
# Reset the like_count and sim_count
# properties so that we can count how many were used
# by this update. Note that this is a hack;
# an appropriate method should be added to
# Simulatable.
model._sim_count = 0
model._call_count = 0
# Time the actual update.
tic = toc = None
tic = time.time()
updater.update(outcome, expparams)
performance[idxs]['elapsed_time'] = time.time() - tic
# Record the performance of this updater.
est_mean = updater.est_mean()
performance[idxs]['est_mean'] = est_mean
performance[idxs]['true_err'] = np.abs(est_mean - truemp) ** 2
performance[idxs]['est_cov_mat'] = updater.est_covariance_mtx()
performance[idxs]['resample_count'] = updater.resample_count
performance[idxs]['like_count'] = model.call_count
performance[idxs]['sim_count'] = model.sim_count
# Finally, record the ideal stats.
performance[idxs]['bme'], performance[idxs]['var'] = exactBME(
idx_exp + 1 - np.sum(performance[idxs]['outcome']), idx_exp + 1,
ALPHA, BETA
)
performance[idxs]['bme_err'] = np.abs(performance[idxs]['bme'] - truemp) ** 2
plt.semilogy(np.mean(performance['true_err'], axis=0))
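# The curve shows the squared error of the SMC mean estimate, averaged over the
# N_TRIALS runs, as a function of the number of measurements (log scale in y).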
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: About the data
Step2: Note that the result of the loading process is an
Step3: Visualize events
Step4: As it can be seen above, conditions are coded as Face/A and Face/B.
Step5: Now let's take a closer look at the information in the epochs
Step6: The first column of the summary table above provides more or less the same
Step7: We can also compute the difference wave contrasting Face A and Face B.
Step8: As expected, no clear pattern appears when contrasting
Step9: We do see a difference between Face A and B, but it is pretty small.
Step10: As shown above, there are some considerable differences between the
Step11: Define predictor variables and design matrix
Step12: Now we can set up the linear model to be used in the analysis using
Step13: Extract regression coefficients
Step14: Plot model results
Step15: We can also plot the corresponding T values.
Step16: Conversely, there appears to be no (or very small) systematic effects when
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Jose C. Garcia Alanis <alanis.jcg@gmail.com>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from mne.datasets.limo import load_data
from mne.stats import linear_regression
from mne.viz import plot_events, plot_compare_evokeds
from mne import combine_evoked
print(__doc__)
# subject to use
subj = 1
# This step can take a little while if you're loading the data for the
# first time.
limo_epochs = load_data(subject=subj)
print(limo_epochs)
fig = plot_events(limo_epochs.events, event_id=limo_epochs.event_id)
fig.suptitle("Distribution of events in LIMO epochs")
print(limo_epochs.metadata.head())
# We want include all columns in the summary table
epochs_summary = limo_epochs.metadata.describe(include='all').round(3)
print(epochs_summary)
# only show -250 to 500 ms
ts_args = dict(xlim=(-0.25, 0.5))
# plot evoked response for face A
limo_epochs['Face/A'].average().plot_joint(times=[0.15],
title='Evoked response: Face A',
ts_args=ts_args)
# and face B
limo_epochs['Face/B'].average().plot_joint(times=[0.15],
title='Evoked response: Face B',
ts_args=ts_args)
# Face A minus Face B
difference_wave = combine_evoked([limo_epochs['Face/A'].average(),
-limo_epochs['Face/B'].average()],
weights='equal')
# plot difference wave
difference_wave.plot_joint(times=[0.15], title='Difference Face A - Face B')
# Create a dictionary containing the evoked responses
conditions = ["Face/A", "Face/B"]
evokeds = {condition: limo_epochs[condition].average()
for condition in conditions}
# concentrate analysis an occipital electrodes (e.g. B11)
pick = evokeds["Face/A"].ch_names.index('B11')
# compare evoked responses
plot_compare_evokeds(evokeds, picks=pick, ylim=dict(eeg=(-15, 7.5)))
phase_coh = limo_epochs.metadata['phase-coherence']
# get levels of phase coherence
levels = sorted(phase_coh.unique())
# create labels for levels of phase coherence (i.e., 0 - 85%)
labels = ["{0:.2f}".format(i) for i in np.arange(0., 0.90, 0.05)]
# create dict of evokeds for each level of phase-coherence
evokeds = {label: limo_epochs[phase_coh == level].average()
for level, label in zip(levels, labels)}
# pick channel to plot
electrodes = ['C22', 'B11']
# create figures
for electrode in electrodes:
fig, ax = plt.subplots(figsize=(8, 4))
plot_compare_evokeds(evokeds,
axes=ax,
ylim=dict(eeg=(-20, 15)),
picks=electrode,
cmap=("Phase coherence", "magma"))
limo_epochs.interpolate_bads(reset_bads=True)
limo_epochs.drop_channels(['EXG1', 'EXG2', 'EXG3', 'EXG4'])
# name of predictors + intercept
predictor_vars = ['face a - face b', 'phase-coherence', 'intercept']
# create design matrix
design = limo_epochs.metadata[['phase-coherence', 'face']].copy()
design['face a - face b'] = np.where(design['face'] == 'A', 1, -1)
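# Effect coding: Face A -> +1, Face B -> -1, so this predictor's beta equals
# half of the Face A minus Face B difference at each sensor and time point.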
design['intercept'] = 1
design = design[predictor_vars]
reg = linear_regression(limo_epochs,
design_matrix=design,
names=predictor_vars)
print('predictors are:', list(reg))
print('fields are:', [field for field in getattr(reg['intercept'], '_fields')])
reg['phase-coherence'].beta.plot_joint(ts_args=ts_args,
title='Effect of Phase-coherence',
times=[0.23])
# use unit=False and scale=1 to keep values at their original
# scale (i.e., avoid conversion to micro-volt).
ts_args = dict(xlim=(-0.25, 0.5),
unit=False)
topomap_args = dict(scalings=dict(eeg=1),
average=0.05)
fig = reg['phase-coherence'].t_val.plot_joint(ts_args=ts_args,
topomap_args=topomap_args,
times=[0.23])
fig.axes[0].set_ylabel('T-value')
ts_args = dict(xlim=(-0.25, 0.5))
reg['face a - face b'].beta.plot_joint(ts_args=ts_args,
title='Effect of Face A vs. Face B',
times=[0.23])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Al-Fe (Heat Capacity and Degree of Ordering)
Step2: We also compute degree of ordering at fixed temperature as a function of composition.
Step3: Plots
Step4: For the heat capacity curve shown below we notice a sharp increase in the heat capacity around 750 K. This is indicative of a magnetic phase transition and, indeed, the temperature at the peak of the curve coincides with 75% of 1043 K, the Curie temperature of pure Fe. (Pure bcc Al is paramagnetic so it has an effective Curie temperature of 0 K.)
Step5: To understand more about what's happening around 700 K, we plot the degree of ordering versus composition. Note that this plot excludes all phases except B2_BCC. We observe the presence of disordered bcc (A2) until around 13% Al or Fe, when the phase begins to order.
Step6: Al-Ni (Degree of Ordering)
Step7: Plots
Step8: In the plot below we see that the degree of ordering does not change at all in each phase. There is a very abrupt disappearance of the completely ordered gamma-prime phase, leaving the completely disordered gamma phase. This is a first-order phase transition.
|
<ASSISTANT_TASK:>
Python Code:
# Only needed in a Jupyter Notebook
%matplotlib inline
# Optional plot styling
import matplotlib
matplotlib.style.use('bmh')
import matplotlib.pyplot as plt
from pycalphad import equilibrium
from pycalphad import Database, Model
import pycalphad.variables as v
import numpy as np
db = Database('alfe_sei.TDB')
my_phases = ['LIQUID', 'B2_BCC']
eq = equilibrium(db, ['AL', 'FE', 'VA'], my_phases, {v.X('AL'): 0.25, v.T: (300, 2000, 50), v.P: 101325},
output=['heat_capacity', 'degree_of_ordering'])
print(eq)
eq2 = equilibrium(db, ['AL', 'FE', 'VA'], 'B2_BCC', {v.X('AL'): (0,1,0.01), v.T: 700, v.P: 101325},
output='degree_of_ordering')
print(eq2)
plt.gca().set_title('Al-Fe: Degree of bcc ordering vs T [X(AL)=0.25]')
plt.gca().set_xlabel('Temperature (K)')
plt.gca().set_ylabel('Degree of ordering')
plt.gca().set_ylim((-0.1,1.1))
# Generate a list of all indices where B2 is stable
phase_indices = np.nonzero(eq.Phase.values == 'B2_BCC')
# phase_indices[2] refers to all temperature indices
# We know this because pycalphad always returns indices in order like P, T, X's
plt.plot(np.take(eq['T'].values, phase_indices[2]), eq['degree_of_ordering'].values[phase_indices])
plt.show()
plt.gca().set_title('Al-Fe: Heat capacity vs T [X(AL)=0.25]')
plt.gca().set_xlabel('Temperature (K)')
plt.gca().set_ylabel('Heat Capacity (J/mol-atom-K)')
# np.squeeze is used to remove all dimensions of size 1
# For a 1-D/"step" calculation, this aligns the temperature and heat capacity arrays
# In 2-D/"map" calculations, we'd have to explicitly select the composition of interest
plt.plot(eq['T'].values, np.squeeze(eq['heat_capacity'].values))
plt.show()
plt.gca().set_title('Al-Fe: Degree of bcc ordering vs X(AL) [T=700 K]')
plt.gca().set_xlabel('X(AL)')
plt.gca().set_ylabel('Degree of ordering')
# Select all points in the datasets where B2_BCC is stable, dropping the others
eq2_b2_bcc = eq2.where(eq2.Phase == 'B2_BCC', drop=True)
plt.plot(eq2_b2_bcc['X_AL'].values, eq2_b2_bcc['degree_of_ordering'].values.squeeze())
plt.show()
db_alni = Database('NI_AL_DUPIN_2001.TDB')
phases = ['LIQUID', 'FCC_L12']
eq_alni = equilibrium(db_alni, ['AL', 'NI', 'VA'], phases, {v.X('AL'): 0.10, v.T: (300, 2500, 20), v.P: 101325},
output='degree_of_ordering')
print(eq_alni)
from pycalphad.plot.utils import phase_legend
phase_handles, phasemap = phase_legend(phases)
plt.gca().set_title('Al-Ni: Phase fractions vs T [X(AL)=0.1]')
plt.gca().set_xlabel('Temperature (K)')
plt.gca().set_ylabel('Phase Fraction')
plt.gca().set_ylim((0,1.1))
plt.gca().set_xlim((300, 2000))
for name in phases:
phase_indices = np.nonzero(eq_alni.Phase.values == name)
plt.scatter(np.take(eq_alni['T'].values, phase_indices[2]), eq_alni.NP.values[phase_indices], color=phasemap[name])
plt.gca().legend(phase_handles, phases, loc='lower right')
plt.gca().set_title('Al-Ni: Degree of fcc ordering vs T [X(AL)=0.1]')
plt.gca().set_xlabel('Temperature (K)')
plt.gca().set_ylabel('Degree of ordering')
plt.gca().set_ylim((-0.1,1.1))
# Generate a list of all indices where FCC_L12 is stable and ordered
L12_phase_indices = np.nonzero(np.logical_and((eq_alni.Phase.values == 'FCC_L12'),
(eq_alni.degree_of_ordering.values > 0.01)))
# Generate a list of all indices where FCC_L12 is stable and disordered
fcc_phase_indices = np.nonzero(np.logical_and((eq_alni.Phase.values == 'FCC_L12'),
(eq_alni.degree_of_ordering.values <= 0.01)))
# phase_indices[2] refers to all temperature indices
# We know this because pycalphad always returns indices in order like P, T, X's
plt.plot(np.take(eq_alni['T'].values, L12_phase_indices[2]), eq_alni['degree_of_ordering'].values[L12_phase_indices],
label='$\gamma\prime$ (ordered fcc)', color='red')
plt.plot(np.take(eq_alni['T'].values, fcc_phase_indices[2]), eq_alni['degree_of_ordering'].values[fcc_phase_indices],
label='$\gamma$ (disordered fcc)', color='blue')
plt.legend()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then we take the data as it appeared in Hazard's slides. The example given was symmetric, which means that the base matrix is log-skew-symmetric (every element is the reciprocal of its transpose). However, this payoff function can also work for non-symmetric cases.
Step2: Hazard's talk used the convention that the elements of the matrix are how many of the row player's unit it takes to equal one of the column player's unit. We use the opposite convention, where higher is better for the row player, so we take the inverse, which for the symmetric case is the same as the transpose.
Step3: Now we can construct our problem instance and solve it.
Step4: Finally we print the result, normalizing the handicaps so that they sum to 1. Note that a global scale in the handicaps merely scales the global payoffs and does not change the Nash equilibrium. Since the value of the (balanced) game is 0 this does not change the value either.
|
<ASSISTANT_TASK:>
Python Code:
import _initpath
import numpy
import zerosum.balance
data = numpy.array([
[1.0, 3.0, 0.5],
[1.0 / 3.0, 1.0, 0.5],
[2.0, 2.0, 1.0]])
names = ['Hammer', 'Spear', 'Curse']
data = 1.0 / data
balance = zerosum.balance.HazardSymmetricBalance(data)
result = balance.optimize()
for name, handicap in zip(names, result.handicaps / numpy.sum(result.handicaps)):
print("%8s: %0.3f" % (name, handicap))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'sandbox-1', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generalization of Taylor FD operators
|
<ASSISTANT_TASK:>
Python Code:
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../../style/custom.css'
HTML(open(css_file, "r").read())
# import SymPy libraries
from sympy import symbols, differentiate_finite, Function
# Define symbols
x, h = symbols('x h')
f = Function('f')
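# The original cell stops after defining the symbols; as a minimal, assumed
# illustration (not part of the original notebook), the imported
# differentiate_finite can build a centered first derivative on the stencil [x-h, x+h]:
differentiate_finite(f(x), x, points=[x - h, x + h])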
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Read the targets catalog
Step3: Read the templates and compute colors on a redshift grid.
Step5: Generate some plots
|
<ASSISTANT_TASK:>
Python Code:
import os
from time import time
import warnings
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import fitsio
import seaborn as sns
from speclite import filters
from desitarget import desi_mask
from desisim.io import read_basis_templates
%pylab inline
sns.set(style='white', font_scale=1.5, font='sans-serif', palette='Set2')
setcolors = sns.color_palette()
def flux2colors(cat):
    """Convert DECam/WISE fluxes to magnitudes and colors."""
colors = dict()
with warnings.catch_warnings(): # ignore missing fluxes (e.g., for QSOs)
warnings.simplefilter('ignore')
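        # DECam/WISE fluxes are in nanomaggies, so AB magnitude = 22.5 - 2.5*log10(flux)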
for ii, band in zip((1, 2, 4), ('g', 'r', 'z')):
colors[band] = 22.5 - 2.5 * np.log10(cat['DECAM_FLUX'][..., ii].data)
for ii, band in zip((0, 1), ('W1', 'W2')):
colors[band] = 22.5 - 2.5 * np.log10(cat['WISE_FLUX'][..., ii].data)
colors['gr'] = colors['g'] - colors['r']
colors['rz'] = colors['r'] - colors['z']
colors['rW1'] = colors['r'] - colors['W1']
colors['W1W2'] = colors['W1'] - colors['W2']
return colors
lrgfile = os.path.join( os.getenv('DESI_ROOT'), 'data', 'targets-dr3.1-EisDawLRG.fits' )
# Select just LRG targets.
print('Reading {}'.format(lrgfile))
cat = fitsio.read(lrgfile, ext=1, upper=True, columns=['DESI_TARGET'])
these = np.where( (cat['DESI_TARGET'] & desi_mask.LRG) != 0 )[0]
print('Number of LRG targets = {}'.format(len(these)))
cat = fitsio.read(lrgfile, ext=1, upper=True, rows=these)
data = flux2colors(cat)
filt = filters.load_filters('decam2014-g', 'decam2014-r', 'decam2014-z', 'wise2010-W1')
flux, wave, meta = read_basis_templates(objtype='LRG')
nt = len(meta)
print('Number of templates = {}'.format(nt))
zmin, zmax, dz = 0.0, 2.0, 0.1
nz = np.round( (zmax - zmin) / dz ).astype('i2')
print('Number of redshift points = {}'.format(nz))
cc = dict(
redshift = np.linspace(0.0, 2.0, nz),
gr = np.zeros( (nt, nz) ),
rz = np.zeros( (nt, nz) ),
rW1 = np.zeros( (nt, nz), )
)
t0 = time()
for iz, red in enumerate(cc['redshift']):
zwave = wave.astype('float') * (1 + red)
phot = filt.get_ab_maggies(flux, zwave, mask_invalid=False)
cc['gr'][:, iz] = -2.5 * np.log10( phot['decam2014-g'] / phot['decam2014-r'] )
cc['rz'][:, iz] = -2.5 * np.log10( phot['decam2014-r'] / phot['decam2014-z'] )
cc['rW1'][:, iz] = -2.5 * np.log10( phot['decam2014-r'] / phot['wise2010-W1'] )
print('Total time = {:.2f} sec.'.format(time() - t0))
figsize = (8, 6)
grrange = (0.0, 3.0)
rzrange = (0.0, 2.5)
rW1range = (-1, 5)
mzrange = (17.5, 20.5)
ntspace = 5 # spacing between model curves
def rzz(pngfile=None):
    """r-z vs apparent magnitude z"""
fig, ax = plt.subplots(figsize=figsize)
hb = ax.hexbin(data['z'], data['rz'], bins='log', cmap='Blues_r',
mincnt=100, extent=mzrange+rzrange)
ax.set_xlabel('z')
ax.set_ylabel('r - z')
ax.set_xlim(mzrange)
ax.set_ylim(rzrange)
cb = fig.colorbar(hb, ax=ax)
cb.set_label(r'log$_{10}$ (Number of Galaxies per Bin)')
if pngfile:
fig.savefig(pngfile)
def grz(models=False, pngfile=None):
fig, ax = plt.subplots(figsize=figsize)
hb = ax.hexbin(data['rz'], data['gr'], bins='log', cmap='Blues_r',
mincnt=100, extent=rzrange+grrange)
ax.set_xlabel('r - z')
ax.set_ylabel('g - r')
ax.set_xlim(rzrange)
ax.set_ylim(grrange)
cb = fig.colorbar(hb, ax=ax)
cb.set_label(r'log$_{10}$ (Number of Galaxies per Bin)')
if models:
for tt in np.arange(0, nt, ntspace):
ax.scatter(cc['rz'][tt, 0], cc['gr'][tt, 0], marker='o',
facecolors='none', s=80, edgecolors='k',
linewidth=1)
ax.plot(cc['rz'][tt, :], cc['gr'][tt, :], marker='s',
markersize=5, ls='-', alpha=0.5)
ax.text(0.1, 0.05, 'z=0', ha='left', va='bottom',
transform=ax.transAxes, fontsize=14)
if pngfile:
fig.savefig(pngfile)
def rzW1(models=False, pngfile=None):
fig, ax = plt.subplots(figsize=figsize)
hb = ax.hexbin(data['rz'], data['rW1'], bins='log', cmap='Blues_r',
mincnt=100, extent=rzrange+grrange)
ax.set_xlabel('r - z')
ax.set_ylabel('r - W1')
ax.set_xlim(rzrange)
ax.set_ylim(rW1range)
cb = fig.colorbar(hb, ax=ax)
cb.set_label(r'log$_{10}$ (Number of Galaxies per Bin)')
if models:
for tt in np.arange(0, nt, ntspace):
ax.scatter(cc['rz'][tt, 0], cc['rW1'][tt, 0], marker='o',
facecolors='none', s=80, edgecolors='k',
linewidth=1)
ax.plot(cc['rz'][tt, :], cc['rW1'][tt, :], marker='s',
markersize=5, ls='-', alpha=0.5)
ax.text(0.1, 0.05, 'z=0', ha='left', va='bottom',
transform=ax.transAxes, fontsize=14)
if pngfile:
fig.savefig(pngfile)
grz(models=True)
rzW1(models=True)
rzz()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Some notes on selecting data
Step2: or multiple columns
Step3: But, slicing accesses the rows
Step4: So as a summary, [] provides the following convenience shortcuts
Step5: But the row or column indexer can also be a list, slice, boolean array, ..
Step6: Selecting by position with iloc works similarly to indexing numpy arrays
Step7: The different indexing methods can also be used to assign data
Step8: Boolean indexing (filtering)
Step10: This can then be used to filter the dataframe with boolean indexing
Step11: Let's say we want to select all data for which the capital starts with a 'B'. In Python, when we have a string, we can use the startswith method
Step12: In pandas, these are available on a Series through the str namespace
Step13: For an overview of all string methods, see
Step14: How to avoid this?
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
try:
import seaborn
except ImportError:
pass
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data)
countries
countries = countries.set_index('country')
countries
countries['area']
countries[['area', 'population']]
countries['France':'Netherlands']
countries.loc['Germany', 'area']
countries.loc['France':'Germany', ['area', 'population']]
countries.iloc[0:2,1:3]
countries2 = countries.copy()
countries2.loc['Belgium':'Germany', 'population'] = 10
countries2
countries['area'] > 100000
s = countries['capital']
s.isin?
s.isin(['Berlin', 'London'])
countries[countries['capital'].isin(['Berlin', 'London'])]
'Berlin'.startswith('B')
countries['capital'].str.startswith('B')
countries.loc['Belgium', 'capital'] = 'Ghent'
countries
countries['capital']['Belgium'] = 'Antwerp'
countries
countries[countries['capital'] == 'Antwerp']['capital'] = 'Brussels'
countries
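# The chained assignments above may silently operate on a copy
# (SettingWithCopyWarning); a sketch of the recommended fix (added here for
# illustration) selects rows and column in a single .loc call:
countries.loc[countries['capital'] == 'Antwerp', 'capital'] = 'Brussels'
countries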
cast = pd.read_csv('data/cast.csv')
cast.head()
titles = pd.read_csv('data/titles.csv')
titles.head()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: (Optional) plotting of the results
Step2: Photon Propagation outside of CRPropa with EleCa and DINT
Step3: The file 'photon_output.txt' will contain approximately 300 photons and can be processed as the photon example below.
Step4: Propagation with DINT
Step5: Combined Propagation
Step6: (Optional) Plotting of Results
|
<ASSISTANT_TASK:>
Python Code:
from crpropa import *
obs = Observer()
obs.add(ObserverPoint())
obs.add(ObserverInactiveVeto())
t = TextOutput("photon_electron_output.txt", Output.Event1D)
obs.onDetection(t)
sim = ModuleList()
sim.add(SimplePropagation())
sim.add(Redshift())
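# the boolean flag passed to each EM interaction below enables production of secondary particles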
sim.add(EMPairProduction(CMB(),True))
sim.add(EMPairProduction(IRB_Gilmore12(),True))
sim.add(EMPairProduction(URB_Protheroe96(),True))
sim.add(EMDoublePairProduction(CMB(),True))
sim.add(EMDoublePairProduction(IRB_Gilmore12(),True))
sim.add(EMDoublePairProduction(URB_Protheroe96(),True))
sim.add(EMInverseComptonScattering(IRB_Gilmore12(),True))
sim.add(EMInverseComptonScattering(CMB(),True))
sim.add(EMInverseComptonScattering(URB_Protheroe96(),True))
sim.add(EMTripletPairProduction(CMB(),True))
sim.add(EMTripletPairProduction(IRB_Gilmore12(),True))
sim.add(EMTripletPairProduction(URB_Protheroe96(),True))
sim.add(MinimumEnergy(0.01 * EeV))
source = Source()
source.add(SourcePosition(Vector3d(4,0,0)*Mpc))
source.add(SourceRedshift1D())
source.add(SourceParticleType(22))
source.add(SourceEnergy(1000*EeV))
sim.add(obs)
sim.run(source,1000,True)
t.close()
%matplotlib inline
from pylab import *
t.close()
figure(figsize=(6,6))
a = loadtxt("photon_electron_output.txt")
E = logspace(16,23,71)
idx = a[:,1] == 22
photons = a[idx,2] * 1e18
idx = fabs(a[:,1]) == 11
ep = a[idx,2] * 1e18
data,bins = histogram(photons,E)
bincenter = (E[1:] -E[:-1])/2 + E[:-1]
plot(bincenter, data,label="photons")
data,bins = histogram(ep,E)
plot(bincenter, data, label="electrons / positrons")
grid()
loglog()
xlim(1e16, 1e21)
ylim(1e1, 1e4)
legend(loc="lower right")
xlabel("Energy [eV]")
ylabel("Number of Particles")
show()
from crpropa import *
# source setup
source = Source()
source.add(SourceParticleType(nucleusId(1, 1)))
source.add(SourcePowerLawSpectrum(10 * EeV, 100 * EeV, -2))
source.add(SourceUniform1D(3 * Mpc, 100.00001 * Mpc))
# setup module list for proton propagation
m = ModuleList()
m.add(SimplePropagation(0, 10 * Mpc))
m.add(MinimumEnergy(1 * EeV))
# observer
obs1 = Observer() # proton output
obs1.add( ObserverPoint() )
obs1.add( ObserverPhotonVeto() ) # we don't want photons here
obs1.onDetection( TextOutput('proton_output.txt', Output.Event1D) )
m.add(obs1)
obs2 = Observer() # photon output
obs2.add( ObserverDetectAll() ) # stores the photons at creation without propagating them
obs2.add( ObserverNucleusVeto() ) # we don't want hadrons here
out2 = TextOutput('photon_output.txt', Output.Event1D)
out2.enable(Output.CreatedIdColumn) # enables the necessary columns to be compatible with the DINT and EleCa propagation
out2.enable(Output.CreatedEnergyColumn)
out2.enable(Output.CreatedPositionColumn)
obs2.onDetection( out2 )
m.add(obs2)
# secondary electrons are disabled here
m.add(ElectronPairProduction(CMB(), False))
# enable secondary photons
m.add(PhotoPionProduction(CMB(), True))
# run simulation
m.run(source, 10000, True)
import crpropa
# Signature: ElecaPropagation(inputfile, outputfile, showProgress=True, lowerEnergyThreshold=5*EeV, magneticFieldStrength=1*nG, background="ALL")
crpropa.ElecaPropagation("photon_output.txt", "photons_eleca.dat", True, 0.1*crpropa.EeV, 0.1*crpropa.nG)
import crpropa
# Signature: DintPropagation(inputfile, outputfile, IRFlag=4, RadioFlag=4, magneticFieldStrength=1*nG, aCutcascade_Magfield=0)
crpropa.DintPropagation("photon_output.txt", "spectrum_dint.dat", 4, 4, 0.1*crpropa.nG)
import crpropa
# Signature: DintElecaPropagation(inputfile, outputfile, showProgress=True, crossOverEnergy=0.5*EeV, magneticFieldStrength=1*nG, aCutcascade_Magfield=0)
crpropa.DintElecaPropagation("photon_output.txt", "spectrum_dint_eleca.dat", True, 0.5*crpropa.EeV, 0.1*crpropa.nG)
%matplotlib inline
from pylab import *
figure(figsize=(6,6))
loglog(clip_on=False)
yscale("log", nonposy='clip')
xlabel('Energy [eV]')
ylabel ('$E^2 dN/dE$ [a.u.]')
# Plot the EleCa spectrum
elecaPhotons = genfromtxt("photons_eleca.dat")
binEdges = 10**arange(12, 24, .1)
logBinCenters = log10(binEdges[:-1]) + 0.5 * (log10(binEdges[1:]) - log10(binEdges[:-1]))
binWidths = (binEdges[1:] - binEdges[:-1])
data = histogram(elecaPhotons[:,1] * 1E18, bins=binEdges)
J = data[0] / binWidths
E = 10**logBinCenters
step(E, J * E**2, c='m', label='EleCa')
#Plot the DINT spectrum
data = genfromtxt("spectrum_dint.dat", names=True)
lE = data['logE']
E = 10**lE
dE = 10**(lE + 0.05) - 10**(lE - 0.05)
J = data['photons'] / dE
step(E, J * E**2 , c='b', where='mid', label='DINT')
#Plot the combined DINT+EleCa spectrum
data = genfromtxt("spectrum_dint_eleca.dat", names=True)
lE = data['logE']
E = 10**lE
dE = 10**(lE + 0.05) - 10**(lE - 0.05)
J = data['photons'] / dE
step(E, J * E**2 , c='r', where='mid', label='Combined')
# Nice limits
xlim(1e14, 1e20)
ylim(bottom=1e17)
legend(loc='upper left')
show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1.2 Expression
Step2: Assignment
Step3: Evaluating an expression and then assigning the result
Step4: Assignment
Step5: 2- Input and Output
Step6: 2.2 Data output
Step7: Example with data input and output
Step8: 3- Data Types
Step9: 3.1 Int
Step10: 3.2 Real number data type
Step11: 3.2 Float
Step12: NOTE
Step13: 3- Boolean (logical) data type
Step14: Operations on integer data
Step16: Text (string) data type
Step17: Operations on text strings
Step18: Functions
Step19: Questions
Step20: Test problem
Step21: Correct implementations
|
<ASSISTANT_TASK:>
Python Code:
x = 1
x = 1.0
x = True
x = "True"
(1.0 + 2.0)*3
(100>0) and (1./3<1./2)
a = 3
b = 4
c = (a**2 + b**2)**0.5
peri = a + b + c
a = 1
b = a
c = a + b
a = c
print a, b, c
nombre = raw_input("Ingrese su nombre: ")
edad = int(raw_input("Ingrese su edad [años]: "))
altura = float(raw_input("Ingrese su peso [kg]: "))
print nombre
print edad
print altura
print nombre, edad, altura
print "Hola", nombre, "tienes", edad, "años y mides", altura , "metros"
# Data input
a = float(raw_input("Ingrese el lado de un cuadrado: "))
# Compute the answer
perimetro = a*4
# Data output
print "El cuadrado de lado ", a, "tiene perimetro", perimetro
42
int(1.237237243E6)
100000000000000000000000000000000000000000000000000000000000
print 3+7
print 1000000000 - 1000
print 10*10
print 3/2
print 2**2
print 19%2
231.45
float(3/2)
print 10.0 + 1e2
print 3.14 - 1e-1
print 2.0*1e2
print 1e-3/0.01
print 1.1e0**2.0
print 1.23e1%2.0
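# When an int and a float are mixed in an operation, the result is a float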
print 1 + 1.0, 1.0 + 1
print 1 - 1.0, 1.0 - 1
print 2*3.0, 2.0*3
print 1./2, 1/2.
print 2**3.0, 2.0**3
print 15%2.0, 15.0%2
print True and True
print True and False
print False and True
print False and False
print True or True
print True or False
print False or True
print False or False
print not True
print not False
a = 10.5
print 5<=a<20
print int(a)==int(10.9999999999999999999999999999999999999999999)
print a!=1E3
print a>0 and a**2<200
print a<0 or a>0
"Hola clase"
'Hola mundo'
'''Hmm'''
"""Hey Jude"""
print "hola" + ' ' + "mundo"
print "waka"*2 + "eeo"
print len('paralelepipedo')
print "pollo" in "repollo"
print "bcdo" in "abcdario"
# Absolute value, for int or float
print abs(-4-5), abs(-4.0-5)
print min(-1,20), min(3.0, 200), min(10,10.0)
print max(-1,20), max(1.0, 200), max(10,10.0)
print round(1.49), round(1.50), round(1.51)
print round(-1.49), round(-1.50), round(-1.51)
print type(1)
print type(1.0)
print type(True)
print type("True")
print type(float(1))
print type(type(1))
print type(float(int(bool(str(int(float(int(str(0)))))))))
print "Hola\nmundo\n\tcruel"
# Temperature conversion program: INCORRECT
f = float(raw_input('Ingrese temperatura en grados Fahrenheit: '))
c = 5 / 9 * (f - 32)
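# In Python 2, 5 / 9 is integer division and evaluates to 0, so c is always 0.0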
print 'El equivalente en grados Celsius es: ', c
# Temperature conversion program: INCORRECT
f = float(raw_input('Ingrese temperatura en grados Fahrenheit: '))
c = (f - 32) * ( 5 / 9 )
print 'El equivalente en grados Celsius es: ', c
# Temperature conversion program: INCORRECT
f = float(raw_input('Ingrese temperatura en grados Fahrenheit:'))
c = (f - 32) * 5 / 9
print 'El equivalente en grados Celsius es: ', c
# Temperature conversion program: CORRECT
f = float(raw_input('Ingrese temperatura en grados Fahrenheit:'))
c = (f - 32.0) * (5. / 9.)
print 'El equivalente en grados Celsius es: ', c
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Enable inline plotting in the Jupyter Notebook
Step2: Intro to H2O Data Munging
Step3: View the top of the H2O frame.
Step4: View the bottom of the H2O Frame
Step5: Select a column
Step6: Select a few columns
Step7: Select a subset of rows
Step8: Key attributes
Step9: Select rows based on value
Step10: Boolean masks can be used to subselect rows based on a criterion.
Step11: Get summary statistics of the data and additional data distribution information.
Step12: Set up the predictor and response column names
Step13: Machine Learning With H2O
Step14: The performance of the model can be checked using the holdout dataset
Step15: Train-Test Split
Step16: There was a massive jump in the R^2 value. This is because the original data is not shuffled.
Step17: However, you can still make use of the cross_val_score from Scikit-Learn
Step18: You still must use H2O to make the folds. Currently, there is no H2OStratifiedKFold. Additionally, the H2ORandomForestEstimator is similar to the scikit-learn RandomForestRegressor object with its own train method.
Step19: There isn't much difference in the R^2 value since the fold strategy is exactly the same. However, there was a major difference in terms of computation time and memory usage.
Step20: Grid Search
Step21: If you have 0.16.1, then your system can't handle complex randomized grid searches (it works in every other version of sklearn, including the soon to be released 0.16.2 and the older versions).
Step24: We might be tempted to think that we just had a large improvement; however we must be cautious. The function below creates a more detailed report.
Step25: Based on the grid search report, we can narrow the parameters to search and rerun the analysis. The parameters below were chosen after a few runs
Step26: Transformations
Step27: Normalize Data
Step28: Then, we can apply PCA and keep the top 5 components. A user warning is expected here.
Step29: Although this is MUCH simpler than keeping track of all of these transformations manually, it gets to be somewhat of a burden when you want to chain together multiple transformers.
Steps
Step30: This is so much easier!!!
Steps
Step31: Currently Under Development (drop-in scikit-learn pieces)
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy
from numpy.random import choice
from sklearn.datasets import load_boston
from h2o.estimators.random_forest import H2ORandomForestEstimator
import h2o
h2o.init()
# transfer the boston data from pandas to H2O
boston_data = load_boston()
X = pd.DataFrame(data=boston_data.data, columns=boston_data.feature_names)
X["Median_value"] = boston_data.target
X = h2o.H2OFrame.from_python(X.to_dict("list"))
# select 10% for validation
r = X.runif(seed=123456789)
train = X[r < 0.9,:]
valid = X[r >= 0.9,:]
h2o.export_file(train, "Boston_housing_train.csv", force=True)
h2o.export_file(valid, "Boston_housing_test.csv", force=True)
%matplotlib inline
import matplotlib.pyplot as plt
fr = h2o.import_file("Boston_housing_train.csv")
fr.head()
fr.tail()
fr["CRIM"].head() # Tab completes
columns = ["CRIM", "RM", "RAD"]
fr[columns].head()
fr[2:7,:] # explicitly select all columns with :
# The columns attribute is exactly like Pandas
print "Columns:", fr.columns, "\n"
print "Columns:", fr.names, "\n"
print "Columns:", fr.col_names, "\n"
# There are a number of attributes to get at the shape
print "length:", str( len(fr) ), "\n"
print "shape:", fr.shape, "\n"
print "dim:", fr.dim, "\n"
print "nrow:", fr.nrow, "\n"
print "ncol:", fr.ncol, "\n"
# Use the "types" attribute to list the column types
print "types:", fr.types, "\n"
fr.shape
mask = fr["CRIM"]>1
fr[mask,:].shape
fr.describe()
x = fr.names[:]
y="Median_value"
x.remove(y)
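# x now holds every column name except the response column y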
# Define and fit first 400 points
model = H2ORandomForestEstimator(seed=42)
model.train(x=x, y=y, training_frame=fr[:400,:])
model.predict(fr[400:fr.nrow,:]) # Predict the rest
perf = model.model_performance(fr[400:fr.nrow,:])
perf.r2() # get the r2 on the holdout data
perf.mse() # get the mse on the holdout data
perf # display the performance object
r = fr.runif(seed=12345) # build random uniform column over [0,1]
train= fr[r<0.75,:] # perform a 75-25 split
test = fr[r>=0.75,:]
model = H2ORandomForestEstimator(seed=42)
model.train(x=x, y=y, training_frame=train, validation_frame=test)
perf = model.model_performance(test)
perf.r2()
model = H2ORandomForestEstimator(nfolds=10) # build a 10-fold cross-validated model
model.train(x=x, y=y, training_frame=fr)
scores = numpy.array([m.r2() for m in model.xvals]) # iterate over the xval models using the xvals attribute
print "Expected R^2: %.2f +/- %.2f \n" % (scores.mean(), scores.std()*1.96)
print "Scores:", scores.round(2)
from sklearn.cross_validation import cross_val_score
from h2o.cross_validation import H2OKFold
from h2o.model.regression import h2o_r2_score
from sklearn.metrics.scorer import make_scorer
model = H2ORandomForestEstimator(seed=42)
scorer = make_scorer(h2o_r2_score) # make h2o_r2_score into a scikit_learn scorer
custom_cv = H2OKFold(fr, n_folds=10, seed=42) # make a cv
scores = cross_val_score(model, fr[x], fr[y], scoring=scorer, cv=custom_cv)
print "Expected R^2: %.2f +/- %.2f \n" % (scores.mean(), scores.std()*1.96)
print "Scores:", scores.round(2)
h2o.__PROGRESS_BAR__=False
h2o.no_progress()
from sklearn import __version__
sklearn_version = __version__
print sklearn_version
%%time
from sklearn.grid_search import RandomizedSearchCV # Import grid search
from scipy.stats import randint, uniform
model = H2ORandomForestEstimator(seed=42) # Define model
params = {"ntrees": randint(20,50),
"max_depth": randint(1,10),
"min_rows": randint(1,10), # scikit's min_samples_leaf
"mtries": randint(2,fr[x].shape[1]),} # Specify parameters to test
scorer = make_scorer(h2o_r2_score) # make h2o_r2_score into a scikit_learn scorer
custom_cv = H2OKFold(fr, n_folds=10, seed=42) # make a cv
random_search = RandomizedSearchCV(model, params,
n_iter=30,
scoring=scorer,
cv=custom_cv,
random_state=42,
n_jobs=1) # Define grid search object
random_search.fit(fr[x], fr[y])
print "Best R^2:", random_search.best_score_, "\n"
print "Best params:", random_search.best_params_
def report_grid_score_detail(random_search, charts=True):
    """Input fit grid search estimator. Returns df of scores with details."""
df_list = []
for line in random_search.grid_scores_:
results_dict = dict(line.parameters)
results_dict["score"] = line.mean_validation_score
results_dict["std"] = line.cv_validation_scores.std()*1.96
df_list.append(results_dict)
result_df = pd.DataFrame(df_list)
result_df = result_df.sort("score", ascending=False)
if charts:
for col in get_numeric(result_df):
if col not in ["score", "std"]:
plt.scatter(result_df[col], result_df.score)
plt.title(col)
plt.show()
for col in list(result_df.columns[result_df.dtypes == "object"]):
cat_plot = result_df.score.groupby(result_df[col]).mean()[0]
cat_plot.sort()
cat_plot.plot(kind="barh", xlim=(.5, None), figsize=(7, cat_plot.shape[0]/2))
plt.show()
return result_df
def get_numeric(X):
    """Return list of numeric dtypes variables."""
return X.dtypes[X.dtypes.apply(lambda x: str(x).startswith(("float", "int", "bool")))].index.tolist()
report_grid_score_detail(random_search).head()
%%time
params = {"ntrees": randint(30,40),
"max_depth": randint(4,10),
"mtries": randint(4,10),}
custom_cv = H2OKFold(fr, n_folds=5, seed=42) # In small datasets, the fold size can have a big
# impact on the std of the resulting scores. More
random_search = RandomizedSearchCV(model, params, # folds --> Less examples per fold --> higher
n_iter=10, # variation per sample
scoring=scorer,
cv=custom_cv,
random_state=43,
n_jobs=1)
random_search.fit(fr[x], fr[y])
print "Best R^2:", random_search.best_score_, "\n"
print "Best params:", random_search.best_params_
report_grid_score_detail(random_search)
from h2o.transforms.preprocessing import H2OScaler
from h2o.transforms.decomposition import H2OPCA
y_train = train.pop("Median_value")
y_test = test.pop("Median_value")
norm = H2OScaler()
norm.fit(train)
X_train_norm = norm.transform(train)
X_test_norm = norm.transform(test)
print X_test_norm.shape
X_test_norm
pca = H2OPCA(k=5)
pca.fit(X_train_norm)
X_train_norm_pca = pca.transform(X_train_norm)
X_test_norm_pca = pca.transform(X_test_norm)
# prop of variance explained by top 5 components?
print X_test_norm_pca.shape
X_test_norm_pca[:5]
model = H2ORandomForestEstimator(seed=42)
model.train(x=X_train_norm_pca.names, y=y_train.names, training_frame=X_train_norm_pca.cbind(y_train))
y_hat = model.predict(X_test_norm_pca)
h2o_r2_score(y_test,y_hat)
from h2o.transforms.preprocessing import H2OScaler
from h2o.transforms.decomposition import H2OPCA
from sklearn.pipeline import Pipeline # Import Pipeline <other imports not shown>
model = H2ORandomForestEstimator(seed=42)
pipe = Pipeline([("standardize", H2OScaler()), # Define pipeline as a series of steps
("pca", H2OPCA(k=5)),
("rf", model)]) # Notice the last step is an estimator
pipe.fit(train, y_train) # Fit training data
y_hat = pipe.predict(test) # Predict testing data (due to last step being an estimator)
h2o_r2_score(y_test, y_hat) # Notice the final score is identical to before
pipe = Pipeline([("standardize", H2OScaler()),
("pca", H2OPCA()),
("rf", H2ORandomForestEstimator(seed=42))])
params = {"standardize__center": [True, False], # Parameters to test
"standardize__scale": [True, False],
"pca__k": randint(2, 6),
"rf__ntrees": randint(50,80),
"rf__max_depth": randint(4,10),
"rf__min_rows": randint(5,10), }
# "rf__mtries": randint(1,4),} # gridding over mtries is
# problematic with pca grid over
# k above
from sklearn.grid_search import RandomizedSearchCV
from h2o.cross_validation import H2OKFold
from h2o.model.regression import h2o_r2_score
from sklearn.metrics.scorer import make_scorer
custom_cv = H2OKFold(fr, n_folds=5, seed=42)
random_search = RandomizedSearchCV(pipe, params,
n_iter=30,
scoring=make_scorer(h2o_r2_score),
cv=custom_cv,
random_state=42,
n_jobs=1)
random_search.fit(fr[x],fr[y])
results = report_grid_score_detail(random_search)
results.head()
best_estimator = random_search.best_estimator_ # fetch the pipeline from the grid search
h2o_model = h2o.get_model(best_estimator._final_estimator._id) # fetch the model from the pipeline
save_path = h2o.save_model(h2o_model, path=".", force=True)
print save_path
# assumes new session
my_model = h2o.load_model(path=save_path)
my_model.predict(X_test_norm_pca)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Writing a Module
Step2: This is a really simple equation, but by making this a function, we can make it more sophisticated or change the exponent in one spot, and it will be updated everywhere. We can quickly test this out to see that the function works the way that we expect when called.
Step3: You can also pass a NumPy array to the function
Step4: If we want to use this module that we've created, we can import it as we have with other modules. When we import it, we use the file name, and when we call the function, we have to include the function name; for this module those two are the same. They could also be different, where the file name is one thing and the file contains one or more functions whose names don't match it.
Step5: This is what the "stellar_radius.py" file looks like
Step6: This all works if we simply want to import the function into another python function, but it can often be useful to be able to run these functions from the command line (for example, so that we can execute this python function from another scripting language, or run it from the command line for quick calculations). An example invocation is shown right after the loaded script below.
Step8: The structure of your directory would be
Step10: And one that computes the square roots for a given number _n_.
Step11: For each folder and subfolder, you need a "__init__.py" file
Step12: And for the one in the adv folder. This file can be empty
Step13: The last step to take when creating your own modules is to append your PYTHONPATH.
|
<ASSISTANT_TASK:>
Python Code:
# %load ./arguments.py
#!/usr/bin/python
import sys
# it's easy to print this list of course:
print sys.argv
# or it can be iterated via a for loop:
for i in range(len(sys.argv)):
if i == 0:
print "Function name: %s" % sys.argv[0]
else:
print "%d. argument: %s" % (i,sys.argv[i])
# copied from http://www.python-course.eu/sys_module.php
def stellar_r(mass):
starR=mass**0.8
return starR
print(stellar_r(1))
print(stellar_r(0.9))
%matplotlib inline
import matplotlib.pyplot
import numpy as np
mass_arr = np.arange(0.8, 5, 0.1)
stellar_r( mass_arr )
import stellar_radius as staR
print staR.stellar_radius(0.9)
# %load ./stellar_radius.py
#!/usr/bin/env python
import numpy as np
import sys
def stellar_radius(mass):
starR=mass**0.8
return starR
if __name__ == "__main__":
print stellar_radius(float(sys.argv[1])) #mass
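# Quick check of the command-line entry point above (assumes stellar_radius.py sits in
# the current working directory): pass the stellar mass as the single argument.
!python stellar_radius.py 1.2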
!mkdir "$HOME/custom_utilities"
!mkdir "$HOME/custom_utilities/python_modules/"
# %load ./modules/adv/fib.py
#! /usr/bin/env python
# NAME
# DATE
# Vanderbilt University
from __future__ import print_function, division, absolute_import
__author__ =['YOUR NAME']
__copyright__ =["Copyright 2017 YOUR NAME, Name of Project"]
__email__ =['Email Address']
__maintainer__ =['Your Name']
__all__ = ["fibonacci"]
from math import sqrt
#----------------------------------------------------------------------
def fibonacci(n):
    """See http://stackoverflow.com/questions/494594/how-to-write-the-fibonacci-sequence-in-python"""
return ((1+sqrt(5))**n-(1-sqrt(5))**n)/(2**n*sqrt(5))
# %load ./modules/adv/sqrt.py
#! /usr/bin/env python
# NAME
# DATE
# Vanderbilt University
from __future__ import print_function, division, absolute_import
__author__ =['YOUR NAME']
__copyright__ =["Copyright 2017 YOUR NAME, Name of Project"]
__email__ =['Email Address']
__maintainer__ =['Your Name']
__all__ = ["squareroot"]
import math
#----------------------------------------------------------------------
def squareroot(n):
return math.sqrt(n)
# %load ./modules/__init__.py
#! /usr/bin/env python
# NAME
# DATE
# Vanderbilt University
from __future__ import print_function, division#, absolute_import
__author__ =['YOUR NAME']
__copyright__ =["Copyright 2017 YOUR NAME, Name of Project"]
__email__ =['Email Address']
__maintainer__ =['Your Name']
__all__ = ["add", "division", "multiply", "subtract", "fibonacci", "squareroot"]
from arithmetic import add
from arithmetic import division
from arithmetic import multiply
from arithmetic import subtract
from adv.fib import fibonacci
from adv.sqrt import squareroot
# %load ./modules/adv/__init__.py
!echo 'export PYTHONPATH="$HOME/custom_utilities/python_modules/:$PYTHONPATH"' >> $HOME/.bash_profile
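# Minimal usage sketch once PYTHONPATH has been updated (assumptions: the "modules"
# package shown above -- including the arithmetic module its __init__.py imports -- has
# been copied into $HOME/custom_utilities/python_modules/, and a fresh shell/kernel was
# started so the new PYTHONPATH is picked up).
import modules
print(modules.fibonacci(10))   # closed-form Fibonacci, ~55.0
print(modules.squareroot(16))  # 4.0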
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate Input Files
Step2: With the nuclides we defined, we will now create three materials for the fuel, water, and cladding of the fuel pin.
Step3: With our three materials, we can now create a materials file object that can be exported to an actual XML file.
Step4: Now let's move on to the geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces -- in this case two cylinders and six reflective planes.
Step5: With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces.
Step6: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
Step7: We now must create a geometry that is assigned a root universe, put the geometry into a geometry file, and export it to XML.
Step8: With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 5 inactive batches and 15 active batches each with 2500 particles.
Step9: Let us also create a plot file that we can use to verify that our pin cell geometry was created successfully.
Step10: With the plots.xml file, we can now generate and view the plot. OpenMC outputs plots in .ppm format, which can be converted into a compressed format like .png with the convert utility.
Step11: As we can see from the plot, we have a nice pin cell with fuel, cladding, and water! Before we run our simulation, we need to tell the code what we want to tally. The following code shows how to create a variety of tallies.
Step12: Now we have a complete set of inputs, so we can go ahead and run our simulation.
Step13: Tally Data Processing
Step14: You may have also noticed we instructed OpenMC to create a summary file with lots of geometry information in it. This can help to produce more sensible output from the Python API, so we will use the summary file to link against.
Step15: We have a tally of the total fission rate and the total absorption rate, so we can calculate k-infinity as $$k_\infty = \frac{\langle \nu\Sigma_f\phi \rangle}{\langle \Sigma_a \phi \rangle}$$
Step16: Notice that even though the neutron production rate and absorption rate are separate tallies, we still get a first-order estimate of the uncertainty on the quotient of them automatically!
Step17: The fast fission factor can be calculated as $$\epsilon = \frac{\langle \nu\Sigma_f\phi \rangle}{\langle \nu\Sigma_f\phi \rangle_T}$$
Step18: The thermal flux utilization is calculated as $$f = \frac{\langle \Sigma_a \phi \rangle^F_T}{\langle \Sigma_a \phi \rangle_T}$$
Step19: The final factor is the number of fission neutrons produced per absorption in fuel, calculated as $$\eta = \frac{\langle \nu\Sigma_f\phi \rangle_T}{\langle \Sigma_a \phi \rangle^F_T}$$
Step20: Now we can calculate $k_\infty$ using the product of the factors from the four-factor formula.
Step21: We see that the value we've obtained here has exactly the same mean as before. However, because of the way it was calculated, the standard deviation appears to be larger.
Step22: We see that when the two tallies with multiple bins were divided, the derived tally contains the outer product of the combinations. If the filters/scores are the same, no outer product is needed. The get_values(...) method allows us to obtain a subset of tally scores. In the following example, we obtain just the neutron production microscopic cross sections.
Step23: The same idea can be used not only for scores but also for filters and nuclides.
Step24: A more advanced method is to use get_slice(...) to create a new derived tally that is a subset of an existing tally. This has the benefit that we can use get_pandas_dataframe() to see the tallies in a more human-readable format.
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
import glob
from IPython.display import Image
import numpy as np
import openmc
from openmc.statepoint import StatePoint
from openmc.summary import Summary
from openmc.source import Source
from openmc.stats import Box
%matplotlib inline
# Instantiate some Nuclides
h1 = openmc.Nuclide('H-1')
b10 = openmc.Nuclide('B-10')
o16 = openmc.Nuclide('O-16')
u235 = openmc.Nuclide('U-235')
u238 = openmc.Nuclide('U-238')
zr90 = openmc.Nuclide('Zr-90')
# 1.6 enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_nuclide(u235, 3.7503e-4)
fuel.add_nuclide(u238, 2.2625e-2)
fuel.add_nuclide(o16, 4.6007e-2)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide(h1, 4.9457e-2)
water.add_nuclide(o16, 2.4732e-2)
water.add_nuclide(b10, 8.0042e-6)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_nuclide(zr90, 7.2758e-3)
# Instantiate a MaterialsFile, add Materials
materials_file = openmc.MaterialsFile()
materials_file.add_material(fuel)
materials_file.add_material(water)
materials_file.add_material(zircaloy)
materials_file.default_xs = '71c'
# Export to "materials.xml"
materials_file.export_to_xml()
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.45720)
# Create boundary planes to surround the geometry
# Use both reflective and vacuum boundaries to make life interesting
min_x = openmc.XPlane(x0=-0.63, boundary_type='reflective')
max_x = openmc.XPlane(x0=+0.63, boundary_type='reflective')
min_y = openmc.YPlane(y0=-0.63, boundary_type='reflective')
max_y = openmc.YPlane(y0=+0.63, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-0.63, boundary_type='reflective')
max_z = openmc.ZPlane(z0=+0.63, boundary_type='reflective')
# Create a Universe to encapsulate a fuel pin
pin_cell_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
pin_cell_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
pin_cell_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
pin_cell_universe.add_cell(moderator_cell)
# Create root Cell
root_cell = openmc.Cell(name='root cell')
root_cell.fill = pin_cell_universe
# Add boundary planes
root_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z
# Create root Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(root_cell)
# Create Geometry and set root Universe
geometry = openmc.Geometry()
geometry.root_universe = root_universe
# Instantiate a GeometryFile
geometry_file = openmc.GeometryFile()
geometry_file.geometry = geometry
# Export to "geometry.xml"
geometry_file.export_to_xml()
# OpenMC simulation parameters
batches = 20
inactive = 5
particles = 2500
# Instantiate a SettingsFile
settings_file = openmc.SettingsFile()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': True, 'summary': True}
source_bounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]
settings_file.source = Source(space=Box(
source_bounds[:3], source_bounds[3:]))
# Export to "settings.xml"
settings_file.export_to_xml()
# Instantiate a Plot
plot = openmc.Plot(plot_id=1)
plot.filename = 'materials-xy'
plot.origin = [0, 0, 0]
plot.width = [1.26, 1.26]
plot.pixels = [250, 250]
plot.color = 'mat'
# Instantiate a PlotsFile, add Plot, and export to "plots.xml"
plot_file = openmc.PlotsFile()
plot_file.add_plot(plot)
plot_file.export_to_xml()
# Run openmc in plotting mode
executor = openmc.Executor()
executor.plot_geometry(output=False)
# Convert OpenMC's funky ppm to png
!convert materials-xy.ppm materials-xy.png
# Display the materials plot inline
Image(filename='materials-xy.png')
# Instantiate an empty TalliesFile
tallies_file = openmc.TalliesFile()
# Create Tallies to compute microscopic multi-group cross-sections
# Instantiate energy filter for multi-group cross-section Tallies
energy_filter = openmc.Filter(type='energy', bins=[0., 0.625e-6, 20.])
# Instantiate flux Tally in moderator and fuel
tally = openmc.Tally(name='flux')
tally.add_filter(openmc.Filter(type='cell', bins=[fuel_cell.id, moderator_cell.id]))
tally.add_filter(energy_filter)
tally.add_score('flux')
tallies_file.add_tally(tally)
# Instantiate reaction rate Tally in fuel
tally = openmc.Tally(name='fuel rxn rates')
tally.add_filter(openmc.Filter(type='cell', bins=[fuel_cell.id]))
tally.add_filter(energy_filter)
tally.add_score('nu-fission')
tally.add_score('scatter')
tally.add_nuclide(u238)
tally.add_nuclide(u235)
tallies_file.add_tally(tally)
# Instantiate reaction rate Tally in moderator
tally = openmc.Tally(name='moderator rxn rates')
tally.add_filter(openmc.Filter(type='cell', bins=[moderator_cell.id]))
tally.add_filter(energy_filter)
tally.add_score('absorption')
tally.add_score('total')
tally.add_nuclide(o16)
tally.add_nuclide(h1)
tallies_file.add_tally(tally)
# K-Eigenvalue (infinity) tallies
fiss_rate = openmc.Tally(name='fiss. rate')
abs_rate = openmc.Tally(name='abs. rate')
fiss_rate.add_score('nu-fission')
abs_rate.add_score('absorption')
tallies_file.add_tally(fiss_rate)
tallies_file.add_tally(abs_rate)
# Resonance Escape Probability tallies
therm_abs_rate = openmc.Tally(name='therm. abs. rate')
therm_abs_rate.add_score('absorption')
therm_abs_rate.add_filter(openmc.Filter(type='energy', bins=[0., 0.625]))
tallies_file.add_tally(therm_abs_rate)
# Thermal Flux Utilization tallies
fuel_therm_abs_rate = openmc.Tally(name='fuel therm. abs. rate')
fuel_therm_abs_rate.add_score('absorption')
fuel_therm_abs_rate.add_filter(openmc.Filter(type='energy', bins=[0., 0.625]))
fuel_therm_abs_rate.add_filter(openmc.Filter(type='cell', bins=[fuel_cell.id]))
tallies_file.add_tally(fuel_therm_abs_rate)
# Fast Fission Factor tallies
therm_fiss_rate = openmc.Tally(name='therm. fiss. rate')
therm_fiss_rate.add_score('nu-fission')
therm_fiss_rate.add_filter(openmc.Filter(type='energy', bins=[0., 0.625]))
tallies_file.add_tally(therm_fiss_rate)
# Instantiate energy filter to illustrate Tally slicing
energy_filter = openmc.Filter(type='energy', bins=np.logspace(np.log10(1e-8), np.log10(20), 10))
# Instantiate flux Tally in moderator and fuel
tally = openmc.Tally(name='need-to-slice')
tally.add_filter(openmc.Filter(type='cell', bins=[fuel_cell.id, moderator_cell.id]))
tally.add_filter(energy_filter)
tally.add_score('nu-fission')
tally.add_score('scatter')
tally.add_nuclide(h1)
tally.add_nuclide(u238)
tallies_file.add_tally(tally)
# Export to "tallies.xml"
tallies_file.export_to_xml()
# Remove old HDF5 (summary, statepoint) files
!rm statepoint.*
# Run OpenMC with MPI!
executor.run_simulation()
# Load the statepoint file
sp = StatePoint('statepoint.20.h5')
# Load the summary file and link with statepoint
su = Summary('summary.h5')
sp.link_with_summary(su)
# Compute k-infinity using tally arithmetic
fiss_rate = sp.get_tally(name='fiss. rate')
abs_rate = sp.get_tally(name='abs. rate')
keff = fiss_rate / abs_rate
keff.get_pandas_dataframe()
# Compute resonance escape probability using tally arithmetic
therm_abs_rate = sp.get_tally(name='therm. abs. rate')
res_esc = therm_abs_rate / abs_rate
res_esc.get_pandas_dataframe()
# Compute fast fission factor factor using tally arithmetic
therm_fiss_rate = sp.get_tally(name='therm. fiss. rate')
fast_fiss = fiss_rate / therm_fiss_rate
fast_fiss.get_pandas_dataframe()
# Compute thermal flux utilization factor using tally arithmetic
fuel_therm_abs_rate = sp.get_tally(name='fuel therm. abs. rate')
therm_util = fuel_therm_abs_rate / therm_abs_rate
therm_util.get_pandas_dataframe()
# Compute neutrons produced per absorption (eta) using tally arithmetic
eta = therm_fiss_rate / fuel_therm_abs_rate
eta.get_pandas_dataframe()
keff = res_esc * fast_fiss * therm_util * eta
keff.get_pandas_dataframe()
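# Optional side-by-side check (a sketch using only the tallies computed above): the
# four-factor product should reproduce the mean of the direct nu-fission / absorption
# ratio, while its standard deviation is inflated by the first-order error propagation.
(fiss_rate / abs_rate).get_pandas_dataframe()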
# Compute microscopic multi-group cross-sections
flux = sp.get_tally(name='flux')
flux = flux.get_slice(filters=['cell'], filter_bins=[(fuel_cell.id,)])
fuel_rxn_rates = sp.get_tally(name='fuel rxn rates')
mod_rxn_rates = sp.get_tally(name='moderator rxn rates')
fuel_xs = fuel_rxn_rates / flux
fuel_xs.get_pandas_dataframe()
# Show how to use Tally.get_values(...) with a CrossScore
nu_fiss_xs = fuel_xs.get_values(scores=['(nu-fission / flux)'])
print(nu_fiss_xs)
# Show how to use Tally.get_values(...) with a CrossScore and CrossNuclide
u235_scatter_xs = fuel_xs.get_values(nuclides=['(U-235 / total)'],
scores=['(scatter / flux)'])
print(u235_scatter_xs)
# Show how to use Tally.get_values(...) with a CrossFilter and CrossScore
fast_scatter_xs = fuel_xs.get_values(filters=['energy'],
filter_bins=[((0.625e-6, 20.),)],
scores=['(scatter / flux)'])
print(fast_scatter_xs)
# "Slice" the nu-fission data into a new derived Tally
nu_fission_rates = fuel_rxn_rates.get_slice(scores=['nu-fission'])
nu_fission_rates.get_pandas_dataframe()
# "Slice" the H-1 scatter data in the moderator Cell into a new derived Tally
need_to_slice = sp.get_tally(name='need-to-slice')
slice_test = need_to_slice.get_slice(scores=['scatter'], nuclides=['H-1'],
filters=['cell'], filter_bins=[(moderator_cell.id,)])
slice_test.get_pandas_dataframe()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Library functions
Step5: Download model and vocabulary
Step6: Load the model into TensorFlow
Step7: Load tensors for class prediction
Step8: Load mapping from neural network outputs to Pfam family names
Step9: Download data for inference
Step10: Predict domain Pfam labels for 126 thousand domains
Step11: Compute accuracy
|
<ASSISTANT_TASK:>
Python Code:
import json
import numpy as np
import tensorflow.compat.v1 as tf
import tqdm
# Suppress noisy log messages.
from tensorflow.python.util import deprecation
deprecation._PRINT_DEPRECATION_WARNINGS = False
AMINO_ACID_VOCABULARY = [
'A', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'K', 'L', 'M', 'N', 'P', 'Q', 'R',
'S', 'T', 'V', 'W', 'Y'
]
# Gap/pad character referenced by residues_to_one_hot below; defined here because the
# function uses _PFAM_GAP_CHARACTER but the snippet omitted the assignment. '.' is
# assumed, consistent with the Pfam gap character described in the docstring below.
_PFAM_GAP_CHARACTER = '.'
def residues_to_one_hot(amino_acid_residues):
    """Given a sequence of amino acids, return one hot array.

    Supports ambiguous amino acid characters B, Z, and X by distributing evenly
    over possible values, e.g. an 'X' gets mapped to [.05, .05, ... , .05].

    Supports rare amino acids by appropriately substituting. See
    normalize_sequence_to_blosum_characters for more information.

    Supports gaps and pads with the '.' and '-' characters; which are mapped to
    the zero vector.

    Args:
      amino_acid_residues: string. consisting of characters from
        AMINO_ACID_VOCABULARY

    Returns:
      A numpy array of shape (len(amino_acid_residues),
      len(AMINO_ACID_VOCABULARY)).

    Raises:
      ValueError: if sparse_amino_acid has a character not in the vocabulary + X.
    """
to_return = []
normalized_residues = amino_acid_residues.replace('U', 'C').replace('O', 'X')
for char in normalized_residues:
if char in AMINO_ACID_VOCABULARY:
to_append = np.zeros(len(AMINO_ACID_VOCABULARY))
to_append[AMINO_ACID_VOCABULARY.index(char)] = 1.
to_return.append(to_append)
elif char == 'B': # Asparagine or aspartic acid.
to_append = np.zeros(len(AMINO_ACID_VOCABULARY))
to_append[AMINO_ACID_VOCABULARY.index('D')] = .5
to_append[AMINO_ACID_VOCABULARY.index('N')] = .5
to_return.append(to_append)
elif char == 'Z': # Glutamine or glutamic acid.
to_append = np.zeros(len(AMINO_ACID_VOCABULARY))
to_append[AMINO_ACID_VOCABULARY.index('E')] = .5
to_append[AMINO_ACID_VOCABULARY.index('Q')] = .5
to_return.append(to_append)
elif char == 'X':
to_return.append(
np.full(len(AMINO_ACID_VOCABULARY), 1. / len(AMINO_ACID_VOCABULARY)))
elif char == _PFAM_GAP_CHARACTER:
to_return.append(np.zeros(len(AMINO_ACID_VOCABULARY)))
else:
raise ValueError('Could not one-hot code character {}'.format(char))
return np.array(to_return)
def _test_residues_to_one_hot():
expected = np.zeros((3, 20))
expected[0, 0] = 1. # Amino acid A
expected[1, 1] = 1. # Amino acid C
expected[2, :] = .05 # Amino acid X
actual = residues_to_one_hot('ACX')
np.testing.assert_allclose(actual, expected)
_test_residues_to_one_hot()
def pad_one_hot_sequence(sequence: np.ndarray,
target_length: int) -> np.ndarray:
    """Pads one hot sequence [seq_len, num_aas] in the seq_len dimension."""
sequence_length = sequence.shape[0]
pad_length = target_length - sequence_length
if pad_length < 0:
raise ValueError(
'Cannot set a negative amount of padding. Sequence length was {}, target_length was {}.'
.format(sequence_length, target_length))
pad_values = [[0, pad_length], [0, 0]]
return np.pad(sequence, pad_values, mode='constant')
def _test_pad_one_hot():
input_one_hot = residues_to_one_hot('ACX')
expected = np.array(input_one_hot.tolist() + np.zeros((4, 20)).tolist())
actual = pad_one_hot_sequence(input_one_hot, 7)
np.testing.assert_allclose(expected, actual)
_test_pad_one_hot()
def batch_iterable(iterable, batch_size):
    """Yields batches from an iterable.

    If the number of elements in the iterator is not a multiple of batch size,
    the last batch will have fewer elements.

    Args:
      iterable: a potentially infinite iterable.
      batch_size: the size of batches to return.

    Yields:
      array of length batch_size, containing elements, in order, from iterable.

    Raises:
      ValueError: if batch_size < 1.
    """
if batch_size < 1:
raise ValueError(
'Cannot have a batch size of less than 1. Received: {}'.format(
batch_size))
current = []
for item in iterable:
if len(current) == batch_size:
yield current
current = []
current.append(item)
# Prevent yielding an empty batch. Instead, prefer to end the generation.
if current:
yield current
def _test_batch_iterable():
itr = [1, 2, 3]
batched_itr = list(batch_iterable(itr, 2))
assert batched_itr == [[1, 2], [3]]
_test_batch_iterable()
# Get a TensorFlow SavedModel
!wget -qN https://storage.googleapis.com/brain-genomics-public/research/proteins/pfam/models/single_domain_per_sequence_zipped_models/seed_random_32.0/5356760.tar.gz
# unzip
!tar xzf 5356760.tar.gz
# Get the vocabulary for the model, which tells you which output index means which family
!wget https://storage.googleapis.com/brain-genomics-public/research/proteins/pfam/models/single_domain_per_sequence_zipped_models/trained_model_pfam_32.0_vocab.json
# Find the unzipped path
!ls *5356760*
sess = tf.Session()
graph = tf.Graph()
with graph.as_default():
saved_model = tf.saved_model.load(sess, ['serve'], 'trn-_cnn_random__random_sp_gpu-cnn_for_random_pfam-5356760')
top_pick_signature = saved_model.signature_def['serving_default']
top_pick_signature_tensor_name = top_pick_signature.outputs['output'].name
sequence_input_tensor_name = saved_model.signature_def['confidences'].inputs['sequence'].name
sequence_lengths_input_tensor_name = saved_model.signature_def['confidences'].inputs['sequence_length'].name
with open('trained_model_pfam_32.0_vocab.json') as f:
vocab = json.loads(f.read())
%%shell
for i in `seq 0 9`; do
wget https://storage.googleapis.com/brain-genomics-public/research/proteins/pfam/random_split/test/data-0000$i-of-00010;
done
import glob
import pandas as pd
test_dfs = []
for f_name in glob.glob('data*'):
with open(f_name) as f:
test_dfs.append(pd.read_csv(f))
test_df = pd.concat(test_dfs)
import math
def infer(batch):
seq_lens = [len(seq) for seq in batch]
one_hots = [residues_to_one_hot(seq) for seq in batch]
padded_sequence_inputs = [pad_one_hot_sequence(seq, max(seq_lens)) for seq in one_hots]
with graph.as_default():
return sess.run(
top_pick_signature_tensor_name,
{
sequence_input_tensor_name: padded_sequence_inputs,
sequence_lengths_input_tensor_name: seq_lens,
})
# Sort test_df by sequence length so that batches have as little padding as
# possible -> faster inference.
test_df = test_df.sort_values('sequence', key=lambda col: [len(c) for c in col])
inference_results = []
batches = list(batch_iterable(test_df.sequence, 32))
for seq_batch in tqdm.tqdm(batches, position=0):
inference_results.extend(infer(seq_batch))
test_df['predicted_label'] = [vocab[i] for i in inference_results]
# Convert true labels from PF00001.21 to PF00001
test_df['true_label'] = test_df.family_accession.apply(lambda s: s.split('.')[0])
print('family calling error rate (percentage) = {:.03f}'.format(100-sum(test_df.true_label == test_df.predicted_label) / len(test_df) * 100))
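# Optional follow-up (a sketch, not part of the original notebook): inspect a few of the
# domains the model got wrong, using only columns already defined above.
errors = test_df[test_df.true_label != test_df.predicted_label]
print('number of misclassified domains = {}'.format(len(errors)))
errors[['family_accession', 'true_label', 'predicted_label']].head()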
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Cover Sheets
Step2: Do the FILER_ID values vary between amendments to the same campaign filing?
Step3: Joining to FILER_FILINGS_CD
Step4: Do any records have conflicting CVR_CAMPAIGN_DISCLOSURE_CD.FORM_TYPE and FILER_FILINGS_CD.FORM_ID values?
Step5: Do any records have conflicting CVR_CAMPAIGN_DISCLOSURE_CD.FILER_ID and FILER_FILINGS_CD.FILER_ID values?
Step6: But one thing to note is that these FILER_ID fields are two different data types
Step7: Should probably look more into these later, but this might have something to do with conflicting filer_ids on CVR_CAMPAIGN_DISCLOSURE_CD and FILER_FILINGS_CD.
Step8: Mostly this seems to be a problem for Form 497 filings.
Step9: Does the FILER_FILINGS_CD.FORM_ID value ever vary between amendments to the same filing?
Step10: But there aren't any CVR_CAMPAIGN_DISCLOSURE_CD or SMRY_CD records for this filing_id, so maybe it isn't real.
Step11: Does the FILER_FILINGS_CD.FILER_ID value ever vary between amendments to the same filing?
|
<ASSISTANT_TASK:>
Python Code:
%load_ext sql
from django.conf import settings
connection_string = 'postgresql+psycopg2://{USER}:{PASSWORD}@{HOST}:{PORT}/{NAME}'.format(
**settings.DATABASES['default']
)
%sql $connection_string
%%sql
SELECT cvr."FILING_ID", COUNT(DISTINCT cvr."FORM_TYPE")
FROM "CVR_CAMPAIGN_DISCLOSURE_CD" cvr
JOIN "FILER_FILINGS_CD" ff
ON cvr."FILING_ID" = ff."FILING_ID"
AND cvr."AMEND_ID" = ff."FILING_SEQUENCE"
GROUP BY 1
HAVING COUNT(DISTINCT cvr."FORM_TYPE") > 1;
%%sql
SELECT "FILING_ID", COUNT(DISTINCT "FILER_ID")
FROM "CVR_CAMPAIGN_DISCLOSURE_CD"
GROUP BY 1
HAVING COUNT(DISTINCT "FILER_ID") > 1
ORDER BY COUNT(DISTINCT "FILER_ID") DESC;
%%sql
SELECT cvr."FORM_TYPE", cvr."FILING_ID", cvr."AMEND_ID", cvr."FILER_ID", cvr."FILER_NAML", cvr."RPT_DATE", *
FROM "CVR_CAMPAIGN_DISCLOSURE_CD" cvr
LEFT JOIN "FILER_FILINGS_CD" ff
ON cvr."FILING_ID" = ff."FILING_ID"
AND cvr."AMEND_ID" = ff."FILING_SEQUENCE"
WHERE ff."FILING_ID" IS NULL or ff."FILING_SEQUENCE" IS NULL;
%%sql
SELECT cvr."FORM_TYPE", ff."FORM_ID", cvr."FILING_ID", cvr."AMEND_ID", cvr."RPT_DATE", *
FROM "CVR_CAMPAIGN_DISCLOSURE_CD" cvr
JOIN "FILER_FILINGS_CD" ff
ON cvr."FILING_ID" = ff."FILING_ID"
AND cvr."AMEND_ID" = ff."FILING_SEQUENCE"
WHERE UPPER(cvr."FORM_TYPE") <> UPPER(ff."FORM_ID")
ORDER BY cvr."RPT_DATE" DESC, cvr."FILING_ID" DESC, cvr."AMEND_ID" DESC;
%%sql
SELECT COUNT(*)::float / (
SELECT COUNT(*)
FROM "CVR_CAMPAIGN_DISCLOSURE_CD" CVR
JOIN "FILER_FILINGS_CD" FF
ON CVR."FILING_ID" = FF."FILING_ID"
AND CVR."AMEND_ID" = FF."FILING_SEQUENCE"
) as pct_conflict
FROM "CVR_CAMPAIGN_DISCLOSURE_CD" CVR
JOIN "FILER_FILINGS_CD" FF
ON CVR."FILING_ID" = FF."FILING_ID"
AND CVR."AMEND_ID" = FF."FILING_SEQUENCE"
WHERE CVR."FILER_ID" <> FF."FILER_ID"::VARCHAR;
%%sql
SELECT
cvr."FILER_ID" as cvr_filer_id,
fn."FILER_ID" as filername_filer_id,
f."FILER_ID" as filer_filer_id
FROM (
SELECT DISTINCT cvr."FILER_ID"
FROM "CVR_CAMPAIGN_DISCLOSURE_CD" cvr
LEFT JOIN "FILER_XREF_CD" x
ON cvr."FILER_ID" = x."XREF_ID"
WHERE x."XREF_ID" IS NULL
) cvr
LEFT JOIN "FILERNAME_CD" fn
ON cvr."FILER_ID" = fn."FILER_ID"::varchar
LEFT JOIN "FILERS_CD" f
ON cvr."FILER_ID" = f."FILER_ID"::varchar
ORDER BY cvr."FILER_ID"::VARCHAR DESC;
%%sql
SELECT COUNT(*)::float / (
SELECT COUNT(*)
FROM "CVR_CAMPAIGN_DISCLOSURE_CD" CVR
JOIN "FILER_FILINGS_CD" FF
ON CVR."FILING_ID" = FF."FILING_ID"
AND CVR."AMEND_ID" = FF."FILING_SEQUENCE"
) as pct_conflict
FROM "CVR_CAMPAIGN_DISCLOSURE_CD" CVR
JOIN "FILER_FILINGS_CD" FF
ON CVR."FILING_ID" = FF."FILING_ID"
AND CVR."AMEND_ID" = FF."FILING_SEQUENCE"
JOIN "FILER_XREF_CD" X
ON CVR."FILER_ID" = X."XREF_ID"
WHERE X."FILER_ID" <> FF."FILER_ID";
%%sql
SELECT cvr."FORM_TYPE", COUNT(*)
FROM "CVR_CAMPAIGN_DISCLOSURE_CD" CVR
JOIN "FILER_FILINGS_CD" FF
ON CVR."FILING_ID" = FF."FILING_ID"
AND CVR."AMEND_ID" = FF."FILING_SEQUENCE"
JOIN "FILER_XREF_CD" X
ON CVR."FILER_ID" = X."XREF_ID"
WHERE X."FILER_ID" <> FF."FILER_ID"
GROUP BY 1;
%%sql
SELECT "FILING_ID", COUNT(DISTINCT "FORM_ID")
FROM "FILER_FILINGS_CD"
WHERE "FORM_ID" IN (
SELECT DISTINCT "FORM_TYPE"
FROM "CVR_CAMPAIGN_DISCLOSURE_CD"
)
GROUP BY 1
HAVING COUNT(DISTINCT "FORM_ID") > 1
ORDER BY 1 DESC;
%%sql
SELECT *
FROM "FILER_FILINGS_CD"
WHERE "FILING_ID" = 826532;
%%sql
SELECT *
FROM "CVR_CAMPAIGN_DISCLOSURE_CD"
WHERE "FILING_ID" = 826532;
SELECT *
FROM "SMRY_CD"
WHERE "FILING_ID" = 826532;
%%sql
SELECT "FILING_ID", COUNT(DISTINCT "FILING_SEQUENCE"), COUNT(DISTINCT "FILER_ID")
FROM "FILER_FILINGS_CD"
WHERE "FORM_ID" IN (
SELECT DISTINCT "FORM_TYPE"
FROM "CVR_CAMPAIGN_DISCLOSURE_CD"
)
GROUP BY 1
HAVING COUNT(DISTINCT "FILER_ID") > 1
ORDER BY 1 DESC;
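%%sql
-- Hypothetical follow-up (not part of the original analysis): list the individual
-- amendment rows for the filings found above so the conflicting FILER_ID values can be
-- inspected directly.
SELECT ff."FILING_ID", ff."FILING_SEQUENCE", ff."FILER_ID", ff."FORM_ID"
FROM "FILER_FILINGS_CD" ff
JOIN (
    SELECT "FILING_ID"
    FROM "FILER_FILINGS_CD"
    WHERE "FORM_ID" IN (
        SELECT DISTINCT "FORM_TYPE"
        FROM "CVR_CAMPAIGN_DISCLOSURE_CD"
    )
    GROUP BY 1
    HAVING COUNT(DISTINCT "FILER_ID") > 1
) conflicted ON ff."FILING_ID" = conflicted."FILING_ID"
ORDER BY ff."FILING_ID" DESC, ff."FILING_SEQUENCE"
LIMIT 20;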
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Data
Step2: Train Models
Step3: Random model
Step4: Plain wikipedia model
Step5: Debiased model
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import pandas as pd
from model_tool import ToxModel
SPLITS = ['train', 'dev', 'test']
wiki = {}
debias = {}
random = {}
for split in SPLITS:
wiki[split] = '../data/wiki_%s.csv' % split
debias[split] = '../data/wiki_debias_%s.csv' % split
random[split] = '../data/wiki_debias_random_%s.csv' % split
hparams = {'epochs': 4}
MODEL_NAME = 'cnn_debias_random_tox_v3'
debias_random_model = ToxModel(hparams=hparams)
debias_random_model.train(random['train'], random['dev'], text_column = 'comment', label_column = 'is_toxic', model_name = MODEL_NAME)
random_test = pd.read_csv(random['test'])
debias_random_model.score_auc(random_test['comment'], random_test['is_toxic'])
MODEL_NAME = 'cnn_wiki_tox_v3'
wiki_model = ToxModel(hparams=hparams)
wiki_model.train(wiki['train'], wiki['dev'], text_column = 'comment', label_column = 'is_toxic', model_name = MODEL_NAME)
wiki_test = pd.read_csv(wiki['test'])
wiki_model.score_auc(wiki_test['comment'], wiki_test['is_toxic'])
MODEL_NAME = 'cnn_debias_tox_v3'
debias_model = ToxModel(hparams=hparams)
debias_model.train(debias['train'], debias['dev'], text_column = 'comment', label_column = 'is_toxic', model_name = MODEL_NAME)
debias_test = pd.read_csv(debias['test'])
debias_model.score_auc(debias_test['comment'], debias_test['is_toxic'])
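# Optional cross-evaluation (a sketch, not part of the original experiment): score the
# plain wikipedia model on the debiased test set to compare against the debiased
# model's AUC printed above.
wiki_model.score_auc(debias_test['comment'], debias_test['is_toxic'])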
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Useful keyboard shortcuts
Step2: 3D plots
Step3: Animations
Step4: Interactive plots with Plotly
Step5: IPython built-in magic commands
Step7: Load an external python script
Step8: Load a specific symbol (function, class, ...)
Step10: Load specific lines
Step11: Time measurement
Step12: %timeit
Step13: ipywidget
Step14: ipywidgets.interact
Step15: Using interact as a decorator with named parameters
Step16: Integer (IntSlider)
Step17: Float (FloatSlider)
Step18: Boolean (Checkbox)
Step19: List (Dropdown)
Step20: Dictionnary (Dropdown)
Step21: Using interact as a decorator
Step22: Integer (IntSlider)
Step23: Float (FloatSlider)
Step24: Boolean (Checkbox)
Step25: List (Dropdown)
Step26: Dictionnary (Dropdown)
Step27: Using interact as a function
Step28: Integer (IntSlider)
Step29: Float (FloatSlider)
Step30: Boolean (Checkbox)
Step31: List (Dropdown)
Step32: Dictionnary (Dropdown)
Step33: Example of using multiple widgets on one function
Step34: Display images (PNG, JPEG, GIF, ...)
Step35: Within a Markdown cell
Step36: Generate a sound
Step37: Generate a multi-channel (stereo or more) sound
Step38: From URL
Step39: From file
Step40: From bytes
Step41: Youtube widget
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
# As an alternative, one may use: %pylab notebook
# For old Matplotlib and Ipython versions, use the non-interactive version:
# %matplotlib inline or %pylab inline
# To ignore warnings (http://stackoverflow.com/questions/9031783/hide-all-warnings-in-ipython)
import warnings
warnings.filterwarnings('ignore')
import math
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets
from ipywidgets import interact
x = np.arange(-2 * np.pi, 2 * np.pi, 0.1)
y = np.sin(x)
plt.plot(x, y)
from mpl_toolkits.mplot3d import axes3d
# Build datas ###############
x = np.arange(-5, 5, 0.25)
y = np.arange(-5, 5, 0.25)
xx,yy = np.meshgrid(x, y)
z = np.sin(np.sqrt(xx**2 + yy**2))
# Plot data #################
fig = plt.figure()
ax = axes3d.Axes3D(fig)
ax.plot_wireframe(xx, yy, z)
plt.show()
from matplotlib.animation import FuncAnimation
# Plots
fig, ax = plt.subplots()
def update(frame):
x = np.arange(frame/10., frame/10. + 2. * math.pi, 0.1)
ax.clear()
ax.plot(x, np.cos(x))
# Optional: save plots
filename = "img_{:03}.png".format(frame)
plt.savefig(filename)
# Note: "interval" is in ms
anim = FuncAnimation(fig, update, interval=100)
plt.show()
%%html
<div id="toc"></div>
%%javascript
var toc = document.getElementById("toc");
toc.innerHTML = "<b>Table of contents:</b>";
toc.innerHTML += "<ol>"
var h_list = $("h2, h3"); //$("h2"); // document.getElementsByTagName("h2");
for(var i = 0 ; i < h_list.length ; i++) {
var h = h_list[i];
var h_str = h.textContent.slice(0, -1); // "slice(0, -1)" remove the last character
if(h_str.length > 0) {
if(h.tagName == "H2") { // https://stackoverflow.com/questions/10539419/javascript-get-elements-tag
toc.innerHTML += "<li><a href=\"#" + h_str.replace(/\s+/g, '-') + "\">" + h_str + "</a></li>";
} else if(h.tagName == "H3") { // https://stackoverflow.com/questions/10539419/javascript-get-elements-tag
toc.innerHTML += "<li> <a href=\"#" + h_str.replace(/\s+/g, '-') + "\">" + h_str + "</a></li>";
}
}
}
toc.innerHTML += "</ol>"
%run ./notebook_snippets_run_test.py
%run ./notebook_snippets_run_mpl_test.py
# %load ./notebook_snippets_run_mpl_test.py
#!/usr/bin/env python3
# Copyright (c) 2012 Jérémie DECOCK (http://www.jdhp.org)
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
"""
This module has been written to illustrate the ``%run`` magic command in
``notebook_snippets.ipynb``.
"""
import numpy as np
import matplotlib.pyplot as plt
def main():
x = np.arange(-10, 10, 0.1)
y = np.cos(x)
plt.plot(x, y)
plt.grid(True)
plt.show()
if __name__ == '__main__':
main()
# %load -s main ./notebook_snippets_run_mpl_test.py
def main():
x = np.arange(-10, 10, 0.1)
y = np.cos(x)
plt.plot(x, y)
plt.grid(True)
plt.show()
# %load -r 22-41 ./notebook_snippets_run_mpl_test.py
"""
This module has been written to illustrate the ``%run`` magic command in
``notebook_snippets.ipynb``.
"""
import numpy as np
import matplotlib.pyplot as plt
def main():
x = np.arange(-10, 10, 0.1)
y = np.cos(x)
plt.plot(x, y)
plt.grid(True)
plt.show()
if __name__ == '__main__':
main()
%%time
plt.hist(np.random.normal(loc=0.0, scale=1.0, size=100000), bins=50)
%%timeit
plt.hist(np.random.normal(loc=0.0, scale=1.0, size=100000), bins=50)
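# The %timeit line magic (as opposed to the %%timeit cell magic above) also accepts
# options, e.g. -n for the number of loops per measurement and -r for the number of repeats.
%timeit -n 10 -r 3 np.random.normal(loc=0.0, scale=1.0, size=100000)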
#help(ipywidgets)
#dir(ipywidgets)
from ipywidgets import IntSlider
from IPython.display import display
slider = IntSlider(min=1, max=10)
display(slider)
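# The widget's current value can be read and set programmatically through its .value trait.
print(slider.value)
slider.value = 7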
#help(ipywidgets.interact)
@interact(text="IPython Widgets")
def greeting(text):
print("Hello {}".format(text))
@interact(num=5)
def square(num):
print("{} squared is {}".format(num, num*num))
@interact(num=(0, 100))
def square(num):
print("{} squared is {}".format(num, num*num))
@interact(num=(0, 100, 10))
def square(num):
print("{} squared is {}".format(num, num*num))
@interact(num=5.)
def square(num):
print("{} squared is {}".format(num, num*num))
@interact(num=(0., 10.))
def square(num):
print("{} squared is {}".format(num, num*num))
@interact(num=(0., 10., 0.5))
def square(num):
print("{} squared is {}".format(num, num*num))
@interact(upper=False)
def greeting(upper):
text = "hello"
if upper:
print(text.upper())
else:
print(text.lower())
@interact(name=["John", "Bob", "Alice"])
def greeting(name):
print("Hello {}".format(name))
@interact(word={"One": "Un", "Two": "Deux", "Three": "Trois"})
def translate(word):
print(word)
x = np.arange(-2 * np.pi, 2 * np.pi, 0.1)
@interact(function={"Sin": np.sin, "Cos": np.cos})
def plot(function):
y = function(x)
plt.plot(x, y)
@interact
def greeting(text="World"):
print("Hello {}".format(text))
@interact
def square(num=2):
print("{} squared is {}".format(num, num*num))
@interact
def square(num=(0, 100)):
print("{} squared is {}".format(num, num*num))
@interact
def square(num=(0, 100, 10)):
print("{} squared is {}".format(num, num*num))
@interact
def square(num=5.):
print("{} squared is {}".format(num, num*num))
@interact
def square(num=(0., 10.)):
print("{} squared is {}".format(num, num*num))
@interact
def square(num=(0., 10., 0.5)):
print("{} squared is {}".format(num, num*num))
@interact
def greeting(upper=False):
text = "hello"
if upper:
print(text.upper())
else:
print(text.lower())
@interact
def greeting(name=["John", "Bob", "Alice"]):
print("Hello {}".format(name))
@interact
def translate(word={"One": "Un", "Two": "Deux", "Three": "Trois"}):
print(word)
x = np.arange(-2 * np.pi, 2 * np.pi, 0.1)
@interact
def plot(function={"Sin": np.sin, "Cos": np.cos}):
y = function(x)
plt.plot(x, y)
def greeting(text):
print("Hello {}".format(text))
interact(greeting, text="IPython Widgets")
def square(num):
print("{} squared is {}".format(num, num*num))
interact(square, num=5)
def square(num):
print("{} squared is {}".format(num, num*num))
interact(square, num=(0, 100))
def square(num):
print("{} squared is {}".format(num, num*num))
interact(square, num=(0, 100, 10))
def square(num):
print("{} squared is {}".format(num, num*num))
interact(square, num=5.)
def square(num):
print("{} squared is {}".format(num, num*num))
interact(square, num=(0., 10.))
def square(num):
print("{} squared is {}".format(num, num*num))
interact(square, num=(0., 10., 0.5))
def greeting(upper):
text = "hello"
if upper:
print(text.upper())
else:
print(text.lower())
interact(greeting, upper=False)
def greeting(name):
print("Hello {}".format(name))
interact(greeting, name=["John", "Bob", "Alice"])
def translate(word):
print(word)
interact(translate, word={"One": "Un", "Two": "Deux", "Three": "Trois"})
x = np.arange(-2 * np.pi, 2 * np.pi, 0.1)
def plot(function):
y = function(x)
plt.plot(x, y)
interact(plot, function={"Sin": np.sin, "Cos": np.cos})
@interact(upper=False, name=["john", "bob", "alice"])
def greeting(upper, name):
text = "hello {}".format(name)
if upper:
print(text.upper())
else:
print(text.lower())
from IPython.display import Image
Image("fourier.gif")
from IPython.display import Audio
framerate = 44100
t = np.linspace(0, 5, framerate*5)
data = np.sin(2*np.pi*220*t) + np.sin(2*np.pi*224*t)
Audio(data, rate=framerate)
data_left = np.sin(2 * np.pi * 220 * t)
data_right = np.sin(2 * np.pi * 224 * t)
Audio([data_left, data_right], rate=framerate)
Audio("http://www.nch.com.au/acm/8k16bitpcm.wav")
Audio(url="http://www.w3schools.com/html/horse.ogg")
#Audio('/path/to/sound.wav')
#Audio(filename='/path/to/sound.ogg')
#Audio(b'RAW_WAV_DATA..)
#Audio(data=b'RAW_WAV_DATA..)
from IPython.display import YouTubeVideo
vid = YouTubeVideo("0HlRtU8clt4")
display(vid)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: These sentences have very similar content, and as such the WMD should be low. Before we compute the WMD, we want to remove stopwords ("the", "to", etc.), as these do not contribute a lot to the information in the sentences.
Step2: Now, as mentioned earlier, we will be using some downloaded pre-trained embeddings. We load these into a Gensim Word2Vec model class. Note that the embeddings we have chosen here require a lot of memory.
Step3: So let's compute WMD using the wmdistance method.
Step4: Let's try the same thing with two completely unrelated sentences. Notice that the distance is larger.
Step5: Normalizing word2vec vectors
Step6: Part 2
Step7: Below is a histogram of the document lengths, with the average document length marked as well. Note that these are the pre-processed documents, meaning stopwords are removed, punctuation is removed, etc. Document lengths have a high impact on the running time of WMD, so when comparing running times with this experiment, the number of documents in the query corpus (about 4000) and the length of the documents (about 62 words on average) should be taken into account.
Step8: Now we want to initialize the similarity class with a corpus and a word2vec model (which provides the embeddings and the wmdistance method itself).
Step9: The num_best parameter decides how many results the queries return. Now let's try making a query. The output is a list of indeces and similarities of documents in the corpus, sorted by similarity.
Step10: The query and the most similar documents, together with the similarities, are printed below. We see that the retrieved documents are discussing the same thing as the query, although using different words. The query talks about getting a seat "outdoor", while the results talk about sitting "outside", and one of them says the restaurant has a "nice view".
Step11: Let's try a different query, also taken directly from one of the reviews in the corpus.
Step12: This time around, the results are more straightforward; the retrieved documents basically contain the same words as the query.
|
<ASSISTANT_TASK:>
Python Code:
from time import time
start_nb = time()
# Initialize logging.
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s')
sentence_obama = 'Obama speaks to the media in Illinois'
sentence_president = 'The president greets the press in Chicago'
sentence_obama = sentence_obama.lower().split()
sentence_president = sentence_president.lower().split()
# Import and download stopwords from NLTK.
from nltk.corpus import stopwords
from nltk import download
download('stopwords') # Download stopwords list.
# Remove stopwords.
stop_words = stopwords.words('english')
sentence_obama = [w for w in sentence_obama if w not in stop_words]
sentence_president = [w for w in sentence_president if w not in stop_words]
start = time()
from gensim.models import Word2Vec
model = Word2Vec.load_word2vec_format('/data/w2v_googlenews/GoogleNews-vectors-negative300.bin.gz', binary=True)
print 'Cell took %.2f seconds to run.' %(time() - start)
distance = model.wmdistance(sentence_obama, sentence_president)
print 'distance = %.4f' % distance
sentence_orange = 'Oranges are my favorite fruit'
sentence_orange = sentence_orange.lower().split()
sentence_orange = [w for w in sentence_orange if w not in stop_words]
distance = model.wmdistance(sentence_obama, sentence_orange)
print 'distance = %.4f' % distance
# Normalizing word2vec vectors.
start = time()
model.init_sims(replace=True) # Normalizes the vectors in the word2vec class.
distance = model.wmdistance(sentence_obama, sentence_president) # Compute WMD as normal.
print 'Cell took %.2f seconds to run.' %(time() - start)
# Pre-processing a document.
from nltk import word_tokenize
download('punkt') # Download data for tokenizer.
def preprocess(doc):
doc = doc.lower() # Lower the text.
doc = word_tokenize(doc) # Split into words.
doc = [w for w in doc if not w in stop_words] # Remove stopwords.
doc = [w for w in doc if w.isalpha()] # Remove numbers and punctuation.
return doc
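# Quick sanity check of the pre-processing on a short review-style sentence.
print preprocess('The outdoor seating was great, but the prices were not reasonable!')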
start = time()
import json
# Business IDs of the restaurants.
ids = ['4bEjOyTaDG24SY5TxsaUNQ', '2e2e7WgqU1BnpxmQL5jbfw', 'zt1TpTuJ6y9n551sw9TaEg',
'Xhg93cMdemu5pAMkDoEdtQ', 'sIyHTizqAiGu12XMLX3N3g', 'YNQgak-ZLtYJQxlDwN-qIg']
w2v_corpus = [] # Documents to train word2vec on (all 6 restaurants).
wmd_corpus = [] # Documents to run queries against (only one restaurant).
documents = [] # wmd_corpus, with no pre-processing (so we can see the original documents).
with open('/data/yelp_academic_dataset_review.json') as data_file:
for line in data_file:
json_line = json.loads(line)
if json_line['business_id'] not in ids:
# Not one of the 6 restaurants.
continue
# Pre-process document.
text = json_line['text'] # Extract text from JSON object.
text = preprocess(text)
# Add to corpus for training Word2Vec.
w2v_corpus.append(text)
if json_line['business_id'] == ids[0]:
# Add to corpus for similarity queries.
wmd_corpus.append(text)
documents.append(json_line['text'])
print 'Cell took %.2f seconds to run.' %(time() - start)
from matplotlib import pyplot as plt
%matplotlib inline
# Document lengths.
lens = [len(doc) for doc in wmd_corpus]
# Plot.
plt.rc('figure', figsize=(8,6))
plt.rc('font', size=14)
plt.rc('lines', linewidth=2)
plt.rc('axes', color_cycle=('#377eb8','#e41a1c','#4daf4a',
'#984ea3','#ff7f00','#ffff33'))
# Histogram.
plt.hist(lens, bins=20)
plt.hold(True)
# Average length.
avg_len = sum(lens) / float(len(lens))
plt.axvline(avg_len, color='#e41a1c')
plt.hold(False)
plt.title('Histogram of document lengths.')
plt.xlabel('Length')
plt.text(100, 800, 'mean = %.2f' % avg_len)
plt.show()
# Train Word2Vec on all the restaurants.
model = Word2Vec(w2v_corpus, workers=3, size=100)
# Initialize WmdSimilarity.
from gensim.similarities import WmdSimilarity
num_best = 10
instance = WmdSimilarity(wmd_corpus, model, num_best=10)
start = time()
sent = 'Very good, you should seat outdoor.'
query = preprocess(sent)
sims = instance[query] # A query is simply a "look-up" in the similarity class.
print 'Cell took %.2f seconds to run.' %(time() - start)
# Print the query and the retrieved documents, together with their similarities.
print 'Query:'
print sent
for i in range(num_best):
print
print 'sim = %.4f' % sims[i][1]
print documents[sims[i][0]]
start = time()
sent = 'I felt that the prices were extremely reasonable for the Strip'
query = preprocess(sent)
sims = instance[query] # A query is simply a "look-up" in the similarity class.
print 'Query:'
print sent
for i in range(num_best):
print
print 'sim = %.4f' % sims[i][1]
print documents[sims[i][0]]
print '\nCell took %.2f seconds to run.' %(time() - start)
print 'Notebook took %.2f seconds to run.' %(time() - start_nb)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Just like how each ordinary differential equation (ODE) is governed by a vector field, a stochastic differential equation (SDE) is governed by two vector fields, which are called the drift and diffusion functions
Step2: The functions f and g are arbitrarily chosen for demonstration purposes. The attributes noise_type and sde_type must be included to inform the solver about how to interpret the SDE, and determine which numerical methods are available. See below for more detail on how the output of g depends on the different noise types.
Step3: For Itô SDEs method='euler' means the strong order 0.5 Euler-Maruyama method is used. Other possible methods include the strong order 1.0 milstein and the strong order 1.5 srk, both of which are of slightly higher order. If method is set to None, an appropriate solver would be chosen based on noise_type and sde_type under the hood.
Step4: For Stratonovich SDEs, the methods midpoint, euler_heun, heun, milstein, and log_ode are supported.
Step5: Trivially, the previous code may be adapted to run on GPUs, just by moving all tensors to a GPU
Step6: A side note is that multi-GPU data parallel is possible with the existing codebase, but the use case has not been tested out extensively and may require defining non-standard SDE classes and methods.
Step7: We can also create the Brownian motion on GPUs by specifying device
Step8: Having a Brownian motion object helps us gain control over the randomness better. We can feed the object into the solver such that the solver's solution is conditioned on this path.
Step9: 3. Noise type of SDEs affects which solvers can be used and what strong orders can be attained<a id='noise_type'></a>
Step10: Switching to adjoint-mode gradient computation is as simple as replacing sdeint with sdeint_adjoint
|
<ASSISTANT_TASK:>
Python Code:
import torch
from torch import nn
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
%matplotlib inline
import matplotlib.pyplot as plt
import torchsde
def plot(ts, samples, xlabel, ylabel, title=''):
ts = ts.cpu()
samples = samples.squeeze().t().cpu()
plt.figure()
for i, sample in enumerate(samples):
plt.plot(ts, sample, marker='x', label=f'sample {i}')
plt.title(title)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.legend()
plt.show()
class SDE(nn.Module):
def __init__(self):
super().__init__()
self.theta = nn.Parameter(torch.tensor(0.1), requires_grad=True) # Scalar parameter.
self.noise_type = "diagonal"
self.sde_type = "ito"
def f(self, t, y):
return torch.sin(t) + self.theta * y
def g(self, t, y):
return 0.3 * torch.sigmoid(torch.cos(t) * torch.exp(-y))
batch_size, state_size, t_size = 3, 1, 100
sde = SDE()
ts = torch.linspace(0, 1, t_size)
y0 = torch.full(size=(batch_size, state_size), fill_value=0.1)
with torch.no_grad():
ys = torchsde.sdeint(sde, y0, ts, method='euler') # (t_size, batch_size, state_size) = (100, 3, 1).
plot(ts, ys, xlabel='$t$', ylabel='$Y_t$')
sde.sde_type = "stratonovich"
with torch.no_grad():
ys = torchsde.sdeint(sde, y0, ts, method="midpoint") # (t_size, batch_size, state_size) = (100, 3, 1).
plot(ts, ys, xlabel='$t$', ylabel='$Y_t$')
class SDENewName(nn.Module):
def __init__(self):
super().__init__()
self.theta = nn.Parameter(torch.tensor(0.1), requires_grad=False) # Scalar parameter.
self.noise_type = "diagonal"
self.sde_type = "ito"
def h(self, t, y):
return torch.sin(t) + self.theta * y
def g(self, t, y):
return 0.3 * torch.sigmoid(torch.cos(t) * torch.exp(-y))
sde_new_name = SDENewName()
with torch.no_grad():
# Supply a dictionary to the argument `names`.
ys = torchsde.sdeint(sde_new_name, y0, ts, method='euler', names={'drift': 'h'})
plot(ts, ys, xlabel='$t$', ylabel='$Y_t$')
if torch.cuda.is_available():
gpu = torch.device('cuda')
sde = SDE().to(gpu)
ts = ts.to(gpu)
y0 = y0.to(gpu)
with torch.no_grad():
ys = torchsde.sdeint(sde, y0, ts, method='euler') # (100, 3, 1).
plot(ts, ys, xlabel='$t$', ylabel='$Y_t$')
ts = torch.linspace(0, 1, t_size)
bm = torchsde.BrownianInterval(t0=0.0, t1=1.0, size=(batch_size, state_size))
bm_increments = torch.stack([bm(t0, t1) for t0, t1 in zip(ts[:-1], ts[1:])], dim=0)
bm_queries = torch.cat((torch.zeros(1, batch_size, state_size), torch.cumsum(bm_increments, dim=0)))
plot(ts, bm_queries, xlabel='$t$', ylabel='$W_t$', title='Query')
bm_increments2 = torch.stack([bm(t0, t1) for t0, t1 in zip(ts[:-1], ts[1:])], dim=0)
bm_queries2 = torch.cat((torch.zeros(1, batch_size, state_size), torch.cumsum(bm_increments2, dim=0)))
plot(ts, bm_queries, xlabel='$t$', ylabel='$W_t$',
title='Query again (samples should be same as before)')
assert torch.allclose(bm_queries, bm_queries2)
if torch.cuda.is_available():
bm = torchsde.BrownianInterval(t0=0.0, t1=1.0, size=(batch_size, state_size), device=gpu)
print(bm(0.0, 0.5))
sde = SDE()
ts = torch.linspace(0, 1, t_size)
y0 = torch.zeros((batch_size, 1)).fill_(0.1) # (batch_size, state_size).
bm = torchsde.BrownianInterval(t0=0.0, t1=1.0, size=(batch_size, state_size))
with torch.no_grad():
ys = torchsde.sdeint(sde, y0, ts, method='milstein', bm=bm)
plot(ts, ys, xlabel='$t$', ylabel='$Y_t$', title='Solve SDE')
with torch.no_grad():
ys = torchsde.sdeint(sde, y0, ts, method='milstein', bm=bm)
plot(ts, ys, xlabel='$t$', ylabel='$Y_t$',
title='Solve SDE again (samples should be same as before)')
# Use a new BM sample, we expect different sample paths.
bm = torchsde.BrownianInterval(t0=0.0, t1=1.0, size=(batch_size, state_size))
with torch.no_grad():
ys = torchsde.sdeint(sde, y0, ts, method='milstein', bm=bm)
plot(ts, ys, xlabel='$t$', ylabel='$Y_t$',
title='Solve SDE (expect different sample paths)')
ys = torchsde.sdeint(sde, y0, ts, method='euler', bm=bm)
y_final = ys[-1]
target = torch.randn_like(y_final)
loss = ((target - y_final) ** 2).sum(dim=1).mean(dim=0)
loss.backward()
print(sde.theta.grad)
ys = torchsde.sdeint(sde, y0, ts, method='euler', bm=bm)
y_final = ys[-1]
target = torch.randn_like(y_final)
loss = ((target - y_final) ** 2).sum(dim=1).mean(dim=0)
grad, = torch.autograd.grad(loss, sde.theta)
print(grad)
ys = torchsde.sdeint_adjoint(sde, y0, ts, method='euler', bm=bm)
y_final = ys[-1]
target = torch.randn_like(y_final)
loss = ((target - y_final) ** 2).sum(dim=1).mean(dim=0)
loss.backward()
print(sde.theta.grad)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The August birthday problem
Step2: I'll roll the data so September comes first.
Step3: Here are the diagnosis rates, which we can check against the rates in the table.
Step4: Here's what the rates look like as a function of months after the September cutoff.
Step5: For the first 9 months, from September to May, we see what we would expect if at least some of the excess diagnoses are due to behavioral differences due to age. For each month of difference in age, we see an increase in the number of diagnoses.
Step6: By transposing the results, we can get them into two arrays for plotting.
Step7: Here's what the plot looks like with error bars.
Step8: It seems like the lower rates in the last 3 months are unlikely to be due to random variation, so it might be good to investigate the effect of "red shirting".
Step9: Here's a Suite that estimates the parameters of a logistic regression model, b0 and b1.
Step10: The prior distributions are uniform over a grid that covers the most likely values.
Step11: Here's the update.
Step12: Here's the posterior marginal distribution for b0.
Step13: And the posterior marginal distribution for b1.
Step14: Let's see what the posterior regression lines look like, superimposed on the data.
Step15: Most of these regression lines fall within the credible intervals of the observed rates, so in that sense it seems like this model is not ruled out by the data.
Step16: Here are posterior predictive CDFs for diagnosis rates.
Step17: And we can compute the posterior predictive distribution for the difference.
Step18: To summarize, we can compute the mean and 95% credible interval for this difference.
Step19: A difference of 21 diagnoses, on a base rate of 71 diagnoses, is an increase of 30% (18%, 42%)
|
<ASSISTANT_TASK:>
Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
import math
import numpy as np
import pandas as pd
from thinkbayes2 import Pmf, Cdf, Suite, Joint
import thinkplot
totals = np.array([32690, 31238, 34405, 34565, 34977, 34415,
36577, 36319, 35353, 34405, 31285, 31617])
diagnosed = np.array([265, 280, 307, 312, 317, 287,
320, 309, 225, 240, 232, 243])
totals = np.roll(totals, -8)
diagnosed = np.roll(diagnosed, -8)
rates = diagnosed / totals * 10000
np.round(rates, 1)
xs = np.arange(12)
thinkplot.plot(xs, rates)
thinkplot.decorate(xlabel='Months after cutoff',
ylabel='Diagnosis rate per 10,000')
import scipy.stats
pcount = 1
res = []
for (x, d, t) in zip(xs, diagnosed, totals):
a = d + pcount
b = t-d + pcount
ci = scipy.stats.beta(a, b).ppf([0.025, 0.975])
res.append(ci * 10000)
low, high = np.transpose(res)
low
high
import matplotlib.pyplot as plt
def errorbar(xs, low, high, **options):
for x, l, h in zip(xs, low, high):
plt.vlines(x, l, h, **options)
errorbar(xs, low, high, color='gray', alpha=0.7)
thinkplot.plot(xs, rates)
thinkplot.decorate(xlabel='Months after cutoff',
ylabel='Diagnosis rate per 10,000')
from scipy.special import expit, logit
for (x, d, t) in zip(xs, diagnosed, totals):
print(x, logit(d/t))
class August(Suite, Joint):
def Likelihood(self, data, hypo):
x, d, t = data
b0, b1 = hypo
p = expit(b0 + b1 * x)
like = scipy.stats.binom.pmf(d, t, p)
return like
from itertools import product
b0 = np.linspace(-4.75, -5.1, 101)
b1 = np.linspace(-0.05, 0.05, 101)
hypos = product(b0, b1)
suite = August(hypos);
for data in zip(xs, diagnosed, totals):
suite.Update(data)
pmf0 = suite.Marginal(0)
b0 = pmf0.Mean()
print(b0)
thinkplot.Pdf(pmf0)
thinkplot.decorate(title='Posterior marginal distribution',
xlabel='Intercept log odds (b0)',
ylabel='Pdf')
pmf1 = suite.Marginal(1)
b1 = pmf1.Mean()
print(b1)
thinkplot.Pdf(pmf1)
thinkplot.decorate(title='Posterior marginal distribution',
xlabel='Slope log odds (b0)',
ylabel='Pdf')
for i in range(100):
b0, b1 = suite.Random()
ys = expit(b0 + b1 * xs) * 10000
thinkplot.plot(xs, ys, color='green', alpha=0.01)
errorbar(xs, low, high, color='gray', alpha=0.7)
thinkplot.plot(xs, rates)
thinkplot.decorate(xlabel='Months after cutoff',
ylabel='Diagnosis rate per 10,000')
def posterior_predictive(x):
pmf = Pmf()
for (b0, b1), p in suite.Items():
base = expit(b0 + b1 * x) * 10000
pmf[base] += p
return pmf
pmf0 = posterior_predictive(0)
thinkplot.Cdf(pmf0.MakeCdf(), label='September')
pmf1 = posterior_predictive(11)
thinkplot.Cdf(pmf1.MakeCdf(), label='August')
thinkplot.decorate(title='Posterior predictive distribution',
xlabel='Diagnosis rate per 10,000',
ylabel='CDF')
pmf0.Mean()
def posterior_predictive_diff():
pmf = Pmf()
for (b0, b1), p in suite.Items():
p0 = expit(b0) * 10000
p1 = expit(b0 + b1 * 11) * 10000
diff = p1 - p0
pmf[diff] += p
return pmf
pmf_diff = posterior_predictive_diff()
thinkplot.Cdf(pmf_diff.MakeCdf())
thinkplot.decorate(title='Posterior predictive distribution',
xlabel='11 month increase in diagnosis rate per 10,000',
ylabel='CDF')
pmf_diff.Mean()
pmf_diff.CredibleInterval(95)
pmf_diff.Mean() / pmf0.Mean()
pmf_diff.CredibleInterval(95) / pmf0.Mean()
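# Optional cross-check (added for illustration, not part of the original analysis):
# a maximum-likelihood fit of the same logistic model should land close to the
# posterior means of b0 and b1 found above.
from scipy.optimize import minimize
def neg_log_like(params):
    b0_, b1_ = params
    p = expit(b0_ + b1_ * xs)
    return -np.sum(scipy.stats.binom.logpmf(diagnosed, totals, p))
mle = minimize(neg_log_like, x0=[-5.0, 0.0])
print(mle.x)  # expected to be close to the posterior means of (b0, b1)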
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Please put your datahub API key into a file called APIKEY and place it in the notebook folder, or assign your API key directly to the variable API_key!
Step2: At first, we need to define the dataset names and temporal ranges. Please note that the datasets have different time ranges. So we will download the data from 1981, when CHRIPS starts (ARC2 is from 1983).
Step3: Then we define spatial range. We decided to analyze Kenya, where agriculture is the second largest contributor to the GDP, after the service sector. Most of its agricultural production comes from the fertile highlands of Kenya in South-Western part of the country, where they grow tea, coffee, sisal, pyrethrum, corn, and wheat. However, feel free to change the area according to your interest.
Step4: Download the data with package API
Step5: Work with downloaded files
Step6: In the plot below we see the ARC2 and CHIRPS time-series, where the annual precipitation is averaged over the area. We can see that one or the other dataset over/under estimates the values, however the trend remains the same. We can also see that 1996 and 2005 have been quite wet years for South-West Kenya.
Step7: In the plot above, we used data from 1982 to show all the data from CHIRPS. We now want to limit the data to have the same time range for both of the datasets, so that we can compare them.
Step8: Then we will find out the maximum precipitation over the whole period, and we will see that CHIRPS shows much higher values than ARC2. The differences between ARC2 and CHIRPS are brought out in CHIRPS Reality Checks document as well.
Step9: In this section, we will find minimum, maximum and average number of dry days. Interestingly, CHIRPS and ARC2 datasets have very similar values for dry days. We can see that there is 9,912 - 10,406 dry days in 34 years on average. Which is not that much, only about 27 days per year.
Step10: Monthly averages over the period
Step11: In the violin plot below we can see that CHRIPS has significantly bigger maximum values during April, May and November. However, during most of the months the mean values of ARC2 and CHIRPS are quite similar.
Step12: We will now demonstrate the mean monthly values on the bar plot as well so that it would be easier to follow the monthly averages. They are similar for both of the datasets. The biggest differences are in April and November — we saw the same thing in the previous plot. In addition, we can also see that the wettest month of the year is April and the summer months are the driest.
Step13: Finally, let’s see the monthly anomalies for 2016. The period used for computing the climatology is 1983-2017. Positive values in the plot means that 2016 precipitation was above long term normal. It seems that April in 2016 had significant precipitation in South-West Kenya. At the same time, October and December, which are short rain periods, had less precipitation than normal.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
import numpy as np
from dh_py_access import package_api
import dh_py_access.lib.datahub as datahub
import xarray as xr
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
from po_data_process import get_data_in_pandas_dataframe, make_plot,get_comparison_graph
import dh_py_access.package_api as package_api
import matplotlib.gridspec as gridspec
import calendar
#import warnings
import datetime
#warnings.filterwarnings("ignore")
import matplotlib
print (matplotlib.__version__)
server = 'api.planetos.com'
API_key = open('APIKEY').readlines()[0].strip() #'<YOUR API KEY HERE>'
version = 'v1'
dh=datahub.datahub(server,version,API_key)
dataset1='noaa_arc2_africa_01'
variable_name1 = 'pr'
dataset2='chg_chirps_global_05'
variable_name2 = 'precip'
time_start = '1981-01-01T00:00:00'
time_end = '2017-11-01T00:00:00'
area_name = 'Kenya'
latitude_north = 1.6; longitude_west = 34.2
latitude_south = -2.5; longitude_east = 38.4
package_arc2_africa_01 = package_api.package_api(dh,dataset1,variable_name1,longitude_west,longitude_east,latitude_south,latitude_north,time_start,time_end,area_name=area_name)
package_chg_chirps_global_05 = package_api.package_api(dh,dataset2,variable_name2,longitude_west,longitude_east,latitude_south,latitude_north,time_start,time_end,area_name=area_name)
package_arc2_africa_01.make_package()
package_chg_chirps_global_05.make_package()
package_arc2_africa_01.download_package()
package_chg_chirps_global_05.download_package()
dd1 = xr.open_dataset(package_arc2_africa_01.local_file_name)
dd2 = xr.open_dataset(package_chg_chirps_global_05.local_file_name)
yearly_sum1 = dd1.pr.resample(time="1AS").sum('time')
yearly_mean_sum1 = yearly_sum1.mean(axis=(1,2))
yearly_sum2 = dd2.precip.resample(time="1AS").sum('time')
yearly_mean_sum2 = yearly_sum2.mean(axis=(1,2))
fig = plt.figure(figsize=(10,5))
plt.plot(yearly_mean_sum1.time,yearly_mean_sum1, '*-',linewidth = 1,label = dataset1)
plt.plot(yearly_mean_sum2.time,yearly_mean_sum2, '*-',linewidth = 1,c='red',label = dataset2)
plt.legend()
plt.grid()
plt.show()
time_start = '1983-01-01T00:00:00'
dd2 = dd2.sel(time = slice(time_start,time_end))
# np.nanmax and np.where below handle the NaN values directly, so no explicit masking is needed
dd2_dat = dd2.precip.data
dd1_dat = dd1.pr.data
# maximum precipitation over the whole period
print ('\033[1mMaximum precipitation over the whole period \033[0m')
print(dataset1 + '\t' + str(np.nanmax(dd1_dat)))
print(dataset2 + '\t' + str(np.nanmax(dd2_dat)))
dd1_dry_days = np.sum(np.where(dd1_dat>0.1,0,1),axis=0)
dd2_dry_days = np.sum(np.where(dd2_dat>0.1,0,1),axis=0)
# minimum, maximum and average nr of dry days
print ('\033[1mNumber of dry days:\tMinimum\t Maximum Average\033[0m')
print(dataset1 + '\t' + str(np.amin(dd1_dry_days)), '\t',str(np.amax(dd1_dry_days)),'\t',str(np.mean(dd1_dry_days)))
print(dataset2 + '\t' + str(np.amin(dd2_dry_days)),'\t',str(np.amax(dd2_dry_days)),'\t',str(np.mean(dd2_dry_days)))
##help(dd1.pr.resample)
dd1_monthly_avg = dd1.pr.resample(time="1MS").sum('time')
dd2_monthly_avg = dd2.precip.resample(time="1MS").sum('time')
mm_data1 = [];mm_data2 = []
for i in range(12):
mmm1 = np.mean(dd1_monthly_avg[i::12,:,:],axis=0).values
mm_data1.append(mmm1.mean(axis=1))
mmm2 = np.mean(dd2_monthly_avg[i::12,:,:],axis=0).values
mm_data2.append(mmm2.mean(axis=1))
fig = plt.figure(figsize = (8,8))
ax = fig.add_subplot(111)
ax.violinplot(mm_data1,np.arange(0.75,12.75,1),
showmeans=True,
showmedians=False)
ax.violinplot(mm_data2,np.arange(1.25,13.25,1),
showmeans=True,
showmedians=False)
plt.setp(ax, xticks = np.arange(1,13,1),
xticklabels=[calendar.month_abbr[m] for m in np.arange(1,13,1)])
plt.show()
averaged_monthly_mean2 = np.mean(mm_data2,axis = (1))
averaged_monthly_mean1 = np.mean(mm_data1,axis = (1))
fig = plt.figure(figsize = (8,6))
ax = fig.add_subplot(111)
bar_width = 0.35
opacity = 0.4
ax.bar(np.arange(0,12,1)-bar_width/2,averaged_monthly_mean2,
bar_width,
alpha=opacity,
color='b',
label = dataset2)
ax.bar(np.arange(0,12,1) + bar_width/2,averaged_monthly_mean1,
bar_width,
alpha=opacity,
color='r',
label = dataset1)
plt.legend()
plt.setp(ax, xticks = np.arange(0,12,1),
xticklabels=[calendar.month_abbr[m+1] for m in np.arange(0,12,1)])
plt.show()
time_start = '2016-01-01T00:00:00'
time_end = '2016-12-31T23:00:00'
dd2_2016 = dd2.sel(time = slice(time_start,time_end))
dd1_2016 = dd1.sel(time = slice(time_start,time_end))
dd1_monthly2016_avg = dd1_2016.pr.resample(time="1MS").sum('time')
dd2_monthly2016_avg = dd2_2016.precip.resample(time="1MS").sum('time')
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(111)
plt.plot(np.arange(1,13,1),np.mean(dd2_monthly2016_avg,axis = (1,2))-averaged_monthly_mean2, '*-',linewidth = 1,label = dataset2)
plt.plot(np.arange(1,13,1),np.mean(dd1_monthly2016_avg,axis = (1,2))-averaged_monthly_mean1, '*-',linewidth = 1,c='red',label = dataset1)
plt.setp(ax, xticks = np.arange(1,13,1),
xticklabels=[calendar.month_abbr[m] for m in np.arange(1,13,1)])
plt.legend()
plt.grid()
plt.show()
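# Alternative sketch (not in the original notebook): the monthly climatology used for the
# anomaly plot could also be computed with xarray's groupby machinery; the result should be
# close to averaged_monthly_mean2 computed above.
clim2 = dd2.precip.resample(time="1MS").sum('time').groupby('time.month').mean('time')
print(np.mean(clim2, axis=(1, 2)).values)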
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Calculate Fantasy Points
Step2: Get the average QB fantasy points by year
Step3: Observation
Step4: Observation
Step5: Observation
Step6: Observation
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import matplotlib as mp
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
qb_games = pd.read_csv('qb_games.csv')
qb_games.columns.values
qb_games['Fantasy Points'] = (qb_games['Pass Yds']/25) + (6 * qb_games['Pass TD']) - (2 * qb_games['Pass Int']) + (qb_games['Rush Yds'] /10) + (6 * qb_games['Rush TD'])
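# Quick illustration of the scoring formula with a hypothetical stat line
# (300 passing yards, 3 passing TD, 1 interception, 20 rushing yards, 0 rushing TD):
example_points = 300/25 + 6*3 - 2*1 + 20/10 + 6*0
print(example_points) # 12 + 18 - 2 + 2 = 30.0 fantasy points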
qb_fantasy = qb_games[['Name','Career Year', 'Year', 'Game Count', 'Career Games', 'Date', 'Pass Att', 'Pass Yds', 'Pass TD', 'Pass Int', 'Pass Rate', 'Rush Att', 'Rush Yds', 'Rush TD', 'Fantasy Points']]
qb_fantasy.head(10)
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
print(len(qb_fantasy))
yearly_fantasy_points = qb_fantasy.groupby(['Career Year'], as_index=False).mean()
yearly_fantasy_points[['Career Year', 'Pass Att', 'Pass Rate', 'Fantasy Points']]
color = ['red']
ax = sns.barplot(yearly_fantasy_points['Career Year'], (yearly_fantasy_points['Fantasy Points'] ), palette=color)
color = ['blue']
ax = sns.barplot(yearly_fantasy_points['Career Year'], (yearly_fantasy_points['Pass Att'] ), palette=color)
color = ['green']
ax = sns.barplot(yearly_fantasy_points['Career Year'], (yearly_fantasy_points['Pass Rate'] ), palette=color)
qb_means = qb_fantasy[['Pass Att', 'Pass Rate', 'Fantasy Points']].mean()
qb_means
pass_att = qb_means['Pass Att']
qb_upper_pass_att = qb_fantasy.loc[qb_fantasy['Pass Att'] > pass_att]
qb_pass_att_mean = qb_upper_pass_att['Pass Att'].mean()
print('Shifting data to only include pass attempts when greater than %d average pass attempts' %(pass_att))
qb_att = qb_upper_pass_att.groupby(['Career Year'], as_index=False).mean()
color = ['blue']
ax = sns.barplot(qb_att['Career Year'], (qb_att['Pass Att'] ), palette=color)
pass_rate = qb_means['Pass Rate']
qb_upper_pass_rate = qb_fantasy.loc[qb_fantasy['Pass Rate'] > pass_rate]
qb_pass_rate_mean = qb_upper_pass_rate['Pass Rate'].mean()
print('Shifting data to only include games where the passer rating is greater than the %d average passer rating' %(pass_rate))
qb_rate = qb_upper_pass_rate.groupby(['Career Year'], as_index=False).mean()
color = ['green']
ax = sns.barplot(qb_rate['Career Year'], (qb_rate['Pass Rate'] ), palette=color)
qb_upper_fantasy_rate = qb_fantasy.loc[qb_fantasy['Pass Rate'] > pass_rate]
qb_name = qb_upper_fantasy_rate.groupby(['Name'], as_index=False)
print(len(qb_name))
qb_fantasy_rate_mean = qb_upper_fantasy_rate['Fantasy Points'].mean()
print(qb_fantasy_rate_mean)
qb_rate = qb_upper_fantasy_rate.groupby(['Career Year'], as_index=False).mean()
color = ['red']
ax = sns.barplot(qb_rate['Career Year'], (qb_rate['Fantasy Points'] ), palette=color)
qb_upper_pass_rate = qb_fantasy.loc[qb_fantasy['Pass Rate'] > pass_rate]
qb_fantasy_rate = qb_upper_pass_rate.mean()
print(qb_fantasy_rate['Fantasy Points'])
qb_rate = qb_upper_pass_rate.groupby(['Career Year'], as_index=False).mean()
color = ['green']
ax = sns.barplot(qb_rate['Career Year'], (qb_rate['Fantasy Points'] ), palette=color)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The following command checks that a GPU card is actually available on the machine being used. If it is, and if Keras has been installed in its GPU configuration (which is generally the case in the Anaconda GPU virtual environment), two devices will appear, one CPU and one GPU. The GPU configuration will then be used automatically.
Step2: Loading the data
Step3: Illustrating the data
Step4: The img_to_array function generates a numpy array from a PIL image.
Step5: Pre-processing
Step6: However, the images must all have the same dimensions to be used in the same network.
Step7: The .flow() command generates new examples from the original image and saves them in the directory specified by save_to_dir.
Step8: Illustrating the transformed images.
Step9: Image classification with deep learning
Step10: Convolutional network
Step11: Model definition
Step12: Training
Step13: Prediction
Step14: Q Comment on the training and validation prediction values. Compare them with the results of the last training epoch. What do you observe? Is this normal?
Step15: Feature creation
Step16: Building a classic fully connected neural network.
Step17: Training
Step18: Q Comment on the performance of this new model
Step19: Prediction
Step20: Fine-tuning the VGG16 network
Step21: We add to the VGG model the neural network block built previously to generate the features.
Step22: Finally, we assemble the two parts of the model
Step23: Freezing the first 4 convolution blocks
Step24: Generate Data
Step25: Training
Step26: Prediction
Step27: Other models
|
<ASSISTANT_TASK:>
Python Code:
# Utils
import sys
import os
import shutil
import time
import pickle
import numpy as np
# Deep learning libraries
import tensorflow as tf
import keras.preprocessing.image as kpi
import keras.layers as kl
import keras.optimizers as ko
import keras.backend as k
import keras.models as km
import keras.applications as ka
# Data visualisation
from matplotlib import pyplot as plt
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
MODE = "GPU" if "GPU" in [k.device_type for k in device_lib.list_local_devices()] else "CPU"
print(MODE)
data_dir = '' # path to the data directory
N_train = 200 #2000
N_val = 80 #800
data_dir_sub = data_dir+'subsample_%d_Ntrain_%d_Nval' %(N_train, N_val)
img = kpi.load_img(data_dir_sub+'/train/cats/cat.1.jpg') # this is a PIL image
img
x = kpi.img_to_array(img)
plt.imshow(x/255, interpolation='nearest')
plt.show()
x_0 = kpi.img_to_array(kpi.load_img(data_dir_sub+"/train/cats/cat.0.jpg"))
x_1 = kpi.img_to_array(kpi.load_img(data_dir_sub+"/train/cats/cat.1.jpg"))
x_0.shape, x_1.shape
datagen = kpi.ImageDataGenerator(
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
img_width = 150
img_height = 150
img = kpi.load_img(data_dir_sub+"/train/cats/cat.1.jpg") # this is a PIL image
x = kpi.img_to_array(img)
x_ = x.reshape((1,) + x.shape)
if not(os.path.isdir(data_dir_sub+"/preprocessing_example")):
os.mkdir(data_dir_sub+"/preprocessing_example")
i = 0
for batch in datagen.flow(x_, batch_size=1,save_to_dir=data_dir_sub+"/preprocessing_example", save_prefix='cat', save_format='jpeg'):
i += 1
if i > 7:
break
X_list=[]
for f in os.listdir(data_dir_sub+"/preprocessing_example"):
X_list.append(kpi.img_to_array(kpi.load_img(data_dir_sub+"/preprocessing_example/"+f)))
fig=plt.figure(figsize=(16,8))
fig.patch.set_alpha(0)
ax = fig.add_subplot(3,3,1)
ax.imshow(x/255, interpolation="nearest")
ax.set_title("Image original")
for i,xt in enumerate(X_list):
ax = fig.add_subplot(3,3,i+2)
ax.imshow(xt/255, interpolation="nearest")
ax.set_title("Random transformation %d" %(i+1))
plt.tight_layout()
plt.savefig("cats_transformation.png", dpi=100, bbox_inches="tight", facecolor=fig.get_facecolor())
plt.show()
epochs = 10
batch_size=20
# this is the augmentation configuration we will use for training
train_datagen = kpi.ImageDataGenerator(
rescale=1./255,
)
# this is the augmentation configuration we will use for testing:
# only rescaling
valid_datagen = kpi.ImageDataGenerator(rescale=1./255)
# this is a generator that will read pictures found in
# subfolers of 'data/train', and indefinitely generate
# batches of augmented image data
train_generator = train_datagen.flow_from_directory(
data_dir_sub+"/train/", # this is the target directory
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode='binary') # since we use binary_crossentropy loss, we need binary labels
# this is a similar generator, for validation data
validation_generator = valid_datagen.flow_from_directory(
data_dir_sub+"/validation/",
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode='binary')
model_conv = km.Sequential()
model_conv.add(kl.Conv2D(32, (3, 3), input_shape=(img_width, img_height, 3), data_format="channels_last"))
model_conv.add(kl.Activation('relu'))
model_conv.add(kl.MaxPooling2D(pool_size=(2, 2)))
model_conv.add(kl.Conv2D(32, (3, 3)))
model_conv.add(kl.Activation('relu'))
model_conv.add(kl.MaxPooling2D(pool_size=(2, 2)))
model_conv.add(kl.Conv2D(64, (3, 3)))
model_conv.add(kl.Activation('relu'))
model_conv.add(kl.MaxPooling2D(pool_size=(2, 2)))
model_conv.add(kl.Flatten()) # this converts our 3D feature maps to 1D feature vectors
model_conv.add(kl.Dense(64))
model_conv.add(kl.Activation('relu'))
model_conv.add(kl.Dropout(0.5))
model_conv.add(kl.Dense(1))
model_conv.add(kl.Activation('sigmoid'))
model_conv.compile(loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
model_conv.summary()
ts = time.time()
model_conv.fit_generator(train_generator, steps_per_epoch=N_train // batch_size, epochs=epochs,
validation_data=validation_generator,validation_steps=N_val // batch_size)
te = time.time()
t_learning_conv_simple_model = te-ts
print("Learning TIme for %d epochs : %d seconds"%(epochs,t_learning_conv_simple_model))
model_conv.save(data_dir_sub+'/'+MODE+'_models_convolutional_network_%d_epochs_%d_batch_size.h5' %(epochs, batch_size))
ts = time.time()
score_conv_val = model_conv.evaluate_generator(validation_generator, N_val /batch_size, verbose=1)
score_conv_train = model_conv.evaluate_generator(train_generator, N_train / batch_size, verbose=1)
te = time.time()
t_prediction_conv_simple_model = te-ts
print('Train accuracy:', score_conv_train[1])
print('Validation accuracy:', score_conv_val[1])
print("Time Prediction: %.2f seconds" %t_prediction_conv_simple_model )
model_VGG16_without_top = ka.VGG16(include_top=False, weights='imagenet')
model_VGG16_without_top.summary()
features_train_path = data_dir_sub+'/features_train.npy'
features_validation_path = data_dir_sub+'/features_validation.npy'
if os.path.isfile(features_train_path) and os.path.isfile(features_validation_path):
print("Load Features")
features_train = np.load(open(features_train_path, "rb"))
features_validation = np.load(open(features_validation_path, "rb"))
else:
print("Generate Features")
datagen = kpi.ImageDataGenerator(rescale=1. / 255)
generator = datagen.flow_from_directory(
data_dir_sub+"/train",
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode=None, # this means our generator will only yield batches of data, no labels
shuffle=False)
features_train = model_VGG16_without_top.predict_generator(generator, N_train / batch_size, verbose = 1)
# save the output as a Numpy array
np.save(open(features_train_path, 'wb'), features_train)
generator = datagen.flow_from_directory(
data_dir_sub+"/validation",
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode=None,
shuffle=False)
features_validation = model_VGG16_without_top.predict_generator(generator, N_val / batch_size, verbose = 1)
# save the output as a Numpy array
np.save(open(features_validation_path, 'wb'), features_validation)
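# Sanity check (illustrative addition; shapes assume 150x150 inputs): VGG16 without its top
# ends in 4x4x512 feature maps on 150x150 images, so the arrays should have shapes
# (N_train, 4, 4, 512) and (N_val, 4, 4, 512).
print(features_train.shape, features_validation.shape)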
model_VGG_fcm = km.Sequential()
model_VGG_fcm.add(kl.Flatten(input_shape=features_train.shape[1:]))
model_VGG_fcm.add(kl.Dense(64, activation='relu'))
model_VGG_fcm.add(kl.Dropout(0.5))
model_VGG_fcm.add(kl.Dense(1, activation='sigmoid'))
model_VGG_fcm.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
model_VGG_fcm.summary()
# Create the label vectors
train_labels = np.array([0] * int((N_train/2)) + [1] * int((N_train/2)))
validation_labels = np.array([0] * int((N_val/2)) + [1] * int((N_val/2)))
ts = time.time()
model_VGG_fcm.fit(features_train, train_labels,
epochs=epochs,
batch_size=batch_size,
validation_data=(features_validation, validation_labels))
te = time.time()
t_learning_VGG_fcm = te-ts
model_VGG_fcm.save_weights(data_dir_sub+'/weights_model_VGG_fully_connected_model_%d_epochs_%d_batch_size.h5' %(epochs, batch_size))
ts = time.time()
score_VGG_fcm_val = model_VGG_fcm.evaluate(features_validation, validation_labels)
score_VGG_fcm_train = model_VGG_fcm.evaluate(features_train, train_labels)
te = time.time()
t_prediction_VGG_fcm = te-ts
print('Train accuracy:', score_VGG_fcm_train[1])
print('Validation accuracy:', score_VGG_fcm_val[1])
print("Time Prediction: %.2f seconds" %t_prediction_VGG_fcm)
# build the VGG16 network
model_VGG16_without_top = ka.VGG16(include_top=False, weights='imagenet', input_shape=(150,150,3))
print('Model loaded.')
# build a classifier model to put on top of the convolutional model
top_model = km.Sequential()
top_model.add(kl.Flatten(input_shape=model_VGG16_without_top.output_shape[1:]))
top_model.add(kl.Dense(64, activation='relu'))
top_model.add(kl.Dropout(0.5))
top_model.add(kl.Dense(1, activation='sigmoid'))
# note that it is necessary to start with a fully-trained
# classifier, including the top classifier,
# in order to successfully do fine-tuning
top_model.load_weights(data_dir_sub+'/weights_model_VGG_fully_connected_model_%d_epochs_%d_batch_size.h5' %(epochs, batch_size))
# add the model on top of the convolutional base
model_VGG_LastConv_fcm = km.Model(inputs=model_VGG16_without_top.input, outputs=top_model(model_VGG16_without_top.output))
model_VGG_LastConv_fcm.summary()
for layer in model_VGG_LastConv_fcm.layers[:15]:
layer.trainable = False
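# Sanity check (illustrative addition): list which layers will actually be updated during
# fine-tuning. Only the last convolutional block and the top classifier should be trainable.
for layer in model_VGG_LastConv_fcm.layers:
    print(layer.name, layer.trainable)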
# prepare data augmentation configuration
train_datagen = kpi.ImageDataGenerator(
rescale=1. / 255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
test_datagen = kpi.ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
data_dir_sub+"/train/",
target_size=(img_height, img_width),
batch_size=batch_size,
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
data_dir_sub+"/validation/",
target_size=(img_height, img_width),
batch_size=batch_size,
class_mode='binary')
model_VGG_LastConv_fcm.compile(loss='binary_crossentropy',
optimizer=ko.SGD(lr=1e-4, momentum=0.9),
metrics=['accuracy'])
# fine-tune the model
ts = time.time()
model_VGG_LastConv_fcm.fit_generator(
train_generator,
steps_per_epoch=N_train // batch_size,
epochs=epochs,
validation_data=validation_generator,
validation_steps=N_val // batch_size)
te = time.time()
t_learning_VGG_LastConv_fcm = te-ts
ts = time.time()
score_VGG_LastConv_fcm_val = model_VGG_LastConv_fcm.evaluate_generator(validation_generator, N_val // batch_size)
score_VGG_LastConv_fcm_train = model_VGG_LastConv_fcm.evaluate_generator(train_generator, N_train // batch_size)
te = time.time()
t_prediction_VGG_LastConv_fcm = te-ts
print('Train accuracy:', score_VGG_LastConv_fcm_train[1])
print('Validation accuracy:', score_VGG_LastConv_fcm_val[1])
print("Time Prediction: %.2f seconds" %t_prediction_VGG_LastConv_fcm)
data_dir_test = data_dir+'test/'
N_test = len(os.listdir(data_dir_test+"/test"))
test_datagen = kpi.ImageDataGenerator(rescale=1. / 255)
test_generator = test_datagen.flow_from_directory(
data_dir_test,
#data_dir_sub+"/train/",
target_size=(img_height, img_width),
batch_size=batch_size,
class_mode=None,
shuffle=False)
test_prediction = model_VGG_LastConv_fcm.predict_generator(test_generator, N_test // batch_size)
images_test = [data_dir_test+"/test/"+k for k in os.listdir(data_dir_test+"/test")][:9]
x_test = [kpi.img_to_array(kpi.load_img(image_test))/255 for image_test in images_test] # this is a PIL image
fig = plt.figure(figsize=(10,10))
for k in range(9):
ax = fig.add_subplot(3,3,k+1)
ax.imshow(x_test[k], interpolation='nearest')
pred = test_prediction[k]
if pred >0.5:
title = "Probabiliy for dog : %.1f" %(pred*100)
else:
title = "Probabiliy for cat : %.1f" %((1-pred)*100)
ax.set_title(title)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Neuron Model
Step2: Agreement is excellent.
Step3: Agreement is as good as possible
Step5: ISIs are as predicted
Step6: I_h channel
Step7: The time constant is extremely long, up to 1s, for relevant voltages where $I_h$ is perceptible. We thus need long test runs.
Step8: Agreement is very good
Step9: Time constants here are much shorter than for I_h
Step10: Also here the results are in good agreement and the error appears acceptable.
Step11: Perfect agreement
Step structure is because $V$ changes only every second.
Step12: Properties of I_DK
Step13: Note that current in steady state is
Step15: Looks very fine.
Step16: AMPA, GABA_A, GABA_B channels
Step17: Looks quite good, but the error is maybe a bit larger than one would hope.
Step18: Looks good for all
Step19: Looks good
Step20: Looks fine, too.
Step21: Perfect agreement, synapse model looks fine.
|
<ASSISTANT_TASK:>
Python Code:
import sys
import math
import numpy as np
import pandas as pd
import scipy.optimize as so
import scipy.integrate as si
import matplotlib.pyplot as plt
import nest
%matplotlib inline
plt.rcParams['figure.figsize'] = (12, 3)
def Vpass(t, V0, gNaL, ENa, gKL, EK, taum, I=0):
tau_eff = taum/(gNaL + gKL)
Vinf = (gNaL*ENa + gKL*EK + I)/(gNaL + gKL)
return V0*np.exp(-t/tau_eff) + Vinf*(1-np.exp(-t/tau_eff))
def theta(t, th0, theq, tauth):
return th0*np.exp(-t/tauth) + theq*(1-np.exp(-t/tauth))
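# Quick illustration with hypothetical parameter values (not the validation run below):
# at t=0 the analytic solution returns the initial condition, and for t >> tau_eff it
# approaches the steady state Vinf = (gNaL*ENa + gKL*EK)/(gNaL + gKL).
print(Vpass(0., -100., 0.2, 30., 1.0, -70., 16.))   # -> -100.0 (initial value)
print(Vpass(1e4, -100., 0.2, 30., 1.0, -70., 16.))  # -> (0.2*30 - 1.0*70)/1.2, about -53.3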
nest.ResetKernel()
nest.SetDefaults('ht_neuron', {'g_peak_NaP': 0., 'g_peak_KNa': 0.,
'g_peak_T': 0., 'g_peak_h': 0.,
'tau_theta': 10.})
hp = nest.GetDefaults('ht_neuron')
V_0 = [-100., -70., -55.]
th_0 = [-65., -51., -10.]
T_sim = 20.
nrns = nest.Create('ht_neuron', n=len(V_0), params={'V_m': V_0, 'theta': th_0})
nest.Simulate(T_sim)
V_th_sim = nrns.get(['V_m', 'theta'])
for (V0, th0, Vsim, thsim) in zip(V_0, th_0, V_th_sim['V_m'], V_th_sim['theta']):
Vex = Vpass(T_sim, V0, hp['g_NaL'], hp['E_Na'], hp['g_KL'], hp['E_K'], hp['tau_m'])
thex = theta(T_sim, th0, hp['theta_eq'], hp['tau_theta'])
print('Vex = {:.3f}, Vsim = {:.3f}, Vex-Vsim = {:.3e}'.format(Vex, Vsim, Vex-Vsim))
print('thex = {:.3f}, thsim = {:.3f}, thex-thsim = {:.3e}'.format(thex, thsim, thex-thsim))
def t_first_spike(gNaL, ENa, gKL, EK, taum, theq, tI, I):
tau_eff = taum/(gNaL + gKL)
Vinf0 = (gNaL*ENa + gKL*EK)/(gNaL + gKL)
VinfI = (gNaL*ENa + gKL*EK + I)/(gNaL + gKL)
return tI - tau_eff * np.log((theq-VinfI) / (Vinf0-VinfI))
nest.ResetKernel()
nest.resolution = 0.001
nest.SetDefaults('ht_neuron', {'g_peak_NaP': 0., 'g_peak_KNa': 0.,
'g_peak_T': 0., 'g_peak_h': 0.})
hp = nest.GetDefaults('ht_neuron')
I = [25., 50., 100.]
tI = 1.
delay = 1.
T_sim = 40.
nrns = nest.Create('ht_neuron', n=len(I))
dcgens = nest.Create('dc_generator', n=len(I), params={'amplitude': I, 'start': tI})
srs = nest.Create('spike_recorder', n=len(I))
nest.Connect(dcgens, nrns, 'one_to_one', {'delay': delay})
nest.Connect(nrns, srs, 'one_to_one')
nest.Simulate(T_sim)
t_first_sim = [t[0] for t in srs.get('events', 'times')]
for dc, tf_sim in zip(I, t_first_sim):
tf_ex = t_first_spike(hp['g_NaL'], hp['E_Na'], hp['g_KL'], hp['E_K'],
hp['tau_m'], hp['theta_eq'], tI+delay, dc)
print('tex = {:.4f}, tsim = {:.4f}, tex-tsim = {:.4f}'.format(tf_ex,
tf_sim,
tf_ex-tf_sim))
def Vspike(tspk, gNaL, ENa, gKL, EK, taum, tauspk, I=0):
tau_eff = taum/(gNaL + gKL + taum/tauspk)
Vinf = (gNaL*ENa + gKL*EK + I + taum/tauspk*EK)/(gNaL + gKL + taum/tauspk)
return ENa*np.exp(-tspk/tau_eff) + Vinf*(1-np.exp(-tspk/tau_eff))
def thetaspike(tspk, ENa, theq, tauth):
return ENa*np.exp(-tspk/tauth) + theq*(1-np.exp(-tspk/tauth))
def Vpost(t, tspk, gNaL, ENa, gKL, EK, taum, tauspk, I=0):
Vsp = Vspike(tspk, gNaL, ENa, gKL, EK, taum, tauspk, I)
return Vpass(t-tspk, Vsp, gNaL, ENa, gKL, EK, taum, I)
def thetapost(t, tspk, ENa, theq, tauth):
thsp = thetaspike(tspk, ENa, theq, tauth)
return theta(t-tspk, thsp, theq, tauth)
def threshold(t, tspk, gNaL, ENa, gKL, EK, taum, tauspk, I, theq, tauth):
return Vpost(t, tspk, gNaL, ENa, gKL, EK, taum, tauspk, I) - thetapost(t, tspk, ENa, theq, tauth)
nest.ResetKernel()
nest.resolution = 0.001
nest.SetDefaults('ht_neuron', {'g_peak_NaP': 0., 'g_peak_KNa': 0.,
'g_peak_T': 0., 'g_peak_h': 0.})
hp = nest.GetDefaults('ht_neuron')
I = [25., 50., 100.]
tI = 1.
delay = 1.
T_sim = 1000.
nrns = nest.Create('ht_neuron', n=len(I))
dcgens = nest.Create('dc_generator', n=len(I), params={'amplitude': I, 'start': tI})
srs = nest.Create('spike_recorder', n=len(I))
nest.Connect(dcgens, nrns, 'one_to_one', {'delay': delay})
nest.Connect(nrns, srs, 'one_to_one')
nest.Simulate(T_sim)
isi_sim = []
for ev in srs.events:
t_spk = ev['times']
isi = np.diff(t_spk)
isi_sim.append((np.min(isi), np.mean(isi), np.max(isi)))
for dc, (isi_min, isi_mean, isi_max) in zip(I, isi_sim):
isi_ex = so.bisect(threshold, hp['t_ref'], 50,
args=(hp['t_ref'], hp['g_NaL'], hp['E_Na'], hp['g_KL'], hp['E_K'],
hp['tau_m'], hp['tau_spike'], dc, hp['theta_eq'], hp['tau_theta']))
print('isi_ex = {:.4f}, isi_sim (min, mean, max) = ({:.4f}, {:.4f}, {:.4f})'.format(
isi_ex, isi_min, isi_mean, isi_max))
nest.ResetKernel()
class Channel:
"""Base class for channel models in Python."""
def tau_m(self, V):
raise NotImplementedError()
def tau_h(self, V):
raise NotImplementedError()
def m_inf(self, V):
raise NotImplementedError()
def h_inf(self, V):
raise NotImplementedError()
def D_inf(self, V):
raise NotImplementedError()
def dh(self, h, t, V):
return (self.h_inf(V)-h)/self.tau_h(V)
def dm(self, m, t, V):
return (self.m_inf(V)-m)/self.tau_m(V)
def voltage_clamp(channel, DT_V_seq, nest_dt=0.1):
"Run voltage clamp with voltage V through intervals DT."
# NEST part
nest_g_0 = {'g_peak_h': 0., 'g_peak_T': 0., 'g_peak_NaP': 0., 'g_peak_KNa': 0.}
nest_g_0[channel.nest_g] = 1.
nest.ResetKernel()
nest.resolution = nest_dt
nrn = nest.Create('ht_neuron', params=nest_g_0)
mm = nest.Create('multimeter', params={'record_from': ['V_m', 'theta', channel.nest_I],
'interval': nest_dt})
nest.Connect(mm, nrn)
# ensure we start from equilibrated state
nrn.set(V_m=DT_V_seq[0][1], equilibrate=True, voltage_clamp=True)
for DT, V in DT_V_seq:
nrn.set(V_m=V, voltage_clamp=True)
nest.Simulate(DT)
t_end = nest.biological_time
# simulate a little more so we get all data up to t_end to multimeter
nest.Simulate(2 * nest.min_delay)
tmp = pd.DataFrame(mm.events)
nest_res = tmp[tmp.times <= t_end]
# Control part
t_old = 0.
try:
m_old = channel.m_inf(DT_V_seq[0][1])
except NotImplementedError:
m_old = None
try:
h_old = channel.h_inf(DT_V_seq[0][1])
except NotImplementedError:
h_old = None
try:
D_old = channel.D_inf(DT_V_seq[0][1])
except NotImplementedError:
D_old = None
t_all, I_all = [], []
if D_old is not None:
D_all = []
for DT, V in DT_V_seq:
t_loc = np.arange(0., DT+0.1*nest_dt, nest_dt)
I_loc = channel.compute_I(t_loc, V, m_old, h_old, D_old)
t_all.extend(t_old + t_loc[1:])
I_all.extend(I_loc[1:])
if D_old is not None:
D_all.extend(channel.D[1:])
m_old = channel.m[-1] if m_old is not None else None
h_old = channel.h[-1] if h_old is not None else None
D_old = channel.D[-1] if D_old is not None else None
t_old = t_all[-1]
if D_old is None:
ctrl_res = pd.DataFrame({'times': t_all, channel.nest_I: I_all})
else:
ctrl_res = pd.DataFrame({'times': t_all, channel.nest_I: I_all, 'D': D_all})
return nest_res, ctrl_res
nest.ResetKernel()
class Ih(Channel):
nest_g = 'g_peak_h'
nest_I = 'I_h'
def __init__(self, ht_params):
self.hp = ht_params
def tau_m(self, V):
return 1/(np.exp(-14.59-0.086*V) + np.exp(-1.87 + 0.0701*V))
def m_inf(self, V):
return 1/(1+np.exp((V+75)/5.5))
def compute_I(self, t, V, m0, h0, D0):
self.m = si.odeint(self.dm, m0, t, args=(V,))
return - self.hp['g_peak_h'] * self.m * (V - self.hp['E_rev_h'])
ih = Ih(nest.GetDefaults('ht_neuron'))
V = np.linspace(-110, 30, 100)
plt.plot(V, ih.tau_m(V));
ax = plt.gca();
ax.set_xlabel('Voltage V [mV]');
ax.set_ylabel('Time constant tau_m [ms]', color='b');
ax2 = ax.twinx()
ax2.plot(V, ih.m_inf(V), 'g');
ax2.set_ylabel('Steady-state m_h^inf', color='g');
ih = Ih(nest.GetDefaults('ht_neuron'))
nr, cr = voltage_clamp(ih, [(500, -65.), (500, -80.), (500, -100.), (500, -90.), (500, -55.)])
plt.subplot(1, 2, 1)
plt.plot(nr.times, nr.I_h, label='NEST');
plt.plot(cr.times, cr.I_h, label='Control');
plt.legend(loc='upper left');
plt.xlabel('Time [ms]');
plt.ylabel('I_h [mV]');
plt.title('I_h current')
plt.subplot(1, 2, 2)
plt.plot(nr.times, (nr.I_h-cr.I_h)/np.abs(cr.I_h));
plt.title('Relative I_h error')
plt.xlabel('Time [ms]');
plt.ylabel('Rel. error (NEST-Control)/|Control|');
nest.ResetKernel()
class IT(Channel):
nest_g = 'g_peak_T'
nest_I = 'I_T'
def __init__(self, ht_params):
self.hp = ht_params
def tau_m(self, V):
return 0.13 + 0.22/(np.exp(-(V+132)/16.7) + np.exp((V+16.8)/18.2))
def tau_h(self, V):
return 8.2 + (56.6 + 0.27 * np.exp((V+115.2)/5.0)) /(1 + np.exp((V+86.0)/3.2))
def m_inf(self, V):
return 1/(1+np.exp(-(V+59.0)/6.2))
def h_inf(self, V):
return 1/(1+np.exp((V+83.0)/4.0))
def compute_I(self, t, V, m0, h0, D0):
self.m = si.odeint(self.dm, m0, t, args=(V,))
self.h = si.odeint(self.dh, h0, t, args=(V,))
return - self.hp['g_peak_T'] * self.m**2 * self.h * (V - self.hp['E_rev_T'])
iT = IT(nest.GetDefaults('ht_neuron'))
V = np.linspace(-110, 30, 100)
plt.plot(V, 10 * iT.tau_m(V), 'b-', label='10 * tau_m');
plt.plot(V, iT.tau_h(V), 'b--', label='tau_h');
ax1 = plt.gca();
ax1.set_xlabel('Voltage V [mV]');
ax1.set_ylabel('Time constants [ms]', color='b');
ax2 = ax1.twinx()
ax2.plot(V, iT.m_inf(V), 'g-', label='m_inf');
ax2.plot(V, iT.h_inf(V), 'g--', label='h_inf');
ax2.set_ylabel('Steady-state', color='g');
ln1, lb1 = ax1.get_legend_handles_labels()
ln2, lb2 = ax2.get_legend_handles_labels()
plt.legend(ln1+ln2, lb1+lb2, loc='upper right');
iT = IT(nest.GetDefaults('ht_neuron'))
nr, cr = voltage_clamp(iT, [(200, -65.), (200, -80.), (200, -100.), (200, -90.), (200, -70.),
(200, -55.)],
nest_dt=0.1)
plt.subplot(1, 2, 1)
plt.plot(nr.times, nr.I_T, label='NEST');
plt.plot(cr.times, cr.I_T, label='Control');
plt.legend(loc='upper left');
plt.xlabel('Time [ms]');
plt.ylabel('I_T [mV]');
plt.title('I_T current')
plt.subplot(1, 2, 2)
plt.plot(nr.times, (nr.I_T-cr.I_T)/np.abs(cr.I_T));
plt.title('Relative I_T error')
plt.xlabel('Time [ms]');
plt.ylabel('Rel. error (NEST-Control)/|Control|');
nest.ResetKernel()
class INaP(Channel):
nest_g = 'g_peak_NaP'
nest_I = 'I_NaP'
def __init__(self, ht_params):
self.hp = ht_params
def m_inf(self, V):
return 1/(1+np.exp(-(V+55.7)/7.7))
def compute_I(self, t, V, m0, h0, D0):
return self.I_V_curve(V * np.ones_like(t))
def I_V_curve(self, V):
self.m = self.m_inf(V)
return - self.hp['g_peak_NaP'] * self.m**3 * (V - self.hp['E_rev_NaP'])
iNaP = INaP(nest.GetDefaults('ht_neuron'))
V = np.arange(-110., 30., 1.)
nr, cr = voltage_clamp(iNaP, [(1, v) for v in V], nest_dt=0.1)
plt.subplot(1, 2, 1)
plt.plot(nr.times, nr.I_NaP, label='NEST');
plt.plot(cr.times, cr.I_NaP, label='Control');
plt.legend(loc='upper left');
plt.xlabel('Time [ms]');
plt.ylabel('I_NaP [mV]');
plt.title('I_NaP current')
plt.subplot(1, 2, 2)
plt.plot(nr.times, (nr.I_NaP-cr.I_NaP));
plt.title('I_NaP error')
plt.xlabel('Time [ms]');
plt.ylabel('Error (NEST-Control)');
nest.ResetKernel()
class IDK(Channel):
nest_g = 'g_peak_KNa'
nest_I = 'I_KNa'
def __init__(self, ht_params):
self.hp = ht_params
def m_DK(self, D):
return 1/(1+(0.25/D)**3.5)
def D_inf(self, V):
return 1250. * self.D_influx(V) + 0.001
def D_influx(self, V):
return 0.025 / ( 1 + np.exp(-(V+10)/5.) )
def dD(self, D, t, V):
return (self.D_inf(V) - D)/1250.
def compute_I(self, t, V, m0, h0, D0):
self.D = si.odeint(self.dD, D0, t, args=(V,))
self.m = self.m_DK(self.D)
return - self.hp['g_peak_KNa'] * self.m * (V - self.hp['E_rev_KNa'])
iDK = IDK(nest.GetDefaults('ht_neuron'))
D=np.linspace(0.01, 1.5,num=200);
V=np.linspace(-110, 30, num=200);
ax1 = plt.subplot2grid((1, 9), (0, 0), colspan=4);
ax2 = ax1.twinx()
ax3 = plt.subplot2grid((1, 9), (0, 6), colspan=3);
ax1.plot(V, -iDK.m_DK(iDK.D_inf(V))*(V - iDK.hp['E_rev_KNa']), 'g');
ax1.set_ylabel('Current I_inf(V)', color='g');
ax2.plot(V, iDK.m_DK(iDK.D_inf(V)), 'b');
ax2.set_ylabel('Activation m_inf(D_inf(V))', color='b');
ax1.set_xlabel('Membrane potential V [mV]');
ax2.set_title('Steady-state activation and current');
ax3.plot(D, iDK.m_DK(D), 'b');
ax3.set_xlabel('D');
ax3.set_ylabel('Activation m_inf(D)', color='b');
ax3.set_title('Activation as function of D');
nr, cr = voltage_clamp(iDK, [(500, -65.), (500, -35.), (500, -25.), (500, 0.), (5000, -70.)],
nest_dt=1.)
ax1 = plt.subplot2grid((1, 9), (0, 0), colspan=4);
ax2 = plt.subplot2grid((1, 9), (0, 6), colspan=3);
ax1.plot(nr.times, nr.I_KNa, label='NEST');
ax1.plot(cr.times, cr.I_KNa, label='Control');
ax1.legend(loc='lower right');
ax1.set_xlabel('Time [ms]');
ax1.set_ylabel('I_DK [mV]');
ax1.set_title('I_DK current');
ax2.plot(nr.times, (nr.I_KNa-cr.I_KNa)/np.abs(cr.I_KNa));
ax2.set_title('Relative I_DK error')
ax2.set_xlabel('Time [ms]');
ax2.set_ylabel('Rel. error (NEST-Control)/|Control|');
nest.ResetKernel()
class SynChannel:
"""Base class for synapse channel models in Python."""
def t_peak(self):
return self.tau_1 * self.tau_2 / (self.tau_2 - self.tau_1) * np.log(self.tau_2/self.tau_1)
def beta(self, t):
val = ( ( np.exp(-t/self.tau_1) - np.exp(-t/self.tau_2) ) /
( np.exp(-self.t_peak()/self.tau_1) - np.exp(-self.t_peak()/self.tau_2) ) )
val[t < 0] = 0
return val
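# Illustrative check (added, with made-up time constants): the dual-exponential beta
# function is normalised so that its value at t_peak is exactly 1.
class _DemoSyn(SynChannel):
    def __init__(self):
        self.tau_1, self.tau_2 = 0.5, 2.4
_demo = _DemoSyn()
print(_demo.beta(np.array([_demo.t_peak()])))  # expected ~[1.]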
def syn_voltage_clamp(channel, DT_V_seq, nest_dt=0.1):
"Run voltage clamp with voltage V through intervals DT with single spike at time 1"
spike_time = 1.0
delay = 1.0
nest.ResetKernel()
nest.resolution = nest_dt
try:
nrn = nest.Create('ht_neuron', params={'theta': 1e6, 'theta_eq': 1e6,
'instant_unblock_NMDA': channel.instantaneous})
except:
nrn = nest.Create('ht_neuron', params={'theta': 1e6, 'theta_eq': 1e6})
mm = nest.Create('multimeter',
params={'record_from': ['g_'+channel.receptor],
'interval': nest_dt})
sg = nest.Create('spike_generator', params={'spike_times': [spike_time]})
nest.Connect(mm, nrn)
nest.Connect(sg, nrn, syn_spec={'weight': 1.0, 'delay': delay,
'receptor_type': channel.rec_code})
# ensure we start from equilibrated state
nrn.set(V_m=DT_V_seq[0][1], equilibrate=True, voltage_clamp=True)
for DT, V in DT_V_seq:
nrn.set(V_m=V, voltage_clamp=True)
nest.Simulate(DT)
t_end = nest.biological_time
# simulate a little more so we get all data up to t_end to multimeter
nest.Simulate(2 * nest.min_delay)
tmp = pd.DataFrame(mm.get('events'))
nest_res = tmp[tmp.times <= t_end]
# Control part
t_old = 0.
t_all, g_all = [], []
m_fast_old = (channel.m_inf(DT_V_seq[0][1])
if channel.receptor == 'NMDA' and not channel.instantaneous else None)
m_slow_old = (channel.m_inf(DT_V_seq[0][1])
if channel.receptor == 'NMDA' and not channel.instantaneous else None)
for DT, V in DT_V_seq:
t_loc = np.arange(0., DT+0.1*nest_dt, nest_dt)
g_loc = channel.g(t_old+t_loc-(spike_time+delay), V, m_fast_old, m_slow_old)
t_all.extend(t_old + t_loc[1:])
g_all.extend(g_loc[1:])
m_fast_old = channel.m_fast[-1] if m_fast_old is not None else None
m_slow_old = channel.m_slow[-1] if m_slow_old is not None else None
t_old = t_all[-1]
ctrl_res = pd.DataFrame({'times': t_all, 'g_'+channel.receptor: g_all})
return nest_res, ctrl_res
nest.ResetKernel()
class PlainChannel(SynChannel):
def __init__(self, hp, receptor):
self.hp = hp
self.receptor = receptor
self.rec_code = hp['receptor_types'][receptor]
self.tau_1 = hp['tau_rise_'+receptor]
self.tau_2 = hp['tau_decay_'+receptor]
self.g_peak = hp['g_peak_'+receptor]
self.E_rev = hp['E_rev_'+receptor]
def g(self, t, V, mf0, ms0):
return self.g_peak * self.beta(t)
def I(self, t, V):
return - self.g(t) * (V-self.E_rev)
ampa = PlainChannel(nest.GetDefaults('ht_neuron'), 'AMPA')
am_n, am_c = syn_voltage_clamp(ampa, [(25, -70.)], nest_dt=0.1)
plt.subplot(1, 2, 1);
plt.plot(am_n.times, am_n.g_AMPA, label='NEST');
plt.plot(am_c.times, am_c.g_AMPA, label='Control');
plt.xlabel('Time [ms]');
plt.ylabel('g_AMPA');
plt.title('AMPA Channel');
plt.subplot(1, 2, 2);
plt.plot(am_n.times, (am_n.g_AMPA-am_c.g_AMPA)/am_c.g_AMPA);
plt.xlabel('Time [ms]');
plt.ylabel('Rel error');
plt.title('AMPA rel error');
ampa = PlainChannel(nest.GetDefaults('ht_neuron'), 'AMPA')
am_n, am_c = syn_voltage_clamp(ampa, [(25, -70.)], nest_dt=0.001)
plt.subplot(1, 2, 1);
plt.plot(am_n.times, am_n.g_AMPA, label='NEST');
plt.plot(am_c.times, am_c.g_AMPA, label='Control');
plt.xlabel('Time [ms]');
plt.ylabel('g_AMPA');
plt.title('AMPA Channel');
plt.subplot(1, 2, 2);
plt.plot(am_n.times, (am_n.g_AMPA-am_c.g_AMPA)/am_c.g_AMPA);
plt.xlabel('Time [ms]');
plt.ylabel('Rel error');
plt.title('AMPA rel error');
gaba_a = PlainChannel(nest.GetDefaults('ht_neuron'), 'GABA_A')
ga_n, ga_c = syn_voltage_clamp(gaba_a, [(50, -70.)])
plt.subplot(1, 2, 1);
plt.plot(ga_n.times, ga_n.g_GABA_A, label='NEST');
plt.plot(ga_c.times, ga_c.g_GABA_A, label='Control');
plt.xlabel('Time [ms]');
plt.ylabel('g_GABA_A');
plt.title('GABA_A Channel');
plt.subplot(1, 2, 2);
plt.plot(ga_n.times, (ga_n.g_GABA_A-ga_c.g_GABA_A)/ga_c.g_GABA_A);
plt.xlabel('Time [ms]');
plt.ylabel('Rel error');
plt.title('GABA_A rel error');
gaba_b = PlainChannel(nest.GetDefaults('ht_neuron'), 'GABA_B')
gb_n, gb_c = syn_voltage_clamp(gaba_b, [(750, -70.)])
plt.subplot(1, 2, 1);
plt.plot(gb_n.times, gb_n.g_GABA_B, label='NEST');
plt.plot(gb_c.times, gb_c.g_GABA_B, label='Control');
plt.xlabel('Time [ms]');
plt.ylabel('g_GABA_B');
plt.title('GABA_B Channel');
plt.subplot(1, 2, 2);
plt.plot(gb_n.times, (gb_n.g_GABA_B-gb_c.g_GABA_B)/gb_c.g_GABA_B);
plt.xlabel('Time [ms]');
plt.ylabel('Rel error');
plt.title('GABA_B rel error');
class NMDAInstantChannel(SynChannel):
def __init__(self, hp, receptor):
self.hp = hp
self.receptor = receptor
self.rec_code = hp['receptor_types'][receptor]
self.tau_1 = hp['tau_rise_'+receptor]
self.tau_2 = hp['tau_decay_'+receptor]
self.g_peak = hp['g_peak_'+receptor]
self.E_rev = hp['E_rev_'+receptor]
self.S_act = hp['S_act_NMDA']
self.V_act = hp['V_act_NMDA']
self.instantaneous = True
def m_inf(self, V):
return 1. / ( 1. + np.exp(-self.S_act*(V-self.V_act)))
def g(self, t, V, mf0, ms0):
return self.g_peak * self.m_inf(V) * self.beta(t)
def I(self, t, V):
return - self.g(t) * (V-self.E_rev)
nmdai = NMDAInstantChannel(nest.GetDefaults('ht_neuron'), 'NMDA')
ni_n, ni_c = syn_voltage_clamp(nmdai, [(50, -60.), (50, -50.), (50, -20.), (50, 0.), (50, -60.)])
plt.subplot(1, 2, 1);
plt.plot(ni_n.times, ni_n.g_NMDA, label='NEST');
plt.plot(ni_c.times, ni_c.g_NMDA, label='Control');
plt.xlabel('Time [ms]');
plt.ylabel('g_NMDA');
plt.title('NMDA Channel (instant unblock)');
plt.subplot(1, 2, 2);
plt.plot(ni_n.times, (ni_n.g_NMDA-ni_c.g_NMDA)/ni_c.g_NMDA);
plt.xlabel('Time [ms]');
plt.ylabel('Rel error');
plt.title('NMDA (inst) rel error');
class NMDAChannel(SynChannel):
def __init__(self, hp, receptor):
self.hp = hp
self.receptor = receptor
self.rec_code = hp['receptor_types'][receptor]
self.tau_1 = hp['tau_rise_'+receptor]
self.tau_2 = hp['tau_decay_'+receptor]
self.g_peak = hp['g_peak_'+receptor]
self.E_rev = hp['E_rev_'+receptor]
self.S_act = hp['S_act_NMDA']
self.V_act = hp['V_act_NMDA']
self.tau_fast = hp['tau_Mg_fast_NMDA']
self.tau_slow = hp['tau_Mg_slow_NMDA']
self.instantaneous = False
def m_inf(self, V):
return 1. / ( 1. + np.exp(-self.S_act*(V-self.V_act)) )
def dm(self, m, t, V, tau):
return ( self.m_inf(V) - m ) / tau
def g(self, t, V, mf0, ms0):
self.m_fast = si.odeint(self.dm, mf0, t, args=(V, self.tau_fast))
self.m_slow = si.odeint(self.dm, ms0, t, args=(V, self.tau_slow))
a = 0.51 - 0.0028 * V
m_inf = self.m_inf(V)
mfs = self.m_fast[:]
mfs[mfs > m_inf] = m_inf
mss = self.m_slow[:]
mss[mss > m_inf] = m_inf
m = np.squeeze(a * mfs + ( 1 - a ) * mss)
return self.g_peak * m * self.beta(t)
def I(self, t, V):
raise NotImplementedError()
nmda = NMDAChannel(nest.GetDefaults('ht_neuron'), 'NMDA')
nm_n, nm_c = syn_voltage_clamp(nmda, [(50, -70.), (50, -50.), (50, -20.), (50, 0.), (50, -60.)])
plt.subplot(1, 2, 1);
plt.plot(nm_n.times, nm_n.g_NMDA, label='NEST');
plt.plot(nm_c.times, nm_c.g_NMDA, label='Control');
plt.xlabel('Time [ms]');
plt.ylabel('g_NMDA');
plt.title('NMDA Channel');
plt.subplot(1, 2, 2);
plt.plot(nm_n.times, (nm_n.g_NMDA-nm_c.g_NMDA)/nm_c.g_NMDA);
plt.xlabel('Time [ms]');
plt.ylabel('Rel error');
plt.title('NMDA rel error');
nest.ResetKernel()
sp = nest.GetDefaults('ht_synapse')
P0 = sp['P']
dP = sp['delta_P']
tP = sp['tau_P']
spike_times = [10., 12., 20., 20.5, 100., 200., 1000.]
expected = [(0., P0, P0)]
for idx, t in enumerate(spike_times):
tlast, Psend, Ppost = expected[idx]
Psend = 1 - (1-Ppost)*math.exp(-(t-tlast)/tP)
expected.append((t, Psend, (1-dP)*Psend))
expected_weights = list(zip(*expected[1:]))[1]
sg = nest.Create('spike_generator', params={'spike_times': spike_times})
n = nest.Create('parrot_neuron', 2)
wr = nest.Create('weight_recorder')
nest.SetDefaults('ht_synapse', {'weight_recorder': wr, 'weight': 1.0})
nest.Connect(sg, n[:1])
nest.Connect(n[:1], n[1:], syn_spec='ht_synapse')
nest.Simulate(1200)
rec_weights = wr.get('events', 'weights')
print('Recorded weights:', rec_weights)
print('Expected weights:', expected_weights)
print('Difference :', np.array(rec_weights) - np.array(expected_weights))
nest.ResetKernel()
nrn = nest.Create('ht_neuron')
ppg = nest.Create('pulsepacket_generator', n=4,
params={'pulse_times': [700., 1700., 2700., 3700.],
'activity': 700, 'sdev': 50.})
pr = nest.Create('parrot_neuron', n=4)
mm = nest.Create('multimeter',
params={'interval': 0.1,
'record_from': ['V_m', 'theta',
'g_AMPA', 'g_NMDA',
'g_GABA_A', 'g_GABA_B',
'I_NaP', 'I_KNa', 'I_T', 'I_h']})
weights = {'AMPA': 25., 'NMDA': 20., 'GABA_A': 10., 'GABA_B': 1.}
receptors = nest.GetDefaults('ht_neuron')['receptor_types']
nest.Connect(ppg, pr, 'one_to_one')
for p, (rec_name, rec_wgt) in zip(pr, weights.items()):
nest.Connect(p, nrn, syn_spec={'synapse_model': 'ht_synapse',
'receptor_type': receptors[rec_name],
'weight': rec_wgt})
nest.Connect(mm, nrn)
nest.Simulate(5000)
data = nest.GetStatus(mm)[0]['events']
t = data['times']
def texify_name(name):
return r'${}_{{\mathrm{{{}}}}}$'.format(*name.split('_'))
fig = plt.figure(figsize=(12,10))
Vax = fig.add_subplot(311)
Vax.plot(t, data['V_m'], 'k', lw=1, label=r'$V_m$')
Vax.plot(t, data['theta'], 'r', alpha=0.5, lw=1, label=r'$\Theta$')
Vax.set_ylabel('Potential [mV]')
Vax.legend(fontsize='small')
Vax.set_title('ht_neuron driven by sinousiodal Poisson processes')
Iax = fig.add_subplot(312)
for iname, color in (('I_h', 'blue'), ('I_KNa', 'green'),
('I_NaP', 'red'), ('I_T', 'cyan')):
Iax.plot(t, data[iname], color=color, lw=1, label=texify_name(iname))
#Iax.set_ylim(-60, 60)
Iax.legend(fontsize='small')
Iax.set_ylabel('Current [mV]')
Gax = fig.add_subplot(313)
for gname, sgn, color in (('g_AMPA', 1, 'green'), ('g_GABA_A', -1, 'red'),
('g_GABA_B', -1, 'cyan'), ('g_NMDA', 1, 'magenta')):
Gax.plot(t, sgn*data[gname], lw=1, label=texify_name(gname), color=color)
#Gax.set_ylim(-150, 150)
Gax.legend(fontsize='small')
Gax.set_ylabel('Conductance')
Gax.set_xlabel('Time [ms]');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Choose Platforms and Products
Step2: Choose product
Step3: Get the Extents of the Cube
Step4: Define the Extents of the Analysis
Step5: Load Data from the Data Cube
Step6: Derive Products
Step7: Water Classification
Step8: Normalized Indices
Step9: TSM
Step10: EVI
Step11: Combine Data
Step12: Export Data
Step13: Check to see what files were exported. The size of these files is also shown.
Step14: Sanity check using gdalinfo to make sure that all of our bands exist .
Step15: Zip all GeoTIFFs.
Step16: Export to NetCDF
|
<ASSISTANT_TASK:>
Python Code:
import sys
import os
sys.path.append(os.environ.get('NOTEBOOK_ROOT'))
import xarray as xr
import numpy as np
import datacube
from utils.data_cube_utilities.data_access_api import DataAccessApi
from datacube.utils.aws import configure_s3_access
configure_s3_access(requester_pays=True)
api = DataAccessApi()
dc = api.dc
list_of_products = dc.list_products()
list_of_products
platform = 'LANDSAT_7'
product = 'ls7_usgs_sr_scene'
collection = 'c1'
level = 'l2'
from utils.data_cube_utilities.dc_load import get_product_extents
from utils.data_cube_utilities.dc_time import dt_to_str
full_lat, full_lon, min_max_dates = get_product_extents(api, platform, product)
# Print the extents of the combined data.
print("Latitude Extents:", full_lat)
print("Longitude Extents:", full_lon)
print("Time Extents:", list(map(dt_to_str, min_max_dates)))
## The code below renders a map that can be used to orient yourself with the region.
from utils.data_cube_utilities.dc_display_map import display_map
display_map(full_lat, full_lon)
######### Ghana - Pambros Salt Ponds ##################
lon = (-0.3013, -0.2671)
lat = (5.5155, 5.5617)
time_extents = ('2015-01-01', '2015-12-31')
from utils.data_cube_utilities.dc_display_map import display_map
display_map(lat, lon)
landsat_dataset = dc.load(latitude = lat,
longitude = lon,
platform = platform,
time = time_extents,
product = product,
measurements = ['red', 'green', 'blue', 'nir', 'swir1', 'swir2', 'pixel_qa'],
dask_chunks={'time':1, 'latitude':1000, 'longitude':1000})
from utils.data_cube_utilities.clean_mask import landsat_qa_clean_mask
plt_col_lvl_params = dict(platform=platform, collection=collection, level=level)
clear_xarray = landsat_qa_clean_mask(landsat_dataset, cover_types=['clear'], **plt_col_lvl_params)
water_xarray = landsat_qa_clean_mask(landsat_dataset, cover_types=['water'], **plt_col_lvl_params)
shadow_xarray = landsat_qa_clean_mask(landsat_dataset, cover_types=['cld_shd'], **plt_col_lvl_params)
# clean_xarray = xr.ufuncs.logical_or(clear_xarray, water_xarray).rename("clean_mask")
clean_xarray = (clear_xarray | water_xarray).rename("clean_mask")
# landsat_qa_clean_mask(cover_types=[])
from utils.data_cube_utilities.dc_water_classifier import wofs_classify
water_classification = wofs_classify(landsat_dataset,
clean_mask = clean_xarray,
mosaic = False)
wofs_xarray = water_classification.wofs
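# Illustrative summary (not part of the original notebook): fraction of clear
# observations classified as water over the whole scene and time range.
print(float(wofs_xarray.where(clean_xarray).mean()))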
def NDVI(dataset):
return ((dataset.nir - dataset.red)/(dataset.nir + dataset.red)).rename("NDVI")
def NDWI(dataset):
return ((dataset.green - dataset.nir)/(dataset.green + dataset.nir)).rename("NDWI")
def NDBI(dataset):
return ((dataset.swir2 - dataset.nir)/(dataset.swir2 + dataset.nir)).rename("NDBI")
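# Illustrative note (added for clarity): each of these indices is a normalised difference
# bounded to [-1, 1]. For example, with reflectances nir = 0.4 and red = 0.1,
# NDVI = (0.4 - 0.1) / (0.4 + 0.1) = 0.6, a value typical of dense vegetation.
print((0.4 - 0.1) / (0.4 + 0.1)) # 0.6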
ndbi_xarray = NDBI(landsat_dataset) # Urbanization - Reds
ndvi_xarray = NDVI(landsat_dataset) # Dense Vegetation - Greens
ndwi_xarray = NDWI(landsat_dataset) # High Concentrations of Water - Blues
from utils.data_cube_utilities.dc_water_quality import tsm
tsm_xarray = tsm(landsat_dataset, clean_mask = wofs_xarray.values.astype(bool) ).tsm
def EVI(dataset, c1 = None, c2 = None, L = None):
return ((dataset.nir - dataset.red)/((dataset.nir + (c1 * dataset.red) - (c2 *dataset.blue) + L))).rename("EVI")
evi_xarray = EVI(landsat_dataset, c1 = 6, c2 = 7.5, L = 1 )
combined_dataset = xr.merge([landsat_dataset,
clean_xarray,
clear_xarray,
water_xarray,
shadow_xarray,
evi_xarray,
ndbi_xarray,
ndvi_xarray,
ndwi_xarray,
wofs_xarray,
tsm_xarray])
# Copy original crs to merged dataset
combined_dataset = combined_dataset.assign_attrs(landsat_dataset.attrs)
combined_dataset
from utils.data_cube_utilities.import_export import export_xarray_to_multiple_geotiffs
# Ensure the output directory exists before writing to it.
if platform == 'LANDSAT_7':
!mkdir -p output/geotiffs/landsat7
else:
!mkdir -p output/geotiffs/landsat8
output_path = "output/geotiffs/landsat{0}/landsat{0}".format(7 if platform=='LANDSAT_7' else 8)
export_xarray_to_multiple_geotiffs(combined_dataset, output_path)
if platform == 'LANDSAT_7':
!ls -lah output/geotiffs/landsat7/*.tif
else:
!ls -lah output/geotiffs/landsat8/*.tif
if platform == 'LANDSAT_7':
!gdalinfo output/geotiffs/landsat7/landsat7_2015_01_09_03_06_13.tif
else:
!gdalinfo output/geotiffs/landsat8/landsat8_2015_01_01_03_07_41.tif
if platform == 'LANDSAT_7':
!tar -cvzf output/geotiffs/landsat7/landsat_7.tar.gz output/geotiffs/landsat7/*.tif
else:
!tar -cvzf output/geotiffs/landsat8/landsat_8.tar.gz output/geotiffs/landsat8/*.tif
combined_dataset
import os
import pathlib
from utils.data_cube_utilities.import_export import export_xarray_to_netcdf
# Ensure the output directory exists before writing to it.
ls_num = 7 if platform=='LANDSAT_7' else 8
output_dir = f"output/netcdfs/landsat{ls_num}"
pathlib.Path(output_dir).mkdir(parents=True, exist_ok=True)
output_file_path = output_dir + f"/ls{ls_num}_netcdf_example.nc"
export_xarray_to_netcdf(combined_dataset.red, output_file_path)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We do this to save time with all future functions. Instead of needing to type (for example)
Step2: Elements of a one-dimensional array are accessed with the same syntax as a list
Step3: Differences between arrays and lists
Step4: but this NumPy array can only contain integers, so you get an error when you try to add a string to a NumPy array.
Step5: Once an array has been created, its dtype is fixed and it can only store elements of the same type. For this example where the dtype is integer, if we store a floating point number it will be automatically converted into an integer
Step6: Array Creation
Step7: Numpy also offers the arange function, which works like the builtin range but returns an array instead of a list
Step8: But notice something important about how it is used
Step9: The max number in the range is exclusive because python numbering starts at zero.
Step10: It is often useful to create arrays with random numbers that follow a specific distribution. The np.random module contains a number of functions that can be used to this effect, for example this will produce an array of 5 random samples taken from a standard normal distribution, or "bell curve" with a mean (average) value of 0 and variance (width) of 1
Step11: whereas this will also give 5 samples, but from a normal distribution (bell curve) with a mean of 10 and a variance of 3
Step12: Exercise 1
Step13: Exercise 2
Step14: Multidimensional arrays
Step15: How are these different? Print them to see.
Step16: With two-dimensional arrays we start seeing the power of NumPy. If we wanted to access the second element in the first list of lst2, we would have to type lst2[0][1]. We do this because first we need to access the "zeroth" list (remember Python starts numbering elements at 0), and then the "first" element of that list. (Again, since the number 1 corresponds to the second item in a list.) Whereas, if we wanted to access the second element in the first row of arr2, we simply tell it the row number and column number. Remembering that since NumPy starts numbering at 0, we would access the "zeroth" row and "first" element as [0,1].
Step17: Most of the array creation functions listed above can be used with more than one dimension, for example
Step18: The shape of an array can be changed at any time, as long as the total number of elements is unchanged. For example, if we want a 2x4 array with numbers increasing from 0, the easiest way to create it is
Step19: But note that reshaping (like most NumPy operations) provides a view of the same memory
Step20: This lack of copying allows for very efficient vectorized operations.
Step21: It's also possible to do the computation along a single dimension, by passing the axis parameter; for example
Step22: As you can see in this example, the value of the axis parameter is the dimension which will be consumed once the operation has been carried out. This is why to sum along the rows we use axis=1.
Step23: NumPy can also create some useful matrices
Step24: To access the elements of a multidimensional (in this case 2D) array
Step25: Exercise 3
Step26: Exercise 4
Step27: Exercise 5
Step28: Slicing Basics
Step29: You can also give the step size
Step30: Exercise 6
Step31: Boolean Masks
Step32: Now that we have this mask, we can use it to either read those values or to reset them to 0
Step33: Exercise 7
Step34: Operations with Arrays
Step35: Importantly, you must remember that even the multiplication operator is by default applied element-wise, it is not the matrix multiplication from linear algebra (as is the case in Matlab, for example)
Step36: We may also multiply an array by a scalar
Step37: Universal Functions or ufuncs
Step38: These are examples of Universal Functions or ufuncs in NumPy. They operate element-wise on an array. We've already seen examples of these in the various arithmetic operations
Step39: These are binary ufuncs
Step40: Ufuncs are very fast. If you wanted to create a very large array and then add 1 to every element in the array, you could write a for loop to do it element by element, but it would take a long time to complete compared to what you can do with arrays. Here's an example, using the %%timeit magic function to show how long the two methods take.
Step41: NumPy ufuncs are faster than Python functions involving loops, because the looping happens in compiled code. This is only possible when types are known beforehand, which is why NumPy arrays must be typed.
Step42: How is this different from min() and max()?
Step43: We've only scratched the surface of what NumPy can do. It would be overwhelming to try to show you all of NumPy's capabilities in an introductory notebook, so we will end here with a final exercise. As you follow along in the subsequent notebooks, notice where and how NumPy arrays are used. You may have ideas about something you'd like to do with an array to solve a problem or complete an exercise. Check the NumPy documentation or Google for help with more advanced commands and syntax.
Step44: (b) The index of each boolean element represents the number. “Cross out” 0 and 1, which are not primes. You can either set them to False or 0 (which python recognizes as equivalent to False for boolean types) in the array is_prime. Then print the array again to see what changed.
Step45: (c) For each subsequent integer j starting from 2, cross out its higher multiples.
Step46: (d) Look up the documentation for np.nonzero (try help(np.nonzero) or np.nonzero?) and use it to print the prime numbers that are left in the array is_prime.
Step47: (e) Finally, combine the above code into a new function, called eratosthenes_sieve() that takes one argument, maximum, the maximum number to test for primes, and returns an array containing the prime numbers between 2 and maximum.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
lst = [10, 20, 30, 40]
arr = np.array([10, 20, 30, 40])
arr[0]
lst[0]
arr[-1]
arr[2:]
lst[-1] = 'a string inside a list'
lst
arr[-1] = 'a string inside an array'
arr.dtype
arr[-1] = 1.234
arr
np.zeros(5, dtype=float)
np.zeros(3, dtype=int)
np.zeros(3, dtype=complex)
print '5 ones:', np.ones(5)
a = np.empty(4)
a.fill(5.5)
a
a = np.empty(3,dtype=int)
a.fill(7)
a
np.arange(5)
c = np.arange(2,16,2)
c
print "A linear sequence between 0 and 1 with 4 elements:"
print np.linspace(0, 1, 4)
print "A logarithmic sequence between 10**1 and 10**3 with 4 elements:"
print np.logspace(1, 3, 4)
np.random.randn(5)
norm10 = np.random.normal(10, 3, 5)
norm10
#Your code here
#Your code here
lst2 = [[1, 2], [3, 4]]
arr2 = np.array([[1, 2], [3, 4]])
lst2
print lst2[0][1]
print arr2[0,1]
np.zeros((2,3))
arr = np.arange(8).reshape(2, 4)
print arr
arr = np.arange(8)
arr2 = arr.reshape(2, 4)
arr[0] = 1000
print arr
print arr2
print 'Slicing in the second row:', arr2[1, 2:4]
print 'All rows, third column :', arr2[:, 2]
print 'First row: ', arr2[0]
print 'Second row: ', arr2[1]
#Print some properties of the array arr2
print 'Data type :', arr2.dtype
print 'Total number of elements :', arr2.size
print 'Number of dimensions :', arr2.ndim
print 'Shape (dimensionality) :', arr2.shape
print 'Memory used (in bytes) :', arr2.nbytes
#Print some useful information that the arrays can calculate for us
print 'Minimum and maximum :', arr2.min(), arr2.max()
print 'Sum and product of all elements :', arr2.sum(), arr2.prod()
print 'Mean and standard deviation :', arr2.mean(), arr2.std()
print 'For the following array:\n', arr2
print 'The sum of elements along the rows is :', arr2.sum(axis=1)
print 'The sum of elements along the columns is :', arr2.sum(axis=0)
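# The same idea extends to higher dimensions (illustrative): summing a (2, 3, 4) array
# over axis=2 consumes the last dimension and leaves a (2, 3) result.
print 'Shape after summing a (2, 3, 4) array over axis 2:', np.arange(24).reshape(2, 3, 4).sum(axis=2).shape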
print 'Array:\n', arr2
print 'Transpose:\n', arr2.T
#identity matrix
c = np.eye(3)
c
#diagonal matrix wth elements of an array
d = np.diag(np.array([1,2,3,4]))
d
e = np.diag(np.linspace(0,1,6))
e
e[1,1]
e[2,2]
#Your code here
#Your code here
#Your code here
a = np.arange(10)
a
a[:4]
a[2:4]
a[5:]
a[:]
a[2:9:3]
#Your code here
norm10 = np.random.normal(10, 3, 5)
norm10
mask = norm10 > 9
mask
print 'Values above 9:', norm10[mask]
print 'Resetting all values above 9 to 0...'
norm10[mask] = 0
print norm10
#Your code here
arr1 = np.arange(4)
arr2 = np.arange(10, 14)
print arr1, '+', arr2, '=', arr1+arr2
print arr1, '*', arr2, '=', arr1*arr2
1.5 * arr1
x = np.linspace(0, 2*np.pi, 100)
y = np.sin(x)
x = np.random.random(4)
print x
print x + 1 # add 1 to each element of x
print x * 2 # multiply each element of x by 2
print x * x # multiply each element of x by itself
print x[1:] - x[:-1]
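# NumPy also provides this difference directly (same result as the expression above):
print np.diff(x)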
-x
np.sin(x)
x = np.random.random(10000)
%%timeit
# compute element-wise x + 1 via a ufunc
y = np.zeros_like(x)
y = x + 1
%%timeit
# compute element-wise x + 1 via a loop
y = np.zeros_like(x)
for i in range(len(x)):
y[i] = x[i] + 1
x = np.random.random(5)
print x
print np.minimum(x, 0.5)
print np.maximum(x, 0.5)
print np.min(x)
print np.max(x)
#Your code here
#Your code here
#Your code here
#Your code here
#Your code here
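# One possible solution sketch for the sieve exercise above (illustrative, not the official answer):
def eratosthenes_sieve(maximum):
    is_prime = np.ones(maximum + 1, dtype=bool)
    is_prime[:2] = False                      # 0 and 1 are not primes
    for j in range(2, int(np.sqrt(maximum)) + 1):
        if is_prime[j]:
            is_prime[j*j::j] = False          # cross out the higher multiples of j
    return np.nonzero(is_prime)[0]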
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Question 3
Step2: ...and do a short summary about the data;
Step3: Question 5
Step4: Question 6
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
deaths_df = pd.read_csv('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv')
# Question 3
# Display first 5 rows
# of the loaded data
deaths_df.head(5)
# Yank out 3 unecessary columns
# i.e 'Lat','Long','Province/State'
# Leaving 'Country/Region' & deaths per day
# Columns
death_cases_worldwide = deaths_df.drop(['Lat','Long','Province/State'], axis=1);
death_cases_worldwide
death_cases_worldwide_ = death_cases_worldwide.head(5)
def calc_increment(x):
    # For each date column (after the first), compute the increase over the previous
    # day's column. Note: as written this only fills in row 0 of the preview frame.
    current_col_idx = death_cases_worldwide_.columns.get_loc(x.name)
    if current_col_idx > 1:
        prev_column_idx = current_col_idx - 1
        prev_column = death_cases_worldwide_.iloc[:, prev_column_idx:]
        death_cases_of_today = x.iloc[0]
        death_cases_of_yesterday = prev_column.iloc[0].iloc[0]
        increment = int(death_cases_of_today) - int(death_cases_of_yesterday)
        x.iloc[0] = increment
death_cases_worldwide_.apply( calc_increment, axis=0 )
# print(death_cases_worldwide_)
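# A more compact alternative (illustrative): pandas can compute the day-over-day
# increase for every country at once by differencing along the date columns.
daily_increase = death_cases_worldwide.set_index('Country/Region').diff(axis=1)
daily_increase.head()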
# Import library
import matplotlib.pyplot as plt
# Import numpy
import numpy as np
#Specify X axis to be that of Country/Region
death_cases_worldwide.plot(x='Country/Region')
#Finally Show the graph
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: DataLoader helpers
Step2: DataLoader -
Step3: Arguments to DataLoader
Step4: If you don't set bs, then dataset is assumed to provide an iterator or a __getitem__ that returns a batch.
Step5: If you do set bs, then dataset is assumed to provide an iterator or a __getitem__ that returns a single item of a batch.
Step6: Iterable dataloaders require specific tests.
Step7: Override get_idxs to return the same index until consumption of the DL. This is intented to test consistent sampling behavior when num_workers>1.
Step8: Export -
|
<ASSISTANT_TASK:>
Python Code:
#|export
from __future__ import annotations
from fastai.torch_basics import *
from torch.utils.data.dataloader import _MultiProcessingDataLoaderIter,_SingleProcessDataLoaderIter,_DatasetKind
_loaders = (_MultiProcessingDataLoaderIter,_SingleProcessDataLoaderIter)
#|hide
from nbdev.showdoc import *
bs = 4
letters = list(string.ascii_lowercase)
#|export
def _wif(worker_id):
set_num_threads(1)
info = get_worker_info()
ds = info.dataset.d
ds.num_workers,ds.offs = info.num_workers,info.id
set_seed(info.seed)
ds.wif()
class _FakeLoader:
def _fn_noops(self, x=None, *args, **kwargs): return x
_IterableDataset_len_called,_auto_collation,collate_fn,drop_last = None,False,_fn_noops,False
_index_sampler,generator,prefetch_factor = Inf.count,None,2
dataset_kind = _dataset_kind = _DatasetKind.Iterable
def __init__(self, d, pin_memory, num_workers, timeout, persistent_workers,pin_memory_device):
self.dataset,self.default,self.worker_init_fn,self.pin_memory_device = self,d,_wif,pin_memory_device
store_attr('d,pin_memory,num_workers,timeout,persistent_workers,pin_memory_device')
def __iter__(self): return iter(self.d.create_batches(self.d.sample()))
@property
def multiprocessing_context(self): return (None,multiprocessing)[self.num_workers>0]
@contextmanager
def no_multiproc(self):
old_num_workers = self.num_workers
try:
self.num_workers = 0
yield self.d
finally: self.num_workers = old_num_workers
_collate_types = (ndarray, Tensor, typing.Mapping, str)
#|export
def fa_collate(t):
"A replacement for PyTorch `default_collate` which maintains types and handles `Sequence`s"
b = t[0]
return (default_collate(t) if isinstance(b, _collate_types)
else type(t[0])([fa_collate(s) for s in zip(*t)]) if isinstance(b, Sequence)
else default_collate(t))
#e.g. x is int, y is tuple
t = [(1,(2,3)),(1,(2,3))]
test_eq(fa_collate(t), default_collate(t))
test_eq(L(fa_collate(t)).map(type), [Tensor,tuple])
t = [(1,(2,(3,4))),(1,(2,(3,4)))]
test_eq(fa_collate(t), default_collate(t))
test_eq(L(fa_collate(t)).map(type), [Tensor,tuple])
test_eq(L(fa_collate(t)[1]).map(type), [Tensor,tuple])
#|export
def fa_convert(t):
"A replacement for PyTorch `default_convert` which maintains types and handles `Sequence`s"
return (default_convert(t) if isinstance(t, _collate_types)
else type(t)([fa_convert(s) for s in t]) if isinstance(t, Sequence)
else default_convert(t))
t0 = array([1,2])
t = [t0,(t0,t0)]
test_eq(fa_convert(t), default_convert(t))
test_eq(L(fa_convert(t)).map(type), [Tensor,tuple])
#|export
class SkipItemException(Exception):
"Raised to notify `DataLoader` to skip an item"
pass
show_doc(SkipItemException, title_level=3)
#|export
def collate_error(e:Exception, batch):
"Raises error when the batch could not collate, stating what items in the batch are different sizes and their types"
err = f'Error when trying to collate the data into batches with fa_collate, at least two tensors in the batch are not the same size.\n\n'
# we need to iterate through the entire batch and find a mismatch
length = len(batch[0])
for idx in range(length): # for each type in the batch
for i, item in enumerate(batch):
if i == 0: shape_a, type_a = item[idx].shape, item[idx].__class__.__name__
elif item[idx].shape != shape_a:
shape_b = item[idx].shape
if shape_a != shape_b:
err += f'Mismatch found on axis {idx} of the batch and is of type `{type_a}`:\n\tItem at index 0 has shape: {shape_a}\n\tItem at index {i} has shape: {shape_b}\n\nPlease include a transform in `after_item` that ensures all data of type {type_a} is the same size'
e.args = [err]
raise
#|hide
batch = [torch.rand(3, 375, 500), torch.rand(3, 375, 500), torch.rand(3, 500, 333)]
with ExceptionExpected(RuntimeError, "Mismatch found on axis 0 of the batch and is of type `Tensor`"):
try:
fa_collate(batch)
except Exception as e:
collate_error(e, batch)
#|export
@funcs_kwargs
class DataLoader(GetAttr):
_noop_methods = 'wif before_iter after_item before_batch after_batch after_iter'.split()
for o in _noop_methods: exec(f"def {o}(self, x=None, *args, **kwargs): return x")
_methods = _noop_methods + 'create_batches create_item create_batch retain \
get_idxs sample shuffle_fn do_batch create_batch'.split()
_default = 'dataset'
def __init__(self, dataset=None, bs=None, num_workers=0, pin_memory=False, timeout=0, batch_size=None,
shuffle=False, drop_last=False, indexed=None, n=None, device=None, persistent_workers=False,
pin_memory_device='', **kwargs):
if batch_size is not None: bs = batch_size # PyTorch compatibility
assert not (bs is None and drop_last)
if indexed is None: indexed = (hasattr(dataset,'__getitem__')
and not isinstance(dataset, IterableDataset))
if not indexed and shuffle: raise ValueError("Can only shuffle an indexed dataset (not an iterable one).")
if n is None:
try: n = len(dataset)
except TypeError: pass
store_attr('dataset,bs,shuffle,drop_last,indexed,n,pin_memory,timeout,device')
self.rng,self.num_workers,self.offs = random.Random(random.randint(0,2**32-1)),1,0
if sys.platform == "win32" and IN_NOTEBOOK and num_workers > 0:
print("Due to IPython and Windows limitation, python multiprocessing isn't available now.")
print("So `number_workers` is changed to 0 to avoid getting stuck")
num_workers = 0
self.fake_l = _FakeLoader(self, pin_memory, num_workers, timeout, persistent_workers=persistent_workers,
pin_memory_device=pin_memory_device)
def __len__(self):
if self.n is None: raise TypeError
if self.bs is None: return self.n
return self.n//self.bs + (0 if self.drop_last or self.n%self.bs==0 else 1)
def get_idxs(self):
idxs = Inf.count if self.indexed else Inf.nones
if self.n is not None: idxs = list(itertools.islice(idxs, self.n))
if self.shuffle: idxs = self.shuffle_fn(idxs)
return idxs
def sample(self):
return (b for i,b in enumerate(self.__idxs) if i//(self.bs or 1)%self.num_workers==self.offs)
def __iter__(self):
self.randomize()
self.before_iter()
self.__idxs=self.get_idxs() # called in context of main process (not workers/subprocesses)
for b in _loaders[self.fake_l.num_workers==0](self.fake_l):
# pin_memory causes tuples to be converted to lists, so convert them back to tuples
if self.pin_memory and type(b) == list: b = tuple(b)
if self.device is not None: b = to_device(b, self.device)
yield self.after_batch(b)
self.after_iter()
if hasattr(self, 'it'): del(self.it)
def create_batches(self, samps):
if self.dataset is not None: self.it = iter(self.dataset)
res = filter(lambda o:o is not None, map(self.do_item, samps))
yield from map(self.do_batch, self.chunkify(res))
def new(self, dataset=None, cls=None, **kwargs):
if dataset is None: dataset = self.dataset
if cls is None: cls = type(self)
cur_kwargs = dict(dataset=dataset, num_workers=self.fake_l.num_workers, pin_memory=self.pin_memory, timeout=self.timeout,
bs=self.bs, shuffle=self.shuffle, drop_last=self.drop_last, indexed=self.indexed, device=self.device)
for n in self._methods:
o = getattr(self, n)
if not isinstance(o, MethodType): cur_kwargs[n] = o
return cls(**merge(cur_kwargs, kwargs))
@property
def prebatched(self): return self.bs is None
def do_item(self, s):
try: return self.after_item(self.create_item(s))
except SkipItemException: return None
def chunkify(self, b): return b if self.prebatched else chunked(b, self.bs, self.drop_last)
def shuffle_fn(self, idxs): return self.rng.sample(idxs, len(idxs))
def randomize(self): self.rng = random.Random(self.rng.randint(0,2**32-1))
def retain(self, res, b): return retain_types(res, b[0] if is_listy(b) else b)
def create_item(self, s):
if self.indexed: return self.dataset[s or 0]
elif s is None: return next(self.it)
else: raise IndexError("Cannot index an iterable dataset numerically - must use `None`.")
def create_batch(self, b):
try: return (fa_collate,fa_convert)[self.prebatched](b)
except Exception as e:
if not self.prebatched: collate_error(e,b)
raise
def do_batch(self, b): return self.retain(self.create_batch(self.before_batch(b)), b)
def to(self, device): self.device = device
def one_batch(self):
if self.n is not None and len(self)==0: raise ValueError(f'This DataLoader does not contain any batches')
with self.fake_l.no_multiproc(): res = first(self)
if hasattr(self, 'it'): delattr(self, 'it')
return res
#|export
add_docs(DataLoader, "API compatible with PyTorch DataLoader, with a lot more callbacks and flexibility",
get_idxs = "Return a list of indices to reference the dataset. Calls `shuffle_fn` internally if `shuffle=True`.",
sample = "Same as `get_idxs` but returns a generator of indices to reference the dataset.",
create_batches = "Takes output of `sample` as input, and returns batches of data. Does not apply `after_batch`.",
new = "Create a new `DataLoader` with given arguments keeping remaining arguments same as original `DataLoader`.",
prebatched = "Check if `bs` is None.",
do_item = "Combines `after_item` and `create_item` to get an item from dataset by providing index as input.",
chunkify = "Used by `create_batches` to turn generator of items (`b`) into batches.",
shuffle_fn = "Returns a random permutation of `idxs`.",
randomize = "Set's `DataLoader` random number generator state.",
retain = "Cast each item of `res` to type of matching item in `b` if its a superclass.",
create_item = "Subset of the dataset containing the index values of sample if exists, else next iterator.",
create_batch = "Collate a list of items into a batch.",
do_batch = "Combines `create_batch` and `before_batch` to get a batch of items. Input is a list of items to collate.",
to = "Sets `self.device=device`.",
one_batch = "Return one batch from `DataLoader`.",
wif = "See pytorch `worker_init_fn` for details.",
before_iter = "Called before `DataLoader` starts to read/iterate over the dataset.",
after_item = "Takes output of `create_item` as input and applies this function on it.",
before_batch = "It is called before collating a list of items into a batch. Input is a list of items.",
after_batch = "After collating mini-batch of items, the mini-batch is passed through this function.",
after_iter = "Called after `DataLoader` has fully read/iterated over the dataset.")
class RandDL(DataLoader):
def create_item(self, s):
r = random.random()
return r if r<0.95 else stop()
L(RandDL())
L(RandDL(bs=4, drop_last=True)).map(len)
dl = RandDL(bs=4, num_workers=4, drop_last=True)
L(dl).map(len)
test_num_workers = 0 if sys.platform == "win32" else 4
test_eq(dl.fake_l.num_workers, test_num_workers)
with dl.fake_l.no_multiproc():
test_eq(dl.fake_l.num_workers, 0)
L(dl).map(len)
test_eq(dl.fake_l.num_workers, test_num_workers)
def _rand_item(s):
r = random.random()
return r if r<0.95 else stop()
L(DataLoader(create_item=_rand_item))
ds1 = DataLoader(letters)
test_eq(L(ds1), letters)
test_eq(len(ds1), 26)
test_shuffled(L(DataLoader(letters, shuffle=True)), letters)
ds1 = DataLoader(letters, indexed=False)
test_eq(L(ds1), letters)
test_eq(len(ds1), 26)
t2 = L(tensor([0,1,2]),tensor([3,4,5]))
ds2 = DataLoader(t2)
test_eq_type(L(ds2), t2)
t3 = L(array([0,1,2], dtype=np.int64),array([3,4,5], dtype=np.int64))
ds3 = DataLoader(t3)
test_eq_type(L(ds3), t3.map(tensor))
ds4 = DataLoader(t3, create_batch=noop, after_iter=lambda: setattr(t3, 'f', 1))
test_eq_type(L(ds4), t3)
test_eq(t3.f, 1)
def twoepochs(d): return ' '.join(''.join(list(o)) for _ in range(2) for o in d)
ds1 = DataLoader(letters, bs=4, drop_last=True, num_workers=0)
test_eq(twoepochs(ds1), 'abcd efgh ijkl mnop qrst uvwx abcd efgh ijkl mnop qrst uvwx')
ds1 = DataLoader(letters,4,num_workers=2)
test_eq(twoepochs(ds1), 'abcd efgh ijkl mnop qrst uvwx yz abcd efgh ijkl mnop qrst uvwx yz')
ds1 = DataLoader(range(12), bs=4, num_workers=3)
test_eq_type(L(ds1), L(tensor([0,1,2,3]),tensor([4,5,6,7]),tensor([8,9,10,11])))
ds1 = DataLoader([str(i) for i in range(11)], bs=4, after_iter=lambda: setattr(t3, 'f', 2))
test_eq_type(L(ds1), L(['0','1','2','3'],['4','5','6','7'],['8','9','10']))
test_eq(t3.f, 2)
it = iter(DataLoader(map(noop,range(20)), bs=4, num_workers=1))
test_eq_type([next(it) for _ in range(3)], [tensor([0,1,2,3]),tensor([4,5,6,7]),tensor([8,9,10,11])])
class DummyIterableDataset(IterableDataset):
def __iter__(self):
yield from range(11)
ds1 = DataLoader(DummyIterableDataset(), bs=4)
# Check it yields fine, and check we can do multiple passes
for i in range(3):
test_eq_type(L(ds1), L(tensor([0,1,2,3]),tensor([4,5,6,7]),tensor([8,9,10])))
# Check `drop_last` works fine (with multiple passes, since this will prematurely terminate the iterator)
ds1 = DataLoader(DummyIterableDataset(), bs=4, drop_last=True)
for i in range(3):
test_eq_type(L(ds1), L(tensor([0,1,2,3]),tensor([4,5,6,7])))
class SleepyDL(list):
def __getitem__(self,i):
time.sleep(random.random()/50)
return super().__getitem__(i)
t = SleepyDL(letters)
%time test_eq(DataLoader(t, num_workers=0), letters)
%time test_eq(DataLoader(t, num_workers=2), letters)
%time test_eq(DataLoader(t, num_workers=4), letters)
dl = DataLoader(t, shuffle=True, num_workers=1)
test_shuffled(L(dl), letters)
test_shuffled(L(dl), L(dl))
L(dl)
class SleepyQueue():
"Simulate a queue with varying latency"
def __init__(self, q): self.q=q
def __iter__(self):
while True:
time.sleep(random.random()/100)
try: yield self.q.get_nowait()
except queues.Empty: return
q = Queue()
for o in range(30): q.put(o)
it = SleepyQueue(q)
if not (sys.platform == "win32" and IN_NOTEBOOK):
%time test_shuffled(L(DataLoader(it, num_workers=4)), L(range(30)))
class A(TensorBase): pass
for nw in (0,2):
t = A(tensor([1,2]))
dl = DataLoader([t,t,t,t,t,t,t,t], bs=4, num_workers=nw)
b = first(dl)
test_eq(type(b), A)
t = (A(tensor([1,2])),)
dl = DataLoader([t,t,t,t,t,t,t,t], bs=4, num_workers=nw)
b = first(dl)
test_eq(type(b[0]), A)
list(DataLoader(list(range(50)),bs=32,shuffle=True,num_workers=3))
class A(TensorBase): pass
t = A(tensor(1,2))
tdl = DataLoader([t,t,t,t,t,t,t,t], bs=4, num_workers=2, after_batch=to_device)
b = first(tdl)
test_eq(type(b), A)
# Unknown attributes are delegated to `dataset`
test_eq(tdl.pop(), tensor(1,2))
class AdamantDL(DataLoader):
def get_idxs(self):
r=random.randint(0,self.n-1)
return [r] * self.n
test_eq(torch.cat(tuple(AdamantDL((list(range(50))),bs=16,num_workers=4))).unique().numel(),1)
#|hide
from nbdev.export import notebook2script
notebook2script()
# from subprocess import Popen, PIPE
# # test num_workers > 0 in scripts works when python process start method is spawn
# process = Popen(["python", "dltest.py"], stdout=PIPE)
# _, err = process.communicate(timeout=15)
# exit_code = process.wait()
# test_eq(exit_code, 0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction
Step2: Dates and times in pandas
Step3: Like with datetime.datetime objects, there are several useful attributes available on the Timestamp. For example, we can get the month (experiment with tab completion!)
Step4: There is also a Timedelta type, which can e.g. be used to add intervals of time
Step5: Parsing datetime strings
Step6: A detailed overview of how to specify the format string, see the table in the python documentation
Step7: The to_datetime function can also be used to convert a full series of strings
Step8: Notice the data type of this series has changed
Step9: To quickly construct some regular time series data, the pd.date_range function comes in handy
Step10: Time series data
Step11: We already know how to parse a date column with Pandas
Step12: With set_index('datetime'), we set the column with datetime values as the index, which can be done by both Series and DataFrame.
Step13: The steps above are provided as built-in functionality of read_csv
Step14: <div class="alert alert-info">
Step15: Similar to a Series with datetime data, there are some attributes of the timestamp values available
Step16: The plot method will also adapt its labels (when you zoom in, you can see the different levels of detail of the datetime labels)
Step17: We have too much data to sensibly plot on one figure. Let's see how we can easily select part of the data or aggregate the data to other time resolutions in the next sections.
Step18: But, for convenience, indexing a time series also works with strings
Step19: A nice feature is "partial string" indexing, where we can do implicit slicing by providing a partial datetime string.
Step20: Or all data of January up to March 2012
Step21: Exercises
Step22: <div class="alert alert-success">
Step23: <div class="alert alert-success">
Step24: <div class="alert alert-success">
Step25: <div class="alert alert-success">
Step26: The power of pandas
Step27: Other mathematical methods can also be specified
Step28: <div class="alert alert-info">
Step29: <div class="alert alert-success">
Step30: <div class="alert alert-success">
Step31: <div class="alert alert-success">
Step32: <div class="alert alert-success">
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import datetime
dt = datetime.datetime(year=2016, month=12, day=19, hour=13, minute=30)
dt
print(dt) # .day,...
print(dt.strftime("%d %B %Y"))
ts = pd.Timestamp('2016-12-19')
ts
ts.month
ts + pd.Timedelta('5 days')
pd.to_datetime("2016-12-09")
pd.to_datetime("09/12/2016")
pd.to_datetime("09/12/2016", format="%d/%m/%Y")
s = pd.Series(['2016-12-09 10:00:00', '2016-12-09 11:00:00', '2016-12-09 12:00:00'])
s
ts = pd.to_datetime(s)
ts
ts.dt.hour
ts.dt.dayofweek
pd.Series(pd.date_range(start="2016-01-01", periods=10, freq='3H'))
data = pd.read_csv("data/vmm_flowdata.csv")
data.head()
data['Time'] = pd.to_datetime(data['Time'])
data = data.set_index("Time")
data
data = pd.read_csv("data/vmm_flowdata.csv", index_col=0, parse_dates=True)
data.index
data.index.day
data.index.dayofyear
data.index.year
%matplotlib widget
data.plot()
# switching back to static inline plots (the default)
%matplotlib inline
data[pd.Timestamp("2012-01-01 09:00"):pd.Timestamp("2012-01-01 19:00")]
data["2012-01-01 09:00":"2012-01-01 19:00"]
data['2013':]
data['2012-01':'2012-03']
data = pd.read_csv("data/vmm_flowdata.csv", index_col=0, parse_dates=True)
data['2012':]
data[data.index.month == 1]
data[data.index.month.isin([4, 5, 6])]
data[(data.index.hour > 8) & (data.index.hour < 20)]
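# A related way to pick a time-of-day window (note the boundaries differ slightly
# from the hour-based mask above, since between_time includes its endpoints):
data.between_time('08:00', '20:00')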
data.resample('D').mean().head()
data.resample('D').max().head()
data.resample('M').mean().plot() # 10D
data.resample('M').std().plot() # 'A'
subset = data['2011':'2012']['L06_347']
subset.resample('M').agg(['mean', 'median']).plot()
daily = data['LS06_348'].resample('D').mean() # daily averages calculated
daily.resample('M').agg(['min', 'max']).plot() # monthly minimum and maximum values of these daily averages
data['2013':'2013'].mean().plot(kind='barh')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The case of $a>c$
Step2: The case of $a=c$ and $b>d$
Step3: $a=c$ and $b < d$
Step4: $a < c$
Step5: We see in the above case that the population seems to stabilise at a mixed strategy. This leads to the general definition of the fitness of a mixed strategy
Step6: Looking at $x=(.4, .6)$ (which is the only symmetric nash equilibrium), we have
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import nashpy as nash
import matplotlib.pyplot as plt
A = np.array([[4, 3], [2, 1]])
game = nash.Game(A)
timepoints = np.linspace(0, 10, 1000)
epsilon = 10 ** -1
xs = game.replicator_dynamics(
y0=[1 - epsilon, epsilon],
timepoints=timepoints,
)
plt.plot(xs);
A = np.array([[4, 3], [4, 1]])
game = nash.Game(A)
xs = game.replicator_dynamics(
y0=[1 - epsilon, epsilon],
timepoints=timepoints,
)
plt.plot(xs);
A = np.array([[4, 3], [4, 5]])
game = nash.Game(A)
xs = game.replicator_dynamics(
y0=[1 - epsilon, epsilon],
timepoints=timepoints,
)
plt.plot(xs);
A = np.array([[1, 3], [4, 1]])
game = nash.Game(A)
xs = game.replicator_dynamics(
y0=[1 - epsilon, epsilon],
timepoints=timepoints,
)
plt.plot(xs);
import nashpy as nash
game = nash.Game(A, A.transpose())
list(game.support_enumeration())
import sympy as sym
sym.init_printing()
A = sym.Matrix(A)
y_1, y_2 = sym.symbols("y_1, y_2")
y = sym.Matrix([y_1, y_2])
A, y
rhs = sym.expand((y.transpose() * A * y)[0].subs({y_2: 1 - y_1}))
rhs
lhs = sym.expand((sym.Matrix([[.4, .6]]) * A * y)[0].subs({y_2: 1-y_1}))
lhs
sym.factor(lhs - rhs)
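# Numerical cross-check of the factorisation above (illustrative): the difference works out
# to 5*(y_1 - 0.4)**2, so x = (.4, .6) does at least as well against any y as y does against itself.
A_num = np.array([[1, 3], [4, 1]])
x_star = np.array([0.4, 0.6])
for y1_value in np.linspace(0, 1, 11):
    y_num = np.array([y1_value, 1 - y1_value])
    assert x_star @ A_num @ y_num >= y_num @ A_num @ y_num - 1e-12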
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define our Markov chain functions. First we create the dicts. This first attempt only takes triplets of words a b c and adds {'a b': [c]} to the dictionary.
Step2: Load the books and build the dictionaries, then run some simple tests as a proof of principle.
|
<ASSISTANT_TASK:>
Python Code:
import pensieve as pens
import textacy
from collections import defaultdict
from random import random
def make_markov_chain(docs):
my_dict = defaultdict(list)
inverse_dict = defaultdict(list)
for doc in docs:
print("Reading ",doc)
d = pens.Doc(doc)
for p in d.paragraphs:
for sent in p.doc.sents:
#print(sent.text)
bow = textacy.extract.words(sent)
for i_word, word in enumerate(bow):
if i_word < 3:
continue
key = sent[i_word-2].text+' '+sent[i_word-1].text
value = sent[i_word].text
my_dict[key].append(value)
inverse_dict[value].append(key)
return my_dict, inverse_dict
def sample_from_chain(mv_dict, key):
    # Pick one of the recorded continuations for this two-word key, uniformly at random.
    options = mv_dict[key]
    x = int(random() * len(options))
    return options[x]
def make_chain(mkv_chain, key):
counter = 0
chain = key
while key in mkv_chain:
#if counter > 5:
# return chain
chain+=' '+sample_from_chain(mkv_chain,key)
key = chain.split()[-2]+' '+chain.split()[-1]
counter +=1
return chain
all_books = ['../../clusterpot/book1.txt',
'../../clusterpot/book2.txt',
'../../clusterpot/book3.txt',
'../../clusterpot/book4.txt',
'../../clusterpot/book5.txt',
'../../clusterpot/book6.txt',
'../../clusterpot/book7.txt']
mkv_chain, inv_chain = make_markov_chain(all_books)
#print(mkv_chain)
for i in range(20):
print('\n',make_chain(mkv_chain,'He said'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Query by Committee
Step2: Stochastic Gradient Descent
Step3: Random selection of data points at each iteration.
Step4: SVM with Random Sampling
Step5: Contextual Bandits
Step6: Each cluster has a context vector containing 4 pieces of information
Step7: We'll use Thompson Sampling with linear payoff and with Gaussian prior and likelihood. The algorithm is described in <a href="http
Step8: Initially, we choose 100 random points to sample.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import pickle
import seaborn as sns
from pandas import DataFrame, Index
from sklearn import metrics
from sklearn.linear_model import SGDClassifier
from sklearn.svm import SVC
from sklearn.kernel_approximation import RBFSampler, Nystroem
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import MiniBatchKMeans
from sklearn.utils import shuffle
from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.cross_validation import KFold
from scipy.spatial.distance import cosine
from IPython.core.display import HTML
from mclearn import *
%matplotlib inline
sns.set_palette("husl", 7)
HTML(open("styles/stylesheet.css", "r").read())
# read in the data
sdss = pd.io.parsers.read_csv("data/sdss_dr7_photometry.csv.gz", compression="gzip", index_col=["ra", "dec"])
# save the names of the 11 feature vectors and the target column
feature_names = ["psfMag_u", "psfMag_g", "psfMag_r", "psfMag_i", "psfMag_z",
"petroMag_u", "petroMag_g", "petroMag_r", "petroMag_i", "petroMag_z", "petroRad_r"]
target_name = "class"
X_train, X_test, y_train, y_test = train_test_split(np.array(sdss[feature_names]), np.array(sdss['class']), train_size=100000, test_size=30000)
# shuffle the data
X_train, y_train = shuffle(X_train, y_train)
X_test, y_test = shuffle(X_test, y_test)
accuracies = []
predictions = [[] for i in range(10)]
forests = [None] * 11
# initially, pick 100 random points to query
X_train_cur, y_train_cur = X_train[:100], y_train[:100]
X_train_pool, y_train_pool = X_train[100:], y_train[100:]
# find the accuracy rate, given the current training example
forests[-1] = RandomForestClassifier(n_jobs=-1, class_weight='auto', random_state=5)
forests[-1].fit(X_train_cur, y_train_cur)
y_pred_test = forests[-1].predict(X_test)
confusion_test = metrics.confusion_matrix(y_test, y_pred_test)
accuracies.append(balanced_accuracy_expected(confusion_test))
# query by committee to pick the next point to sample
kfold = KFold(len(y_train_cur), n_folds=10, shuffle=True)
for i, (train_index, test_index) in enumerate(kfold):
forests[i] = RandomForestClassifier(n_jobs=-1, class_weight='auto', random_state=5)
forests[i].fit(X_train_cur[train_index], y_train_cur[train_index])
predictions[i] = forests[i].predict(X_train_pool)
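# Sketch (not part of the original notebook): one simple way to turn the committee votes
# into a query choice is to pick the pool point on which the ten forests disagree the most.
from collections import Counter
votes = np.array(predictions)                 # shape: (n_committee_members, n_pool_points)
vote_spread = np.array([len(Counter(col)) for col in votes.T])
next_query = int(np.argmax(vote_spread))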
# normalise features to have mean 0 and variance 1
scaler = StandardScaler()
scaler.fit(X_train) # fit only on training data
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
# approximates feature map of an RBF kernel by Monte Carlo approximation of its Fourier transform.
rbf_feature = RBFSampler(n_components=200, gamma=0.3, random_state=1)
X_train_rbf = rbf_feature.fit_transform(X_train)
X_test_rbf = rbf_feature.transform(X_test)
benchmark_sgd = SGDClassifier(loss="hinge", alpha=0.000001, penalty="l1", n_iter=10, n_jobs=-1,
class_weight='auto', fit_intercept=True, random_state=1)
benchmark_sgd.fit(X_train_rbf[:100], y_train[:100])
benchmark_y_pred = benchmark_sgd.predict(X_test_rbf)
benchmark_confusion = metrics.confusion_matrix(y_test, benchmark_y_pred)
benchmark_learning_curve = []
sample_sizes = np.concatenate((np.arange(100, 1000, 100), np.arange(1000, 10000, 1000), np.arange(10000, 100000, 10000),
np.arange(100000, 1000000, 100000), np.arange(1000000, len(X_train), 500000), [len(X_train)]))
benchmark_learning_curve.append(balanced_accuracy_expected(benchmark_confusion))
classes = np.unique(y_train)
for i, j in zip(sample_sizes[:-1], sample_sizes[1:]):
for _ in range(10):
X_train_partial, y_train_partial = shuffle(X_train_rbf[i:j], y_train[i:j])
benchmark_sgd.partial_fit(X_train_partial, y_train_partial, classes=classes)
benchmark_y_pred = benchmark_sgd.predict(X_test_rbf)
benchmark_confusion = metrics.confusion_matrix(y_test, benchmark_y_pred)
benchmark_learning_curve.append(balanced_accuracy_expected(benchmark_confusion))
# save output for later re-use
with open('results/sdss_active_learning/sgd_benchmark.pickle', 'wb') as f:
pickle.dump((benchmark_sgd, sample_sizes, benchmark_learning_curve), f, pickle.HIGHEST_PROTOCOL)
plot_learning_curve(sample_sizes, benchmark_learning_curve, "Benchmark Learning Curve (Random Selection)")
svm_random = SVC(kernel='rbf', random_state=7, cache_size=2000, class_weight='auto')
svm_random.fit(X_train[:100], y_train[:100])
svm_y_pred = svm_random.predict(X_test)
svm_confusion = metrics.confusion_matrix(y_test, svm_y_pred)
svm_learning_curve = []
sample_sizes = np.concatenate((np.arange(200, 1000, 100), np.arange(1000, 20000, 1000)))
svm_learning_curve.append(balanced_accuracy_expected(svm_confusion))
previous_h = svm_random.predict(X_train)
rewards = []
for i in sample_sizes:
svm_random.fit(X_train[:i], y_train[:i])
svm_y_pred = svm_random.predict(X_test)
svm_confusion = metrics.confusion_matrix(y_test, svm_y_pred)
svm_learning_curve.append(balanced_accuracy_expected(svm_confusion))
current_h = svm_random.predict(X_train)
reward = 0
for i, j in zip(current_h, previous_h):
reward += 1 if i != j else 0
reward = reward / len(current_h)
previous_h = current_h
rewards.append(reward)
# save output for later re-use
with open('results/sdss_active_learning/sgd_svm_random.pickle', 'wb') as f:
pickle.dump((sample_sizes, svm_learning_curve, rewards), f, pickle.HIGHEST_PROTOCOL)
log_rewards = np.log(rewards)
beta, intercept = np.polyfit(sample_sizes, log_rewards, 1)
alpha = np.exp(intercept)
plt.plot(sample_sizes, rewards)
plt.plot(sample_sizes, alpha * np.exp(beta * sample_sizes))
plot_learning_curve(sample_sizes, svm_learning_curve, "SVM Learning Curve (Random Selection)")
n_clusters = 100
kmeans = MiniBatchKMeans(n_clusters=n_clusters, init_size=100*n_clusters, random_state=2)
X_train_transformed = kmeans.fit_transform(X_train)
unlabelled_points = set(range(0, len(X_train)))
empty_clusters = set()
cluster_sizes = [len(np.flatnonzero(kmeans.labels_ == i)) for i in range(n_clusters)]
cluster_points = [list(np.flatnonzero(kmeans.labels_ == i)) for i in range(n_clusters)]
no_labelled = [0 for i in range(n_clusters)]
prop_labelled = [0 for i in range(n_clusters)]
d_means = []
d_var = []
for i in range(n_clusters):
distance, distance_squared, count = 0, 0, 0
for j, p1 in enumerate(cluster_points[i]):
for p2 in cluster_points[i][j+1:]:
d = np.fabs(X_train_transformed[p1][i] - X_train_transformed[p2][i])
distance += d
distance_squared += d**2
count += 1
if cluster_sizes[i] > 1:
d_means.append(distance / count)
d_var.append((distance_squared / count) - (distance / count)**2)
else:
d_means.append(0)
d_var.append(0)
context = np.array([list(x)for x in zip(d_means, d_var, cluster_sizes, prop_labelled)])
context_size = 4
B = np.eye(context_size)
mu = np.array([0] * context_size)
f = np.array([0] * context_size)
v_squared = 0.25
active_sgd = SVC(kernel='rbf', random_state=7, cache_size=2000, class_weight='auto')
#active_sgd = SGDClassifier(loss="hinge", alpha=0.000001, penalty="l1", n_iter=10, n_jobs=-1,
# class_weight='auto', fit_intercept=True, random_state=1)
X_train_cur, y_train_cur = X_train[:100], y_train[:100]
active_sgd.fit(X_train_cur, y_train_cur)
# update context
for i in np.arange(0, 100):
this_cluster = kmeans.labels_[i]
cluster_points[this_cluster].remove(i)
unlabelled_points.remove(i)
if not cluster_points[this_cluster]:
empty_clusters.add(this_cluster)
no_labelled[this_cluster] += 1
context[this_cluster][3] = no_labelled[this_cluster] / cluster_sizes[this_cluster]
# initial prediction
active_y_pred = active_sgd.predict(X_test)
active_confusion = metrics.confusion_matrix(y_test, active_y_pred)
active_learning_curve = []
active_learning_curve.append(balanced_accuracy_expected(active_confusion))
classes = np.unique(y_train)
# compute the current hypothesis
previous_h = active_sgd.predict(X_train)
active_steps = [100]
no_choices = 1
rewards = []
for i in range(2000 // no_choices):
mu_sample = np.random.multivariate_normal(mu, v_squared * np.linalg.inv(B))
reward_sample = [np.dot(c, mu_sample) for c in context]
chosen_arm = np.argmax(reward_sample)
while chosen_arm in empty_clusters:
reward_sample[chosen_arm] = float('-inf')
chosen_arm = np.argmax(reward_sample)
# select a random point in the cluster
query = np.random.choice(cluster_points[chosen_arm], min(len(cluster_points[chosen_arm]), no_choices), replace=False)
# update context
for q in query:
cluster_points[chosen_arm].remove(q)
unlabelled_points.remove(q)
if not cluster_points[chosen_arm]:
empty_clusters.add(chosen_arm)
no_labelled[chosen_arm] += len(query)
context[chosen_arm][3] = no_labelled[chosen_arm] / cluster_sizes[chosen_arm]
active_steps.append(active_steps[-1] + len(query))
# run stochastic gradient descent
#active_sgd.partial_fit(X_train_rbf[query], y_train[query], classes=classes)
X_train_cur = np.vstack((X_train_cur, X_train[query]))
y_train_cur = np.concatenate((y_train_cur, y_train[query]))
active_sgd = SVC(kernel='rbf', random_state=7, cache_size=2000, class_weight='auto')
active_sgd.fit(X_train_cur, y_train_cur)
active_y_pred = active_sgd.predict(X_test)
active_confusion = metrics.confusion_matrix(y_test, active_y_pred)
active_learning_curve.append(balanced_accuracy_expected(active_confusion))
# compute the reward from choosing such arm
current_h = active_sgd.predict(X_train)
reward = 0
for i, j in zip(current_h, previous_h):
reward += 1 if i != j else 0
reward = reward / len(current_h)
reward = reward / (alpha * np.exp(beta * len(y_train_cur)))
previous_h = current_h
rewards.append(reward)
# compute posterior distribution
B = B + np.outer(context[chosen_arm], context[chosen_arm])
f = f + reward * context[chosen_arm]
mu = np.dot(np.linalg.inv(B), f)
plot_learning_curve(active_steps, active_learning_curve, "SVM Learning Curve (Active Learning)")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If we use it with a string, it loops over its characters.
Step2: If we use it with a dictionary, it loops over its keys
Step3: So there are many types of objects which can be used with a for loop. These are called iterable objects.
Step4: Iteration Protocol
Step6: Having seen the mechanics behind the iterator protocol, it is easy to add iterator behavior to your classes. Define an __iter__() method which returns an object with a __next__() method. If the class defines __next__(), then __iter__() can just return self
Step7: Generators
Step8: Anything that can be done with generators can also be done with class-based iterators as described in the previous section. What makes generators so compact is that the __iter__() and __next__() methods are created automatically.
Step9: Generator Expressions
Step10: Generator expressions are more compact but less versatile than full generator definitions and tend to be more memory friendly than equivalent list comprehensions.
Step11: Note that generators provide another way to deal with infinity, for example
Step12: Use of Generators
Step13: This was lengthy. Now lets do the same using a generator function.
Step14: Since generators keep track of details automatically, it was concise and much cleaner in implementation.
Step15: 4. Pipelining Generators
Step16: Using Itertools
|
<ASSISTANT_TASK:>
Python Code:
for i in [1,2,3]:
print(i)
for ch in 'test':
print(ch)
for k in {1:'test1',2:'test'}:
print(k)
",".join(["a","b","c"])
",".join(('this','is','a','test'))
",".join({'key1':'value','key2':'value2'})
x = iter([1,2,3])
print(x)
print(next(x))
print(next(x))
print(next(x))
print(next(x)) # <-- will create an error
class Reverse:
    """Iterator for looping over a sequence backwards."""
def __init__(self, data):
self.data = data
self.index = len(data)
def __iter__(self):
return self
def __next__(self):
if self.index == 0:
raise StopIteration
self.index = self.index - 1
return self.data[self.index]
rev = Reverse('spam')
for i in rev:
print(i)
def reverse(data):
for index in range(len(data)-1, -1, -1):
yield data[index]
for ch in reverse('shallow'):
print(ch)
def samplegen():
print("begin")
for i in range(3):
print("before yield", i)
yield i
print("after yield", i)
print("end")
f = samplegen()
print(next(f))
print(next(f))
print(next(f))
print(next(f))
a = (x * x for x in range(10))
sum(a)
xvec = [5,16,7]
yvec = [4,12,18]
sum(x * y for x,y in zip(xvec,yvec))
data = 'golf'
list(data[i] for i in range(len(data)-1, -1, -1))
# unique_words = set(word for line in page for word in line.split())
# valedictorian = max((student.gpa, student.name) for student in graduates)
from time import gmtime, strftime
def myGen():
while True:
yield strftime("%a, %d %b %Y %H:%M:%S +0000", gmtime())
myGeneratorInstance = myGen()
next(myGeneratorInstance)
next(myGeneratorInstance)
# Iterator Class
class PowTwo:
def __init__(self, max = 0):
self.max = max
def __iter__(self):
self.n = 0
return self
def __next__(self):
if self.n > self.max:
raise StopIteration
result = 2 ** self.n
self.n += 1
return result
def PowTwoGen(max = 0):
n = 0
while n < max:
yield 2 ** n
n += 1
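# Quick illustrative check of the two versions above (note the boundary difference:
# the class stops once n exceeds max, the generator stops once n reaches max):
print(list(PowTwo(4)))     # [1, 2, 4, 8, 16]
print(list(PowTwoGen(4)))  # [1, 2, 4, 8]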
def all_event():
n = 0
while True:
yield n
n += 2
with open('sells.log') as file:
    # assuming a whitespace-delimited log file, take its 4th column
    ip_col = (line.split()[3] for line in file)
per_hr = (int(x) for x in ip_col if x != 'N/A')
print("IPs =", sum(per_hr))
import itertools
horses = [1,2,3,4]
races = itertools.permutations(horses)
print(list(races))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Checkerboard
Step3: Use vizarray to visualize a checkerboard of size=20 with a block size of 10px.
Step4: Use vizarray to visualize a checkerboard of size=27 with a block size of 5px.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import antipackage
import github.ellisonbg.misc.vizarray as va
def checkerboard(size):
    """Return a 2d checkerboard of 0.0 and 1.0 as a NumPy array."""
# YOUR CODE HERE
raise NotImplementedError()
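    # One possible implementation (left as a comment so the exercise stays open):
    # return (np.indices((size, size)).sum(axis=0) % 2 == 0).astype(float)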
a = checkerboard(4)
assert a[0,0]==1.0
assert a.sum()==8.0
assert a.dtype==np.dtype(float)
assert np.all(a[0,0:5:2]==1.0)
assert np.all(a[1,0:5:2]==0.0)
b = checkerboard(5)
assert b[0,0]==1.0
assert b.sum()==13.0
assert np.all(b.ravel()[0:26:2]==1.0)
assert np.all(b.ravel()[1:25:2]==0.0)
# YOUR CODE HERE
raise NotImplementedError()
assert True
# YOUR CODE HERE
raise NotImplementedError()
assert True
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: According to your installation, you will see different available environments
Step2: There are 4 different configurations for the simple arm (we see 5 above, but the 'default' one actually corresponds to the 'low_dimensional' one).
Step3: Once you have choose a configuration, say the 'default' one, you can create the Environment instance
Step4: If you already now the name of the environment and the configuration you want to use (here
Step5: Each environment has an update method, we take as argument a motor command vector $m$ (here
Step6: We observe that the second value $\pi$ has been bounded to $m_maxs[1] = \pi/3$. The two last values correspond to the resulting $x, y$ end-effector postion.
Step7: The base of the arm is fixed at (0, 0) (circle). The first angle position m[0] corresponds to the angle between a horizontal line and the segment attached to the base, anticlock-wise. Each following angle position, m[1] and m[2] are measured with respect to their respective previous segment.
Step8: Exercise
|
<ASSISTANT_TASK:>
Python Code:
from explauto.environment import environments
print 'Available environments: {}'.format(environments.keys())
env_cls, env_configs, _ = environments['simple_arm']
print 'Available configurations for the simple arm environment: {}'.format(env_configs.keys())
config = env_configs['default']
print 'Default configuration for the simple arm:'
for config_key, value in config.items():
print '\t{}: {}'.format(config_key, value)
environment = env_cls(**config)
from explauto import Environment
environment = Environment.from_configuration('simple_arm', 'default')
from math import pi
m = [-pi/6., pi, pi/4.]
print environment.update(m)
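# The clipping above comes from the motor bounds in the configuration
# (illustrative check using the 'config' dict loaded a few cells earlier):
print 'motor bounds:', config['m_mins'], config['m_maxs']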
%pylab inline
ax = axes()
m = [-pi/6., pi/3., pi/4.]
environment.plot_arm(ax, m)
motor_configurations = environment.random_motors(n=10)
ax = axes()
for m in motor_configurations:
environment.plot_arm(ax, m)
%load exercise_solutions/setting_environment__high_dim_configuration.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Howto with examples on 'pickle' module ==> 28/May/2016
Step4: This is the content of nopickle.txt
Step6: ** This is the content of pickle.txt
Step8: Short notes on the above code output
|
<ASSISTANT_TASK:>
Python Code:
'''An example to store the output without "pickle".'''
testfile = 'nopickle.txt'
var1 = 1143
var2 = ["AECS", "LAYOUT", "KUNDALAHALLI"]
var3 = 58.30
var4 = ("Bangalore", 560037)
def ezhudhu():
with open(testfile, 'w+') as f:
f.write(str(var1))
f.write(str(var2))
f.write(str(var3))
f.write(str(var4))
f.close()
return None
def main():
ezhudhu()
if __name__ == '__main__':
main()
'''Let us now read the contents of 'nopickle.txt', and also check its type.
Does it retain the original type of the variables?'''
with open(testfile, 'r') as f:
print f.readline()
print(type(f.readline()))
'''An example of storing the output in a file using 'pickle'.'''
import pickle
testfile = 'pickle.txt'
var1 = 1143
var2 = ["AECS", "LAYOUT", "KUNDALAHALLI"]
var3 = 58.30
var4 = ("Bangalore", 560037)
def baree():
with open(testfile, 'w+') as f:
pickle.dump(var1, f)
pickle.dump(var2, f)
pickle.dump(var3, f)
pickle.dump(var4, f)
f.close()
return None
def main():
baree()
if __name__ == '__main__':
main()
'''Let us now read the contents of 'pickle.txt'.
Do the variables retain their types?'''
def pickout(fileobj):
print "what's this file object : "
print fileobj
print "it's type : "
print type(fileobj)
print "==" * 5 + "==" * 5
while True:
pickline = pickle.load(fileobj)
yield pickline
with open('pickle.txt', 'rb') as f:
for info in pickout(f):
print info,
print type(info)
'''Example of how to read the contents of 'pickle.txt'.
The end-of-file condition is handled.'''
def pickout(fileobj):
print "what's this file object : "
print fileobj
print "it's type : "
print type(fileobj)
print "==" * 5 + "==" * 5
try:
while True:
pickline = pickle.load(fileobj)
yield pickline
except EOFError:
pass
with open('pickle.txt', 'rb') as f:
for info in pickout(f):
print info,
print type(info)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The variables defined above can be modified for any other species, resulting in new results for the following commands.
Step2: Sequences of each chromosome/contig
Step3: For each contig/chromosome download the corresponding FASTA file from NCBI
Step4: Concatenate all contigs/chromosomes into single files
Step5: Remove all the other files (with single chromosome/contig)
Step6: Creation of an index file for GEM mapper
Step7: The path to the index file will be
Step8: Cleanup
|
<ASSISTANT_TASK:>
Python Code:
species = 'Mus_musculus'
taxid = '10090'
assembly = 'GRCm38.p6'
genbank = 'GCF_000001635.26'
sumurl = ('ftp://ftp.ncbi.nlm.nih.gov/genomes/all/{0}/{1}/{2}/{3}/{4}_{5}/'
'{4}_{5}_assembly_report.txt').format(genbank[:3], genbank[4:7], genbank[7:10],
genbank[10:13], genbank, assembly)
crmurl = ('https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi'
'?db=nuccore&id=%s&rettype=fasta&retmode=text')
print sumurl
! wget -q $sumurl -O chromosome_list.txt
! head chromosome_list.txt
import os
dirname = 'genome'
! mkdir -p {dirname}
contig = []
for line in open('chromosome_list.txt'):
if line.startswith('#'):
continue
seq_name, seq_role, assigned_molecule, _, genbank, _, refseq, _ = line.split(None, 7)
if seq_role == 'assembled-molecule':
name = 'chr%s.fasta' % assigned_molecule
else:
name = 'chr%s_%s.fasta' % (assigned_molecule, seq_name.replace('/', '-'))
contig.append(name)
outfile = os.path.join(dirname, name)
if os.path.exists(outfile) and os.path.getsize(outfile) > 10:
continue
error_code = os.system('wget "%s" --no-check-certificate -O %s' % (crmurl % (genbank), outfile))
if error_code:
error_code = os.system('wget "%s" --no-check-certificate -O %s' % (crmurl % (refseq), outfile))
if error_code:
print genbank
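# Optional sanity check (not in the original notebook): list any contig files that
# still look missing or empty after the download loop.
failed = [name for name in contig
          if not os.path.exists(os.path.join(dirname, name))
          or os.path.getsize(os.path.join(dirname, name)) <= 10]
print 'files that may need re-downloading:', failed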
def write_to_fasta(line):
contig_file.write(line)
def write_to_fastas(line):
contig_file.write(line)
simple_file.write(line)
os.system('mkdir -p {}/{}-{}'.format(dirname, species, assembly))
contig_file = open('{0}/{1}-{2}/{1}-{2}_contigs.fa'.format(dirname, species, assembly),'w')
simple_file = open('{0}/{1}-{2}/{1}-{2}.fa'.format(dirname, species, assembly),'w')
for molecule in contig:
fh = open('{0}/{1}'.format(dirname, molecule))
oline = '>%s\n' % (molecule.replace('.fasta', ''))
_ = fh.next()
# if molecule is an assembled chromosome we write to both files, otherwise only to the *_contigs one
write = write_to_fasta if '_' in molecule else write_to_fastas
for line in fh:
write(oline)
oline = line
# last line usually empty...
if line.strip():
write(line)
contig_file.close()
simple_file.close()
! rm -f {dirname}/*.fasta
! gem-indexer -T 8 -i {dirname}/{species}-{assembly}/{species}-{assembly}_contigs.fa -o {dirname}/{species}-{assembly}/{species}-{assembly}_contigs
! gem-indexer -i {dirname}/{species}-{assembly}/{species}-{assembly}.fa \
-o {dirname}/{species}-{assembly}/{species}-{assembly} -T 8
! gem-mappability -I {dirname}/{species}-{assembly}/{species}-{assembly}.gem -l 50 \
-o {dirname}/{species}-{assembly}/{species}-{assembly}.50mer -T 8
! gem-2-wig -I {dirname}/{species}-{assembly}/{species}-{assembly}.gem \
-i {dirname}/{species}-{assembly}/{species}-{assembly}.50mer.mappability \
-o {dirname}/{species}-{assembly}/{species}-{assembly}.50mer
! wigToBigWig {dirname}/{species}-{assembly}/{species}-{assembly}.50mer.wig \
{dirname}/{species}-{assembly}/{species}-{assembly}.50mer.sizes \
{dirname}/{species}-{assembly}/{species}-{assembly}.50mer.bw
! bigWigToBedGraph {dirname}/{species}-{assembly}/{species}-{assembly}.50mer.bw \
{dirname}/{species}-{assembly}/{species}-{assembly}.50mer.bedGraph
! rm -f {dirname}/{species}-{assembly}/{species}-{assembly}.50mer.mappability
! rm -f {dirname}/{species}-{assembly}/{species}-{assembly}.50mer.wig
! rm -f {dirname}/{species}-{assembly}/{species}-{assembly}.50mer.bw
! rm -f {dirname}/{species}-{assembly}/{species}-{assembly}.50mer.sizes
! rm -f {dirname}/{species}-{assembly}/*.log
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Section 1
Step2: <ol start="2">
Step3: <font color="red">Which variables are present? What is the type of the variable spike? What is its size?</font>
Step4: The field "spikes(1).times" contains the action-potential discharge (spike) times for
Step5: <font color="red">How many spikes were there for trial 2? For trial 10?</font>
Step6: <font color="blue">There are 51 spikes for trial 2.</font>
Step7: <font color="blue">There are 85 spikes for trial 10.</font>
Step8: <ol start="6">
Step9: Watch out for the second instruction! It is used to draw
Step10: Note the use of the line command.
Step11: <font color="red"> <ol start="10">
Step12: <font color="red"> <ol start="11">
Step13: Section 2
Step14: <ol start="2">
Step15: <ol start="3">
Step16: <ol start="4">
Step17: <ol start="5">
Step18: <font color="red">Examine the contents of the variable histo. What is its size? Its minimum, maximum, and mean (see the Matlab functions min, max, mean)?</font>
Step19: <ol start="6">
Step20: <font color="red"> <ol start="10">
Step21: Section 3
Step22: <ol start="2">
Step23: <font color="red"> What are the size and contents of the vectors x and y? What is the ' operation for? </font>
Step24: <font color="blue"> The variables x and y are both vectors with 1 column and 8 rows. The ' operation transposes a row vector into a column vector, or the other way around.</font>
Step25: <font color="red"> What is the type of the variable ftheta?</font>
Step26: <font color="blue">ftheta is an inline function.</font>
Step27: <font color="red"> What are the values of theta_chap? What is the argument [1 1] for? Try to reproduce the estimation with other values for this argument; does it affect the result?</font>
Step28: <font color="red"><ol start="6">
Step29: <ol start="7">
Step30: <font color="red">What is the size of x? The size of y?</font>
Step31: <font color="blue">The variables x and y are row vectors of length 301.</font>
Step32: <font color="blue">The randn command simulates noise following a normal (Gaussian) distribution with zero mean and unit variance.</font>
Step33: <ol start="8">
Step34: <font color="red">What is the value of the function ftheta for theta=[0 1 1] and x=0?</font>
Step35: <ol start="9">
Step36: <font color="red">What are the values of theta_chap?</font>
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from pymatbridge import Octave
octave = Octave()
octave.start()
%load_ext pymatbridge
%%matlab
load('Chap17_Data')
%%matlab
whos
%%matlab
fieldnames(spike)
%%matlab
size(spike(1).times)
%%matlab
size(spike(2).times) %nb de décharges pour l'essai 2
%%matlab
size(spike(10).times) %nb de décharges pour l'essai 10
%%matlab
t1 = spike(1).times;
t2 = spike(2).times;
%%matlab
figure
hold on
%%matlab
for num_temps = 1:length(t1)
line([t1(num_temps) t1(num_temps)], [0 1])
end
%%matlab
for num_temps = 1:length(t1)
line([t1(num_temps) t1(num_temps)], [0 1])
end
xlabel('Temp (sec)');
%Idem pour l’axe des y:
ylabel('Essai #')
%Enfin, on fixe les limites de l’axe des y
ylim([0 3])
% save the result
print('figure_dispersion.png','-dpng')
%%matlab
for num_temps = 1:length(t1)
line([t1(num_temps) t1(num_temps)], [0 0.5])
end
xlabel('Temp (sec)')
%Idem pour l’axe des y:
ylabel('Essai #')
%Enfin, on fixe les limites de l’axe des y
ylim([0 3])
%%matlab
% Charger les donnees
load('Chap17_Data')
% Preparer une figure
figure
% permettre la superposition de plusieurs graphiques dans la meme figure
hold on
% Donner un label à l'axe des x
xlabel('Temp (sec)');
% Donner un label à l'axe des y
ylabel('Essai #');
% Ajuster les limites de l'axe des y
ylim([0 length(spike)]);
for num_spike = 1:length(spike) %faire une boucle pour tout les essaies
t = spike(num_spike).times; %definir la variable pour chaque essai
for num_temps=1:length(t) %faire une boucle pour tous les points temps
line([t(num_temps) t(num_temps)], [0+(num_spike-1) 1+(num_spike-1)]); %dessiner une line pour chaque point temps t1(i) avec longueur de 1
end
end
%%matlab
clear
%%matlab
load('Chap17_Data')
%%matlab
centres = [-0.95:0.1:0.95];
%%matlab
histo = zeros(1,length(centres));
%%matlab
histo = hist(spike(1).times,centres);
%%matlab
whos histo % elle est de taille 1x20
%%matlab
min(histo)
max(histo)
mean(histo)
%%matlab
bar(centres,histo);
%Ajuster les limites de l’axe des x
xlim([-1.1 1]);
xlabel('Temps (sec)'); %Donner un label à l’axe des x
ylabel('# essai');%Donner un label à l’axe des y
%%matlab
%Charger les donnees
load('Chap17_Data')
% Definir les centres des intervalles pour l'histogramme
centres = [-0.95:0.1:0.95];
% Initialiser une matrice de zéros histo dont la longueur est égale au nombre d'intervalles:
histo = zeros(length(centres),1);
% Faire une boucle à travers tous les essais et recuperer le nombre de decharges par intervalle avec la fonction histc
for jj = 1:47
histo=histo+histc(spike(jj).times,centres);
end
% Dessiner l'histograme avec la fonction bar
bar(centres,histo);
%Ajuster les limites de l'axe des x
xlim([-1.1 1]);
%Donner un label à l’axe des x
xlabel('Temps (sec)');
%Donner un label à l’axe des y
ylabel('# essai');
%%matlab
clear
%%matlab
x = [ 165 165 157 170 175 165 182 178 ]';
y = [ 47 56 49 60 82 52 78 90 ]';
%%matlab
whos
%%matlab
ftheta = inline('theta(1)+theta(2)*x','theta','x');
%%matlab
whos ftheta
%%matlab
% theta_chap = nlinfit(x, y, ftheta, [1 1] );
theta_chap = [-237.5729 1.7794];
%%matlab
figure
plot(x,y,'b.');
hold on
plot(x,ftheta(theta_chap,x),'r');
%%matlab
figure
plot(x,y,'b.');
hold on
plot(x,ftheta(theta_chap,x),'r');
ylim([40 95])
xlabel('taille')
ylabel('poids')
print('regression_notes.png','-dpng')
%%matlab
clear
x = 0:0.1:30;
y = cos(x) + randn(1,301);
%%matlab
size(x)
size(y)
%%matlab
help randn
%%matlab
figure
plot(x,y,'.')
print('donnees_cosinus.png','-dpng')
%%matlab
ftheta = inline('theta(1)+theta(2)*cos(x-theta(3))','theta','x');
%%matlab
ftheta([0 1 1],0)
%%matlab
theta_chap = nlinfit(x, y, ftheta, [0 1 1] );
%%matlab
figure
plot(x,y,'b');
hold on
plot(x,ftheta(theta_chap,x),'r');
Save this figure to a file.
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using maxlen to limit the num of items in a deque
Step2: 2. Heapq
Step3: nlargest / nsmallest wraps creation of a heap for one-time access
Step4: 3. Counter
|
<ASSISTANT_TASK:>
Python Code:
from collections import deque
dq = deque()
dq.append(1)
dq.append(2)
dq.appendleft(3)
dq
v = dq.pop()
v
dq.popleft()
dq
dq = deque(maxlen = 3)
for n in range(10):
dq.append(n)
dq
import heapq
nums = [1, 8, 2, 23, 7, -4, 18, 23, 42, 37, 2]
#heapq is created from a list
heap = list(nums)
heapq.heapify(heap)
#now the 1st element is guaranteed to be the smallest
heap
heapq.heappop(heap)
heap
heapq.heappush(heap, -10)
heap
# nlargest and nsmallest wrap a heapq to provide its results
print(heapq.nlargest(3, nums)) # Prints [42, 37, 23]
print(heapq.nsmallest(3, nums)) # Prints [-4, 1, 2]
# providing an alternate sort key to nlargest/nsmallest
portfolio = [
{'name': 'IBM', 'shares': 100, 'price': 91.1},
{'name': 'AAPL', 'shares': 50, 'price': 543.22},
{'name': 'FB', 'shares': 200, 'price': 21.09},
{'name': 'HPQ', 'shares': 35, 'price': 31.75},
{'name': 'YHOO', 'shares': 45, 'price': 16.35},
{'name': 'ACME', 'shares': 75, 'price': 115.65}
]
heapq.nsmallest(3, portfolio, key=lambda s: s['price'])
words = [
'look', 'into', 'my', 'eyes', 'look', 'into', 'my', 'eyes',
'the', 'eyes', 'the', 'eyes', 'the', 'eyes', 'not', 'around', 'the',
'eyes', "don't", 'look', 'around', 'the', 'eyes', 'look', 'into',
'my', 'eyes', "you're", 'under'
]
from collections import Counter
word_counts = Counter(words) #Works with any hashable items, not just strings!
word_counts.most_common(3)
morewords = ['why','are','you','not','looking','in','my','eyes']
for word in morewords:
word_counts[word] += 1
word_counts.most_common(3)
evenmorewords = ['seriously','look','into','them','while','i','look','at', 'you']
word_counts.update(evenmorewords)
word_counts.most_common(3)
a = Counter(words)
b = Counter(morewords)
c = Counter(evenmorewords)
# combine counters
d = b + c
d
# subtract counts
e = a-d
e
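# Extra illustration (not in the recipes above): a minimal priority queue built on
# heapq. Storing (-priority, index, item) tuples keeps the highest priority first and
# makes ties stable; the class and item names here are purely illustrative.
class PriorityQueue:
    def __init__(self):
        self._queue = []
        self._index = 0
    def push(self, item, priority):
        heapq.heappush(self._queue, (-priority, self._index, item))
        self._index += 1
    def pop(self):
        return heapq.heappop(self._queue)[-1]

pq = PriorityQueue()
pq.push('grok', 1)
pq.push('spam', 5)
pq.push('eggs', 3)
print(pq.pop())  # 'spam' comes out first because it has the highest priority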
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: The datetime module covers the vast majority of time- and date-related operations. It includes:
Step3: Date
Step4: Once you have a date object, you can read its year, month, day and other attributes individually (strftime is also a general formatting method, introduced later):
Step5: Time
Step6: datetime.datetime
Step7: From a datetime.datetime we can extract the date and the time, and likewise build one by combining a date and a time:
Step8: Arithmetic with dates and times
Step9: Since a timedelta is a duration, it can also be obtained by subtracting two points in time:
Step10: Durations also support arithmetic such as +, -, *, /, //, %, abs and so on, which we will not enumerate here.
Step11: See strftime() and strptime() Behavior for the conversion format table. The formats Python uses are consistent with the C standard (1989). Note that all formats are zero-padded; to drop the padding you can use forms such as %-m, though reportedly this does not work on Windows.
|
<ASSISTANT_TASK:>
Python Code:
from calendar import TextCalendar, HTMLCalendar
tc = TextCalendar(firstweekday=6)
tc.prmonth(2016, 3)
object
timedelta
tzinfo
timezone
time
date
datetime
pass
import time as _time
from datetime import date, time, datetime
d1 = date(2016, 3, 29)
d2 = date.today()
d3 = date.fromtimestamp(_time.time())
print(d1)
print(d2)
print(d3)
print("{}/{}/{}".format(d2.day, d2.month, d2.year))
# date.timetuple() returns the struct_time structure from the time module, which can be converted directly to a tuple
print("time.struct_time: {}".format(tuple(d2.timetuple())))
# Day of the week
print("Monday is 0: {}\nMonday is 1: {}".format(d2.weekday(), d2.isoweekday()))
t1 = time(22, 57, 6, 6)
t2 = datetime.now().time()
print(t1)
print(t2)
dt1 = datetime(2016, 3, 30, 22, 2)
dt2 = datetime.now()
dt3 = datetime.fromtimestamp(_time.time())
print(dt1)
print(dt2)
print(dt3)
dt = datetime.now()
dt = datetime.fromtimestamp(_time.time())
d = dt.date()
t = dt.time()
print("Date: {}\nTime: {}".format(d, t))
print("Datetime: {}".format(datetime.combine(date.today(), time(2,3,3))))
from datetime import timedelta
td = timedelta(weeks=1, days=2, hours=3,minutes=4, seconds=0, microseconds=0, milliseconds=0)
print("Time duration: {}".format(td))
current = datetime.now()
today = datetime.combine(date.today(), time(0,0,0))
td = current - today
print("{:.0f}s of Today".format(td.total_seconds()))
today = date.today()
lastyear = today.replace(year=today.year-1)
print(today - lastyear)
t1 = current.time()
t2 = time(0, 0, 0)
try:
print(t1 - t2)
except TypeError as err:
print(err)
print(datetime.strftime.__doc__)
print(datetime.strptime.__doc__)
fmat = "%y-%m-%d"
dt = datetime.now()
s = dt.strftime(fmat)
print(s)
print(datetime.strptime(s, fmat))
fmat = "%y/%-m/%-d"
dt = datetime.now()
dt = dt - timedelta(days=22)
print(dt.strftime(fmat))
# Of course, the same can also be done with
print("{}/{}/{}".format(dt.strftime("%y"), dt.month, dt.day))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import in Experiment object
Step2: Adjust properties of tilt event
Step3: Now, we can define a stochastic variable for the tilt rotation
Step4: This method stores the current events and properties in a "base_events" attribute which is not changed in the following experiments
Step5: Set random seed to ensure reproducibility
Step6: Define parameter distributions for stochastic properties
Step7: Let's set up a simple uncertainty estimation from scratch
Step8: As a first test, we can now extract a 1-D profile
Step9: Next step (homework for Alex)
Step10: First step
Step11: Extract information at the position of the sampling line
Step12: For comparison, see the complete model section
Step13: Homework 2 for Alex
Step14: Load history back and check possibility to adjust thicknesses
Step15: Test change of layer thicknesses
Step16: Now
|
<ASSISTANT_TASK:>
Python Code:
from IPython.core.display import HTML
css_file = 'pynoddy.css'
HTML(open(css_file, "r").read())
import sys, os
import matplotlib.pyplot as plt
# adjust some settings for matplotlib
from matplotlib import rcParams
# print rcParams
rcParams['font.size'] = 15
# determine path of repository to set paths corretly below
repo_path = os.path.realpath('../..')
import pynoddy.history
import numpy as np
%matplotlib inline
# Combined: model generation and output vis to test:
history = "simple_model.his"
output_name = "simple_out"
#
# A general note: the 'reload' statements are only important
# for development purposes (when modules were chnaged), but not
# in required for normal execution.
#
reload(pynoddy.history)
reload(pynoddy.events)
# create pynoddy object
nm = pynoddy.history.NoddyHistory()
# add stratigraphy
strati_options = {'num_layers' : 8,
'layer_names' : ['layer 1', 'layer 2', 'layer 3',
'layer 4', 'layer 5', 'layer 6',
'layer 7', 'layer 8'],
'layer_thickness' : [1500, 500, 500, 500, 500, 500, 500, 500]}
nm.add_event('stratigraphy', strati_options )
nm.write_history(history)
# Compute the model
reload(pynoddy)
pynoddy.compute_model(history, output_name)
# Plot output
import pynoddy.output
reload(pynoddy.output)
nout = pynoddy.output.NoddyOutput(output_name)
nout.plot_section('y', layer_labels = strati_options['layer_names'][::-1],
colorbar = True, title="",
savefig = False, fig_filename = "ex01_strati.eps")
reload(pynoddy.history)
reload(pynoddy.events)
import pynoddy.experiment
reload(pynoddy.experiment)
ex1 = pynoddy.experiment.Experiment(history)
ex1.plot_section()
reload(pynoddy.history)
reload(pynoddy.events)
nm = pynoddy.history.NoddyHistory()
# add stratigraphy
strati_options = {'num_layers' : 8,
'layer_names' : ['layer 1', 'layer 2', 'layer 3',
'layer 4', 'layer 5', 'layer 6',
'layer 7', 'layer 8'],
'layer_thickness' : [1500, 500, 500, 500, 500,
500, 500, 500]}
nm.add_event('stratigraphy', strati_options )
tilt_options = {'name' : 'Tilt',
'pos' : (4000, 3500, 5000),
'rotation' : 0.,
'plunge_direction' : 0,
'plunge' : 0.}
nm.add_event('tilt', tilt_options)
nm.write_history(history)
# Compute the model
reload(pynoddy)
pynoddy.compute_model(history, output_name)
# Plot output
import pynoddy.output
reload(pynoddy.output)
nout = pynoddy.output.NoddyOutput(output_name)
nout.plot_section('x', layer_labels = strati_options['layer_names'][::-1],
colorbar = True, title="",
savefig = False, fig_filename = "ex01_strati.eps")
ex1 = pynoddy.experiment.Experiment(history)
ex1.plot_section()
ex1.events[2].properties['Rotation'] = 20.
ex1.plot_section('y')
ex1.freeze()
ex1.base_events[2].properties
ex1.set_random_seed(12345)
param_stats = [{'event' : 2,
'parameter': 'Rotation',
'stdev': 10.0,
'type': 'normal'}
]
ex1.set_parameter_statistics(param_stats)
ex1.random_draw()
ex1.events[2].properties
ex1.plot_section(colorbar=True, colorbar_orientation='horizontal')
# extract only layer 4:
l4 = ex1.get_section('y').block[:,:,:] == 4
plt.imshow(l4[:,0,:].T, origin = 'lower left', cmap = 'gray_r')
# change resolution to increase simulation speed:
resolution = 100
ex1.change_cube_size(resolution)
# initialise output variable
tmp = ex1.get_section('y')
prob_4 = np.zeros_like(tmp.block[:,:,:])
n_draws = 1000
# now: generate random models and extract blocks of layer '4'
for i in range(n_draws):
ex1.random_draw()
tmp = ex1.get_section('y', resolution = resolution)
prob_4 += (tmp.block[:,:,:] == 4)
# Normalise
prob_4 = prob_4 / float(n_draws)
fig = plt.figure(figsize = (12,8))
ax = fig.add_subplot(111)
ax.imshow(prob_4.transpose()[:,0,:],
origin = 'lower left',
interpolation = 'none')
plt.title("Estimated probability of unit 4")
plt.xlabel("x (E-W)")
plt.ylabel("z")
plt.plot(prob_4[20,:,:][0], np.arange(0,50,1))
reload(pynoddy.experiment)
ex1 = pynoddy.experiment.Experiment(history)
ex1.add_sampling_line(2500, 3500)
plt.plot(ex1.get_model_lines(), np.arange(0,5000,1))
ex1.plot_section()
reload(pynoddy.history)
reload(pynoddy.events)
nm = pynoddy.history.NoddyHistory()
# add stratigraphy
strati_options = {'num_layers' : 8,
'layer_names' : ['layer 1', 'layer 2', 'layer 3',
'layer 4', 'layer 5', 'layer 6',
'layer 7', 'layer 8'],
'layer_thickness' : [1500, 500, 500, 500, 500,
500, 500, 500]}
nm.add_event('stratigraphy', strati_options )
tilt_options = {'name' : 'Tilt',
'pos' : (4000, 3500, 5000),
'rotation' : 0.,
'plunge_direction' : 0,
'plunge' : 0.}
nm.add_event('tilt', tilt_options)
nm.write_history(history)
# Compute the model
reload(pynoddy)
pynoddy.compute_model(history, output_name)
# Plot output
import pynoddy.output
reload(pynoddy.output)
nout = pynoddy.output.NoddyOutput(output_name)
nout.plot_section('x', layer_labels = strati_options['layer_names'][::-1],
colorbar = True, title="",
savefig = False, fig_filename = "ex01_strati.eps")
reload(pynoddy.history)
reload(pynoddy.events)
history = 'simple_model.his'
nm = pynoddy.history.NoddyHistory(history)
history = "test.his"
nm.write_history(history)
# Compute the model
reload(pynoddy)
pynoddy.compute_model(history, output_name)
strati.layers[-1].property_lines['Remanent Magnetization']
nout = pynoddy.output.NoddyOutput(output_name)
nout.plot_section('x', layer_labels = strati_options['layer_names'][::-1],
colorbar = True, title="",
savefig = False, fig_filename = "ex01_strati.eps")
reload(pynoddy.history)
reload(pynoddy.events)
history = 'simple_model.his'
nm = pynoddy.history.NoddyHistory(history)
nm.events[1].layers[5].properties['Height'] = 4200.
nm.write_history(history)
# Compute the model
reload(pynoddy)
pynoddy.compute_model(history, output_name)
nout = pynoddy.output.NoddyOutput(output_name)
nout.plot_section('x', layer_labels = strati_options['layer_names'][::-1],
colorbar = True, title="",
savefig = False, fig_filename = "ex01_strati.eps")
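# A possible extension (sketch only): the per-voxel probability field prob_4 computed
# earlier can be turned into a binary information entropy map for unit 4; higher values
# mark the fuzziest part of the layer boundary. Uses numpy and matplotlib only; the
# clipping constant is just there to avoid log(0).
p = np.clip(prob_4[:, 0, :], 1e-6, 1 - 1e-6)
entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111)
im = ax.imshow(entropy.transpose(), origin='lower', interpolation='none')
plt.colorbar(im)
plt.title("Binary information entropy of unit 4")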
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import all data (data was previously cleaning in other notebooks)
Step2: Now I have one dict with metro stations and one with bike stations
Step3: step 1) create list of bike stations along each line
Step5: Iterate through each metro line, calculating the distance from each station of that line to each bikeshare station.
Step6: I now have a dictionary of a list of bike stations (values) within 0.25 miles of each metro line (key).
Step7: step 2)
Step8: About 10% to 15% of bike stations are within 0.25 miles of each metro line
|
<ASSISTANT_TASK:>
Python Code:
import pickle
from geopy.distance import vincenty
station_data = pickle.load( open( "station_data.p", "rb" ) )
bike_location = pickle.load( open( "bike_location.p", "rb" ) )
print(station_data['RD']['Bethesda'])
print(bike_location['Silver Spring Metro/Colesville Rd & Wayne Ave'])
vincenty(station_data['RD']['Silver Spring'], bike_location['11th & O St NW']).miles
for key_bike in bike_location:
dist = vincenty(station_data['RD']['Silver Spring'], bike_location[key_bike]).miles
if dist <= 0.3:
print([key_bike ,dist])
def close_stations(distance):
    """This fn will return a dict of bikeshare stations close
    to each metro stop based on the supplied distance in miles."""
lines = ['RD', 'YL', 'GR','BL', 'OR', 'SV']
bikes_close = dict()
for ii in range(len(lines)):
bikes_temp = []
for key_metro in station_data[lines[ii]]:
for key_bike in bike_location:
dist = vincenty(station_data[lines[ii]][key_metro], bike_location[key_bike]).miles
if dist <= distance:
bikes_temp.append(key_bike)
print([lines[ii], key_metro, key_bike, dist])
bikes_close[lines[ii]] = list(set(bikes_temp))
return bikes_close
lines = ['RD', 'YL', 'GR','BL', 'OR', 'SV']
bikes_close = dict()
for ii in range(len(lines)):
bikes_temp = []
for key_metro in station_data[lines[ii]]:
for key_bike in bike_location:
dist = vincenty(station_data[lines[ii]][key_metro], bike_location[key_bike]).miles
if dist <= 0.25:
bikes_temp.append(key_bike)
print([lines[ii], key_metro, key_bike, dist])
bikes_close[lines[ii]] = list(set(bikes_temp))
print(len(bikes_close['GR']))
print(bikes_close['GR'][:5])
pickle.dump( bikes_close, open( "bikes_close.p", "wb" ) )
for ii in bikes_close:
print(ii, len(bikes_close[ii]))
fn_test = close_stations(0.1)
for ii in fn_test:
print(ii, len(fn_test[ii]))
fn_test
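# Sketch of a follow-up check (illustrative only): sweep a few distance thresholds with
# close_stations() and see how quickly the station counts grow per line. Note that
# close_stations() prints each match, so the output is verbose, and the vincenty loop is
# re-run for every threshold.
thresholds = [0.1, 0.25, 0.5]
for d in thresholds:
    counts = {line: len(st) for line, st in close_stations(d).items()}
    print(d, counts)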
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Combine statistics and hard limits
Step2: Dump the file for later use
Step3: Helper methods to plot the problematic data
Step4: Merge all .pickle files in folder
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
import glob
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import json
import requests
import time
import pickle
from ipywidgets import FloatProgress
from IPython.display import display
#from API_client.python.datahub import datahub
#from API_client.python.lib.dataset import dataset
from apiclient.datahub import datahub
from apiclient.dataset import dataset
month = 12
%%time
data_file_path = '/data/files/eneli/synop_data_history_decoded/' ## insert data file path here
dtypes = {'cloud_below_station_type':str,
'cloud_location_type_AC':str,
'cloud_location_type_AS':str,
'cloud_location_type_CB':str,
'cloud_location_type_CI':str,
'cloud_location_type_CS':str,
'cloud_location_type_CU':str,
'cloud_location_type_ST':str,
'cloud_location_type_SC':str,
'cloud_type_high_compass_dir':str,
'cloud_type_low_compass_dir':str,
'cloud_type_middle_compass_dir':str,
'hoar_frost':str,
'hoar_frost_phenom_descr':str,
'weather_present_simple':str,
'state_of_ground_snow':str}
#fls = glob.glob('/data/files/eneli/synop_data_state_csv_metaf2xml/2018/5/*/*')
fls = glob.glob(data_file_path + '/2017/{0}/*'.format(month))
fin_csvs = []
usecols = ['time','station','pressure','temperature','station_pressure']
f = FloatProgress(min = 0, max = len(fls))
display(f)
for i in fls:
#try:
fin_csvs.append(pd.read_csv(i, usecols=usecols, dtype=dtypes))
#except:
# print("cannot read ", i)
# with open(i) as fff:
# print(len(fff.readlines()))
f.value += 1
fin = pd.concat(fin_csvs)
stations = fin['station'].unique()
physical_limits = {
"temperature": [-80, 65],
"wind_speed": [0, 90],
"pressure": [800,1100],
"station_pressure": [400,1100],
"precipitation_1_hour_accumulation": [0,200],
"precipitation_2_hour_accumulation": [0,250],
"precipitation_3_hour_accumulation": [0,400],
"precipitation_6_hour_accumulation": [0,400],
"precipitation_9_hour_accumulation": [0,400],
"precipitation_12_hour_accumulation": [0,400],
"precipitation_15_hour_accumulation": [0,400],
"precipitation_18_hour_accumulation": [0,400],
"precipitation_24_hour_accumulation": [0,400],
"rel_humidity1":[0,105],
"rel_humidity2":[0,105],
"rel_humidity3":[0,105],
"rel_humidity4":[0,105]
}
%%time
## st_table -- statistics of observations by station and month
st_table = {}
f = FloatProgress(min = 0, max = len(stations))
display(f)
for st in stations:
stn = "{0:05d}".format(int(st))
st_table[stn] = {}
t_temp = fin[fin['station'] == st]
varis = [i for i in t_temp.columns if not i in ['time','elevation','lon','lat','station']]
f.value+=1
for v in varis:
v_temp = t_temp[v]
if v_temp.dtype == np.float64:
st_table[stn][v] = {}
if ~np.all(v_temp.isnull()):
st_table[stn][v] = {}
st_table[stn][v][v+'_max'] = v_temp.max()
st_table[stn][v][v+'_min'] = v_temp.min()
st_table[stn][v][v+'_count'] = len(v_temp)
st_table[stn][v][v+'_quantiles'] = list(v_temp.quantile([0.05,0.25,0.5,0.75,0.95]))
def make_limit_dictionary(stations, physical_limits, monthlist, varlist):
ret_dict = {}
for st in stations:
stn = "{0:05d}".format(int(st))
ret_dict[stn] = {}
for mon in monthlist:
ret_dict[stn][mon] = {}
for var in varlist:
ret_dict[stn][mon][var] = {}
if var in physical_limits:
ret_dict[stn][mon][var][var+'_max'] = physical_limits[var][1]
ret_dict[stn][mon][var][var+'_min'] = physical_limits[var][0]
return ret_dict
%%time
## limit_dict -- dictionary with stations and limitations
## cols = [i for i in fin.columns if not i in ['time','elevation','lon','lat','station','station_name','report_modifier','station_name','station_type','synop_code']]
monthlist=list([month,])
limit_dict = make_limit_dictionary(stations, physical_limits, monthlist, physical_limits.keys())
def add_criteria(station, month, variable, st_table, limit_dict):
def get_vbounds():
if variable == 'temperature':
maxmindiff = 3 * (st_table[station][variable][variable + '_quantiles'][4] - st_table[station][variable][variable + '_quantiles'][0])
vmax = maxmindiff + st_table[station][variable][variable + '_quantiles'][4]
vmin = -maxmindiff + st_table[station][variable][variable + '_quantiles'][0]
elif variable in ['station_pressure', 'pressure']:
vmax = st_table[station][variable][variable + '_quantiles'][4] * 1.05
vmin = st_table[station][variable][variable + '_quantiles'][0] / 1.05
else:
vmax = st_table[station][variable][variable + '_quantiles'][4] * 1.5
vmin = st_table[station][variable][variable + '_quantiles'][0] / 1.5
return vmax, vmin
assert type(station) == str
assert type(variable) == str
assert type(st_table) == dict
if not variable in st_table[station]:
#print("no variable, returning", variable)
return
## if data is too sparse, better leave it
if not variable + '_count' in st_table[station][variable]:
return
if st_table[station][variable][variable + '_count'] < 2000:
return
if variable + '_max' in st_table[station][variable]:
#print("var found", variable)
#vmax = st_table[station][variable][variable + '_quantiles'][4] * 1.5
#vmin = st_table[station][variable][variable + '_quantiles'][0] / 1.5
vmax, vmin = get_vbounds()
if not variable + '_max' in limit_dict[station][month][variable]:
limit_dict[station][month][variable][variable + '_max'] = vmax
limit_dict[station][month][variable][variable + '_min'] = vmin
else:
if limit_dict[station][month][variable][variable + '_max'] > vmax:
limit_dict[station][month][variable][variable + '_max'] = vmax
if limit_dict[station][month][variable][variable + '_min'] < vmin:
limit_dict[station][month][variable][variable + '_min'] = vmin
else:
pass
##print("var not found in",st_table[station][variable], variable)
%%time
for strange in range(len(stations)):
for i in limit_dict["{0:05d}".format(stations[strange])][month].keys():
add_criteria("{0:05d}".format(stations[strange]),month,i,st_table, limit_dict)
pickle.dump(limit_dict, open('limit_dict_{0}.pickle'.format(month),'wb'))
# var = 'station_pressure'
def find_stlistike():
stlistike = []
f = FloatProgress(min=0, max=len(stations))
display(f)
var = 'pressure'
for st in range(len(stations)):
if np.any(fin[fin['station']==stations[st]][var] > limit_dict["{0:05d}".format(stations[st])][month][var][var + '_max']):
print(st,)
stlistike.append(st)
f.value += 1
return stlistike
def plot_suspicious(var, stlistike):
for st in stlistike:
trtr = fin[fin['station']==stations[st]][['time',var,'station']].drop_duplicates().sort_values('time')
trtr['datetime'] = (trtr['time']*1.e9).apply(pd.to_datetime)
fig=plt.figure(figsize=(15,10))
plt.plot(trtr['datetime'],trtr[var],'*')
plt.plot(trtr['datetime'],np.ones(len(trtr))*limit_dict["{0:05d}".format(stations[st])][month][var][var + '_max'])
plt.plot(trtr['datetime'],np.ones(len(trtr))*limit_dict["{0:05d}".format(stations[st])][month][var][var + '_min'])
plt.title("{0} {1}".format(st, len(trtr.index)))
plt.show()
def check_suspicious(var, st):
st = 410
fig=plt.figure(figsize=(15,10))
trtr = fin[fin['station']==stations[st]][['time',var,'station']].drop_duplicates().sort_values('time')
trtr['datetime'] = (trtr['time']*1.e9).apply(pd.to_datetime)
trfilt=trtr[trtr['datetime'].apply(lambda x: x.year) == 2010]
print(len(trfilt))
plt.plot(trfilt['datetime'],trfilt['temperature'])
plt.show()
stl = find_stlistike()
plot_suspicious('pressure',stl[160:200])
a = [pickle.load(open(i,'rb')) for i in glob.glob('limit_dict_*.pickle')]
from itertools import groupby
fd = {}
for oo in range(1,13):
a = pickle.load(open('limit_dict_{0}.pickle'.format(oo),'rb'))
for k,v in a.items(): # k is station ID
if not k in fd.keys():
fd[k] = v
else:
fd[k].update(v)
pickle.dump(fd, open('limit_dict_year.pickle'.format(month),'wb'))
fd['01001'][12]
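# Sketch (not part of the pipeline above): given the merged year-round limits in `fd`,
# flag observations of one variable that fall outside the station/month bounds.
# Dictionary layout follows the structures built earlier; the function and variable
# names are illustrative only.
def out_of_bounds(df, station, mon, var, limits):
    bounds = limits.get(station, {}).get(mon, {}).get(var, {})
    if var + '_max' not in bounds:
        return df.iloc[0:0]  # no limits known for this combination
    mask = (df[var] > bounds[var + '_max']) | (df[var] < bounds[var + '_min'])
    return df[mask]

st0 = "{0:05d}".format(stations[0])
bad = out_of_bounds(fin[fin['station'] == stations[0]], st0, month, 'temperature', fd)
print(len(bad), 'suspicious temperature values for station', st0)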
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup Google Cloud project
Step2: Set configurations
Step3: Create Interactive Context
Step4: 1. Hyperparameter generation
Step5: 2. Data extraction
Step6: Extract train and eval splits
Step7: Extract test split
Step8: Read sample extract tfrecords
Step9: 3. Data validation
Step10: Generate statistics
Step11: Validate statistics against schema
Step12: 4. Data transformation
Step13: Read sample transformed tfrecords
Step14: 5. Model training
Step15: Get the latest model to warm start
Step16: Train the model
Step17: 6. Model evaluation
Step18: Evaluate and validate the model against the baseline model.
Step19: 7. Model pushing
Step20: 8. Model Upload to Vertex AI
|
<ASSISTANT_TASK:>
Python Code:
import os
import json
import numpy as np
import tfx
import tensorflow as tf
import tensorflow_transform as tft
import tensorflow_data_validation as tfdv
import tensorflow_model_analysis as tfma
from tensorflow_transform.tf_metadata import schema_utils
import logging
from src.common import features
from src.model_training import data
from src.tfx_pipelines import components
logging.getLogger().setLevel(logging.ERROR)
tf.get_logger().setLevel('ERROR')
print("TFX Version:", tfx.__version__)
print("Tensorflow Version:", tf.__version__)
PROJECT = '[your-project-id]' # Change to your project id.
REGION = 'us-central1' # Change to your region.
BUCKET = '[your-bucket-name]' # Change to your bucket name.
SERVICE_ACCOUNT = "[your-service-account]"
if PROJECT == "" or PROJECT is None or PROJECT == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT = shell_output[0]
if SERVICE_ACCOUNT == "" or SERVICE_ACCOUNT is None or SERVICE_ACCOUNT == "[your-service-account]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.account)' 2>/dev/null
SERVICE_ACCOUNT = shell_output[0]
if BUCKET == "" or BUCKET is None or BUCKET == "[your-bucket-name]":
# Get your bucket name to GCP project id
BUCKET = PROJECT
# Try to create the bucket if it doesn't exists
! gsutil mb -l $REGION gs://$BUCKET
print("")
PARENT = f"projects/{PROJECT}/locations/{REGION}"
print("Project ID:", PROJECT)
print("Region:", REGION)
print("Bucket name:", BUCKET)
print("Service Account:", SERVICE_ACCOUNT)
print("Vertex API Parent URI:", PARENT)
VERSION = 'v01'
DATASET_DISPLAY_NAME = 'chicago-taxi-tips'
MODEL_DISPLAY_NAME = f'{DATASET_DISPLAY_NAME}-classifier-{VERSION}'
WORKSPACE = f'gs://{BUCKET}/{DATASET_DISPLAY_NAME}'
RAW_SCHEMA_DIR = 'src/raw_schema'
MLMD_SQLLITE = 'mlmd.sqllite'
ARTIFACT_STORE = os.path.join(WORKSPACE, 'tfx_artifacts_interactive')
MODEL_REGISTRY = os.path.join(WORKSPACE, 'model_registry')
PIPELINE_NAME = f'{MODEL_DISPLAY_NAME}-train-pipeline'
PIPELINE_ROOT = os.path.join(ARTIFACT_STORE, PIPELINE_NAME)
REMOVE_ARTIFACTS = True
if tf.io.gfile.exists(ARTIFACT_STORE) and REMOVE_ARTIFACTS:
print("Removing previous artifacts...")
tf.io.gfile.rmtree(ARTIFACT_STORE)
if tf.io.gfile.exists(MLMD_SQLLITE) and REMOVE_ARTIFACTS:
print("Deleting previous mlmd.sqllite...")
tf.io.gfile.rmtree(MLMD_SQLLITE)
print(f'Pipeline artifacts directory: {PIPELINE_ROOT}')
print(f'Local metadata SQLlit path: {MLMD_SQLLITE}')
import ml_metadata as mlmd
from ml_metadata.proto import metadata_store_pb2
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
connection_config = metadata_store_pb2.ConnectionConfig()
connection_config.sqlite.filename_uri = MLMD_SQLLITE
connection_config.sqlite.connection_mode = 3 # READWRITE_OPENCREATE
mlmd_store = mlmd.metadata_store.MetadataStore(connection_config)
context = InteractiveContext(
pipeline_name=PIPELINE_NAME,
pipeline_root=PIPELINE_ROOT,
metadata_connection_config=connection_config
)
hyperparams_gen = components.hyperparameters_gen(
num_epochs=5,
learning_rate=0.001,
batch_size=512,
hidden_units='64,64',
)
context.run(hyperparams_gen, enable_cache=False)
json.load(
tf.io.gfile.GFile(
os.path.join(
hyperparams_gen.outputs['hyperparameters'].get()[0].uri, 'hyperparameters.json')
)
)
from src.common import datasource_utils
from tfx.extensions.google_cloud_big_query.example_gen.component import BigQueryExampleGen
from tfx.proto import example_gen_pb2, transform_pb2
sql_query = datasource_utils.get_training_source_query(
PROJECT, REGION, DATASET_DISPLAY_NAME, ml_use='UNASSIGNED', limit=5000)
output_config = example_gen_pb2.Output(
split_config=example_gen_pb2.SplitConfig(
splits=[
example_gen_pb2.SplitConfig.Split(name="train", hash_buckets=4),
example_gen_pb2.SplitConfig.Split(name="eval", hash_buckets=1),
]
)
)
train_example_gen = BigQueryExampleGen(query=sql_query, output_config=output_config)
beam_pipeline_args=[
f"--project={PROJECT}",
f"--temp_location={os.path.join(WORKSPACE, 'tmp')}"
]
context.run(
train_example_gen,
beam_pipeline_args=beam_pipeline_args,
enable_cache=False
)
sql_query = datasource_utils.get_training_source_query(
PROJECT, REGION, DATASET_DISPLAY_NAME, ml_use='TEST', limit=1000)
output_config = example_gen_pb2.Output(
split_config=example_gen_pb2.SplitConfig(
splits=[
example_gen_pb2.SplitConfig.Split(name="test", hash_buckets=1),
]
)
)
test_example_gen = BigQueryExampleGen(query=sql_query, output_config=output_config)
beam_pipeline_args=[
f"--project={PROJECT}",
f"--temp_location={os.path.join(WORKSPACE, 'tmp')}"
]
context.run(
test_example_gen,
beam_pipeline_args=beam_pipeline_args,
enable_cache=False
)
train_uri = os.path.join(train_example_gen.outputs['examples'].get()[0].uri, "Split-train/*")
source_raw_schema = tfdv.load_schema_text(os.path.join(RAW_SCHEMA_DIR, 'schema.pbtxt'))
raw_feature_spec = schema_utils.schema_as_feature_spec(source_raw_schema).feature_spec
def _parse_tf_example(tfrecord):
return tf.io.parse_single_example(tfrecord, raw_feature_spec)
tfrecord_filenames = tf.data.Dataset.list_files(train_uri)
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
dataset = dataset.map(_parse_tf_example)
for raw_features in dataset.shuffle(1000).batch(3).take(1):
for key in raw_features:
print(f"{key}: {np.squeeze(raw_features[key], -1)}")
print("")
schema_importer = tfx.dsl.components.common.importer.Importer(
source_uri=RAW_SCHEMA_DIR,
artifact_type=tfx.types.standard_artifacts.Schema,
reimport=False
)
context.run(schema_importer)
statistics_gen = tfx.components.StatisticsGen(
examples=train_example_gen.outputs['examples'])
context.run(statistics_gen)
!rm -r {RAW_SCHEMA_DIR}/.ipynb_checkpoints/
example_validator = tfx.components.ExampleValidator(
statistics=statistics_gen.outputs['statistics'],
schema=schema_importer.outputs['result'],
)
context.run(example_validator)
context.show(example_validator.outputs['anomalies'])
_transform_module_file = 'src/preprocessing/transformations.py'
transform = tfx.components.Transform(
examples=train_example_gen.outputs['examples'],
schema=schema_importer.outputs['result'],
module_file=_transform_module_file,
splits_config=transform_pb2.SplitsConfig(
analyze=['train'], transform=['train', 'eval']),
)
context.run(transform, enable_cache=False)
transformed_train_uri = os.path.join(transform.outputs['transformed_examples'].get()[0].uri, "Split-train/*")
transform_graph_uri = transform.outputs['transform_graph'].get()[0].uri
tft_output = tft.TFTransformOutput(transform_graph_uri)
transform_feature_spec = tft_output.transformed_feature_spec()
for input_features, target in data.get_dataset(
transformed_train_uri, transform_feature_spec, batch_size=3).take(1):
for key in input_features:
print(f"{key} ({input_features[key].dtype}): {input_features[key].numpy().tolist()}")
print(f"target: {target.numpy().tolist()}")
from tfx.dsl.components.common.resolver import Resolver
from tfx.dsl.experimental import latest_artifacts_resolver
from tfx.dsl.experimental import latest_blessed_model_resolver
latest_model_resolver = Resolver(
strategy_class=latest_artifacts_resolver.LatestArtifactsResolver,
latest_model=tfx.types.Channel(type=tfx.types.standard_artifacts.Model)
)
context.run(latest_model_resolver, enable_cache=False)
_train_module_file = 'src/model_training/runner.py'
trainer = tfx.components.Trainer(
module_file=_train_module_file,
examples=transform.outputs['transformed_examples'],
schema=schema_importer.outputs['result'],
base_model=latest_model_resolver.outputs['latest_model'],
transform_graph=transform.outputs['transform_graph'],
hyperparameters=hyperparams_gen.outputs['hyperparameters'],
)
context.run(trainer, enable_cache=False)
blessed_model_resolver = Resolver(
strategy_class=latest_blessed_model_resolver.LatestBlessedModelResolver,
model=tfx.types.Channel(type=tfx.types.standard_artifacts.Model),
model_blessing=tfx.types.Channel(type=tfx.types.standard_artifacts.ModelBlessing)
)
context.run(blessed_model_resolver, enable_cache=False)
from tfx.components import Evaluator
eval_config = tfma.EvalConfig(
model_specs=[
tfma.ModelSpec(
signature_name='serving_tf_example',
label_key=features.TARGET_FEATURE_NAME,
prediction_key='probabilities')
],
slicing_specs=[
tfma.SlicingSpec(),
],
metrics_specs=[
tfma.MetricsSpec(
metrics=[
tfma.MetricConfig(class_name='ExampleCount'),
tfma.MetricConfig(
class_name='BinaryAccuracy',
threshold=tfma.MetricThreshold(
value_threshold=tfma.GenericValueThreshold(
lower_bound={'value': 0.8}),
# Change threshold will be ignored if there is no
# baseline model resolved from MLMD (first run).
change_threshold=tfma.GenericChangeThreshold(
direction=tfma.MetricDirection.HIGHER_IS_BETTER,
absolute={'value': -1e-10}))),
])
])
evaluator = Evaluator(
examples=test_example_gen.outputs['examples'],
example_splits=['test'],
model=trainer.outputs['model'],
baseline_model=blessed_model_resolver.outputs['model'],
eval_config=eval_config,
schema=schema_importer.outputs['result']
)
context.run(evaluator, enable_cache=False)
evaluation_results = evaluator.outputs['evaluation'].get()[0].uri
print("validation_ok:", tfma.load_validation_result(evaluation_results).validation_ok, '\n')
for entry in list(tfma.load_metrics(evaluation_results))[0].metric_keys_and_values:
value = entry.value.double_value.value
if value:
print(entry.key.name, ":", round(entry.value.double_value.value, 3))
exported_model_location = os.path.join(MODEL_REGISTRY, MODEL_DISPLAY_NAME)
push_destination=tfx.proto.pusher_pb2.PushDestination(
filesystem=tfx.proto.pusher_pb2.PushDestination.Filesystem(
base_directory=exported_model_location,
)
)
pusher = tfx.components.Pusher(
model=trainer.outputs['model'],
model_blessing=evaluator.outputs['blessing'],
push_destination=push_destination
)
context.run(pusher, enable_cache=False)
serving_runtime = 'tf2-cpu.2-5'
serving_image_uri = f"us-docker.pkg.dev/vertex-ai/prediction/{serving_runtime}:latest"
labels = {
'dataset_name': DATASET_DISPLAY_NAME,
'pipeline_name': PIPELINE_NAME
}
labels = json.dumps(labels)
vertex_model_uploader = components.vertex_model_uploader(
project=PROJECT,
region=REGION,
model_display_name=MODEL_DISPLAY_NAME,
pushed_model_location=exported_model_location,
serving_image_uri=serving_image_uri,
model_blessing=evaluator.outputs['blessing'],
explanation_config='',
labels=labels
)
context.run(vertex_model_uploader, enable_cache=False)
vertex_model_uploader.outputs['uploaded_model'].get()[0].get_string_custom_property('model_uri')
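# Optional check: because the interactive context points at a local SQLite MLMD store,
# we can query it directly to see what the pipeline registered. This is only a sketch
# listing artifact types and URIs via the ml-metadata client created earlier.
for artifact in mlmd_store.get_artifacts():
    artifact_type = mlmd_store.get_artifact_types_by_id([artifact.type_id])[0]
    print(artifact_type.name, '->', artifact.uri)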
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Ice Albedo
Step7: 1.4. Atmospheric Coupling Variables
Step8: 1.5. Oceanic Coupling Variables
Step9: 1.6. Prognostic Variables
Step10: 2. Key Properties --> Software Properties
Step11: 2.2. Code Version
Step12: 2.3. Code Languages
Step13: 3. Grid
Step14: 3.2. Adaptive Grid
Step15: 3.3. Base Resolution
Step16: 3.4. Resolution Limit
Step17: 3.5. Projection
Step18: 4. Glaciers
Step19: 4.2. Description
Step20: 4.3. Dynamic Areal Extent
Step21: 5. Ice
Step22: 5.2. Grounding Line Method
Step23: 5.3. Ice Sheet
Step24: 5.4. Ice Shelf
Step25: 6. Ice --> Mass Balance
Step26: 7. Ice --> Mass Balance --> Basal
Step27: 7.2. Ocean
Step28: 8. Ice --> Mass Balance --> Frontal
Step29: 8.2. Melting
Step30: 9. Ice --> Dynamics
Step31: 9.2. Approximation
Step32: 9.3. Adaptive Timestep
Step33: 9.4. Timestep
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccr-iitm', 'sandbox-1', 'landice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ** ATENÇÃO ****
Step2: Usando a expressão with
Step3: Manipulando Arquivos CSV (comma-separated values )
Step4: Manipulando Arquivos JSON (Java Script Object Notation )
|
<ASSISTANT_TASK:>
Python Code:
# Versão da Linguagem Python
from platform import python_version
print('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())
texto = "Cientista de Dados é a profissão que mais tem crescido em todo mundo.\n"
texto = texto + "Esses profissionais precisam se especializar em Programação, Estatística e Machine Learning.\n"
texto += "E claro, em Big Data."
print(texto)
# Importando o módulo os
import os
# Criando um arquivo
arquivo = open(os.path.join('arquivos/cientista.txt'),'w')
# Gravando os dados no arquivo
for palavra in texto.split():
arquivo.write(palavra+' ')
# Fechando o arquivo
arquivo.close()
# Lendo o arquivo
arquivo = open('arquivos/cientista.txt','r')
conteudo = arquivo.read()
arquivo.close()
print(conteudo)
with open('arquivos/cientista.txt','r') as arquivo:
conteudo = arquivo.read()
print(len(conteudo))
print(conteudo)
with open('arquivos/cientista.txt','w') as arquivo:
arquivo.write(texto[:21])
arquivo.write('\n')
arquivo.write(texto[:33])
# Lendo o arquivo
arquivo = open('arquivos/cientista.txt','r')
conteudo = arquivo.read()
arquivo.close()
print (conteudo)
# Importando o módulo csv
import csv
with open('arquivos/numeros.csv','w') as arquivo:
writer = csv.writer(arquivo)
writer.writerow(('primeira','segunda','terceira'))
writer.writerow((55,93,76))
writer.writerow((62,14,86))
# Leitura de arquivos csv
with open('arquivos/numeros.csv','r') as arquivo:
leitor = csv.reader(arquivo)
for x in leitor:
print ('Número de colunas:', len(x))
print(x)
# Código alternativo para eventuais problemas com linhas em branco no arquivo
with open('arquivos/numeros.csv','r', encoding='utf8', newline = '\r\n') as arquivo:
leitor = csv.reader(arquivo)
for x in leitor:
print ('Número de colunas:', len(x))
print(x)
# Gerando uma lista com dados do arquivo csv
with open('arquivos/numeros.csv','r') as arquivo:
leitor = csv.reader(arquivo)
dados = list(leitor)
print (dados)
# Impriminfo a partir da segunda linha
for linha in dados[1:]:
print (linha)
# Criando um dicionário
dict = {'nome': 'Guido van Rossum',
'linguagem': 'Python',
'similar': ['c','Modula-3','lisp'],
'users': 1000000}
for k,v in dict.items():
print (k,v)
# Importando o módulo Json
import json
# Convertendo o dicionário para um objeto json
json.dumps(dict)
# Criando um arquivo Json
with open('arquivos/dados.json','w') as arquivo:
arquivo.write(json.dumps(dict))
# Leitura de arquivos Json
with open('arquivos/dados.json','r') as arquivo:
texto = arquivo.read()
data = json.loads(texto)
print (data)
print (data['nome'])
# Imprimindo um arquivo Json copiado da internet
from urllib.request import urlopen
response = urlopen("http://vimeo.com/api/v2/video/57733101.json").read().decode('utf8')
data = json.loads(response)[0]
print ('Título: ', data['title'])
print ('URL: ', data['url'])
print ('Duração: ', data['duration'])
print ('Número de Visualizações: ', data['stats_number_of_plays'])
# Copiando o conteúdo de um arquivo para outro
import os
arquivo_fonte = 'arquivos/dados.json'
arquivo_destino = 'arquivos/json_data.txt'
# Método 1
with open(arquivo_fonte,'r') as infile:
text = infile.read()
with open(arquivo_destino,'w') as outfile:
outfile.write(text)
# Método 2
open(arquivo_destino,'w').write(open(arquivo_fonte,'r').read())
# Leitura de arquivos Json
with open('arquivos/json_data.txt','r') as arquivo:
texto = arquivo.read()
data = json.loads(texto)
print(data)
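# A further sketch (beyond the original material): csv.DictReader / csv.DictWriter read
# and write rows as dictionaries keyed by the header, which is often more convenient
# than positional indexing. The file name below is illustrative.
with open('arquivos/numeros2.csv', 'w') as arquivo:
    campos = ['primeira', 'segunda', 'terceira']
    writer = csv.DictWriter(arquivo, fieldnames=campos)
    writer.writeheader()
    writer.writerow({'primeira': 55, 'segunda': 93, 'terceira': 76})

with open('arquivos/numeros2.csv', 'r') as arquivo:
    for registro in csv.DictReader(arquivo):
        print(registro['primeira'], registro['segunda'], registro['terceira'])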
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <div id="2">2. Coexistence at equilibrium
Step2: <div id="3">3. Relative nonlinearity</div>
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
from numpy import *
from scipy.integrate import odeint
from matplotlib.pyplot import *
ion()
def consumer_resource1(y, t, r, K, b1, m1, b2, m2):
return array([ y[0] * (r*(1-y[0]/K) - b1*y[1] - b2*y[2]),
y[1] * (b1*y[0] - m1),
y[2] * (b2*y[0] - m2)])
t = arange(0, 200, .1)
y0 = [1, 1., 1.]
pars = (1., 1., 1., 0.1, 0.5, 0.1)
y = odeint(consumer_resource1, y0, t, pars)
plot(t, y)
xlabel('tempo')
ylabel('populações')
legend(['$R$', '$C_1$', '$C_2$'])
x = arange(0, 0.8, 0.01)
plot(x, 1.*x)
plot(x, 0.5*x)
legend(['$C_1$', '$C_2$'], frameon=False, loc='best')
axhline(0.1, c='k', ls=':')
xlabel('$R$')
ylabel('resposta funcional')
text(0.07, 0.16, '$R^*_1$')
text(0.2, 0.05, '$R^*_2$')
text(0.7, 0.11, '$d$')
def consumer_resource2(y, t, r, K, b1, m1, b2, m2):
return array([ y[0] * (r*(1-y[0]/K) - b1*y[1] - b2*y[2]),
y[1] * (b1*y[0] - m1) - 0.2*y[1]**2,
y[2] * (b2*y[0] - m2)])
pars = (1., 1., 1., 0.1, 0.5, 0.1)
y = odeint(consumer_resource2, y0, t, pars)
plot(t, y)
xlabel('tempo')
ylabel('populações')
legend(['$R$', '$C_1$', '$C_2$'])
def consumer_resource3(y, t, r, K, b1, m1, h1, b2, m2):
return array([ y[0] * (r*(1-y[0]/K) - b1*y[1]/(1+b1*h1*y[0]) - b2*y[2]),
y[1] * (b1*y[0]/(1+b1*h1*y[0]) - m1),
y[2] * (b2*y[0] - m2)])
t = arange(0, 400, .1)
# note que os outros parâmetros não foram alterados!
pars = (1., 1., 1., 0.1, 3., 0.5, 0.1)
y = odeint(consumer_resource3, y0, t, pars)
plot(t, y)
xlabel('tempo')
ylabel('populações')
legend(['$R$', '$C_1$', '$C_2$'], loc='upper left')
print('média de R (últimos T-200): %.2f' % y[-2000:,0].mean())
x = arange(0, 0.8, 0.01)
plot(x, 1.*x/(1+3*x), 'g')
plot(x, 0.5*x, 'r')
legend(['$C_1$', '$C_2$'], frameon=False, loc='best')
axhline(0.1, c='k', ls=':')
xlabel('$R$')
ylabel('resposta funcional')
text(0.1, 0.12, '$R^*_1$')
text(0.2, 0.05, '$R^*_2$')
text(0.7, 0.11, '$d$')
plot([0.01, 0.6], 2*[0.02], '.-b')
text(0.4, 0.04, "amplitude de\nvalores de $R$")
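# Quick numerical check (sketch): for the linear functional response, each consumer's
# break-even resource level is R*_i = m_i / b_i, so the species with the lower R* should
# exclude the other in the first model. The values repeat the parameters used above.
b1, m1 = 1.0, 0.1
b2, m2 = 0.5, 0.1
print('R*_1 =', m1 / b1)
print('R*_2 =', m2 / b2)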
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 0. Select the Raw Data File
Step2: 1. Exploring the Raw Data File
Step3: 2. Loading the data
Step4: 3. Read the parameters
Step5: 3.a Prepare to read the data
Step6: 3.b Read the data
Step7: 4.a Preparing some necessary parameters
Step8: 4b. Calling the NumpyTranslator to create the pycroscopy data file
Step9: Notes on pycroscopy translation
Steps 1-3 would be performed anyway in order to begin data analysis
|
<ASSISTANT_TASK:>
Python Code:
# downloading packages:
!pip install wget pycroscopy
# Ensure python 3 compatibility:
from __future__ import division, print_function, absolute_import, unicode_literals
# In case some of these packages are not installed, install them
#!pip install -U os wget numpy h5py matplotlib pycroscopy
# The package for accessing files in directories, etc.:
import os
import wget
# The mathematical computation package:
import numpy as np
# The package used for creating and manipulating HDF5 files:
import h5py
# Packages for plotting:
import matplotlib.pyplot as plt
# Finally import pycroscopy for certain scientific analysis:
import pycroscopy as px
# set up notebook to show plots within the notebook
% matplotlib inline
# download the data file from Github:
url = 'https://raw.githubusercontent.com/pycroscopy/pycroscopy/master/data/STS.asc'
data_file_path = 'temp.asc'
if os.path.exists(data_file_path):
os.remove(data_file_path)
_ = wget.download(url, data_file_path)
with open(data_file_path, 'r') as file_handle:
for lin_ind in range(10):
print(file_handle.readline())
# Extracting the raw data into memory
file_handle = open(data_file_path, 'r')
string_lines = file_handle.readlines()
file_handle.close()
# Reading parameters stored in the first few rows of the file
parm_dict = dict()
for line in string_lines[3:17]:
line = line.replace('# ', '')
line = line.replace('\n', '')
temp = line.split('=')
test = temp[1].strip()
try:
test = float(test)
# convert those values that should be integers:
if test % 1 == 0:
test = int(test)
except ValueError:
pass
parm_dict[temp[0].strip()] = test
# Print out the parameters extracted
for key in parm_dict.keys():
print(key, ':\t', parm_dict[key])
num_rows = int(parm_dict['y-pixels'])
num_cols = int(parm_dict['x-pixels'])
num_pos = num_rows * num_cols
spectra_length = int(parm_dict['z-points'])
# num_headers = len(string_lines) - num_pos
num_headers = 403
# Extract the STS data from subsequent lines
raw_data_2d = np.zeros(shape=(num_pos, spectra_length), dtype=np.float32)
for line_ind in range(num_pos):
this_line = string_lines[num_headers + line_ind]
string_spectrum = this_line.split('\t')[:-1] # omitting the new line
raw_data_2d[line_ind] = np.array(string_spectrum, dtype=np.float32)
max_v = 1 # This is the one parameter we are not sure about
folder_path, file_name = os.path.split(data_file_path)
file_name = file_name[:-4] + '_'
# Generate the x / voltage / spectroscopic axis:
volt_vec = np.linspace(-1 * max_v, 1 * max_v, spectra_length)
h5_path = os.path.join(folder_path, file_name + '.h5')
tran = px.io.NumpyTranslator()
h5_path = tran.translate(h5_path, raw_data_2d, num_rows, num_cols,
qty_name='Current', data_unit='nA', spec_name='Bias',
spec_unit='V', spec_val=volt_vec, scan_height=100,
scan_width=200, spatial_unit='nm', data_type='STS',
translator_name='ASC', parms_dict=parm_dict)
with h5py.File(h5_path, mode='r') as h5_file:
# See if a tree has been created within the hdf5 file:
px.hdf_utils.print_tree(h5_file)
h5_main = h5_file['Measurement_000/Channel_000/Raw_Data']
fig, axes = plt.subplots(ncols=2, figsize=(11,5))
spat_map = np.reshape(h5_main[:, 100], (100, 100))
px.plot_utils.plot_map(axes[0], spat_map, origin='lower')
axes[0].set_title('Spatial map')
axes[0].set_xlabel('X')
axes[0].set_ylabel('Y')
axes[1].plot(np.linspace(-1.0, 1.0, h5_main.shape[1]),
h5_main[250])
axes[1].set_title('IV curve at a single pixel')
axes[1].set_xlabel('Tip bias [V]')
axes[1].set_ylabel('Current [nA]')
# Remove both the original and translated files:
os.remove(h5_path)
os.remove(data_file_path)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Analyzing public cloud and hybrid networks
Step3: Initializing the Network and Snapshot
Step4: The network snapshot that we just initialized is illustrated below. It has a datacenter network with the standard leaf-spine design on the left. Though not strictly necessary, we have included a host srv-101 in this network to enable end-to-end analysis. The exit gateway of the datacenter connects to an Internet service provider (ASN 65200) that we call isp_dc.
Step5: Paths across VPCs within an AWS region
Step6: The trace above shows how traffic goes from host["east2_private"] to host["east2_public"] -- via the source subnet and VPC, then to the transit gateway, and finally to the destination VPC and subnet. Along the way, it also shows where the flow encounters security groups (at both instances) and network ACLs (at subnets). In this instance, all security groups and network ACLs permit this particular flow.
Step7: We see that such traffic does not reach the destination but instead is dropped by the AWS backbone (ASN 16509). This happens because, in our network, there is no (transit gateway or VPC) peering between VPCs in different regions. So, the source subnet is unaware of the address space of the destination subnet, which makes it use the default route that points to the Internet gateway (igw-02fd68f94367a67c7). The Internet gateway forwards the packet to aws-backbone, after NAT'ing its source IP. The packet is eventually dropped as it is using a private address as destination. Recall that using the instance name as destination amounts to using its private IP.
Step8: This traceroute starts out like the previous one, up until the AWS backbone (isp_16509) -- from source subnet to the Internet gateway which forwards it to the backbone, after source NAT'ing the packet. The backbone carries it to the internet gateway in the destination region (igw-0a8309f3192e7cea3), and this gateway NATs the packet's destination from the public IP to the instance's private IP.
Step9: In the output above, we see all expected tunnels. Each transit gateways has two established sessions to exitgw. The default AWS behavior is to have two IPSec tunnels between gateways and physical nodes.
Step10: The output above shows that all BGP sessions are established as expected.
Step11: We see that this traffic travels on the IPSec links between the datacenter's exitgw and the transit gateway in the destination region (tgw-06b348adabd13452d), and then makes it to the destination instance after making it successfully past the network ACL on the subnet node and the security group on the instance.
Step12: We now see that the traffic traverses the Internet via isp_65200 and the Internet gateway (igw-02fd68f94367a67c7), which NATs the destination address of the packet from the public to the private IP.
Step13: We see that Batfish correctly computes that the two instances in the public subnets are accessible from the Internet, and the other two are not.
Step14: We see that all four instances are accessible from the datacenter host.
Step15: We see that, against our policy, the public-facing instance allows non-SSH traffic. To see examples of such traffic, we can run the following query.
Step16: We thus see that our misconfigured public instance allows TCP traffic to port 3306 (MySQL).
|
<ASSISTANT_TASK:>
Python Code:
# Import packages
%run startup.py
bf = Session(host="localhost")
def show_first_trace(trace_answer_frame):
    """Prints the first trace in the answer frame.

    In the presence of multipath routing, Batfish outputs all traces
    from the source to destination. This function picks the first one.
    """
if len(trace_answer_frame) == 0:
print("No flows found")
else:
show("Flow: {}".format(trace_answer_frame.iloc[0]['Flow']))
show(trace_answer_frame.iloc[0]['Traces'][0])
def is_reachable(start_location, end_location, headers=None):
    """Checks if the start_location can reach the end_location using specified packet headers.

    All possible headers are considered if headers is None.
    """
ans = bf.q.reachability(pathConstraints=PathConstraints(startLocation=start_location,
endLocation=end_location),
headers=headers).answer()
return len(ans.frame()) > 0
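# Example usage (hypothetical arguments): restrict the reachability check to SSH traffic, e.g.
# is_reachable("srv-101", "i-04cd3db5124a05ee6", HeaderConstraints(applications="ssh"))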
# Initialize a network and snapshot
NETWORK_NAME = "hybrid-cloud"
SNAPSHOT_NAME = "snapshot"
SNAPSHOT_PATH = "networks/hybrid-cloud"
bf.set_network(NETWORK_NAME)
bf.init_snapshot(SNAPSHOT_PATH, name=SNAPSHOT_NAME, overwrite=True)
#Instances in AWS in each region and VPC type (public, private)
hosts = {}
hosts["east2_private"] = "i-04cd3db5124a05ee6"
hosts["east2_public"] = "i-01602d9efaed4409a"
hosts["west2_private"] = "i-0a5d64b8b58c6dd09"
hosts["west2_public"] = "i-02cae6eaa9edeed70"
#Public IPs of instances in AWS
public_ips = {}
public_ips["east2_public"] = "13.59.144.125" # of i-01602d9efaed4409a
public_ips["west2_public"] = "54.191.42.182" # of i-02cae6eaa9edeed70
# traceroute between instances in the same region, using SSH
ans = bf.q.traceroute(startLocation=hosts["east2_private"],
headers=HeaderConstraints(dstIps=hosts["east2_public"],
applications="ssh")).answer()
show_first_trace(ans.frame())
# traceroute between instances across region using the destination's private IP
ans = bf.q.traceroute(startLocation=hosts["east2_public"],
headers=HeaderConstraints(dstIps=hosts["west2_public"],
applications="ssh")).answer()
show_first_trace(ans.frame())
# traceroute between instances across region using the destination's public IP
ans = bf.q.traceroute(startLocation=hosts["east2_public"],
headers=HeaderConstraints(dstIps=public_ips["west2_public"],
applications="ssh")).answer()
show_first_trace(ans.frame())
# show the status of all IPSec tunnels between exitgw and AWS transit gateways
ans = bf.q.ipsecSessionStatus(nodes="exitgw", remoteNodes="/^tgw-/").answer()
show(ans.frame())
# show the status of all BGP sessions between exitgw and AWS transit gateways
ans = bf.q.bgpSessionStatus(nodes="exitgw", remoteNodes="/^tgw-/").answer()
show(ans.frame())
# traceroute from DC host to an instances using private IP
ans = bf.q.traceroute(startLocation="srv-101",
headers=HeaderConstraints(dstIps=hosts["east2_public"],
applications="ssh")).answer()
show_first_trace(ans.frame())
# traceroute from DC host to an instances using public IP
ans = bf.q.traceroute(startLocation="srv-101",
headers=HeaderConstraints(dstIps=public_ips["east2_public"],
applications="ssh")).answer()
show_first_trace(ans.frame())
# compute which instances are open to the Internet
reachable_from_internet = [key for (key, value) in hosts.items() if is_reachable("internet", value)]
print("\nInstances reachable from the Internet: {}".format(sorted(reachable_from_internet)))
# compute which instances are NOT open to the Internet
unreachable_from_internet = [key for (key, value) in hosts.items() if not is_reachable("internet", value)]
print("\nInstances NOT reachable from the Internet: {}".format(sorted(unreachable_from_internet)))
# compute which instances are reachable from data center
reachable_from_dc = [key for (key,value) in hosts.items() if is_reachable("srv-101", value)]
print("\nInstances reachable from the DC: {}".format(sorted(reachable_from_dc)))
tcp_non_ssh = HeaderConstraints(ipProtocols="tcp", dstPorts="!22")
reachable_from_internet_non_ssh = [key for (key, value) in hosts.items()
if is_reachable("internet", value, tcp_non_ssh)]
print("\nInstances reachable from the Internet with non-SSH traffic: {}".format(
sorted(reachable_from_internet_non_ssh)))
ans = bf.q.reachability(pathConstraints=PathConstraints(startLocation="internet",
endLocation=hosts["east2_public"]),
headers=tcp_non_ssh).answer()
show_first_trace(ans.frame())
flow=ans.frame().iloc[0]['Flow'] # the rogue flow uncovered by Batfish above
ans = bf.q.testFilters(nodes=hosts["east2_public"],
filters="~INGRESS_ACL~eni-01997085076a9b98a",
headers=HeaderConstraints(srcIps=flow.srcIp,
dstIps="10.20.1.207", # destination IP after the NAT at Step 3 above
srcPorts=flow.srcPort,
dstPorts=flow.dstPort,
ipProtocols=flow.ipProtocol)).answer()
show(ans.frame())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate 1000 random datasets of 50 vectors with constrained origins (to induce positive spatial autocorrelation), then calculate the vector Moran's I from the destination perspective (VMD) and a pseudo p value (based on 999 permutations) using randomization technique A and randomization technique B for each of the 1000 datasets.
|
<ASSISTANT_TASK:>
Python Code:
dest_A_rand_I = []
dest_B_rand_I = []
dest_A_rand_p = []
dest_B_rand_p = []
for i in range(1000):
phi = np.random.uniform(0,np.pi*2, 50).reshape((-1,1))
num = np.arange(0,50).reshape((-1,1))
OX = np.random.randint(0,500, 50).reshape((-1,1))
OY = np.random.randint(0,500, 50).reshape((-1,1))
DX = np.cos(phi)*(np.random.randint(0,500, 50)).reshape((-1,1))
DY = np.sin(phi)*np.random.randint(0,500, 50).reshape((-1,1))
vecs = np.hstack([num, OX, OY, DX, DY])
dests = vecs[:, 3:5]
wd = DistanceBand(dests, threshold=9999, alpha=-1.5, binary=False)
vmd = VecMoran(vecs, wd, focus='destination', rand='A', permutations=999)
dest_A_rand_I.append(vmd.I)
dest_A_rand_p.append(vmd.p_z_sim)
vmd = VecMoran(vecs, wd, focus='destination', rand='B', permutations=999)
dest_B_rand_I.append(vmd.I)
dest_B_rand_p.append(vmd.p_z_sim)
X,Y,U,V = zip(*vecs[:,1:])
plt.subplot(111)
for x in range(0,len(vecs[:,1])):
plt.arrow(X[x], #x1
Y[x], # y1
U[x]-X[x], # x2 - x1
V[x]-Y[x], # y2 - y1
fc="k", ec="k", head_width=0.05, head_length=0.1)
plt.xlim([-510,550])
plt.ylim([-510,550])
plt.title('Example of 50 random vectors')
plt.show()
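# Repeat the experiment, but draw origins and destinations from a narrow 450-500 window
# so that nearby vectors end up pointing to similar places, inducing positive spatial
# autocorrelation as described above.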
dest_A_cons_I = []
dest_B_cons_I = []
dest_A_cons_p = []
dest_B_cons_p = []
for i in range(1000):
phi = np.random.uniform(0,np.pi*2, 50).reshape((-1,1))
num = np.arange(0,50).reshape((-1,1))
OX = np.random.randint(450,500, 50).reshape((-1,1))
OY = np.random.randint(450,500, 50).reshape((-1,1))
DX = np.cos(phi)*(np.random.randint(450,500, 50)).reshape((-1,1))
DY = np.sin(phi)*np.random.randint(450,500, 50).reshape((-1,1))
vecs = np.hstack([num, OX, OY, DX, DY])
dests = vecs[:, 3:5]
wd = DistanceBand(dests, threshold=9999, alpha=-1.5, binary=False)
vmd = VecMoran(vecs, wd, focus='destination', rand='A', permutations=999)
dest_A_cons_I.append(vmd.I)
dest_A_cons_p.append(vmd.p_z_sim)
vmd = VecMoran(vecs, wd, focus='destination', rand='B', permutations=999)
dest_B_cons_I.append(vmd.I)
dest_B_cons_p.append(vmd.p_z_sim)
X,Y,U,V = zip(*vecs[:,1:])
plt.subplot(111)
for x in range(0,len(vecs[:,1])):
plt.arrow(X[x], #x1
Y[x], # y1
U[x]-X[x], # x2 - x1
V[x]-Y[x], # y2 - y1
fc="k", ec="k", head_width=0.05, head_length=0.1)
plt.xlim([-510,550])
plt.ylim([-510,550])
plt.title('Example of 50 random vectors with constrained origins')
plt.show()
#Method A random
plt.hist(dest_A_rand_I, bins = 25)
plt.title('Distribution of VMD I values from random vectors - Method A')
plt.show()
plt.hist(dest_A_rand_p, bins = 25)
plt.title('Distribution of p values from random vectors - Method A')
plt.show()
#Method A constricted
plt.hist(dest_A_cons_I, bins=25)
plt.title('Distribution of VMD I values from constrained vectors - Method A')
plt.show()
plt.hist(dest_A_cons_p, bins=25)
plt.title('Distribution of p values from constrained vectors - Method A')
plt.show()
#Method B random
plt.hist(dest_B_rand_I, bins=25)
plt.title('Distribution of VMD I values from random vectors - Method B')
plt.show()
plt.hist(dest_B_rand_p, bins=25)
plt.title('Distribution of p values from random vectors - Method B')
plt.show()
#Method B constricted
plt.hist(dest_B_cons_I, bins=25)
plt.title('Distribution of VMD I values from constrained vectors - Method B')
plt.show()
plt.hist(dest_B_cons_p, bins=25)
plt.title('Distribution of p values from constrained vectors - Method B')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: With the data sample defined in Ingresos_2, the idea is to process and explore the data. In simple terms this means knowing the variable types, which was already done with the info() command. Beyond that, we can carry out a simple exploratory analysis, both graphically and through data structures. To examine possible relationships between the amounts and the years (or some other variable), groupings are used, which give a kind of summary across the variables.
Step2: The same idea can be applied to the months to see how the amounts behave. This helps answer the question: *which month has the highest mean amount?
Step3: What we see in the previous group is that the mean shows values for both MONTO and CICLO. The issue is that CICLO is treated as a numeric variable when the data are loaded, so grouping by month returns the mean of every numeric variable. To show only the values for the amounts, we do the following
Step4: To pick the 5 records with the highest amounts, we make the following selection on Grupo_3
Step5: The previous examples show how to answer simple questions involving a categorical variable and its relationship to a numeric one. More sophisticated groupings can be made to look at the relationship among more than 2 variables; for example Ciclo, Mes and Sector.
|
<ASSISTANT_TASK:>
Python Code:
# Set up the working environment
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
#matplotlib.style.use('ggplot') can be run to use ggplot2-style (R) plots
plt.rcParams['figure.figsize']=(20,7)
# -*- coding: utf-8 -*-
# Load the data from a directory
Ingresos=pd.read_csv('/Datos/Ingreso.csv')
Ingresos.head()
# Number of rows and columns.
Ingresos.shape
# General description of the variables
Ingresos.info()
# With dtypes it is easy to see the data type of each variable
Ingresos.dtypes
# Select the columns I consider most important. In this case Ciclo, Mes, Nombre, Tema, Sector
# and Monto are, in my view, the most relevant.
# To select a single column you would use Ingresos['column name']
Ingresos_2=Ingresos[['CICLO','MES','NOMBRE','TEMA','SECTOR','MONTO']]
Ingresos_2.head()
# To answer "which year has the highest recorded amount?" we use the following grouping.
Grupo_1=Ingresos_2.groupby('CICLO')
# Number of records for each year
Grupo_1.size()
Grupo_1.sum()
Grupo_2=Ingresos_2.groupby('MES')
# Number of records in each month
Grupo_2.size()
Grupo_2.mean()
Grupo_2.mean()['MONTO']
# To find the month with the highest mean amount, just take the maximum of the means after
# sorting them. In this case it corresponds to December.
Grupo_2['MONTO'].mean().sort_values()[-1:]
# Group the data by year and month to see how the amounts behave
Grupo_3=Ingresos_2.groupby(['CICLO','MES'])
# Look at the sum of the amounts
Grupo_3.sum()
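# Sorting by MONTO and slicing the last five rows gives the 5 (CICLO, MES) combinations
# with the largest total amounts.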
Grupo_3.sum().sort_values(by='MONTO')[-5:]
# We can also look at behaviour by CICLO and SECTOR, picking the 15 records with the
# highest total MONTO
Ingresos_2.groupby(['CICLO','SECTOR']).sum().sort_values(by='MONTO')[-15:]
# Plot the previous result.
Ingresos_2.groupby(['CICLO','SECTOR']).sum().sort_values(by='MONTO')[-15:].plot(kind='bar')
Grupo_1['MONTO'].agg([np.size,sum,max,min])
# Behaviour of the maximums and minimums
Grupo_1['MONTO'].agg([max,min]).plot(kind='bar')
# Explore the monthly grouping with a bar chart.
Grupo_2['MONTO'].agg([max,min,np.mean])
# The original data also let us explore graphically how the amounts relate to the months.
sns.barplot(data=Ingresos_2,x="MES",y="MONTO",palette="PRGn")
# With the data grouped by year-month we can also explore the max, min and mean of the amounts
Grupo_3['MONTO'].agg([max,min,np.mean]).plot(title='Comportamiento de la suma de los Montos')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implementing Custom Aggregations
Step2: Design summary
Step3: Instead of summing value, the example task is to sum value * 2.0 and then divide the sum by 2.0. The aggregation result is thus mathematically equivalent to directly summing the value, and could be thought of as consisting of three parts
Step4: Whether everything works as expected can be verified with the following code
Step5: Statefulness and measurements
Step6: Note that the state that comes into next_fn as input is placed at server. In order to use it at clients, it first needs to be communicated, which is achieved using the tff.federated_broadcast operator.
Step7: Structured types
Step8: This example highlights a pattern which may be useful to follow when structuring TFF code. When not dealing with very simple operations, the code becomes more legible when the tff.tf_computations that will be used as building blocks inside a tff.federated_computation are created in a separate place. Inside of the tff.federated_computation, these building blocks are only connected using the intrinsic operators.
Step9: Inner aggregations
Step10: When delegating to the inner_process.next function, the return structure we get is a tff.templates.MeasuredProcessOutput, with the same three fields - state, result and measurements. When creating the overall return structure of the composed aggregation process, the state and measurements fields should be generally composed and returned together. In contrast, the result field corresponds to the value being aggregated and instead "flows through" the composed aggregation.
Step11: ... and with a different inner aggregation. For example, an ExampleTaskFactory
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow-federated
!pip install --quiet --upgrade nest-asyncio
import nest_asyncio
nest_asyncio.apply()
import collections
import tensorflow as tf
import tensorflow_federated as tff
class ExampleTaskFactory(tff.aggregators.UnweightedAggregationFactory):
def create(self, value_type):
@tff.federated_computation()
def initialize_fn():
return tff.federated_value((), tff.SERVER)
@tff.federated_computation(initialize_fn.type_signature.result,
tff.type_at_clients(value_type))
def next_fn(state, value):
scaled_value = tff.federated_map(
tff.tf_computation(lambda x: x * 2.0), value)
summed_value = tff.federated_sum(scaled_value)
unscaled_value = tff.federated_map(
tff.tf_computation(lambda x: x / 2.0), summed_value)
measurements = tff.federated_value((), tff.SERVER)
return tff.templates.MeasuredProcessOutput(
state=state, result=unscaled_value, measurements=measurements)
return tff.templates.AggregationProcess(initialize_fn, next_fn)
client_data = [1.0, 2.0, 5.0]
factory = ExampleTaskFactory()
aggregation_process = factory.create(tff.TensorType(tf.float32))
print(f'Type signatures of the created aggregation process:\n'
f' - initialize: {aggregation_process.initialize.type_signature}\n'
f' - next: {aggregation_process.next.type_signature}\n')
state = aggregation_process.initialize()
output = aggregation_process.next(state, client_data)
print(f'Aggregation result: {output.result} (expected 8.0)')
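# sum(client_data) = 1.0 + 2.0 + 5.0 = 8.0; multiplying by 2 before the federated sum
# and dividing by 2 afterwards leaves the aggregate unchanged, as expected.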
class ExampleTaskFactory(tff.aggregators.UnweightedAggregationFactory):
def create(self, value_type):
@tff.federated_computation()
def initialize_fn():
return tff.federated_value(0.0, tff.SERVER)
@tff.federated_computation(initialize_fn.type_signature.result,
tff.type_at_clients(value_type))
def next_fn(state, value):
new_state = tff.federated_map(
tff.tf_computation(lambda x: x + 1.0), state)
state_at_clients = tff.federated_broadcast(new_state)
scaled_value = tff.federated_map(
tff.tf_computation(lambda x, y: x * y), (value, state_at_clients))
summed_value = tff.federated_sum(scaled_value)
unscaled_value = tff.federated_map(
tff.tf_computation(lambda x, y: x / y), (summed_value, new_state))
return tff.templates.MeasuredProcessOutput(
state=new_state, result=unscaled_value, measurements=summed_value)
return tff.templates.AggregationProcess(initialize_fn, next_fn)
client_data = [1.0, 2.0, 5.0]
factory = ExampleTaskFactory()
aggregation_process = factory.create(tff.TensorType(tf.float32))
print(f'Type signatures of the created aggregation process:\n'
f' - initialize: {aggregation_process.initialize.type_signature}\n'
f' - next: {aggregation_process.next.type_signature}\n')
state = aggregation_process.initialize()
output = aggregation_process.next(state, client_data)
print('| Round #1')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| Aggregation measurements: {output.measurements} (expected 8.0 * 1)')
output = aggregation_process.next(output.state, client_data)
print('\n| Round #2')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| Aggregation measurements: {output.measurements} (expected 8.0 * 2)')
output = aggregation_process.next(output.state, client_data)
print('\n| Round #3')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| Aggregation measurements: {output.measurements} (expected 8.0 * 3)')
@tff.tf_computation()
def scale(value, factor):
return tf.nest.map_structure(lambda x: x * factor, value)
@tff.tf_computation()
def unscale(value, factor):
return tf.nest.map_structure(lambda x: x / factor, value)
@tff.tf_computation()
def add_one(value):
return value + 1.0
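# The three tf_computations above (scale, unscale, add_one) are the reusable building blocks;
# the federated_computations below only wire them together with intrinsic federated operators.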
class ExampleTaskFactory(tff.aggregators.UnweightedAggregationFactory):
def create(self, value_type):
@tff.federated_computation()
def initialize_fn():
return tff.federated_value(0.0, tff.SERVER)
@tff.federated_computation(initialize_fn.type_signature.result,
tff.type_at_clients(value_type))
def next_fn(state, value):
new_state = tff.federated_map(add_one, state)
state_at_clients = tff.federated_broadcast(new_state)
scaled_value = tff.federated_map(scale, (value, state_at_clients))
summed_value = tff.federated_sum(scaled_value)
unscaled_value = tff.federated_map(unscale, (summed_value, new_state))
return tff.templates.MeasuredProcessOutput(
state=new_state, result=unscaled_value, measurements=summed_value)
return tff.templates.AggregationProcess(initialize_fn, next_fn)
client_data = [[[1.0, 2.0], [3.0, 4.0, 5.0]],
[[1.0, 1.0], [3.0, 0.0, -5.0]]]
factory = ExampleTaskFactory()
aggregation_process = factory.create(
tff.to_type([(tf.float32, (2,)), (tf.float32, (3,))]))
print(f'Type signatures of the created aggregation process:\n'
f' - initialize: {aggregation_process.initialize.type_signature}\n'
f' - next: {aggregation_process.next.type_signature}\n')
state = aggregation_process.initialize()
output = aggregation_process.next(state, client_data)
print(f'Aggregation result: [{output.result[0]}, {output.result[1]}]\n'
f' Expected: [[2. 3.], [6. 4. 0.]]')
@tff.tf_computation()
def scale(value, factor):
return tf.nest.map_structure(lambda x: x * factor, value)
@tff.tf_computation()
def unscale(value, factor):
return tf.nest.map_structure(lambda x: x / factor, value)
@tff.tf_computation()
def add_one(value):
return value + 1.0
class ExampleTaskFactory(tff.aggregators.UnweightedAggregationFactory):
def __init__(self, inner_factory=None):
if inner_factory is None:
inner_factory = tff.aggregators.SumFactory()
self._inner_factory = inner_factory
def create(self, value_type):
inner_process = self._inner_factory.create(value_type)
@tff.federated_computation()
def initialize_fn():
my_state = tff.federated_value(0.0, tff.SERVER)
inner_state = inner_process.initialize()
return tff.federated_zip((my_state, inner_state))
@tff.federated_computation(initialize_fn.type_signature.result,
tff.type_at_clients(value_type))
def next_fn(state, value):
my_state, inner_state = state
my_new_state = tff.federated_map(add_one, my_state)
my_state_at_clients = tff.federated_broadcast(my_new_state)
scaled_value = tff.federated_map(scale, (value, my_state_at_clients))
# Delegation to an inner factory, returning values placed at SERVER.
inner_output = inner_process.next(inner_state, scaled_value)
unscaled_value = tff.federated_map(unscale, (inner_output.result, my_new_state))
new_state = tff.federated_zip((my_new_state, inner_output.state))
measurements = tff.federated_zip(
collections.OrderedDict(
scaled_value=inner_output.result,
example_task=inner_output.measurements))
return tff.templates.MeasuredProcessOutput(
state=new_state, result=unscaled_value, measurements=measurements)
return tff.templates.AggregationProcess(initialize_fn, next_fn)
client_data = [1.0, 2.0, 5.0]
factory = ExampleTaskFactory()
aggregation_process = factory.create(tff.TensorType(tf.float32))
state = aggregation_process.initialize()
output = aggregation_process.next(state, client_data)
print('| Round #1')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| measurements[\'scaled_value\']: {output.measurements["scaled_value"]}')
print(f'| measurements[\'example_task\']: {output.measurements["example_task"]}')
output = aggregation_process.next(output.state, client_data)
print('\n| Round #2')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| measurements[\'scaled_value\']: {output.measurements["scaled_value"]}')
print(f'| measurements[\'example_task\']: {output.measurements["example_task"]}')
client_data = [1.0, 2.0, 5.0]
# Note the inner delegation can be to any UnweightedAggregaionFactory.
# In this case, each factory creates process that multiplies by the iteration
# index (1, 2, 3, ...), thus their combination multiplies by (1, 4, 9, ...).
factory = ExampleTaskFactory(ExampleTaskFactory())
aggregation_process = factory.create(tff.TensorType(tf.float32))
state = aggregation_process.initialize()
output = aggregation_process.next(state, client_data)
print('| Round #1')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| measurements[\'scaled_value\']: {output.measurements["scaled_value"]}')
print(f'| measurements[\'example_task\']: {output.measurements["example_task"]}')
output = aggregation_process.next(output.state, client_data)
print('\n| Round #2')
print(f'| Aggregation result: {output.result} (expected 8.0)')
print(f'| measurements[\'scaled_value\']: {output.measurements["scaled_value"]}')
print(f'| measurements[\'example_task\']: {output.measurements["example_task"]}')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Line plot of sunspot data
Step2: Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts.
Step4: Make a line plot showing the sunspot count as a function of year.
Step5: Describe the choices you have made in building this visualization and how they make it effective.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os
assert os.path.isfile('yearssn.dat')
data = np.loadtxt('yearssn.dat')
#Creates two arrays, year is the first column of data and ssc is the second column of data
year = data[:,0]
ssc = data[:,1]
print (data)
assert len(year)==315
assert year.dtype==np.dtype(float)
assert len(ssc)==315
assert ssc.dtype==np.dtype(float)
#Worked with Natasha Proctor
#This sets ax as the current axis (This allows me to manipulate spines and ticks)
ax = plt.gca()
#Plots the data
s = plt.plot(year,ssc)
#All of these following lines are for formatting.
plt.xlim(1700.5,2014.5)
plt.ylim(0,190.2)
plt.title('Sunspots Per Year')
plt.xlabel('Time')
plt.ylabel('Sunspots')
"""
These lines utilize the fact I set ax to get current axis.
The spines are lines that compose the box around the graph.
So set_visible(False) makes whatever spine I chose to be invisible.
The tick calls get rid of the ticks on the spines that I have taken out.
set_aspect stretches the graph out so it can be closer to a max slope of 1.
"""
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
ax.set_aspect(0.25)
plt.tight_layout()
assert True # leave for grading
#Refer back to #1 for more explanation on the use of ax here
#plt.subplot works by taking parameters (rows of graphs, cols of graphs, which graph)
#So plt.subplot(2,2,1) below selects the first plot in a 2x2 grid of subplots
Century_1 = year[0:100]
Ssc_1 = ssc[0:100]
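# Century_1/Ssc_1 hold the first 100 entries of the record, i.e. roughly the first century
# of observations (the data start around 1700, per the x-limits used above).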
plt.subplot(2,2,1)
ax1 = plt.gca()
ax1.spines['right'].set_visible(False)
ax1.spines['top'].set_visible(False)
ax1.get_xaxis().tick_bottom()
ax1.axes.get_yaxis().tick_left()
# Hide every other tick label on both the x and y axes
for label in ax1.xaxis.get_ticklabels()[::2] + ax1.yaxis.get_ticklabels()[::2]:
label.set_visible(False)
plt.plot(Century_1,Ssc_1)
plt.tight_layout()
assert True # leave for grading
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Imports
Step2: Load ImageNet dataset
Step3: Define a set of evaluation metrics
Step4: Load pre-trained Epinet
Step5: From the checkpoint, we can recover an enn sampler, which is a function that takes a batch of images and one random key, and returns multiple sample logits per input image. To recover the enn sampler, we can use make_epinet_sampler_from_checkpoint (from enn/checkpoints/utils.py) which takes the checkpoint and also the number of sample logits we want per image (num_enn_samples).
Step6: Load pre-trained ResNet
Step7: From the checkpoint, we can recover an enn sampler, which is a function that takes a batch of images and one random key, and returns multiple sample logits per input image. To recover the enn sampler for ResNet-50, we can use make_enn_sampler_from_checkpoint (from enn/checkpoints/utils.py) which takes the checkpoint and also the number of sample logits we want per image (num_enn_samples). Here we set num_enn_samples=1, as having num_enn_samples > 1 just results in multiple similar sample logits per input image.
Step8: Compare Epinet and ResNet results
|
<ASSISTANT_TASK:>
Python Code:
# Copyright 2022 DeepMind Technologies Limited. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
!git clone https://github.com/deepmind/enn.git
!pip install -q enn/
#@title General imports
import warnings
warnings.filterwarnings('ignore')
#@title Development imports
from typing import Callable, NamedTuple
import numpy as np
import pandas as pd
import plotnine as gg
from acme.utils.loggers.terminal import TerminalLogger
import dataclasses
import chex
import haiku as hk
import jax
import jax.numpy as jnp
import optax
import dill
#@title ENN imports
import enn
from enn import datasets
from enn.checkpoints import base as checkpoint_base
from enn.networks.epinet import base as epinet_base
from enn.checkpoints import utils
from enn.checkpoints import imagenet
from enn.checkpoints import catalog
from enn import metrics as enn_metrics
!wget https://storage.googleapis.com/dm-enn/processed_batch.npzs --no-check-certificate
with open('processed_batch.npzs', 'rb') as file:
batch = dill.load(file)
images, labels = batch['images'], batch['labels']
# Define a dict of metrics including `accuracy`, `marginal nll`, and `joint nll`.
evaluation_metrics = {
'accuracy': enn_metrics.make_accuracy_calculator(),
'marginal nll': enn_metrics.make_nll_marginal_calculator(),
'joint nll': enn_metrics.make_nll_polyadic_calculator(tau=10, kappa=2),
}
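# Note: tau and kappa parameterize the polyadic (joint) negative log-likelihood evaluation,
# i.e. how predictions are grouped and paired; see the enn.metrics source for the exact definition.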
# Get the Epinet checkpoint
epinet_resnet50_imagenet_ckpt = catalog.ImagenetModels.RESNET_50_FINAL_EPINET.value
epinet_resnet50_imagenet_ckpt
# Set the number of sample logits per input image
num_enn_samples = 100
# Recover the enn sampler
epinet_enn_sampler = utils.make_epinet_sampler_from_checkpoint(
epinet_resnet50_imagenet_ckpt,
num_enn_samples=num_enn_samples,)
# Get the epinet logits
key = jax.random.PRNGKey(seed=0)
epinet_logits = epinet_enn_sampler(images, key)
# epinet logits has shape [num_enn_sample, eval_batch_size, num_classes]
epinet_logits.shape
# Labels loaded from our dataset has shape [eval_batch_size,]. Our evaluation
# metrics requires labels to have shape [eval_batch_size, 1].
eval_labels = labels[:, None]
# Evaluate
epinet_results = {key: float(metric(epinet_logits, eval_labels))
for key, metric in evaluation_metrics.items()}
epinet_results
# Get the ResNet-50 checkpoint
resnet50_imagenet_ckpt = catalog.ImagenetModels.RESNET_50.value
resnet50_imagenet_ckpt
# Set the number of sample logits per input image to 1
num_enn_samples = 1
# Recover the enn sampler
resnet50_enn_sampler = utils.make_enn_sampler_from_checkpoint(
resnet50_imagenet_ckpt,
num_enn_samples=num_enn_samples,)
# Get the epinet logits
key = jax.random.PRNGKey(seed=0)
resnet50_logits = resnet50_enn_sampler(images, key)
# ResNet logits has shape [num_enn_sample, eval_batch_size, num_classes]
resnet50_logits.shape
# Labels loaded from our dataset have shape [eval_batch_size,]. Our evaluation
# metrics require labels to have shape [eval_batch_size, 1].
eval_labels = labels[:, None]
# Evaluate
resnet50_results = {key: float(metric(resnet50_logits, eval_labels))
for key, metric in evaluation_metrics.items()}
resnet50_results
# Make a dataframe of the results
resnet50_results['model'] = 'resnet'
epinet_results['model'] = 'epinet'
df = pd.DataFrame([resnet50_results, epinet_results])
df
# Compare the results
plt_df = pd.melt(df, id_vars=['model'], value_vars=evaluation_metrics.keys())
p = (gg.ggplot(plt_df)
+ gg.aes(x='model', y='value', fill='model')
+ gg.geom_col()
+ gg.facet_wrap('variable', scales='free',)
+ gg.theme(figure_size=(14, 4), panel_spacing=0.7)
)
p
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step 1
Step1: Step 2
Step2: Step 3
Step3: Observations
Step4: Spaces
Step5: Discrete
Step6: Box
Step7: You can sample from a space, and you can also check whether a given value is contained in the space.
Step8: Environments
Step9: It is also possible to create your own environment
|
<ASSISTANT_TASK:>
Python Code:
# Load the gym open-source library
import gym
# Create an environment
env = gym.make('CartPole-v0') # 'CartPole-v0' is the environment ID
#env = gym.make('MountainCar-v0') # 'MountainCar-v0' is a different environment
#env = gym.make('MsPacman-v0') # 'MsPacman-v0' is a different environment
env.seed(42)
# Initialize the environment (this returns the first observation)
env.reset()
# Render
env.render()
# Select an action (manually)
action = 0 # 0: Left, 1: Right
# Execute the selected action in the environment
# Wrap it in print to see the returned values
env.step(action)
# Render
env.render()
# Close the window
env.render(close=True)
import time
# Load the gym open-source library
import gym
# Create an environment
env = gym.make('CartPole-v0') # 'CartPole-v0' is the environment ID
#env = gym.make('MountainCar-v0') # 'MountainCar-v0' is a different environment
#env = gym.make('MsPacman-v0') # 'MsPacman-v0' is a different environment
# Pick a random action
env.action_space.sample()
# Action space (the space of actions the agent can choose from)
env.action_space
# Initialize the environment (this returns the first observation)
env.reset()
for _ in range(100):
    time.sleep(0.1) # sleep 0.1 s to slow down the rendering
    env.render() # render
    action = env.action_space.sample() # pick a random action
    print(action), # print the selected action
    print(env.step(action)) # execute the selected action
env.render(close=True) # close the window
#!python keyboard_agent.py CartPole-v1
#!python keyboard_agent.py LunarLander-v2
#!python keyboard_agent.py MountainCar-v0
#!python keyboard_agent.py SpaceInvaders-v0
#!python keyboard_agent.py Breakout-v0
#!python keyboard_agent.py Acrobot-v1
import numpy as np
np.set_printoptions(suppress=True) # do not use scientific notation (e.g. 1.0e-05)
all_obs = []
import gym
env = gym.make('CartPole-v0')
for i_episode in range(5): # run 5 episodes
    observation = env.reset() # initialize the environment and get the first observation
    all_obs.append(observation) # record the observation
    for t in range(100): # at most 100 steps per episode
        env.render()
        print(observation)
        action = env.action_space.sample() # random policy
        observation, reward, done, info = env.step(action) # execute the selected action
        all_obs.append(observation) # record the observation
        if done:
            print("Episode finished after {} timesteps\n".format(t+1))
            break
env.render(close=True)
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(np.array(all_obs))
ax.legend(['x', 'x_dot', 'theta', 'theta_dot'])
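# For CartPole the observation vector is [cart position, cart velocity, pole angle,
# pole angular velocity], which is what the legend labels above refer to.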
import gym
env = gym.make('CartPole-v0') # 'CartPole-v0' is the environment ID
#env = gym.make('MountainCar-v0') # 'MountainCar-v0' is a different environment
#env = gym.make('MsPacman-v0') # 'MsPacman-v0' is a different environment
print(env.action_space)
print(env.observation_space)
env.action_space.n
env.observation_space.high
env.observation_space.low
from gym import spaces
space = spaces.Discrete(8) # {0, 1, 2, ..., 7}
# Sampling
x = space.sample()
x
assert space.contains(x)
assert space.n == 8
from gym import envs
# List the available environments
envs.registry.all()
import gym
from gym import wrappers # import the wrappers
env = gym.make('CartPole-v0')
env = wrappers.Monitor(env, './cartpole-v0-experiment-1', force=True) # wrap env with Monitor; force=True removes previous results
for i_episode in range(10):
observation = env.reset()
for t in range(100):
env.render()
print(observation)
action = env.action_space.sample()
observation, reward, done, info = env.step(action)
if done:
print("Episode finished after {} timesteps".format(t+1))
break
env.render(close=True)
#!open .
# How to upload the results to the OpenAI Gym server.
import gym
#gym.upload('/tmp/cartpole-v0-experiment-1', api_key='YOUR_API_KEY')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step32: Create Convolutional Model
Step35: Train the Neural Network
Step37: Show Stats
Step38: Hyperparameters
Step40: Train on a Single CIFAR-10 Batch
Step42: Fully Train the Model
Step45: Checkpoint
|
<ASSISTANT_TASK:>
Python Code:
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data. The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
# TODO: Implement Function
x_prime = map(lambda x1: 0.1 + ((x1*(0.9-0.1))/(255)), x)
return np.array(list(x_prime))
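# Note: this maps the 0-255 pixel values into [0.1, 0.9]; any affine rescaling into the
# 0-1 range preserves relative intensities and works for training.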
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_normalize(normalize)
def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
# TODO: Implement Function
# one_hot_encoded_labels = np.zeros((len(x), max(x)+1))
# one_hot_encoded_labels[np.arange(len(x)),x] = 1
# return one_hot_encoded_labels
return np.eye(10)[x]
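# np.eye(10) is the 10x10 identity matrix; indexing it with the label array picks the
# matching one-hot row for every label at once, e.g. label 3 -> [0, 0, 0, 1, 0, ...].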
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_one_hot_encode(one_hot_encode)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
# TODO: Implement Function
return tf.placeholder(dtype=tf.float32, shape=[None, image_shape[0], image_shape[1], image_shape[2]], name="x")
def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
# TODO: Implement Function
return tf.placeholder(dtype=tf.float32, shape=[None, n_classes], name="y")
def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
# TODO: Implement Function
return tf.placeholder(dtype=tf.float32, name="keep_prob")
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
# TODO: Implement Function
# print(x_tensor.shape)
# print(conv_ksize)
# print(conv_num_outputs)
color_channels = x_tensor.shape[3].value
weights = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], color_channels, conv_num_outputs], mean=0, stddev=0.1))
biases = tf.Variable(tf.zeros(conv_num_outputs))
layer = tf.nn.conv2d(x_tensor, weights, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME')
layer = tf.add(layer, biases)
layer = tf.nn.relu(layer)
layer = tf.nn.max_pool(layer, ksize=[1, pool_ksize[0], pool_ksize[1], 1],
strides=[1, pool_strides[0], pool_strides[1], 1], padding='SAME')
return layer
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_con_pool(conv2d_maxpool)
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
# TODO: Implement Function
# print(x_tensor)
return tf.contrib.layers.flatten(x_tensor)
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_flatten(flatten)
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
# TODO: Implement Function
#Step 1: create the weights and bias
size = x_tensor.shape[1].value
weights = tf.Variable(tf.truncated_normal([size, num_outputs], mean=0, stddev=0.1))
bias = tf.Variable(tf.zeros(num_outputs))
#Step 2: apply matmul
layer = tf.matmul(x_tensor, weights)
#Step 3: add bias
layer = tf.nn.bias_add(layer, bias)
#Step 4: apply relu
layer = tf.nn.relu(layer)
return layer
# return tf.layers.dense(flatten(x_tensor), num_outputs, activation=tf.nn.relu)
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
# TODO: Implement Function
#Step 1: create the weights and bias
size = x_tensor.shape[1].value
weights = tf.Variable(tf.truncated_normal([size, num_outputs], mean=0, stddev=0.1))
bias = tf.Variable(tf.zeros(num_outputs))
#Step 2: apply matmul
layer = tf.matmul(x_tensor, weights)
#Step 3: add bias
layer = tf.nn.bias_add(layer, bias)
return layer
# return tf.layers.dense(flatten(x_tensor), num_outputs)
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_output(output)
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that holds dropout keep probability.
    : return: Tensor that represents logits
    """
conv_ksize = [5, 5]
conv_strides = [1, 1]
pool_ksize = [2, 2]
pool_strides = [1, 1]
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
layer_1 = conv2d_maxpool(x, 16, conv_ksize, conv_strides, pool_ksize, pool_strides)
layer_2 = conv2d_maxpool(layer_1, 32, conv_ksize, conv_strides, pool_ksize, pool_strides)
layer_3 = conv2d_maxpool(layer_2, 64, conv_ksize, conv_strides, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
flat_layer = flatten(layer_3)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
fully_connected = fully_conn(flat_layer, 64)
fully_connected = tf.nn.dropout(fully_connected, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
output_layer = output(fully_connected, 10)
# TODO: return output
return output_layer
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
# TODO: Implement Function
session.run(optimizer, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: keep_probability})
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
# TODO: Implement Function
loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
validation_accuracy = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(loss, validation_accuracy))
# TODO: Tune Parameters
epochs = 20
batch_size = 64
keep_probability = 0.8
"""DON'T MODIFY ANYTHING IN THIS CELL"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
    """
    Test the saved model against the test dataset
    """
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import data as a list of lines
Step2: Import data as a data frame
Step3: Goal 2
Step4: Goal 3
|
<ASSISTANT_TASK:>
Python Code:
MovieTextFile = open("tmdb_5000_movies.csv")
# for line in MovieTextFile:
# print(line) # not quite right
# type(MovieTextFile)
import csv
with open("tmdb_5000_movies.csv",encoding="utf8") as f:
reader = csv.reader(f)
MovieList = list(reader)
MovieList[:5]
import pandas as pd
movies = pd.read_csv("tmdb_5000_movies.csv")
movieFrame = pd.DataFrame(movies)
movieFrame[:5]
#pull out genres array of JSON strings from data frame
genres = movieFrame['genres']
# genresFrame = pd.DataFrame(genres)
genres[:5]
# Pull out list of names for each row of the data frame
# Start with testing first row and iterating through JSON string
import json
genreList = []
genre = json.loads(genres[0])
for i,val in enumerate(genre):
genreList.append(genre[i]['name'])
genreList
# Iterate through indices of genre array to create a list of lists of genre names
import json
genresAll = []
for k,x in enumerate(genres):
genreList = []
genre = json.loads(genres[k])
for i,val in enumerate(genre):
genreList.append(genre[i]['name'])
genresAll.append(genreList)
genresAll[:10]
genreSeries = {}
genreSeries['W'] = pd.Series(genresAll, index=movieFrame.index)
genreFrame = pd.DataFrame({'GenreList': genreSeries['W']})
genreFrame[:5]
genreDummies = genreFrame.GenreList.astype(str).str.strip('[]').str.get_dummies(', ')
genreDummies[:10]
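# get_dummies(', ') splits the stringified genre list on ', ' and creates one 0/1 indicator
# column per genre; the resulting column names keep the quote characters from the list's
# string representation (e.g. "'Action'").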
# append lists as a column at end of dataframe
movieGenreFrame = pd.merge(movieFrame,genreFrame,how='inner',left_index=True, right_index=True)
movieGenreFrame[:5]
wideMovieFrame = pd.merge(movieGenreFrame,genreDummies,how='inner',left_index=True,right_index=True)
wideMovieFrame[:5]
longMovieFrame = pd.melt(wideMovieFrame, id_vars=movieGenreFrame.columns, value_vars=genreDummies.columns,
var_name='Genre',value_name="genre_present")
longMovieFrame[:10]
# test results with 'Avatar' example
longMovieFrame[longMovieFrame['title']=='Avatar']
# If only retaining "true" genres
longMovieFrameTrimmed = longMovieFrame[longMovieFrame['genre_present']==1]
longMovieFrameTrimmed[:5]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import libraries and define constants
Step2: Tutorial
Step3: Create the study configuration
Step4: Create the study
Step7: Metric evaluation functions
Step8: Set configuration parameters for running trials
Step9: Run Vertex Vizier trials
Step10: List the optimal solutions
Step11: Cleaning up
|
<ASSISTANT_TASK:>
Python Code:
# Setup your dependencies
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
# Upgrade the specified package to the newest available version
! pip install {USER_FLAG} --upgrade google-cloud-aiplatform
import os
if not os.getenv("IS_TESTING"):
# Restart the kernel after pip installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
# Import necessary libraries
import datetime
import json
from google.cloud import aiplatform_v1beta1
# Fill in your project ID and region
REGION = "us-central1" # @param {type:"string"}
PROJECT_ID = "qwiklabs-gcp-00-866bdf7714fe" # @param {type:"string"}
# These will be automatically filled in.
STUDY_DISPLAY_NAME = "{}_study_{}".format(
PROJECT_ID.replace("-", ""), datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
) # @param {type: 'string'}
ENDPOINT = REGION + "-aiplatform.googleapis.com"
PARENT = "projects/{}/locations/{}".format(PROJECT_ID, REGION)
print("ENDPOINT: {}".format(ENDPOINT))
print("REGION: {}".format(REGION))
print("PARENT: {}".format(PARENT))
# If you don't know your project ID, you might be able to get your project ID
# using gcloud command by executing the second cell below.
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "qwiklabs-gcp-00-866bdf7714fe":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
# Parameter Configuration
param_r = {"parameter_id": "r", "double_value_spec": {"min_value": 0, "max_value": 1}}
param_theta = {
"parameter_id": "theta",
"double_value_spec": {"min_value": 0, "max_value": 1.57},
}
# TODO
# Objective Metrics
metric_y1 = {"metric_id": "y1", "goal": "MINIMIZE"}
# TODO
# Objective Metrics
metric_y2 = {"metric_id": "y2", "goal": "MAXIMIZE"}
# Put it all together in a study configuration
study = {
"display_name": STUDY_DISPLAY_NAME,
"study_spec": {
"algorithm": "RANDOM_SEARCH",
"parameters": [
param_r,
param_theta,
],
"metrics": [metric_y1, metric_y2],
},
}
print(json.dumps(study, indent=2, sort_keys=True))
# TODO
# Create the study using study configuration and send request through VizierServiceClient
vizier_client = aiplatform_v1beta1.VizierServiceClient(
client_options=dict(api_endpoint=ENDPOINT)
)
study = vizier_client.create_study(parent=PARENT, study=study)
STUDY_ID = study.name
print("STUDY_ID: {}".format(STUDY_ID))
import math
# r * sin(theta)
def Metric1Evaluation(r, theta):
    """Evaluate the first metric on the trial."""
return r * math.sin(theta)
# r * cos(theta)
def Metric2Evaluation(r, theta):
    """Evaluate the second metric on the trial."""
return r * math.cos(theta)
def CreateMetrics(trial_id, r, theta):
print(("=========== Start Trial: [{}] =============").format(trial_id))
# TODO
# Evaluate both objective metrics for this trial
y1 = Metric1Evaluation(r, theta)
y2 = Metric2Evaluation(r, theta)
print(
"[r = {}, theta = {}] => y1 = r*sin(theta) = {}, y2 = r*cos(theta) = {}".format(
r, theta, y1, y2
)
)
metric1 = {"metric_id": "y1", "value": y1}
metric2 = {"metric_id": "y2", "value": y2}
# Return the results for this trial
return [metric1, metric2]
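# Hedged aside (not part of the official tutorial): a quick local sanity check of the metric
# functions before any trials are sent to Vizier; the trial id "local-test" is made up here.
# For r = 1 and theta = pi/4, both y1 and y2 should come out near 0.707.
print(CreateMetrics("local-test", 1.0, math.pi / 4))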
client_id = "client1" # @param {type: 'string'}
suggestion_count_per_request = 5 # @param {type: 'integer'}
max_trial_id_to_stop = 4 # @param {type: 'integer'}
print("client_id: {}".format(client_id))
print("suggestion_count_per_request: {}".format(suggestion_count_per_request))
print("max_trial_id_to_stop: {}".format(max_trial_id_to_stop))
trial_id = 0
while int(trial_id) < max_trial_id_to_stop:
suggest_response = vizier_client.suggest_trials(
{
"parent": STUDY_ID,
"suggestion_count": suggestion_count_per_request,
"client_id": client_id,
}
)
for suggested_trial in suggest_response.result().trials:
trial_id = suggested_trial.name.split("/")[-1]
trial = vizier_client.get_trial({"name": suggested_trial.name})
if trial.state in ["COMPLETED", "INFEASIBLE"]:
continue
for param in trial.parameters:
if param.parameter_id == "r":
r = param.value
elif param.parameter_id == "theta":
theta = param.value
print("Trial : r is {}, theta is {}.".format(r, theta))
# TODO
# Store your measurement and send the request
vizier_client.add_trial_measurement(
{
"trial_name": suggested_trial.name,
"measurement": {
# TODO
"metrics": CreateMetrics(suggested_trial.name, r, theta)
},
}
)
response = vizier_client.complete_trial(
{"name": suggested_trial.name, "trial_infeasible": False}
)
# TODO
# List all the pareto-optimal trails
optimal_trials = vizier_client.list_optimal_trials({"parent": STUDY_ID})
print("optimal_trials: {}".format(optimal_trials))
vizier_client.delete_study({"name": STUDY_ID})
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: And let's make our system a little more interesting so that we can discriminate between the various t0s
Step3: t0 Parameters
Step4: 't0_perpass' defines the time at which both components in our orbit are at periastron passage. By default this parameter is constrained by 't0_supconj'. For more details or information on how to change which parameter is editable, see the Constraints Tutorial.
Step5: The 't0_ref' defines the time at which the primary component in our orbit passes an arbitrary reference point. This 't0_ref' is defined in the same way as PHOEBE legacy's 'HJD0' parameter, so is included for convenience translating between the two.
Step6: In addition, there is a single 't0' parameter that is system-wide. This parameter simply defines the time at which all parameters are defined and therefore at which all computations start. The value of this parameter begins to play an important role if any parameter is given a time-derivative (see apsidal motion for an example) or when using N-body instead of Keplerian dynamics (coming in a future release).
Step7: Influence on Orbits (positions)
Step8: To visualize where these times are with respect to the orbits, we can plot the model orbit and highlight the positions of each star at the times defined by these parameters. Note here that the observer is in the positive w-direction.
Step9: Influence on Phasing
Step10: Similarly, if plotting phases on any axis, passing the 't0' keyword will set the zero-phase accordingly. To see this, let's compute a light curve and phase it with the various t0s shown in the orbits above.
|
<ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.1,<2.2"
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.set_value('sma@binary', 20)
b.set_value('q', 0.8)
b.set_value('ecc', 0.8)
b.set_value('per0', 45)
b.get_parameter('t0_supconj', context='component')
b.get_parameter('t0_perpass', context='component')
b.get_parameter('t0_perpass', context='constraint')
b.get_parameter('t0_ref', context='component')
b.get_parameter('t0_ref', context='constraint')
b.get_parameter('t0', context='system')
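# Hedged illustrative aside (not part of the original tutorial): print the three orbit-level
# t0 values side by side so the offsets introduced by the eccentric, rotated orbit are visible.
for twig in ['t0_supconj', 't0_perpass', 't0_ref']:
    print(twig, b.get_value(twig, context='component'))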
b.add_dataset('orb', times=np.linspace(-1,1,1001))
b.run_compute(ltte=False)
afig, mplfig = b.plot(x='us', y='ws', z=0, time='t0_supconj', show=True)
afig, mplfig = b.plot(x='us', y='ws', z=0, time='t0_perpass', show=True)
afig, mplfig = b.plot(x='us', y='ws', z=0, time='t0_ref', show=True)
b.to_phase(0.0)
b.to_phase(0.0, component='binary', t0='t0_supconj')
b.to_phase(0.0, component='binary', t0='t0_perpass')
b.to_phase(0.0, component='binary', t0='t0_ref')
b.add_dataset('lc', times=np.linspace(0,1,51), ld_func='linear', ld_coeffs=[0.0])
b.run_compute(ltte=False, irrad_method='none', atm='blackbody')
afig, mplfig = b['lc01@model'].plot(x='phases', t0='t0_supconj', xlim=(-0.3,0.3), show=True)
afig, mplfig = b['lc01@model'].plot(x='phases', t0='t0_perpass', xlim=(-0.3,0.3), show=True)
afig, mplfig = b['lc01@model'].plot(x='phases', t0='t0_ref', xlim=(-0.3,0.3), show=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Build Sentiment Scores and Categories
Step2: Bag of Words
Step3: Bernoulli
Step4: Logistic Model
Step5: TFIDF
Step6: Logistic Model
Step7: Bernoulli Model
Step8: KNN model
Step9: Random Forest
Step10: Decision Tree
Step11: SVC
Step12: Gradient Boosting
|
<ASSISTANT_TASK:>
Python Code:
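# Assumed imports (hedged): the original imports cell is not shown in this extract; the cells
# below appear to rely on at least the following. Exact package versions are an assumption --
# the notebook targets an older scikit-learn API (e.g. grid_scores_).
import json
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.utils import resample
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer, ENGLISH_STOP_WORDS
from sklearn.naive_bayes import BernoulliNB
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, KFold, cross_val_score, GridSearchCV
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import SVC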
#Import data from json file and create a list
data = []
with open('/home/borjaregueral/Digital_Music_5.json') as f:
for line in f:
data.append(json.loads(line))
#Create a dataframe with the columns that are interesting for this exercise
#Columns left out: 'helpful', 'reviewTime', 'reviewerID','reviewerName'
names = ["overall", "reviewText"]
amazonraw = pd.DataFrame(data, columns=names)
amazonraw['overall'] = amazonraw['overall'].astype(int)
amazonraw.head()
#Analyse the dataset: types, length of the dataframe and NaN
amazonraw.info()
amazonraw.dtypes
amazonraw.overall.describe()
#Change the Overall variable into a categorical variable
#Ratings equal or lower than 3 have been considered negative as the mean is 4.25.
#The hypothesis is that although the above-mentioned ratings could be considered positive, they are negative
amazonraw.loc[amazonraw['overall'] <= 3, 'Sentiment'] = 0
amazonraw.loc[amazonraw['overall'] >=4 , 'Sentiment'] = 1
amazonraw.loc[amazonraw['Sentiment'] == 0, 'Category'] ='Negative'
amazonraw.loc[amazonraw['Sentiment'] == 1, 'Category'] = 'Positive'
#Count each of the categories
a = amazonraw['Category'].value_counts('Positive')
b = pd.value_counts(amazonraw['Category'].values, sort=False)
print('Number of occurrences:\n', b)
print('\n')
print('Frequency of each value:\n', a)
#Downsample majority class (due to computational restrictions we downsample the majority instead of upsampling the minority)
# Separate majority and minority classes
amazon_majority = amazonraw[amazonraw.Sentiment == 1]
amazon_minority = amazonraw[amazonraw.Sentiment == 0]
# Downsample the majority class
amazon_majority_downsampled = resample(amazon_majority, replace=False, n_samples=12590, random_state=123)
# Combine minority class with downsampled majority class
amazon = pd.concat([amazon_majority_downsampled, amazon_minority])
# Display new class counts
amazon.Category.value_counts()
#Graphical representation of the positive and negative reviews
plt.figure(figsize=(20, 5))
plt.subplot(1, 2, 1)
sns.set(style="white")
ax = sns.countplot(x="overall", data=amazonraw)
plt.title('Amazon Ratings')
plt.subplot(1, 2, 2)
sns.set(style="white")
ax = sns.countplot(x="Category", data=amazon)
plt.title('Categories in the downsampled dataset')
#Create new dataframe that has the Categories, Overall scores, Sentiment and ReviewText
names = ['Category',"overall",'Sentiment', "reviewText"]
amazon1 = pd.DataFrame(amazon, columns=names)
amazon.head()
#Rows are reshuffled (frac=1 keeps the full downsampled dataset rather than 50% of it)
amazon2 = amazon1.sample(frac=1, random_state=7)
#Predictor and predicted variables are formed
X = amazon2['reviewText']
y = amazon2['Sentiment']
#Split the data set into train and test 70/30
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.3, random_state=135)
#KFold for cross validation analysis
kf = KFold(5)
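# Hedged aside (not in the original analysis): with the classes balanced by downsampling,
# the majority-class baseline accuracy is roughly 0.5, which is the number the model
# accuracies reported below should be compared against.
print(y_train.value_counts(normalize=True))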
#Analysis starts with Bag of Words and common English words are extracted
vect = CountVectorizer(analyzer = 'word', stop_words='english').fit(X_train)
X_trainvec = vect.transform(X_train)
X_testvec = vect.transform(X_test)
#Count the number of english words and take a look at the type of words that are extracted
print("Number of stop words is :", len(ENGLISH_STOP_WORDS), "\n")
print("Examples: ", list(ENGLISH_STOP_WORDS)[::10])
#Take a look at the features identified by bag of words
features_names = vect.get_feature_names()
print(len(features_names))
print("\n")
# print first 20 features
print(features_names[:20])
print("\n")
# print last 20 features
print(features_names[-20:])
#Size of the X_trainvector sparse matrix
print(X_trainvec.shape)
X_trainvec
#Check the size of the y_train vector to avoid problems when running the logistic regression model
y_train.shape
# Initialize and fit the model.
l3 = BernoulliNB()
l3.fit(X_trainvec, y_train)
# Predict on training set
predtrain_y = l3.predict(X_trainvec)
#Predicting on the test set
l3 = BernoulliNB()
l3.fit(X_testvec, y_test)
# Predict on training set
predtest_y = l3.predict(X_testvec)
#Evaluation of the model (testing)
target_names = ['0.0', '1.0']
print(classification_report(y_test, predtest_y, target_names=target_names))
confusion = confusion_matrix(y_test, predtest_y)
print(confusion)
# Accuracy tables.
table_test = pd.crosstab(y_test, predtest_y, margins=True)
test_tI_errors = table_test.loc[0.0,1.0] / table_test.loc['All','All']
test_tII_errors = table_test.loc[1.0,0.0] / table_test.loc['All','All']
print((
    'Bernoulli accuracy: {}\n'
'Percent Type I errors: {}\n'
'Percent Type II errors: {}\n\n'
).format(cross_val_score(l3,X_testvec,y_test,cv=kf).mean(),test_tI_errors, test_tII_errors))
# Initialize and fit the model.
lr = LogisticRegression()
lr.fit(X_trainvec, y_train)
#Once the model has been trained test it on the test dataset
lr.fit(X_testvec, y_test)
# Predict on test set
predtest_y = lr.predict(X_testvec)
#Evaluate model (test set)
target_names = ['0.0', '1.0']
print(classification_report(y_test, predtest_y, target_names=target_names))
confusion = confusion_matrix(y_test, predtest_y)
print(confusion)
# Accuracy tables.
table_test = pd.crosstab(y_test, predtest_y, margins=True)
test_tI_errors = table_test.loc[0.0,1.0] / table_test.loc['All','All']
test_tII_errors = table_test.loc[1.0,0.0] / table_test.loc['All','All']
print((
    'Logistic regression accuracy: {}\n'
'Percent Type I errors: {}\n'
'Percent Type II errors: {}\n\n'
).format(cross_val_score(lr,X_testvec,y_test,cv=kf).mean(),test_tI_errors, test_tII_errors))
vect2 = TfidfVectorizer(min_df=20, analyzer = 'word', stop_words = 'english',
ngram_range = (1,3)
).fit(X_train)
X_train_vectorized = vect2.transform(X_train)
X_test_vectorized = vect2.transform(X_test)
features_names = vect2.get_feature_names()
print(len(features_names))
# Initialize and fit the model.
lr2 = LogisticRegression(class_weight='balanced')
#Create range of values to fit parameters
k1 = ['l1', 'l2']
k2 = np.arange(50) + 1
k3 = ['balanced', None]
parameters = {'penalty': k1,
'C': k2,
'class_weight':k3}
#Fit parameters
lrr = GridSearchCV(lr2, param_grid=parameters, cv=kf)
#Fit on Training set
lrr.fit(X_train_vectorized, y_train)
#The best hyper parameters set
print("Best Hyper Parameters:", lrr.best_params_)
#Once the model has been trained test it on the test dataset
lr2.fit(X_test_vectorized, y_test)
# Predict on test set
predtest2_y = lrr.predict(X_test_vectorized)
#Evaluate model (test set)
target_names = ['0.0', '1.0']
print(classification_report(y_test, predtest2_y, target_names=target_names))
confusion = confusion_matrix(y_test, predtest2_y)
print(confusion)
# Accuracy tables.
table_test = pd.crosstab(y_test, predtest2_y, margins=True)
test_tI_errors = table_test.loc[0.0,1.0] / table_test.loc['All','All']
test_tII_errors = table_test.loc[1.0,0.0] / table_test.loc['All','All']
print((
    'Logistic model accuracy: {}\n'
'Percent Type I errors: {}\n'
'Percent Type II errors: {}\n\n'
).format(cross_val_score(lr2,X_test_vectorized,y_test,cv=kf).mean(),test_tI_errors, test_tII_errors))
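# Hedged aside (not part of the original analysis): with the grid-searched model refit on the
# training data, the logistic coefficients show which TF-IDF n-grams push a review towards the
# positive or negative class; feature names come from the vectorizer fitted above.
feature_names = np.array(vect2.get_feature_names())
coefs = lrr.best_estimator_.coef_[0]
print('Most negative n-grams:', feature_names[np.argsort(coefs)[:10]])
print('Most positive n-grams:', feature_names[np.argsort(coefs)[-10:]])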
# Initialize and fit the model.
l3 = BernoulliNB()
#Create range of values to fit parameters
k1 = np.arange(50) + 1
parameters = {'alpha': k1
}
#Fit parameters
l33 = GridSearchCV(l3, param_grid=parameters, cv=kf)
#Fit on Training set
l33.fit(X_train_vectorized, y_train)
#The best hyper parameters set
print("Best Hyper Parameters:", l33.best_params_)
# Predict on the test data set
l33.fit(X_test_vectorized, y_test)
# Predict on training set
predtest3_y = l33.predict(X_test_vectorized)
#Evaluation of the model (testing)
target_names = ['0.0', '1.0']
print(classification_report(y_test, predtest3_y, target_names=target_names))
confusion = confusion_matrix(y_test, predtest3_y)
print(confusion)
# Accuracy tables.
table_test = pd.crosstab(y_test, predtest3_y, margins=True)
test_tI_errors = table_test.loc[0.0,1.0] / table_test.loc['All','All']
test_tII_errors = table_test.loc[1.0,0.0] / table_test.loc['All','All']
print((
    'Bernoulli test set accuracy: {}\n'
'Percent Type I errors: {}\n'
'Percent Type II errors: {}\n\n'
).format(cross_val_score(l33,X_test_vectorized,y_test,cv=kf).mean(),test_tI_errors, test_tII_errors))
# Initialize and fit the model
KNN = KNeighborsClassifier(n_jobs=-1)
#Create range of values to fit parameters
k1 = [1,3,5,7,9,11,13,15,17,19,21]
k3 = ['uniform', 'distance']
parameters = {'n_neighbors': k1,
'weights':k3}
#Fit parameters
clf = GridSearchCV(KNN, param_grid=parameters, cv=kf)
#Fit the tunned model
clf.fit(X_train_vectorized, y_train)
#The best hyper parameters set
print("Best Hyper Parameters:", clf.best_params_)
#Initialize the model on test dataset
clf.fit(X_test_vectorized, y_test)
# Predict on test dataset
predtest3_y = clf.predict(X_test_vectorized)
#Evaluate model on the test set
target_names = ['0.0', '1.0']
print(classification_report(y_test, predtest3_y, target_names=target_names))
#Create confusion matrix
confusion = confusion_matrix(y_test, predtest3_y)
print(confusion)
# Accuracy tables.
table_test = pd.crosstab(y_test, predtest3_y, margins=True)
test_tI_errors = table_test.loc[0.0,1.0] / table_test.loc['All','All']
test_tII_errors = table_test.loc[1.0,0.0] / table_test.loc['All','All']
#Print Results
print((
'KNN accuracy: {}\n'
'Percent Type I errors: {}\n'
'Percent Type II errors: {}\n\n'
).format(cross_val_score(clf,X_test_vectorized,y_test,cv=kf).mean(),test_tI_errors, test_tII_errors))
#For the Random Forest hyperparameter tuning, due to computational restrictions,
#grid search will be applied to one parameter at a time on the train set
#updating the value as we move along the hyperparameters tuning
#Number of trees
param_test1 = {'n_estimators':range(300,400,20)}
gsearch1 = GridSearchCV(estimator = RandomForestClassifier(),
param_grid = param_test1, scoring='roc_auc',n_jobs=-1,iid=False, cv=kf)
gsearch1.fit(X_train_vectorized, y_train)
gsearch1.grid_scores_, gsearch1.best_params_, gsearch1.best_score_
#Max depth and min sample split
#Max depth values from 2-60 were tried and all scored under 0.8641; to find the value that increases accuracy,
# the range between 60-80 is searched here
# min_samples_split values from 50-500 were tried, with values between 80-120 being the ones that increase accuracy
param_test2 = {'max_depth':range(61,80,2), 'min_samples_split': range(80,121,20)}
gsearch2 = GridSearchCV(estimator = RandomForestClassifier(n_estimators = 360),
param_grid = param_test2, scoring='roc_auc',n_jobs=-1,iid=False, cv=kf)
gsearch2.fit(X_train_vectorized, y_train)
gsearch2.grid_scores_, gsearch2.best_params_, gsearch2.best_score_
#Re run the min_sample split with the min_sample leaf
param_test3 = {'min_samples_leaf':range(2,33,10)}
gsearch3 = GridSearchCV(estimator = RandomForestClassifier(n_estimators = 360, max_depth = 65 , min_samples_split = 80 ),
param_grid = param_test3, scoring='roc_auc',n_jobs=-1,iid=False, cv=kf)
gsearch3.fit(X_train_vectorized, y_train)
gsearch3.grid_scores_, gsearch3.best_params_, gsearch3.best_score_
#Based on the results shown for min_samples_leaf, we leave it at its default value
#Next, tune the split criterion
param_test4 = {'criterion':['gini', 'entropy']}
gsearch4 = GridSearchCV(estimator = RandomForestClassifier(n_estimators = 360, max_depth = 65 , min_samples_split = 80),
param_grid = param_test4, scoring='roc_auc',n_jobs=-1,iid=False, cv=kf)
gsearch4.fit(X_train_vectorized, y_train)
gsearch4.grid_scores_, gsearch4.best_params_, gsearch4.best_score_
#Fit in test dataset
gsearch4.fit(X_test_vectorized, y_test)
#Predict on test dataset
predtestrf_y = gsearch4.predict(X_test_vectorized)
#Test Scores
target_names = ['0', '1']
print(classification_report(y_test, predtestrf_y, target_names=target_names))
cnf = confusion_matrix(y_test, predtestrf_y)
print(cnf)
table_test = pd.crosstab(y_test, predtestrf_y, margins=True)
test_tI_errors = table_test.loc[0.0,1.0]/table_test.loc['All','All']
test_tII_errors = table_test.loc[1.0,0.0]/table_test.loc['All','All']
print((
'Random Forest accuracy:{}\n'
'Percent Type I errors: {}\n'
'Percent Type II errors: {}'
).format(cross_val_score(gsearch4,X_test_vectorized,y_test,cv=kf).mean(),test_tI_errors, test_tII_errors))
# Train model
OTM = DecisionTreeClassifier()
#Create range of values to fit parameters
k2 = ['auto', 'sqrt', 'log2']
parameters = {'max_features': k2
}
#Fit parameters
OTM1 = GridSearchCV(OTM, param_grid=parameters, cv=kf)
#Fit the tunned model
OTM1.fit(X_train_vectorized, y_train)
#The best hyper parameters set
print("Best Hyper Parameters:", OTM1.best_params_)
#Fit on test dataset
OTM1.fit(X_test_vectorized, y_test)
#Predict parameters on test dataset
predtestrf_y = OTM1.predict(X_test_vectorized)
#Test Scores
target_names = ['0', '1']
print(classification_report(y_test, predtestrf_y, target_names=target_names))
cnf = confusion_matrix(y_test, predtestrf_y)
print(cnf)
table_test = pd.crosstab(y_test, predtestrf_y, margins=True)
test_tI_errors = table_test.loc[0.0,1.0]/table_test.loc['All','All']
test_tII_errors = table_test.loc[1.0,0.0]/table_test.loc['All','All']
print((
'Decision Tree accuracy:{}\n'
'Percent Type I errors: {}\n'
'Percent Type II errors: {}'
).format(cross_val_score(OTM1,X_test_vectorized,y_test,cv=kf).mean(),test_tI_errors, test_tII_errors))
# Train model
svc = SVC()
#Create range of values to fit parameters
ks1 = np.arange(20)+1
ks4 = ['linear','rbf']
parameters = {'C': ks1,
'kernel': ks4}
#Fit parameters
svc1 = GridSearchCV(svc, param_grid=parameters, cv=kf)
#Fit the tunned model
svc1.fit(X_train_vectorized, y_train)
#The best hyper parameters set
print("Best Hyper Parameters:", svc1.best_params_)
#Fit tunned model on Test set
svc1.fit(X_test_vectorized, y_test)
# Predict on training set
predtestsvc_y = svc1.predict(X_test_vectorized)
#Test Scores
target_names = ['0.0', '1.0']
print(classification_report(y_test, predtestsvc_y, target_names=target_names))
cnf = confusion_matrix(y_test, predtestsvc_y)
print(cnf)
table_test = pd.crosstab(y_test, predtestsvc_y, margins=True)
print((
'SVC accuracy:{}\n'
).format(cross_val_score(svc1,X_test_vectorized,y_test,cv=kf).mean()))
#For the Gradient Boosting hyperparameter tuning, due to computational restrictions,
#grid search will be applied to one parameter at a time on the train set
#updating the value as we move along the hyperparameters tuning
#Number of trees
param_test1 = {'n_estimators':range(20,90,10)}
gsearch1 = GridSearchCV(estimator = GradientBoostingClassifier(learning_rate=0.1, min_samples_split=500,min_samples_leaf=50,max_depth=8,max_features='sqrt',subsample=0.8,random_state=10),
param_grid = param_test1, scoring='roc_auc',n_jobs=4,iid=False, cv=kf)
gsearch1.fit(X_train_vectorized, y_train)
gsearch1.grid_scores_, gsearch1.best_params_, gsearch1.best_score_
#Max depth and min sample split
param_test2 = {'max_depth':range(5,20,2), 'min_samples_split':range(200,1001,200)}
gsearch2 = GridSearchCV(estimator = GradientBoostingClassifier(learning_rate=0.1, n_estimators=80, max_features='sqrt', subsample=0.8, random_state=10),
param_grid = param_test2, scoring='roc_auc',n_jobs=4,iid=False, cv=kf)
gsearch2.fit(X_train_vectorized, y_train)
gsearch2.grid_scores_, gsearch2.best_params_, gsearch2.best_score_
#Re run the min_sample split with the min_sample leaf
param_test3 = {'min_samples_split':range(200,1001,200),'min_samples_leaf':range(30,71,10)}
gsearch3 = GridSearchCV(estimator = GradientBoostingClassifier(learning_rate=0.1, n_estimators=80,max_depth=19,min_samples_split=600,max_features='sqrt', subsample=0.8, random_state=10),
param_grid = param_test3, scoring='roc_auc',n_jobs=4,iid=False, cv=kf)
gsearch3.fit(X_train_vectorized, y_train)
gsearch3.grid_scores_, gsearch3.best_params_, gsearch3.best_score_
#Max features considering the results obtained
#for the combination of the 'min_samples_split', 'min_samples_leaf' and 'max_depth'
#The value of 600 has been maintained as it is the one that gives a better accuracy for every value of 'max_depth'
param_test4 = {'max_features':range(60,74,2)}
gsearch4 = GridSearchCV(estimator = GradientBoostingClassifier(learning_rate=0.1, n_estimators=80,max_depth=19,min_samples_split=600,min_samples_leaf=40,max_features='sqrt', subsample=0.8, random_state=10),
param_grid = param_test4, scoring='roc_auc',n_jobs=4,iid=False, cv=kf)
gsearch4.fit(X_train_vectorized, y_train)
gsearch4.grid_scores_, gsearch4.best_params_, gsearch4.best_score_
#Tuning the subsample
param_test5 = {'subsample':[0.6,0.7,0.75,0.8,0.85,0.9,0.95]}
gsearch5 = GridSearchCV(estimator = GradientBoostingClassifier(learning_rate=0.1,
n_estimators=80,max_depth=19,min_samples_split=600,
min_samples_leaf=40,max_features=62,
subsample=0.8, random_state=10),
param_grid = param_test5, scoring='roc_auc',n_jobs=4,iid=False, cv=kf)
gsearch5.fit(X_train_vectorized, y_train)
gsearch5.grid_scores_, gsearch5.best_params_, gsearch5.best_score_
#Instead of having a 10% learning rate, we halve the learning rate and double the number of trees to see if we
#can improve the accuracy
param_test5 = {'subsample':[0.8,0.85,0.9,0.95]}
gsearch5 = GridSearchCV(estimator = GradientBoostingClassifier(learning_rate=0.05, n_estimators=160,
max_depth=19,min_samples_split=600,
min_samples_leaf=40,max_features=62,
subsample=0.9, random_state=10),
param_grid = param_test5, scoring='roc_auc',n_jobs=4,iid=False, cv=kf)
gsearch5.fit(X_train_vectorized, y_train)
gsearch5.grid_scores_, gsearch5.best_params_, gsearch5.best_score_
#Fit on the test set
gsearch5.fit(X_test_vectorized, y_test)
# Predict on test set
predtestrf_y = gsearch5.predict(X_test_vectorized)
#Test Scores
target_names = ['0', '1']
print(classification_report(y_test, predtestrf_y, target_names=target_names))
cnf = confusion_matrix(y_test, predtestrf_y)
print(cnf)
table_test = pd.crosstab(y_test, predtestrf_y, margins=True)
test_tI_errors = table_test.loc[0.0,1.0]/table_test.loc['All','All']
test_tII_errors = table_test.loc[1.0,0.0]/table_test.loc['All','All']
print((
'Gradient Boosting accuracy:{}\n'
'Percent Type I errors: {}\n'
'Percent Type II errors: {}'
).format(cross_val_score(gsearch5,X_test_vectorized,y_test,cv=kf).mean(),test_tI_errors, test_tII_errors))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Wikipedia dataset
Step2: Extract word count vectors
Step3: Find nearest neighbors
Step4: Let's look at the top 10 nearest neighbors by performing the following query
Step6: All of the 10 people are politicians, but about half of them have rather tenuous connections with Obama, other than the fact that they are politicians.
Step7: Let's extract the list of most frequent words that appear in both Obama's and Barrio's documents. We've so far sorted all words from Obama and Barrio's articles by their word frequencies. We will now use a dataframe operation known as join. The join operation is very useful when it comes to playing around with data
Step8: Since both tables contained the column named count, SFrame automatically renamed one of them to prevent confusion. Let's rename the columns to tell which one is for which. By inspection, we see that the first column (count) is for Obama and the second (count.1) for Barrio.
Step9: Note. The join operation does not enforce any particular ordering on the shared column. So to obtain, say, the five common words that appear most often in Obama's article, sort the combined table by the Obama column. Don't forget ascending=False to display largest counts first.
Step10: Quiz Question. Among the words that appear in both Barack Obama and Francisco Barrio, take the 5 that appear most frequently in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words?
Step11: Checkpoint. Check your has_top_words function on two random articles
Step12: Quiz Question. Measure the pairwise distance between the Wikipedia pages of Barack Obama, George W. Bush, and Joe Biden. Which of the three pairs has the smallest distance?
Step13: Quiz Question. Collect all words that appear both in Barack Obama and George W. Bush pages. Out of those words, find the 10 words that show up most often in Obama's page.
Step14: Note. Even though common words are swamping out important subtle differences, commonalities in rarer political words still matter on the margin. This is why politicians are being listed in the query result instead of musicians, for example. In the next subsection, we will introduce a different metric that will place greater emphasis on those rarer words.
Step15: Let's determine whether this list makes sense.
Step16: Using the join operation we learned earlier, try your hands at computing the common words shared by Obama's and Schiliro's articles. Sort the common words by their TF-IDF weights in Obama's document.
Step17: The first 10 words should say
Step18: Notice the huge difference in this calculation using TF-IDF scores instead of raw word counts. We've eliminated noise arising from extremely common words.
Step19: The distance is larger than the distances we found for the 10 nearest neighbors, which we repeat here for readability
Step20: But one may wonder, is Biden's article that different from Obama's, more so than, say, Schiliro's? It turns out that, when we compute nearest neighbors using the Euclidean distances, we unwittingly favor short articles over long ones. Let us compute the length of each Wikipedia document, and examine the document lengths for the 100 nearest neighbors to Obama's page.
Step21: To see how these document lengths compare to the lengths of other documents in the corpus, let's make a histogram of the document lengths of Obama's 100 nearest neighbors and compare to a histogram of document lengths for all documents.
Step22: Relative to the rest of Wikipedia, nearest neighbors of Obama are overwhemingly short, most of them being shorter than 300 words. The bias towards short articles is not appropriate in this application as there is really no reason to favor short articles over long articles (they are all Wikipedia articles, after all). Many of the Wikipedia articles are 300 words or more, and both Obama and Biden are over 300 words long.
Step23: From a glance at the above table, things look better. For example, we now see Joe Biden as Barack Obama's nearest neighbor! We also see Hillary Clinton on the list. This list looks even more plausible as nearest neighbors of Barack Obama.
Step24: Indeed, the 100 nearest neighbors using cosine distance provide a sampling across the range of document lengths, rather than just short articles like Euclidean distance provided.
Step25: Let's look at the TF-IDF vectors for this tweet and for Barack Obama's Wikipedia entry, just to visually see their differences.
Step26: Now, compute the cosine distance between the Barack Obama article and this tweet
Step27: Let's compare this distance to the distance between the Barack Obama article and all of its Wikipedia 10 nearest neighbors
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
'''Check GraphLab Create version'''
from distutils.version import StrictVersion
assert (StrictVersion(graphlab.version) >= StrictVersion('1.8.5')), 'GraphLab Create must be version 1.8.5 or later.'
wiki = graphlab.SFrame('people_wiki.gl')
wiki
wiki['word_count'] = graphlab.text_analytics.count_words(wiki['text'])
wiki
model = graphlab.nearest_neighbors.create(wiki, label='name', features=['word_count'],
method='brute_force', distance='euclidean')
model.query(wiki[wiki['name']=='Barack Obama'], label='name', k=10)
def top_words(name):
    """Get a table of the most frequent words in the given person's wikipedia page."""
row = wiki[wiki['name'] == name]
word_count_table = row[['word_count']].stack('word_count', new_column_name=['word','count'])
return word_count_table.sort('count', ascending=False)
obama_words = top_words('Barack Obama')
obama_words
barrio_words = top_words('Francisco Barrio')
barrio_words
combined_words = obama_words.join(barrio_words, on='word')
combined_words
combined_words = combined_words.rename({'count':'Obama', 'count.1':'Barrio'})
combined_words
combined_words.sort('Obama', ascending=False)
common_words = combined_words['word'][:5]
common_words = set(common_words)
def has_top_words(word_count_vector):
# extract the keys of word_count_vector and convert it to a set
unique_words = set(word_count_vector.keys())
print "length of unique words = " + str(len(unique_words))
# return True if common_words is a subset of unique_words
# return False otherwise
return 1 if common_words.issubset(unique_words) else 0
wiki['has_top_words'] = wiki['word_count'].apply(has_top_words)
# use has_top_words column to answer the quiz question
print "#articles in the Wikipedia dataset contain all of those 5 words = " + str(wiki['has_top_words'].sum())
print 'Output from your function:', has_top_words(wiki[32]['word_count'])
print 'Correct output: True'
print 'Also check the length of unique_words. It should be 167'
print 'Output from your function:', has_top_words(wiki[33]['word_count'])
print 'Correct output: False'
print 'Also check the length of unique_words. It should be 188'
obama = wiki[wiki['name'] == 'Barack Obama']
bush = wiki[wiki['name'] == 'George W. Bush']
biden = wiki[wiki['name'] == 'Joe Biden']
isinstance(obama['word_count'][0], dict)
# pair-wise distances
obama_bush = graphlab.toolkits.distances.euclidean(obama['word_count'][0], bush['word_count'][0])
print "distance b/w obama and bush = " + str(obama_bush)
obama_biden = graphlab.toolkits.distances.euclidean(obama['word_count'][0], biden['word_count'][0])
print "distance b/w obama and biden = " + str(obama_biden)
bush_biden = graphlab.toolkits.distances.euclidean(biden['word_count'][0], bush['word_count'][0])
print "distance b/w biden and bush = " + str(bush_biden)
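# Hedged aside (not part of the original assignment): rank the three pairwise distances so the
# quiz answer can be read off directly -- the smallest distance is the most similar pair.
pairs = {'Obama-Bush': obama_bush, 'Obama-Biden': obama_biden, 'Bush-Biden': bush_biden}
print(sorted(pairs.items(), key=lambda kv: kv[1]))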
bush_words = top_words('George W. Bush')
bush_words
new_combined_words = obama_words.join(bush_words, on='word')
new_combined_words
new_combined_words = new_combined_words.rename({'count':'Obama', 'count.1':'Bush'})
new_combined_words
new_combined_words.sort('Obama', ascending=False)
new_combined_words.print_rows(10)
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['word_count'])
model_tf_idf = graphlab.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
method='brute_force', distance='euclidean')
model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=10)
def top_words_tf_idf(name):
row = wiki[wiki['name'] == name]
word_count_table = row[['tf_idf']].stack('tf_idf', new_column_name=['word','weight'])
return word_count_table.sort('weight', ascending=False)
obama_tf_idf = top_words_tf_idf('Barack Obama')
obama_tf_idf
schiliro_tf_idf = top_words_tf_idf('Phil Schiliro')
schiliro_tf_idf
combined_words_tf_idf = obama_tf_idf.join(schiliro_tf_idf, on='word')
combined_words_tf_idf
combined_words_tf_idf = combined_words_tf_idf.rename({'weight': 'Obama', 'weight.1' : 'Schiliro'})
combined_words_tf_idf
combined_words_tf_idf.sort('Obama', ascending=False)
combined_words_tf_idf.print_rows(10)
common_words = set(combined_words_tf_idf['word'][:5])
common_words
def has_top_words(word_count_vector):
# extract the keys of word_count_vector and convert it to a set
unique_words = set(word_count_vector.keys())
# return True if common_words is a subset of unique_words
# return False otherwise
return 1 if common_words.issubset(unique_words) else 0
wiki['has_top_words'] = wiki['word_count'].apply(has_top_words)
# use has_top_words column to answer the quiz question
print "#articles in the Wikipedia dataset contain all of those 5 words = " + str(wiki['has_top_words'].sum())
obama = wiki[wiki['name'] == 'Barack Obama']
biden = wiki[wiki['name'] == 'Joe Biden']
obama_biden = graphlab.toolkits.distances.euclidean(obama['tf_idf'][0], biden['tf_idf'][0])
print "distance between obama and biden based on tf-idf = " + str(obama_biden)
model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=10)
def compute_length(row):
return len(row['text'].split(' '))
wiki['length'] = wiki.apply(compute_length)
nearest_neighbors_euclidean = model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=100)
nearest_neighbors_euclidean = nearest_neighbors_euclidean.join(wiki[['name', 'length']], on={'reference_label':'name'})
nearest_neighbors_euclidean.sort('rank')
plt.figure(figsize=(10.5,4.5))
plt.hist(wiki['length'], 50, color='k', edgecolor='None', histtype='stepfilled', normed=True,
label='Entire Wikipedia', zorder=3, alpha=0.8)
plt.hist(nearest_neighbors_euclidean['length'], 50, color='r', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (Euclidean)', zorder=10, alpha=0.8)
plt.axvline(x=wiki['length'][wiki['name'] == 'Barack Obama'][0], color='k', linestyle='--', linewidth=4,
label='Length of Barack Obama', zorder=2)
plt.axvline(x=wiki['length'][wiki['name'] == 'Joe Biden'][0], color='g', linestyle='--', linewidth=4,
label='Length of Joe Biden', zorder=1)
plt.axis([0, 1000, 0, 0.04])
plt.legend(loc='best', prop={'size':15})
plt.title('Distribution of document length')
plt.xlabel('# of words')
plt.ylabel('Percentage')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
model2_tf_idf = graphlab.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
method='brute_force', distance='cosine')
nearest_neighbors_cosine = model2_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=100)
nearest_neighbors_cosine = nearest_neighbors_cosine.join(wiki[['name', 'length']], on={'reference_label':'name'})
nearest_neighbors_cosine.sort('rank')
plt.figure(figsize=(10.5,4.5))
plt.hist(wiki['length'], 50, color='k', edgecolor='None', histtype='stepfilled', normed=True,
label='Entire Wikipedia', zorder=3, alpha=0.8)
plt.hist(nearest_neighbors_euclidean['length'], 50, color='r', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (Euclidean)', zorder=10, alpha=0.8)
plt.hist(nearest_neighbors_cosine['length'], 50, color='b', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (cosine)', zorder=11, alpha=0.8)
plt.axvline(x=wiki['length'][wiki['name'] == 'Barack Obama'][0], color='k', linestyle='--', linewidth=4,
label='Length of Barack Obama', zorder=2)
plt.axvline(x=wiki['length'][wiki['name'] == 'Joe Biden'][0], color='g', linestyle='--', linewidth=4,
label='Length of Joe Biden', zorder=1)
plt.axis([0, 1000, 0, 0.04])
plt.legend(loc='best', prop={'size':15})
plt.title('Distribution of document length')
plt.xlabel('# of words')
plt.ylabel('Percentage')
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
sf = graphlab.SFrame({'text': ['democratic governments control law in response to popular act']})
sf['word_count'] = graphlab.text_analytics.count_words(sf['text'])
encoder = graphlab.feature_engineering.TFIDF(features=['word_count'], output_column_prefix='tf_idf')
encoder.fit(wiki)
sf = encoder.transform(sf)
sf
tweet_tf_idf = sf[0]['tf_idf.word_count']
tweet_tf_idf
obama = wiki[wiki['name'] == 'Barack Obama']
obama
obama_tf_idf = obama[0]['tf_idf']
graphlab.toolkits.distances.cosine(obama_tf_idf, tweet_tf_idf)
model2_tf_idf.query(obama, label='name', k=10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can explore the files if you'd like, but we're going to get the ones from convote_v1.1/data_stage_one/development_set/. It's a bunch of text files.
Step2: So great, we have 702 of them. Now let's import them.
Step3: In class we had the texts variable. For the homework can just do speeches_df['content'] to get the same sort of list of stuff.
Step4: Doing our analysis
Step5: Okay, it's far too big to even look at. Let's try to get a list of features from a new CountVectorizer that only takes the top 100 words.
Step6: Now let's push all of that into a dataframe with nicely named columns.
Step7: Everyone seems to start their speeches with "mr chairman" - how many speeches are there total, and many don't mention "chairman" and how many mention neither "mr" nor "chairman"?
Step8: What is the index of the speech thank is the most thankful, a.k.a. includes the word 'thank' the most times?
Step9: If I'm searching for China and trade, what are the top 3 speeches to read according to the CountVectoriser?
Step10: Now what if I'm using a TfidfVectorizer?
Step11: What's the content of the speeches? Here's a way to get them
Step12: Now search for something else! Another two terms that might show up. elections and chaos? Whatever you thnik might be interesting.
Step13: Enough of this garbage, let's cluster
Step14: Which one do you think works the best?
Step15: Term Frequency Vectorizer
Step16: Simple Counting Vectorizer
|
<ASSISTANT_TASK:>
Python Code:
# If you'd like to download it through the command line...
!curl -O http://www.cs.cornell.edu/home/llee/data/convote/convote_v1.1.tar.gz
# And then extract it through the command line...
!tar -zxf convote_v1.1.tar.gz
# glob finds files matching a certain filename pattern
import glob
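# Assumed imports (hedged): later cells also rely on pandas, scikit-learn's vectorizers and the
# re module; the original imports cell is not shown here, so these imports are assumed.
import re
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer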
# Give me all the text files
paths = glob.glob('convote_v1.1/data_stage_one/development_set/*')
paths[:5]
len(paths)
speeches = []
for path in paths:
with open(path) as speech_file:
speech = {
'pathname': path,
'filename': path.split('/')[-1],
'content': speech_file.read()
}
speeches.append(speech)
speeches_df = pd.DataFrame(speeches)
speeches_df.head()
All_speeches = speeches_df['content']
First_five_speeches = speeches_df['content'].head(5)
First_five_speeches
count_vectorizer = CountVectorizer(stop_words='english')
speech_tokens = count_vectorizer.fit_transform(All_speeches)
count_vectorizer.get_feature_names()
All_tokens = pd.DataFrame(speech_tokens.toarray(), columns=count_vectorizer.get_feature_names())
#All_tokens
count_vectorizer_100 = CountVectorizer(max_features=100, stop_words='english')
speech_tokens_top100 = count_vectorizer_100.fit_transform(speeches_df['content'])
Top_100_tokens = pd.DataFrame(speech_tokens_top100.toarray(), columns=count_vectorizer_100.get_feature_names())
Top_100_tokens.head()
speeches_df.info()
Top_100_tokens['No_chairman'] = Top_100_tokens['chairman'] == 0
Top_100_tokens[Top_100_tokens['No_chairman'] == True].count().head(1)
Top_100_tokens['no_mr'] = Top_100_tokens['mr'] == 0
Top_100_tokens[Top_100_tokens['no_mr'] == True].count().head(1)
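# Hedged addition (not one of the original cells): the homework also asks how many speeches
# mention *neither* "mr" nor "chairman"; combining the two masks answers that directly.
neither_mask = (Top_100_tokens['mr'] == 0) & (Top_100_tokens['chairman'] == 0)
print(len(Top_100_tokens[neither_mask]))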
Top_100_tokens['thank'].sort_values(ascending=False).head(1)
Top_100_tokens['china trade'] = Top_100_tokens['china'] + Top_100_tokens['trade']
Top_100_tokens['china trade'].sort_values(ascending=False).head(3)
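# Hedged addition (not one of the original cells): map those row indices back to the source
# files so the top three China/trade speeches can be opened and read directly.
top3 = Top_100_tokens['china trade'].sort_values(ascending=False).head(3).index
speeches_df.loc[top3, 'filename']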
idf_vectorizer = TfidfVectorizer(stop_words='english', use_idf=True)
Top_100_tokens_idf = idf_vectorizer.fit_transform(All_speeches)
idf_df = pd.DataFrame(Top_100_tokens_idf.toarray(), columns=idf_vectorizer.get_feature_names())
idf_df['china trade'] = idf_df['china'] + idf_df['trade']
idf_df['china trade'].sort_values(ascending=False).head(3)
# index 0 is the first speech, which was the first one imported.
paths[402]
# Pass that into 'cat' using { } which lets you put variables in shell commands
# that way you can pass the path to cat
!cat {paths[577]}
All_tokens['chaos'] = All_tokens['chaos'].sort_values(ascending=False) >= 1
All_tokens[All_tokens['chaos'] == True].count().head(1)
#simple counting vectorizer,
from sklearn.cluster import KMeans
number_of_clusters = 8
km = KMeans(n_clusters=number_of_clusters)
count_vectorizer = CountVectorizer(stop_words='english')
X = count_vectorizer.fit_transform(All_speeches)
km.fit(X)
print("Top terms per cluster:")
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = count_vectorizer.get_feature_names()
for i in range(number_of_clusters):
top_ten_words = [terms[ind] for ind in order_centroids[i, :5]]
print("Cluster {}: {}".format(i, ' '.join(top_ten_words)))
# term frequency vectorizer,
vectorizer = TfidfVectorizer(use_idf=True, stop_words='english')
X = vectorizer.fit_transform(All_speeches)
number_of_clusters = 8
km = KMeans(n_clusters=number_of_clusters)
km.fit(X)
print("Top terms per cluster:")
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = vectorizer.get_feature_names()
for i in range(number_of_clusters):
top_ten_words = [terms[ind] for ind in order_centroids[i, :10]]
print("Cluster {}: {}".format(i, ' '.join(top_ten_words)))
#term frequency inverse document frequency vectorizer
def oh_tokenizer(str_input):
words = re.sub(r"[^A-Za-z0-9\-]", " ", str_input).lower().split()
return words
l2_vectorizer = TfidfVectorizer(use_idf=True, stop_words='english', tokenizer=oh_tokenizer)
X = l2_vectorizer.fit_transform(speeches_df['content'])
l2_df = pd.DataFrame(X.toarray(), columns=l2_vectorizer.get_feature_names())
# re-fit KMeans on the new matrix before inspecting its clusters
km = KMeans(n_clusters=number_of_clusters)
km.fit(X)
print("Top terms per cluster:")
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = l2_vectorizer.get_feature_names()
for i in range(number_of_clusters):
    top_ten_words = [terms[ind] for ind in order_centroids[i, :10]]
    print("Cluster {}: {}".format(i, ' '.join(top_ten_words)))
!curl -O https://github.com/ledeprogram/courses/raw/master/algorithms/data/hp.zip
!unzip hp.zip
import glob
paths = glob.glob('hp/*.txt')
paths[:5]
len(paths)
Harry_Potter_fiction = []
for path in paths:
with open(path) as Harry_file:
speech = {
'pathname': path,
'filename': path.split('/')[-1],
'content': Harry_file.read()
}
Harry_Potter_fiction.append(speech)
Harry_df = pd.DataFrame(Harry_Potter_fiction)
Harry_df.head()
All_of_Harry = Harry_df['content']
All_of_Harry.head()
vectorizer = TfidfVectorizer(use_idf=True, stop_words='english')
X = vectorizer.fit_transform(All_of_Harry)
# KMeans clustering is a method of clustering.
from sklearn.cluster import KMeans
number_of_clusters = 2
km = KMeans(n_clusters=number_of_clusters)
km.fit(X)
print("Top terms per cluster:")
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = vectorizer.get_feature_names()
for i in range(number_of_clusters):
top_ten_words = [terms[ind] for ind in order_centroids[i, :10]]
print("Cluster {}: {}".format(i, ' '.join(top_ten_words)))
#Cluster 1 is about Lily and James, whoever they are. Wait: His parents.
#Cluster 2 is about Harry and Hermione.
from sklearn.cluster import KMeans
number_of_clusters = 2
km = KMeans(n_clusters=number_of_clusters)
count_vectorizer = CountVectorizer(stop_words='english')
X = count_vectorizer.fit_transform(All_of_Harry)
km.fit(X)
print("Top terms per cluster:")
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = count_vectorizer.get_feature_names()
for i in range(number_of_clusters):
top_ten_words = [terms[ind] for ind in order_centroids[i, :10]]
print("Cluster {}: {}".format(i, ' '.join(top_ten_words)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Testing variable assignment and operations in python
Step2: Extra
Step3: Simple Linear Regression
Step5: From linear regression using the model $$p(y_i \mid \mathbf{x}_i) = \eta(y_i \mid \mathbf{w}^T\mathbf{x}_i, \sigma^2)$$
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%pylab inline
from __future__ import print_function
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import theano
import numpy as np
from theano import tensor as T
from numpy.linalg import inv
x = 2
print(x)
y = x**2
print(y)
# Theano symbolic gradient example
B = T.scalar('E')
R = T.sqr(B)
A = T.grad(R,B)
Z = theano.function([B], A)
# Theano symbolic gradient example - Numeric
a = range(10)
da= range(10)
for idx,x in enumerate(a):
da[idx] = Z(x)
plt.plot(a,da)
plt.xlabel('x')
plt.ylabel('dx')
plt.title('Gradient of $f(x)=x^2$')
plt.show()
a = 3
b = 2
N = 100
# y = ax+b
x = np.reshape(range(N),(N,1))
y = a*x + b + 10*np.random.randn(N,1)
#plot
plt.scatter(x,y)
plt.xlabel('x')
plt.ylabel('y')
plt.title('Plot of $y = 3x+2 + 10*\eta (0,1)$')
plt.show()
# Linear regression (MSE)
# Augment x with 1
X = np.hstack((np.ones((N,1)),x))
w = np.dot(inv(X.T.dot(X)),X.T.dot(y))
print('a = ',w[1],'b = ',w[0])
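# Hedged aside (not in the original notebook): cross-check the closed-form least-squares fit
# against numpy's polyfit; the two slope/intercept estimates should agree closely.
a_np, b_np = np.polyfit(x.ravel(), y.ravel(), 1)
print('polyfit a = ', a_np, 'b = ', b_np)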
plt.scatter(x,y)
plt.plot(x,X.dot(w))
plt.xlabel('x')
plt.ylabel('y')
plt.legend(['Fitted model','Input'])
plt.title('Plot of $y = 3x+2 + 10*\eta (0,1)$')
plt.show()
from tempfile import NamedTemporaryFile
VIDEO_TAG = """<video controls>
 <source src="data:video/x-m4v;base64,{0}" type="video/mp4">
 Your browser does not support the video tag.
</video>"""
def anim_to_html(anim):
if not hasattr(anim, '_encoded_video'):
with NamedTemporaryFile(suffix='.mp4') as f:
anim.save(f.name, fps=20, extra_args=['-vcodec', 'libx264'])
video = open(f.name, "rb").read()
anim._encoded_video = video.encode("base64")
return VIDEO_TAG.format(anim._encoded_video)
from IPython.display import HTML
def display_animation(anim):
plt.close(anim._fig)
return HTML(anim_to_html(anim))
# Animation: MSE gradient descent
fig1 = plt.figure()
def init():
line.set_data([], [])
return line,
def update_w(i):
global w
off = 2*a*X.T.dot((X.dot(w)-y))
w = w - off
line.set_data(x,X.dot(w))
return line,
X = np.hstack((np.ones((N,1)),x))
w = np.random.rand(X.shape[1],1)
ax = plt.axes(xlim=(-20, 120), ylim=(-50, 350))
line, = ax.plot([], [], lw=2)
a = 0.0000001
plt.scatter(x,y)
plt.xlabel('x')
plt.ylabel('y')
plt.legend(['Fitted model','Input'])
plt.title('Plot of $y = 3x+2 + 10*\eta (0,1)$')
line_ani = animation.FuncAnimation(fig1, update_w,init_func=init, frames=100, interval=25, blit=True)
#plt.show()
display_animation(line_ani)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-esm2-hr5', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's import everything we need to run our backtesting algorithm
Step2: The class below implements all the logic you need to run the momentum backtester. Go through it and make sure you understand each part. You can run it first and make changes later to see if you made any improvements over the naive strategy.
Step3: Initialize everything we've created so far
Step4: Start Trading ...
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
!pip install qq-training-wheels auquan_toolbox --upgrade
from qq_training_wheels.momentum_trading import MomentumTradingParams
from backtester.trading_system import TradingSystem
from backtester.features.feature import Feature
import numpy as np
class MyTradingFunctions():
def __init__(self):
self.count = 0
# When to start trading
self.start_date = '2015/01/02'
# When to end trading
self.end_date = '2017/08/31'
self.params = {}
def getSymbolsToTrade(self):
'''
Specify the stock names that you want to trade.
'''
return ['AAPL']
def getInstrumentFeatureConfigDicts(self):
'''
Specify all Features you want to use by creating config dictionaries.
Create one dictionary per feature and return them in an array.
Feature config Dictionary have the following keys:
featureId: a str for the type of feature you want to use
featureKey: {optional} a str for the key you will use to call this feature
If not present, will just use featureId
params: {optional} A dictionary with which contains other optional params if needed by the feature
msDict = {
'featureKey': 'ms_5',
'featureId': 'moving_sum',
'params': {
'period': 5,
'featureName': 'basis'
}
}
return [msDict]
        You can now use this feature in getPrediction() by calling its featureKey, 'ms_5'
'''
ma1Dict = {
'featureKey': 'ma_90',
'featureId': 'moving_average',
'params': {
'period': 90,
'featureName': 'adjClose'
}
}
mom30Dict = {
'featureKey': 'mom_30',
'featureId': 'momentum',
'params': {
'period': 30,
'featureName': 'adjClose'
}
}
mom10Dict = {
'featureKey': 'mom_10',
'featureId': 'momentum',
'params': {
'period': 10,
'featureName': 'adjClose'
}
}
return [ma1Dict, mom10Dict, mom30Dict]
def getPrediction(self, time, updateNum, instrumentManager, predictions):
'''
Combine all the features to create the desired predictions for each stock.
'predictions' is Pandas Series with stock as index and predictions as values
We first call the holder for all the instrument features for all stocks as
lookbackInstrumentFeatures = instrumentManager.getLookbackInstrumentFeatures()
Then call the dataframe for a feature using its feature_key as
ms5Data = lookbackInstrumentFeatures.getFeatureDf('ms_5')
This returns a dataFrame for that feature for ALL stocks for all times upto lookback time
Now you can call just the last data point for ALL stocks as
ms5 = ms5Data.iloc[-1]
        You can call the last datapoint for one stock 'ABC' as
        value_for_abc = ms5['ABC']
Output of the prediction function is used by the toolbox to make further trading decisions and evaluate your score.
'''
self.updateCount() # uncomment if you want a counter
# holder for all the instrument features for all instruments
lookbackInstrumentFeatures = instrumentManager.getLookbackInstrumentFeatures()
def hurst_f(input_ts, lags_to_test=20):
# interpretation of return value
# hurst < 0.5 - input_ts is mean reverting
# hurst = 0.5 - input_ts is effectively random/geometric brownian motion
# hurst > 0.5 - input_ts is trending
tau = []
lagvec = []
# Step through the different lags
for lag in range(2, lags_to_test):
# produce price difference with lag
pp = np.subtract(input_ts[lag:].values, input_ts[:-lag].values)
# Write the different lags into a vector
lagvec.append(lag)
                # Calculate the dispersion (sqrt of the standard deviation) of the difference vector
tau.append(np.sqrt(np.std(pp)))
# linear fit to double-log graph (gives power)
m = np.polyfit(np.log10(lagvec), np.log10(tau), 1)
# calculate hurst
hurst = m[0]*2
print(hurst)
return hurst
# dataframe for a historical instrument feature (ma_90 in this case). The index is the timestamps
# of upto lookback data points. The columns of this dataframe are the stock symbols/instrumentIds.
mom10Data = lookbackInstrumentFeatures.getFeatureDf('mom_10')
mom30Data = lookbackInstrumentFeatures.getFeatureDf('mom_30')
ma90Data = lookbackInstrumentFeatures.getFeatureDf('ma_90')
# Here we are making predictions on the basis of Hurst exponent if enough data is available, otherwise
# we simply get out of our position
if len(ma90Data.index)>20:
mom30 = mom30Data.iloc[-1]
mom10 = mom10Data.iloc[-1]
ma90 = ma90Data.iloc[-1]
# Calculate Hurst Exponent
hurst = ma90Data.apply(hurst_f, axis=0)
# Go long if Hurst > 0.5 and both long term and short term momentum are positive
predictions[(hurst > 0.5) & (mom30 > 0) & (mom10 > 0)] = 1
# Go short if Hurst > 0.5 and both long term and short term momentum are negative
predictions[(hurst > 0.5) & (mom30 <= 0) & (mom10 <= 0)] = 0
# Get out of position if Hurst > 0.5 and long term momentum is positive while short term is negative
predictions[(hurst > 0.5) & (mom30 > 0) & (mom10 <= 0)] = 0.5
# Get out of position if Hurst > 0.5 and long term momentum is negative while short term is positive
predictions[(hurst > 0.5) & (mom30 <= 0) & (mom10 > 0)] = 0.5
# Get out of position if Hurst < 0.5
predictions[hurst <= 0.5] = 0.5
else:
# If no sufficient data then don't take any positions
predictions.values[:] = 0.5
return predictions
def updateCount(self):
self.count = self.count + 1
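# Hedged illustration (not part of the toolbox API): the same lag-dispersion estimate used in
# hurst_f above, pulled out as a standalone helper so it can be sanity-checked on synthetic
# numpy series -- a pure random walk should score near 0.5 and a drifting series above 0.5.
def hurst_demo(series, lags_to_test=20):
    tau = []
    lagvec = []
    for lag in range(2, lags_to_test):
        # difference of the series at the given lag
        pp = np.subtract(series[lag:], series[:-lag])
        lagvec.append(lag)
        # dispersion of the lagged differences
        tau.append(np.sqrt(np.std(pp)))
    # slope of the double-log fit gives the exponent
    m = np.polyfit(np.log10(lagvec), np.log10(tau), 1)
    return m[0] * 2
# Example usage on assumed synthetic inputs:
# print(hurst_demo(np.cumsum(np.random.randn(1000))))        # random walk, ~0.5
# print(hurst_demo(np.cumsum(np.random.randn(1000) + 0.2)))  # drifting series, > 0.5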
tf = MyTradingFunctions()
tsParams = MomentumTradingParams(tf)
tradingSystem = TradingSystem(tsParams)
results = tradingSystem.startTrading()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step32: Create Convolutional Model
Step35: Train the Neural Network
Step37: Show Stats
Step38: Hyperparameters
Step40: Train on a Single CIFAR-10 Batch
Step42: Fully Train the Model
Step45: Checkpoint
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
return x / 255
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
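# Quick illustrative check (not required by the project): pixel values 0..255 map to 0..1,
# e.g. normalize(np.array([[0., 127.5, 255.]])) -> array([[0., 0.5, 1.]])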
from sklearn.preprocessing import LabelBinarizer
encoder = None
def one_hot_encode(x):
One hot encode a list of sample labels. Return
a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
global encoder
if encoder is None:
encoder = LabelBinarizer()
encoder.fit(x)
return encoder.transform(x)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
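# Illustrative note (assumes the module-level encoder has already been fit on all ten
# CIFAR-10 labels by a previous call): one_hot_encode([3]) then returns a single row with
# a 1 in column 3 and 0 elsewhere, because the fitted encoder is reused.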
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
return tf.placeholder(
tf.float32,
shape=[None, *image_shape],
name='x'
)
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
return tf.placeholder(
tf.float32,
shape=[None, n_classes],
name='y'
)
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
return tf.placeholder(
tf.float32,
name='keep_prob'
)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
from math import ceil
def conv2d_maxpool(x_tensor, conv_num_outputs=32, conv_ksize=[4,4],
conv_strides=[3,3], pool_ksize=[2,2], pool_strides=[2,2]):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the
convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the
convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and
max pooling of x_tensor
W = tf.Variable(conv2d_normal_distribution((
*conv_ksize,
int(x_tensor.shape[3]),
conv_num_outputs
), stddev=conv2d_stddev))
bias = conv2d_bias(shape=(conv_num_outputs,))
conv = tf.nn.conv2d(
x_tensor,
W,
strides=[1, *conv_strides, 1],
padding='SAME'
)
conv_w_bias = conv + bias
a = tf.nn.relu(conv_w_bias)
return tf.nn.max_pool(
a,
[1, *pool_ksize, 1],
strides=[1, *pool_strides, 1],
padding='SAME'
)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
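# Hedged shape sanity check, left commented out because the weight helpers used inside
# conv2d_maxpool (conv2d_normal_distribution, conv2d_stddev, conv2d_bias) are defined in a
# later cell and must be run first. With the defaults, a 32x32x3 input convolved with
# stride 3 and 'SAME' padding gives 11x11, and the 2x2/stride-2 max pool gives 6x6:
# check_x = neural_net_image_input((32, 32, 3))
# print(conv2d_maxpool(check_x).get_shape())   # expected: (?, 6, 6, 32)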
from functools import reduce
from operator import mul
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
return tf.reshape(
x_tensor,
[tf.shape(x_tensor)[0], int(reduce(mul, x_tensor.shape[1:]))]
)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
W = tf.random_normal((int(x_tensor.shape[1]), num_outputs), stddev=0.1)
W = tf.Variable(W)
bias = tf.Variable(tf.zeros([num_outputs]))
h = tf.matmul(x_tensor, W) + bias
a = tf.nn.relu(h)
return a
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
W = tf.random_normal([int(x_tensor.shape[1]), num_outputs], stddev=0.1)
W = tf.Variable(W)
bias = tf.Variable(tf.zeros([num_outputs]))
h = tf.matmul(x_tensor, W) + bias
return h
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
# condensed parameter list
conv2d_normal_distribution = tf.truncated_normal
conv2d_stddev = 0.1
def conv2d_bias(shape):
# fill, zeros, random_normal, truncated_normal
return tf.Variable(tf.zeros(shape))
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# image is 32x32x3
x = conv2d_maxpool(
x, conv_num_outputs=16, conv_ksize=[5,5],
conv_strides=[2,2], pool_ksize=[1,1], pool_strides=[1,1]
)
x = tf.nn.dropout(x, keep_prob)
x = conv2d_maxpool(
x, conv_num_outputs=32, conv_ksize=[3,3],
conv_strides=[1,1], pool_ksize=[2,2], pool_strides=[2,2]
)
x = tf.nn.dropout(x, keep_prob)
x = conv2d_maxpool(
x, conv_num_outputs=48, conv_ksize=[3,3],
conv_strides=[1,1], pool_ksize=[1,1], pool_strides=[1,1]
)
x = flatten(x)
x = fully_conn(x, 864)
x = tf.nn.dropout(x, keep_prob)
x = fully_conn(x, 864)
x = tf.nn.dropout(x, keep_prob)
x = output(x, 10)
return x
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability,
feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
feed_dict = {
x: feature_batch,
y: label_batch,
keep_prob: keep_probability
}
session.run(
optimizer,
feed_dict=feed_dict
)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
feed_dict = {
x: feature_batch,
y: label_batch,
keep_prob: 1.0
}
loss = session.run(cost, feed_dict=feed_dict)
feed_dict = {
x: valid_features,
y: valid_labels,
keep_prob: 1.0
}
acc = session.run(accuracy, feed_dict=feed_dict)
print('loss: {:6.4f} accuracy:'.format(loss), acc)
# TODO: Tune Parameters
epochs = 20
batch_size = 256
keep_probability = 0.7
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We draw and plot samples from the stick-breaking process.
Step2: As stated above, as $\alpha \to \infty$, samples from the Dirichlet process converge to the base distribution.
Step3: Dirichlet process mixtures
Step4: We now focus on a single mixture and decompose it into its individual (weighted) mixture components.
Step5: Sampling from these stochastic processes is fun, but these ideas become truly useful when we fit them to data. The discreteness of samples and the stick-breaking representation of the Dirichlet process lend themselves nicely to Markov chain Monte Carlo simulation of posterior distributions. We will perform this sampling using pymc3.
Step6: For convenience in specifying the prior, we standardize the waiting time between eruptions.
Step7: Observant readers will have noted that we have not been continuing the stick-breaking process indefinitely as indicated by its definition, but rather have been truncating this process after a finite number of breaks. Obviously, when computing with Dirichlet processes, it is necessary to only store a finite number of its point masses and weights in memory. This restriction is not terribly onerous, since with a finite number of observations, it seems quite likely that the number of mixture components that contribute non-negligible mass to the mixture will grow slower than the number of samples. This intuition can be formalized to show that the (expected) number of components that contribute non-negligible mass to the mixture approaches $\alpha \log N$, where $N$ is the sample size.
Step8: We sample from the model 2,000 times using NUTS initialized with ADVI.
Step9: The posterior distribution of $\alpha$ is highly concentrated between 0.25 and 1.
Step10: To verify that truncation is not biasing our results, we plot the posterior expected mixture weight of each component.
Step11: We see that only three mixture components have appreciable posterior expected weights, so we conclude that truncating the Dirichlet process to thirty components has not appreciably affected our estimates.
Step12: As above, we can decompose this density estimate into its (weighted) mixture components.
Step13: The Dirichlet process mixture model is incredibly flexible in terms of the family of parametric component distributions ${f_{\theta}\ |\ f_{\theta} \in \Theta}$. We illustrate this flexibility below by using Poisson component distributions to estimate the density of sunspots per year.
Step14: For this example, the model is
Step15: For the sunspot model, the posterior distribution of $\alpha$ is concentrated between 0.6 and 1.2, indicating that we should expect more components to contribute non-negligible amounts to the mixture than for the Old Faithful waiting time model.
Step16: Indeed, we see that between ten and fifteen mixture components have appreciable posterior expected weight.
Step17: We now calculate and plot the fitted density estimate.
Step18: Again, we can decompose the posterior expected density into weighted mixture densities.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import division
from matplotlib import pyplot as plt
import numpy as np
import pymc3 as pm
import scipy as sp
import seaborn as sns
from statsmodels.datasets import get_rdataset
from theano import tensor as tt
blue, *_ = sns.color_palette()
SEED = 5132290 # from random.org
np.random.seed(SEED)
N = 20
K = 30
alpha = 2.
P0 = sp.stats.norm
beta = sp.stats.beta.rvs(1, alpha, size=(N, K))
w = np.empty_like(beta)
w[:, 0] = beta[:, 0]
w[:, 1:] = beta[:, 1:] * (1 - beta[:, :-1]).cumprod(axis=1)
omega = P0.rvs(size=(N, K))
x_plot = np.linspace(-3, 3, 200)
sample_cdfs = (w[..., np.newaxis] * np.less.outer(omega, x_plot)).sum(axis=1)
fig, ax = plt.subplots(figsize=(8, 6))
ax.plot(x_plot, sample_cdfs[0], c='gray', alpha=0.75,
label='DP sample CDFs');
ax.plot(x_plot, sample_cdfs[1:].T, c='gray', alpha=0.75);
ax.plot(x_plot, P0.cdf(x_plot), c='k', label='Base CDF');
ax.set_title(r'$\alpha = {}$'.format(alpha));
ax.legend(loc=2);
fig, (l_ax, r_ax) = plt.subplots(ncols=2, sharex=True, sharey=True, figsize=(16, 6))
K = 50
alpha = 10.
beta = sp.stats.beta.rvs(1, alpha, size=(N, K))
w = np.empty_like(beta)
w[:, 0] = beta[:, 0]
w[:, 1:] = beta[:, 1:] * (1 - beta[:, :-1]).cumprod(axis=1)
omega = P0.rvs(size=(N, K))
sample_cdfs = (w[..., np.newaxis] * np.less.outer(omega, x_plot)).sum(axis=1)
l_ax.plot(x_plot, sample_cdfs[0], c='gray', alpha=0.75,
label='DP sample CDFs');
l_ax.plot(x_plot, sample_cdfs[1:].T, c='gray', alpha=0.75);
l_ax.plot(x_plot, P0.cdf(x_plot), c='k', label='Base CDF');
l_ax.set_title(r'$\alpha = {}$'.format(alpha));
l_ax.legend(loc=2);
K = 200
alpha = 50.
beta = sp.stats.beta.rvs(1, alpha, size=(N, K))
w = np.empty_like(beta)
w[:, 0] = beta[:, 0]
w[:, 1:] = beta[:, 1:] * (1 - beta[:, :-1]).cumprod(axis=1)
omega = P0.rvs(size=(N, K))
sample_cdfs = (w[..., np.newaxis] * np.less.outer(omega, x_plot)).sum(axis=1)
r_ax.plot(x_plot, sample_cdfs[0], c='gray', alpha=0.75,
label='DP sample CDFs');
r_ax.plot(x_plot, sample_cdfs[1:].T, c='gray', alpha=0.75);
r_ax.plot(x_plot, P0.cdf(x_plot), c='k', label='Base CDF');
r_ax.set_title(r'$\alpha = {}$'.format(alpha));
r_ax.legend(loc=2);
N = 5
K = 30
alpha = 2
P0 = sp.stats.norm
f = lambda x, theta: sp.stats.norm.pdf(x, theta, 0.3)
beta = sp.stats.beta.rvs(1, alpha, size=(N, K))
w = np.empty_like(beta)
w[:, 0] = beta[:, 0]
w[:, 1:] = beta[:, 1:] * (1 - beta[:, :-1]).cumprod(axis=1)
theta = P0.rvs(size=(N, K))
dpm_pdf_components = f(x_plot[np.newaxis, np.newaxis, :], theta[..., np.newaxis])
dpm_pdfs = (w[..., np.newaxis] * dpm_pdf_components).sum(axis=1)
fig, ax = plt.subplots(figsize=(8, 6))
ax.plot(x_plot, dpm_pdfs.T, c='gray');
ax.set_yticklabels([]);
fig, ax = plt.subplots(figsize=(8, 6))
ix = 1
ax.plot(x_plot, dpm_pdfs[ix], c='k', label='Density');
ax.plot(x_plot, (w[..., np.newaxis] * dpm_pdf_components)[ix, 0],
'--', c='k', label='Mixture components (weighted)');
ax.plot(x_plot, (w[..., np.newaxis] * dpm_pdf_components)[ix].T,
'--', c='k');
ax.set_yticklabels([]);
ax.legend(loc=1);
old_faithful_df = get_rdataset('faithful', cache=True).data[['waiting']]
old_faithful_df['std_waiting'] = (old_faithful_df.waiting - old_faithful_df.waiting.mean()) / old_faithful_df.waiting.std()
old_faithful_df.head()
fig, ax = plt.subplots(figsize=(8, 6))
n_bins = 20
ax.hist(old_faithful_df.std_waiting, bins=n_bins, color=blue, lw=0, alpha=0.5);
ax.set_xlabel('Standardized waiting time between eruptions');
ax.set_ylabel('Number of eruptions');
N = old_faithful_df.shape[0]
K = 30
def stick_breaking(beta):
portion_remaining = tt.concatenate([[1], tt.extra_ops.cumprod(1 - beta)[:-1]])
return beta * portion_remaining
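# Hedged numpy illustration (separate from the theano version above): truncated stick-breaking
# weights sum to just under 1, with an expected leftover tail of (alpha/(1+alpha))**K, so a
# K=30 truncation is harmless for the small alpha inferred below. Relatedly, the expected
# number of components used by N observations is roughly alpha*log(N); with alpha between
# 0.25 and 1 and N=272 eruptions that is only ~1.5-5.6 components.
# beta_np = np.random.beta(1., 1., size=K)           # alpha=1 chosen only for illustration
# w_np = np.empty_like(beta_np)
# w_np[0] = beta_np[0]
# w_np[1:] = beta_np[1:] * (1 - beta_np[:-1]).cumprod()
# print(w_np.sum())                                   # close to, but below, 1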
with pm.Model() as model:
alpha = pm.Gamma('alpha', 1., 1.)
beta = pm.Beta('beta', 1., alpha, shape=K)
w = pm.Deterministic('w', stick_breaking(beta))
tau = pm.Gamma('tau', 1., 1., shape=K)
lambda_ = pm.Uniform('lambda', 0, 5, shape=K)
mu = pm.Normal('mu', 0, tau=lambda_ * tau, shape=K)
obs = pm.NormalMixture('obs', w, mu, tau=lambda_ * tau,
observed=old_faithful_df.std_waiting.values)
with model:
trace = pm.sample(2000, n_init=50000, random_seed=SEED)
pm.traceplot(trace, varnames=['alpha']);
fig, ax = plt.subplots(figsize=(8, 6))
plot_w = np.arange(K) + 1
ax.bar(plot_w - 0.5, trace['w'].mean(axis=0), width=1., lw=0);
ax.set_xlim(0.5, K);
ax.set_xlabel('Component');
ax.set_ylabel('Posterior expected mixture weight');
post_pdf_contribs = sp.stats.norm.pdf(np.atleast_3d(x_plot),
trace['mu'][:, np.newaxis, :],
1. / np.sqrt(trace['lambda'] * trace['tau'])[:, np.newaxis, :])
post_pdfs = (trace['w'][:, np.newaxis, :] * post_pdf_contribs).sum(axis=-1)
post_pdf_low, post_pdf_high = np.percentile(post_pdfs, [2.5, 97.5], axis=0)
fig, ax = plt.subplots(figsize=(8, 6))
n_bins = 20
ax.hist(old_faithful_df.std_waiting.values, bins=n_bins, normed=True,
color=blue, lw=0, alpha=0.5);
ax.fill_between(x_plot, post_pdf_low, post_pdf_high,
color='gray', alpha=0.45);
ax.plot(x_plot, post_pdfs[0],
c='gray', label='Posterior sample densities');
ax.plot(x_plot, post_pdfs[::100].T, c='gray');
ax.plot(x_plot, post_pdfs.mean(axis=0),
c='k', label='Posterior expected density');
ax.set_xlabel('Standardized waiting time between eruptions');
ax.set_yticklabels([]);
ax.set_ylabel('Density');
ax.legend(loc=2);
fig, ax = plt.subplots(figsize=(8, 6))
n_bins = 20
ax.hist(old_faithful_df.std_waiting.values, bins=n_bins, normed=True,
color=blue, lw=0, alpha=0.5);
ax.plot(x_plot, post_pdfs.mean(axis=0),
c='k', label='Posterior expected density');
ax.plot(x_plot, (trace['w'][:, np.newaxis, :] * post_pdf_contribs).mean(axis=0)[:, 0],
'--', c='k', label='Posterior expected mixture\ncomponents\n(weighted)');
ax.plot(x_plot, (trace['w'][:, np.newaxis, :] * post_pdf_contribs).mean(axis=0),
'--', c='k');
ax.set_xlabel('Standardized waiting time between eruptions');
ax.set_yticklabels([]);
ax.set_ylabel('Density');
ax.legend(loc=2);
sunspot_df = get_rdataset('sunspot.year', cache=True).data
sunspot_df.head()
K = 50
N = sunspot_df.shape[0]
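# Truncated (K-component) Dirichlet-process mixture of Poissons for the yearly sunspot counts.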
with pm.Model() as model:
alpha = pm.Gamma('alpha', 1., 1.)
beta = pm.Beta('beta', 1, alpha, shape=K)
w = pm.Deterministic('w', stick_breaking(beta))
mu = pm.Uniform('mu', 0., 300., shape=K)
obs = pm.Mixture('obs', w, pm.Poisson.dist(mu), observed=sunspot_df['sunspot.year'])
with model:
step = pm.Metropolis()
trace_ = pm.sample(100000, step=step, random_seed=SEED)
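# The next line discards the first 50,000 Metropolis samples as burn-in and keeps every 50th sample to reduce autocorrelation.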
trace = trace_[50000::50]
pm.traceplot(trace, varnames=['alpha']);
fig, ax = plt.subplots(figsize=(8, 6))
plot_w = np.arange(K) + 1
ax.bar(plot_w - 0.5, trace['w'].mean(axis=0), width=1., lw=0);
ax.set_xlim(0.5, K);
ax.set_xlabel('Component');
ax.set_ylabel('Posterior expected mixture weight');
x_plot = np.arange(250)
post_pmf_contribs = sp.stats.poisson.pmf(np.atleast_3d(x_plot),
trace['mu'][:, np.newaxis, :])
post_pmfs = (trace['w'][:, np.newaxis, :] * post_pmf_contribs).sum(axis=-1)
post_pmf_low, post_pmf_high = np.percentile(post_pmfs, [2.5, 97.5], axis=0)
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(sunspot_df['sunspot.year'].values, bins=40, normed=True, lw=0, alpha=0.75);
ax.fill_between(x_plot, post_pmf_low, post_pmf_high,
color='gray', alpha=0.45)
ax.plot(x_plot, post_pmfs[0],
c='gray', label='Posterior sample densities');
ax.plot(x_plot, post_pmfs[::200].T, c='gray');
ax.plot(x_plot, post_pmfs.mean(axis=0),
c='k', label='Posterior expected density');
ax.set_xlabel('Yearly sunspot count');
ax.set_yticklabels([]);
ax.legend(loc=1);
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(sunspot_df['sunspot.year'].values, bins=40, normed=True, lw=0, alpha=0.75);
ax.plot(x_plot, post_pmfs.mean(axis=0),
c='k', label='Posterior expected density');
ax.plot(x_plot, (trace['w'][:, np.newaxis, :] * post_pmf_contribs).mean(axis=0)[:, 0],
'--', c='k', label='Posterior expected\nmixture components\n(weighted)');
ax.plot(x_plot, (trace['w'][:, np.newaxis, :] * post_pmf_contribs).mean(axis=0),
'--', c='k');
ax.set_xlabel('Yearly sunspot count');
ax.set_yticklabels([]);
ax.legend(loc=1);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step5: To audit the OSM file, we first need an overview of the data.
Step7: This will save the audited Jakarta OSM data into jakarta_audit.osm
Step8: The processed map has been saved to jakarta_audit.osm.json
Step9: Okay, let's test whether the data is what we expect
Step10: The data seems about right.
Step11: Okay, it seems that we have successfully inserted all of our data into the MongoDB instance.
Step12: Overview of the data
Step13: Show 5 records that have a street
Step14: Show the top 5 contributing users
Step15: Show the restaurant's name, the food they serve, and contact number
|
<ASSISTANT_TASK:>
Python Code:
OSMFILE = 'dataset/jakarta.osm'
%%writefile 02-codes/audit.py
import xml.etree.cElementTree as ET
from collections import defaultdict
import re
import pprint
from optparse import OptionParser
# OSMFILE = "sample.osm"
# OSMFILE = "example_audit.osm"
#In Indonesia, type first, then name. So the regex has to be changed.
#street_type_re = re.compile(r'\b\S+\.?$', re.IGNORECASE)
street_type_re = re.compile(r'^\b\S+\.?', re.IGNORECASE)
# expected = ["Street", "Avenue", "Boulevard", "Drive", "Court", "Place", "Square", "Lane", "Road",
# "Trail", "Parkway", "Commons"]
expected = ['Jalan', 'Gang','Street', 'Road']
# UPDATE THIS VARIABLE
#Mapping has to be sorted by key length, descending.
#language English-Indonesian {Street: Jalan}.
#{Sudirman Street: Jalan Sudirman}
mapping = {
'jl.':'Jalan',
'JL.':'Jalan',
'Jl.':'Jalan',
'GG':'Gang',
'gg': 'Gang',
'jl' :'Jalan',
'JL':'Jalan',
'Jl':'Jalan',
}
def audit_street_type(street_types, street_name):
m = street_type_re.search(street_name)
if m:
street_type = m.group()
if street_type not in expected:
street_types[street_type].add(street_name)
#return True if need to be updated
return True
return False
def is_street_name(elem):
"""Perhaps the addr:full key should also be included and fixed."""
return (elem.attrib['k'] == "addr:street") or (elem.attrib['k'] == "addr:full")
def is_name_is_street(elem):
"""Some people fill in the street name under k=name.
We should handle this case as well."""
s = street_type_re.search(elem.attrib['v'])
#print s
return (elem.attrib['k'] == "name") and s and s.group() in mapping.keys()
def audit(osmfile):
osm_file = open(osmfile, "r")
street_types = defaultdict(set)
# tree = ET.parse(osm_file, events=("start",))
tree = ET.parse(osm_file)
listtree = list(tree.iter())
for elem in listtree:
if elem.tag == "node" or elem.tag == "way":
n_add = None
for tag in elem.iter("tag"):
if is_street_name(tag):
if audit_street_type(street_types, tag.attrib['v']):
#Update the tag attribtue
tag.attrib['v'] = update_name(tag.attrib['v'],mapping)
elif is_name_is_street(tag):
tag.attrib['v'] = update_name(tag.attrib['v'],mapping)
n_add = tag.attrib['v']
elif tag.attrib['k'] == 'phone':
# print tag.attrib['v']
tag.attrib['v'] = update_phone(tag.attrib['v'])
if n_add:
elem.append(ET.Element('tag',{'k':'addr:street', 'v':n_add}))
#write the to the file we've been audit
tree.write(osmfile[:osmfile.find('.osm')]+'_audit.osm')
return street_types
def update_phone(number):
"""Normalize all the inconsistent phone numbers."""
stripped = re.sub('[^A-Za-z0-9]+', '', number)
replace0to62 = re.sub('^0', '62',stripped)
separate_area_code = re.sub('^6221','6221 ',replace0to62)
tidy_country_code = re.sub('^62', '+62 ', separate_area_code )
fixed = tidy_country_code
return fixed
def update_name(name, mapping):
"""Fix abbreviated street names so the names are uniform.
The mapping keys are processed longest-first so that shorter keys do not match before longer ones."""
dict_map = sorted(mapping.keys(), key=len, reverse=True)
for key in dict_map:
if name.find(key) != -1:
name = name.replace(key,mapping[key])
return name
#essentially, in Indonesia every type of street is referred to as 'Jalan' (Street).
#So if the name doesn't have any prefix, add 'Jalan'
return 'Jalan ' + name
def test():
st_types = audit(OSMFILE)
# pprint.pprint(dict(st_types))
#assert len(st_types) == 3
# for st_type, ways in st_types.iteritems():
# for name in ways:
# better_name = update_name(name, mapping)
# print name, "=>", better_name
if __name__ == '__main__':
test()
# parser = OptionParser()
# parser.add_option('-d', '--data', dest='audited_data', help='osm data that want to be audited')
# (opts,args) = parser.parse_args()
# audit(opts.audited_data)
# %load 02-codes/data.py
#!/usr/bin/env python
import xml.etree.ElementTree as ET
import pprint
import re
import codecs
import json
lower = re.compile(r'^([a-z]|_)*$')
lower_colon = re.compile(r'^([a-z]|_)*:([a-z]|_)*$')
problemchars = re.compile(r'[=\+/&<>;\'"\?%#$@\,\. \t\r\n]')
addresschars = re.compile(r'addr:(\w+)')
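# The regexes above classify tag keys: all-lowercase, lowercase-with-colon, keys with problematic characters, and addr:* sub-fields.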
CREATED = [ "version", "changeset", "timestamp", "user", "uid"]
OSM_FILE = 'dataset/jakarta_audit.osm'
def shape_element(element):
#node = defaultdict(set)
node = {}
if element.tag == "node" or element.tag == "way" :
#create the dictionary based on exaclty the value in element attribute.
node = {'created':{}, 'type':element.tag}
for k in element.attrib:
try:
v = element.attrib[k]
except KeyError:
continue
if k == 'lat' or k == 'lon':
continue
if k in CREATED:
node['created'][k] = v
else:
node[k] = v
try:
node['pos']=[float(element.attrib['lat']),float(element.attrib['lon'])]
except KeyError:
pass
if 'address' not in node.keys():
node['address'] = {}
#Iterate the content of the tag
for stag in element.iter('tag'):
#Init the dictionry
k = stag.attrib['k']
v = stag.attrib['v']
#Checking if indeed prefix with 'addr' and no ':' afterwards
if k.startswith('addr:'):
if len(k.split(':')) == 2:
content = addresschars.search(k)
if content:
node['address'][content.group(1)] = v
else:
node[k]=v
if not node['address']:
node.pop('address',None)
#Special case when the tag == way, scrap all the nd key
if element.tag == "way":
node['node_refs'] = []
for nd in element.iter('nd'):
node['node_refs'].append(nd.attrib['ref'])
# if 'address' in node.keys():
# pprint.pprint(node['address'])
return node
else:
return None
def process_map(file_in, pretty = False):
"""Process the OSM file into a JSON file ready to be imported into MongoDB."""
file_out = "{0}.json".format(file_in)
data = []
with codecs.open(file_out, "w") as fo:
for _, element in ET.iterparse(file_in):
el = shape_element(element)
if el:
data.append(el)
if pretty:
fo.write(json.dumps(el, indent=2)+"\n")
else:
fo.write(json.dumps(el) + "\n")
return data
def test():
data = process_map(OSM_FILE)
pprint.pprint(data[500])
if __name__ == "__main__":
test()
from data import *
import pprint
data = process_map('dataset/jakarta_audit.osm')
import json
pprint.pprint(data[0:2])
from pymongo import MongoClient
client = MongoClient('mongodb://localhost:27017')
db = client.examples
db.jktosm.remove()
[db.jktosm.insert(e) for e in data]
pipeline = [
{'$limit' : 2}
]
pprint.pprint(db.jktosm.aggregate(pipeline)['result'])
!ls -lh dataset/jakarta*
pipeline = [
{'$match': {'address.street':{'$exists':1}}},
{'$limit' : 1}
]
result = db.jktosm.aggregate(pipeline)['result']
pprint.pprint(result)
pipeline = [
{'$match': {'created.user':{'$exists':1}}},
{'$group': {'_id':'$created.user',
'count':{'$sum':1}}},
{'$sort': {'count':-1}},
{'$limit' : 5}
]
result = db.jktosm.aggregate(pipeline)['result']
pprint.pprint(result)
pipeline = [
{'$match': {'amenity':'restaurant',
'name':{'$exists':1},
'cuisine':{'$exists':1},
'phone':{'$exists':1}
}
},
{'$project':{'_id':'$name',
'cuisine':'$cuisine',
'contact':'$phone'}}
]
result = db.jktosm.aggregate(pipeline)['result']
pprint.pprint(result)
pipeline = [
{'$match': {
'phone':{'$exists':1}
}
},
{'$project':{'_id':'$phone'}},
{'$limit': 20}
]
result = db.jktosm.aggregate(pipeline)['result']
pprint.pprint(result)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The timestep for these simulations is 2 fs (can be found in /data/choderalab/fah/initial-models/projects/ABL1_HUMAN_D0_V1/RUN0/integrator.xml [stepSize=".002"]).
Step2: Load all trajectories > 1 us.
Step3: There are __ sims in this. The shortest one is run0-clone93.h5.
Step4: Section 1
Step5: Section 2
|
<ASSISTANT_TASK:>
Python Code:
#Import libraries
import matplotlib.pyplot as plt
import mdtraj as md
import glob
import numpy as np
from msmbuilder.dataset import dataset
%pylab inline
#Import longest trajectory.
t = md.load("run0-clone138.h5")
frame = np.arange(len(t))[:, np.newaxis]
# Using 0.25 so that units are in ns.
time = frame * .250
sim_time = time[-1] * 1e-3
print "Length of this longest simulation of Src is %s us." % ''.join(map(str, sim_time))
rmsd = md.rmsd(t,t,frame=0)
plt.plot(time, rmsd)
plt.xlabel('time (ns)')
plt.ylabel('RMSD(nm)')
plt.title('RMSD')
# For now making dir long_sims in bash using:
# > for file in $(find * -type f -size +300000); do cp $file long_sims/$file; done
filenames = glob.glob("run0*.h5")
trajectories = [md.load(filename) for filename in filenames]
len(trajectories)
t_long_min = md.load("run0-clone93.h5")
frame = np.arange(len(t_long_min))[:, np.newaxis]
# Using 0.25 so that units are in ns.
time = frame * .250
sim_time = time[-1] * 1e-3
print "Length of run0-clone1.h5 %s us." % ''.join(map(str, sim_time))
# Sum the frames over all loaded trajectories; each frame is 0.25 ns
total_frames = sum(len(traj) for traj in trajectories)
sim_time = total_frames * 0.250 * 1e-3
print "The total length of all these long sims is %s us." % sim_time
from msmbuilder import msm, featurizer, utils, decomposition
# Make dihedral_features
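# (each frame is featurized as the sines and cosines of its phi, psi and chi2 dihedral angles)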
dihedrals = featurizer.DihedralFeaturizer(types=["phi", "psi", "chi2"]).transform(trajectories)
# Make tICA features
tica = decomposition.tICA(n_components = 4)
X = tica.fit_transform(dihedrals)
#Note the default lagtime here is 1 (=250ps),
#which is super short according to lit for building reasonable protein MSM.
Xf = np.concatenate(X)
hexbin(Xf[:,0], Xf[:, 1], bins='log')
title("Dihedral tICA Analysis")
xlabel("Slowest Coordinate")
ylabel("Second Slowest Coordinate")
savefig("src_10467_msm.png", bbox_inches="tight")
#Load trajectory with ensembler models
t_models = md.load("../../ensembler-models/traj-refine_implicit_md.xtc", top = "../../ensembler-models/topol-renumbered-implicit.pdb")
#Now make dihedrals of this.
dihedrals_models = featurizer.DihedralFeaturizer(types=["phi", "psi", "chi2"]).transform([t_models])
x_models = tica.transform(dihedrals_models)
#do not use fit here because don't want to change tica object, want to use one generated from sims.
#Now plot on the slow MSM features found above.
hexbin(Xf[:,0], Xf[:, 1], bins='log')
plot(x_models[0][:, 0], x_models[0][:, 1], 'o', markersize=5, label="ensembler models", color='white')
title("Dihedral tICA Analysis")
xlabel("Slowest Coordinate")
ylabel("Second Slowest Coordinate")
legend(loc=0)
savefig("src_10467_msm_wmodels.png", bbox_inches="tight")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dropout
Step2: Dropout forward pass
Step3: Dropout backward pass
Step4: Fully-connected nets with Dropout
Step5: Regularization experiment
|
<ASSISTANT_TASK:>
Python Code:
# As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
returns relative error
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
np.random.seed(231)
x = np.random.randn(500, 500) + 10
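# Sanity check: compare the mean and the fraction of zeroed activations at train vs test time for several p values.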
for p in [0.3, 0.6, 0.75]:
out, _ = dropout_forward(x, {'mode': 'train', 'p': p})
out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p})
print('Running tests with p = ', p)
print('Mean of input: ', x.mean())
print('Mean of train-time output: ', out.mean())
print('Mean of test-time output: ', out_test.mean())
print('Fraction of train-time output set to zero: ', (out == 0).mean())
print('Fraction of test-time output set to zero: ', (out_test == 0).mean())
print()
np.random.seed(231)
x = np.random.randn(10, 10) + 10
dout = np.random.randn(*x.shape)
dropout_param = {'mode': 'train', 'p': 0.8, 'seed': 123}
out, cache = dropout_forward(x, dropout_param)
dx = dropout_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda xx: dropout_forward(xx, dropout_param)[0], x, dout)
print('dx relative error: ', rel_error(dx, dx_num))
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for dropout in [0, 0.25, 0.5]:
print('Running check with dropout = ', dropout)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
weight_scale=5e-2, dtype=np.float64,
dropout=dropout, seed=123)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
print()
# Train two identical nets, one with dropout and one without
np.random.seed(231)
num_train = 500
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
dropout_choices = [0, 0.75]
for dropout in dropout_choices:
model = FullyConnectedNet([500], dropout=dropout)
print(dropout)
solver = Solver(model, small_data,
num_epochs=25, batch_size=100,
update_rule='adam',
optim_config={
'learning_rate': 5e-4,
},
verbose=True, print_every=100)
solver.train()
solvers[dropout] = solver
# Plot train and validation accuracies of the two models
train_accs = []
val_accs = []
for dropout in dropout_choices:
solver = solvers[dropout]
train_accs.append(solver.train_acc_history[-1])
val_accs.append(solver.val_acc_history[-1])
plt.subplot(3, 1, 1)
for dropout in dropout_choices:
plt.plot(solvers[dropout].train_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Train accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
for dropout in dropout_choices:
plt.plot(solvers[dropout].val_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Val accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.gcf().set_size_inches(15, 15)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step9: Import and process data
Step10: Neural Network
Step11: Test accuracy of model(s)
Step12: What if the model hasn't seen data from the patient? What then?!
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
#import tensorflow.contrib.learn.python.learn as learn
import tflearn
import scipy as sp
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from random import shuffle, randint
from sklearn.utils import shuffle as mutualShuf
import os
import pandas as pd
import sklearn
import datetime
%matplotlib inline
def importData(filepath):
ppt = np.genfromtxt(filepath)
dppt = np.diff(np.transpose(ppt))
print(filepath, "Shape:", dppt[1:16,:].shape)
return dppt[1:16,:]
pathIll = "./inData/clean_ecg/ill/"
pathHealth = "./inData/clean_ecg/health/"
illLst = []
healthLst = []
for file in os.listdir(pathIll):
illLst.append(importData(pathIll+file))
for file in os.listdir(pathHealth):
healthLst.append(importData(pathHealth+file))
print("Outputing Frank leads")
healthPat = np.concatenate((healthLst[:]), axis=1)[12:15]
illPat = np.concatenate((illLst[:]), axis=1)[12:15]
print(healthPat.shape, illPat.shape)
def findAbove(arr, threshold, skip):
"""Return indices for values above threshold in arr, keeping only every skip-th index."""
inlst = []
for index, item in enumerate(arr):
if item >= threshold:
inlst.append(index)
return inlst[::skip]
def processClassData(classData):
"""Process classData.
Returns a one-hot array of shape [len(classData), 2]."""
# Convert label data to one-hot array
classDataOH = np.zeros((len(classData),2))
classDataOH[np.arange(len(classData)), classData] = 1
return classDataOH
def getSamples(Arr, indexArr, buffer):
"""Get samples for inputting into the CNN."""
sampleArr = []
for index, item in enumerate(indexArr):
if Arr[0:, item-buffer:item+buffer].shape != (Arr.shape[0], buffer*2):
pass
else:
sampleArr.append(Arr[0:, item-buffer:item+buffer])
return np.array(sampleArr)
def visualiseData(ecgData, classData, gridSize, axis):
"""Plot labelled example data in a gridSize*gridSize grid."""
fig, ax = plt.subplots(gridSize, gridSize, subplot_kw=dict(projection='3d'))
plt.suptitle("Labelled example data")
r = randint(0,len(classData)-ecgData.shape[1])
k = 0
if gridSize == 1:
ax.plot(ecgData[r+k,0], ecgData[r+k,1], ecgData[r+k,2])
else:
for i in np.arange(0,gridSize,1):
for j in np.arange(0,gridSize,1):
k = k + 1
ax[i,j].plot(ecgData[r+k,0], ecgData[r+k,1], ecgData[r+k,2])
if axis == False:
ax[i,j].axis("off")
ax[i,j].annotate(classData[r+k], xy=(0, 0), xycoords='axes points',\
size=10, ha='left', va='top')
def undiff(ecgData, buffer):
"""Reverse the differentiation done earlier through np.cumsum."""
ecgData = np.array(ecgData)
ecgData = np.reshape(ecgData, (ecgData.shape[0], ecgData.shape[1], buffer*2))
for i in np.arange(0,ecgData.shape[0],1):
for j in np.arange(0,ecgData.shape[1],1):
ecgData[i,j] = np.cumsum(ecgData[i,j])
ecgData = np.reshape(ecgData, (ecgData.shape[0], ecgData.shape[1], buffer*2, 1))
return ecgData
def splitData(coilData, classData):
"""Split data into healthy and ill types."""
illData = []
healthData = []
for index, item in enumerate(classData):
if item == 1:
illData.append(coilData[index])
if item == 0:
healthData.append(coilData[index])
return illData, healthData
def chunkify(lst,n):
"""Chunk a list into n chunks of approximately equal size."""
return [ lst[i::n] for i in range(n) ]
def functionTownCat(illArr, healthArr, illThreshold, healthThreshold, skip, shift, buffer, shuffle):
"""Return the processed ecgData with the leads concatenated into a 2d array per heartbeat
and the classData (one-hot). Also return arrays of ill and healthy ppts.
If shuffle is true, shuffle the data."""
illPeakArr = findAbove(np.abs(illArr[0]), illThreshold, skip)
sampleArrI = getSamples(illArr, np.array(illPeakArr), buffer)
healthPeakArr = findAbove(np.abs(healthArr[0]), healthThreshold, skip)
sampleArrH = getSamples(healthArr, np.array(healthPeakArr), buffer)
chunkyI = chunkify(sampleArrI, 10000)
chunkyH = chunkify(sampleArrH , 10000)
avgI = []
avgH = []
for i in np.arange(0,len(chunkyI),1):
avgI.append(np.mean(chunkyI[i], axis=0))
for i in np.arange(0,len(chunkyH),1):
avgH.append(np.mean(chunkyH[i], axis=0))
sampleArrI = np.array(avgI)
sampleArrH = np.array(avgH)
print("Total ill samples", len(illPeakArr), ". Compressed to", sampleArrI.shape)
print("Total healthy samples", len(healthPeakArr), ". Compressed to", sampleArrH.shape)
classData = []
for i in np.arange(0, sampleArrI.shape[0], 1):
classData.append(1)
for i in np.arange(0, sampleArrH.shape[0], 1):
classData.append(0)
ecgData = np.concatenate((sampleArrI, sampleArrH), axis=0)
if shuffle == True:
classData, ecgData = mutualShuf(np.array(classData), ecgData, random_state=0)
classDataOH = processClassData(classData)
ecgData = np.reshape(ecgData, [-1, sampleArrI.shape[1], buffer*2, 1])
return ecgData, classDataOH, classData
buffer = 300
healthThreshold = 0.02
illThreshold = 0.02
skip = 1
shift = 0
shuf = True
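# Detect peaks in the first lead, cut fixed windows of 2*buffer samples around them, and build one-hot labels.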
ecgData, classDataOH, classData = functionTownCat(illPat, healthPat, illThreshold, healthThreshold, skip,\
shift, buffer, shuf)
# Reintegrate the found values...
ecgData = undiff(ecgData, buffer)
# Take 20% for testing later:
testData = ecgData[:round(ecgData.shape[0]*0.2)]
trainData = ecgData[round(ecgData.shape[0]*0.2):]
testLabels = classDataOH[:round(ecgData.shape[0]*0.2)]
trainLabels = classDataOH[round(ecgData.shape[0]*0.2):]
print(ecgData.shape)
visualiseData(np.reshape(ecgData,(-1,ecgData.shape[1],buffer*2))[:,:], classData, 2, True)
#plt.plot(ecgData[0,0,:]*ecgData[0,1,:])
#plt.savefig("./outData/figures/exampleDataECGundiff.pdf")
print(trainData.shape)
sess = tf.InteractiveSession()
tf.reset_default_graph()
tflearn.initializations.normal()
# ecgData = np.zeros((50,12,400,1)) # If ecgData is not defined
# Input layer (shape must match the 4D ecgData built above: [None, n_leads, buffer*2, 1]):
net = tflearn.layers.core.input_data(shape=[None, trainData.shape[1], buffer*2, 1])
# First layer (2D convolution over the lead x time image):
net = tflearn.layers.conv.conv_2d(net, 32, 5, activation="leaky_relu")
net = tflearn.layers.conv.max_pool_2d(net, 2)
# Second layer:
net = tflearn.layers.conv.conv_2d(net, 64, 5, activation="leaky_relu")
net = tflearn.layers.conv.max_pool_2d(net, 2)
net = tflearn.layers.core.flatten(net)
# Fully connected layer 1:
net = tflearn.layers.core.fully_connected(net, 1024, regularizer="L2", weight_decay=0.001, activation="leaky_relu")
# Dropout layer:
net = tflearn.layers.core.dropout(net, keep_prob=0.5)
# Output layer:
net = tflearn.layers.core.fully_connected(net, 2, activation="softmax")
net = tflearn.layers.estimator.regression(net, optimizer='adam', loss='categorical_crossentropy',\
learning_rate=0.00001)
model = tflearn.DNN(net, tensorboard_verbose=3)
model.fit(trainData, trainLabels, n_epoch=1, show_metric=True)
# Save model?
#now = datetime.datetime.now()
#model.save("./outData/models/cleanECG_2dconv_12lead_"+now.isoformat()+"_.tflearn")
#model.load("./outData/models/cleanECG_undiff_20e_300buff_0shift_2017-02-21T19:20:35.702943_.tflearn")
#model.load("./outData/models/cleanECG_undiff_20e_150buff_2017-02-21T16:15:02.602923_.tflearn")
#model.load("./outData/models/cleanECG_2dconv_12lead_2017-03-08T10:15:17.200943_.tflearn")
#model.load("./outData/models/cleanECG_2dconv_12lead_2017-03-09T18:05:18.655939_.tflearn")
labellst = classData[:round(ecgData.shape[0]*0.2)]
healthTest = []
illTest = []
for index, item in enumerate(labellst):
if item == 1:
illTest.append(testData[index])
if item == 0:
healthTest.append(testData[index])
healthLabel = np.tile([1,0], (len(healthTest), 1))
illLabel = np.tile([0,1], (len(illTest), 1))
print("Sensitivity:", model.evaluate(np.array(healthTest), healthLabel), "Specifity:",\
model.evaluate(np.array(illTest), illLabel),\
"Accuracy:", model.evaluate(testData, testLabels))
tpathIll = "./inData/clean_ecg/testIll/"
tpathHealth = "./inData/clean_ecg/testHealth/"
tillLst = []
thealthLst = []
for file in os.listdir(tpathIll):
tillLst.append(importData(tpathIll+file))
for file in os.listdir(tpathHealth):
thealthLst.append(importData(tpathHealth+file))
frank = True  # assumed flag: the training data above used the Frank leads (channels 12:15)
if frank == False:
print("Outputing standard ECG leads...")
thealth = np.concatenate((thealthLst[:]), axis=1)[0:12]
till = np.concatenate((tillLst[:]), axis=1)[0:12]
elif frank == True:
print("Outputing Frank leads...")
thealth = np.concatenate((thealthLst[:]), axis=1)[12:15]
till = np.concatenate((tillLst[:]), axis=1)[12:15]
print(thealth.shape, till.shape)
unseenData, unseenClassOH, unseenClass = functionTownCat(till, thealth, illThreshold, healthThreshold, \
skip, shift, buffer, True)
# Undifferentiate values
unseenData = undiff(unseenData, buffer)
tillarr, thealtharr = splitData(unseenData, unseenClass)
sens = model.evaluate(np.array(thealtharr), np.tile([1,0], (len(thealtharr), 1)))[0]
spec = model.evaluate(np.array(tillarr), np.tile([0,1], (len(tillarr), 1)))[0]
acc = model.evaluate(unseenData, unseenClassOH)[0]
lenh = len(thealtharr)
leni = len(tillarr)
print("Sensitivity:", sens,\
"Specifity:", spec,\
"Accuracy:", acc)
visualiseData(np.reshape(unseenData,(-1,unseenData.shape[1],buffer*2))[:,:,::20], unseenClass, 3, False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading data
Step2: By default,
Step3:
Step4: Preprocessing
Step5: Once we're confident about which component(s) we want to remove, we pass them
Step6: Detecting experimental events
Step7: The resulting events array is an ordinary 3-column
Step8: Event dictionaries like this one are used when extracting epochs from
Step9: For paradigms that are not event-related (e.g., analysis of resting-state
Step10: We'll also pass the event dictionary as the event_id parameter (so we can
Step11: Next we'll pool across left/right stimulus presentations so we can compare
Step12: Like
Step13: Note: Both
Step14: Estimating evoked responses
Step15: We can also get a more detailed view of each
Step16: Evoked objects can also be combined to show contrasts between conditions,
Step17: Inverse modeling
Step18: Finally, in order to plot the source estimate on the subject's cortical
|
<ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_filt-0-40_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
print(raw)
print(raw.info)
raw.plot_psd(fmax=50)
raw.plot(duration=5, n_channels=30)
# set up and fit the ICA
ica = mne.preprocessing.ICA(n_components=20, random_state=97, max_iter=800)
ica.fit(raw)
ica.exclude = [1, 2] # details on how we picked these are omitted here
ica.plot_properties(raw, picks=ica.exclude)
orig_raw = raw.copy()
raw.load_data()
ica.apply(raw)
# show some frontal channels to clearly illustrate the artifact removal
chs = ['MEG 0111', 'MEG 0121', 'MEG 0131', 'MEG 0211', 'MEG 0221', 'MEG 0231',
'MEG 0311', 'MEG 0321', 'MEG 0331', 'MEG 1511', 'MEG 1521', 'MEG 1531',
'EEG 001', 'EEG 002', 'EEG 003', 'EEG 004', 'EEG 005', 'EEG 006',
'EEG 007', 'EEG 008']
chan_idxs = [raw.ch_names.index(ch) for ch in chs]
orig_raw.plot(order=chan_idxs, start=12, duration=4)
raw.plot(order=chan_idxs, start=12, duration=4)
events = mne.find_events(raw, stim_channel='STI 014')
print(events[:5]) # show the first 5
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'buttonpress': 32}
fig = mne.viz.plot_events(events, event_id=event_dict, sfreq=raw.info['sfreq'],
first_samp=raw.first_samp)
reject_criteria = dict(mag=4000e-15, # 4000 fT
grad=4000e-13, # 4000 fT/cm
eeg=150e-6, # 150 µV
eog=250e-6) # 250 µV
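# Epochs whose peak-to-peak amplitude exceeds these limits are rejected as artifacts.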
epochs = mne.Epochs(raw, events, event_id=event_dict, tmin=-0.2, tmax=0.5,
reject=reject_criteria, preload=True)
conds_we_care_about = ['auditory/left', 'auditory/right',
'visual/left', 'visual/right']
epochs.equalize_event_counts(conds_we_care_about) # this operates in-place
aud_epochs = epochs['auditory']
vis_epochs = epochs['visual']
del raw, epochs # free up memory
aud_epochs.plot_image(picks=['MEG 1332', 'EEG 021'])
frequencies = np.arange(7, 30, 3)
power = mne.time_frequency.tfr_morlet(aud_epochs, n_cycles=2, return_itc=False,
freqs=frequencies, decim=3)
power.plot(['MEG 1332'])
aud_evoked = aud_epochs.average()
vis_evoked = vis_epochs.average()
mne.viz.plot_compare_evokeds(dict(auditory=aud_evoked, visual=vis_evoked),
legend='upper left', show_sensors='upper right')
aud_evoked.plot_joint(picks='eeg')
aud_evoked.plot_topomap(times=[0., 0.08, 0.1, 0.12, 0.2], ch_type='eeg')
evoked_diff = mne.combine_evoked([aud_evoked, vis_evoked], weights=[1, -1])
evoked_diff.pick_types(meg='mag').plot_topo(color='r', legend=False)
# load inverse operator
inverse_operator_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis-meg-oct-6-meg-inv.fif')
inv_operator = mne.minimum_norm.read_inverse_operator(inverse_operator_file)
# set signal-to-noise ratio (SNR) to compute regularization parameter (λ²)
snr = 3.
lambda2 = 1. / snr ** 2
# generate the source time course (STC)
stc = mne.minimum_norm.apply_inverse(vis_evoked, inv_operator,
lambda2=lambda2,
method='MNE') # or dSPM, sLORETA, eLORETA
# path to subjects' MRI files
subjects_dir = os.path.join(sample_data_folder, 'subjects')
# plot
stc.plot(initial_time=0.1, hemi='split', views=['lat', 'med'],
subjects_dir=subjects_dir)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hiding and removing cells
Step2:
|
<ASSISTANT_TASK:>
Python Code:
import ray
import ray.rllib.agents.ppo as ppo
from ray import serve
def train_ppo_model():
trainer = ppo.PPOTrainer(
config={"framework": "torch", "num_workers": 0},
env="CartPole-v0",
)
# Train for one iteration
trainer.train()
trainer.save("/tmp/rllib_checkpoint")
return "/tmp/rllib_checkpoint/checkpoint_000001/checkpoint-1"
checkpoint_path = train_ppo_model()
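# The returned checkpoint path can later be handed to a Ray Serve deployment that restores the trained policy for inference.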
# This can be useful if you don't want to clutter the page with details.
import ray
import ray.rllib.agents.ppo as ppo
from ray import serve
ray.shutdown()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2D Binning
Step2: Binning and Aggregating Values
Step3: Specifying the Number of Bins
Step4: Mixing binning and non-binned data
Step5: The Size of the Glyph Can be Altered
Step6: Applying a Custom Palette
Step7: Example with 9 Colors
Step8: Viewing Color Bins in Legend
Step9: Build Fruit Data
Step10: Without Dimension Binning
Step11: Unemployment Data
Step12: De-pivoted Data
|
<ASSISTANT_TASK:>
Python Code:
# Assumed setup for this excerpt: the legacy bokeh.charts API and Bokeh's sample datasets
from bokeh.charts import HeatMap, bins, show
from bokeh.palettes import RdYlGn6, RdYlGn9
from bokeh.sampledata.autompg import autompg as df
from bokeh.sampledata.unemployment1948 import data
import pandas as pd
df.head()
hm = HeatMap(df, x=bins('mpg'), y=bins('displ'))
show(hm)
hm = HeatMap(df, x=bins('mpg'), y=bins('displ'), values='cyl', stat='mean')
show(hm)
hm = HeatMap(df, x=bins('mpg'), y=bins('displ', bin_count=15),
values='cyl', stat='mean')
show(hm)
hm = HeatMap(df, x=bins('mpg'), y='cyl', values='displ', stat='mean')
show(hm)
hm = HeatMap(df, y=bins('displ'), x=bins('mpg'), values='cyl', stat='mean',
spacing_ratio=0.9)
show(hm)
hm = HeatMap(df, x=bins('mpg'), y=bins('displ'), stat='mean', values='cyl',
palette=RdYlGn6)
show(hm)
hm = HeatMap(df, x=bins('mpg'), y=bins('displ'), stat='mean', values='cyl',
palette=RdYlGn9)
show(hm)
hm = HeatMap(df, x=bins('mpg'), y=bins('displ'), values='cyl',
stat='mean', legend='top_right')
show(hm)
fruits = {'fruit': ['apples', 'apples', 'apples', 'apples', 'apples',
'pears', 'pears', 'pears', 'pears', 'pears',
'bananas', 'bananas', 'bananas', 'bananas', 'bananas'],
'fruit_count': [4, 5, 8, 12, 4, 6, 5, 4, 8, 7, 1, 2, 4, 8, 12],
'year': [2009, 2010, 2011, 2012, 2013, 2009, 2010, 2011, 2012, 2013, 2009, 2010,
2011, 2012, 2013]}
fruits['year'] = [str(yr) for yr in fruits['year']]
fruits_df = pd.DataFrame(fruits)
fruits_df.head()
hm = HeatMap(fruits, y='year', x='fruit', values='fruit_count', stat=None)
show(hm)
unempl_data = data.copy()
unempl_data.head()
# Remove the annual column if we don't want to show the total
del unempl_data['Annual']
# Convert numerical year to strings
unempl_data['Year'] = unempl_data['Year'].astype(str)
# de-pivot all columns, except for Year, into two columns.
# One column will have the values and the second will have the labels
unempl = pd.melt(unempl_data, var_name='Month', value_name='Unemployment', id_vars=['Year'])
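# Each row is now (Year, Month, Unemployment), which HeatMap can plot directly on the two categorical axes.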
unempl.head()
hm = HeatMap(unempl, x='Year', y='Month', values='Unemployment', stat=None,
sort_dim={'x': False}, width=1000)
show(hm)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading the Metadata SQL table
Step2: Exploring the Metadata tables
Step3: There are no missing values but that does not mean there are not any weird values.
Step4: Having information about the minimum, maximum, mean, median, quartiles, etc. is interesting, but it is probably more informative to look at the distribution of songs per year in detail.
Step5: Looking at the very unbalanced distribution of songs per year it appears that the initial statistical indicators are not carrying very meaningful information.
Step6: The dataset is made of 4680 songs while there is a maximum of 11 songs and 10 unique albums per artist... we can reasonably conclude that there is no bias toward a specific artist or album.
Step7: Despite a couple of pretty long songs (nearly 27 minutes for the maximum) the distribution appears roughly symmetrical. This is partially confirmed by the small difference between the mean and the median.
Step8: The fact that the skewness is positive but not very large indicates that the distribution is slightly asymmetric, with a longer tail for values higher than the median (as we can see).
Step9: Songs Hotness and Familiarity
Step10: There is not much information to extract from the one-dimensional analysis of these two features; later we will come back to the relationship between hotness and familiarity, which shows much more interesting patterns.
Step11: We can clearly see that there is a correlation between artist familiarity and hotness. In order to quantify such correlation we compute the (Pearson) correlation coefficient between all pairs
Step12: Well, artist familiarity and hotness are highly correlated with a coefficient of ~0.81 while other features are much more independent from each other (coefficient < 0.1).
Step13: Familiarity vs Hotness Clustering
Step14: In order to estimate the optimal number of clusters we iteratively compute the different metrics with GMM
Step15: When looking at the AIC and BIC the optimal number of clusters is given by the minimum while with the $\log[Likelihood]$ we look at the change in the slope ("elbow").
Step16: Well... even if metrics recommend four clusters, resulting groups differ significantly from the intuition we have. Let's try with maybe $k=3$
Step17: That is much closer to the intuition we had.
Step18: It is possible to look separately at the resulting clusters from both methods.
Step19: In our case the GMM method yields two clusters separated by a gap, while both methods recover a pretty similar "main sequence". Therefore we will keep the Agglomerative Clustering results.
Step20: We can actually look at what a linear regression would have given on the GMM result
Step21: As expected (and fortunately) results are very similar.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy.stats import kurtosis, skew
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import cm
import seaborn as sb
import sqlite3
%matplotlib inline
plt.rcParams['figure.figsize'] = (8,6)
plt.rc('axes', titlesize=18)
plt.rc('axes', labelsize=15)
sb.set_palette('Dark2')
sb.set_style('whitegrid')
path = '../MillionSongSubset/'
# creating the connector for the SQL database
con_meta = sqlite3.connect(path+'AdditionalFiles/subset_track_metadata.db')
cur_meta = con_meta.cursor()
# creating and executing an SQL request
res = con_meta.execute("SELECT name FROM sqlite_master WHERE type='table';")
# printing available table names
for name in res:
print(name[0])
# I this cell we load the dataset in a pandas.DataFrame omitting entries without year information
songs = pd.read_sql_query('SELECT * FROM songs WHERE year!=0', con_meta)
songs.head(5)
# Then we simply display a couple of information to know
# which kind of data we are dealing with and check potential NaN values
songs.info()
# simple statistical description
songs.year.describe()
# first we count the number of songs per year in chronological ordre.
songs_per_yr = songs.year.value_counts().sort_index()
songs_per_yr.head(5)
# add missing years in the original dataset with 0
songs_per_yr = songs_per_yr.reindex(index=list(range(songs.year.min(),songs.year.max())),
fill_value=0)
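# (filling the missing years with 0 keeps the time axis continuous in the bar chart below)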
# Visualizing the number of songs per year
l_col = songs_per_yr / songs_per_yr.max()
songs_per_yr.plot.bar(color=cm.plasma(l_col), figsize=(12,7))
plt.xlabel('Year')
plt.ylabel('Number of Songs')
plt.title('Number of Songs vs Year');
# Counting the number of entry per artist
songs_per_artist = songs.artist_name.value_counts()
songs_per_artist[songs_per_artist >= 8].sort_values().plot.barh(color='midnightblue', figsize=(7,7))
plt.title('Songs per Artist (8 and more)')
plt.xlabel('Number of Songs')
plt.ylabel('Artist Name');
# Number of different albums release per artist
songs_per_art_and_rel = songs.groupby(['artist_name']).release.nunique()
songs_per_art_and_rel[songs_per_art_and_rel >= 6].sort_values().plot.barh(color='midnightblue', figsize=(6,10))
plt.title('Albums per Artist (6 and more)')
plt.xlabel('Number of Albums')
plt.ylabel('Artist Name');
songs.duration.describe()
# Looking at the songs distribution in bins of 10seconds
songs.duration.plot.hist(bins=np.arange(0.0, 1610.0, 10.0), figsize=(12,7))
# Visualizing the quartiles
plt.axvline(181.1522, linestyle=':', lw=1.0, c='k')
plt.axvline(227.3824, linestyle='-.', lw=1.0, c='k')
plt.axvline(278.40608, linestyle=':', lw=1.0, c='k')
plt.xlim(0.0,1600)
plt.xlabel('Song Duration [sec]')
plt.ylabel('Number of Songs')
plt.title('Number of Songs VS Duration');
# computing distribution skewness
print('Skewness : {:.3f}'.format(skew(songs.duration.values)))
# computing the kurtosis
print('Kurtosis : {:.3f}'.format(kurtosis(songs.duration.values)))
songs[['artist_hotttnesss','artist_familiarity']].describe()
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12,6), sharex=True, sharey=True)
songs.artist_familiarity.hist(bins=np.arange(0.0,1.02,0.02), ax=ax[0], color='C2');
ax[0].set_title('Artist Familiarity')
ax[0].axvline(0.54075, linestyle=':', lw=1.0, c='k')
ax[0].axvline(0.622875, linestyle='-.', lw=1.0, c='k')
ax[0].axvline(0.72723, linestyle=':', lw=1.0, c='k')
songs.artist_hotttnesss.hist(bins=np.arange(0.0,1.02,0.02), ax=ax[1], color='C1');
ax[1].set_title('Artist Hotness')
ax[1].axvline(0.36682, linestyle=':', lw=1.0, c='k')
ax[1].axvline(0.420938, linestyle='-.', lw=1.0, c='k')
ax[1].axvline(0.511054, linestyle=':', lw=1.0, c='k')
ax[-1].set_xlim(0.0,1.0)
plt.subplots_adjust(wspace=0.05);
g = sb.PairGrid(songs, vars=['year','duration','artist_familiarity','artist_hotttnesss'])
g.map_upper(sb.kdeplot)
g.map_lower(plt.scatter, marker='.', edgecolor='w')
g.map_diag(plt.hist, bins=51);
features = ['year','duration','artist_familiarity','artist_hotttnesss']
corr_mat = np.corrcoef(songs[features].values.T)
ax = sb.heatmap(corr_mat, xticklabels=features, yticklabels=features,
cmap='RdBu', vmin=-1.0, vmax=1.0, annot=True)
plt.setp( ax.xaxis.get_majorticklabels(), fontsize=15, rotation=60 )
plt.setp( ax.yaxis.get_majorticklabels(), fontsize=15, rotation=0 );
fig, ax = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True,
figsize=(12,6))
songs.plot.scatter(x='artist_familiarity', y='artist_hotttnesss', marker='+', ax=ax[0])
songs.plot.hexbin(x='artist_familiarity', y='artist_hotttnesss', ax=ax[1],
gridsize=41, mincnt=1.0, cmap='viridis', colorbar=False)
ax[0].set_title('Scatter Plot')
ax[1].set_title('Density Plot')
plt.subplots_adjust(wspace=0.03);
# checking the number of points at hotness = 0
songs[songs.artist_hotttnesss == 0.0].song_id.nunique()
# importing the methods
from sklearn.mixture import GaussianMixture
from sklearn.cluster import AgglomerativeClustering
# Selecting only the useful part of the dataset
songs_cl = songs[songs.artist_hotttnesss != 0.0][['artist_familiarity','artist_hotttnesss']]
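# (artists with hotness exactly 0 are dropped as likely missing data; only the two clustering features are kept)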
# Here we compute metrics for GMM with different number of clusters
scores_n = []
for i in range(2,12):
print('number of clusters : {}'.format(i))
gmm = GaussianMixture(n_components=i, covariance_type='full')
bic, aic, log = 0.0, 0.0, 0.0
for j in range(7):
gmm.fit(songs_cl)
bic += gmm.bic(songs_cl)
aic += gmm.aic(songs_cl)
log += gmm.score(songs_cl)
scores_n.append([bic / 7.0, aic / 7.0, log / 7.0])  # average over the 7 repeated fits
scores_n = np.asarray(scores_n)
# Visualizing the results
fig, ax = plt.subplots(nrows=1, ncols=2, sharex=True, figsize=(10,4))
ax[0].plot(range(2,12), scores_n[:,0] - np.min(scores_n[:,0]), 'C0')
ax[0].plot(range(2,12), scores_n[:,1] - np.min(scores_n[:,1]), 'C1')
ax[0].legend(['BIC','AIC'])
ax[0].set_xlabel('Number of Clusters $k$')
ax[1].plot(range(2,12), scores_n[:,2] - np.min(scores_n[:,2]), 'C2')
ax[1].legend(['Log Likelihood'])
ax[1].set_xlabel('Number of Clusters $k$');
# Defining GMM with k=4 clusters
gmm_4 = GaussianMixture(n_components=4, covariance_type='full')
gmm_4.fit(songs_cl)
gmm_4_pred = gmm_4.predict_proba(songs_cl)
# Visualizing the result
plt.scatter(songs_cl.artist_familiarity, songs_cl.artist_hotttnesss, c=np.argmax(gmm_4_pred, axis=1),
marker='.', alpha=0.5, cmap='Dark2_r')
plt.title('GMM with k=4')
plt.xlabel('Artist Familiarity')
plt.ylabel('Artist Hotness');
# Defining GMM with k=3 clusters
gmm_3 = GaussianMixture(n_components=3, covariance_type='full')
gmm_3.fit(songs_cl)
gmm_3_pred = gmm_3.predict_proba(songs_cl)
# Visualizing the result
plt.scatter(songs_cl.artist_familiarity, songs_cl.artist_hotttnesss, c=np.argmax(gmm_3_pred, axis=1),
marker='.', alpha=0.5, cmap='Dark2_r')
plt.title('GMM with k=3')
plt.xlabel('Artist Familiarity')
plt.ylabel('Artist Hotness');
# Agglomerative Clustering with k=4 clusters
agg_4 = AgglomerativeClustering(n_clusters=4, linkage='average')
agg_4_pred = agg_4.fit_predict(songs_cl)
# Agglomerative Clustering with k=3 clusters
agg_3 = AgglomerativeClustering(n_clusters=3, linkage='average')
agg_3_pred = agg_3.fit_predict(songs_cl)
fig, ax = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True, figsize=(12,6))
ax[0].scatter(songs_cl.artist_familiarity, songs_cl.artist_hotttnesss, c=agg_3_pred,
marker='.', alpha=1.0, cmap='Dark2_r')
ax[0].set_title('Agg. Clus. with k=3')
ax[0].set_xlabel('Artist Familiarity')
ax[0].set_ylabel('Artist Hotness')
ax[1].scatter(songs_cl.artist_familiarity, songs_cl.artist_hotttnesss, c=agg_4_pred,
marker='.', alpha=1.0, cmap='Dark2_r')
ax[1].set_title('Agg. Clus. with k=4')
ax[1].set_xlabel('Artist Familiarity')
plt.subplots_adjust(wspace=0.025);
songs_cl['gmm'] = np.argmax(gmm_3_pred, axis=1)
songs_cl['agg_clus'] = agg_3_pred
sb.lmplot(data=songs_cl, x='artist_familiarity', y='artist_hotttnesss',
hue='gmm', col='gmm',
fit_reg=False, markers='.', scatter_kws={'alpha':0.7});
sb.lmplot(data=songs_cl, x='artist_familiarity', y='artist_hotttnesss',
hue='agg_clus', col='agg_clus',
fit_reg=False, markers='.', scatter_kws={'alpha':0.7});
songs_linreg = np.polyfit(songs_cl.artist_familiarity[songs_cl.agg_clus == 2].values,
songs_cl.artist_hotttnesss[songs_cl.agg_clus == 2].values, 1)
print("Linear Regression of the 'main sequence' :\n\
Hotness = {:.4f} * Familiarity + {:.4f}".format(songs_linreg[0],songs_linreg[1]))
songs_linreg_gmm = np.polyfit(songs_cl.artist_familiarity[songs_cl.gmm == 0].values,
songs_cl.artist_hotttnesss[songs_cl.gmm == 0].values, 1)
print("Linear Regression of the 'main sequence' :\n\
Hotness = {:.4f} * Familiarity + {:.4f}".format(songs_linreg_gmm[0],songs_linreg_gmm[1]))
plt.figure(figsize=(8,7))
plt.scatter(songs.artist_familiarity, songs.artist_hotttnesss,
marker='.', edgecolors='none', c='C2', alpha=0.7)
plt.plot([0.0,1.0], [0.0,songs_linreg[0]] + songs_linreg[1], 'k', lw=1.0)
plt.xlabel('Artist Familiarity')
plt.ylabel('Artist Hotness');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Loading data
Step2: 2.1 Loading specific columns
Step3: 2.2 Loading a column as the index
Step4: In this example, loading the 'ID' column as the index makes it easier to access the data by ID number.
Step5: 2.3 Loading a column with a specific type
Step6: 3. Loading a file without a header
Step7: 3.1 Providing a header
Step8: 4. Converting data types
Step9: 4.2 Converting with pandas functions
Step10: 5. Selecting data
Step11: 5.2 Selecting data according to a criterion
Step12: 6. Using the index
Step13: 7. Computing statistics
Step14: The pandas dataframe.quantile() function returns values at the given quantile over the requested axis.
Step15: 8. Groupby
Step16: 9. Saving data to CSV
Step17: Creating plots with matplotlib
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
from pandas import DataFrame,read_csv
#Location of the bike counters, with the mapping to the names used in the count files.
#http://donnees.ville.montreal.qc.ca/dataset/f170fecc-18db-44bc-b4fe-5b0b6d2c7297/resource/c7d0546a-a218-479e-bc9f-ce8f13ca972c/download/localisation_des_compteurs_velo.csv
compteurFichier = './donnees/localisation_des_compteurs_velo.csv'
compteur = pd.read_csv(compteurFichier)
#cars
carsFichier = './donnees/cars.txt'
cars = pd.read_csv(carsFichier, sep=',')
# .head() affiche les 5 premieres lignes seulement
compteur.head()
colonnes_selectionnees = ['Nom', 'Statut']
compteur = pd.read_csv(compteurFichier, sep=",", usecols = colonnes_selectionnees)
compteur.head()
# set the counter names as the index
compteur = pd.read_csv(compteurFichier, index_col = 0)
compteur.head()
#access a specific row
compteur.loc[100035409]
#original data types of each column
compteur.dtypes
#load the Ancien_ID column as a string (object)
dtypes = {"Ancien_ID":"str"}
nouveauCompteur = pd.read_csv(compteurFichier, sep=',', dtype=dtypes)
nouveauCompteur.dtypes
fichierNoHead = './donnees/localisation_des_compteurs_velo_noheader.csv'
noHead = pd.read_csv(fichierNoHead, sep=';', header=None)
noHead.head()
#avec names
noHead = pd.read_csv(fichierNoHead, sep=';', header=None, names = ['ID', 'Ancien ID', 'Nom', 'Statut', 'Latitude', 'Longitude', 'Annee implantee'])
noHead.head()
# avec .columns
noHead.columns = ['A', 'B', 'C', 'D', 'E', 'F', 'G']
noHead.head()
# avec .rename()
nom = pd.DataFrame({"ID": [100041114, 100002880, 100003032], "Ancien_ID": ["NaN", 10.0, 3.0], "Nom": ["Eco-Display Parc Stanley", "Pont Jacques-Cartier", "Berri"]})
# Ancien_ID à Ancien ID
nom.rename(columns={"Ancien_ID": "Ancien ID"})
nom[:3].rename(index={0: "Premier", 1: "Deuxieme", 2: "Troisieme"})
nom[:3].rename(str.lower, axis='columns')
nom[:3].rename({0: 1, 1: 2, 2: 3}, axis='index')
# convert the Annee_implante column from float64 to int32
compteur = compteur.fillna(0) #replace the NaN values with 0
compteur = compteur.astype({"Annee_implante": int})
compteur = compteur.astype({"Ancien_ID": int})
compteur.head()
#convert to numeric
serie = pd.Series(['1.0', '2', -3])
pd.to_numeric(serie)
#convert to datetime
date = pd.DataFrame({'day': [20, 21], 'month': [6, 7], 'year': [2019, 2020]})
pd.to_datetime(date)
#installation year of each counter, by ID
compteur['Annee_implante']
compteur.Annee_implante
#select the data after 2010
compteur[compteur.Annee_implante>=2010]
# multiple conditions
critere1 = compteur['Ancien_ID'] > 0
critere2 = compteur['Annee_implante'] > 2010
compteur['Ancien_ID']>0
compteur[critere1].dtypes
tout_critere = critere1 & critere2
compteur[tout_critere]
# condition with "and" (&)
compteur[(compteur['Statut']=='Actif') & (compteur['Annee_implante']==2019)]
#return rows with installation year > 2015 or (|) all old IDs > 0
compteur[(compteur['Annee_implante'] > 2015) | (compteur['Ancien_ID'] > 0)].head()
compteur[:3]
compteur[5::-1]
cars.head()
# moyenne
cars['Weight'].mean()
# mediane
cars[['Drive_Ratio', 'Horsepower']].median()
# quantile
cars['Weight'].quantile()
# trouver les quantiles sur l'axe d'index
cars.quantile([.1, .25, .5, .75], axis=0)
# std
cars['Horsepower'].std()
# describe
cars[['Drive_Ratio', 'Horsepower', 'Displacement', 'Cylinders']].describe()
cars[['Country']].describe()
# aggregation
cars.agg({'MPG': ['min', 'max', 'median', 'skew'], 'Weight': ['min', 'max', 'median', 'mean']})
# groupby
cars[['Country', 'MPG']].groupby('Country').mean()
groupe = cars.groupby('Car')
groupe.aggregate(np.sum)
groupe = cars.groupby('Country')
groupe['Weight'].agg([np.max, np.min, np.mean, np.sum, np.std])
data = pd.DataFrame({'Veh leger': [1, 2], 'Veh lourd': [5, 10], 'TC': [8, 10], 'Pietons': [2, 3]})
data.to_csv('fichierDonnees1.csv', index=False, sep='\t')
import matplotlib.pyplot as plt
#create a dataframe
df = pd.DataFrame({
'Heures': ['8:00AM', '9:00AM', '10:00AM', '11:00AM', '12:00PM'],
'Veh lourd': [28, 98, 100, 49, 58],
'Veh leger': [2, 8, 45, 30, 33],
'Pietons': [3, 5, 9, 11, 21]
})
#scatter plot
df.plot(kind='scatter', x='Veh lourd', y='Veh leger')
ax = plt.gca()
df.plot(kind='line', x='Heures', y='Veh leger', ax= ax)
df.plot(kind='line', x='Heures', y='Veh lourd', ax= ax)
df.plot(kind='line', x='Heures', y='Pietons', ax= ax)
plt.show()
#bar chart
df.plot(kind='bar', x='Heures', y='Pietons')
#bar chart with groupby
cars.groupby('Country')['Weight'].nunique().plot(kind='bar')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 1
Step2: b. Cointegration Test II
Step3: Exercise 2
Step4: b. Real Cointegration Test II
Step5: Exercise 3
Step6: b. Testing the Coefficient
Step7: Extra Credit Exercise
|
<ASSISTANT_TASK:>
Python Code:
# Useful Functions
def find_cointegrated_pairs(data):
n = data.shape[1]
score_matrix = np.zeros((n, n))
pvalue_matrix = np.ones((n, n))
keys = data.keys()
pairs = []
for i in range(n):
for j in range(i+1, n):
S1 = data[keys[i]]
S2 = data[keys[j]]
result = coint(S1, S2)
score = result[0]
pvalue = result[1]
score_matrix[i, j] = score
pvalue_matrix[i, j] = pvalue
if pvalue < 0.05:
pairs.append((keys[i], keys[j]))
return score_matrix, pvalue_matrix, pairs
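# find_cointegrated_pairs returns the coint t-statistics, the p-values, and the list of pairs with p < 0.05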
# Useful Libraries
import numpy as np
import pandas as pd
import statsmodels
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint, adfuller
from quantopian.research.experimental import history, continuous_future
# just set the seed for the random number generator
np.random.seed(107)
import matplotlib.pyplot as plt
A_returns = np.random.normal(0, 1, 100)
A = pd.Series(np.cumsum(A_returns), name='X') + 50
some_noise = np.random.exponential(1, 100)
B = A - 7 + some_noise
#Your code goes here
## answer key ##
score, pvalue, _ = coint(A,B)
confidence_level = 0.05
if pvalue < confidence_level:
print ("A and B are cointegrated")
print pvalue
else:
print ("A and B are not cointegrated")
print pvalue
A.name = "A"
B.name = "B"
pd.concat([A, B], axis=1).plot();
C_returns = np.random.normal(1, 1, 100)
C = pd.Series(np.cumsum(C_returns), name='X') + 100
D_returns = np.random.normal(2, 1, 100)
D = pd.Series(np.cumsum(D_returns), name='X') + 100
#Your code goes here
## answer key ##
score, pvalue, _ = coint(C,D)
confidence_level = 0.05
if pvalue < confidence_level:
print ("C and D are cointegrated")
print pvalue
else:
print ("C and D are not cointegrated")
print pvalue
C.name = "C"
D.name = "D"
pd.concat([C, D], axis=1).plot();
cn = continuous_future('CN', offset = 0, roll = 'calendar', adjustment = 'mul')
sb = continuous_future('SB', offset = 0, roll = 'calendar', adjustment = 'mul')
cn_price = history(cn, 'price', '2015-01-01', '2016-01-01', 'daily')
sb_price = history(sb, 'price', '2015-01-01', '2016-01-01', 'daily')
#Your code goes here
#print history.__doc__
## answer key ##
score, pvalue, _ = coint(cn_price, sb_price)
confidence_level = 0.05
if pvalue < confidence_level:
print ("CN and SB are cointegrated")
print pvalue
else:
print ("CN and SB are not cointegrated")
print pvalue
cn_price.name = "CN"
sb_price.name = "SB"
pd.concat([cn_price, sb_price], axis=1).plot();
cl = continuous_future('CL', offset = 0, roll = 'calendar', adjustment = 'mul')
ho = continuous_future('HO', offset = 0, roll = 'calendar', adjustment = 'mul')
cl_price = history(cl, 'price', '2015-01-01', '2016-01-01', 'daily')
ho_price = history(ho, 'price', '2015-01-01', '2016-01-01', 'daily')
#Your code goes here
## answer key ##
confidence_level = 0.05
score, pvalue, _ = coint(cl_price, ho_price)
if pvalue < confidence_level:
print ("CL and HO are cointegrated")
print pvalue
else:
print ("CL and HO are not cointegrated")
print pvalue
cl_price.name = 'CL'
ho_price.name = 'HO'
pd.concat([cl_price, ho_price.multiply(42)], axis=1).plot();
## answer key ##
results = sm.OLS(cl_price, sm.add_constant(ho_price)).fit()
b = results.params['HO']
print b
spread = cl_price - b * ho_price
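# ADF test on the spread: a small p-value rejects a unit root, i.e. the spread is stationary in sample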
print "p-value for in-sample stationarity: ", adfuller(spread)[1]
# The p-value is less than 0.05 so we conclude that this spread calculation is stationary in sample
spread.plot()
plt.axhline(spread.mean(), color='black')
plt.legend(['Spread']);
cl_out = get_pricing(cl, fields='price',
start_date='2016-01-01', end_date='2016-07-01')
ho_out = get_pricing(ho, fields='price',
start_date='2016-01-01', end_date='2016-07-01')
#Your code goes here
## answer key ##
spread = cl_out - b * ho_out
spread.plot()
plt.axhline(spread.mean(), color='black')
plt.legend(['Spread']);
print "p-value for spread stationarity: ", adfuller(spread)[1]
# Our p-value is less than 0.05 so we conclude that this calculation of
# the spread is stationary out of sample
# No solution provided for extra credit exercises.
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Column-wise / Vectors (2x2)
Step2: Now we know the answer to this is a linear combination of the two vectors. So we multiply the first vector by 3 and the second vector by 4 and add the two
Step3: Matrix Way (2 x 2) - Using Elimination
Step4: Matrix Way (2x2) - Using Inverse
Step5: 3 x 3 Equation
Step6: Column-wise / Vectors (3 x 3)
Step7: Matrix Way (3 x 3)
Step8: Exercises on Matrices
Step9: 1. Matrix Addition
Step10: What is the inverse of $W$, i.e. $W^{-1}$? Why does this not work?
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('fivethirtyeight')
plt.rcParams['figure.figsize'] = (10, 6)
x = np.arange(-10, 10, 1)
y1 = (15 - x)/3
y2 = (2 - 2*x)/-1
plt.plot(x, y1)
plt.text(x[-1], y1[-1], 'row1')
plt.plot(x, y2)
plt.text(x[-1], y2[-1], 'row2')
plt.axhline(0, color='grey', linewidth=1)
plt.axvline(0, color='grey', linewidth=1)
plt.xlabel('x')
plt.ylabel('y')
# All the vectors start at 0, 0
vX1 = np.array([0,0,1,2])
vY1 = np.array([0,0,3,-1])
b = np.array([0,0,15,2])
vector1 = [vX1, vY1, b]
X,Y,U,V = zip(*vector1)
X,Y,U,V
def vector_plot (vector):
X,Y,U,V = zip(*vector)
C = [1,2,3]
plt.figure()
ax = plt.gca()
ax.quiver(X,Y,U,V,C, angles='xy',scale_units='xy',scale=1)
ax.set_xlim([-15,15])
ax.set_ylim([-9,9])
plt.axhline(0, color='grey', linewidth=1)
plt.axvline(0, color='grey', linewidth=1)
plt.axes().set_aspect('equal')
vector_plot(vector1)
# VX1 vectors start at (0, 0), while VY2 starts at the end of VX1
vX2 = np.array([0,0,3,6])
vY2 = np.array([3,6,12,-4])
b = np.array([0,0,15,2])
vector2 = [vX2, vY2, b]
vector_plot(vector2)
from fractions import Fraction
A = np.matrix([[1,3],
[2,-1]])
b = np.matrix([[15],
[2]])
E1 = np.matrix([[1,0],
[-2,1]])
E2 = np.matrix([[Fraction (1,1),Fraction(3, 7)],
[Fraction(0,1),Fraction(-1, 7)]])
A
E1
E1*A
E2*E1*A
E2*E1*b
E2*E1
Ainv = np.linalg.inv(A)
Ainv
Ainv * b
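# Added sanity check: A times its inverse should be (numerically) the identity,
# and A times the solution vector should reproduce b.
print(np.allclose(A * Ainv, np.eye(2)))
print(np.allclose(A * (Ainv * b), b))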
from mpl_toolkits.mplot3d import Axes3D
xrange = np.arange(-10, 10, 1)
yrange = np.arange(-10, 10, 1)
x, y = np.meshgrid(xrange, yrange)
z1 = 3 - x - y
z2 = 12 - 3*x - 8*y
z3 = (15 - 4*x -9 *y)/(2)
plt3d = plt.figure().gca(projection='3d')
plt3d.plot_surface(x,y,z1, color='blue', alpha = 0.4)
plt3d.plot_surface(x,y,z2, color='red', alpha = 0.4)
plt3d.plot_surface(x,y,z3, color='green', alpha = 0.4)
plt3d.set_xlabel('x')
plt3d.set_ylabel('y')
plt3d.set_zlabel('z')
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.patches import FancyArrowPatch
from mpl_toolkits.mplot3d import proj3d
class Arrow3D(FancyArrowPatch):
def __init__(self, xs, ys, zs, *args, **kwargs):
FancyArrowPatch.__init__(self, (0,0), (0,0), *args, **kwargs)
self._verts3d = xs, ys, zs
def draw(self, renderer):
xs3d, ys3d, zs3d = self._verts3d
xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, renderer.M)
self.set_positions((xs[0],ys[0]),(xs[1],ys[1]))
FancyArrowPatch.draw(self, renderer)
plt.figure()
ax = plt.gca(projection='3d')
vX = Arrow3D([0,1],[0,3],[0,5], mutation_scale=20, lw=3, arrowstyle="-|>", color="r")
vY = Arrow3D([1,2],[3,11],[5,1],mutation_scale=20, lw=3, arrowstyle="-|>", color="c")
vZ = Arrow3D([2,3],[11,12],[1,4], mutation_scale=20, lw=3, arrowstyle="-|>", color="g")
b = Arrow3D([0,3],[0,12],[0,4],mutation_scale=20, lw=3, arrowstyle="-|>", color="k")
ax.add_artist(vX)
ax.add_artist(vY)
ax.add_artist(vZ)
ax.add_artist(b)
ax.set_xlim([0,12])
ax.set_ylim([0,12])
ax.set_zlim([0,12])
plt.draw()
A1 = np.matrix([[1,1,1],
[3,8,1],
[5,-4,3]])
b1 = np.matrix([[3],
[12],
[4]])
A1
b1
A1inv = np.linalg.inv(A1)
A1inv
A1inv*b1
S = np.matrix([[3, 1, 2],
[1 , 4, 5],
[2 , 5 , 6]])
U = np.matrix([[3, 1, 1],
[3, 8, 1],
[5, -4, 3]])
V = np.matrix([[2, -3, -4],
[3, 5, -6],
[-1, -3, 2]])
T = np.matrix([[2 ,3],
[4 ,6]])
Z = np.matrix([[1, -1, 0]])
W = np.matrix([[2 ,3],
[-1 ,2],
[-3, 1]])
T
W
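# Added: the answer to "why does this not work?" -- W is 3x2, and only square
# matrices can have a true inverse, so numpy refuses to invert it.
try:
    np.linalg.inv(W)
except np.linalg.LinAlgError as err:
    print("np.linalg.inv(W) fails:", err)
# The Moore-Penrose pseudo-inverse is the closest substitute for a non-square matrix.
np.linalg.pinv(W)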
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Importing the required datasets
Step2: Cleaning and preparing the datasets
Step3: Randomly splitting the dataset into training and testing sets
Step4: First baseline using Linear Regression
Step5: Let's calculate the average RMSE (Cross Validation, 5-Fold)
Step6: Good enough results for the Haryana State
Step7: The Root Mean Square Error
Step8: Constructing dataset for Whole India
Step9: Constructing the features X1 and X2, the production for the last 2 years.
Step10: Let's calculate the average RMSE (Cross Validation, 5-Fold)
Step11: Result
Step12: The range of the values is (0, 1362) and the RMSE is 70.6
Step13: Adding a new feature
Step14: Let's calculate the average RMSE (Cross Validation, 5-Fold)
Step15: Now let's compare with other feature combinations
Step16: Average RMSE (Cross Validation, 5-Fold)
Step17: Avg RMSE with original 4 features
Step18: Avg RMSE
Step19: Avg RMSE with the original 4 features
Step20: Avg RMSE
Step21: Avg RMSE with Linear Regression
Step22: Avg RMSE
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from math import sqrt
import pprint
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import cross_val_score
from sklearn import metrics
%matplotlib inline
rice = pd.read_csv("/Users/macbook/Documents/BTP/Notebook/Rice.csv")
rice.head()
rice_haryana = rice[rice["State_Name"]=="Haryana"]
rice_haryana.head()
rainfall = pd.read_csv("/Users/macbook/Documents/BTP/Notebook/rainfall.csv")
rainfall.head()
rain_haryana = rainfall[rainfall["State"]=="Haryana"]
print(rain_haryana.head())
print(rain_haryana.describe())
X_hr = pd.read_csv("/Users/macbook/Documents/BTP/Notebook/haryana.csv")
X_hr.head()
X_finite = X_hr[np.isfinite(X_hr["X1"])]
X_finite = X_finite[np.isfinite(X_finite["X2"])]
X_finite = X_finite[np.isfinite(X_finite["X3"])]
X_finite = X_finite[np.isfinite(X_finite["X4"])]
X_finite = X_finite[np.isfinite(X_finite["Y"])]
X_finite.head()
Xn = X_finite
Xn.describe()
y = Xn["Y"]
X = Xn[["X1", "X2", "X3", "X4"]]
plt.figure(figsize=(9, 5))
plt.hist(y, bins=30)
plt.xlabel('Production Value',fontsize=15)
plt.ylabel('Occurences',fontsize=15)
plt.title('Distribution of the Rice Production Values',fontsize=18)
Xplot = Xn[["X1", "X2", "X3", "X4","Y"]]
var_name = "X1"
plt.figure(figsize=(10,6))
sns.regplot(x=var_name, y='Y', data=Xplot, scatter_kws={'alpha':0.6, 's':20})
plt.xlabel(var_name + " (Crop Produce of Last Year)", fontsize=15)
plt.ylabel('Y', fontsize=15)
plt.title("Distribution of y variable with feature "+var_name, fontsize=18)
plt.show()
var_name = "X2"
plt.figure(figsize=(10,6))
sns.regplot(x=var_name, y='Y', data=Xplot, scatter_kws={'alpha':0.6, 's':20})
plt.xlabel(var_name + " (Crop Produce of Last to Last Year)", fontsize=15)
plt.ylabel('Y', fontsize=15)
plt.title("Distribution of y variable with feature "+var_name, fontsize=18)
plt.show()
var_name = "X3"
plt.figure(figsize=(10,6))
sns.regplot(x=var_name, y='Y', data=Xplot, scatter_kws={'alpha':0.6, 's':20})
plt.xlabel(var_name + " (Rainfall of Present Year)", fontsize=15)
plt.ylabel('Y', fontsize=15)
plt.title("Distribution of y variable with feature "+var_name, fontsize=18)
plt.show()
var_name = "X4"
plt.figure(figsize=(10,6))
sns.regplot(x=var_name, y='Y', data=Xplot, scatter_kws={'alpha':0.6, 's':20})
plt.xlabel(var_name + " (Rainfall of Last Year)", fontsize=15)
plt.ylabel('Y', fontsize=15)
plt.title("Distribution of y variable with feature "+var_name, fontsize=18)
plt.show()
# Z-Score Normalization
cols = list(X.columns)
for col in cols:
col_zscore = col + '_zscore'
X[col_zscore] = (X[col] - X[col].mean())/X[col].std(ddof=0)
X = X[["X1_zscore", "X2_zscore", "X3_zscore", "X4_zscore"]]
X.head()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
alg = LinearRegression()
alg.fit(X_train, y_train)
coef = alg.coef_
coef = coef.round(decimals=2)
np.set_printoptions(suppress=True) #gem
print("The coefficients for the linear regression model learnt are\n")
print(coef)
print()
y_predict = alg.predict(X_test)
rmse = sqrt(mean_squared_error(y_predict, y_test))
print(rmse)
clf = LinearRegression()
scores = cross_val_score(clf, X, y, cv=5, scoring='neg_mean_squared_error')
for i in range(0,5):
scores[i] = sqrt(-1*scores[i])
print(scores)
avg_rmse = scores.mean()
print("\n\nAvg RMSE is ",scores.mean())
# print(type(y_test))
# print(type(y_predict))
yt = y_test.as_matrix()
print(type(yt))
p = pd.DataFrame()
p["y_predicted"] = y_predict/1000
p["y_test"] = yt/1000
p["y_predicted"] = p["y_predicted"].round(decimals=1)
# p["y_test"] = p["y_test"].round(decimals=1)
p.describe()
print(p)
rmse/1000
rain1 = rainfall
rain2 = pd.read_csv("/Users/macbook/Documents/BTP/Notebook/rainfall_distt_2004-10_nax.csv")
print(rice.describe())
print(rain1.describe())
print(rain2.describe())
a = np.empty((rice.shape[0],1))*np.NAN
rice = rice.assign(X1 = a)
rice = rice.assign(X2 = a)
rice = rice.assign(X3 = a)
rice = rice.assign(X4 = a)
rice.head()
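# Note on the positional indices used in the loops below (inferred from how the values
# are written and read back later in this notebook):
# iloc[:, 1] = district, iloc[:, 2] = year, iloc[:, 6] = Production,
# iloc[:, 8] = X1, iloc[:, 9] = X2, iloc[:, 10] = X3, iloc[:, 11] = X4.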
l = rice.shape[0]
for row in range(0,l):
if row-1<0 or rice.iloc[row,1] != rice.iloc[row-1,1]:
continue
else:
rice.iloc[row,8] = rice.iloc[row-1,6]
if row-2<0 or rice.iloc[row,1] != rice.iloc[row-2,1]:
continue
else:
rice.iloc[row,9] = rice.iloc[row-2,6]
rice.head()
def func(s):
x = s.strip()
return x.lower()
rice['ind_district'] = rice['ind_district'].apply(func)
rice['Season'] = rice['Season'].apply(func)
rain1['ind_district'] = rain1['ind_district'].apply(func)
rain2['ind_district'] = rain2['ind_district'].apply(func)
rice.head()
rain1.head()
# can reduce the time by searching only one variable for some cases atleast
rice = rice[rice['Season'] == 'kharif']
l = rice.shape[0]
for row in range(0,l):
dt = rice.iloc[row,1]
yr = rice.iloc[row,2]
if yr<=2002:
# rainfall for the same year
r = rain1.loc[(rain1.ind_district == dt) & (rain1.Year == yr)]
if r.shape[0] == 1:
rice.iloc[row,10] = r.iloc[0,3]
# rainfall for the previous year
r = rain1.loc[(rain1.ind_district == dt) & (rain1.Year == yr-1)]
if r.shape[0] == 1:
rice.iloc[row,11] = r.iloc[0,3]
if yr>2004:
# rainfall for the same year
r = rain2.loc[(rain2.ind_district == dt) & (rain2.Year == yr)]
if r.shape[0] == 1:
rice.iloc[row,10] = r.iloc[0,3]
# rainfall for the previous year
r = rain2.loc[(rain2.ind_district == dt) & (rain2.Year == yr-1)]
if r.shape[0] == 1:
rice.iloc[row,11] = r.iloc[0,3]
# X1 = prod-1
# X2 = prod-2
# X3 = rain
# X4 = rain-1
rice.describe()
ricex = rice[np.isfinite(rice["Production"])]
ricex = ricex[np.isfinite(ricex["X1"])]
ricex = ricex[np.isfinite(ricex["X2"])]
ricex = ricex[np.isfinite(ricex["X3"])]
ricex = ricex[np.isfinite(ricex["X4"])]
ricex.head()
X = ricex[["X1","X2","X3","X4"]]
y = ricex[["Production"]]
ricex.describe()
plt.figure(figsize=(30, 10))
plt.hist(y, bins=250)
plt.xlabel('Production Value',fontsize=25)
plt.ylabel('Occurences',fontsize=25)
plt.title('Distribution of the Rice Production Values',fontsize=30)
Xplot = ricex[["X1", "X2", "X3", "X4","Production"]]
var_name = "X1"
plt.figure(figsize=(10,6))
sns.regplot(x=var_name, y='Production', data=Xplot, scatter_kws={'alpha':0.6, 's':20})
plt.xlabel(var_name + " (Crop Produce of Last Year)", fontsize=15)
plt.ylabel('Y', fontsize=15)
plt.title("Distribution of y variable with feature "+var_name, fontsize=18)
plt.show()
Xplot = ricex[["X1", "X2", "X3", "X4","Production"]]
var_name = "X2"
plt.figure(figsize=(10,6))
sns.regplot(x=var_name, y='Production', data=Xplot, scatter_kws={'alpha':0.6, 's':20})
plt.xlabel(var_name + " (Crop Produce of Last to Last Year)", fontsize=15)
plt.ylabel('Y', fontsize=15)
plt.title("Distribution of y variable with feature "+var_name, fontsize=18)
plt.show()
Xplot = ricex[["X1", "X2", "X3", "X4","Production"]]
var_name = "X3"
plt.figure(figsize=(10,6))
sns.regplot(x=var_name, y='Production', data=Xplot, scatter_kws={'alpha':0.6, 's':20})
plt.xlabel(var_name + " (Rainfall of Present Year)", fontsize=15)
plt.ylabel('Y', fontsize=15)
plt.title("Distribution of y variable with feature "+var_name, fontsize=18)
plt.show()
Xplot = ricex[["X1", "X2", "X3", "X4","Production"]]
var_name = "X4"
plt.figure(figsize=(10,6))
sns.regplot(x=var_name, y='Production', data=Xplot, scatter_kws={'alpha':0.6, 's':20})
plt.xlabel(var_name + " (Rainfall of Last Year)", fontsize=15)
plt.ylabel('Y', fontsize=15)
plt.title("Distribution of y variable with feature "+var_name, fontsize=18)
plt.show()
# Z-Score Normalization
cols = list(X.columns)
for col in cols:
col_zscore = col + '_zscore'
X[col_zscore] = (X[col] - X[col].mean())/X[col].std(ddof=0)
X = X[["X1_zscore", "X2_zscore", "X3_zscore", "X4_zscore"]]
X.head()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
alg = LinearRegression()
alg.fit(X_train, y_train)
coef = alg.coef_
intercept = alg.intercept_
coef = coef.round(decimals=2)
pp = pprint.PrettyPrinter()
pp.pprint(coef)
pp.pprint(intercept)
y_predict = alg.predict(X_test)
yp = y_predict
yt = y_test.as_matrix()
type(y_predict)
rmse = sqrt(mean_squared_error(y_predict, y_test))
print(rmse)
clf = LinearRegression()
scores = cross_val_score(clf, X, y, cv=5, scoring='neg_mean_squared_error')
for i in range(0,5):
scores[i] = sqrt(-1*scores[i])
print(scores)
avg_rmse = scores.mean()
print("\n\nAvg RMSE is ",scores.mean())
yt = yt/1000
yp = yp/1000
yt = yt.round(decimals=1)
yp = yp.round(decimals=1)
yo = np.concatenate((yp,yt),axis=1)
p = pd.DataFrame(data=yo,columns=['Predicted','Actual'])
p.describe()
p
rmse/1000
# loading the dataset
ricep = pd.read_csv("/Users/macbook/Documents/BTP/Notebook/rice with soil(P).csv")
# Removing the rows with missing value of phosphorus
ricep = ricep[np.isfinite(ricep["phosphorus"])]
# Adding collumns for the other 4 features
a = np.empty((ricep.shape[0],1))*np.NAN
ricep = ricep.assign(X1 = a)
ricep = ricep.assign(X2 = a)
ricep = ricep.assign(X3 = a)
ricep = ricep.assign(X4 = a)
ricep.head()
# Constructing features X1 and X2
l = ricep.shape[0]
for row in range(0,l):
if row-1<0 or ricep.iloc[row,1] != ricep.iloc[row-1,1]:
continue
else:
ricep.iloc[row,8] = ricep.iloc[row-1,6]
if row-2<0 or ricep.iloc[row,1] != ricep.iloc[row-2,1]:
continue
else:
ricep.iloc[row,9] = ricep.iloc[row-2,6]
# Making the strings in the dataset uniform, with other datasets
ricep['ind_district'] = ricep['ind_district'].apply(func)
ricep['Season'] = ricep['Season'].apply(func)
# Constructing features X3 and X4
l = ricep.shape[0]
for row in range(0,l):
dt = ricep.iloc[row,1]
yr = ricep.iloc[row,2]
if yr<=2002:
# rainfall for the same year
r = rain1.loc[(rain1.ind_district == dt) & (rain1.Year == yr)]
if r.shape[0] == 1:
ricep.iloc[row,10] = r.iloc[0,3]
# rainfall for the previous year
r = rain1.loc[(rain1.ind_district == dt) & (rain1.Year == yr-1)]
if r.shape[0] == 1:
ricep.iloc[row,11] = r.iloc[0,3]
if yr>2004:
# rainfall for the same year
r = rain2.loc[(rain2.ind_district == dt) & (rain2.Year == yr)]
if r.shape[0] == 1:
ricep.iloc[row,10] = r.iloc[0,3]
# rainfall for the previous year
r = rain2.loc[(rain2.ind_district == dt) & (rain2.Year == yr-1)]
if r.shape[0] == 1:
ricep.iloc[row,11] = r.iloc[0,3]
# Removing rows with any missing values
ricep = ricep[np.isfinite(ricep["Production"])]
ricep = ricep[np.isfinite(ricep["X1"])]
ricep = ricep[np.isfinite(ricep["X2"])]
ricep = ricep[np.isfinite(ricep["X3"])]
ricep = ricep[np.isfinite(ricep["X4"])]
ricep.head()
ricep['phosphorus'] = ricep['phosphorus'] + 1
ricep.describe()
X = ricep[["X1","X2","X3","X4","phosphorus"]]
y = ricep[["Production"]]
ricep.to_csv("ricep.csv")
Xplot = ricep[["X1", "X2", "X3", "X4", "phosphorus", "Production"]]
var_name = "phosphorus"
plt.figure(figsize=(10,6))
sns.regplot(x=var_name, y='Production', data=Xplot, scatter_kws={'alpha':0.6, 's':20})
plt.xlabel(var_name + " (in Soil)", fontsize=15)
plt.ylabel('Y', fontsize=15)
plt.title("Distribution of y variable with feature "+var_name, fontsize=18)
plt.show()
# Z-Score Normalization
cols = list(X.columns)
for col in cols:
col_zscore = col + '_zscore'
X[col_zscore] = (X[col] - X[col].mean())/X[col].std(ddof=0)
X = X[["X1_zscore", "X2_zscore", "X3_zscore", "X4_zscore", "phosphorus_zscore"]]
X.head()
X.describe()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
alg = LinearRegression()
alg.fit(X_train, y_train)
coef = alg.coef_
intercept = alg.intercept_
coef = coef.round(decimals=2)
pp = pprint.PrettyPrinter()
pp.pprint(coef)
pp.pprint(intercept)
y_predict = alg.predict(X_test)
yp = y_predict
yt = y_test.as_matrix()
type(y_predict)
rmse = sqrt(mean_squared_error(y_predict, y_test))
print(rmse)
clf = LinearRegression()
scores = cross_val_score(clf, X, y, cv=5, scoring='neg_mean_squared_error')
for i in range(0,5):
scores[i] = sqrt(-1*scores[i])
print(scores)
avg_rmse_phos = scores.mean()
print("\n\nAvg RMSE is ",scores.mean())
yt = yt/1000
yp = yp/1000
yt = yt.round(decimals=1)
yp = yp.round(decimals=1)
yo = np.concatenate((yp,yt),axis=1)
p = pd.DataFrame(data=yo,columns=['Predicted','Actual'])
p.describe()
p
rmse/1000
# Just the 4 original features (no soil data)
X_old = X[["X1_zscore", "X2_zscore", "X3_zscore", "X4_zscore"]]
# Seed is fixed, so the vector y_test is going to same as before
X_train, X_test, y_train, y_test = train_test_split(X_old, y, test_size=0.2, random_state=1)
alg = LinearRegression()
alg.fit(X_train, y_train)
coef = alg.coef_
intercept = alg.intercept_
coef = coef.round(decimals=2)
pp = pprint.PrettyPrinter()
pp.pprint(coef)
pp.pprint(intercept)
y_predict = alg.predict(X_test)
yp = y_predict
yt = y_test.as_matrix()
rmse = sqrt(mean_squared_error(y_predict, y_test))
print(rmse)
clf = LinearRegression()
scores = cross_val_score(clf, X_old, y, cv=5, scoring='neg_mean_squared_error')
for i in range(0,5):
scores[i] = sqrt(-1*scores[i])
print(scores)
avg_rmse_orig = scores.mean()
print("\n\nAvg RMSE is ",scores.mean())
X_no_rain = X[["X1_zscore", "X2_zscore"]]
X_train, X_test, y_train, y_test = train_test_split(X_no_rain, y, test_size=0.2, random_state=1)
alg = LinearRegression()
alg.fit(X_train, y_train)
coef = alg.coef_
intercept = alg.intercept_
coef = coef.round(decimals=2)
pp = pprint.PrettyPrinter()
pp.pprint(coef)
pp.pprint(intercept)
y_predict = alg.predict(X_test)
yp = y_predict
yt = y_test.as_matrix()
rmse = sqrt(mean_squared_error(y_predict, y_test))
print(rmse)
clf = LinearRegression()
scores = cross_val_score(clf, X_no_rain, y, cv=5, scoring='neg_mean_squared_error')
for i in range(0,5):
scores[i] = sqrt(-1*scores[i])
print(scores)
avg_rmse_no_rain = scores.mean()
print("\n\nAvg RMSE is ",scores.mean())
from sklearn import linear_model
reg = linear_model.RidgeCV(alphas=[1,2,3,4,5,6,7,7.1,7.2,7.3,8,9,10])
reg.fit(X_old, y)
reg.alpha_
X_train, X_test, y_train, y_test = train_test_split(X_old, y, test_size=0.2, random_state=1)
reg = linear_model.Ridge(alpha = 7.1)
reg.fit (X_train, y_train)
print(reg.coef_)
y_predict = reg.predict(X_test)
rmse = sqrt(mean_squared_error(y_predict, y_test))
print(rmse)
clf = linear_model.Ridge(alpha = 7.1)
scores = cross_val_score(clf, X_old, y, cv=5, scoring='neg_mean_squared_error')
for i in range(0,5):
scores[i] = sqrt(-1*scores[i])
print(scores)
avg_rmse_ridge = scores.mean()
print("\n\nAvg RMSE is ",scores.mean())
from sklearn import linear_model
reg = linear_model.LassoCV(alphas=[0.01,0.02,0.03,0.04,0.05,0.06,0.07,0.08,0.09,0.1])
reg.fit(X_old, y)
reg.alpha_
X_train, X_test, y_train, y_test = train_test_split(X_old, y, test_size=0.2, random_state=1)
las = linear_model.Lasso(alpha = 0.01)
las.fit (X_train, y_train)
print(las.coef_)
y_predict = las.predict(X_test)
rmse = sqrt(mean_squared_error(y_predict, y_test))
print(rmse)
clf = linear_model.Lasso(alpha = 0.01)
scores = cross_val_score(clf, X_old, y, cv=5, scoring='neg_mean_squared_error')
for i in range(0,5):
scores[i] = sqrt(-1*scores[i])
print(scores)
avg_rmse_las = scores.mean()
print("\n\nAvg RMSE is ",scores.mean())
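# Added summary (not in the original notebook): put the average cross-validated RMSEs
# collected above side by side so the feature/model comparison asked for in the text
# is explicit.
comparison = pd.Series({
    'Linear, 4 features': avg_rmse_orig,
    'Linear, 4 features + phosphorus': avg_rmse_phos,
    'Linear, X1 and X2 only': avg_rmse_no_rain,
    'Ridge (alpha=7.1), 4 features': avg_rmse_ridge,
    'Lasso (alpha=0.01), 4 features': avg_rmse_las,
})
print(comparison.sort_values())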
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 2
Step2: Step 3
|
<ASSISTANT_TASK:>
Python Code:
from googlefinance import getQuotes
import time
import json
import os
import sys
from IPython.display import clear_output
def buscar_accion(nombre_accion):
clear_output()
os.system('cls' if os.name=='nt' else 'clear')
print(json.dumps(getQuotes(nombre_accion), indent=2))
buscar_accion("AAPL")
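# Added sketch (assumption): `time` is imported above but never used, which suggests the
# quote was meant to be refreshed periodically. A minimal polling loop could look like this:
for _ in range(3):
    buscar_accion("AAPL")
    time.sleep(5)  # wait a few seconds between refreshes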
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the Data
Step2: Look at the data
Step3: Since you can't plot 4 dimensions, try plotting some 2D subsets
Step4: I don't want to do the classification on this subset, so make sure to use the entire data set.
Step5: Split the data into test and train subsets...
Step6: ...and then train...
Step7: Some classifiers have properties that are useful to look at. Naive Bayes has means and stddevs...
|
<ASSISTANT_TASK:>
Python Code:
from classy import *
data=load_excel('data/iris.xls',verbose=True)
print(data.vectors.shape)
print(data.targets)
print(data.target_names)
print(data.feature_names)
subset=extract_features(data,[0,2])
plot2D(subset,legend_location='upper left')
C=SVM()
data_train,data_test=split(data,test_size=0.2)
timeit(reset=True)
C.fit(data_train.vectors,data_train.targets)
print("Training time: ",timeit())
print("On Training Set:",C.percent_correct(data_train.vectors,data_train.targets))
print("On Test Set:",C.percent_correct(data_test.vectors,data_test.targets))
C.dual_coef_
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import Motif Definitions
Step2: Convert to DataFrame for Analysis of Duplication
Step3: Move On
|
<ASSISTANT_TASK:>
Python Code:
import venusar
import motif
import thresholds
import motifs
import activity
import tf_expression
import gene_expression
# to get code changes
import imp
imp.reload(motif)
motif_f_base='../../data/HOCOMOCOv10.JASPAR_FORMAT.TF_IDS.txt'
pc = 0.1
th = 0
bp = [0.25, 0.25, 0.25, 0.25]
motif_set_base = motif.get_motifs(motif_f_base, pc, th, bp)
motif_set_base.length()
motif_dict = motif_set_base.motif_count(False)
len(motif_dict)
# drop to valid motifs only; do twice to see if any invalid
motif_dict = motif_set_base.motif_count(True)
len(motif_dict)
range(len(motif_set_base.motifs))
range(0, (len(motif_set_base.motifs) - 1))
import pandas
# -- building data frame from dictionary
#df = pandas.DataFrame(motif_dict) # errors about 'you must pass an index'
# ref: http://stackoverflow.com/questions/17839973/construct-pandas-dataframe-from-values-in-variables#17840195
dfCounts = pandas.DataFrame(motif_dict, index=[0])
dfCounts = pandas.melt(dfCounts) # rotate
dfCounts.rename(columns = {'variable': 'TF', 'value': 'tfCount'}, inplace=True)
# -- handling types
# ref: http://stackoverflow.com/questions/15891038/pandas-change-data-type-of-columns
dfCounts.dtypes
#pandas.to_numeric(s, errors='ignore')
dfCounts.size
dfCounts[dfCounts.tfCount > 1]
# add additional information to the dataframe
msb_names,msb_lengths = motif_set_base.motif_lengths(False)
# define data frame by columns
dfc = pandas.DataFrame(
{
"TF" : msb_names,
"TFLength" : msb_lengths
})
print(dfc)
if False:
# define data frame by row (this is wrong; interlaced value sets)
dfr = pandas.melt(pandas.DataFrame(
[ msb_names,
msb_lengths])).rename(columns = {'variable': 'TF', 'value': 'tfCount'}, inplace=True)
dfr
dfCounts[dfCounts.tfCount > 1].TF
dfc
duplication_set = pandas.merge( dfCounts[dfCounts.tfCount > 1], dfc, how='inner', on='TF' ).sort_values('TF')
duplication_set
#duplication_set.select(['TF','TFLength']).groupby(by='TF').rank(method='min')
duplication_set.groupby(by='TF').rank(method='min')
# ref: http://stackoverflow.com/questions/23976176/ranks-within-groupby-in-pandas
dfRank = lambda x: pandas.Series(pandas.qcut(x,2,labels=False),index=x.index)
dfRank2 = lambda x: pandas.qcut(x,2,labels=False)
# this works: replacing x above with duplication_set['TFLength'] but fails when adding groupby, why? fails using apply too.
# duplication_set['ranks'] = duplication_set.groupby('TF')['TFLength'].apply(dfRank)
# duplication_set['ranks'] = duplication_set['TFLength'].apply(dfRank)
# duplication_set['ranks'] = pandas.qcut((duplication_set['TFLength']),2,labels=False)
duplication_set['ranks'] = dfRank2(duplication_set['TFLength'])
# adding rank to try to pivot multiple rows to columns but no dice
# df.pivot(columns='var', values='val')
#duplication_set[['TF','TFLength','ranks']].pivot(columns='ranks',values='TFLength') # stupidly keeps dropping TF column, why? also duplicating rows and not actually pivoting
# duplication_set.pivot_table(df,index=["TF","Ranks"]) # fails, 'grouper for TF not 1 dimensional
# hack tired of fighting odd outcome, dyplr is much better than pandas
duplication_set
pandas.qcut(duplication_set['TFLength'],2,labels=False) # doesn't error gives ranks
# this is wrong but not clear why?
duplication_set[['TF','TFLength','ranks']].pivot(index='TF', columns='ranks',values='TFLength') # this is wrong too
duplication_set
duplication_set.dtypes
# duplication_set.pivot(index='TF', columns='ranks',values='TFLength') # this is wrong too errors: pandas pivot ValueError: Index contains duplicate entries, cannot reshape
# led to
# ref: http://stackoverflow.com/questions/28651079/pandas-unstack-problems-valueerror-index-contains-duplicate-entries-cannot-re#28652153
# e.set_index(['id', 'date', 'location'], append=True)
# not this doesn't work either and just creates problems
#duplication_set.set_index(['TF', 'ranks', 'TFLength','tfCount'], append=True)#.pivot( columns='ranks',values='TFLength')
duplication_set[duplication_set.TF == 'RFX5']
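# Added sketch: the pivots above kept failing because 'TF' repeats across rows.
# Numbering each duplicate occurrence first gives pivot a unique (index, column) pair,
# so the duplicated motif lengths end up side by side in one row per TF.
duplication_set['occurrence'] = duplication_set.groupby('TF').cumcount()
duplication_set.pivot(index='TF', columns='occurrence', values='TFLength')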
# repeating motifs.py sub code set
import vcf
import motifs
import sequence
from pyfaidx import Fasta
force_ref_match = False
file_motif='../../data/HOCOMOCOv10.JASPAR_FORMAT.TF_IDS.fpr_0p001.txt.bed_reduced.RFX5.txt'
pc = 0.1
th = 0
bp = [0.25, 0.25, 0.25, 0.25]
ws = 50
motif_set = motif.get_motifs(file_motif, pc, th, bp)
wing_l = max(motif_set.max_positions, ws)
file_reference_genome='../../data/genome_reference/reference_genome_hg19.fa'
fa_ind = Fasta(file_reference_genome) # XXX: need to check, if present skip
file_input = '../../data/FLDL_CCCB_RARE_VARIANTS.MERGED.RNA_DP10.RNA_NODUPS.CHIP_MULTIMARK.SORTED.subset.vcf'
with open(file_input) as vcf_handle:
variant_set = vcf.read_vcf_variant_lines(vcf_handle, False)
for index in range(variant_set.length()):
var_element = variant_set.seq[index] # XXX: WARNING: changes made to element not saved
for index in range(variant_set.length()):
var_element = variant_set.seq[index]
# 1. get reference sequence
var_element.get_surround_seq(wing_l, fa_ind, force_ref_match)
# 2. compute reverse complement
var_element.assign_rev_complement()
# 3. compute int version (faster to process as int)
var_element.assign_int_versions()
ref_seq = var_element.return_full_ref_seq_int(wing_l)
var_seq = var_element.return_full_var_seq_int(wing_l)
print("\tref int: " + format(ref_seq) +
"\n\tvar int: " + format(var_seq))
print("start motif_match_int")
plusmatch = motif_set.motif_match_int(bp, ref_seq, var_seq, wing_l)
print('## Positive Matches ##')
for match in plusmatch:
print( match.name + " vs=" + str(round(match.var_score, 4)) +
" rs = " + str(round(match.ref_score, 4)) )
# 6. Calculate motif matches to reverse complement
ref_seq_rc = var_element.return_full_ref_seq_reverse_complement_int(wing_l)
var_seq_rc = var_element.return_full_var_seq_reverse_complement_int(wing_l)
print("\tref rc int: " + format(ref_seq_rc) +
"\n\tvar rc int: " + format(var_seq_rc))
print("start motif_match_int reverse complement")
minusmatch = motif_set.motif_match_int(bp, ref_seq_rc, var_seq_rc, wing_l)
print('## Reverse Complement Matches ##')
for match in minusmatch:
print( match.name + " vs=" + str(round(match.var_score, 4)) +
" rs = " + str(round(match.ref_score, 4)) )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Cleaning
Step2: Basic EDA
Step3: This is the basic exploratory analysis to look for any evident patterns or relationships between the features.
Step4: Family and Game apps have the highest market prevalence.
Step5: Generally, most apps do well with an average rating of 4.17.
Step6: The average app ratings across categories are significantly different.
Step7: Almost all app categories perform decently. Health and Fitness and Books and Reference produce the highest quality apps, with 50% of apps having a rating greater than 4.5. This is extremely high!
Step8: Most top rated apps are optimally sized between ~2MB to ~40MB - neither too light nor too heavy.
Step9: Most bulky apps ( >50MB) belong to the Game and Family category. Despite this, these bulky apps are fairly highly rated indicating that they are bulky for a purpose.
Step10: Most top rated apps are optimally priced between ~1\$ and ~30\$. There are only a very few apps priced above 20\$.
Step11: Shocking...Apps priced above 250\$ !!! Let's quickly examine what these junk apps are.
Step12: Clearly, Medical and Family apps are the most expensive. Some medical apps extend even up to 80\$.
Step13: Distribution of free and paid apps across major categories
Step14: Paid apps have a relatively lower number of downloads than free apps. However, it is not too bad.
Step15: Majority of the paid apps that are highly rated have small sizes. This means that most paid apps are designed and developed to cater to specific functionalities and hence are not bulky.
Step16: A moderate positive correlation of 0.63 exists between the number of reviews and number of downloads. This means that customers tend to download a given app more if it has been reviewed by a larger number of people.
Step17: Health and Fitness apps perform the best, having more than 85% positive reviews.
Step18: Free apps receive a lot of harsh comments which are indicated as outliers on the negative Y-axis.
Step19: FREE APPS
|
<ASSISTANT_TASK:>
Python Code:
print('Number of apps in the dataset : ' , len(df))
df.sample(7)
# - Installs : Remove + and ,
df['Installs'] = df['Installs'].apply(lambda x: x.replace('+', '') if '+' in str(x) else x)
df['Installs'] = df['Installs'].apply(lambda x: x.replace(',', '') if ',' in str(x) else x)
df['Installs'] = df['Installs'].apply(lambda x: int(x))
#print(type(df['Installs'].values))
# - Size : Remove 'M', Replace 'k' and divide by 10^-3
#df['Size'] = df['Size'].fillna(0)
df['Size'] = df['Size'].apply(lambda x: str(x).replace('Varies with device', 'NaN') if 'Varies with device' in str(x) else x)
df['Size'] = df['Size'].apply(lambda x: str(x).replace('M', '') if 'M' in str(x) else x)
df['Size'] = df['Size'].apply(lambda x: str(x).replace(',', '') if ',' in str(x) else x)
df['Size'] = df['Size'].apply(lambda x: float(str(x).replace('k', '')) / 1000 if 'k' in str(x) else x)
df['Size'] = df['Size'].apply(lambda x: float(x))
df['Installs'] = df['Installs'].apply(lambda x: float(x))
df['Price'] = df['Price'].apply(lambda x: str(x).replace('$', '') if '$' in str(x) else str(x))
df['Price'] = df['Price'].apply(lambda x: float(x))
df['Reviews'] = df['Reviews'].apply(lambda x: int(x))
#df['Reviews'] = df['Reviews'].apply(lambda x: 'NaN' if int(x) == 0 else int(x))
#print(df.loc[df.Size == 0.713]) #index = 3384
#df.loc[df.col1 == '']['col2']
# 0 - Free, 1 - Paid
# df['Type'] = pd.factorize(df['Type'])[0]
#print(df.dtypes)
#print(df.dtypes)
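# Added sanity check: confirm the cleaned columns are numeric before the EDA below.
print(df[['Rating', 'Reviews', 'Size', 'Installs', 'Price']].dtypes)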
x = df['Rating'].dropna()
y = df['Size'].dropna()
z = df['Installs'][df.Installs!=0].dropna()
p = df['Reviews'][df.Reviews!=0].dropna()
t = df['Type'].dropna()
price = df['Price']
p = sns.pairplot(pd.DataFrame(list(zip(x, y, np.log(z), np.log10(p), t, price)),
columns=['Rating','Size', 'Installs', 'Reviews', 'Type', 'Price']), hue='Type', palette="Set2")
number_of_apps_in_category = df['Category'].value_counts().sort_values(ascending=True)
data = [go.Pie(
labels = number_of_apps_in_category.index,
values = number_of_apps_in_category.values,
hoverinfo = 'label+value'
)]
plotly.offline.iplot(data, filename='active_category')
data = [go.Histogram(
x = df.Rating,
xbins = {'start': 1, 'size': 0.1, 'end' :5}
)]
print('Average app rating = ', np.mean(df['Rating']))
plotly.offline.iplot(data, filename='overall_rating_distribution')
import scipy.stats as stats
f = stats.f_oneway(df.loc[df.Category == 'BUSINESS']['Rating'].dropna(),
df.loc[df.Category == 'FAMILY']['Rating'].dropna(),
df.loc[df.Category == 'GAME']['Rating'].dropna(),
df.loc[df.Category == 'PERSONALIZATION']['Rating'].dropna(),
df.loc[df.Category == 'LIFESTYLE']['Rating'].dropna(),
df.loc[df.Category == 'FINANCE']['Rating'].dropna(),
df.loc[df.Category == 'EDUCATION']['Rating'].dropna(),
df.loc[df.Category == 'MEDICAL']['Rating'].dropna(),
df.loc[df.Category == 'TOOLS']['Rating'].dropna(),
df.loc[df.Category == 'PRODUCTIVITY']['Rating'].dropna()
)
print(f)
print('\nThe p-value is extremely small, hence we reject the null hypothesis in favor of the alternate hypothesis.\n')
#temp = df.loc[df.Category.isin(['BUSINESS', 'DATING'])]
groups = df.groupby('Category').filter(lambda x: len(x) > 286).reset_index()
array = groups['Rating'].hist(by=groups['Category'], sharex=True, figsize=(20,20))
groups = df.groupby('Category').filter(lambda x: len(x) >= 170).reset_index()
#print(type(groups.item.['BUSINESS']))
print('Average rating = ', np.nanmean(list(groups.Rating)))
#print(len(groups.loc[df.Category == 'DATING']))
c = ['hsl('+str(h)+',50%'+',50%)' for h in np.linspace(0, 720, len(set(groups.Category)))]
#df_sorted = df.groupby('Category').agg({'Rating':'median'}).reset_index().sort_values(by='Rating', ascending=False)
#print(df_sorted)
layout = {'title' : 'App ratings across major categories',
'xaxis': {'tickangle':-40},
'yaxis': {'title': 'Rating'},
'plot_bgcolor': 'rgb(250,250,250)',
'shapes': [{
'type' :'line',
'x0': -.5,
'y0': np.nanmean(list(groups.Rating)),
'x1': 19,
'y1': np.nanmean(list(groups.Rating)),
'line': { 'dash': 'dashdot'}
}]
}
data = [{
'y': df.loc[df.Category==category]['Rating'],
'type':'violin',
'name' : category,
'showlegend':False,
#'marker': {'color': 'Set2'},
} for i,category in enumerate(list(set(groups.Category)))]
plotly.offline.iplot({'data': data, 'layout': layout})
groups = df.groupby('Category').filter(lambda x: len(x) >= 50).reset_index()
# sns.set_style('ticks')
# fig, ax = plt.subplots()
# fig.set_size_inches(8, 8)
sns.set_style("darkgrid")
ax = sns.jointplot(df['Size'], df['Rating'])
#ax.set_title('Rating Vs Size')
c = ['hsl('+str(h)+',50%'+',50%)' for h in np.linspace(0, 360, len(list(set(groups.Category))))]
subset_df = df[df.Size > 40]
groups_temp = subset_df.groupby('Category').filter(lambda x: len(x) >20)
# for category in enumerate(list(set(groups_temp.Category))):
# print (category)
data = [{
'x': groups_temp.loc[subset_df.Category==category[1]]['Rating'],
'type':'scatter',
'y' : subset_df['Size'],
'name' : str(category[1]),
'mode' : 'markers',
'showlegend': True,
#'marker': {'color':c[i]}
#'text' : df['rating'],
} for category in enumerate(['GAME', 'FAMILY'])]
layout = {'title':"Rating vs Size",
'xaxis': {'title' : 'Rating'},
'yaxis' : {'title' : 'Size (in MB)'},
'plot_bgcolor': 'rgb(0,0,0)'}
plotly.offline.iplot({'data': data, 'layout': layout})
# heavy_categories = [ 'ENTERTAINMENT', 'MEDICAL', 'DATING']
# data = [{
# 'x': groups.loc[df.Category==category]['Rating'],
# 'type':'scatter',
# 'y' : df['Size'],
# 'name' : category,
# 'mode' : 'markers',
# 'showlegend': True,
# #'text' : df['rating'],
# } for category in heavy_categories]
paid_apps = df[df.Price>0]
p = sns.jointplot( "Price", "Rating", paid_apps)
subset_df = df[df.Category.isin(['GAME', 'FAMILY', 'PHOTOGRAPHY', 'MEDICAL', 'TOOLS', 'FINANCE',
'LIFESTYLE','BUSINESS'])]
sns.set_style('darkgrid')
fig, ax = plt.subplots()
fig.set_size_inches(15, 8)
p = sns.stripplot(x="Price", y="Category", data=subset_df, jitter=True, linewidth=1)
title = ax.set_title('App pricing trend across categories')
#print('Junk apps priced above 350$')
df[['Category', 'App']][df.Price > 200]
fig, ax = plt.subplots()
fig.set_size_inches(15, 8)
subset_df_price = subset_df[subset_df.Price<100]
p = sns.stripplot(x="Price", y="Category", data=subset_df_price, jitter=True, linewidth=1)
title = ax.set_title('App pricing trend across categories - after filtering for junk apps')
# Stacked bar graph for top 5-10 categories - Ratio of paid and free apps
#fig, ax = plt.subplots(figsize=(15,10))
new_df = df.groupby(['Category', 'Type']).agg({'App' : 'count'}).reset_index()
#print(new_df)
# outer_group_names = df['Category'].sort_values().value_counts()[:5].index
# outer_group_values = df['Category'].sort_values().value_counts()[:5].values
outer_group_names = ['GAME', 'FAMILY', 'MEDICAL', 'TOOLS']
outer_group_values = [len(df.App[df.Category == category]) for category in outer_group_names]
a, b, c, d=[plt.cm.Blues, plt.cm.Reds, plt.cm.Greens, plt.cm.Purples]
inner_group_names = ['Paid', 'Free'] * 4
inner_group_values = []
#inner_colors = ['#58a27c','#FFD433']
for category in outer_group_names:
for t in ['Paid', 'Free']:
x = new_df[new_df.Category == category]
try:
#print(x.App[x.Type == t].values[0])
inner_group_values.append(int(x.App[x.Type == t].values[0]))
except:
#print(x.App[x.Type == t].values[0])
inner_group_values.append(0)
explode = (0.025,0.025,0.025,0.025)
# First Ring (outside)
fig, ax = plt.subplots(figsize=(10,10))
ax.axis('equal')
mypie, texts, _ = ax.pie(outer_group_values, radius=1.2, labels=outer_group_names, autopct='%1.1f%%', pctdistance=1.1,
labeldistance= 0.75, explode = explode, colors=[a(0.6), b(0.6), c(0.6), d(0.6)], textprops={'fontsize': 16})
plt.setp( mypie, width=0.5, edgecolor='black')
# Second Ring (Inside)
mypie2, _ = ax.pie(inner_group_values, radius=1.2-0.5, labels=inner_group_names, labeldistance= 0.7,
textprops={'fontsize': 12}, colors = [a(0.4), a(0.2), b(0.4), b(0.2), c(0.4), c(0.2), d(0.4), d(0.2)])
plt.setp( mypie2, width=0.5, edgecolor='black')
plt.margins(0,0)
# show it
plt.tight_layout()
plt.show()
#ax = sns.countplot(x="Category", hue="Type", data=new_df)
#df.groupby(['Category', 'Type']).count()['App'].unstack().plot(kind='bar', stacked=True, ax=ax)
#ylabel = plt.ylabel('Number of apps')
trace0 = go.Box(
y=np.log10(df['Installs'][df.Type=='Paid']),
name = 'Paid',
marker = dict(
color = 'rgb(214, 12, 140)',
)
)
trace1 = go.Box(
y=np.log10(df['Installs'][df.Type=='Free']),
name = 'Free',
marker = dict(
color = 'rgb(0, 128, 128)',
)
)
layout = go.Layout(
title = "Number of downloads of paid apps Vs free apps",
yaxis= {'title': 'Number of downloads (log-scaled)'}
)
data = [trace0, trace1]
plotly.offline.iplot({'data': data, 'layout': layout})
temp_df = df[df.Type == 'Paid']
temp_df = temp_df[temp_df.Size > 5]
#type_groups = df.groupby('Type')
data = [{
#'x': type_groups.get_group(t)['Rating'],
'x' : temp_df['Rating'],
'type':'scatter',
'y' : temp_df['Size'],
#'name' : t,
'mode' : 'markers',
#'showlegend': True,
'text' : df['Size'],
} for t in set(temp_df.Type)]
layout = {'title':"Rating vs Size",
'xaxis': {'title' : 'Rating'},
'yaxis' : {'title' : 'Size (in MB)'},
'plot_bgcolor': 'rgb(0,0,0)'}
plotly.offline.iplot({'data': data, 'layout': layout})
#df['Installs'].corr(df['Reviews'])#df['Insta
#print(np.corrcoef(l, rating))
corrmat = df.corr()
#f, ax = plt.subplots()
p =sns.heatmap(corrmat, annot=True, cmap=sns.diverging_palette(220, 20, as_cmap=True))
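# Added: print the single pairwise value cited in the write-up (Reviews vs Installs).
print("Reviews-Installs correlation:", corrmat.loc['Reviews', 'Installs'])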
df_copy = df.copy()
df_copy = df_copy[df_copy.Reviews > 10]
df_copy = df_copy[df_copy.Installs > 0]
df_copy['Installs'] = np.log10(df['Installs'])
df_copy['Reviews'] = np.log10(df['Reviews'])
sns.lmplot("Reviews", "Installs", data=df_copy)
ax = plt.gca()
_ = ax.set_title('Number of Reviews Vs Number of Downloads (Log scaled)')
reviews_df = pd.read_csv('../input/googleplaystore_user_reviews.csv')
merged_df = pd.merge(df, reviews_df, on = "App", how = "inner")
merged_df = merged_df.dropna(subset=['Sentiment', 'Translated_Review'])
grouped_sentiment_category_count = merged_df.groupby(['Category', 'Sentiment']).agg({'App': 'count'}).reset_index()
grouped_sentiment_category_sum = merged_df.groupby(['Category']).agg({'Sentiment': 'count'}).reset_index()
new_df = pd.merge(grouped_sentiment_category_count, grouped_sentiment_category_sum, on=["Category"])
#print(new_df)
new_df['Sentiment_Normalized'] = new_df.App/new_df.Sentiment_y
new_df = new_df.groupby('Category').filter(lambda x: len(x) ==3)
# new_df = new_df[new_df.Category.isin(['HEALTH_AND_FITNESS', 'GAME', 'FAMILY', 'EDUCATION', 'COMMUNICATION',
# 'ENTERTAINMENT', 'TOOLS', 'SOCIAL', 'TRAVEL_AND_LOCAL'])]
new_df
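# Added: rank categories by their share of positive reviews to back the claim about
# Health and Fitness below; after the merge the sentiment label column is 'Sentiment_x'.
new_df[new_df.Sentiment_x == 'Positive'].sort_values('Sentiment_Normalized', ascending=False).head(10)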
trace1 = go.Bar(
x=list(new_df.Category[::3])[6:-5],
y= new_df.Sentiment_Normalized[::3][6:-5],
name='Negative',
marker=dict(color = 'rgb(209,49,20)')
)
trace2 = go.Bar(
x=list(new_df.Category[::3])[6:-5],
y= new_df.Sentiment_Normalized[1::3][6:-5],
name='Neutral',
marker=dict(color = 'rgb(49,130,189)')
)
trace3 = go.Bar(
x=list(new_df.Category[::3])[6:-5],
y= new_df.Sentiment_Normalized[2::3][6:-5],
name='Positive',
marker=dict(color = 'rgb(49,189,120)')
)
data = [trace1, trace2, trace3]
layout = go.Layout(
title = 'Sentiment analysis',
barmode='stack',
xaxis = {'tickangle': -45},
yaxis = {'title': 'Fraction of reviews'}
)
fig = go.Figure(data=data, layout=layout)
plotly.offline.iplot({'data': data, 'layout': layout})
#merged_df.loc[merged_df.Type=='Free']['Sentiment_Polarity']
sns.set_style('ticks')
sns.set_style("darkgrid")
fig, ax = plt.subplots()
fig.set_size_inches(11.7, 8.27)
ax = sns.boxplot(x='Type', y='Sentiment_Polarity', data=merged_df)
title = ax.set_title('Sentiment Polarity Distribution')
from wordcloud import WordCloud
wc = WordCloud(background_color="white", max_words=200, colormap="Set2")
# generate word cloud
from nltk.corpus import stopwords
stop = stopwords.words('english')
stop = stop + ['app', 'APP' ,'ap', 'App', 'apps', 'application', 'browser', 'website', 'websites', 'chrome', 'click', 'web', 'ip', 'address',
'files', 'android', 'browse', 'service', 'use', 'one', 'download', 'email', 'Launcher']
#merged_df = merged_df.dropna(subset=['Translated_Review'])
merged_df['Translated_Review'] = merged_df['Translated_Review'].apply(lambda x: " ".join(x for x in str(x).split(' ') if x not in stop))
#print(any(merged_df.Translated_Review.isna()))
merged_df.Translated_Review = merged_df.Translated_Review.apply(lambda x: x if 'app' not in x.split(' ') else np.nan)
merged_df.dropna(subset=['Translated_Review'], inplace=True)
free = merged_df.loc[merged_df.Type=='Free']['Translated_Review'].apply(lambda x: '' if x=='nan' else x)
wc.generate(''.join(str(free)))
plt.figure(figsize=(10, 10))
plt.imshow(wc, interpolation='bilinear')
plt.axis("off")
plt.show()
paid = merged_df.loc[merged_df.Type=='Paid']['Translated_Review'].apply(lambda x: '' if x=='nan' else x)
wc.generate(''.join(str(paid)))
plt.figure(figsize=(10, 10))
plt.imshow(wc, interpolation='bilinear')
plt.axis("off")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading Dataset
Step2: Cleaning Data
Step3: Calculating New Parameters
Step4: a = {}
Step5: Notice the lack of IPOs following the popping of economic bubbles (dot-com, asset-backed securities). This can be visualized nicely with a chart. Notice where dy/dx approaches 0.
Step6: Yikes, is that another plateau coming in 2016? Let's hope not. Anyways, let's take a look at the most common sectors for IPOs. A random sample of around 600 stocks was used.
Step7: Somewhat shockingly, healthcare is the sector that dominates IPOs the most— and by a large margin.
Step8: This bar chart graphs the % of underpricing using the columns generated in the "Calculating New Parameters" section.
Step9: These two bar charts graph the average share price, grouped first by sector then by market, at First Day open, First Day close, Thirty Days Later close, and One Year Later close.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import requests
import re
plt.style.use('ggplot')
import matplotlib
%matplotlib inline
# Download the data file from `puu.sh` and save it locally under `file_name`:
url = "http://puu.sh/oBCfW/c006093339.xlsx" # Script was ran ahead of time and uploaded onto this website. Random sample.
file_name = "./IPO_Expanded_Multiprocessing_d.xlsx"
req = requests.get(url)
file = open(file_name, 'wb')
for chunk in req.iter_content(100000):
file.write(chunk)
file.close()
my_data = pd.read_excel(file_name,sheetname="Nasdaq_IPO_Expanded_Multiproces")
my_data.head(3)
df = my_data.copy()
## These symbols were not available
df = df[df.Symbol != "GAV'U"]
df = df[df.Symbol != "AGR'A"]
df = df[df.Symbol != "TAP'A"]
df = df[df.Symbol != "PED'U"]
df.shape
df["First Day Open Price"] = df["First Day Open Price"].replace("-",np.nan).astype('float')
df= df[df["First Day Open Price"]<200]
df.shape
df.Sector.replace(to_replace="&",value="",regex=True,inplace=True)
df["Day_Closing"] = 100 * (df["First Day Open Price"] - df["First Day Close Price"])/(df["First Day Open Price"])
df["Day30_closing"] = 100 *(df["First Day Open Price"] - df["Thirty Days Later Close Price"])/(df["First Day Open Price"])
df["Current_closing"] = 100 * (df["First Day Open Price"] - df["Current Price"])/(df["First Day Open Price"])
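# Added: quick look at the distribution of the three new return columns before plotting them.
df[["Day_Closing", "Day30_closing", "Current_closing"]].describe()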
plt.figure();
df.Symbol.groupby(df["IPO Date"]).count().plot(title = "Frequency of IPO's Since 1997",
figsize=(15,15),color="b")
df_graph2 = df["First Day Close Price"].groupby(df["IPO Date"]).count()
df_graph2 = pd.DataFrame(df_graph2)
df_graph2['index1'] = df_graph2.index
#df_graph2 = df_graph2.reset_index(drop = True)
df_graph2.columns = ["Number","IPO Date"]
df_graph2["Number"].cumsum().plot(title = "Total IPOs 1997-2016", figsize = (10,10), color="m")
df_graph3 = df.groupby(["Sector"]).count()
df_graph3 = df_graph3.reset_index()
df_graph3.index = df_graph3["Sector"]
df_graph3 = df_graph3[["Symbol"]]
df_graph3.columns = ["Total Number of IPOs"]
df_graph3.plot(kind="barh",title = "Total Number of IPOs by Sector (Random Sample of 600)", figsize = (10,10),color="c")
my_colors = 'cbmg'
df_graph1 = df[["Day_Closing","Day30_closing",]].groupby(df["Sector"]).mean()
df_graph1['index1'] = df_graph1.index
df_graph1.reset_index(drop=True)
df_graph1.plot(kind = "bar",title = "IPO Underpricing Percent by Period",
figsize=(10,10), subplots=False,legend = True,color=my_colors)
df[["First Day Open Price","First Day Close Price",'Thirty Days Later Close Price',
'One Year Later Close Price']].groupby(df["Sector"]).mean().plot(kind = "bar",
legend= True,
figsize=(15,10),
title="Mean Share Price Per Period, Grouped by Sector",
color=my_colors
)
df[["First Day Open Price","First Day Close Price",'Thirty Days Later Close Price',
'One Year Later Close Price']].groupby(df["Market"]).mean().drop(['American Stock Exchange'], axis=0).plot(kind = "bar",
legend= True,
figsize=(15,10),
title="Mean Share Price Per Period, Grouped by Market",
color=my_colors
)
print("FIN")
<END_TASK>
|